The Attack on SolarWinds Reveals Two Nightmares: What Has Been Done Right and What Has Been Done Wrong

Sergio de los Santos    15 January, 2021

All cyber security professionals now know at least part of what was originally thought to be "just" an attack on SolarWinds, which has turned out to be one of the most interesting operations of recent years. We will dwell on the more curious details of the incident, but we will also focus on the management of this crisis: what has been done right and what has been done wrong, to gauge the maturity of an industry that will suffer more and worse attacks than this in the future.

FireEye raises the alarm on Tuesday 8 December: they have been attacked. But the industry does not blame FireEye for this; it backs them up and supports them, and in general their response is exemplary. It has happened to many and it can happen to all of us, so the important thing is how you deal with it and how resilient you are.

Since the attackers gained access to sensitive tools internal to their company, FireEye does something for the industry that honours them: they publish the Yara rules needed to detect whether someone is using the tools stolen from FireEye's offensive team against a company. A fine gesture that is again publicly credited. Not much more is known about the incident, which is still being investigated.

But then everything gets complicated, and in a massive way. The news begins: the US Treasury Department and many other government departments also admit to having been attacked. On the same day, the 13th, FireEye offers a very important detail: the problem lies in the Trojanization of SolarWinds' Orion software. An update package, signed by SolarWinds itself, included a backdoor. It is estimated that over 18,000 companies use this system. Pandora's box is opened, both because of the characteristics of the attack and because the software is used in many large companies and governments. And since global problems require global and coordinated reactions, this is where something seems to have gone completely wrong.

Did the Coordination Fail?

The next day, December 14, with the information needed to point at "ground zero" of the attack already available, the reactive mechanisms still did not work. In particular:

  • Antivirus engines were still unable to detect the malware (which has become known as SUNBURST). On that same Monday it was not covered by the static signatures of the popular engines.
  • The certificate with which the attackers signed the software had still not been revoked. Whether or not they gained access to the private key (unknown), that certificate had to be revoked in case the attackers were able to sign other software on behalf of SolarWinds.

Here we can only guess why this "reactive" element failed. Was SolarWinds late in reacting to the attack? Did FireEye publish the details to put pressure on SolarWinds once it was already clear that the attack concealed a much more complex offensive? In any case, the stock market, if it can be used as a quick gauge of the reaction to a serious compromise, has "punished" the two companies very differently. FireEye has turned out to be the hero; SolarWinds, the bad guy.

However, there have been reactions that have worked, such as Microsoft seizing the domain on which the whole attack is based (avsvmcloud.com). Which, by the way, had been submitted manually to urlscan.io from Spain on 8 July; someone may have noticed something strange. The campaign had been active since March.

The Malware itself and the Community

The "good" thing about SUNBURST is that it is written in .NET, making it relatively easy to decompile and find out what the attacker programmed. And so the community began to analyse the software from top to bottom and to write tools for a better understanding.

The malware is extremely subtle. It did not activate until about two weeks after landing on the victim. It modified scheduled system tasks to get itself launched and then returned them to their original state. But one of its most interesting features is the ability to hide the domains it uses, which had to be revealed by brute force (they were stored as hashes). In addition, it contained the hashes of other domains that it did not want to infect. But which ones?

All of them, most likely, internal to the SolarWinds network, so as to go unnoticed there: an indication that the initial victim was SolarWinds itself and that, to achieve this, the attackers had to know their victim well. Code was published to extract the tool list (the names were also hashed) and find out what the trojan did not want to see on the machine. Many of the hashed tools and domains were revealed in record time, and it was possible to work out what these attackers had in mind. Another tool was published to decode the DGA (Domain Generation Algorithm) through which the malware tried to make contact. The DGA was precisely one of the malware's strong points, but also its weak point (the top-level domain was always the same).
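As an illustration of how the community recovered those names, here is a minimal sketch of dictionary brute force against SUNBURST-style hashes. Public write-ups describe the scheme as 64-bit FNV-1a XORed with a fixed constant; the constant, the string encoding and the word list below are assumptions for illustration, not an excerpt from any published tool.

```python
# Minimal sketch: recovering hashed blocklist entries by dictionary
# brute force. Public analyses describe SUNBURST's hashes as 64-bit
# FNV-1a XORed with a fixed constant; scheme, constant and encoding
# here are assumptions for illustration.
FNV64_OFFSET = 0xCBF29CE484222325
FNV64_PRIME = 0x100000001B3
XOR_KEY = 0x5BAC903BA7D81B88  # constant reported in public write-ups

def sunburst_hash(name: str) -> int:
    h = FNV64_OFFSET
    for byte in name.lower().encode("utf-8"):  # encoding is an assumption
        h = ((h ^ byte) * FNV64_PRIME) & 0xFFFFFFFFFFFFFFFF
    return h ^ XOR_KEY

def crack(targets: set, wordlist: list) -> dict:
    """Map each known hash back to the candidate name that produces it."""
    return {h: word for word in wordlist
            if (h := sunburst_hash(word)) in targets}

# Example (hypothetical): crack({sunburst_hash("procmon")}, ["wireshark", "procmon"])
```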

In the end, the malware ended up composing URLs like this:

  • hxxps://3mu76044hgf7shjf[.]appsync-api[.]eu-west-1[.]avsvmcloud[.]com/swip/upd/Orion[.]Wireless[.]xml

This is where it "exfiltrated" information and communicated with the command and control server. Well thought out from the attacker's point of view, because it goes unnoticed due to its "normality", but badly thought out from the perspective of persistence.

Another very interesting point, which seems to have gone unnoticed, is that during 2019 the attackers appear to have "inflated" the trojanized module from 500k to 900k without injecting any relevant code, simply increasing the size of the DLL. In February 2020 they introduced the espionage payload into that same DLL, thus achieving extra invisibility: the increase in size no longer raised suspicions.

Don’t Go Yet, There Is Still More

More recently, it seems that SolarWinds' Orion was trojanized not only with SUNBURST but also with what has come to be called SUPERNOVA. Perhaps another actor was also able to enter the network and deployed a different trojan in the tool. Although we still do not have many details of how it worked, this is the second nightmare, about which there will be more to say.

Conclusions

We are facing one of the most sophisticated attacks of recent times, one that has put in check not only a company dedicated to defending other companies, but also governments, major players like Microsoft and others we cannot even imagine. The attackers have gone a step further, launching a campaign that is almost perfect in its impact and execution. On other occasions (RSA, Bit9, Operation Aurora…), large companies have also been attacked, sometimes only as a side effect in order to reach a third party, but this time a step forward has been taken in the discretion, precision and "good work" of the attackers. And all thanks to a single fault, of course: the weakest point they were able to detect in the supply chain on which major players depend. And yes, SolarWinds seemed a very weak link. Their website recommended deactivating the antivirus (unfortunately common for certain types of tools), they have been shown to use weak passwords in their operations, and there are indications that they had been compromised for more than a year… twice.

Should we be surprised at such weak links in the cyber security chain on which so much depends? We depend on an admittedly patchy landscape of cyber security capabilities: asymmetric in response, defence and prevention, for victims and attackers alike… but very democratic in the importance of each piece of the industry. There is no choice but to respond in a coordinated, joint manner to mitigate the risk, and it is not difficult to find similarities outside the field of cyber security. In any case, and fortunately, the industry has once again shown itself to be mature and capable of responding jointly, not only through the community but also through the major actors. Perhaps this is the positive message we can pull out of a story that still seems unfinished.

How IoT technology is helping candy producers make sweet profits!

Patrick Buckley    15 January, 2021

From chocolate bars to lollipops, gumdrops to Haribos, the confectionery industry is now worth an estimated $210 billion worldwide. With the industry experiencing strong growth, it is no surprise that confectionery producers are beginning to implement IoT (Internet of Things) technologies in their production processes in order to achieve more efficient outcomes and take further steps towards digital transformation.

Why is IoT technology especially useful in this industry?

Due to the nature of candies, especially chocolate-based confectionery, slight variations in environmental factors such as temperature and moisture level can affect the quality of the final product. This is especially true of smaller products, such as Hershey's Twizzlers or Reese's Peanut Butter Cups, which can easily be disfigured by temperature fluctuations.

By lining production lines with multiple IoT-connected sensors, and linking these with temperature control systems, candy manufacturers are able to maintain a constant environment on the production line. Furthermore, these same sensors can check the weight and shape of every piece, allowing manufacturers to reduce waste and deliver a better quality, more consistent final product to the consumer.
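As a rough illustration of the sensor-to-controller loop described above, here is a minimal sketch; the setpoint, tolerance and actuator interface are invented for illustration and do not describe any specific manufacturer's system.

```python
# Hedged sketch: a naive temperature-control loop fed by IoT sensors.
# Thresholds, readings and the actuator interface are illustrative
# assumptions, not a real production-line API.
TARGET_C = 18.0   # hypothetical setpoint for chocolate handling
TOLERANCE = 0.5   # allowed deviation, in degrees Celsius

def control_step(sensor_readings: list, cooler) -> None:
    """One loop iteration: average the line sensors, nudge the cooler."""
    avg = sum(sensor_readings) / len(sensor_readings)
    if avg > TARGET_C + TOLERANCE:
        cooler.increase()   # hypothetical actuator call
    elif avg < TARGET_C - TOLERANCE:
        cooler.decrease()
```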

A current Use Case

Back in 2016, Hershey, a leading American confectionery manufacturer, started to implement IoT technologies in its production line with the aim of optimising output by controlling environmental factors more effectively.

So far, Hershey has tested this technology on the Twizzlers production line, installing 22 sensors in the holding tank which deliver 60 million data points. In this case, Microsoft Azure algorithms are used to turn these readings into insights.

Further benefits 

The benefits don't end there! This technology also means that manufacturers can continue to innovate without being restricted by product size.

A more controlled environment allows smaller pieces to be manufactured that would previously have been deemed too 'environmentally sensitive'. Although this could be bad news for consumers, who may soon notice reduced sizes of candy products, it is great news for manufacturers, as it allows them to make dramatic cost savings. In the case of Hershey, the company estimates that a 1% reduction in the size of each Twizzler would lead to a cost saving of $500,000 each year!

This technology may also allow manufacturers to experiment with different ingredients and offer vegan/healthier product alternatives, allowing us to all become a little healthier whilst quenching our desire for a sweet treat!

In summary,

Today's candy factories may not be run by Willy Wonka and his faithful band of Oompa Loompas, but the magic of IoT technology is transforming production processes worldwide beyond our wildest imagination. Thanks to IoT-connected sensors, producers can make dramatic cost savings and consumers receive a more homogenous, higher quality product, even if it may be a little smaller in size! As this technology is increasingly adopted within the industry, it will also allow manufacturers to experiment with recipes and offer a wider variety of products.

To keep up to date with Telefónica's Internet of Things area, visit our website or follow us on Twitter, LinkedIn and YouTube.

Homeworking: Balancing Corporate Control and Employee Privacy (I)

Antonio Gil Moyano    Juan Carlos Fernández Martínez    14 January, 2021

At this point in time, looking back on 2020, nobody could have imagined the advance in the digitalisation of organisations and companies driven by the irruption of homeworking in the current global pandemic. It is an advance to which employers and workers have had to adapt by implementing remote working methodologies and, given that homeworking is here to stay, some governments have opted to regulate it: Spain did so last September with Royal Decree-Law 28/2020, and Argentina last August with Law 27,555. Other countries in the region had already regulated homeworking in their legal systems, such as Colombia in 2008 with Law 1,221, Peru in 2013 with Law 30,036, and Costa Rica in 2019 with Law 9,738.

Homeworking Regulation in Spain

The Distance Work Act harmonises the basic standards that employers must apply when implementing homeworking in their organisations. This modality was already a reality for small and agile companies such as start-ups; for larger organisations and administrations, however, it has been largely improvised, as has been evident throughout the lockdown periods occurring since March.

This circumstance exposed the lack of methodologies and systems for adapting to remote work, increasing the exposure surface of organisations' data and, consequently, aggravating cyber security problems, all the more so given that the use of personal computer equipment and connections through private networks took corporate information outside the security perimeter offered by the organisations' facilities.

Despite criticism from the business sector during the processing of the draft law on distance work, an agreement was finally reached between the government, employers and trade unions. One of the points of friction was the obligation to cover the costs incurred by workers. The outcome is that employers will have to pay in cases where their employees work from home more than 30% of their working hours: for a five-day, forty-hour week, the rule applies to those who work from home more than one and a half days a week, counted over quarterly periods. This could affect the boom in homeworking, as it means the employer must pay twice, for office maintenance and for the worker's expenses.

The most interesting part of the norm, where the technical and the legal meet, is found in section h) of article 7, which deals with the need to regulate the means of business control; these must be stated in writing in the homeworking agreement. Here there is an interesting legal field that requires studying the case law of the high courts, case by case, to see when judges consider that access to information on employees' computer equipment and/or corporate email is lawful. The basis for a successful agreement is trust and the balance between the employer's power of control and the employee's right to privacy.

Security and Privacy in Homeworking

Within this new working paradigm it is evident that, after people, information continues to be the main asset of companies, and that its security, now more than ever, can be compromised by the use of personal equipment outside the company's control.

The lack of a policy on the proper use of information systems in most companies is one of the main reasons why these resources are not being properly managed, which in many cases can seriously affect business continuity.

As an information security auditor, technology expert and businessman, I am responsible for implementing the necessary measures to mitigate or reduce the risks associated with information security but also, if an incident occurs, for investigating it by collecting and analysing evidence to help identify the origin of the problem.

The objective of the ISO/IEC 27001:2013 standard is to protect the confidentiality, integrity and availability of information in companies. Among others, it includes controls related to homeworking and the acceptable use of company assets. Point 8.1.3 aims to document the appropriate use of information by describing the security requirements of the assets made available to the employee, such as the computer or laptop, mobile phone, mail account… always communicated appropriately to avoid misuse such as unauthorised information extraction (confidentiality), information manipulation (integrity), or impersonation and ransomware (availability), among others.

This acceptable use policy can be adopted by a company even outside of certification. The question is: if we sign a confidentiality agreement or NDA with our employees, why don't we also document a policy and have an asset use document signed? This would avoid many technical and legal problems in the face of possible security and privacy incidents in homeworking. Firstly, because the employee is duly informed about what he or she can or cannot do with the company's resources; secondly, because he or she signs a document that proves this; and, if an incident finally occurs, he or she cannot claim ignorance of the rules and policies related to the company's information security.

Point 6.2 deals specifically with guaranteeing information security in the use of resources for mobility and homeworking, also a very extensive objective that we cannot cover in depth, but which boils down to the risks associated with this practice, the controls that must be implemented to reduce or mitigate them, and the establishment of metrics that allow adequate monitoring.

This is an example of a document to be signed by the employee, which must include GDPR aspects relating to the processing of their personal data by the company:

In the second part of this post we will continue to explore this issue, emphasising both the balance between corporate control and privacy and the tools of control. We hope you find it useful.


The second part is now available.

46% Of the Main Spanish Websites Use Google Analytics Cookies Before the Consent Required by The Spanish Data Protection Agency (AEPD)

Innovation and Laboratory Area in ElevenPaths    13 January, 2021

Over the past few months, many IT departments have been busy adapting their websites to comply with the new regulations on cookies. Every time we visit a website, we are asked whether we want to accept or (almost always indirectly) refuse cookies. Most users who arrive at this message looking for a service or specific information end up accepting all the cookies without knowing the real impact in terms of security and privacy. How many cookies are usually accepted? For how long? Do websites respect the new rules on cookies?

At TEGRA, the Galician centre for innovation in information protection within the ElevenPaths Innovation and Laboratory area, we wanted to analyse the current use of cookies in Spain, their impact and their compliance, based on a representative sample of the most visited websites in Spain. To achieve this, we developed and released a tool called Triki, which automates navigation to a series of websites defined by configuration and performs different navigation flows. We have drawn interesting conclusions, which are included in this report and summarised below.

In collaboration with Govertis, let us first explain what happened in 2020 concerning cookies and their management. The Spanish Data Protection Agency (AEPD), following the entry into force of the European General Data Protection Regulation and several consultations with the European Data Protection Supervisor (EDPS), updated its guide on the use of cookies in July 2020, giving website owners until 31 October 2020 to adapt to these policies.

We could summarise the main updates as follows: simple browsing is no longer valid as an expression of a user's consent to the acceptance of cookies, and cookie walls are prohibited if no alternative to consent is offered. Regarding the management of acceptance and revocation of consent, the most relevant change is the removal of the option to obtain consent through "continued browsing". Previously, the formula "If you continue to browse, we consider that you accept its use" was allowed; now the AEPD has established that continuing to browse is not a valid way of giving consent.

As a general rule, some aspects are modified and clarified regarding the methods for informing users about acceptance, refusal or revocation of consent, through the configuration options that must be provided by the publisher or by common platforms that may exist for this purpose.

Finally, regarding third-party cookies, information must be provided on the tools offered by the browser and by the third parties, and it should be noted that if a user accepts third-party cookies and subsequently wishes to delete them, he or she must do so from the browser itself or through the system enabled by the third parties for this purpose.

Methodology

To carry out this research on cookies, the 100 most visited domains in Spain were selected, obtained from the alexa.com website. A tool called Triki was developed to extract the information. With it, and a personalised configuration per domain, different types of information were extracted. For each website, a series of flows was followed and, for each flow, two types of extraction were made: extraction without a blocker and extraction using a third-party cookie blocker.

The different flows simulated with each type of navigation are:

  • browse: the tool connects to the website without taking any action and extracts the cookies used. This is the stage before cookie consent.
  • accept: the tool connects to the website, consents to the use of all cookies and extracts them. This is the cookie acceptance stage.
  • reject: the tool connects to the website and takes the actions needed to reject the cookies. This is the cookie rejection stage. A minimal sketch of this kind of flow automation follows.
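The sketch below shows how such per-flow cookie extraction can be automated. It uses Playwright and invented selector names as assumptions for illustration; the actual Triki tool and its internals are not described in this post.

```python
# Hedged sketch of per-flow cookie extraction in the spirit of Triki,
# using Playwright. Tool internals, selectors and URLs are assumptions.
from playwright.sync_api import sync_playwright

def extract_cookies(url: str, flow: str, accept_sel: str, reject_sel: str):
    """Run one flow ('browse' | 'accept' | 'reject') and dump cookies."""
    with sync_playwright() as pw:
        browser = pw.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        if flow == "accept":
            page.click(accept_sel)       # site-specific selector from config
        elif flow == "reject":
            page.click(reject_sel)       # often takes several actions in practice
        cookies = page.context.cookies()  # name, domain, expires, secure...
        browser.close()
        return cookies

# Example with hypothetical selectors:
# for flow in ("browse", "accept", "reject"):
#     print(flow, len(extract_cookies("https://example.com", flow,
#                                     "#accept-all", "#reject-all")))
```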

How Many Websites Does Each Flow Allow?

More than 50% of the websites in our survey allow cookies to be rejected or configured directly, which is ideal. 24% allow only acceptance and redirect the user to the browser's own settings for rejection, which increases the effort needed to reject. 19 of them (19%) allow neither rejection nor acceptance; these could be sites without cookies requiring consent, although their use must still be notified. At the same time, 9 (37%) use analytical cookies (Google Analytics) and therefore do not comply with the express consent required by the AEPD's cookie regulation.

How Many Cookies Are Used Per Site?

14% of websites use more than 90 cookies, and the average is 27 cookies per website. We also compared first-party with third-party cookies: 44% of websites use as many or more third-party cookies than their own. In the worst cases, 90% of a website's cookies are third-party.

On the other hand, 53% of websites use more than 10 cookies before consent. Using a third-party cookie blocker in the browser shows that 96% of sites set third-party cookies as soon as the connection is made. Although it may be legal, it is at least odd that they should require third-party cookies to ensure the technical functioning or personalisation of a page. In these cases, using a third-party cookie blocker is recommended.

During our research we analysed how many sites use Google Analytics cookies before consent is accepted or refused, at the stage we have defined as "browse". The results show that 46% of sites use Google Analytics cookies before consent. We also wanted to check how many sites still keep Google Analytics cookies after an explicit rejection by the user: 25% of websites keep this type of analytical cookie even when rejected.

Cookies and Expiration

The AEPD, in its guidelines on consent, recommends renewing consent at appropriate intervals as best practice, and considers that the validity of a user's consent to the use of a particular cookie should not exceed 24 months. Based on these indications, we analysed our dataset to verify whether the extracted permanent cookies comply with this 24-month maximum lifetime. Around 15% of cookies do not, using expiry periods longer than 24 months. When accepting cookies on the sites visited, we found more than 100 cookies with lifetimes of over 3 years, and 50 of them expire more than 20 years out.
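A check like this reduces to simple arithmetic on the expiry timestamps collected earlier. A minimal helper, assuming cookies in the dictionary format returned by the browser automation sketch above, might look like this:

```python
# Hedged helper: flag cookies whose lifetime exceeds the AEPD's
# 24-month recommendation. Assumes Playwright-style cookie dicts,
# where "expires" is a Unix timestamp and -1 marks session cookies.
import time

TWO_YEARS_S = 2 * 365 * 24 * 3600  # 24 months, in seconds

def over_aepd_limit(cookies, now=None):
    now = time.time() if now is None else now
    return [c["name"] for c in cookies
            if c.get("expires", -1) > 0 and c["expires"] - now > TWO_YEARS_S]
```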

Finally, we concluded that 96% of the sites analysed use more permanent cookies than session cookies. On average, 86% of the cookies used on a website are permanent.

Secure Cookies

We also wanted to analyse which security mechanisms are implemented in the cookies themselves. Let's look at some of the methods analysed:

  • Secure cookies: if this flag is enabled, the cookie is only sent to the server in an encrypted request over the HTTPS protocol (HTTP + TLS/SSL).
  • HttpOnly cookies: enabling this flag helps prevent cross-site scripting (XSS) attacks, since HttpOnly cookies are inaccessible from the JavaScript document.cookie API.

But there are more ways to secure a cookie. The __Secure- prefix makes a cookie settable only from secure sites using the HTTPS protocol, so an insecure site using HTTP cannot read or update cookies carrying that prefix in their name; this protects secure cookies against tampering. The __Host- prefix does the same as __Secure- but goes further, restricting the cookie to the exact domain on which it is set. Only 2% of websites use the __Host- prefix, and none of them use __Secure-.
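To make the flags and prefixes concrete, here is a hedged sketch of how a site might set them; it uses Flask as an example framework (our choice for illustration, not something analysed in the study):

```python
# Hedged sketch: setting the cookie protections discussed above, using
# Flask. Cookie names and values are illustrative assumptions.
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/")
def index():
    resp = make_response("ok")
    # Secure + HttpOnly: sent only over HTTPS, hidden from document.cookie
    resp.set_cookie("sessionid", "abc123", secure=True, httponly=True)
    # __Host- prefix: the browser additionally enforces Secure, Path=/
    # and no Domain attribute, pinning the cookie to this exact host.
    resp.set_cookie("__Host-token", "xyz", secure=True, path="/")
    return resp
```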

Can Cookies Be Rejected?

Only 8% of the websites analysed allow you to reject cookies directly from the main banner.

Of the remaining websites, 22% do not meet the premise that rejecting should be "as easy as accepting", since more actions are needed to disable the use of cookies. The remaining 70%, although compliant, use marketing strategies to subtly nudge the user into accepting cookies, for example with ambiguous buttons that make people think cookies have been deactivated.

The following graph shows the total number of cookies registered across all domains, classified by stage (before consent, on acceptance and on rejection) and by whether or not third-party cookies were blocked.

As can be seen from the results, simply using a third-party cookie blocker brings a significant decrease in the number of cookies used, even when all cookies have been rejected. We can conclude that 69% of the domains that allow cookies to be rejected do not completely eliminate third-party cookies when the user rejects them, unless the browser blocks them.

Conclusions

  • None of the websites that use only technical and/or personalisation cookies gives the user any kind of warning that this type of cookie is being used.
  • The data indicate that 44% of websites use as many or more third-party cookies than their own; in the worst cases, 90% of a website's cookies are third-party. Enabling the blocking of third-party cookies in the browser is recommended to limit their number.
  • Even when all cookies are rejected, many of these third-party cookies continue to be used just the same.
  • The regulations indicate that session cookies should be given priority over permanent cookies. However, 96% of the sites analysed use more permanent cookies than session cookies and, on average, 86% of the cookies used by a website are permanent.
  • The regulations indicate that the lifespan of these cookies should not exceed two years; however, 15% of cookies use expiry periods of more than 24 months.
  • 46% of websites use analytical cookies before consent and 25% keep using them even after all cookies are rejected, in violation of the AEPD policy.

WhatsApp Terms and Conditions Update: A Cheeky Move?

Carlos Ávila    Diego Samuel Espitia    11 January, 2021

Surely by now many have already accepted the new terms and privacy policy without really knowing what they were about or their impact on the privacy of their data, and many others have even decided to switch to Telegram and begin abandoning the green messenger…

Why so much fuss about this policy update? Briefly: by accepting (Figure 1) this update of the conditions and privacy policy, mandatory from 8 February, you allow your WhatsApp data to be shared with the rest of Facebook's services. This was optional a few years ago, when the user could decide directly what to share and what not to share between the Facebook companies.

Notification of update of conditions and privacy policy

Users are talking a lot about this controversial topic because, if you do not accept the update, you will not be able to continue using the application. In recent days several articles have been written giving details about it, so we decided to focus this entry on the alternatives we have in the face of Facebook's manifest intentions regarding the use of our data.

Considerations on acceptance of the new terms and conditions

We are interested in what will happen to users who accepted these new terms by mistake or in a hurry and want to revoke that acceptance, even if this means that on 8 February this year they will have to stop using the platform. Will they be able to do so? Is there anywhere this acceptance can be revoked? The answer, currently, is simply NO. Nevertheless, we set out to verify some actions that users might try in order to reverse this "unconscious" acceptance, especially after reading so many articles and messages on Twitter about the subject. We started with the most obvious one: searching for an option in the account settings… of course, there is no such option…

The second option we tried was more drastic: deleting the user and creating it again, or even loading another user into the application, to see whether the policy acceptance screen appeared again. However, when running WhatsApp, the application picks up the latest update (version 2.20.206.24) and the new policy stands as accepted.

To be more thorough, the third option available to the user is to uninstall the application completely and reinstall a previous version from the official store. However, when carrying out this procedure we verified that it is not possible to install a previous version, since none is available officially as an alternative (of course, if we already have the installer of a previously downloaded version, or we download one from an unofficial store, which we do not recommend, we could install another version with the previous policy).

More details

It is also interesting to highlight that the new privacy policy does not fully apply (sic) to the European Community, with an exclusive policy for users resident in this part of the world. This is due to the GDPR, which prevents Facebook, like any other company, from sharing its users' data with its other companies, or using it for various purposes, without the explicit and clear approval of the user involved. Thanks to this, WhatsApp users in the European Community have, for now, won the battle over the control of their privacy.

In short, WhatsApp users who have already accepted the privacy policy without reading it or considering what it implies for the handling of their data only have two options:

  • Delete the account and leave this messaging service, migrating to one of the many similar services that have emerged in recent years. Those who choose this option can select from several services that have taken off recently.
  • Continue using the service, bearing in mind that it is not possible to revoke the new privacy policy and accepting that your data will be shared among all the Facebook companies for purposes that, as the policy indicates, are intended to "operate, provide, improve, understand, customize, support and promote our services".

Cyber Security Weekly Briefing January 2-8

ElevenPaths    8 January, 2021

SolarWinds Update

To end the year, Microsoft published an update of its findings regarding the impact of the SolarWinds incident on its systems. In this release, it emphasises that neither production services nor customer data were affected by unauthorised access, and that there is no evidence of the use of counterfeit SAML tokens to access proprietary cloud resources, or of its infrastructure being used to attack customers. However, Microsoft has revealed that the attackers were able to compromise a limited number of internal accounts, one of them with permissions to read proprietary source code. Through this account, several code repositories were accessed. According to Microsoft's investigation, no changes were made, as the account did not have the write permissions needed to do so.

Also, on Tuesday, January 5th, the U.S. Department of Justice issued a statement confirming that its systems had been breached as a result of the supply chain attack involving SolarWinds Orion software. The internal investigation revealed that the threat agents had moved between network systems, gaining access to the email accounts of about 3% of the entity's employees, more than 3,000 individuals. The agency says that no impact on any classified system has been detected. On the same day, the FBI, CISA, ODNI and the NSA published a joint statement formally blaming an APT linked to Russia for the attack. Lastly, a recent hypothesis pointing to the TeamCity build management software as an entry point into SolarWinds' systems has been discussed in the media. JetBrains, the company that owns the software, has denied these speculations, stating that it is unaware of any investigation into the matter.

More information: https://msrc-blog.microsoft.com/2020/12/31/microsoft-internal-solorigate-investigation-update/
https://www.justice.gov/opa/pr/department-justice-statement-solarwinds-update

Analysis of Malicious C2 Infrastructure in 2020

Recorded Future's Insikt Group has published the results of research into the infrastructure of malicious Command and Control (C2) servers identified on its platforms throughout 2020. The research provides interesting details, such as that more than half of the detected servers were not referenced in public sources, and that these servers have an average lifespan of 55 days within the malicious scheme. On the other hand, it also reveals that the hosting providers where most malicious servers were detected are those with the largest infrastructure, such as Amazon or Digital Ocean, contrary to the common belief that the most suspicious hosting providers are the ones hosting these fraudulent activities. The data also show a tendency to use open source tools in malware infection operations: among these, Insikt Group points out that offensive security tools such as Cobalt Strike or Metasploit are present in at least a quarter of all the servers analysed. Finally, it should be noted that the researchers link almost all of their findings to APTs or threat actors with strong financial capabilities.

More details: https://go.recordedfuture.com/hubfs/reports/cta-2021-0107.pdf

Zyxel Fixes a Critical Vulnerability in its Devices

Network device manufacturer Zyxel has released a security advisory addressing a critical vulnerability in its firmware. The flaw, tracked as CVE-2020-29583 with a CVSS of 7.8, would allow a threat agent to access vulnerable machines with administrator privileges via SSH, due to the existence of an undocumented secret account (zyfwp) whose password was hardcoded and stored in plaintext in the firmware. The vulnerability allows attackers to change the firewall configuration, intercept traffic or create VPN accounts to access the network where the device is located. The flaw, discovered and reported in December by EYE researchers, affects Zyxel USG, USG FLEX, ATP and VPN devices with firmware version V4.60, as well as NXC2500 AP controllers with firmware versions between V6.00 and V6.10, all of which have been fixed in versions V4.60 Patch1 and V6.10 Patch1.

More information: https://www.zyxel.com/support/CVE-2020-29583.shtml

Remote Code Execution Vulnerability in Zend Framework

Cybersecurity researcher Ling Yizhou has revealed a deserialization vulnerability in Zend Framework that attackers could exploit to achieve remote code execution on PHP sites. The flaw, tracked as CVE-2021-3007, affects Zend Framework 3.0.0 and could also impact some instances of Zend's successor, the Laminas Project. A vulnerable application could deserialize and process data received in an inappropriate format, which could trigger anything from a denial of service to the execution of arbitrary commands by the attacker in the context of the application.

More details: https://www.bleepingcomputer.com/news/security/zend-framework-disputes-rce-vulnerability-issues-patch/

Google Publishes its Security Bulletin for Android

Google has released the January security update for its Android operating system, which addresses 42 vulnerabilities, including four critical ones. The most severe is CVE-2021-0316, a System flaw that could be exploited to execute code remotely. Another three vulnerabilities addressed in Android's System component have a high severity score: two elevation of privilege issues and one information disclosure bug. In addition, security patch level 2021-01-01 also fixes 15 vulnerabilities in Framework, including one critical denial of service (DoS) flaw, eight high-severity elevation of privilege flaws, four high-severity information disclosure issues, one high-severity DoS flaw and one medium-severity remote code execution vulnerability. The second part of the security update addresses a total of 19 vulnerabilities in Kernel (three high-severity), MediaTek (one high-severity) and Qualcomm components (six high-severity). Patches for nine flaws in Qualcomm's closed-source components (two critical and seven high-severity) are also included in this month's set. Finally, a security patch has been released for Pixel devices, covering another four vulnerabilities.

All the information: https://source.android.com/security/bulletin/pixel/2021-01-01

When will Robots find a place in the Smart Home?

Olivia Brookhouse    29 December, 2020

With the mass introduction of smart speakers, smart doorbells, smart fridges and even smart toilets, there is a world of possibilities when it comes to innovating our homes. Whilst smart speakers have come close to providing human-like assistance, they still lack the physical attributes of the robotic assistants we have seen in films. Robots are almost second nature in factories and production lines, but when will they become widespread in our homes?

iRobot

iRobot is one of the leading companies in home robotics, and you may be familiar with its Roomba model, the autonomous vacuum cleaner. Since 1990 the company has navigated various business plans in trying to build something both technologically advanced and commercially desirable.

When they first started, they encountered many issues which the robotics industry has still not perfected, including spatial navigation, voice recognition and machine vision. Back then they were one of the few companies investigating these components, which now form the basis of a rich ecosystem of technologies, the most important being Artificial Intelligence and the Internet of Things.

If so many bright minds and big companies are devoted to perfecting these building blocks of robotics, why haven’t we all got a robot in our homes?

“Getting a robot to work successfully still means getting just the right mix of mechanical, electrical, and software engineering, connectivity, and data science”

Colin Angle: Chairman, CEO and Founder of iRobot

After many failed business models, they built the Roomba, an autonomous vacuum cleaner which can scan a room, identify objects and clean the floor without any human interference. It can even remember dirty spots that need extra attention, plug itself into its charging station, and resume where it left off once the battery is recharged.

However, consumers are often hesitant to believe that the Roomba does what it says on the tin. This is a problem many robotics companies face: even their most successful products are doubted by the average consumer.

Service and Social Robots

When talking about home robotics, people tend to group robots into two different fields: service robots and social robots. Whilst service robots will eventually take on household tasks such as cooking and serving meals, social robots will provide owners with some level of empathetic interaction to offer companionship. Virtual assistants have already built a strong presence in the smart home market but lack the mobility to provide richer assistance. This presents many challenges for design and engineering: spatial awareness, voice and facial recognition, emotional intelligence, machine vision and connectivity.

Robot senses

For a robot to begin to perform tasks, it must understand its environment. As humans, we all have a concept of how the inside of our house differs from the outside, the layout of our furniture and devices, the weight and shapes of objects and how we should interact with them. If robots are to interact better with people, they need to share this common understanding of how the environment works in order to perform tasks successfully. Providing robots with technologies that facilitate obstacle avoidance and the ability to distinguish one room from another is crucial when building domestic robots.

With IoT-connected sensors, robots can collect a huge amount of data to help guide their behaviour. The main objective of these sensors is to allow the robot to move freely around its environment, much like the perception and prediction systems of an autonomous vehicle. Thanks to IoT, these sensors can be connected to the home network to show real-time information to users remotely. Facial recognition and natural language processing help robots understand speech and emotion to interact better with the people around them.

For many, having a robot which can move, speak and understand is enough. However, some companies are going even further to give robots a fuller sensory experience, with sensors that allow robots to see through walls, detect dangerous gases, and add infrared vision, gyroscopes and thermal imaging. AI and machine learning tools then help aggregate and extract meaning from this rich sensor information, powering the robot's decision-making capabilities.

Whilst many of these individual technologies for helping robots navigate and understand their environment are extremely advanced, it is still very difficult to explain the concepts we assume to be fact, such as that a bedroom is a place where humans sleep and that you cannot have a bath in the kitchen sink.

What seems clear is that we can teach a robot to perform individual tasks to a high level of accuracy. They can successfully 'Play Rolling in the Deep by Adele' or 'Clean the floor': tasks which are one-directional and command-based. However, tasks which require multi-level understanding prove trickier. The development of natural language processing is crucial here to give the robot a better sense of what we want and how we want it, and Artificial Intelligence will help these robots navigate unexpected scenarios and learn how to react accordingly. As these technologies evolve, so will the level of robots' physical interaction with the real world, making them generalists and not just one-task specialists.

IoRT

The Internet of Things and robotics are coming together to create the Internet of Robotic Things, a concept where intelligent devices (robots) can monitor their environment and report back with live data. This is particularly important for the development of home care robots, so that they can feed information to a central care system in case of emergencies. Robots can then go one step further than IoT-connected sensors, and act. The combination of these two technologies will provide a rich ecosystem for connected assistance.

Pricing, availability and consumer awareness

There are many challenges that home robotics must overcome, not only on a technological level but in terms of price, availability and consumer perception of 'robots'. Many consumers doubt the capabilities of robotics and do not trust these strangers in our homes. Companies need to find a way to replicate, for physical multi-function robots, the acceptance and trust won by virtual assistants and smart speakers.

To keep up to date with Telefónica's Internet of Things area, visit our webpage or follow us on Twitter, LinkedIn and YouTube.

The First Official Vulnerabilities in Machine Learning in General

Franco Piergallini Guida    23 December, 2020

Today you are nobody on the market if you do not use a Machine Learning system, whether it is a system of nested "ifs" or a real intelligence model with an enviable ROC curve. This technique is now part of the usual cyber security repertoire and, as such, has been incorporated into the industry as just another method. If we ask whether these systems are being attacked or suffer from vulnerabilities, one way to measure it is to check whether official flaws are already known and what impact they have had.

How do you attack a Machine Learning system? Well, as with almost everything in cyber security, by getting to know it in depth. One formula is to "extract" its intelligence model in order to evade it: if we know how a system classifies, we can send samples so that it classifies them to our liking while we go unnoticed. For example, in 2019 the first vulnerability associated with a Machine Learning system received its own CVE, registered with NIST. Here, a Machine Learning model for spam classification was "imitated" by collecting the scores that the Email Protection system assigned to various headers. Once the model had been replicated, the attackers extracted the knowledge used to generate a subsequent attack that evaded the filters.

Basically, the relevant knowledge the researchers obtained by replicating the original model was the weight the system assigned to each term appearing in an email header when classifying it as "not spam". From there, they could run tests, adding those terms to a spam email to "trick" the original model and achieve the objective: having a spam email classified as non-spam.
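As a toy illustration of this score-probing idea (not the actual technique used against the affected product), one could estimate per-term weights from the scores a system returns and then pad a malicious message with the most "ham-like" terms. A minimal sketch, with invented names, assuming query access to a scoring endpoint:

```python
# Hedged sketch: approximating a target's per-term weights from the
# scores it returns, then picking the most "not spam" terms as padding.
# probe(), the term list and the sample count are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression

def estimate_weights(probe, terms, n_samples=2000, seed=0):
    """probe(text) -> numeric score returned by the target system."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, size=(n_samples, len(terms)))  # random term subsets
    y = [probe(" ".join(t for t, on in zip(terms, row) if on)) for row in X]
    model = LinearRegression().fit(X, y)
    return dict(zip(terms, model.coef_))  # approximate weight per term

# Terms with the strongest "ham" weights become padding appended to a
# spam message so that the replicated (and thus the original) model is fooled.
```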

In our post "Thinking About Attacks to WAFs Based on Machine Learning" we used the same technique to trick models implemented in some WAFs that detect malicious URLs and XSS. We probed the models to learn which terms carry more weight when classifying a URL as non-malicious, and included them in our malicious URL to produce an erroneous classification in the prediction.

The manufacturer's response to this vulnerability indicates that part of the solution was their model's ability to update its scores (weights) in real time; in other words, sacrificing a static model for adaptability, dynamically retraining with new user interactions. Although this is an interesting possibility in this case, and one that strengthens the system, it is not always applicable to any model, as it could open another attack vector in which an adversary manipulates the model's decision boundaries by poisoning it with synthetic inputs that are actually "rubbish". Since the attacker would have to insert a large amount of rubbish, services that are more popular and see a large volume of legitimate traffic are more difficult to poison with a significant effect on the learning outcome.

A New World of Opportunities

The previous vulnerability was restricted to one product, but we have also seen generic problems in the algorithms themselves. To make an analogy, it is like finding a flaw in one manufacturer's implementation versus a flaw in the design of a protocol (which would force all manufacturers to update). In this sense, perhaps the most widespread Machine Learning implementations are models trained through gradient descent.

Not so long ago these were found to be potentially vulnerable to arbitrary misclassification attacks, as explained in this alert from the CERT Coordination Center at Carnegie Mellon University. We have already studied and shared a real-world application of this, attacking a deepfake video recognition system, and another attacking mobile applications for the detection of skin melanomas, which we will publish later on.
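The canonical example of such an attack on gradient-trained models is the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model's loss. A minimal sketch, assuming a PyTorch classifier and inputs in [0, 1] (our choice of framework and ranges, not something named by the advisory):

```python
# Hedged sketch: fast gradient sign method (FGSM) against a generic
# PyTorch classifier. Model, epsilon and input range are assumptions.
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=0.03):
    """Return an adversarial copy of x that the model may misclassify."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step in the direction of the loss gradient's sign, stay in [0, 1]
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```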

In conclusion, as we incorporate this technology into the world of security, we will see more implementation flaws and, more sporadically, design flaws in algorithms that will test the capacity of these supposedly intelligent systems. The consequence, in general, will be attackers tricking them into misclassifying and thereby, probably, evading certain security systems.

Cyber Security Weekly Briefing December 12-18

ElevenPaths    18 December, 2020

Supply Chain compromise: SolarWinds Orion

FireEye researchers have unveiled a major global information theft and espionage operation that takes advantage of the supply chain to gain access to the systems of public and private entities. The entry point was the insertion of malicious code into legitimate updates of SolarWinds Orion, a widely used IT infrastructure management software. Between March and June 2020, the supplier's official website offered for download multiple digitally signed versions containing a backdoor that researchers have called SUNBURST. The attackers behind this campaign have kept secrecy a priority in their raids, using as few malware artifacts as possible, as well as hard-to-attribute tools and OpSec techniques. Victims include telecommunications, consulting, technology and extraction companies in North America, Europe, Asia and the Middle East, as well as government organisations. As for the threat agent, Volexity researchers have revealed details of three independent incidents between December 2019 and July 2020, allegedly carried out by this same agent, which they have named Dark Halo (FireEye designates it UNC2452), against a US think tank. The aim of these intrusions was to obtain emails associated with specific individuals in the organisation (executives, politicians and IT staff).

SolarWinds has issued a new update today (2020.2.1 HF 2) with the ability to disable any traces of SUNBURST malware left on systems that had malicious versions installed. Meanwhile, in an effort to mitigate this threat, Microsoft reported that its antivirus product Microsoft Defender has begun quarantining the Orion software binaries that are part of malicious versions, which could cause crashes on systems that have not yet been updated. Likewise, Microsoft, together with several cyber security partners, has carried out an operation to take control of the Command & Control (C&C) server used by SUNBURST, with the aim of preventing the deployment of additional payloads on infected systems and identifying possible victims. In addition, FireEye revealed yesterday that the actions taken by Microsoft and its partners have enabled a "kill switch" in the malware's code, which will prevent its execution by resolving all domains and subdomains of the C&C server to a Microsoft-owned IP. It is not yet known how the threat agents managed to implant the malicious code in the SolarWinds updates; incorrectly secured FTP services or an initial compromise of the Office 365 mail service are considered possible entry vectors. Researchers at ReversingLabs, through extensive code analysis of Orion binaries, have revealed that previous versions of the software had already been manipulated to lay the groundwork for the subsequent introduction of malicious code. The attackers went so far as to modify the source code imitating the coding style and naming rules of the software's developers, also compromising the packaging infrastructure and legitimate digital signature mechanisms.

More info: https://www.volexity.com/blog/2020/12/14/dark-halo-leverages-solarwinds-compromise-to-breach-organizations/

Vulnerabilities in Verifone and Ingenico devices

Security researchers Aleksei Stennikov and Timur Yunusov have exposed vulnerabilities in Point of Sale (PoS) devices from two of the industry's leading manufacturers, Verifone and Ingenico, in a presentation at Black Hat Europe 2020. These terminals are used in commercial establishments to manage the sales process, payment collection and ticket printing, among other things. The flaws detected affect Verifone VX520 devices, the Verifone MX series and the Ingenico Telium 2 series. They are stack overflow and buffer overflow vulnerabilities that could allow arbitrary code execution. The researchers also highlight that both brands use default passwords for access. The flaws could be exploited in different ways, with both physical access to the PoS terminal and remote input vectors viable for threat agents, whose objectives are to exfiltrate card information, clone terminals, or spread malware infections to the network where the terminals sit. Both companies confirm that they have updated their terminals to prevent the exploitation of vulnerabilities that had existed for at least 10 years.

All the details: https://www.cyberdlab.com/research-blog/posworld-vulnerabilities-within-ingenico-telium-2-and-verifone-vx-and-mx-series-point-of-sales-terminals

SAML authentication bypass due to flaws in the Golang XML parser

Mattermost security researchers, in collaboration with Golang's security team, have revealed three critical vulnerabilities in the XML parser of the Go programming language. The flaws, identified as CVE-2020-29509, CVE-2020-29510 and CVE-2020-29511, all with a CVSS of 9.8, stem from the fact that Golang's XML parser returns inconsistent results when encoding and decoding XML. Threat agents could exploit these vulnerabilities in Go-based SAML implementations and modify SAML messages by injecting malicious XML markup to impersonate another user, which could lead to privilege escalation and, in some cases, a complete authentication bypass. So far, the Go security team has not addressed these vulnerabilities.

Learn more: https://mattermost.com/blog/coordinated-disclosure-go-xml-vulnerabilities/

New cyber espionage campaign from Lazarus Group

Researchers from HvS Consulting have carried out a detailed investigation into a recent cyber espionage operation attributed to the Lazarus Group and aimed at multiple European entities in the electrical and manufacturing sectors. The incidents began to be noticed in March and April 2020, continuing uninterrupted into November. Social engineering has been the preferred entry point: users receive fake job offers with malicious macros, whether through emails, contacts on social networks such as LinkedIn, or messaging apps like WhatsApp. The final goal is to infect the entire network and remain undetected in order to exfiltrate confidential information. It is worth highlighting that Lazarus has advanced traffic tunnelling capabilities, with a flexible infrastructure that allows it to modify its C&C servers frequently, and tools that run completely in memory, thus avoiding detection.

More info: https://www.hvs-consulting.de/media/downloads/ThreatReport-Lazarus.pdf

Telephone extortion as DoppelPaymer operators’ new tactic

The US Federal Bureau of Investigation (FBI) has issued an alert reporting a new extortion tactic by DoppelPaymer ransomware operators. This malicious software already applied the well-known double extortion tactic, which consists of publishing victims' exfiltrated data if they do not make the payment demanded by the threat agents. According to FBI investigations, there is now evidence of telephone calls to victim companies intended to intimidate and coerce them and their workers into paying the ransom for the encrypted and stolen data, a tactic DoppelPaymer operators have used since February 2020. Likewise, the digital outlet ZDNet published information at the beginning of this month indicating that other ransomware operators, such as Sekhmet, Maze, Conti and Ryuk, have taken up this same extortion tactic against their victims. The FBI recommends not paying the ransom demanded and bringing these incidents to the attention of the authorities.

All the details: https://assets.documentcloud.org/documents/20428892/doppelpaymer-fbi-pin-on-dec-10-2020.pdf

0-day vulnerability in HPE server management software

Hewlett Packard Enterprise has revealed a 0-day vulnerability in its Systems Insight Manager (SIM) software that affects both Windows and Linux operating systems. The flaw, identified as CVE-2020-7200, could allow an unprivileged threat agent to execute code on vulnerable servers due to inadequate validation of user-supplied data. While HPE has not yet released the security update that fixes the flaw, it has provided temporary mitigation measures for Windows, based on disabling the “Federated Search” and “Federated CMS Configuration” features. The firm has not revealed whether the vulnerability is being actively exploited; however, it states that the full fix will be published in a future version of the software.

More: https://support.hpe.com/hpesc/public/docDisplay?docLocale=en_US&docId=hpesbgn04068en_us

Hiding Keys Under the Mat: Governments Could Ensure Universal Insecurity

Gonzalo Álvarez Marañón    17 December, 2020

The doorbell rang. “Who could that be now?” wondered Brittney Mills as she struggled to get off the couch. Eight months pregnant, she was finding it hard to move. “Don’t you move,” she said as she passed her 10-year-old daughter sitting in front of the TV. When Brittney opened the door, two bullets left her bleeding out on the floor. Her daughter ran to hide in the bathroom when she heard the shots. The baby died a few hours later; the killer was never found. The authorities turned to her iPhone for incriminating evidence but were unable to unlock it. They turned to Apple, but the company claimed it could not get into the smartphone: its contents were encrypted, and without her unlock passcode it was impossible to recover the keys.

This real case, which occurred in April 2015 in Baton Rouge, Louisiana, along with many others, such as that of the shooter Syed Farook, who killed 14 people and injured 22 in San Bernardino, has pitted the authorities against Apple and reopened an old debate: should encryption technology be available to everyone, with the consequent risk of obstructing criminal investigations?

You May Not Be Aware of It, But You Use the Most Robust Cryptography That Has Ever Existed

When you use messaging apps such as WhatsApp, iMessage or Telegram, video-conferencing applications such as FaceTime or Zoom, or certain email services and standards such as ProtonMail or OpenPGP, you are using end-to-end encryption: the communication between your device and the other person’s device is fully encrypted, and no one, not even the service provider, can find out its content.
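
A rough sketch of the idea in Go, using the well-known NaCl “box” construction from golang.org/x/crypto (an illustration of the principle, not the actual protocol of any of these apps): messages are sealed with keys that exist only at the endpoints, so the server relaying them handles nothing but ciphertext.

```go
package main

import (
	"crypto/rand"
	"fmt"

	"golang.org/x/crypto/nacl/box"
)

func main() {
	// Each endpoint generates its own key pair; private keys never leave the device.
	alicePub, alicePriv, _ := box.GenerateKey(rand.Reader)
	bobPub, bobPriv, _ := box.GenerateKey(rand.Reader)

	var nonce [24]byte
	rand.Read(nonce[:])

	// Alice seals a message for Bob; the provider relays `sealed` but cannot open it.
	sealed := box.Seal(nil, []byte("see you at 8"), &nonce, bobPub, alicePriv)

	// Only Bob, holding his private key, can open it.
	plain, ok := box.Open(nil, sealed, &nonce, alicePub, bobPriv)
	fmt.Println(ok, string(plain))
}
```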

What’s more, the information inside your smartphone is encrypted with a master key generated on the device from your unlock code, a key that never leaves the device. You can likewise encrypt your laptop with a master key derived from your password.
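
A minimal sketch of what “derived from your unlock code” means, assuming a plain PBKDF2 construction (real devices also entangle the passcode with secrets fused into the hardware, which is why the key genuinely cannot be rebuilt off the device):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"

	"golang.org/x/crypto/pbkdf2"
)

func main() {
	passcode := []byte("123456")             // the short secret the user knows
	salt := []byte("per-device-random-salt") // unique per device in practice
	// Stretch the passcode into a 256-bit master key; the iteration count
	// makes brute-forcing the passcode deliberately slow.
	key := pbkdf2.Key(passcode, salt, 100_000, 32, sha256.New)
	fmt.Println("derived master key:", hex.EncodeToString(key))
}
```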

In the end, it turns out that everyday consumer products incorporate cryptography so powerful that no one can break it. And of course, it is not just you using a smartphone or laptop, but also criminals, terrorists and murderers. The authorities watch helplessly as mountains of smartphones, tablets and computers pile up in courtrooms with priceless evidence inside that no one can access.

Should cryptography be banned, and should the security measures of today’s devices be weakened? Some governments are proposing a halfway measure: keeping a copy of everyone’s keys (key escrow).

The Idea Behind Key Escrow

On paper, the concept is quite simple: a trusted authority keeps a copy of the encryption keys used by every device in the country. In other words, the aim is for the bad guys not to have access to citizens’ information while the good guys do, of course only in exceptional cases.

There are precedents for this idea dating back to the 1990s. At the time, the US government still classified cryptography as a munition, so its export was prohibited unless it was weakened to export-grade strength (40-bit symmetric keys, 512-bit RSA). For the computing power of the day that did not even seem so bad, since it was assumed that no one other than the NSA could break them. What could possibly go wrong?

You don’t have to look very far. Consider the SSL/TLS protocol. Browsers and websites were forced to include those export-grade cipher suites. You know the famous computing adage: “if it works, don’t touch it”. So, 20 years later, TLS was still supporting the weakened suites even though the export restrictions had been lifted in 2000. And in 2015 came the FREAK and LogJam attacks, which allowed a website’s security to be downgraded to the export suites, making its cryptography so weak that it could be broken quickly and cheaply. Ironically, the FBI and NSA websites were also affected.
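
A back-of-the-envelope calculation shows how little protection an export-grade 40-bit key offers today (the trial rate below is an assumed, conservative figure for a single modern machine):

```go
package main

import "fmt"

func main() {
	keys := float64(uint64(1) << 40) // a 40-bit key space: ~1.1e12 candidates
	perSecond := 1e9                 // assumed brute-force rate: 10^9 trials/s
	fmt.Printf("exhaustive search, worst case: %.1f minutes\n", keys/perSecond/60)
}
```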

Back in the 1990s, the NSA also attempted to restrict the cryptographic capabilities of communications devices by another route: the Clipper chip. The idea behind Clipper was that any telephone or communications device that wanted to use cryptography would incorporate the chip, with a pre-assigned cryptographic key that would be handed to the government in escrow. If a government agency deemed it necessary to intercept a communication, it would retrieve the key and decrypt the traffic. Fortunately, it never took off, as it failed to meet its own requirements and proved to be hackable. If you are curious about the history of this chip, I recommend the chapter on its rise and fall in Steven Levy’s book Crypto.
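
Conceptually, escrow of this kind amounts to encrypting every session key a second time, under the authority’s public key, and attaching the result to the traffic. The toy Go sketch below (a loose analogy of Clipper’s LEAF field, not its real design, which used the Skipjack cipher and split each unit key between two agencies) makes the single point of failure explicit: whoever obtains the escrow private key recovers every session key ever escrowed.

```go
package main

import (
	"bytes"
	"crypto/rand"
	"crypto/rsa"
	"crypto/sha256"
	"fmt"
)

func main() {
	// The escrow authority's key pair; its private half can unlock everything.
	escrowKey, _ := rsa.GenerateKey(rand.Reader, 2048)

	// The session key that actually protects a user's communication.
	sessionKey := make([]byte, 32)
	rand.Read(sessionKey)

	// The "access field" attached to the traffic: the session key, encrypted
	// under the authority's public key.
	leaf, _ := rsa.EncryptOAEP(sha256.New(), rand.Reader,
		&escrowKey.PublicKey, sessionKey, []byte("escrow"))

	// Whoever holds the escrow private key recovers the session key.
	recovered, _ := rsa.DecryptOAEP(sha256.New(), rand.Reader,
		escrowKey, leaf, []byte("escrow"))
	fmt.Println("authority recovered session key:", bytes.Equal(recovered, sessionKey))
}
```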

Another question that arises is: who holds the keys? The key server is a single point of failure: an attacker who managed to break into it would get hold of the keys of the entire population and could decrypt the communications of every citizen. Obviously, storing them all in the same place does not seem like a good idea. Each service provider could be forced to store its own customers’ keys, but how many would do so safely?

Alternatively, the keys could be distributed among several government agencies, each of which would have to contribute its share of the key when needed, as in the sketch below. Of course, implementing such a key-sharing system is not easy at all, and the master key becomes vulnerable again the moment it is reconstructed.
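
The classic tool for this kind of distribution is Shamir’s secret sharing. The sketch below (a toy implementation over the prime 2^255 - 19, for illustration only; real deployments use vetted libraries) splits a key among five agencies so that any three can reconstruct it while two or fewer learn nothing, and it also shows the catch: reconstruction reassembles the full key in one place.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// Prime modulus for the field: 2^255 - 19.
var p = new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 255), big.NewInt(19))

type share struct{ x, y *big.Int }

// split divides secret into n shares; any t of them reconstruct it,
// while t-1 or fewer reveal nothing about the secret.
func split(secret *big.Int, t, n int) []share {
	coeffs := []*big.Int{secret} // f(0) = secret
	for i := 1; i < t; i++ {
		c, _ := rand.Int(rand.Reader, p)
		coeffs = append(coeffs, c) // random higher-degree coefficients
	}
	shares := make([]share, 0, n)
	for i := 1; i <= n; i++ {
		x := big.NewInt(int64(i))
		y := big.NewInt(0)
		for j := len(coeffs) - 1; j >= 0; j-- { // evaluate f(x) by Horner's rule mod p
			y.Mul(y, x)
			y.Add(y, coeffs[j])
			y.Mod(y, p)
		}
		shares = append(shares, share{x, y})
	}
	return shares
}

// reconstruct recovers f(0) from t shares via Lagrange interpolation mod p.
func reconstruct(shares []share) *big.Int {
	secret := big.NewInt(0)
	for i, si := range shares {
		num, den := big.NewInt(1), big.NewInt(1)
		for j, sj := range shares {
			if i == j {
				continue
			}
			num.Mod(num.Mul(num, new(big.Int).Neg(sj.x)), p)
			den.Mod(den.Mul(den, new(big.Int).Sub(si.x, sj.x)), p)
		}
		term := new(big.Int).Mul(si.y, num)
		term.Mul(term, new(big.Int).ModInverse(den, p))
		secret.Add(secret, term)
		secret.Mod(secret, p)
	}
	return secret
}

func main() {
	key, _ := rand.Int(rand.Reader, p) // the master key to be escrowed
	shares := split(key, 3, 5)         // 3-of-5 split among five agencies
	got := reconstruct(shares[:3])     // any three shares suffice
	fmt.Println("recovered correctly:", got.Cmp(key) == 0)
}
```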

Another option would be to resort to threshold cryptography, in which the key is never reassembled in a single place, but the field is still immature, far from offering universally accepted, robust algorithms.

Moreover, even if such algorithms existed, the chosen solution would require major changes to the cryptographic protocols and applications of every product and service. They would have to be rewritten, with the consequent appearance of new vulnerabilities and flaws.

Many questions remain: should these changes be implemented at the operating-system level in iOS, Android, Linux, Windows, macOS, etc.? Would every developer of an application that uses encryption be obliged to hand over keys in escrow? Would all users be obliged to use these backdoors? How long would the migration take, and what would happen to legacy applications?

So far, we have been talking about safeguarding “the key” as if there were only one key per user or per device. The reality is quite different: for both encryption at rest and encryption in transit, a multitude of keys are used and constantly rotated, derived from master keys that can rotate in turn. No two WhatsApp messages are encrypted with the same key. A whole key chain is updated every time you change your password or your device’s access code. In fact, it is not even clear which key or keys should be escrowed, nor how escrowed keys could be kept up to date so that they would serve their purpose if ever needed.
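
A toy ratchet makes the point concrete (heavily simplified, not the actual Signal/WhatsApp construction): each message key is derived from a rolling chain key that is immediately advanced and discarded, so there is no single long-lived key an authority could usefully escrow.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// kdf derives a new key from the current one, bound to a label.
func kdf(key []byte, label string) []byte {
	mac := hmac.New(sha256.New, key)
	mac.Write([]byte(label))
	return mac.Sum(nil)
}

func main() {
	chainKey := []byte("initial-shared-secret") // illustrative starting point
	for i := 1; i <= 3; i++ {
		msgKey := kdf(chainKey, "message") // encrypts exactly one message
		chainKey = kdf(chainKey, "chain")  // old chain key is discarded
		fmt.Printf("message %d key: %s...\n", i, hex.EncodeToString(msgKey)[:16])
	}
}
```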

In summary, quoting the comprehensive work of a group of cryptographers from the 1990s (the 1997 report The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption), key escrow would have an impact along at least three dimensions:

  • Risk: Failure of the key recovery mechanisms can jeopardise the security of current encryption systems.
  • Complexity: Even if it were possible to make key recovery reasonably transparent to end users, a fully functional key recovery infrastructure is an extraordinarily complex system, with many new entities, keys, operational requirements and interactions.
  • Economic cost: No one has yet described, let alone demonstrated, a viable economic model to account for the true costs of key recovery.

So far, we have assumed that the government would use the escrowed keys only for criminal investigations. What guarantee do you have that it will not use them against its own citizens? Things get even more complicated.

And what about criminals? For them it would be as simple as building their own messaging or data-encryption apps with strong cryptography, revealing their keys and passwords to no one. If key escrow is made compulsory, only criminals will hold unescrowed keys.

Key escrow systems are inherently less secure, more expensive and harder to use than equivalent systems without a recovery function. Mass deployment of escrow-based infrastructures to meet law enforcement specifications would require significant sacrifices in security and convenience and a substantial increase in costs for all users. Moreover, building a secure infrastructure of the massive scale and complexity such a system would require goes beyond current experience and expertise in the field, and may well introduce unacceptable risks and costs.

Today’s products and services that strive to be secure already contain enough security flaws without us deliberately introducing cryptographic vulnerabilities into future products and services by design. So far, every attempt has failed miserably. And it seems likely they will keep failing in the future.