This time it is Ripple20, a set of vulnerabilities affecting the TCP/IP stack implementation of billions of IoT devices. They have been described as 0-days, but they are not (there is no evidence that attackers have exploited them) and, besides, some of them had already been fixed before the announcement. That does not make these vulnerabilities any less serious. Given the huge number of exposed devices, has the Internet broken down again?
The Department of Homeland Security and CISA ICS-CERT have announced it: 19 different issues in the implementation of Treck’s TCP/IP stack. Since this implementation is supplied or licensed to a huge number of brands (almost 80 identified) and IoT devices, the affected devices do indeed number in the billions. And, by their nature, many of them will never be patched.
JSOF has performed a thorough analysis of the stack and found all kinds of issues: a meticulous audit inevitably turned up four critical vulnerabilities, several serious ones and other minor ones. They could allow anything from full control of the device to traffic poisoning and denial of service. The reasons for optimism are that the researchers developed an eye-catching logo and name for the bugs and reported the vulnerabilities privately, so many have already been fixed by Treck and by other companies using its implementation. The reasons for pessimism are that others have not been fixed, and that tracing the affected brands and models is difficult (66 brands are still pending confirmation). In any case, another important fact to highlight is that these devices usually sit in industrial plants, hospitals and other critical infrastructure, where a serious vulnerability could trigger dire consequences.
So the only thing left to do is to audit, understand and mitigate the issue on a case-by-case basis in order to know whether a system is really at risk. This should already be happening under a mature security plan (one that includes OT environments) but, in any case, Ripple20 could serve as an incentive to get there. Why? Because these are serious, public bugs in the guts of devices used for critical operations: a real sword of Damocles.
In any case, they are already known, so it is possible to protect ourselves or mitigate the problem, as happened in the past with other serious flaws affecting millions of connected devices. With those it also seemed that the Internet was going to break down, but we kept going. The reason was not that they were not serious (some were probably even exploited by third parties), but that we knew how to respond to them in time and form. We should not underestimate them; rather, we should keep attaching importance to them so that they do not lose it, while always avoiding catastrophic headlines. Let us review some historical cases.
Other “Apocalypses” in Cybersecurity
There have already been other predicted disasters that were going to affect the network as we know it, and about which many pessimistic headlines were written. Let us look at some examples:
- The first was the “Y2K bug”. Although it never had an official logo, it did have its own brand (Y2K). Those were different times and, in the end, it turned into a kind of apocalyptic disappointment that left behind a lot of literature and some TV films.
- The 2008 Debian Cryptographic Apocalypse: In 2006, a line of code that helped gather entropy when generating public/private key pairs was removed from Debian’s OpenSSL package, leaving the process ID as practically the only source of randomness. Keys generated with it were predictable and therefore no longer reliable or secure.
- Kaminsky and DNS in 2008: This was an inherent flaw in the protocol, not an implementation issue. Dan Kaminsky disclosed it without providing details. A few weeks later, Thomas Dullien published on his blog his own theory of what the problem could be, and he was right: it was possible to forge (by continuously sending certain traffic) the responses of a domain’s authoritative servers. Twelve years later, even after that scare, DNSSEC is still “a rarity”.
- “Large-scale” spying with BGP: In August 2008, people were once again talking about the greatest known vulnerability on the Internet. Tony Kapela and Alex Pilosov demonstrated a new technique (previously believed to be theoretical) that allowed Internet traffic to be intercepted on a global scale. It was a design flaw in the Border Gateway Protocol (BGP) that would allow unencrypted Internet traffic to be intercepted and even modified.
- Heartbleed in 2014 once again made it possible to extract private keys from exposed servers. It also created the vulnerability “brand”, because the apocalypse must also be sold: a logo and a dedicated page were designed with a template that would become the standard, a domain was reserved, a communication campaign of sorts was orchestrated, exaggerations spread, timing was carefully handled, etc. It opened the path to a new way of reporting, communicating and publicising security bugs, although curiously its short-term technical effect was different: the certificate revocation system was put to the test and, indeed, was not up to the task.
- Spectre/Meltdown in 2017 (and many other processor bugs since then): These flaws had some very interesting elements that made them an important novelty: they were hardware design flaws in the processor itself. Rarely had we seen an advisory on CERT.org that so openly proposed replacing the hardware in order to fix an issue.
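The scale of the Debian OpenSSL problem above is easy to appreciate with a minimal sketch. This is not OpenSSL’s real code: `weak_keygen` is a hypothetical stand-in that only illustrates the consequence of seeding key generation with nothing but a process ID, whose range on Linux was traditionally 0–32767.

```python
import random

PID_RANGE = 32768  # traditional Linux PID space: 0..32767

def weak_keygen(pid):
    """Stand-in for key generation after the entropy-mixing line was
    removed: the process ID is effectively the only seed material."""
    rng = random.Random(pid)       # seeded solely by the PID
    return rng.getrandbits(128)    # toy substitute for a real key

# Because the seed space is tiny, an attacker can simply precompute
# every key the broken generator could ever produce.
all_possible_keys = {weak_keygen(pid) for pid in range(PID_RANGE)}
print(f"only {len(all_possible_keys)} candidate keys to try")
```

The point is that no matter how long the key is, its effective strength collapses to the size of the seed space: brute-forcing a few tens of thousands of candidates is trivial, which is why every key generated by the affected package had to be considered compromised.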
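The mechanics behind the Kaminsky attack can also be sketched. A DNS transaction ID is only 16 bits, and by querying a fresh random subdomain each time, the attacker can restart the guessing race as often as needed. The simulation below is a deliberate simplification (it ignores source-port randomisation, the mitigation later deployed) just to show why persistence pays off:

```python
import random

TXID_SPACE = 2 ** 16  # DNS transaction IDs are 16-bit values

def poison_race(guesses, rng):
    """One race: the resolver queries a fresh random subdomain and the
    attacker floods it with spoofed answers carrying guessed IDs."""
    real_txid = rng.randrange(TXID_SPACE)
    spoofed_txids = rng.sample(range(TXID_SPACE), guesses)
    return real_txid in spoofed_txids

rng = random.Random(42)
races = 5000            # each random-subdomain query restarts the race
guesses_per_race = 100  # spoofed packets landed before the real answer
wins = sum(poison_race(guesses_per_race, rng) for _ in range(races))
# Each race succeeds with probability ~100/65536, so across thousands
# of races a successful poisoning becomes likely.
print(f"cache poisoned in {wins} of {races} races")
```

A single race is almost always lost, but since a win poisons the cache for the whole domain, the attacker only needs to succeed once; that asymmetry is what made the flaw so dangerous.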
However, looking back, it seems that so far none of these vulnerabilities has ever been used as a method of massive attack to collapse the Internet and “break it down”. Fortunately, the responsibility shown by all the actors in the industry has helped to avoid the worst-case scenario.
Unfortunately, we have experienced serious incidents in the network, but they were caused by other, far less remarkable bugs exploited through “traditional worms” such as WannaCry. This perhaps offers an interesting perspective on, on the one hand, the maturity of the industry and, on the other, the huge amount of work still to be done in some even simpler areas.