China Leads the Race Towards an Attack-Proof Quantum Internet

Gonzalo Álvarez Marañón    30 June, 2020

Did you know that there is a 100% secure encryption algorithm? It is known as the Vernam cipher (or one-time pad). In 1949, Claude Shannon proved mathematically that this algorithm achieves perfect secrecy. And did you know that it is (almost) never used? Well, maybe things will change now that a group of Chinese researchers has broken the record for the quantum transmission of keys between two stations separated by 1120 km. We are one step closer to reaching the Holy Grail of cryptography.

The Paradox of Perfect Encryption That Cannot Be Used in Practice

How is it possible that the only 100% secure encryption is not used? In cryptography things are never simple. To begin with, Vernam’s cipher is 100% secure as long as these four conditions are met:

  • The encryption key is generated in a truly random way.
  • The key is as long as the message to be encrypted.
  • The key is never reused.
  • The key is kept secret, being known only by the sender and receiver.

Let us look at the first condition. A truly random bit generator requires a natural source of randomness.  The problem is that designing a hardware device to exploit this randomness and produce a bit sequence free of biases and correlations is a very difficult task.

Then, another even more formidable challenge arises: How to securely share keys as long as the message to be encrypted? Think about it, if you need to encrypt information it is because you do not trust the communication channel. So, which channel can you trust to send the encryption key? You could encrypt it in turn, but with what key? And how do you share it? We get into an endless loop.
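
To see how simple the algorithm itself is, and why all the difficulty lies in the key, here is a minimal Python sketch of the Vernam cipher. It is only an illustration: os.urandom stands in for a true hardware randomness source.

```python
import os

def vernam_encrypt(message):
    # Conditions 1 and 2: a (cryptographically) random key exactly as long as the message.
    key = os.urandom(len(message))
    ciphertext = bytes(m ^ k for m, k in zip(message, key))
    return key, ciphertext

def vernam_decrypt(key, ciphertext):
    # XOR is its own inverse: applying the same key recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

key, ct = vernam_encrypt(b"attack at dawn")
assert vernam_decrypt(key, ct) == b"attack at dawn"
# Conditions 3 and 4 (never reuse the key, share it only with the receiver)
# are exactly the problem that quantum key distribution sets out to solve.
```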

The Key to Perfect Security Lies in Quantum Mechanics

Quantum key distribution brilliantly solves all the Vernam cipher issues at one stroke: it lets you create random keys of any desired length and guarantees that any attempt by an attacker to intercept them will be detected. Let us see how it does it.

As you may remember from your physics lessons at school, light is electromagnetic radiation composed of photons. These photons travel oscillating with a certain intensity and wavelength, and with one or more directions of polarisation. If you are a photography enthusiast, you may have heard of polarising filters. Their function is to eliminate all the oscillation directions of the light except one, as explained in the following figure:

Now step into the physics laboratory and send, one by one, photons that can be polarised in one of four different directions: vertical (|), horizontal (-), diagonal to the left (\) or diagonal to the right (/). These four polarisations form two orthogonal bases: on the one hand, | and -, which we will call base (+); and, on the other, / and \, which we will call base (×).

The receiver of your photons uses a filter, for example, a vertical one (|). Clearly, vertically-polarised photons will pass through unchanged, while horizontally-polarised photons, being perpendicular to the filter, will not pass.

Surprisingly, half of the diagonally-polarised photons will pass through the vertical filter and be reoriented vertically! Therefore, if a photon passes through the filter, it cannot be known whether it was polarised vertically or diagonally (\ or /). Similarly, if it does not pass, it cannot be said whether it was polarised horizontally or diagonally. In both cases, a diagonally-polarised photon may or may not pass, with equal probability.
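
The 50% figure follows from Malus's law: the probability that a photon passes a polarising filter is the squared cosine of the angle between its polarisation and the filter axis, which gives 1/2 at 45°. A toy Python simulation (a simplified single-photon model, not real optics) makes the point:

```python
import random

# Probability of passing a vertical filter, per Malus's law (cos² of the angle).
PASS_PROBABILITY = {"|": 1.0, "-": 0.0, "/": 0.5, "\\": 0.5}

def measure(polarisation):
    # A photon that passes leaves the filter with the filter's orientation,
    # so its original polarisation can no longer be recovered.
    return "|" if random.random() < PASS_PROBABILITY[polarisation] else None

trials = [measure("/") for _ in range(100_000)]
print(sum(t is not None for t in trials) / len(trials))  # ≈ 0.5
```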

And the paradoxes of the quantum world do not end here.

The Spooky Action at a Distance That Einstein Abhorred

Quantum entanglement occurs when a pair of particles, like two photons, interact physically. A laser beam fired through a certain type of crystal can cause individual photons to split into pairs of entangled photons, A and B. Both photons can then be separated by as great a distance as you want. And here comes the good part: when photon A adopts a direction of polarisation, photon B, entangled with A, adopts the same state, no matter how far away it is from A. This is the phenomenon that Albert Einstein sceptically called “spooky action at a distance”.

In 1991, the physicist Artur Ekert thought of using this quantum property of entanglement to devise a system for transmitting random keys that would be impossible for an attacker to intercept without being detected.

Quantum Key Distribution Using Quantum Entanglement

Let us suppose that Alice and Bob want to agree on a random encryption key as long as the message, n bits long. First, they need to agree on a convention to represent the ones and zeros of the key using the polarisation directions of the photons, for example:

State \ Base    +    ×
0               -    /
1               |    \

Step 1: A sequence of entangled photons is generated and sent, so that Alice and Bob receive the photons of each pair one by one. Anyone can generate this sequence: Alice, Bob or even a third party (trusted or not).

Step 2: Alice and Bob choose a random sequence of measurement bases, + or ×, and measure the polarisation state of the incoming photons, no matter who measures first. When Alice or Bob measures the polarisation state of a photon, its state becomes perfectly correlated with that of its entangled partner. From that moment on, it is as if both were observing the same photon.

Step 3: Alice and Bob publicly compare which bases they have used and keep only those bits that were measured in the same basis. If everything has worked well, Alice and Bob share exactly the same key: since each pair of measured photons is entangled, they must necessarily obtain the same result if they both measure in the same basis. On average, the measurement bases will have matched 50% of the time, so the key obtained will be n/2 bits long. The following is an example of the procedure:

Step 1
Position in sequence   1    2    3    4    5    6    7    8    9    10   11   12
Step 2
Alice's random bases   ×    ×    +    +    ×    +    ×    +    +    ×    +    ×
Alice's measurements   /    \    |    |    /    -    \    |    -    \    -    /
Bob's random bases     ×    +    +    ×    ×    +    +    +    +    ×    ×    +
Bob's measurements     /    -    |    /    /    -    |    |    -    \    \    -
Step 3
Matching bases         Yes  No   Yes  No   Yes  Yes  No   Yes  Yes  Yes  No   No
Key obtained           0         1         0    0         1    0    1

But what if an attacker was intercepting these photons? Wouldn’t he or she also know the secret key generated and distributed? What if there are transmission errors and the photons are disentangled along the way?

To solve these issues, Alice and Bob randomly select half of the bits from the obtained key and compare them publicly. If they match, then they know there has been no error. They discard these bits and assume that the rest of the bits obtained are valid, meaning that a final n/4-bit long key has been agreed upon. If a considerable part does not match, then either there were too many random transmission errors, or an attacker intercepted the photons and measured them on his or her own. In either case, the whole sequence is discarded, and they must start again. As observed, if the message is n bits long, on average 4n entangled photons must be generated and sent so that the key is the same length.
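
To make the sifting and reconciliation just described more concrete, here is a simplified Python simulation. It is an idealised toy model, not Ekert's actual protocol: entangled pairs are modelled as shared random bits, there is no channel noise, and the optional eavesdropper performs a naive intercept-and-resend attack.

```python
import random

BASES = ("+", "x")

def run_protocol(n_photons, eavesdropper=False):
    # Steps 1-2: measuring both photons of an entangled pair in the same basis
    # yields perfectly correlated outcomes; we model that shared randomness directly.
    alice_bases = [random.choice(BASES) for _ in range(n_photons)]
    bob_bases = [random.choice(BASES) for _ in range(n_photons)]
    pairs = []
    for a_basis, b_basis in zip(alice_bases, bob_bases):
        bit = random.randint(0, 1)
        bob_bit = bit
        # An intercept-and-resend attacker guesses the basis; a wrong guess
        # (half of the time) destroys the correlation and randomises Bob's bit.
        if eavesdropper and random.random() < 0.5:
            bob_bit = random.randint(0, 1)
        if a_basis == b_basis:          # Step 3: sifting, keep matching bases only
            pairs.append((bit, bob_bit))

    # Reconciliation: sacrifice a random half of the sifted bits to check for
    # discrepancies; any mismatch reveals noise or an eavesdropper.
    random.shuffle(pairs)
    check, key = pairs[: len(pairs) // 2], pairs[len(pairs) // 2:]
    errors = sum(a != b for a, b in check)
    return errors, [a for a, _ in key]

print(run_protocol(4 * 128)[0])                     # 0 errors without an attacker
print(run_protocol(4 * 128, eavesdropper=True)[0])  # many errors: attack detected
```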

And couldn’t an attacker measure a photon and resend it without being noticed? Impossible because, once measured, the photon is in a definite state, no longer a superposition of states. If he or she sends it out after observing it, it will no longer behave as a quantum object but as a classical object with a definite state. As a result, the receiver will correctly measure the state value only 50% of the time. Thanks to the key reconciliation mechanism just described, the presence of an attacker within the channel can be detected. In the quantum world, it is impossible to observe without leaving a trace.

It goes without saying that Ekert’s original protocol is more sophisticated, but with this simplified description the experiment conducted by Chinese researchers in collaboration with Ekert himself can be understood.

China Beats Record for Quantum Key Distribution

The Chinese research team led by Jian-Wei Pan succeeded in distributing keys between two stations 1120 km apart using entangled photons. This feat represents another major step in the race towards a totally secure, long-distance quantum Internet.

So far, experiments on quantum key distribution have been carried out through fibre optics at distances of just over 100 km. The most obvious alternative, i.e. sending them through the air from a satellite, is not an easy task, as water and dust particles in the atmosphere quickly disentangle the photons. Conventional methods could not get more than one in six million photons from the satellite to the ground-based telescope, clearly not enough to transmit keys.

In contrast, the system created by the Chinese research team at the University of Science and Technology of China, in Hefei, managed to transmit a key at a speed of 0.12 bits per second between two stations 1120 km apart. Since the satellite can view both stations simultaneously for 285 seconds a day, it can transmit keys to them using the quantum entanglement method at a rate of 34 bits per day with an error rate of 0.045. This is a modest figure, but a promising development, considering that it improves previous efficiency by 11 orders of magnitude.

In Ekert’s own words: “Entanglement provides almost ultimate security”. The next step is to overcome all the technological barriers. The race to build an attack-proof quantum information Internet has only just begun, with China leading the challenge and well ahead of the pack.

Ripple20: Internet Broken Down Again

Sergio de los Santos    29 June, 2020

This time, Ripple20 affects the TCP/IP stack implementation used by billions of IoT devices. The flaws are being described as 0-days, but they are not (there is no evidence that they have been exploited by attackers) and, besides, some of them had already been fixed before being announced. But this does not make these vulnerabilities less serious. Given the large number of exposed devices, has the Internet broken down again?

The Department of Homeland Security and the CISA ICS-CERT announced it. There are 19 different issues in the implementation of Treck’s TCP/IP stack. As this implementation is supplied or licensed to a huge number of brands (almost 80 identified) and IoT devices, the affected devices do indeed number in the billions. And, by nature, many of them will never be patched.

What Happened?

JSOF has performed a thorough analysis of the stack and found all kinds of issues. A meticulous audit inevitably turned up four critical vulnerabilities, many serious ones and other minor ones. They could allow everything from full control of the device to traffic poisoning and denial of service. The reasons for optimism are that the researchers created an eye-catching name and logo for the bugs and reported the vulnerabilities privately, so many have already been fixed by Treck and other companies using its implementation. The reasons for pessimism are that others have not been fixed, and that it is difficult to trace the affected brands and models (66 brands are pending confirmation). In any case, another important fact to highlight is that these devices are usually found in industrial plants, hospitals and other critical infrastructure, where a serious vulnerability could trigger horrible consequences.

So, the only thing left to do is to audit, understand and mitigate the issue on a case-by-case basis to know if a system is really at risk. This should already be done under a mature security plan (including OT environments) but, in any case, it could serve as an incentive to achieve it. Why? Because they are serious, public bugs in the guts of devices used for critical operations: A real sword of Damocles.

In any case, the flaws are now public knowledge, so it is possible to protect ourselves or mitigate the problem, as has happened in the past with other serious issues affecting millions of connected devices. With those, too, it seemed that the Internet was going to break down, yet we kept going. The reason was not that they were not serious (or even, probably, exploited by third parties), but that we knew how to respond to them in time and form. We should not underestimate them; rather, we should keep giving them the importance they deserve, while always avoiding catastrophic headlines. Let us review some historical cases.

Other “Apocalypses” in Cybersecurity

There have already been other announced disasters that were supposed to affect the network as we know it and about which many pessimistic headlines have been written. Let us look at some examples:

  • The first was the “Y2K bug”. Although it did not have an official logo, it did have its own brand (Y2K) from the beginning. Those were different times and, in the end, it turned out to be a kind of apocalyptic anticlimax that produced a lot of literature and some TV films.
  • The 2008 Debian Cryptographic Apocalypse: A line of code in the OpenSSL package that helped generate entropy when calculating the public and private key pair was removed in 2006. The keys generated with it were no longer reliable or secure.
  • Kaminsky and DNS in 2008: It was an inherent flaw in the protocol, not an implementation issue. Dan Kaminsky discovered it without providing details. A few weeks later, Thomas Dullien published on his blog his particular vision of what the problem could be, and he was right: it was possible to forge (through the continuous sending of certain traffic) the responses of the authoritative servers of a domain. Twelve years later, even after that catastrophe, DNSSEC is still “a rarity”.
  • “Large-scale” spying with BGP: In August 2008, people were talking again about the greatest known vulnerability on the Internet. Tony Kapela and Alex Pilosov demonstrated a new technique (until then believed to be merely theoretical) that allowed Internet traffic to be intercepted on a global scale. This was a design flaw in the Border Gateway Protocol (BGP) that would allow all unencrypted Internet traffic to be intercepted and even modified.
  • Heartbleed in 2014 once again made it possible to obtain the private keys of exposed servers. In addition, it created the “branded” vulnerability, because the apocalypse must also be sold. A logo and an exclusive page were designed with a template that would become the standard, a domain was reserved, a kind of communication campaign was orchestrated, exaggerations were spread, care was taken over timing, etc. It opened the path to a new way of notifying, communicating and spreading security bugs, although curiously the short-term technical effect was different: the certificate revocation system was put to the test and, indeed, it was not up to the task.
  • Spectre/Meltdown in 2017 (and, since then, many other processor bugs): These flaws had some very interesting elements that made them a genuine novelty: they were hardware design flaws in the processor itself. Rarely had we witnessed a note on CERT.org that so openly proposed changing the hardware in order to fix an issue.

However, looking back, so far it seems that none of these vulnerabilities has ever been used as a method of massive attack to collapse the Internet and “break it down”. Fortunately, the responsibility of all the actors within the industry has served to avoid the worst-case scenario.

Unfortunately, we have experienced serious issues within the network, but they have been caused by other much less significant bugs, based on “traditional worms” such as WannaCry. This perhaps shows an interesting perspective on, on the one hand, the maturity of the industry and, on the other hand, the huge work still to be done in some even simpler areas.

Cybersecurity Weekly Briefing June 20-26

ElevenPaths    26 June, 2020

Millions of User Records Exposed on an Oracle Server

Security researcher Anurag Sen has found an exposed database containing millions of records belonging to the company BlueKai, owned by Oracle. This is one of the largest web tracking companies, collecting third-party data for use in intelligent marketing. The security incident occurred after a server was left open without a password, exposing millions of people’s records. Among the data affected are people’s names and surnames, emails, home addresses, detailed web browsing activity, purchases, etc., as BlueKai collects all this raw web browsing data for later sale in an anonymised way. It is worth mentioning that Oracle received the notice from the researcher and has conducted an internal investigation to resolve the incident.

Learn more: https://techcrunch.com/2020/06/19/oracle-bluekai-web-tracking/

New Malicious Campaign on COVID-19 Using Trickbot

Trustwave researchers have detected a new COVID-19-related malicious campaign that is infecting victims by means of the Trickbot malware. This time, threat agents are using phishing campaigns as the attack vector, impersonating a volunteer organisation that wants to financially help those in need as a result of the pandemic. Victims are encouraged to open two identical malicious JNLP files attached to the email. Once the victim executes these documents, the infection occurs by downloading and running the “map.jar” software, which redirects the victim to an official WHO page with the aim of deceiving them. When done, the malware downloads the Trickbot banking trojan which, in addition to stealing bank credentials, has other functions such as stealing information or downloading other malware. Trustwave indicates that this is the first time JNLP files have been used as a TrickBot infection vector, and that the use of this file format to infect victims is not common.

More info: https://www.trustwave.com/en-us/resources/blogs/spiderlabs-blog/trickbot-disguised-as-covid-19-map/

AMD Identifies SMM Callout Flaws

AMD has disclosed three high-severity vulnerabilities, which the company named SMM Callout. They affect some of its laptop and embedded processors released between 2016 and 2019. These flaws could allow an attacker with physical access to machines with embedded AMD processors, or to machines previously infected with malware, to execute arbitrary code without being detected by the operating system. The company released a fix for one of the three bugs on June 8 (CVE-2020-14032). However, AMD has announced that it plans to release the patches to fix the remaining two bugs (CVE-2020-12890 and a third without CVE) by the end of June.

All the details: https://threatpost.com/amd-fixes-for-high-severity-smm-callout-flaws-upcoming/156787/

Sodinokibi/REvil Scanning for PoS Software

Symantec researchers have detected a targeted campaign by the Sodinokibi ransomware, also known as REvil, in which threat actors scan the networks of some victims for credit card or point of sale (PoS) software. The attackers use Cobalt Strike to deploy the ransomware on the victims’ systems. According to the researchers, during this campaign eight organisations were found to have been attacked with Cobalt Strike, and three of them were subsequently infected with Sodinokibi. In addition, the attackers leveraged legitimate tools such as the NetSupport remote control software to carry out this campaign. To date, it is unknown whether the attackers are targeting PoS terminals to encrypt their software or to make a profit by other means.

More info: https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/sodinokibi-ransomware-cobalt-strike-pos

VMware Fixes Critical Vulnerabilities

VMware has released security updates that fix bugs in its ESXi, Workstation and Fusion products. Among these vulnerabilities there is a critical one (tracked as CVE-2020-3962, with a CVSSv3 score of 9.3) that affects the SVGA device and could allow a threat actor to execute arbitrary code in the hypervisor from a virtual machine. To mitigate this threat, users are recommended to upgrade VMware Fusion to version 15.5.5, and VMware ESXi to versions ESXi_7.0.0-1.20.16321839, ESXi670-202004101-SG or ESXi650-202005401-SG. Since the bug lies in the 3D graphics acceleration, this component can also be disabled if the software cannot be updated immediately, thus preventing potential exploitation. The released security updates also fix 9 more vulnerabilities with CVSSv3 scores from 4.0 to 8.1.

More: https://www.vmware.com/security/advisories/VMSA-2020-0015.html

Move to the cloud with confidence supported by ElevenPaths and Check Point

Pablo Alarcón Padellano    Emilio Sánchez de Rojas Rodríguez de Zuloaga    26 June, 2020

Cloud security is mainly achieved through the implementation of appropriate policies and security technologies, just as it is for other IT environments. If you don’t know whether you are using the cloud securely, we will guide and help you to rapidly adopt and secure any cloud workload, according to your overall cloud strategy, and to mitigate cloud risks according to your defined risk appetite. That’s the goal of ElevenPaths Public Cloud Managed Security Services.

While public Cloud Service Providers (CSPs) dedicate extensive efforts to security, the challenge exists not in the security of the cloud itself, but in the policies and technologies used to secure and control your deployments in the cloud. In nearly all cases, it is the customer, not the cloud provider, who fails to manage adequately the controls used to protect an organization’s data. In fact, Gartner considers that through 2025, 99% of cloud security failures will be the customer’s fault. In addition, the teams that are implementing cloud workloads might not have the security knowledge necessary to adequately protect them.

Cloud compliance teams have traditionally relied on manual data aggregation and testing to assess IT compliance posture. The process of checking and tracking compliance status and resolving issues has been slow and laborious. In this age of heightened security risks, businesses are doing away with periodic security audits in favor of continuous compliance tracking and enforcement. The tools and controls that worked well for security and compliance in the datacenter fail in public cloud environments that demand agility and efficiency. It is no wonder that as organizations move critical workloads to the public cloud, compliance and governance remain a leading concern.

According to Check Point’s 2019 Cloud Security Report, 67% of security teams complained about a lack of visibility into their cloud infrastructure, security and compliance, while setting consistent security policies across cloud and on-premises environments and the lack of qualified security staff tied for third place among their concerns (31% each). Misconfiguration due to human error (20%) is one of the most concerning cloud data leakage vectors, and precisely the lack of experience and qualified security staff (26%) was one of the biggest barriers to wider public cloud adoption cited by respondents.

How to move with confidence into the cloud

ElevenPaths Cloud Security offering, which includes Professional Services and Cloud Managed Security Services (Cloud MSS), can support your organization by assessing your cloud infrastructure to determine if the appropriate levels of security and governance are implemented to counter these challenges. Based on the best cloud security practices, on demonstrated deep technical and consulting expertise in cloud native security solutions, and the experience gained from our Cloud Security Lab – by examining the leading cloud security market technologies and the latest features designed to keep your cloud safe – our cloud security team will guide and help you achieve optimal cloud threat prevention and establish and keep the best possible cloud security posture for your business.

Meeting cloud security goals may require rethinking and adapting to agile processes, reducing complexity, maximizing visibility, and automating compliance and governance enforcement. Our Cloud MSS service includes and offers Check Point’s CloudGuard unified cloud native security platform, providing you with a comprehensive review of your cloud infrastructure with prioritized actionable recommendations from our ElevenPaths SOC cloud security team.

ElevenPaths CloudGuard Partner Specialization Check Point

ElevenPaths is a Check Point CloudGuard Specialized Partner, a recognition based on our solid knowledge, certified technical skills and demonstrated success in the support, installation, deployment and management of Check Point’s CloudGuard solutions within our customers’ security environments, thus becoming the first CloudGuard Partner Specialist in Spain and Latin America. We provide you with a centralized visualization of all your cloud traffic, security alerts, asset configuration and security posture, along with auto-remediation.

Moreover, you can also benefit from our lessons learned, the knowledge and experience the ElevenPaths team has gained from securing our own public cloud deployments by using our own expertise and Check Point’s CloudGuard Cloud Security Posture Management solution.

ElevenPaths Cloud Security Services powered by Check Point’s CloudGuard unified cloud native security platform will provide you with:

  • Cloud Security and Compliance Posture Management: High fidelity security, visibility, control, governance and compliance across your multi-cloud assets and services. Our cloud security experts visualize and assess your cloud security posture, detect misconfigurations, model and actively enforce gold standard policies, protect against attacks and insider threats, apply cloud security intelligence for cloud intrusion detection, and ensure that your public cloud infrastructure conforms to regulatory compliance requirements and security best practices at all times. Our customers receive a comprehensive security report auditing standard and ElevenPaths’ enriched compliance and configuration checks within your public cloud instance, to find misconfigurations, provide a complete inventory of assets, prioritization of failed tests by severity and context of your environment, along with best practices and guidance for remediation;
  • Cloud Workload Protection: Seamless vulnerability assessment, full protection of modern cloud workloads, including serverless functions and containers, from code to runtime – automating security with minimal overhead. ElevenPaths cloud security team continuously scans functions to increase your security posture – providing observability, continuous assessment, and providing your security teams and developers with clear guidance on how to improve your overall cloud workload protection;
  • Cloud Network Security: Automated and elastic public cloud network security to keep assets and data protected while staying aligned to the dynamic needs of public cloud environments. We deliver consistent visibility, policy management, logging, reporting and control across all your cloud and networks, and security events monitoring from your virtual Firewall deployments;
  • Cloud Intelligence and Threat Hunting: Advanced security intelligence, including cloud intrusion detection, network traffic visualization, and cloud security monitoring and analytics. We apply cloud security intelligence and security analytics, delivering enhanced cloud security processes, rich contextualized information and decisions with contextualized visualization, intuitive querying, intrusion alerts, and notifications of policy violations, for faster and more efficient incident response.
ElevenPaths Cloud Security

With ElevenPaths Managed Cloud Security Services, organizations gain faster and more effective cloud security operations, end-to-end compliance and governance, and automated DevSecOps best practices. Our cloud security experts are focused on staying ahead of adversaries, relentlessly reducing your attack surface and obtaining total visibility of the events taking place in your environment. Jointly with our strategic partner Check Point, we automate your security posture at scale, preventing advanced threats and giving you visibility and control over any workload across any cloud, helping you move with confidence into the cloud. Together we go further.

What is a connected car and how can it improve the driving experience?

AI of Things    26 June, 2020

Every day, we are surrounded by devices that are connected to the Internet. We only need to turn our eyes in any direction, and we can see televisions, coffee machines, mobile phones and tablets, everything is connected to the Internet. But what is a connected car? In the following post, we explain what it is and how it can improve the driving experience.

What are the benefits of a connected car?

A connected car is a vehicle that has an internet connection, with which it optimizes some of its functions. Like almost all networked devices, the purpose of its connection is to help the user. In our cars, it works in exactly the same way. The automotive companies have studied the subject and have created really important use cases for the drivers.

Many of the benefits of these vehicles are associated with protecting the life of the driver. This represents a major advance in road safety. Imagine being able to protect the lives of your passengers with a connected car. Well, this is now a reality, and there are approximately 380 million connected cars on the market today.

Road death rates are really high all over the world. According to the WHO (2015), hundreds of thousands of children die every year as a result of car accidents. Faced with this sad reality, technology is constantly trying to improve vehicle safety.

What is IoV?

This connectivity, which is also known as IoV (Internet of Vehicles), merges the car with its surrounding environment. What do we mean by this? The car is connected to all modes of urban mobility: other vehicles, pedestrians and the driver themselves.

Today, a smart city can have millions of connections, and generates millions of data points. A connected car can gain access to a lot of really useful information coming from the same city around it. Traffic lights, traffic, accidents, weather, road closures, etc. Everything can be transmitted to the driver to help inform decision making and prevent accidents.

GPS is another signal within the IoV. This type of connectivity is what many already use to get around town or to get assistance on the road.

How does a connected car work?

Today’s vehicles have a lot of technology inside them. Part of this is what assists the vehicle to stay connected to its environment. This includes:

  1. Sensors
    They are in charge of collecting immediate information about the car’s environment. Driving patterns, nearby or imminent external situations and our position with respect to other vehicles are just some of the data points that can be detected.
  2. Connectivity
    A specialty of Smart devices is their ability to connect to the Internet in different ways. In this case, cars are no exception. Whether it is via Bluetooth, WiFi, WLAN or 5G networks, cars have access to constant information.
  3. Decision making
    Now, with all this information our car can react in different ways. It can warn us that we have a vehicle in close proximity when parking, or when driving on the road. They can also inform us about the need for preventive maintenance of the vehicle or can warn the driver about risk situations, etc.

What can I do if my car does not have all this connectivity?

There are alternatives available today that can turn our vehicle into a connected car. For example, Telefónica has a device that can do exactly that. This device is known as Movistar Car.

Movistar Car can become a WiFi hotspot so that you do not have to consume your cellular data, and it can offer assistance in case of accidents, GPS for the car, error alerts and much more. Without a doubt, it is another convenient feature for the operation of the IoV.

In conclusion, there are many advantages that a Smart Car offers us, and it can even save our lives. Movistar Car is a good tool to make the car connected or to improve the connection we already had. One way or another, the connected car is becoming safer and more necessary. What are you waiting for to join the IoV?

To keep up to date with Telefónica’s Internet of Things area, visit our web site or follow us on Twitter, LinkedIn and YouTube.

Keys to Implementing a 360 Corporate Digital Identity

ElevenPaths    25 June, 2020

In recent years, in parallel with the accelerated processes of corporate digital transformation, a major issue has been growing steadily in the fundamental structures of all organisations.

We are talking about the drawbacks arising from inefficient identity and access management which, on the one hand, hinder productivity and business expansion and, on the other hand, have a significant impact on the security of the organisation.

Factors causing identity management issues:

  • Unplanned technological evolution based on partial and isolated solutions.
  • Inorganic corporate growth and delays in the integration of identity directories.
  • Lack of standard lifecycle management processes and a policy of roles and authorisations.
  • Delay in implementing corrective measures as well as a corporate identity strategy.

This paper begins with a description of the issues resulting from inefficient corporate identity management. Then, a model of identity governance based on Gartner’s CARTA methodology is detailed. Finally, it provides the characteristics that a comprehensive identity access management solution must have.

Full paper available here: Keys to Implementing a 360 Corporate Digital Identity

Anti-Coronavirus Cryptography

Gonzalo Álvarez Marañón    23 June, 2020

Governments and health authorities worldwide are launching infection tracing apps. Moreover, in an unprecedented partnership, Apple and Google have joined forces to create an API that makes it easier to develop these apps. Scientists agree that the adoption of this type of app by as many citizens as possible will help curb the spread of Covid-19. According to a rough estimate, if 80% of mobile users with iOS or Android install them, this would be equivalent to 56% of the total population, a figure sufficient to contribute significantly to curbing the pandemic.

Unfortunately, since the launch of these apps was announced, all kinds of hoaxes and fake news have been spreading lies about conspiracy spying scenarios. This groundless fear can lead many people not to use the app when it is available in their country, so in this post we will explain how its cryptography works to ensure the privacy of users.

Cryptography of Apps Based on Apple’s and Google’s API

If the health authorities of your country or region have created an app, you can download it from the App Store or Play Store, depending on whether your device runs iOS or Android, respectively. Although you can have more than one app installed on your device that uses exposure notifications, only one can be active at a time.

If you choose to voluntarily install the app authorised in your region or country, it will ask your permission to collect and share random identifiers. To protect your privacy, the app uses a Cryptographically Secure Pseudorandom Number Generator to independently and randomly generate a Temporary Exposure Key (ExpKey) every 24 hours. From it, two keys are derived through an HKDF function: a Rolling Proximity Identifier Key (RPIKey), used to generate the Rolling Proximity Identifiers, and an Associated Encrypted Metadata Key (AEMKey), used to encrypt additional metadata in case you later test positive.

RPIKey = HKDF(ExpKey, NULL, UTF8(“EN-RPIK”), 16)

AEMKey = HKDF(ExpKey, NULL, UTF8(“EN-AEMK”), 16)

Specifications for Bluetooth Low Energy (BLE) assume that your device’s MAC address changes every 15-20 minutes to prevent tracking. Every time your MAC address changes, the app generates a new Rolling Proximity ID (RPID) obtained by encrypting, via AES-128 with the previous RPIKey, the value of the new 10-minute time window, Ti.

RPID = AES128(RPIKey, UTF8(“EN-RPI”) || 0x000000000000 || Ti)

On the other hand, the associated metadata are encrypted with AES128-CTR, using the AEMKey above as the key and the RPID as the IV.

AEM = AES128−CTR(AEMKey, RPID, Metadata)
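
The three formulas above can be reproduced in a few lines of Python with a recent version of the cryptography package (where the backend argument is optional). This is a hedged sketch for illustration only: the interval encoding and the example metadata bytes are our own simplifications, not necessarily the exact encoding used by the official specification.

```python
import os
import struct
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def hkdf16(key, info):
    # HKDF(key, salt = NULL, info, output length = 16), as in the formulas above.
    return HKDF(algorithm=hashes.SHA256(), length=16, salt=None, info=info).derive(key)

exp_key = os.urandom(16)                 # Temporary Exposure Key, renewed every 24 h
rpi_key = hkdf16(exp_key, b"EN-RPIK")    # key for the Rolling Proximity IDs
aem_key = hkdf16(exp_key, b"EN-AEMK")    # key for the Associated Encrypted Metadata

# Ti: the current 10-minute window, packed here as a 4-byte little-endian integer.
interval = int(time.time() // 600)
padded = b"EN-RPI" + b"\x00" * 6 + struct.pack("<I", interval)   # 16 bytes in total

# RPID = AES128(RPIKey, "EN-RPI" || 0x000000000000 || Ti): one raw AES block.
rpid = Cipher(algorithms.AES(rpi_key), modes.ECB()).encryptor().update(padded)

# AEM = AES128-CTR(AEMKey, IV = RPID, Metadata). The metadata bytes are invented here.
metadata = b"\x40\x00\x00\x00"
aem = Cipher(algorithms.AES(aem_key), modes.CTR(rpid)).encryptor().update(metadata)

print(rpid.hex(), aem.hex())
```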

Figure 1. Key Schedule for Exposure Notification (Source: Exposure Notification Cryptography Specification)

This metadata includes the date, the estimated duration of exposure and the strength of the Bluetooth signal. To further protect your privacy, the maximum estimated duration recorded is 30 minutes. The Bluetooth signal strength helps to understand the proximity of the devices. In general, the closer the devices are, the greater the recorded signal strength. Also, other devices that receive your Bluetooth identifiers will record them in a similar way and store them along with their associated metadata.

As you can see, neither the Bluetooth identifiers nor the random keys on the device include information about your location or identity. GPS is not used at all, so there is no way to track your movements.

Your terminal and the terminals around you work in the background, constantly exchanging this RPID information and encrypted metadata through BLE without the need to have the application open.

What Happens if I Test Positive for Covid-19?

If you are later diagnosed with Covid-19, your terminal uploads your last 14 Temporary Exposure Keys (ExpKey) to a server run by the health authorities of your region or country, called the Diagnosis Server. Its mission is to aggregate the diagnosis keys of all the users who have tested positive and to distribute them to all the other users who take part in the exposure notification.

All other devices on the system download these 14 keys, regenerate the RPIDs for the last 14 days, and compare them with the locally stored identifiers. If there is a match, the app will have access to the associated metadata (but not to the matched identifier), so it can notify you that a potential exposure has occurred and guide you on the steps to be taken based on health authorities’ instructions.
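
Reusing the imports and the hkdf16 helper from the previous sketch, the matching step can be pictured roughly as follows (again a simplification of the real client logic):

```python
def rpids_for_key(exp_key, intervals):
    # Re-derive every Rolling Proximity ID that this diagnosis key could have
    # produced over the 10-minute windows of its 24-hour validity period.
    rpi_key = hkdf16(exp_key, b"EN-RPIK")
    aes = Cipher(algorithms.AES(rpi_key), modes.ECB()).encryptor()
    return {aes.update(b"EN-RPI" + b"\x00" * 6 + struct.pack("<I", i)) for i in intervals}

def check_exposure(diagnosis_keys, observed_rpids):
    # diagnosis_keys: (key, interval numbers) pairs downloaded from the Diagnosis
    # Server; observed_rpids: identifiers heard locally over BLE in the last 14 days.
    for key, intervals in diagnosis_keys:
        if rpids_for_key(key, intervals) & observed_rpids:
            return True   # match found: notify the user of a potential exposure
    return False
```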

Depending on its design, the app may generate an exposure risk value that the government or health authorities could use to adapt the guidelines specifically for you in order to better control the pandemic. The exposure risk value is defined and calculated based on the associated metadata, as well as the transmission risk value that the government or health authorities may define for the matching device random keys. In no case will the exposure risk value or transmission risk value be shared with Apple or Google.

The parameters used for this transmission risk value could include information you have provided (such as the symptoms you report or whether your diagnosis has been confirmed by a test) or other data that the government or health authorities consider could affect your risk of transmission, such as your job. The information you choose to provide to the government or health authorities is collected in accordance with the terms of the app’s privacy policy and its legal obligations.

Conversely, if you remain healthy and do not test positive, your Temporary Exposure Keys will not leave your device.

Some Final Thoughts on the Apple and Google API and Your Privacy

Given the Big Brother image that Google and Apple have in our collective mind, many people will not care about the cryptography of this API and simply will not trust these two companies. To offer some reassurance, keep in mind that:

  • You decide whether or not you receive exposure notifications: This technology only works if you choose it. If you change your mind, you can turn it off at any time.
  • The Exposure Notification System does not track your location: It does not collect or use the location of your device via GPS or other means. It uses Bluetooth to detect if two devices are close to each other, without disclosing their location.
  • Neither Google, Apple nor other users can see your identity: All matches in the Exposure Notification system are processed on your device. Health authorities may request additional information from you, such as a phone number to contact you for further guidance.
  • Only health authorities can use this system: Access to technology will be granted only to health authority applications. Their applications must meet specific criteria in terms of privacy, security, and data use.
  • Apple and Google will disable the exposure notification system in each region when it is no longer required.

Success Lies in Critical Mass

Remember that these apps will only work if at least 80% of iOS and Android users install them and keep Bluetooth on when they leave home. If you disable your device’s Bluetooth connection, random Bluetooth identifiers will also stop being collected and shared with other devices. This means that the app will not be able to notify you if you have been exposed to someone with Covid-19.

Therefore, when making the decision whether to use these apps or not, each citizen must find the right balance in his or her conscience between the public good and the safeguard of privacy.

Most Software Handling Files Overlooks SmartScreen in Windows

Innovation and Laboratory Area in ElevenPaths    22 June, 2020

SmartScreen is a component of Windows Defender aimed at protecting users against potentially harmful attacks, whether in the form of links or files. When a user is browsing the Internet, the SmartScreen filter analyses the sites visited and, if the user accesses a website considered suspicious, it displays a warning message so that the user can decide whether to continue or not. But it also warns about downloaded files.

We have conducted a study on how SmartScreen works particularly in this area and have tried to understand what triggers this protection component developed by Microsoft in order to better understand its effectiveness.

How Does SmartScreen Know Which File to Analyse?

Alternate Data Streams (ADS) are a feature of the NTFS file system that allows additional data to be stored attached to a file, whether written directly as a stream or coming from another file.

Currently ADSs are also used by different products to tag files in the “:Zone.Identifier” stream so that you know when a file is external (i.e. not created on your own computer) and therefore needs to be examined by SmartScreen. Microsoft began tagging all files downloaded through Internet Explorer (at the time), and other browser developers began doing the same to take advantage of SmartScreen’s protection.

The value written to the stream, i.e. the ZoneId, can be set to anything you wish. However, SmartScreen’s behaviour depends on the values reflected in the table below:

Setting the value on any file is easy from the command line:
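
The original post shows the command in a screenshot. As a rough equivalent, here is a Python sketch that writes and reads the stream directly (on Windows, NTFS alternate data streams can be opened as "path:streamname"); the file path is hypothetical, and the zone values listed in the comments are the standard Windows URL security zones:

```python
# Write and read the Zone.Identifier alternate data stream of a file (Windows/NTFS).
# Standard zone values: 0 = local machine, 1 = local intranet, 2 = trusted sites,
# 3 = Internet, 4 = restricted sites. A ZoneId of 3 (or 4) marks the file as
# external and therefore subject to SmartScreen checks.
path = r"C:\temp\downloaded.exe"   # hypothetical file path

with open(path + ":Zone.Identifier", "w") as ads:
    ads.write("[ZoneTransfer]\nZoneId=3\n")

with open(path + ":Zone.Identifier") as ads:
    print(ads.read())
```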


Do Browsers Use This Feature to Tag Files?

We analysed the 10 most used browsers in desktop operating systems. To do this, we downloaded a file from a web page. Is the ZoneId added to the downloaded file? In most cases it is.


What about FTP, Code Versioning, Cloud Sync or File Transfer Clients?

We now examine other programs capable of downloading files. For example, most email clients do not add the ZoneId, so their downloaded attachments will not be scanned by SmartScreen.


However, many desktop instant messaging clients do.


No FTP or code versioning client adds the appropriate ZoneId, so files obtained by these means will not be analysed by SmartScreen.


Nor do cloud sync clients worry about tagging files.


The same goes for the integrated file transfer mechanisms in Windows.


At least, WinZip and the native Windows decompressor do respect this option if the archive is decompressed after the download.


Potential Evasions

After understanding how and when a file is tagged, the research led us to reflect on which process is responsible for running SmartScreen and whether there are ways to bypass it. To conduct the test, we tagged files in different interpreted languages, known by SmartScreen to be malicious, and set the ZoneId as described above, in order to find out whether a file executed in this way bypassed SmartScreen.


The result can be seen in the following table:


Perhaps the most interesting point is the difference when launching them by using the start command:


SmartScreen steps in when the file is launched from PowerShell, but not when it is launched from CMD.


Conclusions

The following table shows the percentage of programs that do NOT set the ZoneId on downloaded files so that they can be analysed by SmartScreen:


In general, we can conclude that a potential attacker would have several ways to get a malicious file onto a computer with greater chances of not being discovered by SmartScreen: by relying on the user to download executables through certain programs.

We believe that it is necessary for both developers and users to be aware of how SmartScreen works in order to take advantage of its detection capabilities and better protect the user.

The full report is available here:

Cybersecurity Weekly Briefing 13-19 June

ElevenPaths    19 June, 2020

Ripple 20 Vulnerabilities in TCP/IP Software

JSOF researchers have discovered 19 0-day vulnerabilities, collectively called Ripple 20, in the TCP/IP software library developed by Treck that would affect more than 500 vendors worldwide. The millions of devices affected by these flaws are present everywhere, including homes, hospitals, industries, nuclear power plants and the retail sector, among others. An unauthenticated remote attacker could use specially-designed network packets to cause a denial of service, leak information, or execute arbitrary code. Of the 19 vulnerabilities, there are 4 critical ones with CVSS scores over 9 (two of them, CVE-2020-11896 and CVE-2020-11897 scored 10). They would allow an attacker to remotely execute arbitrary code on the compromised devices. Some vulnerabilities have already been patched by Treck in version 6.0.1.67. However, many devices will not be patched, so it is recommended to minimize their exposure to the Internet.

More info: https://www.jsof-tech.com/ripple20/

Adobe Fixes 18 Critical Bugs

Adobe has released an out-of-band security update patch to fix 18 critical flaws that could allow attackers to execute arbitrary code on systems running vulnerable versions of Adobe After Effects, Illustrator, Premiere Pro, Premiere Rush, and Audition on Windows and MacOS devices. The vulnerabilities found in these five Adobe products were caused by out-of-bounds reading and writing, stack overflow, and memory corruption errors. Adobe also fixed a “critical” severity vulnerability (CVE-2020-9666) that allowed disclosure of information and affected Adobe Campaign Classic. Adobe advises users to update vulnerable applications to the latest versions using the Creative Cloud update mechanism in order to block attacks that might attempt to exploit unpatched installations.

More details: https://helpx.adobe.com/security.html

RCE Vulnerability Analysis on Microsoft SharePoint Server

Zero Day Initiative researchers have published an analysis of CVE-2020-1181, a remote code execution vulnerability in Microsoft SharePoint Server fixed this month. The bug would allow an authenticated user to execute arbitrary .NET code on the compromised server. For the attack to be successful, the attacker needs “add and customize pages” permissions on the target SharePoint site. However, the default configuration of SharePoint servers allows authenticated users to perform this function. Therefore, the threat actor could create the malicious site directly from the SharePoint web editor and it would be considered a legitimate site.

More: https://www.zerodayinitiative.com/blog/2020/6/16/cve-2020-1181-sharepoint-remote-code-execution-through-web-parts

AWS Shield Mitigates the Greatest DDoS Attack to Date

According to the AWS Shield Threat Landscape report, this Amazon service has managed to mitigate the biggest DDoS attack ever recorded, with a volume of 2.3 Tbps. The target of this attack is unknown, but it has been detailed that the incident was carried out by abusing CLDAP (Connection-less Lightweight Directory Access Protocol) servers and was ongoing for three days. This protocol is an alternative to LDAP and is used to connect to, search and modify shared directories on the Internet. It is also well documented that CLDAP servers amplify DDoS traffic by 56 to 70 times its initial size, making it a highly sought-after protocol for the DDoS-for-hire services offered to threat actors on the market. It is worth mentioning that the previous record for the highest-volume DDoS attack was set in March 2018, with a total of 1.7 Tbps.

More information: https://aws-shield-tlr.s3.amazonaws.com/2020-Q1_AWS_Shield_TLR.pdf

Vulnerability in Pulse Secure Client

Red Timmy Security researchers have discovered a privilege escalation vulnerability in the Pulse Secure Client for Windows systems. By exploiting this flaw, threat actors could abuse PulseSecureService.exe to run an arbitrary Microsoft Installer file (.msi) with SYSTEM privileges, granting them admin permissions. The vulnerability is present in the dsInstallerService component, which gives users without admin privileges the ability to install new components or update them using the installers provided by Pulse Secure. This bug has been successfully tested in versions prior to 9.1.6.

More: https://www.redtimmy.com/privilege-escalation/pulse-secure-client-for-windows-9-1-6-toctou-privilege-escalation-cve-2020-13162/

Popular Docker Images under Security Scrutiny

Juan Elosua Tomé    borjapintoscastroeng    16 June, 2020

Docker is a widely-used technology in development to quickly deploy self-contained applications independent of the operating system used. It is very useful for developers and administrators.

For developers, it provides agility in creating and testing complex technological architectures and their integration. Another crucial aspect of Docker’s success among developers is the certainty that their code will work on any other machine running Docker, thus eliminating the classic deployment issues on the target machine due to different environment configurations, dependencies, base software versions, etc.

It also makes it easier for administrators to maintain machines and allocate resources, because Docker containers are much lighter than virtual machines. A single image is all that is needed to deploy as many containers as required. But how secure are these images?

From the TEGRA cybersecurity centre in Galicia, we have carried out a study on the most popular Docker images. To do this, we have used the DockerHub platform, the official image repository managed by Docker, so that any user can download an image instead of building one themselves from scratch. For example, this would be an image from the mysql database.

Popular Images with More Vulnerabilities

Firstly, we got a list of the 100 most downloaded images from DockerHub, as of August 8, 2019.
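
As an illustration of how such a ranking can be pulled programmatically, here is a hedged Python sketch against the unofficial Docker Hub v2 API. The endpoint, the pull_count field and the pagination behaviour are assumptions on our part, and the original study may well have used a different method:

```python
import requests

# Query the (unofficial) Docker Hub v2 API for the official "library" images and
# rank them by pull count, as an approximation of the "most downloaded" list.
URL = "https://hub.docker.com/v2/repositories/library/"
images, url = [], URL + "?page_size=100"
while url and len(images) < 200:
    data = requests.get(url, timeout=30).json()
    images.extend(data.get("results", []))
    url = data.get("next")           # follow pagination, if any

top = sorted(images, key=lambda r: r.get("pull_count", 0), reverse=True)[:100]
for repo in top[:10]:
    print(f'{repo["name"]:25} {repo["pull_count"]:,} pulls')
```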

Afterwards, we have analysed each image using Dagda, a tool that its creator, Elías Grande Rubio, defines as: “a Docker security suite that allows both the static analysis of vulnerabilities of software components in Docker images and the analysis of dependencies of different application frameworks”.

Below are the 10 Docker images (of the 100 analysed) where Dagda found the greatest number of vulnerabilities:

Docker Image            No. of Downloads   Vulnerabilities
docker-dev:1            1M+                696
percona:latest          10M+               564
logstash:7.1.0          10M+               519
crate:latest            10M+               464
elasticsearch:7.1.0     10M+               444
kibana:7.1.0            10M+               440
centos:latest           10M+               434
java:latest             10M+               172
ros:latest              5M+                134
buildpack-deps:latest   10M+               128

How is it possible that there are so many vulnerabilities in the most popular Docker images?

How Docker Works

If we think about how Docker works, we see that it is built in static layers.

Therefore, we see that the vulnerabilities of the previous layers are present in the images built based on them.

We can assume that the developers, on the day they created each image, updated it as much as possible. However, the images remain anchored to the moment they were built and, as time passes, new bugs, vulnerabilities and exploits are discovered.

Details of the Vulnerabilities Found

Then, already aware that Docker images work by layers and inheritance (even of vulnerabilities), we dissected the vulnerabilities found in depth. To do this, we obtained the dockerfiles corresponding to their construction and observed how the analysed images are made up. In the following figure we can see the inheritance scheme of the images with the most detected vulnerabilities:

It should be pointed out that of the 10 most vulnerable popular images we have analysed, most (6) inherit from centOS7. In the following sections we will analyse this in detail.

Detailed Analysis of CentOS-Based Images

Let us discuss the source of vulnerabilities in centOS-based images. For each image, we subtract the centOS-based vulnerabilities, resulting in the following table:

Docker Image          Vulnerabilities
centos:7              434
percona:latest        130
logstash:7.1.0        85
crate:latest          30
elasticsearch:7.1.0   10
kibana:7.1.0          6

Now the origin of the vulnerabilities is clearer: which ones are specific to each image and which ones are inherited from the base operating system.

Tools

If we use Docker in our technology stack, it is important to have tools that help us assess the security of the images we use or build, either with free solutions such as Dagda, Anchore, Clair, Dockscan, etc., or other paid solutions such as Docker Trusted Registry or Twistlock.

One option to consider in these tools is the real-time container monitoring functionality. This dynamic monitoring scans all events occurring in the running container and, if there is any suspicious activity, it triggers an alert.

Bear in mind that Docker images usually have a very specific activity for which they have been built. Therefore, if an administrator tried to install new software inside a running container, it would be anomalous behaviour. For example, in a container running WordPress, it would be very strange for an administrator to install new software.

To show how this works, we enabled real-time monitoring for a base image of ubuntu:18.04 and installed the git package. In the figure we can see how dynamic monitoring detects this behaviour and triggers the corresponding warnings.

In short, if we work with Docker, the use of container analysis tools can help us to have a security approach within our development lifecycle. The tools will show us the existing vulnerabilities so that we can analyse more thoroughly if the image can really be compromised or not, both in a static way and with a dynamic monitoring.

In any case, we now understand that the inherent nature of Docker images makes them likely to be anchored in time, so we must assess the impact of such vulnerabilities. That a vulnerability exists is one thing; that it exists and can be exploited by an attacker is another; and something even more complicated (we hope) is that an attacker can exploit it remotely.

However, in the Docker world there are examples of vulnerabilities being exploited in the wild, like the 2014 attack on dockerised ElasticSearch exploiting the CVE-2014-3120 vulnerability, one of the first publicly recognised attacks on Docker images. Other examples would be the well-known Heartbleed vulnerability (CVE-2014-0160) in the OpenSSL library, or Shellshock (CVE-2014-6271) in GNU Bash. These libraries used to be installed in many base images; in such cases, even if the deployed application were secure, it would still carry a remotely exploitable vulnerability by using one of them.

Should These Images Be Used? Is the Risk Greater or Lesser?

Like all tools and software, these images should be used with caution. It is possible that in development, in continuous integration or while testing an application against a database, the vulnerabilities may not matter, as long as we only want to test the functionality and those containers will be destroyed at the end. Even so, it is necessary to monitor the environment and practice defence in depth. Recommendations could be:

  • For production use, it should be verified that the application does not make use of the vulnerable libraries and that the vulnerability exploit does not affect the nature of the application itself in order to ensure that future updates to the application do not expose us.
  • The same recommendation should apply to Docker as when using any third-party software: we should only use containers from reliable sources. As an example of the risks of not following this recommendation we can see this article describing the use of Dockerhub with malicious images used for cryptomining.
  • Within a security-conscious development cycle, managing vulnerabilities and versions of all components of the product or software is a key task.
  • Minimum exposure point. An advantage of using Docker is that you can build images containing only the libraries needed to work. For example, you could remove the shell so that no attacker could perform actions through it, something that would be very complicated on a real server. These images are called distroless: they do not contain any shells, package managers or other programs that a standard distribution is expected to contain, resulting in smaller, less vulnerable images.

Conclusions

As we have seen, with the emergence of technologies such as Docker, aimed at facilitating deployments by packaging complete application dependencies, the defined boundary between the responsibilities of developers and those of system administrators within a company becomes blurred. The summary of its “dangers” could be:

  • Docker images are built on static layers and, by their nature, these are anchored to the moment they were built, so images are more prone to becoming out of date (especially those with a significant number of layers).
  • Docker images are usually created by developers and other profiles who, not being used to system administration tasks, may not take into account the security measures required for their proper update, configuration, and maintenance.

In summary, it is necessary to have joint processes, tools and methodologies between both profiles that make it possible for the productivity gained with Docker not to generate, on the other hand, a security issue or a lack of control of the risks we are exposed to in our systems.


TEGRA cybersecurity centre is part of the mixed unit in cybersecurity research known as IRMAS (Information Rights Management Advanced Systems), which is co-financed by the European Union, within the framework of the 2014-2020 Galicia FEDER Operational Programme to promote technological development, innovation and high-quality research.
