Cyber Security Weekly Briefing 16-22 October

Telefónica Tech    22 October, 2021

Zerodium interested in acquiring 0-days in Windows VPN software

Information security company Zerodium has announced its willingness to purchase 0-day vulnerabilities targeting VPN software for Windows systems: ExpressVPN, NordVPN and Surfshark. The company has shown interest in exploits that could reveal users’ personal information, leak IP addresses or allow remote code execution. It is worth remembering that Zerodium is known for buying 0-days in different applications, which it then sells to law enforcement and government agencies, so the target of these new acquisitions is easily identifiable. However, this has generated some controversy, as reported by The Record, since many users rely on VPN apps to preserve their privacy in countries with oppressive regimes, and it is not known who Zerodium’s end customers are. So far, none of the VPN providers have commented on the matter.

More details: https://twitter.com/Zerodium/status/1450528730678444038

LightBasin: a threat to telecoms companies

Researchers at CrowdStrike have published a new analysis of the threat actor known as LightBasin or UNC1945, which has been targeting companies in the telecommunications sector since 2016. Linked to Chinese interests, LightBasin often targets Linux or Solaris systems in its operations, as these are closely tied to its preferred sector. CrowdStrike has observed new Tactics, Techniques and Procedures (TTPs) associated with this group. For example, LightBasin has reportedly taken advantage of external DNS (eDNS) servers to propagate its operations, and of TinyShell, an open-source Unix backdoor, combined with SGSN emulation software to channel traffic from the C2 server. It is worth noting that eDNS is a fundamental part of the General Packet Radio Service (GPRS) network used for roaming between different mobile operators. The researchers highlight the group’s extensive knowledge of networks and protocols, claiming that LightBasin has compromised at least thirteen telecommunications companies since 2019.

All details: https://www.crowdstrike.com/blog/an-analysis-of-lightbasin-telecommunications-attacks/

RedLine Stealer: main source of data from two Dark Web markets

Recorded Future’s cybersecurity research division, Insikt Group, has published a report identifying the RedLine Stealer malware as the primary source of stolen credentials traded on two Dark Web markets: Amigos Market and Russian Market. RedLine Stealer is an infostealer able to collect credentials from FTP clients, web browser logins and mail applications, as well as extract authentication cookies and card numbers stored in the browsers of infected devices. During the investigation, the Insikt Group team detected the simultaneous publication of identical listings on both marketplaces, containing the same information stolen from victims and far exceeding the contributions of other malware on both forums. In addition, it is worth noting that, although RedLine Stealer was developed by the threat actor REDGlade, several versions similar to the original are currently being distributed and have led to its further spread.

More information: https://go.recordedfuture.com/hubfs/reports/mtp-2021-1014.pdf

Manageability: the importance of a malleable system in a Cloud Native world

Gonzalo Fernández Rodríguez    18 October, 2021

We recently published the post Cloud Chestnuts in The Cloud, Or What It Means That My Software Is Cloud Native, in which we tried to explain what the term Cloud Native means and what attributes our applications/systems must have in order to really be considered Cloud Native.

We discussed today’s applications’ need for scalability, derived from their strong demand for resources, a need that Cloud Computing answered by offering the necessary (almost infinite) resources on demand, instantly, and paying only for what is used.

However, any distributed application/system carries an associated complexity, partly inherited from the interdependence of the different subsystems that make it up (storage, network, computation, databases, business services/microservices, etc.). Inevitably, hardware will fail from time to time, the network will suffer outages, services may crash and become unresponsive, and so on. In this scenario, it is not enough to move applications from “on-prem” environments to “cloud” environments to turn them into Cloud Native applications: they have to be able to survive in this type of environment, recover and continue to function without users perceiving these problems. They therefore have to be designed to withstand failures and, ultimately, to be more reliable than the infrastructure on which they run; in other words, they have to be resilient.

In addition to these possible failures (hardware, network, services, etc.), there are other factors, such as changes in the business, fluctuations in demand for our services or execution in different environments, that force us to act on our applications, either to incorporate new functionality or to keep them working correctly and without interruption, as users expect.

How easy or difficult it is for our applications to change their behaviour, whether to enable/disable some functionality, to deploy/redeploy more service nodes without downtime, or to fail over from one or more resources that have stopped working to others that are still available, is what we are going to talk about in this post.

What do we mean when we say that our application is malleable?

In our previous article we said that one of the attributes of a Cloud Native application, according to the CNCF, is being “manageable”. We talk about manageable, but the term malleable is probably more accurate.

When we say that a material is malleable, we mean that we can change its shape without breaking it, and without casting, melting or any other industrial or chemical process we can think of. Looking for an analogy in the software world, we could say that an application is malleable when we can modify its behaviour from the outside, without touching its code and without having to stop the application, that is to say, “without breaking it”, without any user perceiving that something has stopped working, even momentarily. It is important to highlight the difference between our application being malleable or “manageable” and our application being maintainable. In the latter case, we would be referring to the ease with which we could change the behaviour or evolve our application from the inside, that is, by making changes to the code: something equally important, but not the subject of this article.

To better understand what we mean, let’s imagine that we have a running application or system that is providing a certain service to N clients, which we need to modify for some reason, for example:

  • Customer demand is growing/shrinking, and we need to increase/decrease the number of certain system resources. Note that we are not talking about how to solve scalability (let’s assume our service has been designed stateless and is ready to scale horizontally without problems).
  • We have developed a new version of our application, we have tested it in our test environments and we want to run it in a production environment where the code is exactly the same, but the resources such as network, databases, storage, etc. are different.
  • We have detected a problem in the configuration of a component that causes the service to behave incorrectly and we need to modify that configuration.

As mentioned previously, the key to achieving this does not lie in moving our applications to a cloud environment; it lies in following a series of practices in the design of their architecture so that we can not only modify their behaviour, but also do so in a simple and agile way, ensuring that users perceive the system as working correctly at all times.

But how do we make our application malleable?

Observable

Well, if this is about making changes to a system that is providing service in a production environment, first of all we have to know when those changes are necessary. In the post Observability: what is it, what does it offer, our colleague Dani Pous gave us an introduction to the importance of our applications being observable: thanks to this, we know what is happening at all times and can make decisions based on the information gathered by our metrics, logs and traces.

If we want our application to be malleable, it is essential to know when to make those decisions. We therefore need to spend time designing the alarms that trigger the corresponding automatic mechanisms that change the behaviour of our system (for example, detecting a DB cluster that is not responding so that we can automatically fail over to another one), as well as the dashboards that give us the information needed to make a manual configuration change and update our application without restarting it (for example, increasing a timeout in a configuration file to avoid rejecting client requests).
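
To make this concrete, here is a minimal Python sketch of the second mechanism (the file name and key are hypothetical, and this is an illustration rather than production code): a service that re-reads a timeout from a configuration file whenever the file changes on disk, so the value can be updated without restarting the process.

    import json
    import os

    CONFIG_PATH = "service.json"  # hypothetical file, e.g. {"request_timeout": 5}

    _last_mtime = None
    _config = {"request_timeout": 5}  # safe defaults if the file is missing

    def current_config():
        """Re-read the configuration only when the file has changed on disk."""
        global _last_mtime, _config
        try:
            mtime = os.path.getmtime(CONFIG_PATH)
            if mtime != _last_mtime:
                with open(CONFIG_PATH) as f:
                    _config = json.load(f)
                _last_mtime = mtime
        except (OSError, ValueError):
            pass  # keep the last known good configuration
        return _config

    # Inside the request loop, the timeout always reflects the latest file:
    timeout = current_config().get("request_timeout", 5)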

Configurable

Secondly, our application needs some mechanism through which we can change its behaviour externally. We must try to identify which parts of the application have to be parameterisable (DB connection strings, URLs for invoking web services, threads, memory or active CPUs for performance, etc.). This configuration or parameterisation is something that can change between environments (development, integration, production, etc.).

Most readers will have heard of The Twelve-Factor App; for those who do not know it, it is a methodology created by several Heroku developers that establishes twelve principles for building cloud applications, providing benefits such as portability, parity between development and production environments and greater ease of scaling, among others.

Environment Variables

One of these twelve principles concerns application configuration and states that an application’s code stays the same across the different environments in which it runs, while its configuration varies, so the configuration must be kept separate from the code. It is also important that the configuration is versioned in a version control system, to make it easy to restore a specific configuration if necessary.

Environment variables have the advantage of being easy to implement and to change between deployments without touching the code, and they are supported by every language and operating system. However, they are not without drawbacks: environment variables define a global state shared with many other variables, so we must be careful when defining them so that they do not step on each other, and they cannot hold configuration more complex than a text string. In any case, they are a very suitable solution for representing environment-level configuration (Development, Staging, Production, etc.).
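
As an illustrative Python sketch (the variable names are made up): every value arrives as a plain string and must be parsed by the application, which is precisely the limitation mentioned above.

    import os

    # Hypothetical parameters resolved from the environment, with safe defaults.
    db_url = os.environ.get("DB_URL", "postgresql://localhost:5432/app")
    timeout = int(os.environ.get("HTTP_TIMEOUT", "5"))  # strings must be parsed
    debug = os.environ.get("DEBUG", "false").lower() == "true"

    print(f"Connecting to {db_url} (timeout={timeout}s, debug={debug})")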

Command Line Arguments

Another option for configuring simple applications, one that requires no files at all, is command line arguments. Configuration provided on the command line when starting an application is well suited to interaction with scripts. However, when the configuration options grow complicated, command line arguments stop being manageable: they become overcomplicated and their format is inflexible.
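
A minimal sketch with Python’s standard argparse module (the flags are illustrative): perfectly manageable at this size, and increasingly unwieldy as the options multiply.

    import argparse

    # Hypothetical flags for a simple service.
    parser = argparse.ArgumentParser(description="Start the service")
    parser.add_argument("--port", type=int, default=8080, help="listening port")
    parser.add_argument("--db-url", default="postgresql://localhost:5432/app")
    parser.add_argument("--debug", action="store_true")
    args = parser.parse_args()

    print(f"Starting on port {args.port} (db={args.db_url}, debug={args.debug})")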

Configuration files

Configuration files, on the other hand, also offer many advantages, especially for really complex applications, among other things because they allow us to represent richer structures that group related parts of our application’s logic. However, when using configuration files it can be hard to keep the configuration of all the nodes in a cluster consistent at all times, since we have to distribute the configuration to each node. This problem can be alleviated by incorporating a solution such as etcd or Consul, which offers a distributed key-value store.
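
By way of example, a hypothetical JSON file can group related settings into nested structures that neither environment variables nor command line flags express comfortably:

    import json

    # service.json (hypothetical contents):
    # {
    #   "database": {"url": "postgresql://db:5432/app", "pool_size": 10},
    #   "http": {"port": 8080, "timeout_seconds": 5}
    # }
    with open("service.json") as f:
        config = json.load(f)

    pool = config["database"]["pool_size"]  # grouped, typed configuration
    port = config["http"]["port"]
    print(f"pool_size={pool}, port={port}")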

Deployments and reconfigurations without downtime

Last but not least, we need an automated deployment system that allows us, among other things, to:

  • Update all the necessary nodes of a system to the new configuration. The times when one person in the operations team would update the configuration of the different nodes that served a system or component of a system are long gone. Today there are services that support millions of users and have thousands of active nodes. Can anyone imagine how to update thousands of nodes if not automatically?
  • Manage the scaling/descaling of the components of a system/application in a progressive way without the need to stop the service. This includes tasks such as infrastructure deployment, software deployment, balancer configuration, etc.

Fortunately, the widespread use of containers and of orchestrators such as Kubernetes in Cloud Native applications greatly reduces the problem of configuration distribution, as these platforms offer specialised mechanisms for it, such as Kubernetes’ “ConfigMap”, which lets you manage environment variables and command-line parameters as well as configuration files.
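
From inside the container the ConfigMap is invisible as such: Kubernetes surfaces its entries either as environment variables or as files mounted in a volume, and the application reads them exactly as in the previous examples. A minimal sketch (paths and names are illustrative):

    import os

    # A ConfigMap entry injected as an environment variable...
    log_level = os.environ.get("LOG_LEVEL", "INFO")

    # ...and another projected as a file through a mounted volume
    # (e.g. the ConfigMap key "timeout" mounted under /etc/config).
    timeout = 5
    try:
        with open("/etc/config/timeout") as f:
            timeout = int(f.read().strip())
    except OSError:
        pass  # fall back to the default when running outside the cluster

    print(f"log_level={log_level}, timeout={timeout}s")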

Kubernetes also facilitates the deployment of new versions through what is known as “Rolling Updates”. This technique lets us progressively update the different instances of our application, hosting the new versions on nodes with available resources while removing instances of the previous version, thus achieving a deployment with the coveted “Zero Downtime”.
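
As a sketch of how such an update can be triggered programmatically (assuming the official Kubernetes Python client and a hypothetical Deployment called “web”): patching the image in the pod template starts a rolling update when the strategy is RollingUpdate, the Kubernetes default.

    from kubernetes import client, config

    config.load_kube_config()  # or load_incluster_config() inside a pod
    apps = client.AppsV1Api()

    # Changing the pod template triggers a progressive replacement of instances.
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:2.0"}
    ]}}}}
    apps.patch_namespaced_deployment(name="web", namespace="default", body=patch)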

In all cases we must work with the concept of immutability, whereby both the container images deployed in our application and the configuration objects are immutable. This way, once the application is deployed, any change requires replacing the container with a new image, or the configuration file or object (for example, a Kubernetes ConfigMap) with a new version.

Conclusion 

Cloud Native applications use architectures based on microservices, which makes it easier to develop and evolve applications (independent teams, functional decoupling, technology independence, etc.).

The use of containers for deploying microservices (e.g., Docker) and the increasingly widespread container orchestrators (e.g., K8s) facilitate the scaling and de-scaling of applications and the management of thousands of nodes within an application/service.

However, all these facilities are not without problems: the large number of nodes that may be serving a Cloud Native application multiplies the number of possible failures, so we must design our systems with the mindset that they will fail.

Additionally, we need to be able to distribute new versions (both code and configuration) across a huge number of instances without users perceiving a loss of service. The sheer number of machines, services, etc. managed within our applications makes it unfeasible for these changes to be manual and also requires us to work with the concept of immutability to ensure that each change is associated with a version that can be restored at any time.

References

https://livebook.manning.com/book/cloud-native-spring-in-action/chapter-1/v-1/87

https://docs.microsoft.com/en-us/dotnet/architecture/cloud-native/definition

https://12factor.net/config

Cyber Security Weekly Briefing 9-15 October

Telefónica Tech    15 October, 2021

Microsoft Security Bulletin

Microsoft has published its security bulletin for the month of October, fixing a total of 81 bugs in its software, including 4 0-day vulnerabilities. Of the 81 bugs, 3 have been rated critical. The first 0-day, identified as CVE-2021-40449 and with a CVSS of 7.8, is an elevation of privilege flaw that has been exploited in campaigns against IT companies and military and diplomatic entities. The second 0-day (CVE-2021-40469, CVSS 7.2) is a remote code execution vulnerability in Windows DNS Server. The third (CVE-2021-41335, CVSS 7.8) is an elevation of privilege bug in the Windows kernel. The last one, identified as CVE-2021-41338 and with a CVSS of 5.5, is a security feature bypass vulnerability in the Windows AppContainer firewall. The 3 critical bugs fixed correspond to remote code execution vulnerabilities, two of them in Windows Hyper-V (CVE-2021-38672 and CVE-2021-40461) and the remaining one (CVE-2021-40486) in Microsoft Word. It is recommended to apply the security updates as soon as possible.

More info: https://msrc.microsoft.com/update-guide/releaseNote/2021-Oct

Vulnerability in the OpenSea NFT platform allows cryptocurrency wallets to be stolen

Check Point researchers have found that malicious actors could empty cryptocurrency wallets through malicious NFTs on OpenSea, one of the largest digital marketplaces for buying and selling crypto assets. The platform, active since 2018, hosts a total of 24 million NFTs (non-fungible tokens) and reached a trading volume of up to $3.4 billion in August 2021 alone. The attack method consists of creating an NFT in which the threat actor includes a malicious payload and then distributing it to victims. Several users reported that their wallets were emptied after receiving supposed gifts on the OpenSea marketplace, a marketing tactic known as “airdropping” used to promote new virtual assets. Check Point found that the platform allows files with multiple extensions to be uploaded (JPG, PNG, GIF, SVG, MP4, WEBM, MP3, WAV, OGG, GLB, GLTF), so they ran a test to reproduce the attack scenario, uploading an SVG with a malicious payload that could be used to empty the wallets of potential victims. The reported bugs have now been fixed.

All the details: https://research.checkpoint.com/2021/check-point-research-prevents-theft-of-crypto-wallets-on-opensea-the-worlds-largest-nft-marketplace/

Cyber-attacks against water treatment systems

The US Cybersecurity and Infrastructure Security Agency (CISA) has issued a new alert concerning cyber-attacks against drinking water and wastewater processing facilities. The activity observed includes attempts to compromise the integrity of systems through unauthorised access by both known and unknown threat actors. The advisory also points to known weaknesses of entities in this sector, such as their susceptibility to spear-phishing attacks, the use of outdated and unsupported software and control systems, and the exploitation of remote access systems. Over the course of 2021 there have been several relevant incidents that fit this pattern, such as the identification in August of ransomware samples belonging to the Ghost and ZuCaNo families in the SCADA systems of plants in California, Nevada and Maine. Similarly, it is worth recalling the incident that occurred in February at a water treatment plant in Florida, where a threat actor managed to modify the volumes of chemicals poured into drinking water tanks.

Learn more: https://us-cert.cisa.gov/ncas/alerts/aa21-287a

Google warnings for government-backed attacks increase by 33%

Google’s Threat Analysis Group (TAG) has published information on the number of warnings generated by its “Security warnings for suspected state-sponsored attacks” alert system, launched in 2012. In the course of 2021 the system has sent more than 50,000 warnings to users, an increase of 33% compared to the same period in 2020. According to Google, this service monitors more than 270 attacker groups in 50 different countries, generating warnings when it detects phishing attempts, malware distribution or brute force attacks originating from the infrastructure of government-backed threat actors known as Privateers. During 2021, Google highlights two threat actors that stand out above the rest, based on the impact of their campaigns targeting activists, journalists, government officials and workers in national security structures: APT28 or “Fancy Bear”, backed by Russia, and APT35 or “Charming Kitten”, an Iranian threat actor active since at least 2014. In addition, the publication points out that receiving such an alert means that the account is considered a “target” and does not necessarily mean that it has been compromised, so users are encouraged to sign up for this service or, failing that, enable two-factor authentication on their accounts.

All the info: https://blog.google/threat-analysis-group/countering-threats-iran/

TrickBot Gang duplicates and diversifies infection efforts

IBM researchers have tracked the activity of the ITG23 group, also known as the TrickBot Gang and Wizard Spider, after observing an expansion of the distribution channels used to infect organisations and businesses with TrickBot and BazarLoader, samples used to orchestrate targeted ransomware and extortion attacks. IBM’s analysis suggests that this increase may have contributed to the spike in Conti ransomware activity reported by CISA last September. Researchers have also associated ITG23 with two groups affiliated with malware distribution, Hive0106 (also known as TA551) and Hive0107. These are characterised by attacks aimed at infecting corporate networks with malware, using techniques such as email thread hijacking, fake customer support response forms and underground call centres employed in BazarCall campaigns. These TTPs are reportedly leading to an increase in infection attempts by these groups.

More: https://securityintelligence.com/posts/trickbot-gang-doubles-down-enterprise-infection/

Towards a smarter supply chain

José Luis Núñez Díaz    14 October, 2021

One of the recurring use cases that is always mentioned when talking about Blockchain is its application in supply chains. In fact, back in 2018, at Telefónica we were pioneers in implementing a Blockchain-based solution to manage our supply chain in Brazil. Three years later, this Monday, we introduced Telefónica Tech’s supply chain solutions to the market.

These are solutions designed to meet, with the strongest guarantees, the major challenges that arise in supply chains. Among them is an innovative platform developed so that our customers can benefit from our experience of recent years in the adoption of Blockchain. The platform, built in collaboration with IBM and EY, enables traceability of any type of material or asset. More importantly, it promotes efficiencies by establishing a collaborative ecosystem between all actors in the supply chain.

Our experience: revolutionising our supply chain

The goal of any supply chain is to secure the company’s operations. Achieving this becomes more challenging every day due to increasing globalisation and complexity. The product, from raw materials to distribution and sale or installation, passes through hundreds of companies and thousands of hands. Logistics extends across a very complex network of participants spread over five continents.

Telefónica is no exception. We manage one of the largest supply chains in the world. Some numbers help us understand its dimensions. We have operations in 12 geographies, from Europe to Latin America. We ensure the maintenance and deployment of our industrial plant, with more than 70,000 sites and 50,000 exchanges. We have over 16,000 shops, twice as many as giants such as Inditex, and more shops in Latin America than Starbucks or McDonald’s.

More than three years ago, we started to establish global processes to make these operations more efficient, which we can summarise in 3 objectives:

  • Give end-to-end, real-time visibility and traceability. We needed data on which all participants could rely and make joint decisions. We are talking about knowing the origin of each component, the person responsible for it and its location at all times. In short, the maximum detail of information that allows us to identify and prevent incidents of any kind and improve planning.
  • Increase the level of automation and control. It was essential to minimise human interaction. Include technology to automate warehouse processes. All data collected through IoT devices is accessible in real time. In this way, decisions can be made and action can be taken on the product anywhere in the chain.
  • Add analysis and alerting capabilities. Increasing automation means multiplying data entry points. Increasing the granularity of information means generating a huge amount of data in real time. Analysing it allows business insights to be obtained and, above all, allows the different parts of the process to be optimised on the basis of the information generated in other stages.

To be able to carry out this transformation, we needed a platform that would allow all actors in the chain to share information in real time. One that would guarantee all participants that we could trust the data and make decisions on it without questioning it. In short, a platform that fulfilled all the advantages that Blockchain promised at the time.

Today, what started as a pilot to demonstrate the value of the technology is the cornerstone of these global processes. We are now managing the entire manufacturing and design flow of our customer equipment (decoders, routers, modems, etc.) in Brazil. We can say without hesitation that we run one of the largest Blockchain projects in the world applied to the supply chain. The numbers back this up:

  • We distribute more than 15 million pieces of equipment per year, 25% of which are semi-new units that have been recalled and reconditioned.
  • Each box has more than 1,000 serialised and non-serialised components: in total, more than 15 billion components, plus their grouping into boxes, pallets or containers.
  • More than 100 partner companies with some 30,000 field technicians are involved in the chain.
  • The updates involve more than 200,000 transactions per day, more than 70 million updates per year.

And of course, significant efficiencies have been achieved in business magnitudes. Material supply times have been made 60% more flexible and inventory has been reduced by 50%. The return on investment has been 10 times the cost of the project, and this cost has been recovered in less than 1 year.

Our virtuous circle

But beyond the data, what is really valuable is the accumulated experience. The lessons learned over the years have allowed us to tailor a made-to-measure solution to the main challenges of the supply chain. Each update that went into production revealed a new improvement that was incorporated in the next iteration. It has been a continuous process in which we have learnt how to bring such a project to fruition. We can sum it up in 4 headlines:

  1. The most important thing is the use case. You have to speak the language of the business and understand its needs: what use case do we want to cover? It has never been about setting up a Blockchain project, but about using Blockchain in a supply chain project. No more than 30% of the effort has gone into implementing the technology or developing Smart Contracts. The important thing is to understand the business needs and the relationships between the different actors.
  2. A Blockchain project makes sense when we are dealing with an ecosystem. A network of data exchange between various actors. It is about implementing with technology the relationships that exist between the different actors. It is necessary to understand the value that the project will give to each participant. The value of the project is the sum of the individual values. Only if each participant perceives value in the solution will it add up. It is a matter of creating this ecosystem of trust between all the participants.
  3. The value of the case lies in the value of the data that is exchanged or presented. We make the data irrefutable. You have to make sure that the data you enter is true and accurate. That is why the intersection with the Internet of Things is so powerful. By collecting the data as close to the source as possible we increase its reliability.
  4. Data accumulates and multiplies and is accessible to all participants. No one has to expose or compromise each other’s information systems or deal with costly multilateral integrations. Data is no longer requested to be processed by others. We pool all the data from all stages that until now resided in different data sources. The application of Big Data and Artificial Intelligence techniques exponentially multiplies the possibilities of optimising the chain. We can identify variables in the entire process that one of the parties was unaware of and that directly affect its performance or contribution to the entire chain.

From project to product: TrustOS

Telefónica Tech has gone one step further: can we draw on these learnings to build a product that helps our clients meet these same challenges? The answer is yes, and not just on those learnings: while continuing to optimise our supply chain, we have tackled other Blockchain projects with their corresponding lessons learned. With all this, we designed TrustOS, a modular solution that allows customers to enjoy the benefits of immutability and transparency that Blockchain brings, easily and quickly. It is not about undertaking large projects like ours to redesign the supply chain around the technology; it is about understanding a customer’s specific need and how the technology enables them to solve it.

TrustOS logical architecture

The main challenge in this type of project is how to incorporate Blockchain into existing processes. If we focus on the supply chain, all companies have already digitised it to a greater or lesser extent, even with integrations between different systems (SAP purchasing, warehouse management, etc.). Thanks to TrustOS, the technology developed by Telefónica, we connect pre-existing systems with a complementary layer that offers traceability. Blockchain does not replace pre-existing systems; it complements them and adds capabilities at a low integration cost.

But Blockchain is not only about traceability. Beyond it, most of the projects we have developed over the years replicated the same functional modules: solutions that needed to certify information, reconcile disparate data sources or create markets on ecosystems through the tokenisation of assets. These, together with traceability, are the four main modules of TrustOS, each focused on serving one of these groups of solutions.

In addition, many customers just need a fast and usable way to verify information recorded on blockchain. Therefore, as part of TrustOS we have also developed simple verification interfaces available to anyone. A user can “read” or “write” to blockchain from their mobile phone or email client; for example, they can certify on the spot the image captured by their camera or the report they have just received by email.

With this whole suite of solutions we aim to universalise blockchain and make it accessible to everyone, without users having to worry about the fragmentation of available technologies, consortia, or public and private networks. TrustOS simplifies all this for customers, making the technology completely transparent.

And as part of the portfolio, TrustOS also incorporates vertical solutions. These are products focused on solving a specific need. Among these solutions we can find products for contract tracking, document certification or brand protection. And also, of course, the optimisation of the supply chain.

The Smart Supply Chain

The new TrustOS-based platform establishes a new model of collaboration between supply chain participants based on trust. As we have been saying, Blockchain enables the creation of a unique digital identity for each product in an immutable “repository”. This repository is agreed with all participants (suppliers and partners) and is therefore auditable, trustworthy and transparent.

The Platform records all events and enables automatic tracking of every change in the asset lifecycle. The Smart Contracts engine is used to orchestrate suppliers and supply chain actors. It also enables the automation of functions such as inventory management and procurement. The entire ecosystem of companies involved in the supply, installation and consumption of materials interacts efficiently and information is easily accessible.

Participants’ existing systems communicate through standard APIs, so there is no need to modify them. This also allows data ingestion to be automated.

All in all, we can list the following advantages that a company gains from the solution:

  • Facilitates the unification of work systems across companies
  • Improves data integrity, inventory management and control.
  • Enables traceability and auditability of delays, incorrect payments, inconsistent transactions.
  • Actively detects “inconsistencies” in transactions, as well as non-compliance at any point in the process (shipments, quantities, transfers, receipts, etc.).
  • Evaluates the types of materials used and their consumption per site (anomalies).
  • Verifies the treatment of returns.
  • Acts in the face of potential problems and mitigates risks
  • Avoids the inappropriate use of products.
  • Acts and minimises common problems such as shrinkage, overstocks, potential breakages, obsolescence, etc.
  • Provides information on each product in almost real time and with access to its traceability history.
  • Identifies a product by its characteristics of origin at any point in the chain and allows action to be taken on it.
  • Allows greater efficiency in reverse logistics and waste reduction (collection, recycling, reuse, packaging…).
  • Facilitates the auditing of transformation processes and the authenticity of components.

The platform is part of Telefónica Tech’s ambition to accelerate digital innovation and the adoption of new technologies in companies. We are of course talking about Blockchain, Internet of Things or Artificial Intelligence. We intend to create a reference for the construction of these digital ecosystems of trust that connects people, resources and organisations. But also, a catalyst for generating efficiencies along the entire supply chain through continuous innovation. It is our vocation to continue evolving it by incorporating both new technologies and new business requirements that allow us to broaden its impact.

The adoption of initiatives related to sustainability must play an important role in this model. To this end, it is essential to ensure the sustainable origin of materials and to control reputational risk by extending control to its network of suppliers.

For all of this, we count on the synergies of three major technology companies: Telefónica, IBM and EY. Between us, we have extensive experience in supply chain operations, not only in the telco sector but also in others such as the automotive and energy industries. Together, the three companies digitise the end-to-end process, integrating different technologies and adapting them to the particularities of the different industrial sectors.


Europe’s new digital identity; sovereign identity wallets

Alexandre Maravilla    13 October, 2021

Have you ever stopped to think about how many user accounts we have on the Internet? Bank accounts, utility providers, Social Networks, email, e-commerce, … Nowadays we handle an almost infinite number of digital services.

How many times did you have to repeat the same registration process? Do you remember what personal information you shared each time? Do you know what personal data is stored and processed by each of these services you are registered in?

According to a Eurobarometer survey, 72% of users want to know how their data is processed when using digital services, and 63% of EU citizens want a single, secure digital ID for all online services.

A new European model for digital citizen identity

In this context, on June 3, 2021, the European Commission announced its new proposal for a secure and trusted digital identity. In the words of Ursula von der Leyen, President of the European Commission:

«Whenever an app or a website asks us to create a new digital identity or to easily connect through a large platform, we really have no idea what happens to our data. This is why the Commission will propose a secure European e-identity. An identity that we trust and that every citizen can use everywhere in Europe for everything from paying taxes to renting a bicycle. A technology allowing us to control for ourselves what data is used and how it is used»

From user accounts to sovereign identity wallets

The EU has set out to regain sovereignty over our personal data and is working on the definition of the new digital identity model, based on identity wallets. A wallet is a cryptographic application that is installed on our mobile devices allowing us to store and share credentials related to our identity and its attributes.

This new model is based on:

  • The concept of sovereign/decentralised identity on blockchain
  • Verifiable credential exchange standards

Under this new paradigm, we go from having as many identities or user accounts as digital services we use to a single identity that we carry on our mobiles and share totally or partially (through its attributes) with the rest of the world.

Binding Directive for EU countries

The new regulation on electronic identification (eID) is part of the European regulatory scheme eIDAS (electronic IDentification, Authentication and trust Services) and will be mandatory for EU Member States, which, by the end of 2023 or beginning of 2024 at the latest, will have to provide their citizens with an identity wallet that enables them to:

  • Access public services and apply for e.g. a birth or medical certificate, or report a change of address.
  • File your tax return
  • Apply for a place at a public or private university in any EU member country
  • Open a bank account
  • Store a medical prescription that can be used anywhere in Europe
  • Validate your age online/offline without having to share/show your national identity document
  • Rent a car using a digital driving licence
  • Check into a hotel

Impact on the private sector

This new regulation will also be mandatory for the private sector, in particular for those online services that need to implement “strong” authentication mechanisms. This includes sectors such as transport, energy, banking and financial services, insurance, health, telecommunications and education, as well as large online platforms such as Google, Apple, Facebook and Amazon.

According to the European Commission, it is estimated that the implementation of this new identity model will benefit the private sector through:

  • Reducing the operational costs of identifying, authenticating and managing users’ personal data.
  • Reducing online fraud.

Recovering digital sovereignty

Decentralised/sovereign identity models have long been a hot topic in the identity world. There was consensus among experts on their usefulness and technical feasibility, but momentum to validate their economic viability was lacking, as was a use case to energise the ecosystem. Now that momentum seems to have finally arrived via the EU.

All in all, the horizon suggests that we are beginning to redefine identity as we know it: a new model of identity designed for people, in which attributes such as privacy and sovereignty over personal information are defining factors from its initial conception.

What have we learned about Cloud this September?

Telefónica Tech    11 October, 2021

This new course has come loaded with knowledge for the Telefónica Tech blog. Thanks to our experts, we continue to advance, on a daily basis, in our training on technology: cybersecurity, IoT, Big Data, Artificial Intelligence, Blockchain and Cloud. The latter has made a strong entry into the blog, and we want to make sure you don’t miss out on what has been discussed.

We know that the term Cloud is in a lot of conversations lately and we don’t want you to miss out on all the latest news. For this reason, we bring you the best compilation of September. Are you ready? 👇

The first Cloud dictionary

A single volume of the dictionary is not enough

Stay tuned to our blog, because this October will be packed with more content to further expand your knowledge of the Cloud. Will you join us?

Cyber Security Weekly Briefing 2-8 October

Telefónica Tech    8 October, 2021

Apache vulnerabilities actively exploited

Earlier this week, Apache fixed a 0-day (CVE-2021-41773) affecting Apache HTTP servers which was being actively exploited. However, on Thursday we learned that the patch released in version 2.4.50 was insufficient, giving rise to a new vulnerability, as a remote threat actor can still carry out a Path Traversal attack to map URLs to files outside the web server’s root directory via Alias-like directives. In addition, remote code execution would also be possible if CGI scripts are enabled in these aliased paths. This new vulnerability, identified as CVE-2021-42013, affects versions 2.4.49 and 2.4.50 and is also being actively exploited. Apache released a fix for the new vulnerability in version 2.4.51. CISA also issued a release urging organisations to apply the patches as soon as possible, as mass scanning to exploit these flaws is being observed.

More: https://blog.talosintelligence.com/2021/10/apache-vuln-threat-advisory.html

Vulnerability in Azure AD enables brute force attacks

Security researchers at Secureworks have published the discovery of a new vulnerability in Microsoft Azure. This flaw, which has not yet been fixed by Microsoft, could allow threat actors to perform brute force attacks against Azure Active Directory without being detected, as no login events would be generated on the victim company’s tenant. The flaw resides in Azure’s Seamless Single Sign-On (SSO) feature, which allows users to automatically log in without having to enter credentials. However, the exploitation of this flaw is not just limited to organisations using Seamless SSO. Microsoft has told researchers that Seamless SSO features are being enhanced to mitigate the vulnerability. Since the vulnerability became known, some proofs of concept for exploiting the flaw have already been published on GitHub.

All the details: https://www.secureworks.com/research/undetected-azure-active-directory-brute-force-attacks

Syniverse suffers years of unauthorised access to its systems

Last September, the company Syniverse reported to the US Securities and Exchange Commission that in May this year it had discovered a security incident affecting its EDT (Electronic Data Transfer) environment, with unauthorised access to internal databases on several occasions since 2016. The company indicated that the incident did not affect its operations and that there was no attempt at extortion. The media outlet Motherboard has published an article attempting to assess the possible real scope of these events, highlighting that Syniverse provides services to more than 300 companies in the telecommunications sector, such as AT&T, Verizon and T-Mobile. The article adds that a former employee of the company reportedly stated that the affected systems contained access to metadata such as call records, personal data, phone numbers and locations, as well as the content of SMS text messages. According to security researcher Karsten Nohl, Syniverse has access to the communications of billions of people around the world, so this would be a serious breach of users’ privacy. However, the outlet reported that Syniverse has declined to comment on specific questions about the actual extent of the breach.

Learn more: https://www.vice.com/amp/en/article/z3xpm8/company-that-routes-billions-of-text-messages-quietly-says-it-was-hacked

950GB of data extracted from an Agent Tesla C2

Resecurity researchers, working with ISPs in the European Union, the Middle East and North America, reportedly managed to extract 950GB of data from a Command & Control (C2) server of the Agent Tesla RAT, active since late 2014 and known for compromising sensitive information through malspam campaigns. Analysis of the information uncovered user credentials and confidential files, among other things, allowing researchers to establish patterns in the use of Agent Tesla by threat actors. These patterns include the geographical distribution of victims, with the most affected regions being the United States, Canada, Italy, Spain, Chile and Egypt, and the sectors most affected by this RAT, including finance, retail and government. According to various security researchers who have been monitoring this malware, Agent Tesla will continue to be a threat to Windows environments, especially after observing that the new version of the RAT attacks Microsoft’s AMSI (Antimalware Scan Interface) to avoid detection and prolong infections.

Full info: https://securityaffairs.co/wordpress/123039/malware/agent-tesla-c2c-dumped.html

TangleBot – New Android Malware

Security researchers at Proofpoint have discovered a new malware for Android mobile devices, which they have named TangleBot and which currently targets users in the United States and Canada. The malware is distributed via smishing campaigns simulating the sending of COVID-19 regulations or information related to possible power outages. In the SMS messages, victims are prompted to click on a link about a supposed Adobe Flash update and are invited to download it. What is actually installed is TangleBot, a malware that gives attackers full control of the device, allowing them to monitor and record user activity, activate a keylogger to intercept all typed passwords, and record audio and video using the device’s microphone and camera without the user’s knowledge. In addition to its spying and keylogging capabilities, the malware can block and make calls, opening the door to the activation of premium services.

More details: https://www.proofpoint.com/us/blog/threat-insight/mobile-malware-tanglebot-untangled

IoT, Big Data and AI convergence report

Telefónica Tech    7 October, 2021

The IoT and Smart Cities Cybersecurity Innovation Centre of Telefónica Tech Cyber Security & Cloud in Valencia, Spain, brings us a compilation of the potential risks related to IoT, Big Data and Artificial Intelligence, catalogued in different areas such as ethics, legal, cybersecurity and privacy.

The report presents the latest version of the challenges these technologies pose, the state of the art of the regulation that affects them, and how Data Governance contributes to achieving the ethical, technical and cultural objectives set. What are you waiting for? Discover the full report.

Where does ransomware attack? Three main pillars

David García    5 October, 2021

It all starts with a tweet from a researcher (Allan Liska of Recorded Future) announcing that he is compiling a list of vulnerabilities currently being exploited by organised groups in ransomware operations.

It was, and still is, a good idea, so the good side of the Internet began to work and collaborations began to arrive, extending the set of vulnerabilities. In the end, a more or less fixed picture was reached (we all know that in technology, years pass in days):

These are, to a large extent, to blame for many of today’s headaches and millions in losses. The list will change: some CVEs will drop off through exhaustion while new ones enter and replace the old in a perverse cycle that seems to have no end.

If we take a good look at the image, we can see that they correspond to vulnerabilities in products that can reside in the network perimeter of our organisation as well as in the desktop systems or in the cloud.

There is heterogeneity in the classification, and it maps directly to this other publication from New Zealand’s CERT, which illustrates perfectly how a ransomware operation broadly works:

The table above belongs to the first, initial phase, where first contact takes place. Thus, for example, vulnerabilities affecting Microsoft Office are triggered through the chain “Email -> Malicious Document -> Malware”, while those affecting a product sitting at the network perimeter, exposed to the Internet, would fall under “Exploit software weaknesses”.

The connections do not end here. Vulnerabilities specific to operating systems often involve elevation of privileges, which guarantees two main things: access and persistence in the network (“Take control -> …”).

Once inside the perimeter, the focus shifts to discovering internal systems, exploiting them, taking control and elevating privileges. From this point onwards, the company’s value, its data, is the target. And not just live data: there will also be attempts to wipe out backups, the only viable remedy against ransomware once all preventive controls have failed.

Basically, the pillars that criminal groups strike at can be summarised in the following three points:

  1. Vulnerabilities that allow taking control of the device exposed to the Internet.
  2. The human factor as a point of failure exploitable by social engineering.
  3. Poor configuration and implementation.

The technical pillar (exploiting critical vulnerabilities)

In the first case, the control is prevention and vigilance. As has been said many times, systems must always be up to date. There is no excuse. If we depend on a technology that is reaching end of life, the clock is ticking until it is replaced, so it is better to bring that replacement forward than to postpone it indefinitely.

Moreover, it is not just a matter of waiting for the manufacturer’s patch: as soon as we hear of a new vulnerability, we must put some kind of countermeasure in place, taking it for granted that it is going to be exploited while we make our move.

There are infrastructures whose defence is staked on a single point of the perimeter, and when that point falls, the consequences are devastating. We cannot place all the responsibility on a single control. Defence planning must take for granted that that point in the network may be compromised at any given moment. The fact that a machine is part of an internal network should not, by itself, engender trust. Imagine a stranger who hangs a name tag on his shirt and wanders around the departments of an office as he pleases.

In fact, a good number of vulnerabilities are discovered only when the malware exploiting them is already wreaking havoc. In other words, a zero-day, discovered precisely because of its activity, is not detected by any antivirus solution or the like. There is no signature, it has not been seen before, it is not suspicious, and yet it knocks down computers and systems. You have to be mentally and technically prepared to take such a blow: your computers are properly updated and yet they are compromised.

The human pillar (phishing and social engineering in general)

In this case, we are talking about malware that needs the help of a human to act. This is no longer a vulnerability that can directly take control of a computer or at least run as a process. What we have is an unwitting helping hand whose finger makes the terrible decision to send two clicks through the mouse and trigger a cascade of actions that end badly.

That decision is made because false information has been supplied that creates a situation the person perceives as safe. A piece of theatre. Email is the king here, but nowadays there are even operations set up and run by posing as managers or department heads. Social engineering works. Always.

Does awareness-raising work as a countermeasure? It is paradoxical. Imagine in the Middle Ages a castle that wants to defend itself against a possible surprise takeover. The sergeant of the guard lectures the watchmen every night to be vigilant. A state of alertness is induced which the soldiers internalise, but which they end up normalising as they see that night after night nothing happens. Until the moment comes when the walls are stormed and they are caught… with their guard down despite the poor sergeant’s constant warnings and harangues.

Perhaps the problem is that we call “awareness” what we should call (and practise as) training. Training is what generates a tailored response to a particular problem. Make your employees understand the problem they face. Give them the opportunity to learn through simulated exercises. If, instead of keeping them on continuous alert, the sergeant had trained his men with night raids, they might have read the early signs of an invasion and would not now be under enemy fire.

Social engineering works not because you are not on continuous alert, but because you do not know how to identify the signals that reveal you are walking into a trap. Let’s remember that even in 2021, classic scams like the “pigeon drop” still work.

The pillar of carelessness (everything (wrong) by default)

This type of breach sits somewhere between technical and human failure: technical, for bringing to market a product or system whose default configuration is undemanding in terms of security; human, for deploying and integrating it without bothering to change the default parameters or harden the system.

The clearest case is the system with an account with default credentials. There have been (and continue to be) hundreds of well-documented cases. When doing a pentest, there is always the phase of knocking on doors hoping that one of the usual keys will open the door.

A particularly glaring case is CVE-2000-1209, the classic Microsoft SQL Server ‘sa’ account with no password (rather, a null password), which filled audit and pentest reports for many years. In fact, in the early 2000s several worms emerged that exploited this oversight.

Mirai also had a field day with this kind of oversight. The IoT botnet reached a large number of nodes thanks to a simple list of default accounts on all kinds of network systems, set up and left to their own devices on the Internet.

In the cinema, there is a very famous cliché in which one of the protagonists struggles with a door until he exhausts himself. Then another one of them, with a certain degree of mockery, approaches the door and opens it by simply turning the knob. The image should stick in our minds. We are giving cybercriminals the freedom to turn the knob and open the door.

It is an example that shows that sometimes there is no need for the great effort of finding a zero-day vulnerability that allows arbitrary code execution. These are the worst breaches, because they are an evil that could have been avoided in an extremely simple way: by changing the default password to a suitable, robust one.

Usually this happens for several reasons: rushing to finish due to poor planning, staff not trained in cyber security, assuming the manufacturer ships a secure default configuration, the lack of a security policy (no guidelines, no controls), and so on.
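
One cheap control, sketched below in Python (the credential list is illustrative, not exhaustive), is to make the deployment pipeline refuse any configuration that still carries factory credentials:

    # Hypothetical pre-deployment check against well-known factory credentials.
    DEFAULT_CREDENTIALS = {
        ("admin", "admin"),
        ("root", "root"),
        ("admin", "1234"),
        ("sa", ""),  # the classic passwordless SQL Server account
    }

    def uses_default_credentials(user: str, password: str) -> bool:
        return (user, password) in DEFAULT_CREDENTIALS

    config = {"user": "sa", "password": ""}  # example input from a config file
    if uses_default_credentials(config["user"], config["password"]):
        raise SystemExit("Refusing to deploy: default credentials detected")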

Conclusion

As we have seen, ransomware comes in through the slightest opening in the door. Once inside, it makes itself at home wherever we have let our guard down and finally, when it is perhaps too late, it hurts us on a scale ranging from a bad afternoon to a complete business shutdown.

Identifying the possible avenues of entry and the techniques used is a fundamental skill for planning our defence. Either that, or trust to luck and hope we are not handed the losing ticket, the one whose number reads: “All your files have been encrypted…”.

Download our new guide created in partnership with Palo Alto to help you prepare, plan, and respond to Ransomware attacks

Cyber Security Weekly Briefing 25 September – 1 October

ElevenPaths    1 October, 2021

Let’s Encrypt root certificate expires (DST Root CA X3)

A few days ago, Scott Helme, founder of Security Headers, flagged 30 September as the date when Let’s Encrypt’s root certificate, DST Root CA X3, would expire. From 14:01 UTC on 30 September, as the root certificate used by multiple websites expired, all devices and browsers that had not been updated (and which therefore no longer trusted the certificate) began to experience problems with connections being flagged as untrusted. In his article, Helme provided a list of clients that only trusted the expiring certificate and would therefore experience problems after expiry: “OpenSSL <= 1.0.2, Windows < XP SP3, macOS < 10.12.1, iOS < 10 (iPhone 5 is the lowest model that can get to iOS 10), Android < 7.1.1 (but >= 2.3.6 will work if served ISRG Root X1 cross-sign), Mozilla Firefox < 50, Ubuntu < 16.04, Debian < 8, Java 8 < 8u141, Java 7 < 7u151, NSS < 3.26 and Amazon FireOS (Silk Browser)”. To avoid this problem, Let’s Encrypt has a new root certificate, ISRG Root X1. It is also worth noting that, until now, the firm used a cross-signing system that made DST Root CA X3 compatible with the more recent and widely deployed ISRG Root X1; with the expiry of the former, this practice comes to an end. Following the expiry, and despite the warnings, Helme has reportedly confirmed problems at firms such as Palo Alto, Bluecoat, Cisco Umbrella, Catchpoint, Guardian Firewall, Monday.com, PFsense, Google Cloud Monitoring, Azure Application Gateway, OVH, Auth0, Shopify, Xero, QuickBooks, Fortinet, Heroku, Rocket League, InstaPage, Ledger, Netlify and Cloudflare Pages.

All the details: https://www.zdnet.com/article/fortinet-shopify-others-report-issues-after-root-ca-certificate-from-lets-encrypt-expires/

Chrome fixes new 0-days actively exploited

On 24 September, Google released an urgent update for its Chrome browser for Windows, Mac and Linux that fixes a 0-day. According to Google, there are already reports of its active exploitation in the wild by threat actors, although details of the alleged incidents have not been made public. The flaw, identified as CVE-2021-37973 (no CVSSv3 score for the moment), resides in Google’s new navigation system for Chrome called “Portals” and is a “use after free” flaw (use of previously freed memory) which, after successful exploitation in vulnerable Chrome versions, would allow the execution of arbitrary code. Google has already released Chrome version 94.0.4606.61, which fixes the issue and, according to the company’s own release, “will be deployed in the coming days/weeks”.

Only a few days later, on 30 September, Google released another urgent update to its Chrome browser for Windows, Mac and Linux, fixing two new 0-days for which no specific details have been released; these remain reserved until the patch has been widely deployed. The vulnerabilities, which according to Google are being actively exploited, have been identified as CVE-2021-37975, a use-after-free flaw in the V8 JavaScript and WebAssembly engine that would allow program crashes and arbitrary code execution, and CVE-2021-37976, which causes an information leak in the browser core. Google has already released Chrome version 94.0.4606.71, which fixes the problems and should reach users in the coming days. It is worth noting that so far this year Google has been forced to patch up to 14 0-day vulnerabilities, so it is recommended to keep the browser updated to its latest version.

More info: https://chromereleases.googleblog.com/2021/09/stable-channel-update-for-desktop_30.html

Good practice guidance for VPN selection and hardening

The National Security Agency (NSA) and the US Cybersecurity and Infrastructure Security Agency (CISA) have jointly created and published a document entitled Selecting and Hardening Remote Access VPN Solutions. Its main purpose is to help organisations choose a VPN solution that follows current standards, as well as to define best practices for using strong authentication credentials, patching vulnerabilities promptly, and implementing processes to secure and monitor access to and from the VPN. The publication of this guide follows numerous attacks this year against government and defence institutions in several countries by threat actors, mainly government-backed, and by various ransomware groups, which have exploited known vulnerabilities in widely used VPN services such as Fortinet, Pulse Secure or Cisco. The document is now publicly available at the following link and, as the NSA itself states in its press release, “The publication of the guidance is part of its mission to help protect the departments of defense and homeland security”.

Learn more: https://us-cert.cisa.gov/ncas/current-activity/2021/09/28/cisa-and-nsa-release-guidance-selecting-and-hardening-vpns

GriftHorse malware for Android devices subscribes to paid services

Security researchers at Zimperium have discovered a new trojan, distributed on a large scale since November 2020, that subscribes victims to premium SMS services. It has so far infected more than 10 million Android devices in more than 70 countries. The malware is distributed via legitimate-looking apps posing as tools, personalisation or entertainment software, uploaded to the official Google Play Store and third-party shops. It is built with the Apache Cordova framework, which makes it cross-platform and allows it to deploy updates without user interaction. Once installed, the application repeatedly displays alerts about supposed prizes to redirect the victim to a website in their language where, by entering their phone number, they are subscribed to a premium SMS service with a monthly cost of more than €30. It is worth noting that the malware uses several techniques to avoid detection: it avoids hard-coding URLs, does not reuse domains, filters content based on the geolocation of the IP address and evades dynamic analysis of its communications. Researchers estimate that the trojan’s authors make a monthly profit of between 1.2 and 3.5 million euros.

Info: https://blog.zimperium.com/grifthorse-android-trojan-steals-millions-from-over-10-million-victims-globally/