The Ultimate Evolution Towards the Digitisation of Industry

Andrés Escribano    22 September, 2022

Digital transformation brings us new opportunities and is changing our lives, both in the way we communicate and in the way we produce.

Digitalisation of the industrial fabric in particular is key to opening up new business opportunities, enhancing competitiveness and efficiency, and guaranteeing the sustainability of the industry.

Nowadays we can already see how companies in the industrial sector and their processes are transforming at great speed to adapt to the needs of this new digital revolution. All of this, thanks to technologies such as the Internet of Things, Big Data, Artificial Intelligence and Blockchain.

Automation and productivity improvement

Automation in Industry 4.0 continues to gain relevance, as it allows routine and repetitive processes to be carried out more quickly, comfortably and efficiently, thus increasing productivity significantly.

One of the most important points to achieve this improvement in productivity is the monitoring and control of processes, a task that can nowadays be carried out by machines in a practically infallible way and in real time. This enables the end-to-end control of the production process, facilitating the integration and interoperability of the different equipment and sensors.

Such control also allows the identification of links with low productivity, facilitating modifications as soon as possible and thus avoiding loss of profits or investments in unsuitable elements.

In addition to these advantages, the automation and robotisation of processes reduces production and error detection times during the process, improves product quality and reduces production and maintenance costs.

Connectivity

In this digital revolution, where a multitude of devices will be interconnected in real time for instant decision making, it is vital to have good connectivity in factories.

5G networks are the wireless connectivity solution that connects all components of the industrial sector, optimising internal processes, in environments such as manufacturing, mining, ports and airports, petrochemicals, and logistics.

They open the door to more efficient, flexible and autonomous production plants, making production processes more mobile and flexible, eliminating wiring and enabling production lines to adapt quickly to orders.

Today, there are 5G solutions that offer dedicated mobile connectivity in the industrial environment, with synergistic technologies sharing the same infrastructure and enabling public/private connections. IIoTNs (Industrial IoT Networks, based on LTE or 5G) facilitate the paradigm shift that Industry 4.0 is bringing to its production chains: from static to virtual and 100% configurable in real time, depending on demand.

Data analytics and new business models

All the information generated in Industry 4.0 processes can be studied to improve productivity. This is possible thanks to Big Data Analytics processes, which can provide reports related not only to productivity, but also to the probability of success, points of improvement or modifications in real production time.

Enabling advanced analytics in the industrial environment comes through the creation of an integrated operations environment that facilitates the collection and analysis of data from different processes with Big Data, Machine Learning and AI techniques.

The rapid response and planning enabled by Big Data tools make it possible to carry out predictive actions in the industrial environment, such as predictive maintenance of machines and equipment, which reduces their downtime, increases their useful life, cuts maintenance costs, reduces waste in manufactured products and lowers the environmental impact.
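To make the idea less abstract, here is a minimal sketch of the kind of rule that underlies many predictive-maintenance alerts: flag a sensor reading that deviates strongly from the machine's recent behaviour. The field names and thresholds are invented for the example; real deployments use far richer models, so treat this purely as an illustration.

```python
# Minimal predictive-maintenance sketch: flag sensor readings that deviate
# strongly from recent behaviour. Field names and thresholds are illustrative.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, n_sigmas=3.0):
    """Yield (index, value) for readings far outside the rolling window."""
    recent = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > n_sigmas * sigma:
                yield i, value  # candidate for a maintenance alert
        recent.append(value)

# Example: steady vibration levels with one suspicious spike
vibration = [1.0, 1.1, 0.9, 1.0, 1.05] * 5 + [4.2]
for i, v in detect_anomalies(vibration):
    print(f"reading {i}: {v} mm/s looks anomalous")
```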

In short, the digitalisation of the industrial environment allows the factories of the future to work more efficiently, with better results in terms of quality and availability and, above all, greater flexibility: flexibility to adapt to changes in demand and to the growing customisation of manufactured products, which in turn enables manufacturers to respond better and even to develop new business models focused on the end client and on sustainability.

Industrial and Energy Symbiosis (or Industry and Energy Symbiosis)

We are currently facing a global climate emergency where reducing greenhouse gas emissions into the atmosphere is a major challenge, a task in which we must all be involved.

Industries seek to become more sustainable and to reduce the environmental impact of every stage of the value chain across their production, logistics and commercial processes. It is essential that the industrial environment can deliver technological developments aimed at improving the efficiency of these processes and optimising energy consumption across factories and company facilities.

This can be achieved through more digitalised and automated solutions, using connectivity, sensorisation, Big Data, Machine Learning and AI technologies that enable factories to work more efficiently in terms of both production and energy. Some examples: reducing machine and equipment downtime through predictive maintenance; reducing wastage; optimising the routes of vehicles used in internal and external logistics and in fleet management; or deploying modular, scalable and adaptable energy efficiency solutions that control and manage the energy consumption of facilities.

It is therefore very important to maintain this symbiosis between the industrial environment and energy efficiency: making industrial, distribution, logistics and commercial processes and assets more efficient can substantially improve not only savings in production and maintenance costs but also the energy savings of factories and facilities, increasing sustainability and care for the environment.

In conclusion, we highlight the fundamental role that each of these subjects plays in the industry:

  • Automation in Industry 4.0 to improve productivity.
  • 5G networks as enablers for efficient connectivity in factories.
  • Analytical tools enabling predictive actions.
  • Reducing environmental impact through technological developments.

Today, more than ever, we can say that the world is moving in a clear direction: digitalisation and sustainability.


Latency and Edge Computing: Why is it important?

Emilio Moreno    20 September, 2022

For many years we have been in a race to increase the speed of our connections. Ever since those modems that treated us to a symphony of beeps, whose end we awaited anxiously for confirmation of the speed at which we had finally connected, higher speeds have always been the goal to be achieved.

The incorporation of new technologies, such as ADSL, fibre optics, 3G or 4G mobile communications, and private MPLS networks, has gradually brought higher and higher speeds. And in many cases, the commercial claim has been to promise more kilobits, more megabits, in a technical and commercial race so that we can consume new services. For example, mobile internet consumption did not become widespread until the arrival of 3G. The case of HD or UHD video is unthinkable without these higher bandwidth values.

But bandwidth is not the only parameter that is important when consuming digital services. This is where latency comes in.

Latency, the great protagonist

Latency measures the time that elapses between the client initiating a communication and receiving the response. The order of magnitude we are dealing with is milliseconds.

Latency, even if it has not been very visible, has always been there, and some of its consequences are sometimes perceptible. When transatlantic communications were carried out via satellites in geostationary orbit, more than 35,000 km above the earth's surface, the time taken for the signal to travel from the earth station to the satellite and back down to another earth station added enough delay to complicate conversation, with awkward pauses, speakers talking over each other, and so on. Here the latency is in the order of hundreds of milliseconds.

Another example is in data centres when replicating data between two locations. There are hardware solutions that do not acknowledge write operations until the equivalent write has been committed on the secondary system, to ensure that the copy has been performed correctly. This is why many vendors have at least two data centres in the same metropolitan area to offer synchronous replication solutions.

In contrast, there are many other situations where latency is not relevant, because communications response times are much shorter than the processing time, or than the responsiveness of a human being. For example, most web query applications are not particularly sensitive to latency.

In mobile communications, the advent of 5G has been a major departure from previous generations. While this technology promises a growth in speed, it has put latency at the centre: on the one hand, to achieve much lower values, and on the other, to ensure stable, tightly controlled values with little variation. But this is not only happening in mobile communications: fibre networks also allow for lower and more stable latency values.

And it is latency where Edge Computing really comes into its own. Edge means, in simplified terms, bringing computing capabilities to the edge of the network.

Why bring this compute capacity to the edge of the network?

The main advantage is to improve the latency perceived by the consumer of this capacity. If, instead of the hundreds or thousands of kilometres that the signal would have to travel to reach a traditional Data Centre, it only has to travel a very short distance of a few kilometres, latency drops to a handful of milliseconds.
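A back-of-the-envelope calculation illustrates the point. Propagation delay alone puts a floor under latency: light travels through fibre at roughly two-thirds of its speed in a vacuum, so distance translates directly into milliseconds. The figures below are illustrative orders of magnitude, not measurements of any particular network.

```python
# Rough one-way propagation delays (ignoring routing, queuing, processing).
# Values are illustrative orders of magnitude, not measurements.
C_KM_S = 299_792          # radio / free space
FIBRE_KM_S = 200_000      # light in fibre travels at ~2/3 of c

def delay_ms(distance_km, speed_km_s):
    return distance_km / speed_km_s * 1000

print(f"geostationary hop (up + down, radio): "
      f"~{delay_ms(2 * 35_786, C_KM_S):.0f} ms")       # ~240 ms
print(f"traditional data centre 2,000 km away (fibre): "
      f"~{delay_ms(2_000, FIBRE_KM_S):.0f} ms")          # ~10 ms
print(f"edge node 10 km away (fibre): "
      f"~{delay_ms(10, FIBRE_KM_S):.2f} ms")             # ~0.05 ms
```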

But is it really worth the effort to deploy multiple nodes to bring computing closer to end users? For some use cases, it certainly is.  And this is where one of the most important lines of work begins: identifying the use cases that really need a very low latency value.

In this line, at Telefónica we have been working for some time with our customers and partners to identify use cases that could only happen on an Edge Computing infrastructure. Many of them come from the most advanced lines of research and are still at a very preliminary stage. Some examples: augmented reality, Smart Industry, real-time image recognition, gaming, drone management, etc.

For this reason, Next Generation Networks (5G and Fibre) combined with Edge Computing are the winning option to optimally develop solutions that are sensitive to latency.


The Merge: one small step for Web3, one giant leap for Ethereum

Jorge Ordovás    19 September, 2022

Many companies across many sectors have recently set their sights on the opportunity that Web3 will bring for the future of their business. As our CEO Jose María Álvarez-Pallete says, the transformation it enables is already here: we are talking about a different ecosystem, a different Internet, much more intelligent, based on the existence of a “decentralised supercomputer” built on Blockchain technology, where business logic is programmed through smart contracts.

Ethereum is the world’s largest programmable Blockchain, the basis for this future of the Internet (Web3), finance (Decentralized Finance – DeFi) or self-managed digital identity (Self Sovereign ID – SSI), among other areas.

Ethereum in numbers and its influence on the Web3 ecosystem

Despite market volatility and global macroeconomic uncertainties (inflation, war, energy crisis, etc.), the current Web3 ecosystem with decentralised services such as DeFi is a reality, thanks to Ethereum. This network offers surprising adoption and usage figures that are clear signs of maturity, such as those pointed out by Consensys (one of the most relevant companies in the ecosystem) in a recently published report:

  • The number of unique users of the network (Ethereum addresses) has doubled to over 200 million in the last two years (early adopters, yes; but 200,000,000 is a lot of early adopters).
  • More than one million transactions per day have been generated over the last 12 months.
  • The cost of using existing services on the network (one of the most historically relevant constraints to wider adoption) has steadily declined since January 2022.
  • Incentives for the actors that make Ethereum's network possible (the so-called “miners”) have reached USD 1.8 billion since mid-March 2022, the highest among the top 20 blockchains.

On the other hand, a huge ecosystem of applications, tools, infrastructure and protocols has been developed over the last few years, making it increasingly easy to develop Web3 solutions and services.

Web3 Ecosystem (source: Coinbase)

Ethereum has also become the solid technological foundation on which many so-called “Layer 2” (L2) blockchain solutions are built, offering networks with higher performance or lower transaction costs where Web3 services can be deployed, without sacrificing security or availability of information, as they periodically use Ethereum’s “first layer” as a trusted repository of information.
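The “trusted repository” idea can be illustrated with a toy commitment: an L2 batches many transactions and periodically writes a single fingerprint of them to the first layer, so anyone holding the batch can later verify it was not altered. The sketch below hashes a whole batch into one value; real rollups use Merkle trees plus fraud or validity proofs, so treat this purely as an illustration.

```python
# Toy illustration of an L2 committing a batch of transactions to L1 with a
# single hash. Real rollups use Merkle trees plus fraud/validity proofs.
import hashlib
import json

def batch_commitment(transactions):
    """Deterministic fingerprint of an ordered batch of transactions."""
    payload = json.dumps(transactions, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

batch = [{"from": "0xabc...", "to": "0xdef...", "value": 10},
         {"from": "0x123...", "to": "0x456...", "value": 5}]

commitment = batch_commitment(batch)
print("commitment posted to L1:", commitment)

# Later, anyone holding the batch can recompute and compare:
assert batch_commitment(batch) == commitment  # batch unaltered
```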

One of the most prominent of these “L2 solutions” is Polygon, which today manages the largest transactional volume of all existing Blockchain networks, and which also offers an ecosystem with a negative carbon footprint, unlike the significant impact of the Ethereum mining process (which is highly energy demanding to operate).

A change that reduces environmental impact

However, I have good news for the environment: this is true not only for networks like Polygon, but also for Ethereum since Thursday 15 September at 8:43 a.m. (Spanish time), when the network underwent one of the biggest changes in its history, known as “The Merge”.

This change consisted of merging the Ethereum network (with all the existing information and services) with the “Beacon Chain”, a blockchain that manages a new consensus mechanism called “Proof-of-Stake”, which replaces the mechanism used until now, “Proof-of-Work” (or, more colloquially, mining).

The Merge activated at 8:43 on 15/9/2022 (Beacon Chain Client Console)

In order to understand the magnitude of this upgrade (which is a milestone in a process that has lasted years) we can consider an analogy: The goal was to change the engine of the Ethereum network, replacing the existing one (which used highly polluting fossil fuels) with an electric one (drastically reducing carbon emissions), but with the challenge of doing it in mid-flight and without the passengers, crew or pilot even being aware of the slightest detail of the process or suffering any inconvenience whatsoever.

This is what the Ethereum developers (and the entire ecosystem) have achieved, a true success story that we can tell our grandchildren about when they ask us what it was like when Web3 began.

The Merge has enabled, with immediate effect, a 99.95% reduction in the energy consumption required to validate Ethereum transactions.

This shift from the Proof of Work (PoW) to Proof of Stake (PoS) consensus mechanism radically changes the incentive model for the actors that make the network work.

Until now, miners had to constantly compete to generate new blocks in the network (which contain the transactions generated by users of all existing services), requiring a large computing capacity (which they obtained by acquiring specialised hardware to carry out this mining process, which in turn required very high energy consumption for its operation).

This investment that miners had to make, in order to achieve greater computing capacity than their rivals, was incentivised by obtaining a reward in cryptocurrencies (Ether, Ethereum’s native cryptocurrency) every time they generated a block accepted by the network (the business case consisted of weighing the costs of this infrastructure against the income obtained, depending on the price of Ether on the markets).

Reducing the carbon footprint also increases the attractiveness of Ethereum for companies and institutions with ESG objectives.

The Merge has made it possible, with immediate effect, to reduce by 99.95% the energy consumption needed to validate transactions in Ethereum, greatly limiting its carbon footprint and increasing its attractiveness for companies and institutions with clear environmental, social and governance objectives. This comes at a particularly sensitive time, given global concerns (especially in Europe) about energy scarcity and its high cost, and the increasingly evident impact of climate change.

Proof of Work (PoW) vs. Proof of Stake (PoS)

Therefore, as of 15 September 2022, there are no more “miners” (who have had more than enough time to “pivot” their business model during the years this transition from PoW to PoS has taken, and to take advantage of their investment in infrastructure by moving to mining on other networks).

In PoS we talk about “validators”, and these players no longer need large computing power to earn incentives for generating new blocks. They now take part in a “lottery” that periodically chooses who will propose a new block and a set of validators who will give their approval to that block.

The probability of being chosen for this process depends on the “ballots” they have purchased (to obtain each “ballot” they must first stake 32 Ether, just over €50,000 at the current exchange rate, which acts as a security deposit). Those chosen in each round of this “draw” get a reward (in Ether) for their activity, provided they perform it efficiently and according to the established rules, which is added to the amount initially deposited.
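To make the “lottery” intuition concrete, here is a minimal sketch of stake-weighted selection: each 32-Ether “ballot” is one unit of weight, and the chance of proposing the next block is proportional to the ballots held. This illustrates the principle only; it is not Ethereum's actual RANDAO-based selection mechanism.

```python
# Minimal sketch of stake-weighted validator selection: each 32-ETH deposit
# is one "ballot". Illustrative only; Ethereum's real selection uses RANDAO.
import random

validators = {"alice": 1, "bob": 4, "carol": 10}  # ballots (32 ETH each)

def pick_proposer(ballots):
    names, weights = zip(*ballots.items())
    return random.choices(names, weights=weights, k=1)[0]

# Over many rounds, selection frequency tracks each validator's stake share
counts = {name: 0 for name in validators}
for _ in range(15_000):
    counts[pick_proposer(validators)] += 1
print(counts)  # carol wins ~2/3 of the rounds, matching her share of ballots
```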

This mechanism simply requires having an application permanently connected to the network, ready to participate in the process of proposing or validating new blocks when required.

If, when chosen, it responds quickly and performs its task as expected, it will get the established reward. If it does not respond (e.g., because the application is not running at the time) or behaves incorrectly (or worse, maliciously), it will receive a penalty, the amount of which depends on the reason for the incorrect behaviour and the impact the action has had on the network.

This penalty is deducted from the 32 Ether deposit previously made by each validator and is what incentivises these actors to behave correctly and honestly, following the rules. If a validator repeats its bad behaviour, its deposit is progressively reduced, even to the point of expulsion from the network (losing those 32 Ether, plus any rewards obtained up to that point).

Thanks to this incentive mechanism defined in PoS, game theory (a branch of mathematics and economics) predicts that validators will behave honestly, thus guaranteeing the correct functioning of the network, because they will maximise their profit (in the same way that happened in the case of miners if they followed the rules in the case of PoW). Business is business.

However, not everything after The Merge has turned out as positive as it seems. This milestone in Ethereum's evolution was aimed not only at reducing environmental impact, but also at increasing the security and decentralisation of the network, after years in which mining had become concentrated in a few hands, posing a potential risk (several miners controlling a high share of the network's computing power could collude and disrupt its operation).

The Merge was just the first step towards Ethereum’s future

As we can see in the following graph, this concentration has only increased with the move to PoS, with just five players (including exchanges such as Coinbase, Kraken or Binance) accumulating more than 75% of the probability of being chosen to generate new blocks:

Distribution of validators in Ethereum according to staking (source: https://beaconcha.in/charts)

Looking to the future, in Ethereum’s roadmap, The Merge milestone is just a first step, with a whole sequence of updates planned to make the network on which Web3 is being built even more secure and scalable, this new ecosystem that we will see develop fully over the next few years (something we are working on very actively at Telefónica).

May the force be with Vitalik Buterin (the creator of Ethereum) and all the developers in their mission. And may we see it.


Cyber Security Weekly Briefing, 9 — 16 September

Telefónica Tech    16 September, 2022

Microsoft fixes two 0-day and 63 other vulnerabilities in Patch Tuesday

Microsoft has fixed 63 vulnerabilities in its September Patch Tuesday, including two 0-days, one of them actively exploited, and another five critical flaws that would allow remote code execution.

The actively exploited 0-day, identified as CVE-2022-37969 and CVSS 7.8, was discovered by researchers from DBAPPSecurity, Mandiant, CrowdStrike and Zscaler and affects the Common Log File System (CLFS), allowing an attacker to gain system privileges.

On the other hand, the second 0-day, which has not been exploited, is listed as CVE-2022-23960 with CVSS 5.6 and refers to a cache speculation restriction vulnerability.

The five critical flaws include two in Microsoft Dynamics CRM (CVE-2022-35805 and CVE-2022-34700), two in Windows Internet Key Exchange (IKE) (CVE-2022-34722 and CVE-2022-34721) and, finally, a flaw in Windows TCP/IP (CVE-2022-34718), all of which would allow remote code execution.

More info

* * *

Analysis of the OriginLogger keylogger

Researcher Jeff White from Palo Alto Networks' Unit 42 has published the results of his recent analysis of the OriginLogger keylogger, which is considered to be the heir to Agent Tesla.

It is used to steal credentials, screenshots and all kinds of device information and is for sale on sites that specialise in spreading malware.

Its infection chain starts with different types of droppers, usually a Microsoft Office document with malicious macros that redirects to a page from which a file with an obfuscated script is downloaded; this script in turn downloads a payload used to establish persistence and schedule various tasks.

The payload will also contain PowerShell code and two encrypted binaries, one of which is a loader and the other the actual OriginLogger payload.

Another feature that makes OriginLogger a separate version of Agent Tesla is the variety of data exfiltration methods, using SMTP and FTP protocols and servers, web pages with their own panels or Telegram channels and bots.

More info

* * *

Lampion malware distributed in new phishing campaign

Cofense researchers have analysed a phishing campaign distributed by email, in which the attachment contains a script that downloads and executes the Lampion malware.

This malware, discovered in 2019, corresponds to a banking trojan that seeks to steal information from the infected device. It connects to its command-and-control (C2) server and is able to superimpose a page on top of banking login forms to get the user’s information.

As for the campaign, it is distributed via stolen corporate accounts that send fraudulent emails, which attach malicious payment receipts hosted on WeTransfer and urge recipients to download them.

Once the recipient of the fraudulent email downloads the malicious document and opens it, several VBS scripts are executed and the attack chain begins. It is worth noting that Lampion focuses mainly on Spanish-speaking targets, abusing cloud services to host the malware, including Google Drive and pCloud.

More info

* * *

SAP Security Bulletins

SAP has issued 16 security advisories on its September Security Patch Day, fixing 55 Chromium vulnerabilities as well as other high-priority flaws.

First, SAP is issuing security updates for the Google Chromium browser that affect several versions of SAP Business Client. On the other hand, among the high priority vulnerabilities fixed is an XSS vulnerability affecting SAP Knowledge Warehouse, identified as CVE-2021-42063 and with CVSS 8.8.

Also among the most critical is CVE-2022-35292, with CVSS of 7.8, which affects the service path in SAP Business One and would allow privilege escalation to SYSTEM.

The second priority note corresponds to the SAP BusinessObjects service, affected by two vulnerabilities: one, CVE-2022-39014 with CVSS 7.7, would make it possible for an attacker to gain access to unencrypted confidential information, while the other, CVE-2022-28214 with CVSS 7.8, fixes a possible information disclosure in the service.

An update to a related vulnerability, CVE-2022-35291 with CVSS 8.1, affecting SuccessFactors has also been published, restoring the file attachment functionality.

More info

* * *

Webworm activity analysis

Symantec’s threat research team published a post yesterday detailing the activities of a group called Webworm, which reportedly uses the same TTPs and tools as the threat actor known as Space Pirates, leading researchers to believe they could be the same group.

According to the investigation, the group has been active since 2017 and has been engaged in attacks and espionage campaigns against government agencies and companies in the IT, aerospace and energy sectors, especially in Asian countries.

Among its usual resources are modified versions of the Trochilus, Gh0st RAT and 9002 RAT remote access trojans, used as backdoors and spread via loaders hidden in fake documents. It is worth noting that the RATs used by Webworm remain difficult to detect by security tools, as their evasion, obfuscation and anti-analysis tricks remain effective.

More info

How to become a cyber resilient organisation

Estevenson Solano    15 September, 2022

Fear, panic and uncertainty are some of the feelings constantly experienced in corporate leadership. In management committees, the big question is frequently asked: is our cyber security working?

Along with others: what are the new behavioural patterns of adversaries? How do we understand cyberspace in order to design, build and implement a cyber security strategy? How do we perceive the cyber threat landscape? And are we considering retrospective, prospective and panoramic views when defining a cross-cutting, comprehensive cyber security strategy?

The National Institute of Standards and Technology (NIST) defines resilience as “the ability of an organisation to transcend (anticipate, resist, recover from, and adapt to) any stress, failure, hazard, and threat to its cyber resources” within the organisation and its ecosystem, so that the organisation can confidently pursue its mission, enable its culture, and maintain its desired way of operating.

Comprehensively understanding the impact of cyber risks on an organisation is a complex but critical factor in strengthening cyber resilience. Therefore, frameworks and tools are needed to equip human talent to understand and communicate the prevailing cyber risks and their impact.

Cyber resilience must be seen as a strategic imperative.

Cyber resilience and its benefits must be clear to corporate leadership. Therefore, it is important to translate the impact of the state of cyber resilience into operations, strategy and business continuity. It is a commitment to position cyber resilience as a strategic imperative.

However, current figures and developments indicate that much work is needed to close the cyber resilience capability and performance gap between industry ecosystems and within organisations.

The World Economic Forum’s (WEF) Global Cybersecurity Outlook 2022 found that only 19% of respondents feel confident that their organisations are cyber resilient, indicating that a large majority know their organisations lack the cyber resilience needed to be commensurate with the risks they are exposed to.

In addition, the report found that 58% of respondents believe their partners and suppliers are less resilient than their own organisation, and 88% are concerned about the cyber resilience of the small and medium-sized businesses that are part of their ecosystem.

In another Accenture report, 81% of respondents said that “staying ahead of attackers is a constant battle and the cost is unsustainable”, compared to 69% in 2020.

No matter the size, sector or risk profile of your organisation, all of them are exposed to increasingly sophisticated cyber-attacks.

This indicates that as organisations, ecosystems, supply chains and supplier relationships become more interconnected and interdependent – and the pace of change and transformation processes accelerates – not only is resilience lagging, but so is a cohesive approach to how resilience is designed. It is increasingly clear that, despite this interconnectedness, there is no alignment to jointly overcome disruptive cyber events.

Is your organisation prepared for what is to come? Can you measure its capability in the face of various attacks, threats or incidents? It should be emphasised that, whatever the size, economic sector or risk profile of your organisation, all organisations are exposed to increasingly sophisticated, evolving and innovative cyber attacks.

The reality is that many organisations are ill-equipped to demonstrate their capability to withstand sophisticated cyber-attack behaviour. What do we need? Where do we join forces to move forward? Do we have the operational, technical and strategic capabilities? How can we draw a roadmap? What are we doing and how can we improve?

Many organisations are poorly prepared to withstand sophisticated cyber-attacks.

Cyber resilience is not just about creating a contingency plan and continuity of operations; it goes beyond ensuring availability and focuses on resilience in the aftermath of a disruption to the technology infrastructure.

How prepared is our organisation, and how is it strengthening its capabilities to identify, detect, prevent, neutralise, recover, cooperate and continuously improve in the face of cyber threats?

The Cyber Resilience Index: Advancing Organizational Cyber Resilience 2022 report (WEF) found that the top four reasons why cyber resilience is limited in today’s ecosystems are that many organisations:

  1. Have a narrow perspective on cyber resilience, focusing primarily on security response and recovery.
  2. Lack a common understanding of what a comprehensive cyber resilience capability should include.
  3. Find it difficult to accurately measure the organisation’s cyber resilience performance or communicate its true value to senior management.
  4. Struggle to be transparent within their organisation and with ecosystem partners about the shortcomings of their cyber resilience posture and their experiences with disruptive events.

Characteristics of a cyber-resilient organisation

The approach to cyber resilience must also be free of the fear-driven constraints caused by mere preservation of the status quo, which are so often followed by attempts to return to a demonstrably fragile state when disruption predictably occurs.

The reward of making cyber resilience part of the ethos is a greater opportunity to take healthy risks, innovate and responsibly capture the value of tomorrow’s digital economy.

Here are some resilience techniques you can implement to mature your security programmes and improve your ability to keep serving customers during a cyber incident:

  • Adaptive response: Optimise the ability to respond in a timely and appropriate manner to adverse conditions.
  • Analytical monitoring: Maximise the ability to detect potential adverse conditions and reveal their extent.
  • Coordinated protection: Require an adversary to overcome multiple safeguards.
  • Deception: Mislead or confuse the adversary, or conceal critical assets from them.
  • Diversity: Limit the loss of critical functions due to the failure of common replicated components.
  • Dynamic positioning: Impede an adversary’s ability to locate, eliminate or corrupt mission or business assets.
  • Dynamic representation: Support situational awareness and reveal patterns or trends in adversary behaviour.
  • Non-persistence: Provide a means to reduce the duration and reach of an adversary’s intrusion.
  • Privilege restriction: Restrict privileges based on user attributes and system elements.
  • Realignment: Reduce the attack surface of the defending organisation.
  • Redundancy: Reduce the consequences of loss of information or services.
  • Segmentation: Limit the set of potential targets to which malware can easily spread.
  • Substantiated integrity: Detect attempts by an adversary to deliver compromised data, software or hardware, as well as successful modification or fabrication.
  • Zero trust: Question the organisation’s security practices and policies, with the right to ask for and expect clear answers.
  • Unpredictability: Increase an adversary’s uncertainty about the system protections they may encounter.

Cyber resilience must be part not only of the technical systems, but also of the teams, the organisational culture and the way we work on a daily basis.

It is imperative for the success of a cyber resilient organisation to design, build and manage cyber resilience and then get the fundamentals right. Cyber resilience must be part not only of the technical systems, but also of the teams, the organisational culture and the day-to-day way of working.

Cyber resilience must be a pervasive mindset underpinned by a holistic approach within organisations and across their ecosystems. For decades, cyber resilience management has been underrepresented and confused with other principles in cyber security programmes.

Today, more than ever, there are many positives. We have come a long way in a short time. But the key is not to become complacent: to reaffirm our commitment to improvement and to recognise that the attacker will come back with new capabilities and skills.

Name the malware you have, and I’ll tell you which botnet you belong to

Marta Mª Padilla Foubelo    15 September, 2022

What is a botnet and how does it work?

To begin with, let’s break down the word botnet. On the one hand, “bot” comes from robot and, on the other, “net” means network. That gives the term its meaning: something like “a network of robots”.

A bot, or robot, would be a system infected by malicious software whose target is defined in the malware code. Therefore, a botnet would be a network of systems infected by the same malicious software.

A botnet is a group of systems infected by malicious software (malware) and managed by the same BotMaster.

This network is called a botnet. What the name does not make explicit is that the bots are controlled remotely through a common Command and Control (hereafter C&C) server, from which the operator of the network, also known as the BotMaster, sends instructions to perform malicious actions.

In botnets, the famous parental question “if a friend of yours jumps off a cliff, will you do it too?” has a clear answer: yes. Every bot will do exactly the same as the others, since they are all controlled by the same threat actor.

Botnets also on mobile devices

It is not only computers that are affected: mobile devices are also targeted by BotMasters. For example, on a well-known Dark Web forum, a botnet is offered for the Android operating system, one of the most widely used operating systems worldwide.

The full functionality and capabilities are included in the post itself.

In this case it is the Anubis botnet, whose main objective is to collect bank account information. But it can also be used to send SMS messages to the device’s contacts.

How many times have we seen online scams in which we have received a message from a known person asking for data, money or simply sending a link? Obviously, coming from a known person does not usually seem suspicious. However, nothing could be further from the truth.

Additionally, as a curious fact, botnet names are often associated with the malware that creates them. Given the sheer amount of malware currently in existence, it is practically impossible to list them all. Among the best known, although not always the most widely used, are Emotet, Mirai, Pink, Arkei, RedLine and Raccoon.

Uses and purposes of botnets

There are countless uses for a botnet; it all depends on the imagination of each threat actor which, as has been demonstrated, is quite broad.

One of the most common uses of botnets, for example, is the famous distributed denial of service (DDoS) attack which, in most cases, is orchestrated by networks of infected systems.

Distributed Denial of Service (DDoS) attacks are often launched using botnets.

However, an infected computer can be used not only to attack exposed services, but also to collect the affected user’s credentials, mine cryptocurrencies, carry out phishing attacks and even download other malware.

What’s more, from the DFIR team’s perspective, many ransomware attacks start with the installation of botnet malware, which is tasked with downloading further malware to move laterally in the network, fetching updates to itself, or even directly downloading the ransomware payload.

How do I know if my computer is part of a botnet?

That said, the question often arises: “how do I know if my computer is part of a botnet?” The best protection is an EDR, a firewall with well-defined rules or powerful signature-based detection software; otherwise, an infection can go completely unnoticed by the user.

In general, victims are not handpicked; these are not targeted attacks but mass campaigns, which make anyone susceptible to infection.

Everyone is susceptible to being targeted by a botnet just because they have a computer or a mobile phone.

Many people think that they are “nobody” or not “interesting” enough for a botnet operator to be particularly interested in attacking them. Nothing could be further from the truth.

Who doesn’t access their bank account from their computer? Who doesn’t use online shopping platforms? Who doesn’t connect to their company’s internal network via a VPN? Any information of this kind is still very valuable. And even if you are a low-ranking worker in a company, you still have access to that private network that is so attractive to cybercriminals!

How to identify botnet operators

Likewise, the question arises: “is it easy to identify the threat actors operating botnets?” It is not. In fact, investigation is complicated by the fact that threat actors, besides being groups of several people, often operate through the Tor network.

In addition, operators use domain generation algorithms (DGAs) to produce a large number of domain names. In this way, they evade detection of the C&C server, as only some of these domains will resolve to a real C&C server at any given time.

For example, if a specific IP address or domain is blocked by a firewall rule, the BotMaster has so many domains available that it can dynamically change the domain name of its C&C.

In this way, it maintains contact with the bots, which continuously generate the same list of domains via the DGA. Another evasion method is the use of a Fast Flux network in which, essentially, many different IP addresses are assigned to the same domain name.

These IP addresses keep changing and, given that many different domain names are also in use, the possible IP addresses connecting to the C&C increase exponentially. For these reasons, dismantling a botnet organisation takes years of investigation, dedication and, in some cases, cooperation between law enforcement agencies in several countries.
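To see why blocklisting a single domain achieves little, here is a minimal sketch of a seeded DGA of the kind described: bot and BotMaster run the same deterministic algorithm over the current date, so both derive the same candidate list without ever communicating it. This is a toy for illustration only; real malware families use far more elaborate schemes.

```python
# Toy domain generation algorithm (DGA): bot and operator derive the same
# daily candidate list from a shared seed, so blocking one domain achieves
# little. Purely illustrative.
import hashlib
from datetime import date

def daily_domains(seed, day=None, count=5, tld=".com"):
    day = day or date.today().isoformat()
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{day}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + tld)
    return domains

print(daily_domains("example-seed", day="2022-09-15"))
# The operator registers just one of these; the bot tries them all.
```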

Dark Web sales of malware and botnets

Of course, as with anything, there is also malware available for sale on the Dark Web, as discussed in the post on these types of markets.

Threat actors can also build their own botnets thanks to specific malware being sold or rented on Dark and Deep Web forums and markets. For example, below we can see a lifetime licence of RedLine for sale (at a discount of 300 euros!).

In this other recent post from a well-known Dark Web marketplace, the Arkei malware is offered for sale for $210:

It is not only paid malware that can be found: ironically, there are also free “pirated” versions of malware, as we can see below with the Arkei malware.

Although the post was opened in 2018, the thread has remained quite active through to 2022, apparently accumulating a lot of downloads.

As another curious fact, and following the saying “if you want something done well, do it yourself”, tutorials are offered for sale, and sometimes for free, to learn how to set up your own botnet.

Dark Web sales of credentials and session cookies

One of the capabilities of malware infecting systems is the theft of login credentials or the theft of session cookies from web services.

It is common to see credentials being sold on the major Deep and Dark Web markets. It is as common as it is worrying, since it affects not only access credentials for personal services (Amazon, online banking, supermarkets, streaming platforms, etc.) but also access to professional services such as work tools, VPN networks and professional mail.

The sale of credentials on Deep and Dark Web markets is as common as it is disturbing

What starts out as the compromise of a single computer ends up being the compromise of an entire corporate network, and can lead to a serious security incident, as discussed above.

To put real figures on this, drawing on data from all countries and from sales on the main Dark Web markets, we found that, in the last month alone, at least 311 credentials for access to Citrix services, 2,000 accesses to the intranets of different companies and 105 VPN accesses, among many others, have been offered for sale. At the enterprise level this is, to say the least, worrying.

Conclusion

As we have seen, anyone can be a target of a botnet and the consequences can be dire.

The human factor is one of the main players in botnet infection so, at this point, there is little more we can do than recommend being very careful about where we click and where we download software from. Beware of freeware and off-platform downloads!

This way, we will be much less likely to end up “turned” into a bot.

Cloud market trends until 2025

Roberto García Esteban    13 September, 2022

All of us who work in Cloud services are aware that this market is still in a phase of accelerated growth and that more and more companies are taking the plunge into the cloud or, having already done so, are continuing to incorporate new workloads and processes into it.

We also see that the more technological the customer’s business, the more likely it is to adopt the Cloud, although it is by no means a technology that is exclusive to any particular sector.

Nevertheless, I think it is worth putting some numbers to that subjective feeling, and that is why I have reviewed the “Global Cloud Computing Market” report by the company GlobalData, published this year. I will now summarise its most important conclusions.

Cloud market continues to grow rapidly

Firstly, it confirms that those of us who believed the Cloud market was still growing rapidly were right, and it shows that this growth has no signs of slowing down. The market was worth $543 billion in 2021 and is forecast to reach $864 billion by 2025, a compound annual growth rate (CAGR) of 12.8% over that period.
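As a quick sanity check on those figures: the compound annual growth rate between two market sizes is (end/start)^(1/years) − 1. Computed over the four years from 2021 to 2025 this lands close to the 12.8% the report cites (the exact figure depends on the base year and rounding conventions used).

```python
# Quick check of the report's implied CAGR: (end/start) ** (1/years) - 1
start, end, years = 543e9, 864e9, 4  # 2021 -> 2025, in US dollars

cagr = (end / start) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # ~12.3%, close to the cited 12.8%
```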

The Cloud market grows, driven by the need for companies to organise, secure and manage a growing volume of valuable data

Asia-Pacific is the largest region in the Cloud market, but it is South America where the highest growth in the Cloud business is forecast (a CAGR of 17.2%) and, as a result, where the greatest business opportunities lie. The need for companies to organise, secure and manage a growing volume of data from a wide range of sources and IT resources is at the root of these high growth rates.

Public, private and hybrid Clouds

Another fact highlighted by the report is that 61.6% of Cloud investment goes to the public cloud, 16.7% to the private cloud and 9.4% to the hybrid cloud (the rest of the market is split between managed services and other cloud management platforms).

However, the trend over the next few years is that the highest growth will be in hybrid cloud services (14.3% CAGR), while the private cloud will show the lowest growth. In other words, more and more companies are opting for a hybrid cloud model that combines the advantages of public and private clouds.

Banking and insurance, the highest level of adoption

By sector, banking and insurance showed the highest level of adoption of cloud technology in 2021. However, it is the sectors where Cloud Computing currently has the least presence (construction, food and retail) that are expected to grow the most in the coming years and, therefore, where the most business opportunities can be found.

Spain, of course, is no stranger to these global market trends. As reflected in the report “Use of digital technologies by companies in Spain” published by the National Observatory of Technology and Society (ONTSI), 32% of Spanish companies claim to have used some cloud service in 2021, which represents a four-point increase over the previous year.

In Spain, however, there is a wide variation in the use of cloud technology depending on the size of the company, from 28% adoption among small companies to 68% among large companies.

What Cloud Computing does is democratise access to technology, making it available to medium and small companies

This disparity stems from the mistaken belief that Cloud Computing is an expensive and complicated technology, available only to large corporations, when precisely the opposite is true: the essence of the cloud is to pay only for the resources that are really needed and used, so small customers, who need fewer capabilities than large ones, pay much less.

As in other markets, Cloud Computing is most widely used in the most high-tech sectors in Spain, while more traditional sectors such as metallurgy and construction have a lower adoption rate, but at the same time a greater prospect of growth.

E-mail and file storage are clearly the kings among the services that companies move to the cloud, although they are by no means the only ones: database servers, security applications and financial, accounting and customer management software also stand out.

In short, Cloud Computing is an ever-growing market, a trend that is not expected to change in the coming years. It is key to the digitalisation of companies, many of which are already pursuing a “Cloud First” strategy for any digitalisation project. In other words, the digital transformation of companies will either be in the Cloud, or it will not.


AI of Things (IX): Integrated smart building management as a driver for greater operational efficiency

Antonio Moreno    12 September, 2022

We spend more than 90% of our lives indoors, and for this reason alone we should be very concerned about the comfort and healthiness of our buildings, which are also real energy predators. At least until now.

It is becoming increasingly clear that the real estate sector is key to the energy transition, the one that should lead us to a low-carbon economy. In this regard, three fundamental aspects that buildings must embrace can be defined:

  • Decarbonisation. By controlling energy demand, reducing consumption and using renewables together with electromobility.
  • Decentralisation. Understood as on-site electricity generation and energy storage.
  • Digitalisation. This should enable control and automation. Internet of Things (IoT) technologies are increasingly present here, with economies of scale that would have been unthinkable just a few years ago.

These goals are not only a challenge for new construction: the existing building stock must also be upgraded to mitigate climate change and, something often ignored, to adapt to climate change itself. We are moving from a traditionally static environment to a dynamic one in which buildings must be integrated into their surroundings.

We are no longer talking about tenants, but about clients who demand a certain level of comfort and quality

It is even becoming more and more common in the real estate sector to move from the concept of property to the concept of service, no longer talking about tenants, but about clients who demand a certain level of comfort and quality in the space they inhabit. We can define a series of main axes in this regard.

People:

  • Increase the satisfaction of users, visitors and workers.
  • Achieve more comfortable and healthier buildings.
  • Encourage interaction between the users and the building.

Efficiency:

  • Achieve savings and positive environmental impact, improving occupant comfort and reducing management times.
  • Correct dimensioning of spaces.

Sustainability:

  • Minimise water, gas and electricity consumption.
  • Incorporate renewable energy sources.
  • Achieve the full decarbonisation of buildings, set as a target for 2050.

Security:

  • Protect people and building infrastructures.
  • Optimise and simplify processes.

And all of this without forgetting the transversal platforms that, thanks to AI, allow centralised management of all systems and integrated analytics to optimise the use of infrastructure and predict patterns of behaviour and building use.

Nearly Zero Energy Buildings (NREB)

All these principles apply to both residential and tertiary buildings, and many new corporate headquarters include from their initial design the requirements needed to be considered Nearly Zero Energy Buildings, often referred to by the acronym NREB.

Telefónica District’s intelligent buildings

On a technical level this means not exceeding certain thresholds in cooling and heating demand, as well as in primary energy consumption and building airtightness. And how can these strict requirements be achieved?

There are several basic principles to be met: increasingly better insulation, absence of thermal bridges, airtightness, mechanical ventilation with heat recovery, high-performance windows and the use of new technologies.

This last point is fundamental to ensure that NREBs are not such only on paper, but that in daily use they meet or better the required criteria, by measuring in real time all the parameters of consumption and comfort.
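As a simple illustration of that real-time verification, a monitoring platform might continuously compare measured demand against the certified targets. The threshold values below are invented for the example; actual NREB limits depend on climate zone and local regulation.

```python
# Illustrative check of measured building performance against NREB-style
# targets. Threshold values are invented for the example; real limits
# depend on climate zone and local regulation.
TARGETS = {
    "heating_demand_kwh_m2_year": 15,
    "cooling_demand_kwh_m2_year": 15,
    "primary_energy_kwh_m2_year": 60,
}

def check_targets(measured):
    for metric, limit in TARGETS.items():
        value = measured.get(metric)
        status = "OK" if value is not None and value <= limit else "EXCEEDED"
        print(f"{metric}: {value} (limit {limit}) -> {status}")

check_targets({
    "heating_demand_kwh_m2_year": 13.2,
    "cooling_demand_kwh_m2_year": 9.8,
    "primary_energy_kwh_m2_year": 71.5,  # over budget: investigate
})
```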

Buildings that even produce their own power

Why don’t we go further and talk about ZEBs (Zero Energy Buildings), or even PEDs (Positive Energy Districts)? There are already real projects testing this new challenge approaching our cities, given the great room for improvement in the energy management of buildings.

Buildings that not only draw from the electricity grid, but also inject their surplus energy into it, exchanging energy across the city, all at an optimum cost. This is undoubtedly a technical challenge and, above all, a challenge of new relationship and business models: we are moving from a consumer to a prosumer scheme.

Electric vehicles can be recharged with surplus power from Positive Energy Districts (PED).

There will always be buildings that need extra energy, but in the total sum there are many surpluses, such as all the solar energy that can be produced in residential environments at midday, just when their inhabitants are out of their homes, often in offices or tertiary buildings. This surplus can be used to recharge electric vehicles or be exchanged between properties.

Certainly, this dialogue between energies will be essential, without forgetting of course that, as in ICT, the tenant, sorry, the customer, is at the centre and will demand the best possible service.

🔵 Feel free to read more content on IoT and Artificial Intelligence in the other articles in this series, starting with the first one.

Cyber Security Weekly Briefing, 3 — 9 September

Telefónica Tech    9 September, 2022

0-day vulnerability in Google Chrome

Google released an emergency patch on Friday for the Chrome browser on Windows, Mac and Linux, fixing a 0-day vulnerability that is being actively exploited.

The security flaw, identified as CVE-2022-3075, relates to insufficient data validation in Mojo, a collection of libraries that provides platform-independent mechanisms for inter-process communication across different programming languages.

A malicious actor could bypass security restrictions when the victim accessed a specially crafted web page. Google reported that an anonymous researcher disclosed the vulnerability on August 30 and that exploit code is already available.

Users of Chromium-based browsers, such as Microsoft Edge, Brave and Opera, would be affected by this vulnerability, so it is recommended to upgrade to Google Chrome version 105.0.5195.102, which addresses the 0-day.

More info

*  *  *

New breach affects the giant Samsung

The multinational company Samsung acknowledged on 2 September that it had been the target of a security breach.

According to the statement issued, at the end of July, an unauthorised third party gained access to information on some Samsung systems in the United States, exposing the personal information of several customers.

The information accessed included name, demographic and contact information, date of birth, and product registration information, but did not include social security numbers or credit card information.

This is the second incident reported in less than six months, after reports in March that internal data, including the source code of its smartphones, had been leaked.

The company has indicated that it has taken security measures to ensure that such incidents do not happen again.

More info

*  *  *

QNAP patches 0-day used in new Deadbolt ransomware attacks

QNAP has issued a security advisory urging NAS users to upgrade to the latest version of Photo Station. The advisory follows the detection of an ongoing wave of DeadBolt ransomware attacks that began on Saturday and exploits a 0-day vulnerability in Photo Station.

QNAP, which has already released security updates for Photo Station, urges its customers to update the software to the latest available version and suggests that users replace Photo Station with QuMagie, a safer photo storage management tool for QNAP NAS devices.

The details of this flaw are still unclear at this time but, to reduce the chances of being attacked, the company strongly recommends not connecting QNAP NAS devices directly to the Internet and making use of the myQNAPcloud Link feature provided by QNAP, or enabling the VPN service.

They also recommend using strong passwords for user accounts and taking regular backups to prevent data loss. This would be the fourth wave of DeadBolt attacks targeting QNAP devices since January 2022, following similar incursions in May and June.

More info

*  *  *

HP fixes a serious vulnerability in HP Support Assistant

HP has issued a security advisory warning users about a recently discovered vulnerability in HP Support Assistant, a software tool that comes pre-installed on all HP computers and is used for troubleshooting and hardware diagnostic tests, among other things.

The flaw, identified as CVE-2022-38395 and with CVSS of 8.2, allows attackers to elevate their privileges on vulnerable systems. Although the manufacturer has not provided many details about the vulnerability, the advisory mentions that it is a DLL hijacking flaw when users try to launch HP Performance Tune-up from HP Support Assistant.

In this type of flaw, the code that is executed when loading the library obtains the privileges of the executable, in this case SYSTEM permissions. Due to the large number of devices with HP Support Assistant installed and the low complexity of the exploit, it is recommended that all HP users update Support Assistant as soon as possible.

More info

*  *  *

The North Face and Vans announce credential stuffing attack

VF Corporation has released a statement informing its customers that its retail brands The North Face and Vans have suffered a data breach.

The threat actors used credential stuffing techniques to breach 162,823 customer accounts on thenorthface.com and 32,082 on vans.com. A credential stuffing attack consists of trying to log into accounts with credentials compromised in other leaks, a strategy that relies on users reusing the same passwords across multiple platforms.
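
On the defence side, one common mitigation against this kind of attack is to reject credentials that are already known to be compromised. The following minimal, illustrative Java sketch checks a password against the public Pwned Passwords range API using its k-anonymity model; this is a generic technique, not something VF Corporation states it uses.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.HexFormat;

    public class BreachedPasswordCheck {

        // Returns true if the password's SHA-1 hash appears in the public
        // Pwned Passwords corpus. Only the first five hex characters of the
        // hash are ever sent over the network, never the password itself.
        static boolean isBreached(String password) throws Exception {
            byte[] digest = MessageDigest.getInstance("SHA-1")
                    .digest(password.getBytes(StandardCharsets.UTF_8));
            String sha1 = HexFormat.of().withUpperCase().formatHex(digest); // Java 17+
            String prefix = sha1.substring(0, 5);
            String suffix = sha1.substring(5);

            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(
                            URI.create("https://api.pwnedpasswords.com/range/" + prefix)).build(),
                    HttpResponse.BodyHandlers.ofString());

            // The API answers with lines of the form "HASH_SUFFIX:COUNT".
            return resp.body().lines().anyMatch(line -> line.startsWith(suffix));
        }

        public static void main(String[] args) throws Exception {
            System.out.println(isBreached("password123")); // prints true: widely leaked
        }
    }

Because only the five-character hash prefix leaves the machine, the service never learns which password is being checked.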

The attack on The North Face began on 26 July, was detected on 11 August and disrupted on 19 August. The intrusion at Vans, meanwhile, was detected on 20 August and was active for only one day.

The data that may have been exfiltrated includes names, postal addresses, e-mail addresses, purchase history and customer telephone numbers.

In its statement, the company said that credit card data is stored in third-party payment systems and therefore could not have been affected by the attack. Finally, it confirmed that the credentials of all affected accounts have been reset.

More info

Hyperledger Besu: blockchain technology on the rise in the business environment

Alberto García García-Castro    8 September, 2022

Until relatively recently, the world of private DLT (Distributed Ledger Technology) revolved around a few players that covered most of the international market. Historically, technologies such as Corda, Quorum or Hyperledger Fabric have been the most widely used in corporate projects, as reflected, for example, in Forbes magazine’s list of the fifty companies leading the use of DLTs worldwide.

In the second half of 2019, however, a new project under the Hyperledger umbrella, called Besu, was announced. It is an Ethereum-based technology that enables the development of enterprise-class applications both on the public network and on private or consortium networks, and this hybrid approach has earned Hyperledger Besu rapid adoption within the blockchain world.

Background

In 2018, the PegaSys protocol engineering team (part of the US company ConsenSys) started developing this technology. At the time, the product was called Pantheon and its main objective was to build an Ethereum client suitable for productive enterprise environments.

In February 2019, version 1.0 of the product was launched and a few months later, in August 2019, Pantheon was officially adopted into the Hyperledger ecosystem under the new name Besu. Through this incorporation, a first bridge of collaboration was established between two of the largest blockchain development communities in the world: Ethereum and Hyperledger.

Since the beginning of the project, the product has evolved constantly in relevant areas such as consensus algorithms, network permissioning and privacy at the Ethereum protocol level. In addition to these already integrated capabilities, the roadmap points to advances in stability, interoperability and performance, with the needs of business applications especially in mind.

What is Hyperledger Besu?

Hyperledger Besu is an open-source Ethereum client written in Java under the Apache 2.0 license. It is designed to be used both on the Ethereum main network and to create private, business-purpose networks based on the same technology. In addition, it is compatible with Ethereum’s public test networks (Görli, Rinkeby or Ropsten), which are widely used within the Ethereum development community.

The implementation of Besu follows the technical specifications of the EEA (Enterprise Ethereum Alliance), an organisation that aims to create open standards within the Ethereum ecosystem, accelerating the adoption of the technology in corporate business processes.

On the more technical side, the following features of Hyperledger Besu are worth knowing:

  • Support for different types of consensus algorithm: on the one hand, Proof of Authority algorithms can be used for private/consortium networks; on the other, it is compatible with the Proof of Work algorithm currently used by the public Ethereum network, the second most important blockchain worldwide in terms of market capitalisation.
  • It includes the EVM (Ethereum Virtual Machine), which enables the execution and deployment of Smart Contracts and Dapps (decentralised applications), both on the public network and on private or consortium networks.
  • In terms of programming, Besu is compatible with the most widely used tools within the Ethereum development community, such as Truffle, Remix or web3j, among others (see the connection sketch after this list).
  • Thanks to the Besu client, the Ether cryptocurrency can be mined within the main Ethereum network.
  • In terms of key management, Besu is compatible with the most popular wallets within the Ethereum community, such as MetaMask, a digital wallet used by more than one million people worldwide.

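As an illustration of that tooling compatibility, the following minimal web3j sketch connects to a running Besu node and reads the client version and latest block number. It assumes a node with its JSON-RPC HTTP endpoint enabled at the default http://localhost:8545 and the org.web3j:core dependency on the classpath.

    import org.web3j.protocol.Web3j;
    import org.web3j.protocol.http.HttpService;

    public class BesuHello {
        public static void main(String[] args) throws Exception {
            // Assumes a Besu node with JSON-RPC over HTTP enabled (default port 8545).
            Web3j web3 = Web3j.build(new HttpService("http://localhost:8545"));

            // Besu serves the standard Ethereum JSON-RPC interface, so any tool
            // that speaks that protocol works against it unchanged.
            String client = web3.web3ClientVersion().send().getWeb3ClientVersion();
            System.out.println("Client: " + client);

            System.out.println("Latest block: "
                    + web3.ethBlockNumber().send().getBlockNumber());

            web3.shutdown();
        }
    }

Because the same JSON-RPC interface is exposed whether the node sits on the public network or on a private one, identical client code works in both scenarios; only the endpoint changes.
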
Privacy and network permissioning are two fundamental pillars of Hyperledger Besu. On the one hand, it can keep transactions secure and private according to the needs of the business; on the other, it allows access permissions to be configured so that only authorised nodes or accounts participate. In terms of monitoring, Besu allows both nodes and the network to be managed using third-party tools and includes a block explorer that gives users real-time visibility into what is happening on the blockchain.

From a business point of view, Hyperledger Besu allows you to:

  • Deploy private or consortium networks, taking advantage of its privacy capabilities, high performance, network access permissioning and incident support.
  • Deploy a node on the public Ethereum network to provide additional trust and transparency for use cases that need it.

It is important to note that a Hyperledger Besu node cannot connect to a public and a private network at the same time: if a use case requires both, at least one node per type of network is needed.

Ready for business environments?

Most business blockchain projects have historically been developed as proofs of concept and have not been able to make the leap to production environments. To a large extent, this is due to the scarcity of companies able to offer support for a product developed with this type of technology.

To encourage the use of blockchain in corporate environments, it is essential that there are specialised companies with technical capacity that are capable of guaranteeing the viability of the product in the long term, taking into account essential aspects such as security, performance and availability, which are fundamental for any business process.

Against this backdrop, in October 2019 ConsenSys launched “PegaSys Plus”, a commercial distribution of Hyperledger Besu that offers 24×7 technical support, training, product updates, patches and improvements in areas such as security, monitoring and efficiency. This makes it easier for companies that want to integrate blockchain into their business processes to build 100% productive platforms.

Blockchain consortiums

Another important aspect to take into account is the adoption of this technology by the largest European and Latin American blockchain consortiums, which helps to explain its rapid growth:

  • Within the Spanish blockchain ecosystem, Alastria stands out: an open association of companies that promotes the digital economy through the development of decentralised ledger technologies. At a technical level, it advocates a technology-agnostic platform by promoting different types of network; hence the so-called “B-network”, deployed by some of its partners and based on Hyperledger Besu, was born at the beginning of 2020.
  • In Europe, 29 countries (all EU Member States plus Norway and Liechtenstein) and the European Commission have joined forces to create the European Blockchain Services Infrastructure (EBSI). Under development since 2018, its main objective is to build a cross-border network for public administrations, providing the members of the European Union with a DLT network based on several protocols, including Besu, that brings the benefits of blockchain technology to their public services.
  • In Latin America, LACChain, an alliance of companies and institutions, operates with the aim of developing the blockchain ecosystem in Latin America and the Caribbean. Its objectives include promoting innovation and reducing inequalities through the adoption of blockchain technology. On the infrastructure side, since 2019 it has offered its partners a DLT network based on Hyperledger Besu for their business use cases.

How do we use Besu at Telefónica?

Telefónica has been bringing blockchain to our customers for years through TrustOS: a solution created so that companies can adapt their processes to blockchain easily. It consists of several modules exposed as APIs (Application Programming Interfaces) with which companies can implement their certification, traceability or tokenisation use cases quickly and simply.

A clear example of what Besu would bring to TrustOS can be found in the certification module. Customers using this module will be able to reliably record information on both private and consortium networks, abstracting away the complexity associated with the technology. Through the TrustOS APIs, they will be able to access networks based on Hyperledger Fabric or Hyperledger Besu transparently, without needing to adapt their developments to each type of network.
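
To make that abstraction concrete, the sketch below shows what such a call could look like from the client’s side: a single REST request records a certification without the caller knowing whether the ledger behind the API is Fabric or Besu. The host, endpoint and payload fields are purely illustrative placeholders and do not correspond to the actual TrustOS API.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CertifyExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint and payload, for illustration only: the client
            // certifies a document hash and stays agnostic of the underlying ledger.
            String body = """
                    {"name": "invoice-2022-001",
                     "hash": "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}""";

            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("https://api.example.com/trust/certificates"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            System.out.println(response.statusCode() + " " + response.body());
        }
    }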

Photo by Glen Carrie on Unsplash