Cyber Security Weekly Briefing, 21 – 27 January

Telefónica Tech    27 January, 2023

Killnet targeting victims in Spain

This week, the hacktivist group Killnet announced a campaign of attacks against Germany, leading to Distributed Denial of Service (DDoS) attacks that rendered the websites of the German government, the Bundestag, several banks and airports in the country inoperative on Wednesday.

Following these attacks, the group posted a comment on its Telegram channel directly pointing to Spain as a possible target for its next attacks, leaving the following message “Spain – f*** you too, but with you everything will be easier and faster”.

Following this message, other participants within the Telegram channel explicitly singled out two Spanish companies, stating that they would be supposedly “easy” to attack. No attacks against Spanish critical infrastructure companies or government agencies have been reported so far.

* * *

Apple fixes 0-day vulnerability affecting older iPhones and iPads

Apple has issued a security advisory addressing patches for an actively exploited 0-day vulnerability in older iPhones and iPads.

The vulnerability, listed as CVE-2022-42856 with a CVSSv3 score of 8.8, could allow an attacker to achieve arbitrary code execution through the processing of maliciously crafted web content, due to a type confusion flaw in Apple’s WebKit browser engine. A patch was published in December for other Apple products, and the fix is now available for older devices, specifically the iPhone 5s, iPhone 6, iPhone 6 Plus, iPad Air, iPad mini 2, iPad mini 3, and iPod touch (6th generation).

Apple’s advisory states that there is evidence of active exploitation of this vulnerability in iOS versions prior to iOS 15.1. Also, on 14 December, CISA included this vulnerability in its catalogue of exploited vulnerabilities.

More info

* * *

VMware vulnerabilities fixed

VMware has released security patches to address a number of vulnerabilities in vRealize Log Insight, now known as VMware Aria Operations for Logs. The first vulnerability, identified as CVE-2022-31703 with a CVSS score of 7.5, is a directory traversal flaw through which attackers can inject files into the affected system and achieve remote code execution.

On the other hand, CVE-2022-31704, with CVSS 9.8, is an access control vulnerability that can be exploited for remote code execution.

The company has also fixed a deserialisation vulnerability, identified as CVE-2022-31710 with a CVSS score of 7.5, which can trigger a denial of service (DoS), and an information disclosure flaw, CVE-2022-31711, with a CVSS score of 5.3.

More info

* * *

PY#RATION: a new Python-based RAT

The Securonix research team has discovered a new Python-based malware attack campaign with Remote Access Trojan (RAT) capabilities. This malware, named PY#RATION, is actively evolving, having moved from version 1.0 to 1.6.0 since its detection in August 2022.

PY#RATION is distributed via phishing emails containing .ZIP attachments, inside which there are two .lnk shortcut files disguised as images (front.jpg.lnk and back.jpg.lnk). When these shortcuts are opened, the victim is shown the front and back of a British driving licence, while the malicious code runs in the background and contacts the C2 server, which in turn downloads two additional files to the user’s temporary directory.

Once executed, PY#RATION can perform network enumeration and file transfers, log keystrokes, steal clipboard data, extract passwords and cookies from web browsers, and execute shell commands, among other capabilities. According to Securonix, this campaign mainly targets victims in the UK and North America.

More info

* * *

Microsoft plans to block XLL files from the Internet

After disabling macros in Office files downloaded from the Internet to prevent the spread of malware, Microsoft’s next step in its fight against malicious files will be to block XLL files coming from the Internet, mainly attached to e-mails.

XLL files are dynamic-link libraries that add extra features to Excel (dialogue boxes, toolbars, etc.). Since they are executable files, they are very useful to threat actors, who include them in phishing campaigns to download malware onto the victim’s computer with a single click.

According to Microsoft, the measure is being rolled out and will be generally available to users in March.

More info

Featured photo: Arnel Hasanovic / Unsplash

«We are moving towards genderless professions», María Martínez

Marta Nieto Gómez-Elegido    25 January, 2023

It’s a cold morning in Madrid and María Martínez Martín, Head of the Threats Intelligence Operations team at Telefónica Tech, welcomes us at the Telefónica building.

Wearing a blue blazer and smiling, María laughingly confesses that she doesn’t see herself being interviewed as a female hacker because when she was a child, she dreamed of being a marine biologist. And now ‘Look where I’ve ended up’, she says. A telecommunications engineer, María has been with Telefónica for more than seven years, where she currently leads a team of almost 50 professionals. She is, without a doubt, a great ambassador for #LadyHackers for being a female reference in the world of technology.

What does it mean to you to be a female hacker?

To be a #LadyHacker is to contribute everything you have inside you to a sector that needs people to help make us all safer and more secure. Within the world of technology, cyber security is still a young but important field where women can contribute a lot. We have a different vision from men, we each bring something different to the table and, in the end, it is the mix that makes it all work.

Which woman has been or is your technological reference?

María admits that she did not grow up wanting to work in technology, but she finally decided to go into Telecommunications, a field in which she has found great references among her colleagues and friends: Idoia Ochoa, a researcher focused on the field of medicine who developed her work at Stanford University and is currently at the University of Navarra; María Ángeles Santos, RVP at Salesforce for Iberia; Andrea, María… All of them are very young and are contributing a lot to the world of technology.

They are also references within Telefónica Tech: Svetlana, Mercedes, Martina Matarí, Ester Tejedor, Marta, Lydia, Andrea, Susana… They are people who contribute a lot to their teams, work tirelessly and also manage the complexity of being mothers. They know a lot about security and combine it with that other life that is their family. I admire them and I think they are role models for any of us to follow.

Is cybersecurity an increasingly inclusive world?

Yes, absolutely, is the resounding answer she gives us before going on to share the numbers she has worked out. 36% of the people on my team are women. In this sector, the cyber security sector, there are many female profiles, although in the hacking part they are scarcer, says María. We have a lot to contribute: another vision that leads us to do things differently from the way we are doing them in order to keep moving forward.

36% of the people on my team are women. In the cyber security sector there are many female profiles

Why is it still surprising to find a woman in leadership positions?

I think it attracts less and less attention, she says, adding that it has never been an issue for her. Since I started, I have never felt that I attracted attention for being a woman in a position of responsibility, perhaps because of my youth. María also reflects on why this is still sometimes surprising: it is only recently that women have been entering the labour market in skilled and valued jobs. There are more and more of us and we are taking on more and more positions of responsibility. I understand that in a few years it will no longer attract attention.

New generations are coming along and they see it as normal, although there is still work to be done to get more women to opt for technological careers. What does a telecommunications engineer do? We do a lot of things, engineering has many opportunities, but people don’t know about them. The problem is that there is a lack of information about the opportunities available in these types of careers.

How do you see the presence of female profiles in the technological field evolving?

I think we are evolving towards professions that are genderless. It is true that, perhaps because of the way we are, we may tend more towards one profession or another, but that is changing. Young people are changing, and nowadays it is very rare for a woman not to plan on having a career.

It is not that there are no outstanding female profiles (there are), but perhaps they are not well publicised, and that is why people do not know where studying a career such as telecommunications can take you.

Any advice for girls and young women who are thinking of pursuing a career in the world of technology?

I would tell them to do their research and go for it because they can make a huge contribution. Technological careers are not just for men. They are beautiful and in this environment, there is a lot of support, great colleagues and friends who bring a lot to the table. Take the step forward because you have a lot to give us.

What is a Cloud-Native Application?

Gonzalo Fernández Rodríguez    24 January, 2023

The term Cloud Native goes beyond moving on-premises applications, i.e. those hosted directly in a data centre, to infrastructure from a Cloud provider, whether public or private.

What is known as “lift & shift” of applications to the Cloud is simply a process in which our applications stop running on the on-premises infrastructure of our data centre and move to a Cloud infrastructure, often without any redesign of the architecture of these applications, or of the practices used to build, deploy and operate them.

Obviously, we can take advantage of some basic “out of the box” benefits by using infrastructure with greater redundancy, with backup facilities, updated with the latest security patches, etc.

But we have to bear in mind that our application is not going to become Cloud Native just because we deploy it in the Cloud: if you have a clunky legacy system and you deploy it on an AWS Kubernetes EKS cluster… you have a ‘kubernetised’ clunker!

Our application is not going to become a Cloud Native just because we deploy it in the Cloud

The Cloud has changed the rules of the game

Not so many years ago, it was necessary to make a good study of the capacities (compute, network and storage) that we would need for our system to offer a guaranteed service and place an order to cover those capacities (from one or several suppliers) that could take months to be ready. From time to time, we had to assess the potential growth of the system and buy more hardware again if we didn’t want our clients to leave us when it stopped working.

Today it is possible to do all this with a couple of clicks on an administration console or, better yet, with a call to an API (Application Programming Interface) that allows us to automate the process. The Cloud has turned computation, network, storage and other more advanced services (databases, message queuing systems, data analytics, etc.) into software-defined abstractions, giving rise to what is known as Cloud Computing.
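
As a minimal, hedged illustration of this kind of automation, the sketch below uses the AWS SDK for Python (boto3, assumed installed and configured with valid credentials) to provision a virtual machine with a single API call; the AMI ID and instance type are placeholder values, not a recommendation:

```python
# Hedged sketch: provisioning compute with one API call via boto3 (the AWS
# SDK for Python). The AMI ID here is a placeholder, not a real image.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```

Compare this with the months-long procurement cycle described above: the same capacity decision becomes a few lines of code that can run inside an automated pipeline.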

Cloud Computing, in short, is the set of computing resources we need to build our systems (CPU, storage, network, etc.), available over the network and consumable on demand, offering cost efficiency and a scalability never seen before.

Cloud Native is a consequence of the need to scale up

It is clear, then, that the technology on which we build and deploy our systems has changed. But why? Giving a single reason would perhaps be a bit risky, but what we can say is that Cloud Computing solves a scalability problem.

In recent years, digital solutions —gaming platforms, video streaming, music, social networks, etc.— are increasingly consumed from different devices, not just PCs, we are talking about mobile phones, tablets, Smart TVs and even IoT (Internet of Things) devices that have created different scenarios for accessing our systems.

Scenarios where the number of requests and the volume of data keep changing; that is, scenarios that require our system to continue to function correctly in the face of changes in demand for resources. In short, scenarios that require our system to be “scalable”.

However, this does not come for free: managing this scalability in our services is becoming more complex and the traditional methods are no longer useful, so we need another way of doing things. It is much like when PCs emerged back in the day, and with them the client/server architectures needed to take advantage of their computing capacity, relieving the old “mainframes” from doing all the work of our systems.

Along with this technological change that we call Cloud Computing and to respond to the management of this scalability, new architectural patterns and also new practices to operate our systems have emerged, giving rise to the term Cloud Native.

The goal of Cloud Native applications is to take advantage of the benefits of the cloud to improve scalability, performance and efficiency.

So, when we say that a system or an application is Cloud Native, we are not really referring to whether it runs in Cloud, but to how it has been designed to be able to run and operate correctly on a Cloud Computing technology, also benefiting from the advantages of this technology.

Cloud Native systems are designed in such a way that they have the capacity to grow or shrink dynamically and can be updated, all without loss of service, which is known as “zero downtime”. Zero downtime does not mean perfect uptime but fulfilling a goal whereby no interruption of service is perceived throughout the operation of an application[1].

Cloud Native according to CNCF (Cloud Native Computing Foundation)

Today’s users do not have – or rather, we do not have – the same patience as we did years ago when it was totally normal for a website to take a few seconds to load, or for a streaming video to have some latency, or even to stop from time to time. 

The level of scalability provided by Cloud makes it possible for social networking, instant messaging, video or audio streaming applications to allow millions of users to chat, upload photos, videos, watch movies or listen to music, all at the same time.

Users do not want failures, they need the services to be working properly all the time, and this is complicated in an environment as changeable as the Cloud.

The CNCF talks about Cloud Native as a set of technologies that allow us to build and run scalable applications in modern, dynamic environments such as public, private and hybrid Clouds.

It also tells us that the resulting systems are loosely coupled, resilient, manageable and observable, and that, combined with good automation, they allow us to introduce frequent and predictable changes with little effort.

Cloud Native attributes for building reliable and resilient systems

In such a fast-changing environment as the Cloud, we need to design our systems to react to the errors that may cause them to fail. If we can ensure that our systems have the attributes that the CNCF defines for Cloud Native applications (scalable, loosely coupled, resilient, manageable and observable), we will be able to keep our clients satisfied by providing them with systems that work on a continuous basis.

If we consider each of these attributes, we can see how they help us to make our systems reliable and run virtually uninterrupted.

  • Scalability: if we design our systems to be able to operate statelessly, we will make our systems scalable and therefore able to adapt to unexpected growth in demand for resources, a form of “failure prevention”.
  • Weak coupling: if our systems are loosely coupled, avoiding shared dependencies such as those that appear when we design a system based on microservices and end up generating a distributed monolith (where changes in one microservice cannot be made without changes in others), components or services (or microservices) can evolve and scale independently as needed, and we also prevent the failures that would arise from having to change multiple coupled components at once.
  • Resilience: through the redundancy of components or the application of certain patterns that prevent failures from propagating in cascade (such as the circuit breaker sketched after this list), we can make our systems more resilient and therefore able to continue functioning even when certain failures occur, i.e. we make our system fault tolerant.
  • Manageable: if we design our systems to be easily configurable, we will be able to change certain system behaviours without the need to deploy a new version of the system, and we may even be able to eliminate possible errors that may have arisen.
  • Observable: finally, we should take measurements (metrics) of different indicators of our systems that we can observe continuously to be able to predict errors or undesired behaviour and act before they occur.
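
As an illustration of one such resilience pattern, here is a minimal, hedged sketch of a circuit breaker in Python; the failure threshold and recovery timeout are arbitrary values chosen for the example:

```python
# Minimal circuit breaker sketch: after too many consecutive failures the
# breaker "opens" and fails fast, giving the downstream service time to recover.
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Wrapping each remote call in a breaker like this prevents one slow or failing dependency from exhausting threads or connections across the whole system.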

Cloud Native allows us to manage all the complexity that comes with the almost infinite capacity provided by Cloud Computing

By applying these design patterns and operating practices, we can make our systems even more reliable than the Cloud infrastructure on which they run (for example, by failing over between two regions of a Cloud), so that users can have full confidence in the operation of our system.

* * *

[1] On the ability to plan based on the user’s perception of service quality, there are a number of books written by Google engineers (known as SREs, or Site Reliability Engineers) which extend the concept of the SLA while adding new ones such as the SLI and the SLO.

Featured photo: Shahadat Rahman / Unsplash

A concept popularized by a former Google CEO is more relevant today than ever

Nacho Palou    24 January, 2023

A few years ago, Google’s then CEO Eric Schmidt popularised the concept of “augmented humanity”. This term refers to the ability of technology to “enhance human capabilities” to make us more efficient. To achieve better results with less effort.

In short,

“Augmented humanity is about computers helping us with tasks we are not very good at, and us helping computers with tasks they are not very good at.” —Eric Schmidt, former Google CEO

In his talk, Schmidt highlighted the potential that Artificial Intelligence was beginning to prove to improve our lives. A technology that Google had already been applying for some time in some services.

Artificial Intelligence as a tool and not as a replacement

Schmidt then highlighted how Artificial Intelligence —a term first defined in the 1950s by John McCarthy— was beginning to show real progress in automating tasks that were complex, repetitive and monotonous, even unmanageable for humans. Its adoption would thus allow us to focus on tasks related to “critical thinking and creativity”.

“Augmented humanity is about embracing AI as a tool to improve people’s lives, not as a replacement.” —Eric Schmidt, former Google CEO

But what are those repetitive and monotonous tasks that we can automate with Artificial Intelligence? The following are some examples:

  • Analysis of large amounts of data from multiple sources to extract insights to make better decisions and optimise processes.
  • Pattern recognition in data to automate production, prevent incidents, anticipate future events or identify trends or preferences.
  • Image analysis to extract information from photographs and videos and enable machine vision.
  • Fully or partially autonomous driving and driver assistance systems to reduce the number of accidents and optimise transportation.
  • Robotics to automate physical tasks, machinery and manufacturing, production or logistics processes.
  • Natural language processing to automate communication-related tasks such as customer service and support, translation or content generation.

Artificial Intelligence is increasingly present in the digital transformation processes of companies.

From rejection to normalisation: the pilgrimage of disruptive technologies

In 2010, Schmidt’s talk on the growth of Artificial Intelligence generated scepticism and rejection among some people. Even fear. But, as they say, “change is only scary when you are not willing to change with it”.

The fears are numerous, however, and also legitimate. Common ones include job losses, risks to the privacy and intimacy of individuals and their data, lack of regulation, increased inequality, the risk of cyber-attacks, unresolved ethical considerations… Not to mention the fear of an artificial superintelligence, a recurring theme in science fiction.

Throughout history, even before the industrial revolution, there have been other cases of technologies that have first generated rejection in some people or in society, and then proved their usefulness.

A couple of recent examples include mobile phones, because of fears that they could endanger health; or the internet, because it could be used to commit crimes. Both technologies have nevertheless proven their usefulness and are now essential for business and the economy, individuals and society.

The landline telephone came to be seen as unnecessary because of the existence of messengers, and raised concerns that other people (the operators) could listen in on conversations.

Technology, reskilling and lifelong learning are key in the golden age of AI

The arrival of new work tools that arise with “the golden age of Artificial Intelligence”, in the words of Satya Nadella, CEO of Microsoft, requires changes.

These changes involve not only developing these tools, but also incorporating them into work processes, adapting to them and managing them, just as happened before with computing or the internet. They also require anticipating a potential gap between demand and the availability of suitable professional profiles.

In this sense, “reskilling workers will help to overcome the impact”, in the words of José María Álvarez-Pallete, CEO of Telefónica. Training and updating skills is what will make it possible to fill the jobs generated by the digital transformation and capture the new opportunities.

“Technology has historically been a net job creator. The introduction of the personal computer in the 1970s and 1980s, for example, created millions of jobs.”—McKinsey Global Institute.

For companies, “the winning long-term strategy is to create, cultivate and nurture” technology talent, says José Cerdán, CEO of Telefónica Tech. “It’s not about competing to capture talent, but about developing internal recycling and training programmes to strengthen and update skills,” he adds.

Achieving this, says Álvarez-Pallete, requires massive retraining through training programmes adapted to new skills. It also requires the promotion of a culture of continuous learning in order to manage the transition to the new digital world in a socially responsible way.

Featured photo: Nguyen Dang Hoang Nhu / Unsplash

Cyber Security Weekly Briefing, 14 – 20 January

Telefónica Tech    20 January, 2023

Multiple vulnerabilities in Netcomm and TP-Link routers

Several vulnerabilities have been discovered in Netcomm and TP-Link routers. On the one hand, the flaws, identified as CVE-2022-4873 and CVE-2022-4874, are a buffer overflow and an authentication bypass that would allow remote code execution.

The researcher who discovered them, Brendan Scarvell, has published a PoC for both. The affected router models are Netcomm NF20MESH, NF20 and NL1902 running firmware versions prior to R6B035.

On the other hand, the CERT/CC detailed two vulnerabilities affecting the TP-Link WR710N-V1-151022 and Archer-C5-V2-160201 routers, which could cause information disclosure (CVE-2022-4499) and remote code execution (CVE-2022-4498).

More info

* * *

PoC for multiple vulnerabilities in WordPress plugins

Researchers at Tenable have published details of three new vulnerabilities in plugins for the WordPress platform, including proof-of-concepts (PoCs) for all of them.

The first, catalogued as CVE-2023-23488 with a CVSS score of 9.8, is a SQL injection vulnerability without authentication in the Paid Membership Pro plugin. The second, identified as CVE-2023-23489 with the same score and of the same type as the previous one, affects the Easy Digital Downloads plugin.

The third and last, CVE-2023-23490, with a CVSS score of 8.8 and also a SQL injection vulnerability, affects the Survey Maker plugin. The plugin authors were reportedly notified in December 2022 and have released security updates correcting these issues, so the latest available versions are no longer vulnerable.

More info

* * *

Hook: new banking trojan targeting Android devices

Researchers at ThreatFabric have discovered a new Android banking trojan called Hook. According to the researchers, it was reportedly released by the same developer as the Android banking trojan Ermac, although it has more capabilities than its predecessor.

ThreatFabric claims that Hook shares much of its source code with Ermac, so it should also be considered a banking trojan. The most notable aspect of Hook is that it includes a VNC (virtual network computing) module that allows it to take control of the compromised device’s interface in real time.

It is worth noting that Spain is the country with the second highest number of banking applications threatened by Hook after the United States, according to the ThreatFabric report.

More info

* * *

Malware discovered hidden in PyPI repository packages

Fortinet researchers have discovered three packages in the PyPI (Python Package Index) repository containing malicious code intended to infect developers’ systems with infostealer-type malware. The three packages, which have been uploaded to the platform by the same user with the nickname Lolip0p, are called Colorslib, httpslib and libhttps, respectively.

Fortinet highlights that as a major novelty in this type of supply chain attack, the threat actor has not tried to embed malware in malicious copies of legitimate packages, but has instead created its own projects by investing a lot of effort in making them look trustworthy.

Fortinet found that the setup file of all three packages is identical and attempts to run a PowerShell script that downloads a malicious file. According to PyPI’s statistics, these three packages have together been downloaded 549 times so far.

More info

* * *

NortonLifeLock reports password manager incident

Gen Digital, the company that owns NortonLifeLock, has begun sending a statement to an undisclosed number of its users informing them that an unauthorised third party has been able to access their Norton Password Manager accounts and exfiltrate first names, last names, phone numbers and email addresses.

In the official notification sent to the Vermont Attorney General’s Office, Norton explains that its systems have not been compromised or abused, and that the incident is due to the attacker reusing usernames and passwords available in a database for sale on the dark web.

This claim is supported by the fact that in late December Norton detected a substantial and unusual increase in the number of failed login attempts on its systems, indicating that attackers were trying to gain access by testing credentials compromised on other services.

The incident again highlights the need for a proper password policy with unique passwords for each online service.

More info

Featured photo: Souvik Banerjee / Unsplash

How to start programming in Artificial Intelligence: languages, tools and recommendations

Nacho Palou    18 January, 2023

There is a very close relationship between Big Data and Artificial Intelligence (AI): Big Data is about capturing, processing and analysing large amounts of data. When this data is combined with capabilities such as machine learning and predictive analytics, more value is extracted from that data.

This makes it possible, among other things, to find patterns in this data that are “invisible” to the human eye, allowing us to predict and prevent events, offer personalised experiences in the use and consumption of products and services, hold customer-care conversations and even create content.

Knowing how to program Artificial Intelligence allows you to develop countless solutions and take advantage of the enormous potential offered by Big Data, Artificial Intelligence and Internet of Things.

Furthermore, in 2023 the demand for professionals qualified in the development of Artificial Intelligence solutions will continue to grow, because “AI will be present in all the digital transformation processes of companies” in numerous sectors, according to data from Fundación Telefónica’s Employment Map.

Programming languages for AI development

Elena Díaz, head of the Centre of Excellence in the AI of Things product team at Telefónica Tech, is passionate about programming languages focused on exploiting data.

As an expert, to program in Artificial Intelligence Elena recommends learning these programming languages:

  • Python is the most widely used programming language for the development of Artificial Intelligence applications. It has many libraries and tools for machine learning, such as TensorFlow, PyBrain or PyTorch, among others. You can get started with Python through an experiment suitable for everyone, or with the short example after this list.
  • R is also a programming language widely used for data analysis, data visualisation and machine learning, especially in the field of statistics.
  • SQL is the standard query language for relational databases, widely used in the fields of Big Data and Artificial Intelligence. Knowledge of SQL is essential for the management and analysis of large datasets, essential in the field of Artificial Intelligence.
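
To make the idea concrete, here is a minimal, hedged sketch of the kind of machine learning workflow Python enables, using the scikit-learn library (assumed installed) and its bundled Iris dataset:

```python
# Minimal scikit-learn sketch: train a classifier on the Iris dataset
# and measure its accuracy on a held-out test split.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

In a dozen lines we load data, train a model and evaluate it; the same fit/predict pattern scales to far larger datasets.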

However, as Elena explains, although these are currently the most widely used programming languages in the field of Artificial Intelligence, “we always have to be aware of how things evolve, adopt new languages and always be in a continuous learning process”.

“Once we learn to program, it is relatively easy to switch from one language to another”.

Elena Díaz, Telefónica Tech.

Other languages that also apply to the development of Artificial Intelligence include:

  • Java and C++ are more advanced programming languages and are also used for the development of artificial intelligence applications, including high-performance developments such as neural networks and machine learning algorithms.
  • JavaScript, a very popular language in web development that is increasingly used in the field of Artificial Intelligence, especially for the development of machine learning applications oriented to users accessing them through apps or web browsers, for example.

The programming language will depend very much on your specific preferences and needs – it will even depend on your previous programming experience, if you already have some – and also on what you want to achieve or the project you are going to work on.

How to develop your Artificial Intelligence skills

The first thing you need to do if you are interested in starting programming in Artificial Intelligence is to learn the basics of it. In this sense, it is important to “overcome conceptual, mathematical or technical barriers” and understand basic concepts of artificial intelligence, such as machine learning, computer vision and natural language processing.

In addition, you will also find it helpful to:

  • Learn a programming language such as the aforementioned Python, R, Java and C++, which are widely used in the development of artificial intelligence applications. Choose one and dedicate time to learning it.
  • Practice with problems and projects: It is important that you practice with real problems and projects. You can find datasets and problems on websites such as OpenAI or Kaggle.
  • Learn about AI tools and libraries: there are many AI tools and libraries, such as TensorFlow, PyTorch, scikit-learn and Keras. They allow you to build and train AI models easily and you can use them in your projects (see the short training-loop sketch after this list).
  • Take every opportunity to keep learning: Artificial Intelligence is constantly evolving, so it is important to keep acquiring and updating skills. You can keep up to date with trends, new techniques and technologies through blogs and articles on the subject, webinars, talks and courses (there are many free ones) and by participating in online groups and forums.
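
As a hedged illustration of what these libraries do for you, the sketch below uses PyTorch (assumed installed) to fit a simple linear model with gradient descent; the data is synthetic and the hyperparameters are arbitrary:

```python
import torch

# Tiny PyTorch sketch: fit y = 2x + 1 on noisy synthetic data.
X = torch.linspace(0, 1, 50).unsqueeze(1)
y = 2 * X + 1 + 0.05 * torch.randn_like(X)

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = torch.nn.MSELoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)  # how far are we from the data?
    loss.backward()              # compute gradients
    opt.step()                   # update the weights

w, b = model.weight.item(), model.bias.item()
print(f"learned y = {w:.2f}x + {b:.2f}")
```

The loop of forward pass, loss, backward pass and update is the same whether the model has two parameters or two billion.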

Elena recommends those interested in Artificial Intelligence to “discover what you like the most and go deeper into it. Specialising in what most motivates you and helps you to continue to grow.”

Featured photo: Kelly Sikkema / Unsplash

Consequences of a cyber-attack in industrial environments

Jorge Rubio    17 January, 2023

Industrial environments can be found in any type of sector we can imagine, whether in water treatment, transport, pharmaceutical, machinery manufacturing, electrical, food or automotive companies, among others.

The difference between an industrial environment and a typical corporate or IT (Information Technology) environment is that industrial communication networks, or OT (Operational Technology) networks, are designed for a specific task and use equipment and systems that do not change over time. The same communications between the same devices take place continuously, in a cyclical manner, unlike the corporate world, in which a multitude of different devices, such as laptops or corporate mobiles, connect at different times.

Another major difference is that these industrial devices are more likely to have vulnerabilities in their firmware or software, because they are outdated equipment that is not usually updated or patched: either it is not compatible with the latest operating systems on the market, or replacing it could be very costly for the company.

In addition, it is common to use unencrypted network communications or insecure protocols that allow vulnerabilities to be exploited or passwords to be obtained in clear text.
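
Many OT protocols, Modbus/TCP among them, have no authentication or encryption at all: any host that can reach the device can read or write its registers. As a hedged illustration (the PLC address and register numbers are placeholders, and the pymodbus library, version 3.x, is assumed installed), a few lines are enough to read values from a PLC:

```python
# Hedged sketch with pymodbus 3.x (assumed installed): Modbus/TCP has no
# authentication, so any host that can reach the PLC can issue this read.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.0.2.10", port=502)  # placeholder PLC address
if client.connect():
    result = client.read_holding_registers(address=0, count=4, slave=1)
    if not result.isError():
        print("Register values:", result.registers)
    client.close()
```

This is precisely why network segmentation and monitoring matter so much in OT environments.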

The most serious implication of an industrial system being breached is the impact on the physical safety of people.

This state of industrial environments, coupled with the increasingly pressing need to connect industrial processes and factories to the corporate world, the cloud or the internet, increases the risks of a cyber-attack on such facilities.

The most serious implications of an industrial system being breached are the impact on the physical security (safety) of people, as well as economic losses or damage to the company’s image, which is why it is vitally important to try to protect this equipment against any cyber-attack.

Cyber-attacks that have occurred in the past in industrial environments

Over the years, various companies and organisations in all types of industrial environments have been attacked, both through technical and social engineering attacks, as well as through carelessness, laziness or lack of employee awareness, such as the use of USB keys between OT equipment and IT systems.

The following are some examples of the different types of cyber-attacks used to attack companies in a variety of sectors with industrial environments:

  • Malware in industrial or field devices.
  • Communication hijacking and man-in-the-middle attacks.
  • Denial of service.
  • Spear phishing.
  • Database espionage.
  • Supply chain attacks.
  • Improper or malicious device updates.
Photo: Greg Rosenke / Unsplash

And these are not isolated cases – attacks on industrial infrastructures are in the news all the time! Some of the most relevant are the following:

  • Worcester Airport in the United States (1997): A hacker broke into the communications of the air traffic control system and caused a failure that rendered the telephone system completely useless, affecting the control tower and different areas of the airport (fire brigade, meteorology, etc.), with a major economic impact.
  • Saudi Aramco (2012): An attacker gained access to the industrial network through one of the employees and deleted the content of all computers. This resulted in the management of supplies, oil transportation, contracts with governments and business partners being done on paper. If it had been a smaller company, this attack would probably have bankrupted it.
  • Maersk (2017): A cyber-attack using the “NotPetya” malware caused outages in all of the shipping company’s business units, bringing its container shipping operations around the world to a standstill for weeks. The losses generated by this attack are estimated to be as high as $300 million.
  • Oldsmar water treatment plant (2021): A group of attackers gained access to the SCADA (Supervisory Control and Data Acquisition) systems used to control the chemical treatment of Florida’s water and altered the levels of caustic soda in the drinking water. Thanks to an operator who identified the unauthorised access and was able to detect the manipulation, this did not have serious adverse effects on the population.

These are just some of the examples that have been reported in the media, but there are many others that we will never know about.

How to avoid or mitigate the consequences of an industrial cyber-attack

To minimise the risk of suffering a cyber-attack in an industrial environment, the network’s exposure must be reduced to shrink the attack surface, staff training must be increased to avoid social engineering attacks, new cyber security procedures and policies must be drawn up, and technologies appropriate to the environment must be deployed to prevent or mitigate their effects.

One of the key aspects is the monitoring of industrial networks using dedicated tools specialised in OT communications protocols, which analyse anomalous behaviour once they have learned the normal or baseline behaviour of the network, such as Nozomi Networks’ probes.

Visualisation of the network through an industrial monitoring tool. Source: Nozomi Networks.

As well as generating alerts when malicious action is found, these tools also provide great visibility into the industrial network by providing an inventory of connected devices, which can help companies discover unidentified equipment that could be a gateway for future cybercriminals.

But what should be done with all the information obtained by these industrial monitoring probes? One of the options could be to integrate them with a SIEM (Security Information and Event Management), so that all alerts are aggregated in the same place and can be correlated with each other.
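
As a hedged sketch of what that integration can look like at its simplest, many SIEMs accept events over syslog; the snippet below forwards a probe alert using only Python’s standard logging module (the SIEM hostname and the alert fields are placeholder values):

```python
# Hedged sketch: forward an OT probe alert to a SIEM over syslog using only
# Python's standard library. Host, port and alert fields are placeholders.
import logging
import logging.handlers

logger = logging.getLogger("ot-probe")
logger.setLevel(logging.WARNING)
logger.addHandler(
    logging.handlers.SysLogHandler(address=("siem.example.com", 514))
)

logger.warning(
    "OT-ALERT severity=high type=unauthorized_write src=192.0.2.10 dst=plc-03"
)
```

Once the alerts land in the SIEM, they can be correlated with IT-side events such as VPN logins or phishing detections.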

In addition, it is necessary to establish an incident response procedure that determines what actions to take according to the type, severity and location of each of the alerts. But all of this cannot be done without dedicated personnel specialised in these monitoring and industrial incident response tasks.

The importance of cyber security in industrial environments

Industrial cyber security risks continue to grow over time as industrial networks become increasingly connected and exposed to IT networks or even the internet, and the number of threats grows exponentially.

Cyber threats can have a major impact on personal and corporate reputation (loss of customer confidence), financial operations (fines for non-compliance) and business (unscheduled production downtime), as well as potential legal liabilities (legal consequences for non-compliance with laws and physical and environmental security standards).

This is why it is crucial to implement, manage and improve cyber security measures in industrial environments in order to maintain and increase their effectiveness against any cyber attack.

Featured photo: Umit Yildirim / Unsplash

Incentives in business blockchain networks: a new approach

Alberto García García-Castro    16 January, 2023

Using incentives that reward collaboration and good practice among the participants in a blockchain network has always been a fundamental part of the technology. So much so that Satoshi Nakamoto himself devoted an entire chapter of the 2008 Bitcoin whitepaper to these incentives.

Despite this, there are still problems today, especially in private and consortium networks, when it comes to rewarding the nodes that maintain the networks. Here is why.

Blockchain miners?

At a very high level, the mining process within a blockchain network is responsible for ensuring that everything that is happening within the network is done correctly. For example, validating transactions or generating the new blocks that form the network.

This work is carried out by so-called “miners” and, taking into account that this procedure is economically and computationally expensive, it is necessary for the network itself to reward them in some way.

According to Satoshi himself in his paper, the economic incentives that these miners have are divided into two types:

  • They are rewarded every time they create a new block in the chain.
  • They are rewarded with a small commission for each transaction included in the chain (the toy calculation below shows how the two add up).
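
As a purely illustrative, back-of-the-envelope sketch (the subsidy figure reflects Bitcoin’s post-2020-halving block reward; the fee values are made up):

```python
# Toy illustration of a miner's revenue for one block: the fixed block
# subsidy plus the fees of every transaction included. Values are examples.
block_subsidy = 6.25                 # BTC per block after the 2020 halving
tx_fees = [0.0004, 0.0011, 0.0002]   # hypothetical fees of included transactions

miner_revenue = block_subsidy + sum(tx_fees)
print(f"Miner revenue for this block: {miner_revenue:.4f} BTC")
```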

In this way, the network itself is able to financially incentivise the people or entities that dedicate their time and resources to maintaining it, without the need for third parties to do so. In other words, due to the very design of decentralised networks, participants are offered both the benefits of the service and the responsibility of maintaining it.

This paradigm has not only allowed the technology to evolve but has also made it sustainable without the economic support of third parties, extending to the rest of the technologies that are now grouped under the umbrella of Web 3.0.

Analysing Blockchain technologies

So far, within the ecosystem of enterprise Blockchain networks in use today, four technologies stand out above the rest: Hyperledger Besu, Corda, Hyperledger Fabric and Quorum.

Unlike public Blockchain technologies such as Bitcoin or Ethereum, where miners do not trust each other and need a PoW (Proof of Work) or PoS (Proof of Stake) consensus algorithm, in enterprise technologies this process of agreement between the participants that validate the proper functioning of the network is carried out in a different way.

The mechanism of consensus and validation of transactions is carried out through algorithms that base their agreement on the identity and reputation of the nodes that are in charge of such verification. In other words, the participants know and trust each other when it comes to verifying that the network is working correctly.

On the other hand, it is important to note that in the technologies used in private or consortium networks, by design, the validating nodes receive no direct incentive (in the form of cryptocurrency block rewards or transaction fees) for closing a block or validating the transfers made. With this in mind, would it be necessary to incentivise the participants or companies that are responsible for the proper functioning of the network?

Medium and long-term incentives?

In general, business networks tend to share the costs of network maintenance among the companies using the network.

Companies that are typically interested in using such a network pay a flat fee to the companies in charge of maintaining it, and in this way, they can execute all the transactions required by their corporate applications.

So far so good, but what if these operation and maintenance costs are not covered in the medium to long term? Would there be sufficient incentive to maintain the network? And how could the immutability and permanence of the information in the chain be guaranteed without a clear service continuity plan?

On the other hand, there is another type of blockchain network management in which several companies or entities are associated to carry out network maintenance in a collegial manner, where all assume in good faith to comply with established policies, good practices and previous agreements on the availability of validator nodes.

Despite this, how would responsibility be shared in the event of possible network incidents? What would happen if the cost of the operation is not balanced and one of the participants makes more use of the network than the others? In short, how can such a collegial operation be professionalised?

A new approach

To address the problems associated with the lack of incentives in enterprise Blockchain networks, it seems inevitable to increase the professionalisation of the core of the network.

This means having business-agnostic nodes with sufficient incentives to validate transactions and maintain consensus without additional interests, whether related to use cases or to the execution of transactions. In other words, they should be able to validate such transactions regardless of their origin or who sends them, even preventing them from having visibility over the information they receive.

In the world of public networks, there have been so-called “layer two” solutions for several years now, in which a second protocol is introduced on top of an existing blockchain network, also known as “layer one”. The main objective is to solve problems of scalability, cost and transaction processing speed that the main public blockchain networks currently have. In other words, a new protocol is added to complement the existing one, improving some of its limitations.

Photo: Shubham Dhage.

An example of this type of “layer two” network is Polygon, a platform that, thanks to its design, allows much faster and less costly transactions than those that can currently be carried out on Ethereum, the layer one with which it interacts. In fact, it has become one of the biggest players in the Blockchain ecosystem today and has a multitude of interesting technologies, both public and private, that can be used for corporate applications.

In terms of the enterprise blockchain ecosystem, within Polygon’s portfolio of solutions, there is a new technology called Supernets that adds a new approach to the incentive models that currently exist.

Polygon’s Supernets allow the creation of private networks compatible with Ethereum technology, but delegating the validation of transactions to “professional” validators

Broadly speaking, Supernets allow the creation of private networks compatible with Ethereum technology, but with the peculiarity that it delegates the validation of transactions to “professional” validators already present in Polygon’s main network, with their corresponding incentives.

The idea is that business applications can be deployed in private or consortium networks, guaranteeing the necessary security and scalability, while at the same time using the public chain to validate these transactions through Zero Knowledge Proofs (ZKPs).

Next steps for Blockchain technology

Since Telefónica allied with Polygon in March 2022 to jointly develop solutions, Telefónica Tech’s blockchain team has carried out a technological analysis process to specify the use cases that materialise this collaboration. The improvement of incentives for the administration of private and consortium networks is one of them.

Although Polygon Supernets is a technology that is only a few months old (it was launched in April 2022), at Telefónica Tech we are already taking the necessary steps to integrate it into our traceability, certification and tokenisation processes within the TrustOS platform.

In addition, within Alastria, the Spanish non-profit association that promotes the digital economy through the development of decentralised ledger technologies, Telefónica will lead a new network based on Polygon Supernets that combines the performance and availability of private networks with transaction validation by nodes already present in the Polygon public network, providing a new approach to the options already existing in the consortium.

We expect that by using this new technology, companies interested in deploying their applications on a blockchain, whether private or consortium, will no longer have to worry about the incentives for validators to manage it, as by default the network will have “professional” nodes with sufficient incentive to continue validating the good use of the network in both the medium and long term.

Cyber Security Weekly Briefing, 7 – 13 January

Telefónica Tech    13 January, 2023

Microsoft fixes 98 vulnerabilities on Patch Tuesday

Microsoft has published its security bulletin for the month of January, in which it fixes a total of 98 vulnerabilities.

Among these, an actively exploited 0-day vulnerability stands out, which has been identified as CVE-2023-21674 with a CVSSv3 of 8.8. It is an Advanced Local Procedure Call (ALPC) privilege escalation vulnerability in Windows, which could lead a potential attacker to obtain SYSTEM privileges.

Also noteworthy is CVE-2023-21549 (CVSSv3 8.8), a privilege escalation vulnerability in the Windows SMB Witness service that has already been publicly disclosed. Its exploitation could allow a potential attacker to execute RPC functions that are restricted to privileged accounts.

It should also be noted that of the 98 vulnerabilities fixed, eleven have been classified by Microsoft as critical, including those identified as: CVE-2023-21743, CVE-2023-21561, CVE-2023-21730, CVE-2023-21556, CVE-2023-21555, CVE-2023-21543, CVE-2023-21546, CVE-2023-21679, CVE-2023-21548 and CVE-2023-21535.

More info

* * *

Critical vulnerability in unsupported Cisco routers

Cisco has issued a security advisory warning of a critical vulnerability affecting multiple end-of-life Cisco routers for which there is a public PoC, although there are currently no known exploitation attempts. This security flaw, registered as CVE-2023-20025 with a CVSSv3 of 9.0 according to the vendor, can trigger an authentication bypass caused by incorrect validation of user input within incoming HTTP packets.

Unauthenticated malicious actors could remotely exploit it by sending a specially crafted HTTP request to the administration interface of vulnerable devices. This security flaw could also be chained together with another new vulnerability, CVE-2023-20026, which would allow arbitrary code execution. Finally, it should be noted that the affected devices are Cisco Small Business router models RV016, RV042, RV042G and RV082.

Cisco says it will not release a patch, but as a mitigating measure it recommends disabling the administration interface and blocking access to ports 443 and 60443 to prevent exploitation attempts.

More info

* * *

IcedID takes less than 24 hours to compromise the Active Directory

Researchers at Cybereason have published an analysis of the banking trojan IcedID, also known as BokBot, highlighting how quickly it can compromise a victim’s system.

In the report, Cybereason warns that IcedID takes less than an hour from initial infection to begin lateral movement, less than 24 hours to compromise the Active Directory, and just 48 hours to start exfiltrating data.

The report also highlights that IcedID has changed its initial access vector as it was initially distributed via Office files with malicious macros, but after the macro protection measures implemented by Microsoft it is now distributed via ISO and LNK files.

Finally, it is worth noting that IcedID shares tactics, techniques and procedures (TTPs) with groups such as Conti and Lockbit.

More info

* * *

Vulnerability actively exploited in Control Web Panel (CWP)

Shadowserver Foundation and GreyNoise have detected active exploitation of the critical vulnerability in Control Web Panel (CWP) listed as CVE-2022-44877 with a CVSSv3 of 9.8.

The vulnerability, which was discovered by researcher Numan Türle, was patched in October, but it was not until last week that more details of the vulnerability were published along with a Proof of Concept (PoC).

According to the experts, the first attempts to exploit this vulnerability, which would allow an unauthenticated threat actor to perform remote code execution on vulnerable servers or privilege escalation, were detected on 6 January.

Specifically, this security flaw affects CWP7 versions prior to 0.9.8.1147. It is worth noting that GreyNoise has observed four unique IP addresses attempting to exploit this vulnerability.

More info

* * *

Latest SpyNote version targets banking customers

Researchers at ThreatFabric have reported recent activity in the SpyNote malware family, also known as SpyMax. The latest known variant has been listed as SpyNote.C, which was sold by its developer via Telegram, under the name CypherRat, between August 2021 and October 2022, accumulating, according to researchers, a total of 80 customers.

However, in October 2022, the source code was shared on GitHub, which led to a very significant increase in the number of detected samples of this malware. Among these latest samples, it has been observed how SpyNote.C has targeted banking applications, impersonating apps from banks such as HSBC, Deutsche Bank, Kotak Bank, or BurlaNubank, as well as other well-known applications such as Facebook, Google Play, or WhatsApp.

It is noteworthy that SpyNote.C combines spyware and banking Trojan capabilities, being able to use the API of the devices’ camera to record and send videos to its C2, obtain GPS and network location information, steal social network credentials, or exfiltrate banking credentials, among other capabilities.

More info

Observability: what it is and what it offers

Daniel Pous Montardit    12 January, 2023

What is observability?

The term “observability” comes from Rudolf Kalman’s control theory and refers to the ability to infer the internal state of a system from its external outputs. Applied to software systems, the concept refers to the ability to understand the internal state of an application based on its telemetry. Not all systems expose enough information to be ‘observed’, so we classify as observable those that do. Being observable is one of the fundamental attributes of cloud-native systems.

Telemetry information can be classified into three main categories:

  1. Logs: probably the most common and widespread mechanism for emitting information about internal events available to the processes or services of a software system. Historically, they are the most detailed source of what happened, and they follow a temporal order. Their contribution is key to debugging and understanding what happened within a system, although some point out that traces could overtake them in this main role. They are easy to collect, but very voluminous and consequently expensive to retain. There are both structured and unstructured (free-text) logs, and common formats include JSON and logfmt. There are also proposals for semantic standardisation such as OpenTelemetry or Elastic Common Schema.
  2. Metrics: quantitative information (numerical data) about processes or machines over time. For example, the percentage of CPU, disk or memory usage of a machine every 30 seconds, or a counter of the total number of errors returned by an API, labelled with the HTTP status returned and, say, the name of the Kubernetes container that processed the request. These time series can thus be determined by a set of labels with values, which also serve as an entry point for exploring telemetry information. Metrics are simple to collect, inexpensive to store, dimensional to allow for quick analysis, and an excellent way to measure overall system health (the sketch after this list shows how such a labelled counter can be exposed in code). In another post we will also see that the values of a metric can have data attached to them known as exemplars, also in key/value form, which serve among other things to easily correlate a value with other sources of information. For instance, in the API error counter above, an attached exemplar could allow us to jump directly from the metric to the traces of the request that originated the error. This greatly facilitates the operation of the system.
  3. Traces: detailed data about the path executed inside a system in response to an external stimulus (such as an HTTP request, a message in a queue, or a scheduled execution). This type of information is very valuable as it shows the latency from one end of the executed path to the other, and for each of the individual calls made within it, even in a distributed architecture where the execution may span multiple components or processes. The key to this power lies in the propagation of context between the system components working together; for example, in a distributed micro-services system, components may use HTTP headers to propagate the state information required to stitch the data together from one end to the other. In conclusion, traces allow us to understand execution paths, find bottlenecks and optimise them efficiently, and identify errors, making them easier to understand and fix.
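
To make the metrics pillar concrete, here is a minimal, hedged sketch using the official prometheus_client Python library (assumed installed); the metric name, labels and simulated traffic are invented for the example:

```python
# Hedged sketch with the prometheus_client library: expose a labelled error
# counter like the API example above, scrapeable by a Prometheus server.
import random
import time

from prometheus_client import Counter, start_http_server

API_ERRORS = Counter(
    "api_errors_total",
    "Total errors returned by the API",
    ["http_status", "container"],
)

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        # Simulate request handling; label each error with its HTTP status.
        if random.random() < 0.1:
            API_ERRORS.labels(http_status="500", container="checkout-7f9c").inc()
        time.sleep(0.1)
```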

These three verticals of information are referred to as the “three pillars” of observability and making them work together is essential to maximise the benefits obtained.

For example, metrics can be alarmed to report a malfunction, and their associated exemplars will allow us to identify the subset of traces associated with the occurrence of the underlying problem.

Finally, we will select the logs related to those traces, thus accessing all the available context necessary to efficiently identify and correct the root cause of the problem. Once the incident has been resolved, we can enrich our observability through new metrics, consoles or alarms to more proactively anticipate similar problems in the future.
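
As a hedged companion sketch for the traces pillar described above, the snippet below uses the OpenTelemetry Python SDK (assumed installed) to emit a parent span with a nested child span, exported to the console; the span and attribute names are invented:

```python
# Hedged sketch with the OpenTelemetry SDK: create nested spans so an
# execution path can be visualised end to end. Names are illustrative.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("shop.checkout")

with tracer.start_as_current_span("handle_request") as span:
    span.set_attribute("http.method", "POST")
    with tracer.start_as_current_span("charge_card"):
        pass  # the child span records the latency of this inner call
```

In a real distributed system the same context would be propagated across process boundaries (for instance in HTTP headers), so the spans of every service line up under one trace.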

Why is monitoring not enough? And what does observability offer?

Monitoring allows us to detect that something is not working properly, but it does not give us the reasons. Moreover, it is only possible to monitor situations that are foreseen in advance (known knowns). Observability, on the other hand, is based on integrating and relating multiple sources of telemetry data that together help us understand how the observed software system works, not only identify problems. The most critical aspect, however, is what is done with the data once it is collected: why rely on pre-defined thresholds when we can automatically detect unusual ‘change points’? It is this kind of ‘intelligence’ that enables the discovery of unknown unknowns.
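
As a toy, hedged illustration of the idea (a real platform would use far more robust statistics), the sketch below flags points in a simulated latency series whose rolling z-score is extreme, rather than comparing against a fixed threshold:

```python
import numpy as np

# Naive change-point heuristic: flag points whose rolling z-score is extreme.
# A real observability platform would use far more robust detectors.
def change_points(series, window=30, threshold=4.0):
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = past.mean(), past.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

latency_ms = np.concatenate([np.random.normal(120, 5, 300),
                             np.random.normal(190, 5, 100)])  # simulated shift
print(change_points(latency_ms))  # indices around 300 mark the regime change
```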

Building real-time topology maps is another capability offered by observability; it allows us to establish automatic relationships between all the telemetry information gathered, going much further than simple correlation by time. A high-impact example of what these topologies can enable is automatic incident resolution in real time, without human intervention.

Observability also makes performance a first-class activity in software development, by giving us profiling information (step-by-step detail of an execution) on a continuous basis, something that without the appropriate mechanisms requires a lot of effort in distributed systems, and by letting us detect bottlenecks in real time. In addition, the mere fact of understanding in depth what happens within a system over time allows us to maximise the benefit of load testing (and in general of any type of e2e test) and opens the door to chaos engineering techniques. Last but not least, it reduces the mean time to resolution (MTTR) of incidents by cutting the time spent on diagnosis, allowing us to focus on resolving the problem.

We can conclude that when a system embraces a mature observability solution, the benefits for the business become more acute. Not only does it give rise to more efficient innovation, but the reduction in implementation times is transferred as an increase in efficiency to the teams, generating consequent cost reductions.

For all these reasons, you can imagine that observability is not a purely operational concern, but a transversal responsibility of the whole team, as well as being considered a basic practice within the recommendations of the most modern and advanced software engineering.

Conclusion

The key to understanding the problems of distributed systems, problems that appear repeatedly but with marked variability, is to be able to debug them with evidence rather than conjecture or hypotheses. We must internalise that ‘errors’ are part of the new normal that accompanies complex distributed systems. The degree of observability of a system is the degree to which it can be debugged, so the contribution of observability to a distributed system is comparable to what a debugger offers us on a single running process. Finally, it is worth noting that an observable system can be optimised, both at a technical and business level, much more easily than the rest.


Featured photo: Mohammad Metri / Unsplash