We are now live! Discover the new Telefónica Tech website

Nacho Palou    14 February, 2023

We have redesigned the Telefónica Tech website to represent who we are as a digital solutions integrator. We also want to share what we do, who is behind it and how next-generation digital technologies are making companies and organisations more efficient, competitive, and resilient.

The new Telefónica Tech website focuses on our most valuable assets:

  • The people that make up the Telefónica Tech team in different countries and regions: more than 6,000 professionals who work every day to provide innovative solutions to millions of customers.
  • The technology partners we share two commitments with: to develop the best products and services, and to help customers achieve their goals.
  • Our customers, who we support in their digitalisation process by integrating the solutions they need to grow into their systems and processes. Because their success is our success.
  • Our technologies in connectivity, Cyber Security, Cloud, IoT (Internet of Things), Big Data, Artificial Intelligence and Blockchain… And the innovation and development labs we have in different cities in Europe and America.
  • We also give visibility to analysts in the technology sector who know (and acknowledge!) our capabilities and the value we bring to our clients.

“We have improved Telefónica Tech’s website to continue positioning ourselves as the best technological partner in the digital transformation for our customers”

María Díaz. Head of Marketing & Communications, Telefónica Tech

We have also developed a News section that brings together the initiatives, events and multimedia content you need to stay informed about how digital technologies are creating new growth opportunities for companies and society.

We would love to hear from you. We invite you to visit the new Telefónica Tech website and join the conversation or share with us any comments that will help us improve. Thank you!

The role of “Threat Hunting” as an enabler in ransomware incident response

Íñigo Echavarri    13 February, 2023

Following on from the articles our group has published to shed light on incident response, it seems clear that deploying ransomware with the maximum impact the attacker desires requires different phases and different states of compromise.

These phases start with an initial compromise of some asset of the organisation, which the adversary skilfully exploits to gain power over the infrastructure and eventually deploy the final artefact that is responsible for encrypting and leaving the ransom notes.

Usually, the mission of a “Threat Hunter” is to focus on locating these earlier phases of the adversary’s activity (and the sooner they are detected, the better) so that the incident can be resolved and/or mitigated with little or no impact compared with what a full-blown ransomware incident would cause.

So how can threat hunting add value when the incident is already complete?

A Threat Hunter is an analyst specialised in investigating various sources of information from the organisation’s infrastructure to extract threat data.

These threats show up as anomalies that an adversary causes in the normal operation of the different assets. These anomalies are known to and detected by the “Threat Hunter” from the information provided by tools such as EDR, XDR, SIEM, UEBA, etc., which give the necessary context to differentiate between what is normal operation and what is not.

If this same process is adapted to incident response, the result is a flow of information fed back by sharing findings between the different roles. What the forensic analyst finds can be investigated by the Threat Hunter, who checks in the EDR or SIEM how it reached that machine, from where, and on how many other machines it has been seen. This, in turn, gives the forensic analyst new information on the course of the incident, accelerating the different lines of investigation (and, in many cases, allowing containment to be better adjusted).

Combining these investigations with the data obtained by the Cyberintelligence team (as we saw in the previous post), the Threat Hunter will return to the rest of the team the findings that help them advance quickly in classifying the adversary. This makes it possible to learn crucial details, such as whether there is a site where data leaked by the attackers will be published, or which other tools they may have used in this particular scenario because they have been seen in other incidents carried out by the same group.

But this does not only allow the incident response team to move faster in their respective tasks. Actively investigating the behaviour of the organisation’s machines against the available indicators of compromise (IOCs) also allows the organisation to quickly bring back up services that can be confirmed as unaffected, and to safely restart the affected ones by blocking the malicious artefacts already identified through the EDR.

Acting from EDR for all assets

As mentioned above, the incident response is largely “EDR-centric”. The actions that the Threat Hunter performs on this platform are diverse:

  • Evidence retrieval. Thanks to an EDR, it is possible to connect to the machines of interest and retrieve information directly from them, making it possible to obtain artefacts without the need to use the time of other technical groups in the organisation (given the situation, they usually have a much higher workload than normal).
  • Event investigation. An EDR also provides telemetry on the machines on which the corresponding agent is running. This telemetry makes it possible to investigate artefact executions, understand the creation of artefacts, remove malicious files, trace connections created by each executed process and, among other things, detect the persistence of different software elements or malware. All this provides a very complete context of operation in which attack patterns can be studied in an optimal way.
  • Asset isolation. As telemetry is investigated, it is possible to isolate those assets within the network that present malicious or clearly suspicious behaviours; at the same time, allowing the normal operation of those elements that have not been affected.
  • Blocking of IOCs and creation of rules. Two of the most interesting outputs of the investigation are the Indicators of Compromise (IOCs) and the rules describing the adversary’s behaviour in the incident. Both can easily be configured in the platform for automatic blocking and alerting, so that if the adversary had left a logic bomb, or maintained access through some persistence mechanism that relaunched the attack, it would be blocked automatically by the EDR thanks to this earlier configuration. A minimal sketch of this kind of IOC matching follows.
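As a purely illustrative example (the JSON export format, field names and hash value below are hypothetical, not those of any particular EDR product), this is roughly what matching exported process telemetry against a set of file-hash IOCs looks like:

```python
# Minimal sketch: match exported process telemetry against known file-hash IOCs.
# The export file name, field names and hash are placeholders.
import json

IOC_SHA256 = {
    "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",  # hash of a malicious artefact
}

def find_ioc_hits(telemetry_path: str) -> list[dict]:
    """Return the telemetry events whose file hash appears in the IOC set."""
    with open(telemetry_path, encoding="utf-8") as fh:
        events = json.load(fh)  # assumed: a JSON array of process events
    return [e for e in events if e.get("sha256", "").lower() in IOC_SHA256]

if __name__ == "__main__":
    for event in find_ioc_hits("edr_process_events.json"):
        print(f"{event.get('host')}: {event.get('image_path')} matches a known IOC")
```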

Handing the baton back to the organisation

During the incident response, the team will ensure that the adversary has been successfully ejected and the incident has been resolved at all levels.

To that end, the Threat Hunter will be responsible for monitoring that the information produced by the EDR system “shows calm” and that no new activity related to the incident appears on any machine in the infrastructure.

Any anomalous behaviour will be monitored and the necessary alerts will be configured in case the incident reappears or any machine shows related activity, so that it can be automatically isolated and the recovery can continue.

After a reasonable period in which it is verified that there is no new relevant activity (normally one month after the incident), the Threat Hunter’s work ends and control is returned to the client organisation or to the assigned EDR service team.

Cyber Security Weekly Briefing, 4 – 10 February

Telefónica Tech    10 February, 2023

Critical vulnerability in Atlassian Jira

Atlassian has issued a security advisory in which it releases fixes to resolve a critical vulnerability in Jira Service Management Server and Data Center.

According to the vendor, this security flaw has been registered as CVE-2023-22501, with a CVSSv3 score of 9.4, and is classified as being of low attack complexity, since a malicious actor could gain access to registration tokens sent to users with accounts that have never been logged into.

This could lead to user impersonation, allowing unauthorised access to critical instances of Jira Service Management. Atlassian says the security issue affects versions 5.3.0 to 5.5.0 and advises upgrading to versions 5.3.3, 5.4.2, 5.5.1 or 5.6.0 and later. If the patches cannot be applied immediately, the vendor has provided a workaround to update the asset manually.
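As a quick illustration of the version ranges mentioned in the advisory, a minimal check could look like the sketch below (the parsing logic is purely illustrative and is not an Atlassian tool):

```python
# Minimal sketch: check whether a Jira Service Management version falls inside
# the range reported as affected by CVE-2023-22501 (5.3.0 to 5.5.0, fixed in
# 5.3.3, 5.4.2, 5.5.1 and 5.6.0 or later).
AFFECTED_LOW, AFFECTED_HIGH = (5, 3, 0), (5, 5, 0)
FIXED = {(5, 3, 3), (5, 4, 2), (5, 5, 1)}

def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))

def is_affected(version: str) -> bool:
    v = parse(version)
    if v in FIXED or v >= (5, 6, 0):
        return False
    return AFFECTED_LOW <= v <= AFFECTED_HIGH

for v in ("5.3.0", "5.4.1", "5.5.1", "5.6.0"):
    print(v, "affected" if is_affected(v) else "not affected")
```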

More info

* * *

Mustang Panda campaign to distribute PlugX

Researchers at EclecticIQ have detected the existence of a PlugX malware distribution campaign and attribute it to the APT Mustang Panda.

According to the published information, Mustang Panda sent out EU-themed emails containing a supposed Word document that was in fact an executable LNK file that downloads PlugX onto the victim’s system.

EclecticIQ claims that the target of the campaign is European governmental institutions and recalls that a similar campaign was attributed to the same actor last October, although in the recently detected campaign Mustang Panda has implemented more evasion techniques to avoid detection.

More info

* * *

Tor and I2P networks hit by DDoS attacks

Tor and peer-to-peer (I2P) networks have recently been hit by distributed denial-of-service (DDoS) attacks that have caused connectivity and performance problems.

On the one hand, Isabela Dias Fernandes, executive director of the Tor Project, issued a statement saying that the network had been under DDoS attacks since July. Neither the target of these ongoing attacks nor the identity of the threat actor behind them has been detailed.

The company has stated that it is continuing to work on improving its defences so that users are not affected. The I2P network has also been the victim of an attack of this type over the last three days, causing performance and connectivity problems.

According to the project administrator’s statements, as in the case of Tor, the threat actors behind these attacks are using a variety of tactics to perpetrate these DDoS attacks.

More info

* * *

New Google Chrome update

Google has released a new version of Chrome 110 which fixes a total of 15 vulnerabilities, 10 of which have been identified by security researchers outside the company. 

The breakdown of these vulnerabilities according to their criticality is as follows: 3 with high criticality, 5 medium and 2 low. 

Among these, the three with the highest severity are the following: firstly, CVE-2023-0696, which a remote attacker could exploit through a specially crafted HTML page.

In second place, CVE-2023-0697 affecting Chrome for Android, which could allow a remote attacker to use a manipulated HTML page to spoof the content of the security user interface.

Lastly, CVE-2023-0698 which would allow a remote attacker to perform an out-of-bounds memory read via a malicious HTML page. It is recommended to update to Chrome versions 110.0.5481.77/.78 for Windows and 110.0.5481.77 for Mac and Linux to fix these vulnerabilities.

More info

How I won a Capture the Flag competition by solving challenges using my mobile phone

Telefónica Tech    9 February, 2023

David Soto, winner of the challenge, collecting the prize together with Humbert Ruiz, from 42 Barcelona, Fundación Telefónica’s programming campus.

We organised activities aimed at the technical audience in the Hacking Village area as part of our participation in the Barcelona Security Congress 2023 event. One of the activities consisted of a Capture the Flag challenge in which 74 hackers registered, including both on-site and online participants.

David Soto, our guest blogger, was the first participant to solve three challenges, winning the competition and taking the prize. In this post he tells us how he managed to do it using only his mobile phone, and what the keys are to staying ahead in the field of cybersecurity.

* * *

BY DAVID SOTO
CYBER SECURITY SPECIALIST

I am David Soto and I am lucky enough to work as an IT consultant specialising in cybersecurity and secure development at ERNI Consulting Spain. I have been passionate about this field since I was a child.

In Capture the Flag (CTF) competitions I am known by the alias of JDarkness and I have the honour of having won competitions such as IntelCon, MundoHacker or PwnVerse, among others. And more recently, just a few days ago, the one organised by Telefónica Tech together with campus 42 during the celebration of the Barcelona Cybersecurity Congress.

Capture the Flag (CTF) competitions are free competitive games that test your knowledge and skills as a hacker.

Participants find themselves in different types of challenges with the objective of getting a “flag”, a code that proves that you have solved the challenge.

On this occasion, since I won the challenge in a somewhat “different” way, using only my mobile phone, I have been invited to write this post telling how the competition went and my experience. So here is my story:

A couple of weeks ago, while looking at the schedule of the Barcelona Cybersecurity Congress, I found out that this year they had prepared a hybrid Capture the Flag challenge, with online and on-site modalities. As I was planning to go to the congress, I signed up with the intention of seeing what challenges they had prepared, sitting down for a while with my laptop and seeing how far I could get.

Humbert in the Hacking Village space at Barcelona Cybersecurity Congress

Once I received the admission tickets, I started to prepare my itinerary: Tour with the DCA, visits to the exhibitors of interest… I set aside 30 minutes to sit in the Hacking Village and watch the challenges without much intention of winning.

When the DCA Tour was over, I headed to the Hacking Village to log on to my laptop and take on the challenges. However, just at that moment, a presentation had started and there was not a single free seat left. As I needed to connect my laptop, I thought: “Well, I’ll take my chances, as I just want to see what the challenges are about, I’ll watch it on my phone”. So, I went to visit the stands.

I have to say that on my phone I carry Termux with a small Kali Linux distribution, which, although uncomfortable, allows me to carry out small tests and tasks whenever I need to.

How the Capture the Flag challenge went, step-by-step

In this CTF, co-organised by Fundación Telefónica with the 42 programming campus, participants faced three cybersecurity challenges, plus an extra one, to test their skills in memory analysis, use of cookies, password cracking and more. To win, they had to solve at least three of the four challenges: warm-up, steganography, forensics and web.

1. Warm-up challenge

The warm-up challenge was to find a text string within the main page and pass it as a flag. Easy, I moved on to the next one.

2. Steganography challenge

This type of challenge is based on hiding information inside files or images in such a way that it does not appear to be there. Participants must discover where the information is hidden and extract it.

After the warm-up, the steganography challenge was the first “real” challenge. It consisted of a login screen with a nice Telefónica Tech logo…

3. Forensic challenge

A forensic challenge involves analysing files and systems in order to recover information (such as encrypted or deleted data), identify intruders, attackers or the perpetrators of computer crimes.

In this case it was a couple of supposedly dumped memory files or disk images… Having neither a keyboard nor the right applications, I didn’t even consider solving the challenge at the time, but I could always come back later if needed.

Martina Matarí, Head of Offensive Security Services at Telefónica Tech, during her speech.

4. Web challenge

Given the above, I decided to go for the last one, the web challenge. Web challenges usually involve identifying and exploiting vulnerabilities in websites, recovering sensitive information or analysing network packets. Perhaps the most accessible type without tools.

The web challenge also started with a login screen asking for a username and password. I applied a SQL injection that worked its magic and returned a list of users and encrypted passwords.
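The post does not reveal the exact payload used, but as a generic illustration (not the actual challenge code) this is why gluing user input straight into a login query is dangerous, and what the parameterised alternative looks like:

```python
# Generic illustration of a login SQL injection and its fix; not the challenge code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', '5f4dcc3b5aa765d61d8327deb882cf99')")

def vulnerable_login(username: str, password: str):
    # User input is concatenated straight into the SQL text...
    query = ("SELECT username FROM users "
             f"WHERE username = '{username}' AND password_hash = '{password}'")
    return conn.execute(query).fetchall()

def safe_login(username: str, password: str):
    # ...whereas placeholders keep the input as data, never as SQL.
    query = "SELECT username FROM users WHERE username = ? AND password_hash = ?"
    return conn.execute(query, (username, password)).fetchall()

# The classic comment trick bypasses the vulnerable version:
print(vulnerable_login("admin' --", "anything"))  # returns the admin row
print(safe_login("admin' --", "anything"))        # returns nothing
```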

The challenge statement mentioned a control panel. I found it, but it had SQLi protection, so I couldn’t use SQL injection there. However, with the credentials obtained earlier I could log in without any problem, and the exercise was completed.

The keys: knowledge, methodologies and tools

At this point three challenges already had a solution, so I went to have lunch with my colleagues and forgot about the competition.

To my surprise I received an email inviting me to collect the prize for the highest score in person!

I went to collect the prize and the story of how I had won using my phone made a big impact.

The fact that I solved these challenges on the phone is thanks to having clear methodologies.

In this sense, I had the pleasure of learning from the great Francisco Martín, who always insisted on two things:

  1. Fat-button tools are only used when you know what they do and you are able to manage without them.
  2. Fuzzing is your friend: fuzz everything.
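To illustrate the second piece of advice, a minimal path-fuzzing loop can be as simple as the sketch below; the base URL and wordlist are placeholders (not taken from the competition), and this should of course only be pointed at targets you are authorised to test:

```python
# Minimal path-fuzzing sketch for an authorised test target.
import requests

BASE_URL = "http://target.lab"  # hypothetical lab target
WORDLIST = ["admin", "backup", "panel", "login", "flag"]

for word in WORDLIST:
    url = f"{BASE_URL}/{word}"
    try:
        response = requests.get(url, timeout=5)
    except requests.RequestException:
        continue  # host unreachable or request failed; try the next word
    if response.status_code != 404:
        print(f"{response.status_code}  {url}")
```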

Jokes aside, I think understanding what we do, how we do it and why we do it is essential for those of us in IT.

So I would like to take this opportunity to encourage future professionals to learn, to investigate and not to remain on the surface of what we are taught. Because, who knows, maybe that will allow you to achieve things that nobody expects you to achieve.

Metacloud: a cloud of clouds

Roberto García Esteban    8 February, 2023

The digitalisation of society is rapidly advancing, mainly driven by the development of the internet and Cloud technology. Companies are rushing to adopt these new technologies in pursuit of the mantra of digital transformation, prompted by the need to adapt to an increasingly digitalised and, at the same time, more demanding consumer.

This race towards digitalisation, however, is often a bit of a mess. Many companies have been implementing multiple Clouds in a very heterogeneous way over the last ten years, with the management of the corporate Cloud becoming a chaos of increasing complexity.

Metacloud is emerging as a solution to bring order to the network of management tools for the different clouds managed by companies

Experts consider metacloud, also known as “super Cloud” or “sky computing”, a key trend for the coming years to untangle this chaos and bring order to the web of management tools for the numerous Clouds that companies operate: tools that are sometimes interconnected, sometimes redundant, but always complex to manage.

Why metacloud is more necessary than ever

Operating with one cloud is simply not enough for many companies. There are various statistics on the use of a multi-cloud environment in companies, but surely more than half of the world’s companies with more than 1,000 employees work with more than one cloud. And that number is constantly growing.

According to Deloitte, 25% of companies with more than 1,000 employees use at least five Cloud platforms in their daily operations

Sometimes implementing multi-cloud solutions is the result of a strategic decision to increase flexibility, control costs, monetise the data the company manages or manage diverse data location requirements.

However, it is often an unintended consequence of different teams within an organisation preferring to run applications or workloads in different clouds.

Whatever the reason, the truth is that a multi-cloud strategy results in both optimised pricing and access to specialised capabilities, as well as increased complexity, inefficiencies and redundancies in its management.

Maintaining multiple security configurations and data repositories is a real challenge for organisations.

In this environment, metacloud offers the opportunity to move “above” the Cloud, providing a common layer of abstraction and automation to improve the simplicity and visibility of cloud services. It is one of the trends identified in Deloitte’s Tech Trends 2023 report on the most important technology trends for 2023.

Metacloud benefits and opportunities

Metacloud services work with the compatibility layer that lies “above the cloud”, using APIs to access a variety of common services such as storage and compute, Artificial Intelligence, security, operations, or particular application development and deployment. A common interface is used, giving administrators centralised control over their multiple cloud instances.
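Conceptually, the compatibility layer boils down to one common interface with one adapter per provider. A minimal sketch of that idea (provider names and method bodies are illustrative; real adapters would wrap each vendor's SDK) could look like this:

```python
# Minimal sketch of a metacloud-style compatibility layer: one common interface,
# several provider adapters. Providers and method bodies are illustrative.
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Common storage interface exposed by the compatibility layer."""

    @abstractmethod
    def upload(self, bucket: str, key: str, data: bytes) -> None: ...


class ProviderAStore(ObjectStore):
    def upload(self, bucket: str, key: str, data: bytes) -> None:
        print(f"[provider A] PUT {bucket}/{key} ({len(data)} bytes)")


class ProviderBStore(ObjectStore):
    def upload(self, bucket: str, key: str, data: bytes) -> None:
        print(f"[provider B] PUT {bucket}/{key} ({len(data)} bytes)")


# Administrators work against the common interface, not the vendor SDKs.
stores: dict[str, ObjectStore] = {"cloud-a": ProviderAStore(), "cloud-b": ProviderBStore()}
for name, store in stores.items():
    store.upload("backups", "report.csv", b"...")
```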

In addition to simplifying the management of a customer’s different Cloud environments, metacloud is even more necessary at a time when the shortage of qualified professionals makes it difficult to find cloud technology specialists, since using metacloud reduces the need for experts in specific cloud platforms.

Metacloud is even more necessary considering the shortage of qualified Cloud technology professionals

Another benefit of metacloud is improved security, as it allows developers to set up a single configuration from the compatibility layer, which will run on each cloud platform through their common cloud interface, thus simplifying the implementation of security policies.

However, the development of metacloud services is limited by the interests of large hyperscalers because, although it is relatively simple to develop from a technical point of view, it would lead to Cloud providers becoming a kind of “commodity”, making it difficult to distinguish the capabilities of one from the other.

Unlike other technologies, however, where standardisation requires universal agreement, the software to create a compatibility layer between Clouds is widely available and third-party companies are already developing unified management tools for different clouds, using a single centralised control panel.

It is also sometimes the customers themselves who develop this management layer using vendor APIs. Although managing complexity is itself complex, the end result is greater simplicity, so it looks like it will be worth the effort.

Key ingredients for today’s Smart Cities

Nacho Palou    6 February, 2023

The term Smart City used to refer to “cities of the future”, but Smart Cities are increasingly becoming the present. Thanks to the use of digital solutions such as IoT (Internet of Things) and Artificial Intelligence, among other technologies, it is now possible to fit cities with sensors. Capturing real-time data from the urban environment —and its infrastructures and services— makes it possible to optimise and automate processes and to make better management decisions.

This makes cities that embrace digitalisation increasingly healthier, more attractive and safer. With more efficient public services and a lower environmental impact.

Smart cities are able to grow in a more sustainable way and improve the quality of life of their citizens.

We have brought together six key components of today’s Smart Cities. They include solutions that enable features today that not so long ago defined the cities of the future:

Smart parking

Smart mobility solutions make it easier for drivers to find parking spaces in urban areas. This improves traffic flow, reduces emissions, and promotes local commerce and citizen satisfaction.

These solutions can also be applied to park-and-ride facilities, airports, hospitals, shopping and leisure centres or natural areas.

Smart Transport

Many public transport systems are already interconnected, enabling coordination between different modes of transport and allowing them to provide real-time information on their location, time of arrival at the stop, journey time and more.

Similarly, smart transport facilitates “mobility mixes”, journeys that combine different modes of transport according to the context and individual needs: private car, public transport, shared bicycle or scooter…

Smart street lighting

City lighting accounts for between 40% and 70% of municipalities’ electricity bills. It is possible to reduce costs and emissions, improve energy efficiency and reduce light pollution by integrating IoT devices and connectivity for smart management and remote management.

Smart street lighting offers new business opportunities and benefits for both cities and their citizens.

Mobility management

Mobility control systems for vehicles and pedestrians provide timely information on the status and possible incidents affecting mobility. These systems calculate and even anticipate the most convenient route for public transport, drivers and pedestrians.

They also automatically manage traffic lights and other signals to adapt to different contexts and unforeseen events to optimise mobility and reduce congestion.

Efficient waste management

Technology is already making it possible to improve the management of municipal waste, improving the quality of the service, saving costs and reducing its environmental impact. Not all districts in the same city generate the same type of waste or the same amount of waste.

Technology makes it possible to obtain real-time information on the status of the containers.

It is possible to combine this information with a vehicle fleet management system to plan optimal collection routes depending on collection needs, areas, volumes, or types of waste.

Efficient municipal waste management is essential to improve recycling, promote the circular economy and reduce environmental impact.

Monitoring environmental parameters

Monitoring environmental parameters makes it possible to assess the quality of the urban environment and reduce environmental, light or noise pollution. Thanks to the deployment of IoT sensors, it is possible to monitor these data in real time in order to know the situation accurately and make better decisions.

This monitoring also makes it possible to measure values such as the concentration of CO2, harmful particles and other pollutants, or meteorological phenomena that can influence the functioning of the city.

* * *

If you want to know more about the present and future of Smart Cities, visit our new Smart Cities section of the Telefónica Tech website.

Cyber Security Weekly Briefing, 21 January – 3 February

Telefónica Tech    3 February, 2023

LockBit Green: new LockBit variant

Researchers at vx-underground have recently detected that a new ransomware variant, called LockBit Green, is being used by the operators of the LockBit ransomware.

This new variant would be the third one used by the group, after its inception with LockBit Red and its subsequent evolution to LockBit Black (also called LockBit 3.0). Several researchers have analysed the available samples of LockBit Green and found that this new variant is based on Conti’s source code.

Based on their analysis, they note that the ransom note used is that of LockBit 3.0, and that the .lockbit extension is no longer used, but a random one, when encrypting files on the victim’s system. The PRODAFT team has also shared Indicators of Compromise (IoCs) and a Yara rule for the new variant.

More info

* * *

​GitHub revokes compromised Desktop and Atom certificates

GitHub has taken the decision to revoke a number of certificates used for its Desktop and Atom applications after they were compromised in a security incident in December.

According to the company itself, the unauthorised access in December did not affect the platform’s services, however, a group of certificates were exfiltrated as a result. These certificates are password-protected, and so far, no malicious use of them has been detected.

Revoking these certificates will invalidate GitHub Desktop for Mac versions 3.0.2 to 3.1.2 and Atom versions 1.63.0 to 1.63.1. Users of these versions are advised to upgrade to the latest version in the case of Desktop, and to revert to earlier versions in the case of Atom. The changes will take effect on 2 February.

More info

* * *

PoC available for KeePass vulnerability

A vulnerability has recently been discovered in the KeePass password manager for which a PoC has already been released. The flaw, identified as CVE-2023-24055, allows threat actors with write access to a system to alter the XML configuration file and inject a malicious trigger that exports the database, with usernames and passwords, in plain text.

When a user opens KeePass and enters the master password to unlock the database, the export rule is triggered in the background and the contents are saved to a file accessible to the attackers. Although KeePass documented this behaviour in 2019 without classing it as a vulnerability, users are asking for the product to show a confirmation message before exporting, or to allow the feature to be disabled.

Bleeping Computer recommends ensuring that unprivileged users do not have access to any application files and creating a configuration file.

More info

* * *

Two new vulnerabilities in CISCO devices

Researchers at Trellix have warned of two vulnerabilities in Cisco devices. The first, identified as CVE-2023-20076 and with a manufacturer’s CVSS of 7.2, would allow an unauthenticated attacker to remotely inject commands into various devices.

The second bug, so far identified with Cisco bug ID CSCwc67015, would allow an attacker to remotely execute code and overwrite existing files. While both bugs were originally identified in Cisco ISR 4431 routers, they would affect other devices as well: 800 Series Industrial ISRs, CGR1000 Compute Modules, IC3000 Industrial Compute Gateways, IOS-XE-based devices configured with IOx; IR510 WPAN Industrial routers and Cisco Catalyst Access points (COS-APs).

Cisco has reportedly released security updates for the first vulnerability mentioned, and researchers urge affected organisations to upgrade to the latest firmware version available, and to disable the IOx framework if it is not needed.

More info

* * *

​Lazarus campaign against energy and healthcare companies

WithSecure has published extensive research on the latest campaign by the APT Lazarus, allegedly backed by North Korea. The campaign has been named “No Pineapple!” and in it the group has managed to steal 100GB of data from medical research, engineering and energy companies, among others.

According to WithSecure, Lazarus exploited vulnerabilities CVE-2022-27925 and CVE-2022-37042 in Zimbra to place a webshell on the victims’ mail server. Once inside the system they used various tools such as the Dtrack backdoor and a new version of the GREASE malware, which abuses the PrintNightmare vulnerability.

WithSecure was able to attribute the campaign to Lazarus because, in addition to the reuse of TTPs associated with the group, it discovered that the webshells communicated with an IP address located in North Korea.

More info

Featured photo: Brecht Corbeel / Unsplash

Cybersecurity in films: myth vs. reality with 10 examples

Martiniano Mallavibarrena    1 February, 2023

The multiple aspects of cybersecurity (attacks, investigations, defence, disloyal employees, negligence, etc.) have been part of the plot of countless movies and TV series for years. In today’s society, with part of the population born with mobile phones in their hands and universal Wi-Fi, talk of “hackers”, “malware” or “cyber-attacks” is commonplace and no one is surprised.

Both sides (the evil villains, or those who help them, and the victims, who are not always passive) are often caught in the middle of a cyber-epic struggle of good versus evil, in the form of investigative agencies, elite police units and other groups of “do-gooders” who save us from all evil (or try with all their might).

As with other technologies (particularly robotics and artificial intelligence), the film/TV production industry is not going to risk a big hit with audiences by being overly purist in the more technical details. As a result, we constantly see the most creative interpretations of the possibilities of technology and of each other’s abilities on the big screen.

10 examples of cybersecurity in cinema, television and streaming

We will use 10 films or TV/Streaming series to illustrate, in this article, how reality and fiction, when it comes to cyber security, can be separated by abysmal distances. The end justifies the means, as we all know.

1. Not everyone is a script kiddie: “WarGames” (1983)

Ever since going online was a matter of knowing the right phone number and having a modem set up, the archetype of the solitary, tech-savvy young techie who compulsively consumes knowledge and challenges himself by trying out new techniques of intrusion and compromise, often for the sheer pleasure of it, has been cultivated.

Although this profile of malicious actor exists and is common in today’s society (who hasn’t looked on YouTube for a tutorial on something?), it is not representative when it comes to mapping the really dangerous actors, where the leading roles belong to the professionals of organised crime, intelligence agencies, digital mercenaries, etc.

🔵 These people that we witnessed being born as icons in the classic film “WarGames” (1983) are a constant in our immediate surroundings, but beyond the pranks (some try to change their class grades as in the famous film) and hacktivism, they do not usually go beyond attempts at fraud, small scams on the Internet, etc. So, they are not really representative of the cybercrime sector.

2. Lone wolves and other profile features: “Sneakers” (1992)

In order to increase the drama of the script, we can all agree that “lone wolf” hackers (regardless of their age and gender) are very suitable characters for our film. Former members of intelligence agencies, elite hackers with a desire for revenge and a long etcetera make up a huge pool of candidates for the perfect script.

As with the first point, it is obvious that, while both profiles exist among malicious actors, most of today’s organised cybercrime is made up of thousands of mercenaries of all ages and types whose only goal is to make money and prosper within the organisation. Lone wolves (out for revenge or on a mission) do exist, but they certainly do not represent this group.

The film “Sneakers” (1992) is a nice example of how in “reality” these teams of experts (in this case, a charming team of ethical hackers) are put together. The same applies to police units and other groups: more experienced professionals combined with younger people (and in some cases, redeemed cybercriminals), all united with a common goal: to attack or defend (the famous metaphor of the red and blue teams).

3. Type fast, type better: “Matrix” (1999)

Perhaps one of the most comical effects in today’s cinema, when it comes to cybersecurity, is that every expert in the field must type at full speed, stringing together very long commands with complex instructions, without respite or error. Whether they are wearing gloves, injured, at a cash machine keyboard, or the world is collapsing around them.

🔵 Of the thousand and one examples of this circus-like agility, we can recall some scenes from the “Matrix” saga where several of its protagonists type (in some cases using real tools such as “nmap” with leather gloves and under extreme pressure) at breakneck speed, obtaining perfect results.

4. Immediacy of access: Jason Bourne (2016)

It is easy to remember scenes in recent productions where the protagonist has to enter a remote system (or a personal computer in front of him) that he does not know and of which he has no prior knowledge (the script has already given us this information to increase the complexity) and he succeeds without hesitation and in a few moments.

While it is true that, in many cases, it may be relatively easy for a trained and prepared person (both conditions are necessary) to perform an intrusion, it seems unlikely that in general it will be done in a few seconds, without errors, without downloading (almost never happens) any supporting tools, without checking existing vulnerabilities, etc.

That sort of magic universal password (no two-factor authentication or address-locking) is often the result of some prior work (e.g., sending a malware email that includes a password-capturing tool) or at least known vulnerability checks or a couple of trivial password tries.

🔵 The latest instalment of the “Jason Bourne” saga is littered with such scenes where the viewer must assume that the CIA bypasses all sorts of legal delays and ethical dilemmas in the relentless pursuit of its target as one after another, all systems are accessed with enviable comfort.

5. Prior knowledge of all types of systems and platforms: “Live Free or Die Hard” (2007)

Another recurring theme in films is the attacker’s universal knowledge of all kinds of systems and platforms that the victims use on a regular basis, and the apparent simplicity of using them: industrial control systems, air traffic control, nuclear weapons, electric lighting or autonomous cars.

However professional we believe the attackers to be (almost always elite hackers, three-letter agencies, etc.), it does not seem very convincing that, whatever the system, the actor moves with total agility through its console, as if connecting for the first time, ignoring that these systems have multiple access security measures that simply disappear and that the actor would first have had to install the necessary software on their computer. Even the language barrier (Mandarin, Arabic, Russian, etc.) is no obstacle: the attacker does not hesitate to choose the perfect option to turn off the power in half the state of California, without further checking.

🔵 The fourth instalment of the “Die Hard” saga, “Live Free or Die Hard”, is full of all kinds of poetic licences in terms of industrial control (the lighting in the tunnel, the power plant, the federal reserve, etc.).

6. Information connected between some systems and others: “NCIS” (2003–)

Another great reality in current information systems is that the format in which the information is treated is not standardised beyond the obvious, the clearest case being that of car number plates, telephone numbers or identification numbers (such as ID numbers).

It is therefore surprising that when our elite team (from the “good guys” side) gets its first piece of information (a blurred car number plate at a tollbooth), within seconds it obtains the position of the car, the mobile phone, the subject’s high school grades and his military record (as they were almost always members of the special forces before they became serial killers or mercenaries).

Considering the current population of the USA and that a combination of a first name and a single surname will almost always give thousands of results, it seems curious that the first face that appears on the screen when typing the name “John X. Smith” is exactly that of the villain (the photo will be recent, of course).

🔵 Series in which individuals are constantly being located, such as NCIS, often abuse these resources, and it is surprising that there is never a problem with the format of the data: telephone prefixes, postcodes, initials in proper names, etc.

7. With their bare hands: MacGyver (1985–1992)

Those of us who watched the TV series “MacGyver” in the 80s (a remake arrived a few years ago, for the new generations) smile every time a cybersecurity expert gets to work in our favourite film production without any initial resources.

In the scenes we see on the screen, our protagonist will have only a portable video game console (wireless connection, we assume, of course), an old mobile phone or the old PC of a library in some town in North Dakota. However, within minutes, he will have gained access to the federal reserve or the air traffic control centre at Washington airport (Dulles, D.C).

🔵 Some scenes in films such as “The Net” (1995) can be framed in this way, when the bad guys or the protagonist do all kinds of cybernetic balancing on computers used randomly anywhere.

8. Ubiquitous collateral information: “Enemy of the state” (1998)

Any “cyber” scene in today’s cinema usually involves infiltration of some remote system (bank, military environment, industrial control, etc.) to perform a necessary action (stealing money or cryptocurrencies, perhaps from the bad guys’ team) for a specific purpose (launching the missiles without human control).

To carry out these actions, our hero or heroine (or diverse team of people with multiple, complementary skills) will make clear to us their extensive knowledge of technology and use advanced penetration techniques (not always shown, but always intuited) until they achieve their goal and smilingly shout out the timeless classic: “We’re in!”.

On the way to a successful connection and subsequent actions, we will be able to see on the screen, surprisingly, countless drawings of parts, architectural diagrams of buildings, sewerage plans, power lines, private security systems, modules of a factory or power plant, etc.

No matter how old the building or environment and how private and protected the information on the screen is, the plans will show us all these pieces of information in an accelerated way to make us understand that despite the hacker’s skills, the collateral information shown covers the most “miraculous” part of the exploit.

🔵 In the interesting “Enemy of the state” (1998), the bad guys’ team (the NSA misdirected by an unscrupulous and unsupervised manager) makes use, time and again, of these miraculous resources to try to destroy the poor protagonist’s life.

9. We have our system perfectly prepared: “Blackhat” (2015)

Another of the great poetic licences of productions is that of the perfectly prepared “actor”. It doesn’t matter if the protagonist is in the middle of the desert armed only with a Swiss Army knife (see myth number 7) or if he is in his “lair” with his super laptop (let’s not forget the stickers, the low light and the hood) moving with total agility from one system to another, from one technology to another, while his fingers dance on a geek keyboard full of LED lights or stickers with emoticons.

Logically, everything would lose its magic, if the actor had to change tools many times, download a new utility, search in Github for some software of interest, etc.

🔵 In some blockbusters such as “Blackhat” we can see this kind of compulsive actions where it doesn’t matter the environment where we move, the attacker always has everything ready, the software installed, etc. Everything works perfectly, then we can see our star typing at full speed while things happen suddenly (without intermediate errors, of course).

10. Constant violation of legal requirements: “Criminal Minds” (2005–)

Although we can all understand that some police operations in cyberspace are especially critical and urgent (perhaps trying to prevent a terrorist attack at the last moment), all intelligence agencies, police units, etc., have to strictly follow the regulations that apply in that region and scenario (as well as a basic code of ethics) and therefore court orders, permissions from users, service providers, groups, etc., have to be requested.

Of course, it is not usually convenient for the agility of the script to have to “stop the action” every few steps, waiting for the “paperwork” and the presumed slowness of the corresponding judicial system.

🔵 The vast majority of cases in series such as “Criminal Minds” or “FBI” where the analyst jumps from flight reservations to credit card payments after seeing what they had for dinner at the nearby restaurant, seem hardly credible (from a legal perspective) considering the sequence of steps required in most countries that protect civil rights and privacy of citizens.

Conclusion

So, the next time we watch a streaming series, or a big movie premiere and a guy comes out typing fast in the dark, hiding his face with a hood while the world succumbs… you know what you must do: enjoy the show (which should always go on) and forget the level of realism used.

By the way, using the term hacker always for the case we all imagine is as inaccurate as it is unfair, but we’d better look at that in another post. 😊

Featured photo: Felipe Bustillo / Unsplash

Smart urban lighting: business opportunities and benefits

Nacho Palou    31 January, 2023

Smart street lighting is one of the pillars of smart cities. In fact, it is one of the best examples of what the term Smart City means: the application of next-generation digital technologies to improve the lives of citizens with more efficient and sustainable public services.

The concept of smart street lighting brings together all these advantages thanks to the confluence of LED lighting technology, connectivity, IoT (Internet of Things) devices and management platforms with Artificial Intelligence (AI).

In this way, each luminaire or streetlight in the public lighting network is connected via 5G or NB-IoT communications networks to send the data captured by different sensors (lighting, environmental or consumption, among others) to a platform capable of automatically optimising the operation and individual efficiency of each luminaire.
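Purely as an illustration of that data flow (the endpoint, field names and readings below are placeholders; a real deployment would use the platform's own protocol, for example MQTT over NB-IoT), a luminaire-side agent could report something like this:

```python
# Illustrative sketch: a connected luminaire packaging sensor readings and
# sending them to a management platform. Endpoint and fields are placeholders.
import json
import time

import requests

PLATFORM_URL = "https://platform.example/api/luminaires/telemetry"  # hypothetical endpoint

def build_payload(luminaire_id: str) -> dict:
    """Package the readings a connected luminaire could report."""
    return {
        "id": luminaire_id,
        "timestamp": int(time.time()),
        "ambient_lux": 12.4,   # light sensor
        "power_w": 38.0,       # instantaneous consumption
        "presence": False,     # presence / motion sensor
    }

def send(payload: dict) -> None:
    try:
        requests.post(PLATFORM_URL, json=payload, timeout=10)
    except requests.RequestException as exc:
        print(f"could not reach the platform: {exc}")

if __name__ == "__main__":
    payload = build_payload("luminaire-0042")
    print(json.dumps(payload, indent=2))
    send(payload)
```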

Why should energy be consumed when there is no one in the street?

Smart street lighting can not only adjust the intensity of light depending on the presence of people on the street.

It also has the ability to predict, thanks to Artificial Intelligence algorithms, when and with what intensity it will be necessary to switch on each luminaire. Or the lighting of a building or monument, for example. This improves the efficiency of the network and also the feeling of safety and the perception of service for citizens.

Smart street lighting has the potential to reduce electricity consumption and pollutant emissions significantly. These savings are in addition to the lower consumption implicit in LED light sources.

LED lighting is not only cheaper and more efficient, but also has a significantly longer lifetime than conventional light sources, gas or filament lighting.

Savings and new business opportunities for municipalities

Smart street lighting allows municipalities, councils and city governments to take advantage of the ubiquity of streetlights to provide additional services, including:

  • Sensors to combat light, noise and environmental pollution.
  • Charging points for electric vehicles.
  • Security and traffic control cameras.
  • Wifi access points to municipal networks.
  • Recharging points for public transport cards.
  • Information or advertising panels.
  • Mobile phone and 5G antennas.

Some of the additional services have the potential to generate extra revenue for municipalities.

It also means better service and attention for citizens: fewer warning calls or complaints about broken luminaires, more efficient maintenance, and so on.

And, of course, it increases the attractiveness of the city. Both for citizens and tourists, by promoting social and cultural activities, as well as for businesses and companies.

Benefits of Smart urban lighting for citizens

A smart street lighting network has a positive impact on citizens and on the image of cities. Adapting the lighting to the needs of the streets, for example, improves the safety of public spaces and the mobility of both people and vehicles.

Smart public lighting also reduces light pollution and the nuisance that fixed and constant intensity lighting sometimes causes to nearby premises and dwellings —including flora and fauna— improving the quality of life of citizens.

It also benefits citizens by enabling the new services already mentioned, such as charging points for electric vehicles for residents without a garage or domestic charging point.

Featured photo: Vlado Paunovic / Unsplash

Resilience, key to Cloud-Native systems

Daniel Pous Montardit    30 January, 2023

In the first post of the Cloud-Native series, What is a Cloud-Native Application?, or what it means that my software is Cloud Native, we presented resilience as one of the fundamental attributes that help us to ensure that our systems are reliable and operate with practically no service interruptions.

Let’s start by defining resilience:

It is the ability to react to a failure and recover from it to continue operating while minimising any impact on the business.

Resilience is not about avoiding failures, but about accepting them and building a service in such a way that it is able to recover and return to a fully functioning state as quickly as possible.

Cloud-Native systems are based on distributed architectures and are therefore exposed to a larger set of failure scenarios compared to the classical monolithic application model. Examples of failure scenarios are:

  • Unexpected increases in network latencies that can lead to communication timeouts between components and reduce quality of service.
  • Network micro-outages causing connectivity errors.
  • Downtime of a component, with restart or change of location, which must be managed transparently to the service.
  • Overloading of a component that triggers a progressive increase in its response time and may eventually trigger connection errors.
  • Orchestration of operations such as rolling updates (system update strategy that avoids any loss of service) or scaling/de-scaling of services.
  • Hardware failures.

Although cloud platforms can detect and mitigate many of the failures in the infrastructure layer on which the applications run, to obtain an adequate level of resilience of our system, it is necessary to implement certain practices or patterns at the level of the application or software system deployed.

Let’s talk now about which techniques or technologies help us achieve resilience in each of the layers presented: infrastructure layer and software layer.

Resilient Infrastructure

Resilience at the hardware level can be achieved through solutions such as redundant power supplies, redundant disk arrays (RAID), etc. However, only certain failures are covered by these protections, and we have to resort to other techniques, such as redundancy and scalability, to reach the desired levels of resilience.

Redundancy

Redundancy consists, as the word itself indicates, of replicating each of the elements that make up the service, so that any task or part of a task can always be performed by more than one component. To do this, we must add a mechanism, such as a load balancer, to distribute the workload between these duplicate ‘copies’ within each work group. The level of replication a service needs will depend on its business requirements, and will affect both the cost and the complexity of the service.
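The simplest distribution strategy is round robin: each incoming request is handed to the next replica in turn. A minimal sketch of that idea (the replica addresses are placeholders):

```python
# Minimal sketch of round-robin distribution across redundant replicas.
from itertools import cycle

replicas = cycle(["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"])  # placeholder addresses

def next_backend() -> str:
    """Return the replica that should receive the next request."""
    return next(replicas)

for _ in range(5):
    print("routing request to", next_backend())
```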

It is recommended to identify the critical flows within the service and to add redundancy at each point of those flows, thus avoiding single points of failure: components of our system whose failure alone would cause a total system failure.

It is also common to add multi-region redundancy with geo-replication of the information and distribute the load by means of DNS balancing, thus directing each request to the appropriate region according to the distance from its geographical origin.

Scalability

Designing scalable systems is also fundamental to achieve resilience.

Scalability, the capacity to adjust resources to the workload by increasing or decreasing their number, is fundamental to avoid failure situations such as communication timeouts due to excessive response times, services collapsing under the volume of work, or the degradation of storage subsystems due to massive information ingestion.

There are two types of scaling:

  1. Vertical scaling or scale up: increasing the power of a machine (CPU, memory, disk, etc.).
  2. Horizontal scaling or scale out: adding more machines.

The ability to scale a system horizontally is closely related to redundancy. We can see the former as a level above the latter: a non-redundant system cannot be horizontally scalable and, in turn, we achieve horizontal scalability on top of redundancy when we add feedback that determines, from the real-time load of the system, how far it should grow or shrink its resources to adjust optimally to the demand at any given moment.

Note that at this point we are also establishing a relationship with the observability capacity, which will be responsible for providing the necessary metrics to monitor the load and automate the auto-scaling systems.
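As a simple illustration of that feedback loop (the thresholds and the scaling formula here are illustrative; real platforms such as Kubernetes apply a similar proportional rule), an autoscaling decision could look like this:

```python
# Minimal sketch: derive the desired number of replicas from an observed
# CPU utilisation metric. Thresholds and limits are illustrative.
import math

def desired_replicas(current: int, cpu_utilisation: float,
                     target: float = 0.6, minimum: int = 2, maximum: int = 10) -> int:
    """Scale proportionally so that average CPU stays near the target utilisation."""
    if cpu_utilisation <= 0:
        return minimum
    wanted = math.ceil(current * cpu_utilisation / target)
    return max(minimum, min(maximum, wanted))

print(desired_replicas(current=3, cpu_utilisation=0.9))  # -> 5, scale out
print(desired_replicas(current=6, cpu_utilisation=0.2))  # -> 2, scale in
```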

There are libraries in many languages to implement these techniques and we can also resort to more orthogonal solutions such as Service Mesh to facilitate this task and completely decouple our business logic.

Resilient Software

As mentioned at the beginning of this post, it is essential to incorporate resilience into the design of the software itself in order to face the challenges of distributed systems successfully. The service logic must treat failure as a normal case and not as an exception: it must define how to act when something fails and determine the contingency action when the preferred path is not available. The latter is known as the fallback action, or the backup behaviour for that failure case.

Architectural patterns

Apart from the fallback pattern, there is a set of architecture patterns oriented towards providing resilience to a distributed system (a minimal code sketch combining several of them follows this list), such as:

  • Circuit Breaker: this pattern helps a service to recover or decouple from both performance drops due to subsystem overloads and complete outages of parts of the application.
    • When the number of continuous failures reported by a component exceeds a certain level, it is the prelude to something more serious about to happen: the total failure of the affected subsystem. By temporarily blocking further requests, the component in trouble will have a chance to recover and avoid further damage. This temporary cushion may be sufficient for the auto-scaling system to have been able to intervene and replicate the overloaded component, thus avoiding any loss of service to its clients.
  • Timeouts: the mere fact of limiting the time in which the sender of a request will wait for its response may be the key to avoid overloads due to the accumulation of resources, thus facilitating the resilience of the system.
    • If a microservice A requires microservice B and the latter does not respond within the defined timeout, as there is no indefinite wait, microservice A will regain control and can decide whether to continue trying or not. If the problem has been caused by a network outage or an overload of microservice B, a retry may be sufficient to redirect the request to the already recovered instance of B or to a new instance free of load. And in case of no further retries, microservice A can free resources and execute the defined fallback.
  • Retries: the two previous techniques, circuit breakers and timeouts, have already indirectly introduced the importance of retries as a base concept for resilience. But is it possible to incorporate retries in communications between components for free?
    • Let’s imagine, continuing with the previous example, that a microservice A makes a request to B, and due to a punctual network outage, B’s response does not reach A. If A incorporates retries, what will happen is that when the waiting time of that call (the timeout) ends, it will recover control and make the request to B again, so B will do the work in duplicate with the consequences that may arise. For example, if that request were to subtract a purchase from the stock of products, the output would be recorded in duplicate and therefore leave an incorrect balance in the stock books. It is because of this situation that the concept of idempotence is introduced. An idempotent service is characterised by being immune to duplicate requests, i.e., the repeated processing of the same request does not cause inconsistencies in the final result, giving rise to “safe retries”.

      The immunity is obtained based on a design that contemplates idempotency from the beginning, for example, in the previous case of the stock update, the request should include a purchase identifier, and microservice B should register and validate that this identifier has not been completely processed before trying again.
  • Cache: a cache that automatically stores the responses of a microservice helps both to reduce the pressure on that service and to provide a fallback in the event of certain anomalies. In the case of a retry, the cache also ensures that the component does not have to redo a previously completed job and can return the result directly.
  • Bulkhead: this last pattern consists of dividing the distributed system into “isolated” and independent parts, also called pools, so that if one of them fails, the others can continue to function normally.
    • This architectural tool can be seen as a contingency technique, comparable to a firewall or watertight compartments that divide ships into parts and prevent water from jumping between them. It is advisable, for example, to isolate a set of critical components from other standards. It should also be appreciated that such divisions can sometimes lead to losses in resource efficiency, as well as adding to the complexity of the solution.
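As a minimal sketch of how several of these patterns combine in practice (the service URL, thresholds and back-off values below are illustrative, not a production implementation), a caller can wrap a dependency with a bounded timeout, a few retries and a simple circuit breaker:

```python
# Minimal sketch: timeout + retries + circuit breaker around a dependency call.
# URL, thresholds and back-off values are illustrative.
import time

import requests


class CircuitBreaker:
    def __init__(self, failure_threshold: int = 3, reset_after: float = 30.0):
        self.failures = 0
        self.failure_threshold = failure_threshold
        self.reset_after = reset_after
        self.opened_at: float | None = None

    def allow(self) -> bool:
        # Closed circuit, or an open circuit whose cool-down has expired.
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at, self.failures = None, 0  # half-open: allow one try
            return True
        return False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
            return
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()


def call_with_resilience(url: str, breaker: CircuitBreaker, retries: int = 2):
    if not breaker.allow():
        return None  # fast fallback: the dependency is considered down
    for attempt in range(retries + 1):
        try:
            response = requests.get(url, timeout=2)  # bounded wait (timeout pattern)
            breaker.record(success=True)
            return response
        except requests.RequestException:
            breaker.record(success=False)
            time.sleep(0.2 * (attempt + 1))  # simple back-off between retries
    return None  # no luck: the caller executes its fallback action


breaker = CircuitBreaker()
print(call_with_resilience("http://microservice-b.internal/stock", breaker))  # hypothetical service
```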

Resilience tests

As mentioned above, in a distributed system there are so many components interacting with each other that the probability of things going wrong is very high. Hardware, network, traffic overload, etc. can fail.

We have discussed various techniques to make our software resilient and minimise the impact of these failures. But do we have a way to test the resilience of our system? The answer is yes, and it’s called “Chaos Engineering”.

But what is “Chaos Engineering”?

It is a discipline of infrastructure experimentation that exposes systemic weaknesses. This empirical process of verification leads to more resilient systems and builds confidence in their ability to withstand turbulent situations.

Experimenting with Chaos Engineering can be as simple as manually executing kill -9 (command to immediately terminate a process on unix/linux systems) on a box within a test environment to simulate the failure of a service. Or it can be as sophisticated as designing and running experiments automatically in a production environment against a small but statistically significant fraction of live traffic.
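A minimal, intentionally crude version of such an experiment (intended only for disposable test environments; the tooling here assumes Docker is available) could be:

```python
# Minimal chaos sketch: kill one running container at random in a TEST
# environment and let monitoring confirm that the service recovers.
import random
import subprocess

def running_containers() -> list[str]:
    out = subprocess.run(["docker", "ps", "-q"], capture_output=True, text=True, check=True)
    return out.stdout.split()

def kill_random_container() -> None:
    containers = running_containers()
    if not containers:
        print("nothing to kill")
        return
    victim = random.choice(containers)
    subprocess.run(["docker", "kill", victim], check=True)
    print(f"killed container {victim}; watch your dashboards and alerts")

if __name__ == "__main__":
    kill_random_container()
```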

There are also supporting libraries and frameworks, such as Chaos Monkey, a framework created by Netflix that randomly terminates virtual machines or containers in production environments, in line with the principles of Chaos Engineering.

It is necessary to identify system weaknesses before they manifest themselves in aberrant behaviour that affects the entire system.

Systemic weaknesses can take the form of incorrect backup configurations when a service is unavailable; excessive retries due to poorly tuned timeouts; service outages when a component of the processing chain collapses under traffic saturation; massive cascading failures caused by a single point of failure; etc.

Conclusions

The most traditional approach when building systems was to treat failure as an exceptional event outside the successful execution path, and therefore it was not contemplated in the basic design of the heart of the service.

This has changed radically in the cloud-native world, given that in distributed architectures, failure situations appear normally and recurrently in some part of the whole, and this must be considered and assumed from the outset and within the design itself.

Thus, when we talk about resilience, we refer to this characteristic that allows services to respond to and recover from failures, limiting the effects on the system as a whole as much as possible and reducing the impact on it to a minimum.

Achieving resilient systems not only has an impact on the quality of the service or application, but also makes it possible to gain more cost efficiency and, above all, not lose business opportunities due to loss of service.

Featured image: Alex Wong / Unsplash