AI of Things (IV): You can already maximise the impact of your campaigns in the physical space

Mariano Banzo Martínez    26 April, 2022

One of the biggest ambitions companies have when communicating their advertising campaigns, offers and services is to adapt those messages to the profile of the people receiving them at any given moment.

In the online world, for many years, browsing data, cookies, etc. have been used to personalise the advertisements displayed on websites; two people with different profiles accessing the same website will not see the same advertising. In some cases, although the advertising is the same, the presentation is different to make it more attractive.

This type of strategy affects us on a daily basis. On some streaming platforms, for example, the same series has different thumbnails that highlight different features of the series, making it attractive to people with different interests.

In the first article of the AI of Things series, we talked about the two main pillars of transforming the physical world, sensorisation and data exploitation.

In this article we are going to look at how knowledge of customer profiles can have an impact on business through communication.

How can I adapt my campaigns?

This is not a technical article, but we must first define the technological scenario that will allow us to have an impact on our business. There are currently several technologies that we can deploy in our physical space to obtain information about our customers; the most important include:

  • Video analytics: using cameras, and always in compliance with GDPR, it provides information on the number of customers and parameters such as age, gender, whether they come alone or with their family, movement within the space, etc.
  • Big Data: data aggregated from various sources that gives us profiles with demographic and socio-economic information.
  • WiFi trackers: by detecting mobile devices with WiFi enabled, they tell us the number of customers and how long they stay, among other data.
  • Proximity marketing: the customer has an app installed on their mobile device, which may or may not include a loyalty card. Through WiFi or beacons we can know when the customer is in the shop and reach them with messages based on their purchase history, etc.
  • Audience measurement: by installing cameras on dynamic marketing screens, we can understand the audience for each piece of content and its demographic profile.

These technologies are deployed in the physical space according to the objectives set and are linked to a dynamic marketing platform, which manages the content broadcast on the screens installed at the different points of sale.

Campaigns carried out with dynamic marketing, whether on screens or on customers’ mobile devices, have a greater impact and a higher recall rate than campaigns carried out on traditional media. This is due to the channel itself and to the fact that the content is designed specifically for this medium.

Until recently, these campaigns were scheduled according to start and end dates and to days and times of the week, based on managers’ empirical observations, the company’s strategic targets and the like. This was already an improvement over static campaigns, but still far from the adaptive capabilities of online campaigns.

Technologies and strategies to reach the right audience

Over the last year and a half, thanks to the technologies mentioned above, it has become possible to combine strategic content scheduling with rules that change the content according to the audience present in the space at any given moment, so that the content shown is the most appropriate for the profiles of that audience.
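To make the idea concrete, here is a minimal sketch of what such audience-based rules could look like. The profile fields, thresholds and content names are invented for illustration and do not reflect any real platform’s logic.

```python
from dataclasses import dataclass

@dataclass
class AudienceSnapshot:
    """Aggregated, anonymous profile of the people currently in the space."""
    dominant_age_group: str   # e.g. "18-30", "31-50", "51+"
    family_ratio: float       # share of visitors accompanied by family
    footfall: int             # number of people currently detected

def pick_content(snapshot: AudienceSnapshot, scheduled_default: str) -> str:
    """Apply simple rules and fall back to the scheduled campaign."""
    if snapshot.footfall == 0:
        return scheduled_default          # nobody in front of the screen
    if snapshot.family_ratio > 0.5:
        return "family_promotions_spot"   # mostly family groups
    if snapshot.dominant_age_group == "18-30":
        return "new_collection_teaser"    # young audience
    return scheduled_default

# A young audience without families triggers the teaser instead of the default
print(pick_content(AudienceSnapshot("18-30", 0.2, 14), "seasonal_campaign"))
```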

The implications this has for the business are greater than they may seem and affect various aspects, always with the aim of increasing the conversion rate and improving the experience in the space.

Showing the right product to the right customer profile increases conversion into sales of that product, while presenting tactical offers drives a higher percentage of impulse purchases, bearing in mind that the campaign runs in the very place where the purchase will be made.

This is already a tremendous step forward compared with static campaigns, but it still requires analysing the data and, depending on what is observed in terms of customer trends, product targeting, time of year and so on, designing a strategy and choosing the content to be played so that the conditions set are met.

Where are we heading?

The future we envision at Telefónica Tech IoT & Big Data is based on our own intelligent spaces platform, Spotdyna, and on the new Big Data and analytics capabilities of the AI of Things platform: an algorithm able to orchestrate, at all times, the content broadcast on the dynamic marketing platform, drawing on all this data and applying exhaustive labelling of the content that describes its characteristics both objectively and subjectively.

These algorithms will take into account various inputs, such as the day of the year, the time, the weather, past sales on days with a similar profile, analysis of the customers currently in the space, forecasts, etc.

This will simplify the content programming process and make it more efficient: all we would have to do is tag the content and make it available as a pool. The machine learning algorithm, fed with all the sensor, Big Data and other environmental data, would make the best decision at each moment and also measure the impact of that decision to continue learning, making better and better decisions and offering content, offers and services better adapted to the audience and environment, achieving higher sales conversion rates.
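As a rough illustration of that orchestration loop (not Telefónica’s actual algorithm), the sketch below trains a model on a synthetic log of context, content shown and conversions, then scores every item in the content pool for the current context and picks the most promising one. All feature names, tags and data are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Historical log: [hour, outdoor temperature, share of family groups, content shown]
X_hist = np.column_stack([
    rng.integers(9, 22, 500),    # hour of day
    rng.normal(18, 6, 500),      # outdoor temperature
    rng.random(500),             # share of family groups in the space
    rng.integers(0, 3, 500),     # which tagged content was on screen (0, 1, 2)
])
y_hist = rng.integers(0, 2, 500)  # did a purchase follow? (synthetic labels)

model = GradientBoostingClassifier().fit(X_hist, y_hist)

def choose_content(hour: int, temperature: float, family_share: float,
                   content_pool=(0, 1, 2)) -> int:
    """Score every candidate content for the current context and pick the best."""
    candidates = np.array([[hour, temperature, family_share, c] for c in content_pool])
    conversion_prob = model.predict_proba(candidates)[:, 1]
    return int(content_pool[int(np.argmax(conversion_prob))])

print(choose_content(hour=17, temperature=24.0, family_share=0.6))
```

In a real deployment the observed outcome of each decision would be appended to the log, so the model keeps learning from its own choices.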

The path of evolution is set and we are taking the first steps of this new paradigm.

If you want to know more applications of the fusion of the Internet of Things and Artificial Intelligence, or what we call AI of Things, you can read the other articles in the series.

Cyber Security Weekly Briefing 16–22 April

Telefónica Tech    22 April, 2022

Fodcha: new DDoS botnet

360netlab and CNCERT researchers have discovered a new botnet focused on conducting denial-of-service attacks, and which is rapidly spreading on the Internet.

This new botnet has been named Fodcha because its first C2 was in the folded[.]in domain and because it uses the ChaCha algorithm to encrypt network traffic.

It spreads through the exploitation of n-day vulnerabilities in Android devices, GitLab, the Realtek Jungle SDK, Zhone routers and Totolink routers, among others, as well as through the compromise of weak Telnet/SSH passwords using the brute-force tool Crazyfia.

Fodcha’s activity began in January, with a significant increase in attacks from 1 March and a reported intensification from the end of March. In fact, around 19 March there was a change in the botnet’s versions which, according to the researchers, was due to the shutdown of the old servers by their cloud providers.

INCONTROLLER/PIPEDREAM: new malware targeting ICS/SCADA environments

A new malware targeting industrial control systems (ICS) and supervisory control and data acquisition (SCADA) systems has recently been discovered. This malware could lead to system outages, degradation or even destruction.

Mandiant researchers have labelled this malware as INCONTROLLER, while Dragos’ team has named it PIPEDREAM, noting that it was developed by the threat actor CHERNOVITE.

This malware stands out for having a whole toolset to attack its victims’ systems. It does not exploit a specific vulnerability, but rather takes advantage of native functionalities of the affected ICS systems, which is why both the researchers and several US security agencies (CISA, the FBI and the NSA) have published a series of detection and protection measures.

It is worth noting that while investigations have found that the malware could target different manufacturers, it contains modules specifically developed for Schneider Electric and Omron programmable logic controllers (PLCs).

HOMAGE: zero-click vulnerability in iOS used in espionage campaign

The Citizen Lab team has published an investigation detailing an espionage campaign carried out between 2017 and 2020, which they have named Catalangate, and which involved the exploitation of several vulnerabilities in iOS.

The most relevant is the use of a new exploit for a zero-click vulnerability in iOS used to infect devices with spyware belonging to NSO Group. This vulnerability, named HOMAGE, affected an iMessage component in iOS versions prior to 13.1.3 and was fixed in iOS 13.2 (it should be noted that the latest stable version of iOS is 15.4).

Likewise, researchers have also detected the use of other vulnerabilities: another zero-click vulnerability discovered in 2020 and called KISMET, which affected iOS versions 13.5.1 and 13.7, as well as another, already patched, in WhatsApp (CVE-2019-3568).

As a result of this investigation, it has been detected that at least 65 people have been infected with the Pegasus and Candiru spyware.

Vulnerabilities in ALAC audio encoding format

Researchers at Check Point have announced several vulnerabilities in Apple Lossless Audio Codec (ALAC), also known as Apple Lossless, an audio encoding format.

Exploitation of the discovered flaw could allow an attacker to remotely execute code on a vulnerable device by tricking the user into opening a manipulated audio file – an attack they have named ALHACK.

ALAC was initially developed by Apple, and in late 2011 the firm made it open source; it has since been incorporated into a multitude of devices and software. Apple has updated its proprietary version several times since then, but the shared code has not been patched.

It is therefore to be assumed that all third-party vendors using the initial code provided by Apple in 2011 have a vulnerable version.

According to the researchers, this is exactly what happened in the case of Qualcomm and MediaTek, which are said to have incorporated the vulnerable code in the audio decoders used by more than half of today’s smartphones.

The flaws were disclosed responsibly: before making its discovery public, Check Point alerted MediaTek and Qualcomm, and both firms fixed the vulnerabilities in December 2021 (CVE-2021-0674 and CVE-2021-0675 in the case of MediaTek, and CVE-2021-30351 in the case of Qualcomm).

Technical details of the vulnerability will be made public next May at the CanSecWest conference.

Windows 11 security improves and joins Zero Trust

Sergio de los Santos    18 April, 2022

Microsoft has just announced cybersecurity improvements for Windows 11, despite the system having been on the market since October 2021. We are going to analyse the new functionalities, some of them old and even well known, but now applied by default or substantially improved.

Of course, the overall strategy had to be built around the fashionable concepts of Zero Trust and hybrid work, organised in several layers, and that is how Microsoft has arranged it. Let’s analyse them broadly, as not many technical details are known yet.

Image: Zero Trust Approach in Windows 11

Hardware: Pluton

Pluton is a processor dedicated solely to security, embedded in Qualcomm and AMD Ryzen chips: in effect, a TPM built directly into the processor that stores, for example, BitLocker or Windows Hello keys. What is it for, and how does it improve on current TPMs? Being embedded prevents someone from physically opening the device and “sniffing” on the bus the information that travels between the TPM and the processor.

After all, as complicated as it sounds, it is possible to trap BitLocker passwords by connecting a piece of hardware to the processor and reading this traffic with a certain program. In fact, during the official presentation of the functionality, there is a quite practical demonstration of the attack process.

The attack to get the BitLocker password of a computer to which you have physical access

Windows 11 does not work without a TPM, but now it can also benefit from that TPM on the processor itself. In addition, Pluton’s firmware will be updated through Windows’ own updates. It will even be made open source so that it can be used by other operating systems.

Config Lock

Config Lock is simple to explain. In MDM-managed systems there was already Secured-Core PC (SCPC), a configuration that allowed the device to be controlled and managed by company administrators. With Config Lock, there is no window of opportunity between a user changing a security setting and the enforcement of the security policy imposed by the administrators.

If the user disables any security feature, it immediately reverts to the state configured by the policy designer. The configuration is thus “locked” and there is no need to wait even minutes for it to be reversed.

Personal Data Encryption

An interesting new feature. It essentially encrypts files on top of BitLocker, with a layer of encryption that is also invisible to the user, who does not have to remember or run anything to decrypt the data: it is accessible without any problem after logging in to Windows with Hello. If the user has not logged in with Hello, the files remain encrypted and cannot be accessed. What is this for?

As the example in the presentation shows, it prevents attacks that bypass the lock screen through direct access to unprotected DMA memory. An attacker who has not authenticated to the system through the “usual” channels, but has bypassed the lock screen, will not be able to access the files thanks to PDE.

One layer above BitLocker’s cold encryption sits PDE for hot encryption. The PDE key is not known to the user: it is simply wiped from memory when the system is locked and made available again when the session is unlocked with the usual login. It would also serve as additional security if an attacker bypasses BitLocker. It seems to clash or overlap somewhat with the existing EFS functionality.

How is this implemented? If an attacker tries to access the data without being authenticated as the user (by bypassing the lock screen or mounting the disk on another computer), a closed padlock appears on the files along with a message denying access.

File cannot be accessed thanks to PDE

Smart App Control

SAC seems very much oriented towards checking the signature and certificates of the manufacturer of the binaries. It will try to determine whether a binary is correct (with a valid certificate) before it even goes through Windows Defender, adding an extra layer of security. SAC is AI-based, which implies telemetry. Microsoft seems to be moving towards requiring by default that programs are signed or downloaded from a trusted repository, as macOS and Android already do.

It improves the usual SmartScreen, where Windows, thanks to its telemetry, tells you whether an app is legitimate or not. It also improves AppLocker, which is more static. SAC will be based on AI hosted in the cloud, learning from the user. In fact, for those who want to activate it, it requires a reinstallation of the system so that it can learn from the beginning which programs are common on that computer.

Smart App Control thinks the application is untrustworthy and sends you to the official Store

Enhanced phishing protection for Microsoft Defender

This is perhaps one of the most interesting measures. Until now, SmartScreen protected the system from malicious URLs or suspicious domains via the browser (or, in professional versions, by other means). Now it goes further: Windows protects passwords on several levels, always watching where they are used or sent, whether that is a visible URL, an internal URL (where the password actually travels) or even insecure storage.

On the one hand, it observes the network connections of any application (including Teams) and, if it concludes that the password is travelling to a domain it should not, it alerts the user, even if that is not the main URL of the domain being visited. The image shows how a page pretending to be the Office login embedded in Teams is actually (the connection is highlighted in the Fiddler sniffer) carrying the Office password to another domain.

Process of detecting that the password travels somewhere else it shouldn’t

However, it goes further. If you happen to store passwords in a TXT file in Notepad, you will be alerted to the error. Even worse, if you reuse a password known to the operating system (in the picture, for example, on LinkedIn), it will also alert you to the problem it could pose.

This way, Windows as an operating system does not treat the password as just another string but knows it at all levels and monitors it throughout its use within the operating system. Could it lead to false positives with password storage apps?

Alerts when reusing the password on LinkedIn and when storing it in a TXT file

All these options can be disabled by the user.

How to activate or deactivate these functions

Windows 11 also enables VBS, or virtualisation as a security feature, by default. Since the introduction of Hyper-V in 2008, Microsoft’s software that takes advantage of the native virtualisation capabilities of Intel and AMD processors, this functionality has been used to improve security. This strategy is called Virtualization-Based Security, or VBS. It focuses on virtualising memory to isolate processes from each other as much as possible.

If an attacker tries to exploit a flaw in the kernel and operates from there, there is an even higher (or lower, depending on how you look at it) abstraction layer with even more power than the kernel, which makes it possible to block processes or access to certain resources even when the attacker already has ring 0 privileges. Hence its usefulness. This is implemented with hypervisor-protected code integrity (HVCI), which prevents dynamic code from being injected into the kernel (as WannaCry did).

In turn, this will allow Credential Guard (not new, but underused) and LSASS protection to work directly, so that unsigned code is not loaded into this crucial process; this protection is also an old acquaintance (RunAsPPL in the registry, basically a defence against Mimikatz). All of these, despite being already known, will be enabled as standard in Windows 11.

Edge Computing Made Simple

Carlos Rebato    14 April, 2022

Edge computing is one of the technologies that will define and revolutionise the way in which humans and devices connect to the internet. It will affect industries and sectors such as connected cars, video games, Industry 4.0, Artificial Intelligence and Machine Learning. It will make other technologies such as Cloud and the Internet of Things even better than they are now. As you are likely to hear about the term quite often in the coming years, let’s take a closer look at what Edge Computing is, explained in simple terms.

In order to understand what Edge Computing is, we first need to understand how technologies such as Cloud Computing work. What happens every time our PC, smartphone or any other device connects to the Internet to store or retrieve information from a remote data centre?

What Is Cloud Computing

The cloud is so present in our lives that you probably use it without even realising it. Every time you upload a file to a service like Dropbox, every time you check your account in the bank app, every time you access your email or even every time you use your favourite social network, you are using the cloud. To simplify it a lot, we can say that using the cloud consists of interacting with data that is on a remote server and which we access thanks to the internet.

When we do this, the procedure is more or less as follows: your device connects to the Internet, either through a landline or wireless network. From there, your internet provider, usually an operator such as Telefónica, takes the data from your device to the destination server, using an IP address or a web address (e.g., dropbox.com or gmail.com), to identify the site to which the information should be sent.

The Journey of The Data Until It Is Processed in The Cloud

The server in question processes your data (processing is a key term here, as we will see), operates on the information and returns a response. For example: when you connect to Gmail via your device, you ask the Google server to show you the current status of your inbox, it processes your request, queries if you have new mail, and returns the response you see on your screen. As the data is in the cloud, it doesn’t matter what device you use to send it.

This journey is, in a very simplified way, the one that data makes from devices to servers in the cloud. This is true for every device that connects to the server.

Although it seems simple, this “journey” of information is a marvel of technology that requires a whole series of protocols and elements arranged in the right place. However, it also has some disadvantages. Let’s say, for example, that you live in Spain and the cloud server in question is in San Francisco. Each time you connect, your data has to make the outward journey through the network of your ISP and other operators, wait for data processing at the destination server(s), and then make the return journey.

Leaving aside the fact that servers are not usually that far away, for many of the things we use the cloud for today this is perfectly normal and valid: the times involved are so low (we are talking about milliseconds) that we don’t even notice them. The problem arises in certain use cases where every millisecond is crucial and we need the latency, the server’s response time, to be as low as possible. Many of these scenarios have to do with the Internet of Things.

Why IoT Matters

The Internet of Things, or IoT, is the system made up of thousands and thousands of devices, machines and objects interconnected to each other and to the Internet. With such a large number, it is logical to assume that both the volume of data generated by each of them and the number of connections to the servers will increase exponentially.


Some of the objects already regularly connected to the Internet of Things include light bulbs, thermostats, industrial sensors that control production in factories, smart plugs, smart speakers with voice assistants such as Movistar Home, Alexa and Google Home, and even cars such as Tesla’s.

The thing is that every time one of these devices connects to the cloud it makes a journey similar to the one explained above. For the moment and in most cases that is enough, but in some cases that journey is too long for the speed and immediacy that we could get if the cloud were simply closer to us.

In other words, we still have a lot of room for improvement. The possibilities that can be realised by bringing the cloud closer to where the data is generated are simply incalculable. This is precisely where Edge Computing comes into play.

The Advantages of Edge Computing

The best definition of Edge Computing is the following: it is about bringing the processing power as close as possible to where the data is being generated. In other words, it is about bringing the cloud as close as possible to the user, to the very edge of the network.


What matters when we talk about the edge of the network is that we bring the ability to process and store data closer to the users.

This makes it possible to move capabilities that were previously “far away” to a server in the cloud, much closer to the devices. It’s a paradigm shift that changes everything. The functions are similar, but because the processing happens much closer, the speed shoots up, the latency is reduced and the possibilities multiply. So you can enjoy the best of both worlds: the quality, security and reduced latency of processing on your PC, along with the flexibility, availability, scalability and efficiency offered by the Cloud.

Edge Computing and next generation networks (5G and Optical fibre)

This is where the second part of the equation for understanding Edge Computing comes into play: 5G and optical fibre. Among their many advantages, 5G and optical fibre offer very large reductions in latency. Latency is the time it takes for information to travel to the server and back to you: the sum of the outward and return journey times explained above.

4G currently offers an average latency of 50 milliseconds. That figure can go down to 1 millisecond with 5G and fibre. In other words, not only do we bring the server as close as possible to where it is needed, at the edge, but we even reduce the time it takes for information to travel to and from the server.
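A quick back-of-the-envelope calculation shows why this matters for interactive services. Using the average figures quoted above, plus an assumed 10 ms of server-side processing per interaction, the difference is roughly an order of magnitude:

```python
def response_time_ms(latency_ms: float, processing_ms: float) -> float:
    """Latency, as defined above, already covers the trip to the server and back."""
    return latency_ms + processing_ms

PROCESSING_MS = 10  # assumed server-side work per interaction (illustrative)

print(f"4G + distant cloud: ~{response_time_ms(50, PROCESSING_MS):.0f} ms")
print(f"5G + edge server:   ~{response_time_ms(1, PROCESSING_MS):.0f} ms")
```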

In order to better understand the important implications of this, let’s consider three different scenarios: a connected car, a machine learning algorithm in a factory and a video game system in the cloud.

Edge Computing and Connected Cars

The connected car of the future will include a series of cameras and sensors that will capture information from the environment in real time. This information can be used in a variety of ways. It could be connected to a smart city’s traffic network, for example, to anticipate a red light. It can also identify vehicles or adverse situations in real time, or even know the relative position of other cars around it at all times.

This approach will transform the way we travel by car and improve road safety, but the road there is not without pitfalls. One of the most important is that all the information collected by the different cameras and sensors adds up to a considerable volume: it is estimated that a connected car will generate about 300 TB of data per year (about 25 GB per hour). That information needs to be processed, yet moving such an amount of data quickly between the servers and the car is unmanageable, so processing needs to happen much closer to where the data is generated: at the edge of the network.

A connected car receiving information from nearby sensors. (Telefonica)

Let’s imagine, for example, a road of the future on which there are 50 connected cars that are also fully autonomous. That means sensors that measure the speed of surrounding cars, cameras that identify traffic signs or obstacles on the road, and a whole host of other data. The speed at which communication must take place between them and the server that controls that information has to be minimal. It is a scenario where we simply cannot afford for the information to travel to a remote server in the cloud, be processed, and come back to us.

At the same time, an accident, a sudden change in traffic conditions (an animal crossing the road, for example) or any other unforeseen event may have occurred. We need the processor that operates on the information produced by the car’s sensors to be as close as possible to the car. With the traditional cloud, this information would have to go to the antenna (the operator), travel over the Internet to the server and then come back, adding latency. With Edge Computing, since part of the server’s capabilities are at the edge of the network, everything happens right there.

Edge Computing and Machine Learning

Thanks to machine learning models, many factories and industrial facilities are implementing quality control with Artificial Intelligence and computer vision. This often consists of a series of machines and sensors that evaluate each item produced on an assembly line, for example, and determine whether it is well made or has a defect.

Machine learning algorithms often work by “training” the Artificial Intelligence with thousands and thousands of images. Continuing with our example, for each image of a product the algorithm is told whether it belongs to an item that has been manufactured correctly or not. Through repetition, and gigantic databases, the Artificial Intelligence eventually learns which features a flawless item has and, when a particular item fails to show them, it determines that it has not passed quality control.

Once the model has been generated, it is usually uploaded to a server in the cloud, where the different sensors on the assembly line check the information they collect. The scheme we mentioned earlier is repeated: the sensors collect the information, from there it has to travel to the server, be processed, checked against the machine learning model, obtain a response and return to the factory with the result.

Edge Computing significantly improves this process. Instead of having to go to the cloud server in each case, we can place a copy (virtualised or scaled down) of the machine learning model at each sensor, at the edge of the network; in other words, practically in the same place where the data is generated. The sensors then do not have to send the information to the distant cloud for every item: they check it directly against the model at the edge and, only if it does not match because the product is faulty, do they send a request to the server. In this way, performance is improved without increasing the complexity of the sensors, and the devices can even be simplified by using the processing capabilities deployed at the edge of the network for some of their functions.
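The following sketch illustrates that edge-side filtering under stated assumptions: the local model, the defect threshold and the cloud endpoint URL are placeholders, not a real deployment.

```python
import json
import urllib.request

DEFECT_THRESHOLD = 0.8
CLOUD_ENDPOINT = "https://cloud.example.com/defects"  # hypothetical URL

def inspect_item(features, edge_model):
    """Run inference at the edge; contact the cloud only when a defect is likely."""
    defect_score = edge_model(features)      # local inference, no network round trip
    if defect_score < DEFECT_THRESHOLD:
        return True                          # item passes, nothing is sent
    payload = json.dumps({"features": features, "score": defect_score}).encode()
    req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)              # escalate only the suspect item
    return False

# Example with a trivial stand-in model: the mean of the features as "defect score"
ok = inspect_item([0.2, 0.1, 0.3], edge_model=lambda f: sum(f) / len(f))
print("item OK" if ok else "item flagged and reported to the cloud")
```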

Obviously, the speed of detection of manufacturing faults is multiplied and the traffic and bandwidth required is greatly reduced.

Edge Computing and Videogames

Ever since Nintendo’s first Game Boy blew the industry away back in 1989, one of the biggest challenges for video game companies has been to offer ways to play games on the go. Companies such as Xbox, Google, Nvidia and PlayStation have come a long way, offering cloud-based gaming solutions that allow next-generation games to run on any screen.

Stadia makes it possible to play video games anywhere thanks to fibre connectivity and the power of the Cloud.

How do they do it? Again, by using the power of the Cloud. Instead of processing the game’s graphics on a PC or game console’s processor, it’s done on big, powerful servers in the cloud that simply stream the resulting image to the user’s device. Every time the user presses a button (for example, to make Super Mario jump), the information from that press travels to the server, is processed, and returns. There is a continuous flow of image, as if it were a video streaming like Netflix, to the user. In return, all you need to play is a screen.

To be able to perceive that the process from pressing the jump button to Super Mario jumping on your screen is instantaneous, the latency times must be extremely low. Otherwise, there would be an uncomfortable delay (also known as lag) that would ruin the whole experience.

Edge computing allows us to bring the power of the Cloud (the servers that process game graphics) to the very edge of the network, greatly reducing the lag that occurs every time the user presses the button and delivering an experience that is virtually identical to what it would be like if the console were right next door.

Edge Computing: Why Now and Why This Will Change The Future Of Connectivity

Although we have explained the whole process in a very simplified way, the reality is that Edge Computing requires a number of the latest technologies and protocols to work properly. You may have wondered at some point why all this has not been done until now, i.e., why the Cloud was not designed from the outset to be as close as possible to where the data is generated.

The answer is that it was impossible. In order for Edge Computing to work properly we need, among other things, the latest generation of connectivity based on optical fibre and 5G. The better the network deployment, the better the Edge Computing. Without the speed and low latency offered by the combination of both, all efforts to bring the power of the Cloud to the edge, where data is processed, would be wasted. The network would simply not be ready.

Thanks to their extensive fibre deployments (in countries like Spain there is more fibre coverage than in Germany, the UK, France and Italy combined), companies like Telefónica are especially well prepared to deploy Edge Computing use cases.

Edge Computing will change the world in the coming years. It will take the Cloud services we currently enjoy to the next level. Only time and the infinite potential of the internet know what wonderful new technologies and applications await us beyond Edge Computing.

Technology for the planet

Sandra Tello    13 April, 2022

Climate change is already affecting all regions on Earth, causing extreme weather incidents, including heat waves, heavy rainfall and more frequent and severe droughts. It is estimated that the earth’s average temperature has risen by 1.1°C.

Society is demanding a more sustainable way of life and a more sustainable economy. A third of Europeans consider pandemics to be the biggest challenge facing their countries, but climate change remains a top issue.

On the other hand, consumers are embracing green products and services and an increasingly sustainable lifestyle. Today, sustainability is important to 80% of consumers worldwide. The pandemic has accelerated this trend. In fact, almost two-thirds of customers are willing to change their purchasing habits to help reduce negative environmental impact.

Sustainability is currently very important for large companies

In a world where competition between companies is increasing every day, efficiency is critical to ensure profitability and long-term sustainability. No business leader questions that sustainability should be on their strategic agenda, and practically every major company issues a sustainability report and sets targets. More than 2,000 companies and financial institutions around the world are working with the Science Based Targets initiative (SBTi) and have set science-based targets to reduce their carbon emissions. And around a third of Europe’s largest companies have committed to achieving net zero by 2050.

Towards a green digital transformation

Telefónica’s group strategy is fully aligned with these trends and is based on two fundamental pillars:

  1. To minimise the impact of our operations. Our goal is to have the most energy- and carbon-efficient telecommunications network in the market, so that the connectivity we offer our customers is low emission.
  2. To help our customers decarbonise their activity. Digitalisation and connectivity are key to helping them become more efficient and sustainable. Our products and services optimise the consumption of resources such as energy and water, reduce CO2 emissions and promote the circular economy.

In the first pillar we have brought forward our emissions reduction targets and set ourselves the goal of being net zero by 2025 in our key markets and by 2040 across our entire footprint, including the value chain, twenty-five years ahead of the Paris Agreement. To achieve this, we continue to reduce direct and indirect emissions in line with the 1.5°C scenario, by 70% globally by 2025.

In the second pillar, we have set a target of helping our customers avoid 5 million tonnes of CO2 through our products and services by 2025 in our four main markets: Spain, Brazil, the UK and Germany.

“Green Tech” to build a greener digital future and help society grow

For companies to achieve net zero emissions targets, digitalisation and decarbonisation must go side by side. Assessing the environmental impact of digital technologies is vital, as their impact can be significant. On the other hand, digital technologies have a huge potential to reduce emissions. In this sense, companies are increasingly turning to their technology partners to integrate sustainability and create meaningful change that is good for their business, society and the planet.

And this is where Telefónica Tech plays a key role in supporting companies of all sizes and sectors to digitalise. Our dedicated teams bring a great deal of experience and industry knowledge to develop, integrate and implement strategies and technologies that help our clients create business value and sustainable impact.

Over the past year, we have made further progress and made our commitment to sustainability and decarbonisation even more visible to our customers. To this end, Telefónica Tech products and services are designed with “Green” technology and have an Ecosmart seal, verified by AENOR, which certifies that our digital solutions for companies achieve what they promise: to reduce energy and water consumption and CO2 emissions and to promote circular economy.

It is estimated that through digitalisation, using technologies such as 5G, the Internet of Things, Artificial Intelligence, digital twins, blockchain, cloud and many others… we can achieve up to a 15% reduction in the world’s carbon footprint.

Google takes a step forward to improve Certificate Transparency’s ecosystem: No dependence on Google

Sergio de los Santos    12 April, 2022

Although Certificate Transparency (CT) is not well known among ordinary users, it affects them in many ways, improving the security of their connections to websites.

What’s more, it even affects their privacy, something they were almost certainly not taking into account. Now Google (the main promoter of CT) is taking a step towards making the ecosystem less dependent on Google, although its privacy problems still need to be addressed.

What is Certificate Transparency?

Let’s keep it short. When a certificate is created, it must be registered on public log servers; otherwise it will be suspected of having been created with bad intentions. To “register” it, a Signed Certificate Timestamp (SCT) is created: a signed cryptographic token issued by a log server as a guarantee that the certificate has been registered with it.

This SCT is always embedded in the certificate and when visiting a website, the browser checks that it is valid against several log servers. One of them must be from Google (there are several certificate companies that have public logs). If this is not the case, an error is displayed. All this happens without the user being aware of it.

An SCT embedded in the certificate, which the browser must check.
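As an illustration, the embedded SCTs of any public site can be inspected from Python. This is a minimal sketch assuming a recent version of the third-party cryptography package (built with SCT support); it only lists the SCTs, it does not verify the log signatures the way a browser does.

```python
import ssl
from cryptography import x509

host = "www.google.com"
pem = ssl.get_server_certificate((host, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

try:
    scts = list(cert.extensions.get_extension_for_class(
        x509.PrecertificateSignedCertificateTimestamps).value)
except x509.ExtensionNotFound:
    scts = []

print(f"{host}: {len(scts)} embedded SCT(s)")
for sct in scts:
    # Each SCT identifies the log that promised to publish the certificate
    print(f"  log id {sct.log_id.hex()[:16]}...  issued {sct.timestamp:%Y-%m-%d}")
```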

However, the SCT is more of a promise to put the certificate in the log, because nothing prevents a log operator (who can be anyone) from colluding with an attacker, creating a certificate and granting it an SCT… but not actually publishing it in the log. This would invalidate the whole CT ecosystem. How did Google solve it? Through two moves.

  1. One is that there always had to be a “Google log” among the required ones (currently three) where the certificate was registered. This way, Google trusts itself and knows that it will never misbehave by issuing an SCT for a certificate that it has not actually registered.
  2. The other one is “SCT auditing”, which, if poorly implemented, would imply a clear infringement of users’ privacy.

Both solutions have their problems. Let’s look into them.

Chrome showing the three logs where the SCT is valid. Always a Google one… until now.

At least one Google Log

If Google doesn’t fully trust other logs, why does it trust its own? Because that was the best solution it found at the time. A certificate was not considered validly compliant with the Certificate Transparency ecosystem unless it was in a Google log… at least until this month, when that requirement has been removed.

This will be implemented in Chrome version 100. It is worth remembering that Apple already went its own way with Safari: in March 2021 it announced that it would not follow the policy of relying on Google, and that knowing the SCT was in two different logs was enough for it.

Privacy and SCT auditing

SCT auditing also arrived not long ago as one of the solutions to control the SCTs and make sure the logs really do their job. It is simple: randomly audit the SCTs of certificates and check that they are really in the logs. But how? Well, in the way Google knows best: using the user, and taking advantage of Chrome’s adoption, to send the SCTs of visited sites to the logs to check that they have indeed been registered.

There was a lot of talk about SCT auditing, but it posed a clear risk to user privacy and was a problem to implement. Still, Google went ahead with it in the March 2021 version of Chrome, in the best way it knew how. How? It enabled SCT auditing only for users who were already sharing their visits with Google through the enhanced data sharing available with Safe Browsing. Since this was something users activated voluntarily, they were also enrolled as “SCT auditors” in passing. It is not the default option.

Announcement of how SCT auditing would work

The two formulas described above help ensure that a malicious log does not issue an SCT for a certificate without actually logging it. SCT auditing must have worked out well for Google, since it is now dropping the first formula and (as Safari already did) no longer requires one of the logs to be specifically Google’s.

Therefore, to keep the logs honest we are left with SCT auditing, where all users who already share certain browsing data through Safe Browsing are also helping to ensure a more secure CT ecosystem.

Firefox does not implement Certificate Transparency.

MATSUKO, a brand-new winner for the SXSW Pitch 2022 Awards, spotted at MWC Barcelona

Innovation Marketing Team    12 April, 2022

MATSUKO is the world’s first real-time hologram meeting app that requires only an iPhone to capture and stream people’s holograms. Using mixed and augmented reality and artificial intelligence, the company creates 3D holograms for remote communication between people. This year, the disruptive start-up won the pitch award in its category of XR and Immersive Experiences at SXSW 2022 and already built a network of European telco partners with an initial focus on the B2B market.

Dive into the Metaverse with MATSUKO 

MATSUKO’s solution reconstructs people from 2D to 3D via hologram, bringing a physical presence to remote communication and ushering in the Metaverse. And it is not an animated avatar but a fully expressive 3D hologram that eliminates the problems often encountered on video calls: lack of non-verbal cues, lack of engagement, and missing spatial feeling.

“If these past two years have shown us anything, it is that as humans we need each other’s presence,” says Maria Vircikova, co-founder and CEO of MATSUKO. “And even though we have come a long way with remote communication, today’s tools are still way too distant. Our brain is wired for the third dimension, and we need a sensation of people physically being there.”

The story behind Wayra UK and MATSUKO’s partnership started a few years ago, when the company was considering options for further growth. The world’s largest global open innovation hub decided to support MATSUKO’s vision, which focuses fully on humans while caring about technology, as the company perpetually refines its cutting-edge human-centric technology.

Among the many advantages available to MATSUKO as part of the Wayra ecosystem are unique mentorship and coaching opportunities, support at industry events and expos, and collaboration on international PR.

Wayra’s portfolio start-up pushed the boundaries of tech at 4YFN Barcelona 2022

With the support of the Wayra team, MATSUKO obtained a priceless opportunity to appear at 4YFN Barcelona 2022, the startup event of the world’s biggest expo for the mobile industry, Mobile World Congress. Participating in an event of this level helped the company gather over 30 testimonials from telecommunications professionals and enthusiasts, as well as current partners and potential clients.

MATSUKO’s team also had the opportunity to be interviewed by two TV stations and talk about how its solution provides true holographic presence and how that can help people connect emotionally. The event was an amazing experience for the company, opening up unique possibilities for MATSUKO as a significant contributor to the future of the telco industry.

Cyber Security Weekly Briefing 1–8 April

Telefónica Tech    8 April, 2022

Critical vulnerability in GitLab allows access to user accounts

GitLab has released a security update that fixes a total of 17 vulnerabilities, including a critical one affecting both GitLab Community Edition (CE) and Enterprise Edition (EE). This security flaw, CVE-2022-1162, rated with a CVSS of 9.1, resides in the setting of a hardcoded password for accounts registered through an OmniAuth provider, allowing malicious actors to take control of those accounts. So far, no evidence of any accounts being compromised by exploiting this security flaw has been detected.

However, GitLab has published a script to help identify which user accounts are affected and recommends users to update all GitLab installations to the latest versions (14.9.2, 14.8.5 or 14.7.7) as soon as possible to prevent possible attacks.

Read more: https://about.gitlab.com/releases/2022/03/31/critical-security-release-gitlab-14-9-2-released/#script-to-identify-users-potentially-impacted-by-cve-2022-1162

New Deep Panda techniques: Log4Shell and digitally signed Fire Chili rootkits

Fortinet researchers have identified that the APT group Deep Panda is exploiting the Log4Shell vulnerability in VMware Horizon servers to deploy a backdoor and a new rootkit on infected machines.

The group’s goal is to steal information from victims in the financial, academic, cosmetics and travel industries. The researchers show that the infection chain exploited the Log4j remote code execution flaw on vulnerable VMware Horizon servers to run a chain of intermediate stages and, finally, deploy the backdoor called Milestone.

This backdoor is also designed to send information about current sessions on the system to the remote server. A kernel rootkit called Fire Chili has also been detected, which is digitally signed with certificates stolen from game development companies, allowing them to evade detection, as well as to hide malicious file operations, processes, registry key additions and network connections.

Researchers have also attributed the use of Fire Chili to the group known as Winnti, indicating that the developers of these threats may have shared resources, such as stolen certificates and Command & Control (C2) infrastructure.

Read more: https://www.fortinet.com/blog/threat-research/deep-panda-log4shell-fire-chili-rootkits

Phishing campaign exploits supposed WhatsApp voicemail messages

Researchers at Armorblox have reported a phishing campaign that uses voice messages from the WhatsApp messaging platform as a lure to deploy malware on victims’ devices.

According to the investigation, the attack starts with the distribution of phishing emails pretending to be a WhatsApp notification of a ‘private message’ audio; the malicious actors include a ‘Play’ button embedded in the body of the email, along with the length of the audio and its creation date.

As soon as the target user hits the ‘Play’ option, they are redirected to a website showing an allow/block permission prompt that, through social engineering techniques, will eventually install the JS/Kryptik trojan and the payload needed to ultimately deploy stealer-type malware.

Armorblox stresses that the malicious emails are sent from legitimate accounts that have previously been compromised, which makes it very difficult for the various security tools active on the target machine to detect them.

The ultimate goal of the campaign is mainly the theft of credentials stored in browsers and applications, as well as cryptocurrency wallets, SSH keys and even files stored on the victims’ computers.

Read more: https://www.bleepingcomputer.com/news/security/whatsapp-voice-message-phishing-emails-push-info-stealing-malware/

Cicada: new espionage campaign

Symantec researchers have published research reporting on a sophisticated, long-term espionage campaign by the cybercriminal group Cicada (aka APT10). According to experts, the campaign is said to have been active from mid-2021 to February this year, with operations targeting government entities and NGOs in Asia, America and Europe.

However, other sectors such as telecommunications, legal entities and pharmaceuticals have also been affected. The entry vector is believed to be the exploitation of a known vulnerability in unpatched Microsoft Exchange servers, although no specific vulnerability has been identified.

After the initial compromise, Cicada deploys tools such as the Sodamaster backdoor (associated with this actor and key to the attribution), a custom loader that abuses the legitimate VLC player with a malicious DLL via the DLL side-loading technique, Mimikatz to obtain credentials, WinVNC for remote control and WMIExec for command execution.

Read more: https://symantec-enterprise-blogs.security.com/blogs/threat-intelligence/cicada-apt10-china-ngo-government-attacks

New critical vulnerabilities in VMware

VMware released a bulletin fixing critical, high and medium severity vulnerabilities for its VMware Workspace ONE Access (Access), VMware Identity Manager (vIDM), VMware vRealize Automation (vRA), VMware Cloud Foundation and vRealize Suite Lifecycle Manager products. The most critical vulnerabilities are the following:

  • CVE-2022-22954 (CVSSv3 9.8): server-side template injection vulnerability that can lead to remote code execution.
  • CVE-2022-22955/22956 (CVSSv3 9.8): vulnerabilities that allow bypassing authentication in the OAuth2 ACS framework.
  • CVE-2022-22957/22958 (CVSSv3 9.1): remote code execution vulnerabilities via a malicious JDBC URI, requiring administrator access.

Other vulnerabilities of high criticality (CVE-2022-22959 CVSSv3 8.8 and CVE-2022-22960 CVSSv3 7.8) and medium criticality (CVE-2022-22961 CVSSv3 5.3) have also been fixed. According to the company, there is no evidence that any of these vulnerabilities are being actively exploited. Additionally, VMware has published several steps that users can take to mitigate the impact of these vulnerabilities in cases where upgrading the software is not possible.

Read more: https://www.vmware.com/security/advisories/VMSA-2022-0011.html

AI of Things (III): IoT anomalies, how a few wrong pieces of information can cost us dearly

Jorge Pereira Delgado    7 April, 2022

When we hear the term Internet of Things – or IoT in short – we often think of internet-enabled fridges or the already famous smartwatches, but what does it really mean? The term IoT refers to physical objects that are equipped with sensors, processors and are connected to other similar elements, allowing them to obtain certain information, process it, and collect it for later use.

It’s not all fridges and overpriced digital watches: from sensors that count the number of people entering an establishment to screens with an integrated camera that can detect if someone is paying attention to its contents and record that information. Other interesting examples can be found in the first articles of this series, where we discussed several use cases of these technologies or the advantages of using smart water meters to optimise the water lifecycle.

However, sometimes the data captured deviates from the normal values. If the data received from an IoT device tells us that a person has been staring at an advertising screen for hours or that the temperature inside a building at a given moment is 60ºC, this data is dubious to say the least. These outliers must be taken into account when designing our networks and devices. The received values have to be filtered to see if they are normal values or not and act accordingly.

This is a very clear example of how Artificial Intelligence (AI) and the Internet of Things (IoT) benefit from each other. The combination of both is what we know as Artificial Intelligence of Things (AIoThings), which allows us to analyse patterns in the data within an IoT network and detect anomalous values, i.e., values that do not follow these patterns and that are usually associated with malfunctions, various problems or new behaviours, among other cases.

What is an outlier?

The first thing to ask ourselves is what anomalous values, commonly called outliers in the world of data science and Artificial Intelligence, are. If we look at the definition of anomaly in the RAE dictionary, we get two definitions that fit surprisingly well with the data domain: “deviation or discrepancy from a rule or usage” and “defect in form or operation”.

The first definition refers to values that do not behave as we would expect – that is, they are far from the values we would expect. If we were to ask random people what their annual income is, we would be able to put a range of values in which the vast majority of values would lie. However, we might come across a person whose income is hundreds of millions of euros per year. This value would be an outlier, as it is far from what is expected or “normal”, but it is a real value.

The second definition refers to the term “defect”. This gives us a clue: it refers to values that are not correct, understanding that a data is correct when its value accurately reflects reality. Sometimes it is obvious that a value is wrong: for example, a person cannot be 350 years old and no one can have joined a social network in 1970. These values are inconsistent with the reality they represent.

What to do with outliers?

The next question to ask is what to do about these anomalies, and the answer again varies depending on their nature. In the case of inconsistent data – for which we know the values are not correct – we could remove those values, replace them with more consistent values – for example, the mean of the remaining observations – or even use more advanced AI methods to impute the value of those outliers.
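A small sketch of those three options on an invented column of ages, where 350 and -5 are clearly inconsistent values:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [34, 29, 350, 41, -5, 38]})   # 350 and -5 are impossible
valid = df["age"].between(0, 120)

# Option 1: drop the inconsistent rows
dropped = df[valid]

# Option 2: replace them with the mean of the remaining observations
replaced = df["age"].where(valid, df.loc[valid, "age"].mean())

# Option 3: mark them as missing so a more advanced imputation model can fill them in
as_missing = df["age"].where(valid, np.nan)

print(dropped, replaced, as_missing, sep="\n\n")
```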

The second case is more complicated. On the one hand, having such extreme data could greatly impair the capabilities of our models, and on the other hand, ignoring an outlier could have devastating consequences depending on the scenario in which we find ourselves.

To illustrate the case in which we keep outliers and they hurt us, we can imagine a scenario in which we want to predict a target variable – for example, a person’s annual income – as a function of one or more predictor variables – years of experience, municipality in which he/she works, level of education, etc. If we were to use linear regression – simplistically, a method that fits a line to the data in the best possible way – we can see that an outlier could greatly impair the way that line fits the data, as shown in Figure 1.

Figure 1: Comparison of results of a linear regression with and without outlier.
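The effect shown in Figure 1 can be reproduced with a few lines of synthetic data: a single extreme income value noticeably drags the fitted line. The numbers below are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
years = rng.uniform(0, 20, 30).reshape(-1, 1)                    # years of experience
income = 22_000 + 1_500 * years.ravel() + rng.normal(0, 2_000, 30)

clean_fit = LinearRegression().fit(years, income)

# Add one person earning hundreds of millions per year (a real but extreme value)
years_out = np.vstack([years, [[19.5]]])
income_out = np.append(income, 300_000_000)
outlier_fit = LinearRegression().fit(years_out, income_out)

print(f"slope without outlier: {clean_fit.coef_[0]:,.0f} per year of experience")
print(f"slope with outlier:    {outlier_fit.coef_[0]:,.0f} per year of experience")
```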

In the event that we choose to ignore these extreme values, we may have the problem of not being able to predict the consequences of such events which, although improbable, are possible. An example would be the case of the Fukushima nuclear disaster. In this case, a magnitude 9.0 earthquake was estimated to be unlikely, so the plant was not designed to withstand it (Silver, N. “The Signal and the Noise”, 2012). Indeed, the probability of an earthquake of such a magnitude in that area was very small, but if the effects and damage had been analysed, it would have been possible to act differently.

Anomalies in sensors and IoT networks

It is the same in the IoT world: is the data real, or is it due to a sensor error? In both cases, appropriate action needs to be taken. If the anomalous data is due to a sensor malfunction at specific points in time, we will try to locate those errors and predict or estimate the real value based on the rest of the data captured by the sensor. Several families of AI algorithms can be used here: recurrent neural networks such as LSTMs, time-series models such as ARIMA, and many others.

The process would be as follows: imagine a sensor inside an office building that records the temperature over time, in order to optimise the building’s energy expenditure and improve employee comfort. When we receive the data from the sensor, it is compared with the data predicted by an AI model. If the discrepancy is very large at a certain point – suppose the sensor reports 30°C at one moment, while the rest of the day and our model show temperatures around 20-21°C – that reading is flagged as an outlier and replaced with the value predicted by the model.
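
A minimal sketch of that compare-and-replace loop is shown below, using simulated temperature readings and a simple rolling median as a stand-in for the “expected” value (in practice this could be an ARIMA forecast or an LSTM prediction); the 3°C threshold is chosen arbitrarily for the example:

```python
import numpy as np
import pandas as pd

# Simulated office temperatures sampled every 15 minutes (invented values)
rng = np.random.default_rng(1)
temps = pd.Series(20.5 + rng.normal(0, 0.3, size=96))
temps.iloc[40] = 30.0  # a single suspicious reading

# Expected value: a simple rolling median stands in for the AI model here
expected = temps.rolling(window=8, center=True, min_periods=1).median()

# Flag readings that deviate too much from the expectation and replace them
threshold = 3.0  # degrees Celsius, arbitrary for this example
is_outlier = (temps - expected).abs() > threshold
cleaned = temps.where(~is_outlier, expected)

print(temps[is_outlier])   # the 30 °C reading is flagged
print(cleaned.iloc[40])    # ...and replaced by the expected value
```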

If we receive an anomalous value but do not know whether it is real, and it could be very harmful, we should act differently. Detecting a temperature in a building that is slightly higher than normal for a few moments is not the same as detecting very low blood sugar values in a patient with diabetes – another example of an IoT use case.

The impact of outliers on data quality

As Clive Humby – one of the first data scientists in history – said, “data is the new oil”. The value that data has taken on in our society explains the rapid development of fields such as IoT in recent years.

However, as with oil, if this data is not of the necessary quality and does not adequately reflect the information we need, it is worthless. Working with wrong data can lead to poor decisions that take time and money to rectify. That is why, when capturing data in IoT environments, detecting and correcting these outliers is a critical task.

For more content on IoT and Artificial Intelligence, feel free to read the other articles in our series.

Identity for the NFTs (Non-Fungible Tokens)

Alexandre Maravilla    6 April, 2022

There is a Twitter account dedicated to reporting NFT authorship scams. This type of online fraud, which is on the rise, infringes intellectual property and currently affects artists and creators of artwork, but in reality it affects any original copyrighted content on the internet that can be tokenised.

The problem is that NFTs accredit the possession of a digital asset, but do not verify the identity of the possessor, allowing the appropriation of the work of others.

What are NFTs (Non-Fungible Tokens)?

To keep it simple, let’s follow the example of works of art. Imagine that we have an original painting by a renowned artist hanging in our living room and we decide to take a high-quality image of it and digitalise it. Then, using a tokenisation tool, we create its NFT. The NFT would be the certificate of authenticity and possession verifying that the digital version of the painting hanging in our living room belongs to us.

To ensure that the certificate cannot be manipulated or forged, it is stored in a decentralised way on a blockchain, without depending on any central authority (such as a notary) for its custody and management. Along with the certificate, a Smart Contract is also stored: a contract, written as software code, that defines and regulates how the NFT and its associated representation (in this case, the image of the digitalised artwork) can be bought, sold and transferred. This contract is executed directly between buyer and seller, without the intervention of an intermediary.

It is important to point out that the NFT (certificate + contract) is not the digitalised artwork itself (it is not the image). Moreover, just as the physical artwork (the one hanging in our living room) obviously cannot be stored on the blockchain, the digital version is usually not stored there either (although technically it could be); it typically lives off-chain, and it is the keys that control the NFT that are kept in so-called crypto wallets. The most common option is a software wallet (although hardware-based ones also exist); perhaps the best known is Metamask.

What is the problem?

Going back to the example of the tokenised painting: now that we have created the NFT, we can proudly display the digitalised image of our artwork on social media. And with the NFT we demonstrate that the image is unique, thereby creating scarcity and increasing its value. This NFT and its associated image can be traded on the internet, and this is where the problem arises: the NFT has been created without the consent of the author, the artist who painted the picture hanging in our living room. Why does this happen? Why can we create an NFT of a work of art without the author’s consent?

NFTs demonstrate ownership of a unique asset, but they do not verify the identity of the holder. Possession of the NFT is not linked to a verified real identity, since most blockchains, such as Bitcoin and Ethereum, are pseudonymous: blockchain accounts do not reveal any information about the identity that controls them. And here lies the problem: we cannot verify the identity of the first holder of the NFT, who should supposedly be its author.
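
A minimal sketch illustrates this limitation. Assuming the web3.py library, a placeholder RPC endpoint and a placeholder ERC-721 contract address and token id (none of these values are real), querying the blockchain only returns the address that holds the token, never the identity behind it:

```python
from web3 import Web3

# Placeholder values: replace with a real RPC endpoint, contract address and token id
RPC_URL = "https://example-rpc-endpoint"                           # hypothetical
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"    # hypothetical
TOKEN_ID = 1

# Minimal ERC-721 ABI: only the ownerOf function is needed for this check
ERC721_ABI = [{
    "name": "ownerOf",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "owner", "type": "address"}],
}]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
nft = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ERC721_ABI)

owner = nft.functions.ownerOf(TOKEN_ID).call()
# 'owner' is just an Ethereum address: it tells us *which* account holds the token,
# but nothing about *who* controls that account or who authored the work
print(owner)
```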

Verifying possession of the NFTs

There is an unwritten rule on social media that if you don’t own an NFT, you shouldn’t use it as your profile picture; but, of course, there are JPEG images worth hundreds of thousands (or even millions) of euros. Who wouldn’t want to “borrow” such an image and use it as their profile picture? And who can verify that the image I am displaying corresponds to an NFT that I actually own?

Last January Twitter announced a feature aimed at verifying the NFTs displayed as profile pictures on its social network: “If you own an NFT, you can upload it as a verified profile picture on Twitter. Link your Ethereum Wallet to your Twitter account and a list of the NFTs you own will be displayed.” The verified NFT is shown with a blue check. With this new functionality, Twitter validates that the user displaying an NFT as a profile picture actually owns it.

Twitter shows a blue check on the profile picture when a member uses an NFT that they own.

Returning to the example at hand: as creators of the NFT of the painting hanging in our living room, which we have digitalised, we could upload the image to Twitter with the validation of its NFT. Twitter would show a blue check on it, thus demonstrating that we are in possession of that NFT, even though we are not the rightful owners of that digital asset. Twitter validates possession, but does not verify authorship.

Authorship verification of NFTs

As we have shown, tokenising a work of art is simple nowadays: just take a sufficiently high-quality image of the chosen work, and there are hundreds of tools that will convert it into an NFT. How, then, do we guarantee copyright? How do we ensure that only the real artists and creators of the works are the ones who digitise them through NFTs? The solution is to provide NFTs with a layer of identity, a layer that was forgotten in their initial conception.

Identity for the NFTs

Decentralised identity standards, or self-sovereign identity (SSI), are based on blockchain technology and are therefore an ideal complement for providing identity to NFTs in use cases where it is needed. These standards are being promoted around the world, although it is the European Union that is leading their adoption.

It is possible to link an NFT to a verified real identity through SSI standards, and in particular through Verified Credentials (VCs).
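
As a simplified, purely illustrative example of what such a link could look like, the sketch below shows a credential structure loosely based on the W3C Verifiable Credentials data model that binds an NFT (contract address and token id) to the author’s decentralised identifier (DID). The credential type “NFTAuthorshipCredential”, the DIDs and the identifiers are invented placeholders, not an existing standard:

```python
import json

# Illustrative credential binding an NFT to a verified author identity.
# All identifiers below are invented placeholders.
authorship_credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "NFTAuthorshipCredential"],
    "issuer": "did:example:trusted-registry",
    "issuanceDate": "2022-04-06T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:artist-123",  # the verified identity of the author
        "nftContract": "0x0000000000000000000000000000000000000000",
        "nftTokenId": "1",
    },
    # In a real credential this proof would be a cryptographic signature by the issuer
    "proof": {"type": "Ed25519Signature2020", "proofValue": "..."},
}

print(json.dumps(authorship_credential, indent=2))
```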

Non-Fungible Tokens (NFTs) and Verified Credentials (VC), commitment to Web3 and the Metaverse

As we have explained, NFTs and VCs are not the same: NFTs prove possession of a digital asset, while VCs can prove the identity of the possessor of that digital asset. So, going back to our example, just as we can verify possession of an NFT, we can also verify its authorship. In the case of tokenising works of art through NFTs, this preserves intellectual property and protects copyright. In other cases, such as NFTs that may represent our avatars in the Metaverse, this identity layer will help create more secure and trusted interactions, benefiting the mass adoption of these exciting new environments.