Cloud Computing still running on holidays

Roberto García Esteban    20 July, 2022

Summer holidays are here and it's time for almost all of us to take a well-deserved break. The activity of most companies is drastically reduced, although there are also others that follow the cliché of "making a killing" by multiplying their volume of activity in the summer months. In both cases, fixed and inflexible IT resources mean either wasting resources in the first scenario or failing to cover needs in the second.

In other words, in summer, the greatest benefit of Cloud technology is particularly evident: making it easier for companies to adapt to peaks in demand, since with Cloud you only pay for what you actually consume.

Therefore, if you need less storage or processing capacity, you will pay less for it, while if you need to temporarily reinforce resources, it is very easy to make them available quickly. It is estimated that, with good planning of IT resource needs, you can save up to 15% on your IT bill during the summer months.

Good planning of IT resource needs in the summer months can save up to 15% on IT bills with Cloud technology

But it is one thing for business activity to slow down in the summer; it is quite another for it to come to a complete halt. Those "closed for the holidays" days are long gone.

Now, even if offices are half-empty, companies continue to sell and provide services to their customers thanks to email, e-commerce (especially from mobile phones during the summer), downloads of apps of all kinds and backup services.

Therefore, the continuity of business processes has to be ensured in summer just as during the rest of the year, and this is where Cloud Computing comes into play once again. The data centres that house these processes are designed to withstand any unforeseen event (excessive heat, power cuts and so on) and are constantly monitored and managed 24 hours a day, every day of the year, so that any incident can be dealt with in real time. The SLAs for cloud services are just as strict in summer as in any other month of the year.

Cloud technology to access the workplace from any location (if necessary)

Cloud Computing is part of the daily routine in many companies. Nowadays, planning an emergency solution to cover staff holidays makes no sense at all.

On the contrary, business processes should be scheduled in advance so that they are easy to manage even during the holidays, allowing tasks to be delegated or alerts to be included so that important events are not overlooked.

And although holidays are for disconnecting, almost everyone has to keep an eye on their email or resolve a small issue from the beach. The ease of teleworking from anywhere is another advantage of the Cloud: thanks to SaaS (Software as a Service) applications, it is possible to use the same tools you have at your fingertips in the office, with the simple requirement of an internet connection.

Actions such as checking the status of an important order, authorising an operation or checking the receipt of an invoice can be carried out without any problem from our place of holiday if required.

Peaceful holidays thanks to the Cloud

In short, Cloud Computing remains open during the holidays, even if the level of activity decreases, making it easier to maintain that activity even when half the company is away from its workstations. It also makes costs more flexible because the bill for cloud services will be lower if they are used less.

In other words, we can peacefully go on holiday… and continue using cloud-based services such as language translation services, map downloads, navigation applications, streaming of all kinds of content, book downloads and the always-recommended backups so as not to lose those wonderful photos we take in the summer…

So let’s enjoy the holidays with the peace of mind of knowing that the business will continue to function thanks to the cloud and that the data will still be there in case we need to access it.


Cyber Security Weekly Briefing, 9 — 15 July

Telefónica Tech    15 July, 2022

Rozena: backdoor distributed by exploiting Follina vulnerability

Fortinet researchers have published an analysis of a malicious campaign in which they have detected the distribution of a new backdoor exploiting the well-known Follina vulnerability (CVE-2022-30190).

This new malware has been named Rozena and its main function is to inject a reverse shell that connects back to the attacker's host, allowing malicious actors to take control of the victim's system, monitor and capture information, and/or maintain a backdoor into the compromised system.

The infection chain consists of distributing malicious Office documents which, when opened, connect to a Discord URL that retrieves an HTML file; this file, in turn, invokes the vulnerable Microsoft Windows Support Diagnostic Tool (MSDT), resulting in the download of the payload that includes Rozena.

More info

* * *

Microsoft fixes an actively exploited 0-day

Microsoft has published its security bulletin for the month of July in which it fixes a total of 84 vulnerabilities, including one actively exploited 0-day.

Out of the total number of flaws detected, 5 correspond to denial of service vulnerabilities, 11 to information disclosure, 4 to security feature bypass, 52 to elevation of privilege and 12 to remote code execution. The four vulnerabilities classified as critical (CVE-2022-30221, CVE-2022-22029, CVE-2022-22039, CVE-2022-22038) fall within this last type, with the rest of the vulnerabilities being of high severity.

It is worth noting that the 0-day, catalogued as CVE-2022-22047 with a CVSSv3 score of 7.8 and discovered by the Microsoft Threat Intelligence Center (MSTIC) and the Microsoft Security Response Center (MSRC), is a Windows CSRSS elevation of privilege vulnerability that could allow an attacker to gain SYSTEM privileges.

According to Microsoft, active exploitation of this flaw has been detected, although no further details have been provided so far, and it is recommended that patches be applied as soon as possible. CISA has also added this vulnerability to its catalogue of actively exploited vulnerabilities.

More info

* * *

Vulnerability in the authentication of an AWS Kubernetes component

Security researcher Gafnit Amiga has discovered several security flaws in the authentication process of AWS IAM Authenticator, a component for Kubernetes used by Amazon Elastic Kubernetes Service (EKS).

The flaw lies in incorrect validation of query parameters within the authenticator plugin when configuring the use of the template’s “AccessKeyID” parameter within query strings. Exploiting it could allow an attacker to bypass existing protection against replay attacks or obtain the highest permissions in the cluster by impersonating other identities, i.e., escalate privileges within the Kubernetes cluster.

According to the researcher, two of the identified flaws have existed since the first release in 2017, while the third, which is the one that allows impersonation, has been exploitable since September 2020. The flaws as a whole have been identified as CVE-2022-2385 and have been given a high criticality.

AWS has confirmed that since 28 June all EKS clusters have been updated with a new version of IAM Authenticator that fixes the issue. Customers who manage their own clusters and use the “AccessKeyID” parameter of the authenticator plugin should upgrade to AWS IAM Authenticator for Kubernetes version 0.5.0.

More info

* * *

VMware fixes vCenter Server vulnerability

VMware has recently published a new version, vCenter Server 7.0 Update 3f, which corrects, eight months after its disclosure, a vulnerability in the integrated authentication mechanism with Windows discovered by CrowdStrike and identified as CVE-2021-22048.

This flaw can only be exploited from the same physical or logical network as the affected server, and although it is a complex attack, it requires few privileges and no user interaction. However, NIST suggests that it could be exploited remotely. The versions of vCenter Server affected by the vulnerability are 6.5, 6.7 and 7.0.

For those unable to upgrade to the latest patched version, the company has provided a mitigation: switching to an Active Directory over LDAP authentication model. CVE-2021-22048 also affects VMware Cloud Foundation versions 3 and 4, where it has not yet been fixed.

More info

* * *

Phishing campaign via Anubis Network

Portuguese media outlet Segurança Informática has published details of a new wave of a persistent phishing campaign that uses the Anubis Network portal to set up its attacks and has been active since March 2022.

Affected users, mainly in Portugal and Brazil, receive smishing or phishing messages purporting to come from financial services, prompting them to enter their phone number and PIN before redirecting them to banking pages that ask for their login credentials.

According to the researchers, the Command & Control server, hosted by Anubis Network, is controlled by around 80 operators. The analysis also shows how Anubis provides facilities for tracking user data, fake domains created to impersonate banks and temporary email addresses that operators can set up for each case.

More info

Hypocrisy and doublespeak in ransomware gangs

Sergio de los Santos    14 July, 2022

The hypocrisy, doublespeak and even, we assume, sarcasm that ransomware gangs display on their websites has no limits. As an anecdote, we are going to show some of the statements or terms ransomware gangs use to justify their services, as if they were not outright illegal extortion.

We assume that the intention of the attackers is similar to classic mafias. Far from outwardly acknowledging their illegal activity, the intention is to cloak the attack in some (albeit perverse) logic in which the victim becomes a “client” of the ransomware gang or even guilty of the extortion itself for not caring about their data or infrastructure.

Here are a few examples after taking a look at their websites.

Babuk, a double standard

They attack everything they can and are very active and popular. They have a special grudge against Elon Musk: if they were to get into his systems, they say, they would publish the data without negotiation. But they do have a red line: hospitals, NGOs, schools and small companies with profits of less than 4 million. This is an interesting distinction not found in many other groups.

Image: Organisations safe from Babuk

Babuk spend a lot of time “justifying themselves”.

Image: Babuk's philosophy

They call themselves cyberpunks who go around "testing cybersecurity". Of course, they literally call themselves "specialised, non-malicious software that exposes a company's cybersecurity problems". They add that their "audit" is not the worst thing that could happen, and that it would be much worse if fanatical terrorists, who unlike them do not just want money, were to attack the infrastructure.

Lorenz, nothing personal

They don't talk about their morals; they attack as much as they can. On their blog they keep one set of slots for attacked companies that have paid (and whose data has therefore been removed), and another with the data published because the victim did not pay.

Image: slots for future victims or victims who have already paid

But they remind on their website that of course, it is nothing personal. Just business.

LV, you are the one to blame

According to LV, if it attacks a company, encrypts and steals the data and ends up displaying it on its website, it is the victim's fault for not having fulfilled their obligations and for refusing to correct their failures; the victim has, in effect, preferred to let the company's own data and that of its customers be sold. This is the cynical message of a gang that blames the victim as if they had done something wrong.

It is worth remembering here that ransomware gangs do not always exploit security flaws: they use all sorts of techniques, such as extorting workers to get the data they need for the theft.

Image: LV says the victim is careless

LockBit, the most professional

They are so professional that they recently announced their own bug bounty, in which they could award up to a million dollars just for finding bugs in their infrastructure. They are very active and very good at marketing themselves as a ransomware affiliate programme, with very advanced encryption and exfiltration software, fast and very serious about their business. That's what they say. On their FAQ page we can find statements like these.

Image: What to target and what not to target

Neither they nor their affiliates may encrypt critical systems such as nuclear plants, pipelines, etc. They can steal information from them, but not encrypt it. If in doubt, they can contact the organisation's helpdesk. They are also not allowed to attack post-Soviet countries, a restriction that has long been common in malware.

Attacks on NGOs are allowed without restriction, as are educational institutions as long as they are not public. They recommend not attacking hospitals if deaths could result. And they encourage attacking as many law enforcement agencies as possible because, they say, they do not appreciate the important work these agencies do in raising awareness of cybersecurity.

If the victim doesn't pay up, they promise to keep the stolen company data available on their blog for as long as possible, so that the lesson is learned. And so that no one can take this website down, they maintain a very robust anti-DDoS system with dozens of mirrors, as well as the aforementioned bug bounty to find potential flaws in their encryption system that could allow access to the data without paying.

Bl@ckt0r, the ransomware gang that claims not to be one

It’s not that they’re a ransomware gang, it’s that they love to go around looking at vulnerable companies, break into their systems, and ask for ransom money. But they don’t mean any harm… unless you don’t pay, of course.

Image: Bl@ckt0r neither encrypts nor deletes

And they don't lie: they don't actually encrypt anything; they exfiltrate the data directly and sell it, so business continuity is not broken. According to them, their services are a bargain, since they have alerted the victim to potential security breaches.

They also seem to have a lot of resources to make everyone aware that the data has been stolen. For instance, contacts in the media. Hospitals, of course, are not touched.

Main image: Tyler Daviaux / Unsplash.

* * *

AI of Things (VIII): socio-demographic segmentation and video analytics to improve shopping experience

Pablo Salinero    13 July, 2022

The growth of e-commerce, with the many advantages and conveniences it offers to customers, has meant that physical shops have seen their market share shrink significantly. More recently, the forced closure of physical shops due to the COVID-19 pandemic has pushed traditional retailers to completely reinvent themselves in order to attract customers again.

This reinvention should be based on two main pillars:

  • maintain the differential offer of physical shops (personalised physical service, physical access to the product);
  • convert the shopping experience into something more similar to that of online shops (product recommendations, personalised campaigns, etc).

In other words, the physical shop must stop being merely a warehouse for products and become a space for services available to the customer. And to implement this second pillar, the digitalisation of the physical shop is essential.

What can physical shops learn from online commerce?

What lessons can a physical shop learn from the way online shops relate to their customers? In online shops, from the moment the customer registers, the seller already knows which customer is buying and can have all their socio-demographic and economic information.

In online commerce, the customer’s consumption and browsing habits are also known: which products they have visited, how much time they have spent analysing and studying each product, which campaigns or suggested products they have shown interest in and which they have ignored, which days of the week or times of the year they look for and buy certain products.

How should a physical shop evolve to be able to make use of similar information to improve the customer shopping experience?

First of all, the shop needs to be fitted with sensors, which will allow data to be collected so that customer behaviour inside the shop can be analysed. The main types of sensors that can be installed are:

  • Video cameras: with software installed that allows facial recognition. The aim is not so much to uniquely identify customers entering the shop, but to be able to count how many people enter and leave, their gender and age range. To ensure customer privacy, this information is not recorded.
  • Bluetooth sensors (beacons): devices that are placed at different points in the shop to locate customers via their smartphones and communicate with them to inform them of the details of a product, show them offers associated with a specific point in the shop or last minute offers, find out about their movements around the shop and the busiest points, etc.
  • RFID (Radio Frequency Identification): tags that are placed on products and significantly extend the capabilities of barcodes.
  • Interactive screens: distributed throughout the shop, with a dual purpose. On the one hand, to show customers personalised messages and, on the other, to collect feedback on the user’s shopping experience.

Although the exploitation of the information provided by these IoT sensors alone, without combining it with any other type of information, brings benefits that are more store-oriented than customer-oriented, it makes it possible to determine the ‘hot’ areas of the shop, to optimise its internal design by redistributing shelves and products, and to improve the management of queues at the checkout.

However, the information obtained in this way has the limitation that it does not distinguish between customers and considers them all, broadly speaking, as belonging to the same group.

Digitalisation of shops to improve customer experience

How can the benefits of digitalisation be increased for the shop while also improving the customer experience? By cross-referencing information from sensors with socio-demographic information about customers: their age group, gender, household size, economic group, place of residence and place of work.

For legal reasons, the information used must be aggregated so that customers cannot be uniquely identified, but despite this limitation, the cross-referenced and enriched information allows segmentation according to socio-demographic profile.

Cross-referenced and enriched information allows segmentation according to the socio-demographic profile of buyers

When the customer enters the shop, the bluetooth sensors detect their mobile devices, so the customer is identified. However, as mentioned above, this is not the final objective, but rather to cross-reference it with socio-demographic information to determine the segment of the population to which they belong.

This information is at a theoretical level, because at a practical level the cameras detect how many people belong to the group that has just entered, their sex and their ages, bearing in mind, once again, that specific customers are not identified but their socio-demographic segments. With this information provided by the cameras, it is possible to know whether it is a single person, a couple, a family with small children or older ones, a group of adults or teenagers, etc.

Customer knowledge in the physical shop

The physical shop is now on a par with the online shop in terms of customer knowledge, since, having assigned the customer to their socio-demographic segment, it can use everything it knows about the behaviour of similar groups of customers to offer a more personalised digital service, such as presenting campaigns and offering more specific products.

Moreover, the behaviour of these new groups inside the shop (obtained from the tracking of their path provided by the cameras or their use of the interactive screens), allows you to refine this knowledge of the customer, making segmentations of favourite products according to the customer’s socio-demographic group or, conversely, socio-demographic segmentations for each product or campaign.

Photo: Ashim D Silva

It is possible to extract even more information: purchase receipts provide a measure of the relevance of products according to the interest they arouse during the purchase process and the rate at which that interest converts into a final sale. And the interactive screens located at the exit of the shop collect feedback on the shopping experience and show whether the actions targeted at the customer helped the final sale or not.

Mobility information: where do customers come from (and where do they go)?

In addition to purely socio-demographic information, there is another type of information that can be useful when combined with the digitisation of physical shops: mobility information, which refers to customers’ travel and movement habits.

This information is extracted from the millions of events recorded daily on mobile networks, always anonymised, extrapolated to the total population and also aggregated into socio-demographic characteristics.

With this mobility information, it is possible to know where the customers who come to the shop come from, how often, what days of the week and whether they do so because they live, work or are sightseeing in the area. This information was already very useful even before the shop was opened, as it was used to decide on the ideal locations for the shops, depending on the socio-demographic profile of the customers to be attracted, looking for the areas where these customers move around the most.

Conclusion

The digitalisation of shops, from the double point of view of physical infrastructure and customer information, brings benefits for both the shop and the customers.

Shops can organise themselves more efficiently, placing products and displaying campaigns in a way that attracts more attention, and they can find out what kind of customers are the most frequent visitors, how they behave and what products they prefer.

Meanwhile, customers see their shopping experience improved as they receive much more personalised attention tailored to their profile. All of this makes the customer happier, and a happy customer spends more, which in turn increases the shop's profitability and profit.

If you want to know more about applications of the fusion of the Internet of Things and Artificial Intelligence, or what we call AI of Things, you can read the other articles in this series.

Understanding Digital Certificates

Cristina del Carmen Arroyo Siruela    12 July, 2022

For ordinary citizens, digital certificates are the electronic files or documents that allow them to carry out thousands of legal and administrative actions without having to go in person to complete these procedures. But what exactly is a digital certificate?

A digital certificate is an electronic document signed and generated by a certification authority (CA) or certification service provider, which allows the unique identification of an entity or applicant. This is done using public key or asymmetric cryptography, in which a pair of electronic encryption keys (public and private) is used.

Public key encryption, or public key cryptography, is a method of encrypting data with two different keys and making one of the keys, the public key, available for anyone to use. The private key is held only by the owner or applicant of the digital certificate.

The operating mechanism of asymmetric or public key cryptography is that data encrypted with the public key can only be decrypted with the private key, and vice versa.
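As an illustration of this mechanism, the minimal sketch below uses the Python cryptography library (an assumption of the example; any equivalent library would do) to generate an RSA key pair, encrypt a message with the public key and recover it with the private key.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Generate a key pair: the public key can be shared freely,
# the private key stays with the certificate holder.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

message = b"confidential data"
ciphertext = public_key.encrypt(message, oaep)    # anyone with the public key can encrypt
recovered = private_key.decrypt(ciphertext, oaep)  # only the private-key holder can decrypt

assert recovered == message
```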

Certification Authority (CA) and Public Key Infrastructure (PKI)

A certification authority (CA) is a trusted entity responsible for providing a series of electronic certification services. One of the best known and most widely used certification authorities in Spain is the FNMT (Fábrica Nacional de Moneda y Timbre).

Following the entry into force of the European eIDAS Regulation 910/2014, CAs have been replaced by the figure of the Qualified Trust Service Provider (QTSP), although the term CA is still used, especially in the business world.

These authorities are responsible for issuing, verifying the validity and revocation of electronic certificates, always guaranteeing the identity and veracity of the certificate holders’ data.

A public key infrastructure (PKI) is a system composed of hardware elements, software and security procedures, whose main function is the governance of encryption keys and digital certificates, making use of cryptographic and other mechanisms.

The usual components of a PKI infrastructure are:

  • Certification authority: As explained above, it is responsible for establishing user identities and creating digital certificates, an electronic document that associates identity and the set of public and private keys.
  • Registration authority: Responsible for the initial registration and authentication of users who are subsequently issued a certificate if they meet all the requirements.
  • Certificate server: Responsible for issuing the certificates approved by the registration authority. The certificate is built from the user's public key and the user's data, and is finally digitally signed with the private key of the certification authority.
  • Certificate repository: This component is responsible for making the public keys of registered identities available. When a certificate needs to be validated, the repository is consulted and the signature and certificate status are verified. Repositories also hold the CRL (Certificate Revocation List), which lists those certificates that for some reason have ceased to be valid before their expiry date and have been revoked.
  • Time Stamping Authority (TSA): This is the authority in charge of signing documents in order to prove that they existed before a certain point in time.

Inside Digital Certificates

X.509 is the standard used in public key infrastructures to define the structure of digital certificates. The ITU (International Telecommunication Union) first published this standard in 1988, and there are 3 versions of X.509 available. For more details on this standard, it is recommended to consult RFC 5280.

Digital certificates under the X.509 standard are defined in ASN.1 and are encoded in most cases using DER or PEM (Base64). The file extensions used include .pfx, .cer, .crt, .p12, etc.

The most common parts of a digital certificate are:

  • Version: used to identify the X.509 version.
  • Certificate serial number: this is a unique integer number generated by the CA.
  • Signing Algorithm Identifier: used to identify the algorithm used by the CA at the time of signing.
  • Issuer Name: displays the name of the CA issuing a certificate.
  • Validity: Used to display the validity of the certificate, showing when it expires.
  • Username: Displays the name of the user to whom the certificate belongs.
  • User’s public key information: contains the user’s public key and the algorithm used for the key.

In higher versions, more fields appear, such as the Unique Issuer Identifier, which helps to find the CA uniquely if two or more CAs have used the same issuer name, among others.
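As a rough illustration of how the fields listed above can be read programmatically, the sketch below parses a PEM-encoded certificate with the Python cryptography library; certificate.pem is a placeholder file name.

```python
from cryptography import x509

# Load a PEM-encoded certificate from disk ("certificate.pem" is a placeholder path)
with open("certificate.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.version)                                 # X.509 version
print(cert.serial_number)                           # unique serial number assigned by the CA
print(cert.signature_algorithm_oid)                 # signing algorithm identifier
print(cert.issuer.rfc4514_string())                 # issuer (CA) name
print(cert.not_valid_before, cert.not_valid_after)  # validity period
print(cert.subject.rfc4514_string())                # subject (user) name
print(cert.public_key())                            # subject's public key object
```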

Digital certificates mainly employ asymmetric cryptography and use encryption algorithms such as RSA (Rivest, Shamir and Adleman), DSA (Digital Signature Algorithm) and ECDSA (Elliptic Curve Digital Signature Algorithm).

The DSA and ECDSA algorithms are used exclusively for digital signatures and signature verification, while RSA can be used both for electronic signatures and for data encryption and decryption.
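As a minimal example of the signature use case, the following sketch (again assuming the Python cryptography library) signs a document with an ECDSA private key and verifies the signature with the corresponding public key.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# The signer holds the private key; verifiers only need the public key.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

document = b"contract to be signed"
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))

try:
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
    print("signature is valid")
except InvalidSignature:
    print("signature is NOT valid")
```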

Digital certificate types and classes

There are many types and classes of digital certificates, as these are provided by the CAs, which determine which ones they provide and manage.

The European regulation eIDAS 910/2014 establishes 2 types of certificates:

  • Electronic Certificate: Document signed by a certification service provider, linked to a series of signature verification data and ratification of the signatory’s identity. It follows the issuing requirements established in Law 59/2003 on electronic signatures and the eIDAS Regulation of the European Parliament. 
  • Qualified Electronic Certificate: Certificate that adds a series of additional conditions. The issuing provider must identify the applicants and seek reliability in the services it provides. This certificate complies with the requirements of the Electronic Signature Law 59/2003 in its content, in the processes for verifying the signatory’s identity and in the conditions to be met by the certification service provider. Example: Electronic ID card. 

If we consider digital certificates according to the type of identity and data, in general terms, the following 3 types can be established:

  • Natural Person: Associated with the identity of a natural person or citizen. They are designed to be used mainly for personal, official procedures.
  • For legal persons: Their use is intended for all types of organisations, whether they are companies, administrations or other types of organisations, all of which have a legal identity.
  • For entities without legal personality: They link the applicant with signature verification data and confirm their identity for use only in communications and data transmissions by electronic, computer and telematic means in the field of taxation and public administration in general.

They are also classified in some cases according to the scope of application of the certificate, examples of which include:

  • Web server certificate
  • Source code signing certificate
  • Company membership certificate
  • Representative certificate
  • Proxy certificate
  • Company seal certificate

The main purpose of web server certificates is to ensure the security of communications and transactions between the web server and its visitors. They allow the contents of the web server that holds the certificate (web pages or databases) to be accessed securely, as long as the certificate is properly implemented.

These certificates use the TLS (Transport Layer Security) protocol, which replaces the SSL (Secure Sockets Layer) protocol. There are various types of web server certificates, such as SSL/TLS, wildcard, SAN or multi-domain certificates, among others.
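A quick way to see one of these certificates in action is to open a TLS connection and read the certificate presented by the server. The sketch below uses Python's standard ssl module; example.com is just a placeholder host.

```python
import socket
import ssl

hostname = "example.com"  # placeholder host; replace with the server to inspect
context = ssl.create_default_context()  # validates the chain against the system's trusted CAs

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())      # negotiated protocol, e.g. TLSv1.3
        cert = tls.getpeercert()  # the server's certificate, already validated
        print(cert["subject"])    # who the certificate was issued to
        print(cert["issuer"])     # which CA issued it
        print(cert["notAfter"])   # expiry date
```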

Usefulness of digital certificates

The usefulness of digital certificates varies, as it depends on the type of digital certificate involved and, as seen above, there are many types.

The main advantages offered by the use of digital certificates are:

  • Security in communications and servers.
  • Security in the authentication systems where they are implemented.
  • Ease of carrying out legal or administrative actions remotely.
  • Electronic signature capacity, for the signing of documentation.
  • Data and information encryption capacity.

Artificial intelligence: making fake news more real

Marta Mallavibarrena    11 July, 2022

"Fake news" was the Collins Dictionary word of the year in 2017 and is repeated endlessly both in the media and on social networks. We have even dedicated numerous posts to it on this blog, pointing out possible risks derived from it, as well as the role of technology in its detection. In this case, the intention is to look at the problem from the other side: how technological development, including that of the very systems used to identify fake news, is actually helping to make it more and more realistic every day.

The intention is to show, through examples, the process of creating a totally fake news story from scratch with as little human effort as possible, letting technology do the rest, without going into the specific details of the technical functioning of these algorithms.

Creating our main character

Every news item needs a protagonist, and every protagonist a context. Thanks to platforms of the "this X does not exist" family, in a couple of clicks we can have their face, their pet or their CV. None of the generated images existed until we clicked, and they will cease to exist when we refresh the page.

Cassidy T. Pettway, 57, from Brighton, Colorado. Image automatically generated through thispersondoesnotexist.com
Sundae, one of Cassidy's cats. Image automatically generated through thiscatdoesnotexist.com

If we lack the imagination to add details such as name, nationality or residence, we can also resort to other free resources like fakepersongenerator.com or fauxid.com. Yes, for the cat too.

The limitation of this type of approach is that we cannot construct a complete identity from a single photograph, and given that Cassidy does not really exist, we cannot ask them to pose for more. To overcome this drawback, there are morphing techniques that allow us to obtain different angles from the same photograph, change its expression, increase or decrease its apparent age, and so on.

These technologies are similar to those used by applications such as FaceApp, which a few years ago had thousands of users on social networks showing "what they would look like at 80". They are also the cause of many headaches for border agents around the world, as the generated images are close enough to the original for the human eye to identify them as the same person, yet they can evade biometric systems.

Image: example of images modified by SofGAN. Source: apchenstu.github.io/sofgan/

Now that we have enough photos of our main character, we can also add a background, a context. If we don’t want to worry about someone recognising the original image we have used in our montage, we can describe the landscape to DALL-E (mini version available on its website) or, if we prefer to bring out our artistic side, we can draw it in Nvidia’s GauGAN2.

Image: input and output of GauGAN 2, generating realistic images from simple drawings. Source: gaugan.org/gaugan2/

Special mention should be made of video game engines such as Unreal Engine 5 which, although they allow the creation of scenarios and environments capable of fooling anyone, require much more effort from the creator than the examples presented in this post. A recent example is the recreation of the train station in Toyama, Japan, created by the artist Lorenzo Drago.

Developing and sharing the news

Now that we have given Cassidy a face, it is time for them to fulfil their role as creator, disseminator or even protagonist of fake content. If we're not in the literary mood to write it ourselves, there are algorithms for that too.

Platforms such as Smodin.io can generate articles or essays of considerable length and quality simply from a title. I may or may not have asked for its help in writing this post.

If we were to focus our disinformation strategy on impersonating someone else rather than creating it out of thin air, there are also systems trained to mimic writing styles. In 2017, the Harry Potter chapter generated by Botnik Studios imitating the style of the original author went viral.

If instead of a proper article we want to run a disinformation campaign on social media, we can create short text snippets with the Inferkit.com demo. Perfect for a tweet or a Facebook comment. What if Cassidy were to deny that man ever landed on the moon?

Image: text generated by Inferkit.com. In grey: user-generated text. In green: text added through artificial intelligence. Source: app.inferkit.com/demo

In many cases it is not even necessary to create an account on the networks to actually post the content; a screenshot suggesting that you have done so is enough. It could be a WhatsApp conversation, a Facebook comment or even a Tinder profile.

Going the extra mile

After generating static images and text, if we wanted to go one step further in our creation of fake news, we could turn to video and sound. The well-known deep fakes are a very useful tool in both cases. This blog has previously discussed how they are used in film shoots, to impersonate someone's identity or to carry out "CEO fraud".

In addition to these techniques, more focused on the impersonation or imitation of another image or sound, there are platforms capable of creating new voices: some from scratch, such as This Voice Does Not Exist; others allow us to make adjustments to previously created voices, such as Listnr.tech; and others create new voices from our own, such as Resemble.ai.

Conclusion

While the threat of misinformation and fake news has been around for centuries, thanks to technological development we are now able to generate a person's image in one click, give them a pet, a job and a hobby in three clicks, instil certain ideas in a few more and finally give them a voice.

Tasks that used to require a great deal of manual effort by the party interested in creating and disseminating the information can now be automated and done en masse. This also means that these campaigns are now available to anyone and are not limited to governments and large corporations.

As long as technology cannot keep up with detecting what it creates, the only possible solution is based on awareness and critical thinking on the part of users, which starts with knowing the threats they face.

“Our technological powers increase, but the side effects and potential hazards also escalate”.  – Alvin Toffler. Future Shock (1970)

Cyber Security Weekly Briefing, 1 — 8 July

Telefónica Tech    8 July, 2022

Raspberry Robin: worm detected in multiple Windows networks

Microsoft has issued a private advisory to Microsoft Defender for Endpoint subscribers, informing about the detection of the Raspberry Robin malware in multiple networks, mostly from the industrial sector.

The worm, created in 2019 and first detected in September 2021, is mainly disseminated through infected USB devices. Some of its characteristics are the use of QNAP NAS devices as command and control (C2) servers and its ability to connect to the Tor network.

In addition, Raspberry Robin abuses legitimate Windows tools such as the msiexec process to infect new devices, execute malicious payloads and ultimately deliver malware. There is no evidence indicating that the operators of this malware have exploited the access obtained through their activities.

Furthermore, it has not been possible to attribute this campaign to any specific malicious actor, although Microsoft has rated it as high risk, as the attackers could deploy additional malware on the victims' networks and escalate privileges at any time.

More info

* * *

Critical vulnerability in Spring Data for MongoDB

NSFOCUS TIANJI Lab researcher Zewei Zhang has reported a critical remote code execution (RCE) vulnerability in Spring Data MongoDB, a project for integrating documents into MongoDB databases.

The flaw has been identified as CVE-2022-22980 and has received a criticality of 9.8 (CVSSv3). The vulnerability in particular consists in the possibility of performing a malicious SpEL (Spring Expression Language) injection that would allow an attacker to execute arbitrary code remotely with legacy privileges. The flaw affects versions 3.4.0, 3.3.0 to 3.3.4, and earlier unsupported versions.

Spring released the corresponding patched versions of Spring Data MongoDB, 3.4.1 and 3.3.5, at the end of June. However, if it is not possible to deploy these new versions, mitigation measures can be found in the advisory published by VMware; applying them immediately is recommended, given that proofs of concept for this vulnerability are publicly available.

More info

* * *

Malicious version of Brute Ratel C4

Researchers at Palo Alto Networks have published an analysis of a malicious sample of the legitimate Brute Ratel C4 (BRc4) software. This tool has emerged as an alternative to Cobalt Strike for red team penetration testers.

Just as Cobalt Strike leaves beacons on infected computers, Brute Ratel installs "badgers", which perform a similar function: they establish persistence and connect to command and control servers to receive commands and execute code on infected computers.

Additionally, this tool was specifically designed to evade endpoint detection and response (EDR) and antivirus products. According to the researchers, it is very likely that former members of the Conti ransomware group have created shell companies in order to pass part of the verification process required to obtain this software.

Finally, they urge security vendors to update their protections to detect this software and for organisations to take proactive steps to defend themselves.

More info

* * *

Critical vulnerability in OpenSSL

Security researcher Xi Ruoyao has discovered a vulnerability in the OpenSSL cryptographic library that could lead to remote code execution under certain circumstances. The flaw, identified as CVE-2022-2274, lies in the implementation of RSA for X86_64 CPUs supporting AVX512IFMA instructions.

The vulnerability could lead to memory corruption during computation, which an attacker could use to ultimately trigger remote code execution on the machine performing the computation.

The flaw affects OpenSSL version 3.0.4, which was released on 21 June 2022, and has been fixed in OpenSSL version 3.0.5. OpenSSL versions 1.1.1 and 1.0.2 are not affected by this vulnerability.

More info

* * *

New HavanaCrypt ransomware campaign

TrendMicro researchers have analysed a campaign of the new ransomware family called HavanaCrypt, which is reportedly masquerading as the Google Software Update application for distribution.

HavanaCrypt is compiled in .NET and uses Obfuscar, an open-source obfuscator, to protect its .NET code. It has also been confirmed to use an IP address belonging to a Microsoft hosting service as its C&C (Command & Control), which is unusual for this type of threat.

TrendMicro has also detected the use of multiple anti-virtualisation tools to evade possible dynamic analysis in virtual machines. Finally, it is worth mentioning the QueueUserWorkItem function, used to distribute other payloads and encryption tools.

After the encryption process, during which it uses legitimate KeePass Password Safe modules and the CryptoRandom function, this ransomware does not leave any ransom note, so researchers believe it may still be under development.

More info

Main challenges for the adoption of the metaverse

Álvaro Alegría    8 July, 2022

In a previous post dedicated to the metaverse, I explained what the metaverse, the "buzzword" of the year, consists of and what opportunities it will offer companies.

Today I want to share other challenges that, in my opinion, must be overcome in the short and medium term for the metaverse to unfold its full potential.

Diversification

Most of the metaverses currently available have gaming as the central element of their value proposition.

This is entirely understandable because, for years, the video game world had already oriented its strategy towards online multiplayer, so for its users, the leap into the metaverse is a natural step.

However, new proposals offering other types of content need to be deployed and consolidated in order to expand the number of users interested in joining the metaverse.

Here I am convinced that we will soon start to see horizontal proposals, as Meta is likely to be, and new vertical proposals, in the world of entertainment, sport, the workplace and even the industrial world.

Purpose and experiences

When a new technology is developed and, above all, when it generates the level of hype that is brewing around the metaverse, a perverse incentive is unleashed: to use it at all costs so as not to be left out of the wave, even if its real value is not understood and its potential is not well known.

As we mentioned in the previous article, it is important to understand that the metaverse is not an end, but a means. It is a tool that should serve companies to achieve their strategic objectives, whatever they may be.

The question should not be whether or not a company should be in the metaverse. The question should be why, what is the purpose?

Adopting the metaverse with a purpose is fundamental, because it will guide companies in designing the experiences that will define their relationship with their customers and users in the metaverse.

Payments and transactions

Not everyone may agree with what I am about to say, but the metaverse and the crypto world are independent concepts that can exist entirely separately.

Whether the metaverse, without the ‘crypto’ world, can really unfold its full potential is a different matter. In my opinion, no.

Given that the metaverse involves the interaction of thousands of users from different countries, it is essential that all users share a common economy, through one or more digital currencies, which facilitates payments and transactions.

“Without the crypto world, the metaverse cannot unfold its full potential”

Imagine that you put an asset up for sale in euros and someone who holds Peruvian soles or Thai baht wants to buy it. The buyer would have to calculate the price in their currency and one of the two parties would have to exchange currencies, increasing the friction of the transaction.

If, on the other hand, all users handle the same currency, for example “Mana” in the case of Decentraland, the transaction is much simpler.

But, to be honest, whether the crypto world manages to overcome and avoid the kind of major scandals that have occurred in recent weeks, which directly affect the trust of the average user, will play a fundamental role here.

Privacy and Security

Great potential brings with it great responsibility, and the development of the metaverse will mean evolving security systems to a higher level.

The metaverse and Web3 will be built on the identity of its users and therefore it will be absolutely essential to build new methods of privacy and personal data protection. Let us be aware that the metaverse will multiply the type and amount of data we will share to identify ourselves.

But not only data: it will also be necessary to guarantee the protection of our virtual assets, or else it will be impossible to develop a true large-scale digital asset economy like the one I mentioned in the previous point.

Legality

The evolution of the internet has brought (and continues to bring) a parallel revolution in the legal sphere, both in terms of legislative production and in the way those same rules are applied.

The reality is that the law cannot cover all the factual scenarios enabled by technology because, among other things, the pace of technological development is several orders of magnitude greater than the capacity of any parliament to pass legislation.

Photo: Minh Pham

And the problem we face with the advent of the metaverse is that, in addition to the current problems, we will be taking the complexity of the assumptions to a higher stage.

What jurisdiction applies in the metaverse? What is allowed and what is not? How do we give legal certainty to the hundreds of thousands of users who will interact at the same time in the same virtual space from multiple countries?

Metaverse adoption

I have left this point to the end because, although it is the simplest to understand, it is in fact the most far-reaching of all.

The future of the metaverse will depend on its mass adoption by users, as is the case today with social networks

However advanced, technological, immersive, decentralised and interactive the universes may be, they can only survive if they manage to attract the general public.

Nobody wants to go to a concert at the Wizink Center and find themselves alone on the dance floor, because part of the fun of these kinds of activities is precisely to share them in community, to enjoy a common experience and to be part of something bigger than ourselves.

The challenge at this point is to overcome two important barriers:

  • An important part of the metaverse's value proposition involves immersive experiences that necessarily require hardware (glasses, controllers, etc.) that is currently only available to a small fraction of the population. Will brands get us to buy these devices to enjoy their experiences? It remains to be seen, but if we all now have a smartphone in order to access everything mobile applications offer us, it is clearly a matter of incentive and reward.
  • Getting each of us to embrace the cultural shift of immersing ourselves for a few hours a day in a virtual world that abstracts us from our everyday reality. During the pandemic, internet traffic multiplied several times over. But as the measures were relaxed, we all took to the streets to get back in touch with our loved ones. Will the experiences in the metaverse be interesting enough to make us renounce, even if only for a few minutes a day, life in the flesh?

As I said at the beginning of the article, these and other challenges are undoubtedly the ones that all companies will have to face in the short and medium term in our strategies for adopting the metaverse.


Edge Computing and Machine Learning, a strategic alliance

Alfonso Ibañez    7 July, 2022

By Alfonso Ibáñez and Aitor Landete

Nowadays, it is not unusual to talk about terms such as Artificial Intelligence or Machine Learning. Society, companies and governments are increasingly aware of techniques such as deep learning, semi-supervised learning, reinforcement learning or transfer learning, among others. However, they have not yet assimilated the many benefits of combining these techniques with other emerging technologies such as the Internet of Things (IoT), quantum computing or Blockchain.

IoT and Machine Learning are two of the most exciting disciplines in technology today, and they are having a profound impact on both businesses and individuals. There are already millions of small devices embedded in factories, cities, vehicles, phones and in our homes collecting the information needed to make smart decisions in areas such as industrial process optimisation, predictive maintenance in offices, people’s mobility, energy management at home, and people’s facial recognition, among others.

The approach of most of these applications is to capture information from the environment and transmit it to powerful remote servers via the Internet, where the intelligence and decision making reside. However, applications such as autonomous vehicles are highly critical and require accurate real-time responses. These new performance requirements play a key role in decision-making, and relying on remote servers outside the autonomous vehicle is not appropriate. The main reasons concern the time taken to transfer data to external servers and the permanent need for internet connectivity to process the information.

Edge Computing

A new computing paradigm is emerging to help alleviate some of the problems above. This approach brings data processing and storage closer to the devices that generate it, eliminating reliance on servers in the cloud or in data centres located thousands of miles away. Edge Computing is transforming the way in which data is processed, improving response times and solving connectivity, scalability and security problems inherent to remote servers.

The proliferation of IoT devices, the rise of Edge Computing and the advantages of cloud services are enabling the emergence of hybrid computing, where the strengths of Edge and Cloud are maximised. This hybrid approach allows tasks to be performed in the optimal place to achieve the objective, whether on local devices, on cloud servers, or both. Depending on where the execution takes place, the hybrid architecture coordinates tasks between edge devices, edge servers and cloud servers, as the sketch after the list below illustrates:

  • Edge devices: these are devices that generate data at the edge of the network and have connectivity (Bluetooth, LTE IoT, etc.). They are equipped with small processors to store and process information and even execute, in real time, certain analytical tasks, which can result in immediate actions by the device. Tasks requiring greater complexity are moved to more powerful servers at higher levels of the architecture. Some examples of edge devices are ATMs, smart cameras, smartphones, etc.
  • Edge servers: these are servers that have the capacity to process some of the complex tasks sent from the lower devices in the architecture. These servers are in continuous communication with the edge devices and can function as a gateway to the cloud servers. Some examples are rack processors located in industrial operations rooms, offices, banks, etc.
  • Cloud servers: these are servers that have a large storage and computing capacity to address all tasks that have not been completed so far. These systems allow the management of all system devices and numerous business applications, among many other services.
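As a rough illustration of this coordination, the toy routing policy below shows how a task could be sent to one of the three tiers; the task names, latency limits and complexity scores are invented for the example and do not come from any particular product.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    max_latency_ms: int   # how quickly a response is needed
    complexity: int       # rough measure of the compute required (1-10)

def route(task: Task) -> str:
    """Hypothetical routing policy for a hybrid edge/cloud architecture."""
    if task.max_latency_ms <= 50 and task.complexity <= 3:
        return "edge device"   # immediate, lightweight decisions
    if task.max_latency_ms <= 500 and task.complexity <= 7:
        return "edge server"   # heavier tasks that must stay close to the data
    return "cloud server"      # everything else: large-scale storage and analytics

tasks = [
    Task("emergency braking decision", max_latency_ms=20, complexity=2),
    Task("aggregate hourly sensor stats", max_latency_ms=300, complexity=5),
    Task("retrain fleet-wide model", max_latency_ms=60_000, complexity=10),
]
for t in tasks:
    print(f"{t.name} -> {route(t)}")
```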

Edge Artificial Intelligence

Research in the field of Machine Learning has made it possible to develop novel algorithms in the context of the Internet of Things. Although the execution of these algorithms is associated with powerful cloud servers due to the computational requirements, the future of this discipline is linked to the use of analytical models within edge devices. These new algorithms must be able to run on devices with weak processors, limited memory and without the need for an Internet connection.

Bonsai and ProtoNN are two examples of new algorithms designed to run analytical models on edge devices. These algorithms follow the philosophy of supervised learning and can solve problems in real time on very simple devices with few computing resources. One application of this type of algorithm is smart speakers. A trained model is integrated into these devices that is capable of analysing all the words detected and identifying, among them, the activation keyword ("Alexa", "Hey Siri", "OK, Google"…). Once the keyword is recognised, the system starts transmitting the audio data to a remote server to determine the required action and execute it.
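A heavily simplified sketch of that flow follows. The keyword "detector" here is just an energy threshold standing in for a real on-device model such as Bonsai or ProtoNN, and send_to_cloud is a placeholder for the actual upload; all names and thresholds are invented for the example.

```python
import numpy as np

def keyword_score(frame):
    """Stand-in for a tiny on-device model: here simply the frame's energy."""
    return float(np.mean(frame ** 2))

def send_to_cloud(frames):
    """Placeholder for streaming audio to the remote server for full analysis."""
    print(f"streaming {len(frames)} frames to the cloud")

THRESHOLD = 0.1           # hypothetical activation threshold
listening, buffer = False, []

# Simulated stream: quiet frames first, then louder ones that "contain" the keyword
stream = [np.random.uniform(-0.2, 0.2, 16000) for _ in range(5)]
stream += [np.random.uniform(-1.0, 1.0, 16000) for _ in range(3)]

for frame in stream:
    if not listening:
        # Everything up to this point runs locally; no audio leaves the device
        listening = keyword_score(frame) > THRESHOLD
    else:
        buffer.append(frame)

if buffer:
    send_to_cloud(buffer)
```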

Unlike the previous algorithms, in which models are trained on cloud servers, the Federated Learning approach orchestrates the training of analytical models between the edge and the cloud. In this approach, each of the system's edge devices trains an analytical model with the data it has stored locally. After this training phase, each device sends the parameters of its local model to the same cloud server, where all the models are combined into a single master model. As new information is collected, the devices download the latest version of the master model, retrain it with the new information and send the resulting model back to the central server. This approach does not require the information collected on the edge devices to be transferred to the cloud server for processing, as only the resulting models are transferred.
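A minimal numerical sketch of this idea, assuming a toy linear model and plain federated averaging rather than any production framework, could look like this:

```python
import numpy as np

# Federated-averaging sketch: each edge device trains locally,
# only model parameters (never the raw data) travel to the cloud server.

def local_update(global_weights, local_data, lr=0.1, epochs=5):
    """Hypothetical local training step for a linear model y = X @ w."""
    w = global_weights.copy()
    X, y = local_data
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, devices):
    """Cloud server averages the locally trained weights into a master model."""
    local_models = [local_update(global_weights, data) for data in devices]
    return np.mean(local_models, axis=0)

# Toy data held on three edge devices, never shared with the server
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):                  # 20 communication rounds
    w = federated_round(w, devices)
print(w)                             # should approach [2.0, -1.0]
```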

All new algorithms proposed in the literature try to optimise three important metrics: latency, throughput and accuracy. Latency refers to the time needed to infer a data record, throughput is the number of inferences made per second and accuracy is the confidence level of the prediction result. In addition, the power consumption required by the device is another aspect to consider. In this context, Apple has recently acquired Xnor.ai, a start-up company that aims to drive the development of new efficient algorithms to conserve the most precious part of the device, its battery.
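To make the first two metrics concrete, a simple (and admittedly naive) way to measure latency and throughput of an on-device inference function is sketched below; the "model" is a hypothetical stand-in, not a real edge algorithm.

```python
import time
import numpy as np

def benchmark(infer, samples):
    """Return average latency (ms) and throughput (inferences/s) of an inference function."""
    start = time.perf_counter()
    for x in samples:
        infer(x)
    elapsed = time.perf_counter() - start
    return 1000 * elapsed / len(samples), len(samples) / elapsed

# Hypothetical stand-in for an on-device model: a tiny linear classifier
weights = np.random.rand(16)
infer = lambda x: float(np.dot(weights, x) > 0.5)

samples = [np.random.rand(16) for _ in range(10_000)]
latency_ms, throughput = benchmark(infer, samples)
print(f"latency: {latency_ms:.4f} ms, throughput: {throughput:.0f} inferences/s")
```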

Edge adoption

Businesses are embracing new technologies to drive digital transformation and improve their performance. Although there is no reference guide for the integration of these technologies, many companies follow the same pattern for the implementation of edge-related projects:

  • The first phase consists of the most basic scenario. The sensors on the edge devices collect information from the environment and send it to the cloud servers, where it is analysed and the main alerts and metrics are reported through dashboards.
  • The second phase extends this functionality by adding a processing layer on the edge device. Before the information is sent to the cloud, the device performs a basic analysis and, depending on the values detected, can trigger various actions through Edge Computing.
  • The most mature phase is the incorporation of edge analytics. In this case, the edge devices process the information and run their integrated analytical models to generate intelligent responses in real time. The results are also sent to the cloud servers to be processed by other applications.

A more recent approach associated with edge analytics consists of enriching the predictions generated by edge devices with further predictions provided by cloud servers. The challenge for the scientific community now is to develop systems that dynamically decide when to invoke this additional intelligence from the cloud and how to combine the predictions made by both.
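One simple way to picture that decision is a confidence-based cascade: the device keeps its own prediction when it is confident enough and only calls the cloud model otherwise. The sketch below shows this idea with made-up models and a made-up threshold; it is an illustration of the concept, not a published method.

```python
# Hypothetical edge/cloud cascade: ask the cloud only when the edge model is unsure.

CONFIDENCE_THRESHOLD = 0.8

def edge_predict(sample):
    # Cheap on-device model returning a label and a confidence score.
    score = min(1.0, abs(sample))
    return ("anomaly" if sample > 0 else "normal", score)

def cloud_predict(sample):
    # Slower but more accurate model running on a cloud server.
    return ("anomaly" if sample > 0.1 else "normal", 0.99)

def cascade(sample):
    label, confidence = edge_predict(sample)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label, "edge"
    # Low confidence: enrich the prediction with cloud intelligence.
    label, _ = cloud_predict(sample)
    return label, "cloud"

for s in (0.95, 0.3, -0.7):
    print(s, cascade(s))
```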

Time to start planting digital seeds for the future?

Telefónica Tech    6 July, 2022

Preparing business operations for the future of work is one of the defining problems of our time. Organisations the world over are now at the stage where their choice of technology infrastructure is vital to their future sustainability.

The vast majority of businesses may have already been heading along a digital transformation journey of sorts, assisted by the transformative forces of people, processes and technology. But throw in a global pandemic and you can expect this equilibrium to take a hit.

Telefónica Tech looked into the technology solutions that businesses relied on during the pandemic and how these would serve them in the long-term. In its research it found that only 2% of respondents made no changes to their ICT at all to deal with the pandemic.

For the majority, their response was technology-driven; 48% brought forward strategic ICT plans and 31% decided to pivot their business and overhaul their ICT. A further 17% made more effective use of the technology that they already had in place – including collaborative and cloud technology, a finding that was more prevalent in microbusinesses (25%).

The move to digital tools during the pandemic paid off: 60% of companies say they are in a better position than before

In most cases, the move to digital tools paid off, with 60% reporting that the pandemic accelerated their long-term ICT strategy and that they are now in a better position than before. This figure rises to 70% among medium and large enterprises. Yet 28% of companies say they are still on a ‘war footing’, dealing with the here and now, which signals that for some organisations there is still no clear path out of the recovery phase.

Using technology to fight fires

The move to remote working en masse was certainly a reason to advance digital transformation strategies, yet the speed of events and the pressure to change were not necessarily conditions conducive to long-term strategic decision-making.

Our research tells us that some investments were more of a knee-jerk reaction: a significant 81% of businesses admitted to making expedient ICT purchases during the pandemic that serve no long-term purpose.

Allowing people and processes to catch up

But that’s not to say there isn’t a huge amount to be learnt from the way that businesses rapidly transitioned digitally during the pandemic.

44% stated that although they achieved greater technology efficiencies, they now need people-based processes to catch up. Unsurprisingly, this trend increases with the number of staff a business employs, with 51% of large enterprises reporting it.

This demonstrates that for infrastructure decisions to be driven by long-term strategy, we need to think beyond technology to completely reimagine the way businesses operate. 

Other key challenges included providing greater employee training and/or education to meet new operational demands (39%) and finding time to take stock, review and understand where the business is and the direction IT decision-makers need to take next (35%).

One of the main challenges of digitalisation is the human element: without people there is no organisation and no service.

It’s telling that each of the three top challenges is rooted in a human element. If you look at this in the context of the “golden triangle” of people, processes and technology, the ‘people’ element commonly comes first; after all, without people you have no organisation or service.

Throughout the pandemic, the technology aspect of this triad has noticeably accelerated, but this research demonstrates that the other two parts, ‘people’ and ‘processes’, now need to catch up.

Technology also imposes challenges

Technology is not only an enabler, but also a formidable force in its own right that imposes its own set of challenges. Respondents rated the three top challenges for this next phase of the pandemic in order of priority. Addressing the scarcity of talent and digital dexterity was rated first; it has been widely reported that both are the missing ingredients of most digital transformation initiatives.

Following this, ‘enabling multiple and competing business outcomes’ was rated second. The importance of staying ahead of the competition during a period of rapid digital transformation cannot be overstated.

Photo: Headway

Redesigning work for a hybrid model was rated third, as it’s well recognised that there are many benefits to continuing remote working, while the office still has its place. But of course, allowing workers the flexibility to work where they choose comes with its own hurdles.

Time to plant digital seeds for the future

Whilst 60% report that they are in a better position than before the pandemic thanks to bringing forward their ICT strategy, that doesn’t mean the aspirations stop there: 57% of organisations stated that they want to spend more on ICT over the next 1-2 years, and 35% would spend the same amount again.

Now more than ever, it’s time to start planting digital seeds for the future. This means carefully considering the application of technology holistically across the organisation, and not skipping the collaboration stage that should inform strategic decision-making, a lesson learnt by 47% of respondents.

More than anything, our research emphasizes that there is a human connection that needs to be facilitated throughout any digital transformation initiative.