Are we really shopping “securely” on the Internet?

Martiniano Mallavibarrena    27 January, 2022

Once Black Friday, Singles’ Day (if you have Chinese roots or any kind of relationship to it) and Christmas are over, I’m sure the vast majority of us have a long list of anecdotes about exotic e-commerce portals, carriers in trouble, packages that never arrived and many other stories. However, the real question we should all be asking ourselves is: did I buy securely? Although we may all think we did, I hope this article makes you stop for a moment and run a mental review of the best practices below.

First level – Choosing the right portals

Most people make their online purchases on well-known portals where there should be no major problems to do business (if the following levels are taken into account). However, many others are looking for better prices (the “bargain” concept) or are looking for borderline legal options (imitations, second-hand of questionable reliability, private-to-private exchanges on little-known portals, etc.).

In these other cases, the problem is that users will basically approach two scenarios:

  1. Fraudulent websites: under the guise of a legitimate online shop, they steal your credentials, payment method data, etc., giving nothing in return or delivering useless merchandise.
  2. Legitimate imitation websites: in these cases, the portal functions normally, sometimes imitating the authentic site of well-known brands (RayBan, Nike, Adidas, etc.), but the delivered product is a low-quality imitation. These cases border on legality, although they clearly infringe trademark rights.

The rest of the “known” portals in common use should not present major problems when carrying out online transactions, as long as we take the following two levels into account.

Be cautious and always check (forums, friends, etc.) how these other portals “are rated”.

Second level – Following some best practices (in the purchasing process)

At this level, the time will come to check out and pay for the purchase. There are, of course, a few points to try to keep in mind:

  1. The famous “padlock” icon indicates that we are using the HTTPS protocol (HTTP Secure), the most basic condition for secure electronic transactions on the web. In addition to encrypting the data we send, it authenticates the portal we are connecting to.
    1. Buying online without it is very dangerous: there are no minimum security guarantees.
  2. If you are using the portal for the first time (watch out for level 3, below), give only the basic data needed and nothing more than necessary (much of the rest is collected for marketing and profiling purposes, as we saw in another post).
  3. It may be a good practice to have a personal email address exclusively for e-commerce ([email protected]) and to manage this type of activity more carefully.
  4. The payment method is important. If we use third-party services such as PayPal, it is perfect as long as we configure the validation options in a reasonable way (authentication, payment approval security, maximum amounts, etc.)
    1. If you use credit/debit cards, be cautious about leaving the data saved, and be wary of the facilities the browser or operating system offers to store the details of every card you have active.
      1. The most dangerous piece of information is the card’s security code (the CVV), which we should try not to leave stored, always keeping level 3 (below) in mind.
    2. We should be serious about authorising the transaction with our bank. Each time (if possible) we should be asked for specific authentication with a code generated on the spot (there are many variants depending on the bank), so that we can validate each transaction one by one.
  5. Taxes and currencies. Be aware of what exchange rate will be applied when using foreign currency and whether you will be charged taxes in the case of certain countries (e.g., UK now out of the EU due to Brexit).
    1. Together with the previous point, keep currency and taxes in mind before authorising the transaction at the bank. If something does not add up, it is better not to accept and to check everything. Avoid compulsive buying 😊
  6. Always keep the complete record of the purchase (with a digital signature, if official), where the whole route of the purchase is fully identified (for possible claims or possible fraud).
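
The first point in the list above (the HTTPS “padlock”) has a concrete technical meaning: the connection must be made with certificate validation switched on. As a minimal sketch (not a full client), Python’s standard ssl module shows the checks a secure client is expected to perform by default:

```python
import ssl

# Default TLS client settings mirror what the browser "padlock" implies:
# the server must present a certificate that a trusted authority vouches
# for, and that certificate must match the hostname we typed.
ctx = ssl.create_default_context()

print(ctx.check_hostname)                     # True: name must match
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: cert is mandatory
```

Disabling either of these checks (as some scripts do to silence errors) throws away exactly the authentication guarantee that the padlock represents.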

Third level – Choosing the best scenario for buying online

It is obvious that sometimes we must make an emergency online purchase (a trip due to a family emergency) but we should try to avoid some basic high-risk scenarios:

  1. Use our own “secure” device and avoid computers in hotels, cyber-cafés, friends, etc.
  2. Use home or office connections, the 4G/5G network or trusted Wi-Fi networks, and avoid networks in cafés, hotels, town halls, etc., especially if they are “open” or if we have never used them before.
  3. If a friend sends us an SMS, e-mail, WhatsApp message, etc. with the address of a portal with outrageous prices (see level 1), avoid clicking on the link directly: look it up first on the internet or make sure it is a trustworthy portal.

It has always been said, and I believe it is still true, that the weakest link in the cyber security chain is the human being. When we shop online on a personal basis, we are making micro-decisions at all three levels above.

Before you go on to do anything else, please think for a moment to see if you are shopping (or not) in a “secure” way.

The risks of not having controlled exposure to information (III)

Susana Alwasity    25 January, 2022

Finally comes the last and long-awaited post in this series on the risks of uncontrolled information overexposure. As we saw in the previous post, we know how to minimise the risks of our digital footprint, but now we need to know how to remove existing information.

Practical resources for the removal of information

In recent years, with the entry into force of the General Data Protection Regulation (GDPR), the trend in digital services has been towards trying to preserve the protection of the privacy of citizens and users on the Internet.

For this reason, an effective method for deleting our online accounts and associated information is to review the service’s privacy policy and find a contact or form to which we can direct our intention to exercise our right of deletion.

This right corresponds to the data subject’s intention to request that the data controller delete his or her personal data, provided that the data are no longer necessary for the purposes for which they were collected. To do so, we must send a letter, for example using this template from the Spanish Data Protection Agency, adding our intention to delete our account or associated service.

Similarly, even if we believe that the information collected by different sites is “public”, we can almost always choose to request the removal of our information. This applies to services where, although the information is public, they are making a financial profit from the collection, or simply present the information in a structured form.

We can choose, for example, to search for ourselves in people-indexing tools such as Pipl and request the removal of our information in the corresponding section. Likewise, in the case of Have I Been Pwned, through its opt-out section, we can prevent our compromised email address from being shown among the information leaks in which it appears.
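
As a side note on how Have I Been Pwned limits exposure by design: its companion Pwned Passwords service can be queried without ever revealing the secret, using k-anonymity. A sketch in Python (the range endpoint named in the comment is the documented one; the HTTP call itself is omitted here):

```python
import hashlib

def pwned_range_query(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 hash for a k-anonymity range query.

    Only the 5-character prefix is ever sent to the Pwned Passwords
    API (https://api.pwnedpasswords.com/range/<prefix>); the service
    returns every known suffix sharing that prefix, and the match is
    made locally, so the full hash never leaves our machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = pwned_range_query("password")
print(prefix)   # 5BAA6
```

The same principle (send the minimum, compare locally) is a good mental model for any service that checks our data against a breach corpus.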

One of the most direct ways to remove personal information displayed among Google’s results is to contact the owner of the site where they appear directly. Google also offers a form to remove personal information, and to stop indexing pages where personally identifiable information (PII) appears, such as financial information, sensitive personal or health data, contact addresses, or our handwritten signatures, among others.

If we have already removed different profiles of ours, or information that we had displayed on pages where we are not the owners, the next step is to inform Google to stop indexing the link, indicating that it is obsolete content and is no longer available. To do this, the search engine provides the user with a tool to remove obsolete content.

Finally, it should be noted that these measures are intended to control the exposure of information and minimise the risks associated with it, without forgetting the following premise:

It is not about not having information exposed about us, but about having a controlled exposure.

IoT and Big Data: What’s the link?

Paloma Recuero de los Santos    24 January, 2022
The digital revolution has changed our lives. To begin with, technological advances were linked to the worlds of scientific research, industrial innovation, the space race, defence, health, private companies, etc. But nowadays, everyday citizens can see how technology is changing their daily lives, their ways of communicating, learning, making decisions and even getting to know themselves. You no longer have to be a “techie” to use words such as “Big Data” and “IoT” in daily speech. But do we really know what they mean?

What is the IoT? What does it have to do with Big Data?

Put simply, IoT is an acronym for Internet of Things. The philosophy that this concept stands on is the connection between the physical world and the digital one, through a series of devices connected to the internet. These devices work like an improved version of our sensory organs, and are capable of collecting a large amount of data from the physical world and transmitting it to the digital world, where we store, process, and use it to make informed decisions about how to act. These decisions can end up being made automatically, since the IoT opens doors to the creation of applications in the fields of automation, detection and communication between machines.

The data collected by these connected devices is characterized by its great Volume (there are millions of sensors continually generating information), Variety (sensors of all types exist, from traffic cameras and radars to temperature and humidity detectors) and the Velocity at which it is generated. These 3 V’s are the same ones that define Big Data, and to them we can add Veracity and Value. It is said that this data is the oil of the 21st Century but, by itself, it is not very useful. However, if we apply advanced Big Data analytics, we can identify trends and patterns. These “insights” carry great value for companies, since they can help them make decisions based on data, what we at LUCA call being “Data Driven“.

The application of IoT has two very different aspects.

  • On one hand, the consumer side, comprising applications aimed at creating smart homes, connected vehicles and intelligent healthcare.
  • On the other hand, the business-related uses: applications relating to retail, manufacturing, smart buildings, agriculture, etc.

Which elements make up the IoT?

The Internet of Things is made up of a combination of electronic devices, network protocols and communication interfaces.
Among the devices, we can distinguish four different types:
    • Wearable technology: any object or item of clothing, such as a watch or pair of glasses, that includes sensors which help improve its functionality.
    • Quantifying devices for people’s activity: any device designed to be used by those who want to store and monitor data about their habits or lifestyle.
    • Smart homes: any device that allows you to control or remotely alter an object, or that contains motion sensors, identification systems or other measures of security.
    • Industrial devices: any device that allows you to turn physical variables (temperature, pressure, humidity etc.) into electrical signals.
Figure 1 : A graphic representation of the Internet of Things.

These devices can reach certain levels of intelligence. At the most basic level, we see devices which are simply able to identify themselves in a certain way (identity); then come devices that can define where they are (location). Further still, there are devices which can communicate the condition they are in (state), and those that can analyze their environment and carry out tasks based on certain criteria.

These intelligence levels translate into a series of capabilities:

  • Communication and Cooperation: being able to connect either to the internet and/or other devices, therefore being able to share data between themselves and establish communication with servers.
  • Addressability: the ability to be configured and reached from anywhere on the network.
  • Identification: the ability to be identified via technology such as RFID (Radio Frequency Identification), NFC (Near Field Communication), QR (quick response) code and more.
  • Localization: being able to know its own location at any moment.
  • Intervention: the ability to manipulate its environment.
With regard to protocols, we already know that in order to connect to the internet we need TCP/IP (Transmission Control Protocol / Internet Protocol). The first steps of the IoT were made using the fourth version (IPv4), but this brought an important limitation, since the number of addresses it could generate was limited. From 2011 onwards, the IPv6 communications protocol was deployed, which permits a vastly larger address space (2^128 addresses). This allowed the IoT to develop: according to Juniper Research, by 2021 there would be over 46 billion connected devices, sensors and actuators.
As well as the protocols, a connection interface is needed. On the one hand, there are wireless technologies such as WiFi and Bluetooth. On the other, we have wired connections, such as IEEE 802.3 Ethernet (which means you can set up a cabled connection between the IoT device and the internet), and GPRS/UMTS or NB-IoT, which use mobile networks to connect to the internet. The latter, in exchange for a usage cost, are usually used for devices where a low level of data consumption is expected, such as garage door opening systems or rangefinders in solar farms.
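
Whatever the transport (WiFi, Ethernet or NB-IoT), the intelligence levels above usually surface as a small telemetry payload. A minimal illustration in Python, with invented field names (no standard IoT schema is implied):

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical sensor reading reflecting the capability levels above:
# identity (device_id), location (lat/lon) and state (temperature_c).
@dataclass
class SensorReading:
    device_id: str        # identity: who am I?
    lat: float            # location: where am I?
    lon: float
    temperature_c: float  # state: what condition am I in?

reading = SensorReading("greenhouse-07", 40.4168, -3.7038, 21.5)

# Serialise to JSON, a common payload format for MQTT/HTTP transports
payload = json.dumps(asdict(reading))
print(payload)
```

On a constrained NB-IoT link, a payload this small is exactly the point: a few dozen bytes per report keeps data consumption (and cost) low.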

The curious relationship between IoT and toasters. A short history lesson…

In 1990, John Romkey and Simon Hacket, in response to a challenge launched by Interop, presented the first device connected to the internet: a toaster. You could use any computer connected to the internet to turn it on and off and choose the toasting time. The only human interaction needed was to put the bread in. The following year, of course, they added a small robotic arm to fully automate the process. Curiously, in 2001, another toaster became a protagonist in the history of the IoT, when Robin Southgate designed one capable of collecting weather forecasts online and “printing” them onto a slice of toast.

Figure 2 : The first connected device was a toaster.
Although Romkey and Hacket’s toaster is often referred to as the first IoT device, the true first actually came much earlier. In the 70’s, the Department of Computer Science at Carnegie Mellon connected a Coca-Cola machine to the department server via a series of microswitches, meaning that before “taking the walk” to the vending machine, you could check on your computer whether there was stock left and whether the bottles were at the right temperature (since it knew how long they had been stored for). Although this device wasn’t technically connected to the internet (since the internet was still in development), it certainly was a “connected device”.

Moving on from toasters and vending machines, the Auto-ID Center at the Massachusetts Institute of Technology (MIT), played a crucial role in developing the IoT, thanks to their work on Radio-Frequency Identification (RFID) and the new detection sensor technology they developed.

In 2010, thanks to the explosive growth of smartphones and tablets, and the falling price of hardware and communications, the number of connected devices per person exceeded 1 for the first time (1.84). It should be noted that there was not an even distribution at a global level.

The challenges and obstacles that the IoT faces

The rapid innovation that we see in this area brings together a diverse collection of different networks, designed for different and specific purposes. Therefore, one of the main challenges for the IoT is to define common standards that allow these various networks and sensors to work together.
On the other hand, each day brings new technological advances in miniaturization, with components becoming more powerful and efficient. However, there is something that slows this progress down: energy consumption, and in particular, battery autonomy. When you are talking about a connected device for personal use, such as a smartwatch, having to keep recharging it can be somewhat frustrating, but it’s not a huge issue. However, with devices located in remote places, it is vital that they work well on a single charge. In order to solve this problem, research is being done into devices that harness energy from their surroundings, for example, water level sensors that can recharge their batteries with solar energy.

In conclusion

The IoT significantly increases the amount of data available to process, but this data doesn’t become useful until it is collected, stored and understood. It is at this point that Big Data comes into play, with its ability to store and process data at a massive scale. Add to this the falling price and rising availability of connected devices, and the result will be an explosion of revolutionary applications that can create “smart cities“, help us use energy efficiently, lead to a more comfortable lifestyle where tasks are done for us, make medical diagnoses more precise and even offer us information from space.

The Internet of Things and Big Data are two different things, but one would not exist without the other. Together they are the true internet revolution.

Cyber Security Weekly Briefing 15–21 January

Telefónica Tech    21 January, 2022

Cyber-attack campaign against Ukrainian targets

The Microsoft Threat Intelligence Center team has been analysing the succession of cyberattacks against Ukrainian organisations since 13 January, which have affected at least 15 government institutions such as the Ministries of Foreign Affairs and Defence; according to investigators, this number could rise soon. As for the campaign itself, Microsoft warns that a new malware family called “WhisperGate” was used: malicious software aimed at destroying and deleting data on the victim’s device while posing as ransomware. “WhisperGate” consists of two executables: “stage1.exe”, which overwrites the hard disk’s Master Boot Record to display a ransom note whose characteristics indicate it is fake ransomware that provides no decryption key, and “stage2.exe”, which runs simultaneously and downloads malware that destroys data by overwriting files with static data. Journalist Kim Zetter has indicated that the entry vector used by the malicious actors would have been the exploitation of CVE-2021-32648 (CVSSv3 9.1) in OctoberCMS. For their part, Ukrainian cybersecurity agencies report that the actors also exploited the Log4Shell vulnerability and carried out DDoS attacks against their infrastructure. In addition, the US Cybersecurity and Infrastructure Security Agency (CISA) has issued a statement warning organizations about potential critical threats following the recent cyberattacks targeting public and private entities in Ukraine. Microsoft has indicated that it has not been possible to attribute the attacks to any specific threat actor, which is why it tracks this activity as DEV-0586. It should be noted that, as indicated by the Ukrainian authorities, given the escalation of tensions between the Ukrainian and Russian governments, this campaign is considered an attempt by Russia to sow chaos in Ukraine.

More info: https://www.microsoft.com/security/blog/2022/01/15/destructive-malware-targeting-ukrainian-organizations/

Flaw in Safari could reveal user data

Security researchers at FingerprintJS have revealed a serious flaw in Safari 15’s implementation of the IndexedDB API that could allow any website to track user activity on the Internet, potentially revealing the user’s identity. IndexedDB is a browser API designed to host significant amounts of client-side data, which follows the “same-origin” policy; a security mechanism that restricts how documents or scripts loaded from one source can interact with other resources. Researchers have discovered that in Safari 15 on macOS, and in all browsers on iOS and iPadOS 15, the IndexedDB API violates the same-origin policy. As a result, every time a website interacts with a database, a new (empty) database with the same name is created in all other active frames, tabs and windows within the same browser session, so other websites can see this information. FingerprintJS has created a proof of concept that can be tested from a Safari 15 or higher browser on Mac, iPhone or iPad. FingerprintJS also notes that they reported the bug to Apple on 28 November, but it has not yet been resolved.

All the details: https://fingerprintjs.com/blog/indexeddb-api-browser-vulnerability-safari-15/

Microsoft releases emergency updates for Windows

Following the discovery of a number of issues caused by the Windows updates issued with January’s Security Bulletin, Microsoft has released out-of-band (OOB) updates and emergency fixes for some versions of Windows 10 and Windows Server. Reports from system administrators indicate that, after deploying Microsoft’s latest patches, connection problems have been reported in L2TP VPN networks, domain controllers suffer from spontaneous reboots, Hyper-V no longer starts on Windows servers and there are problems accessing Windows Resilient File System (ReFS) volumes. The fixes affect a wide range of versions of Windows Server 2022, 2012 and 2008 as well as Windows 7, 10 and 11. According to Microsoft, all updates are available for download in the Microsoft Update Catalog and some of them can also be installed directly via Windows Update as optional updates. If it is not possible to deploy them, it is recommended to remove updates KB5009624, KB5009557, KB5009555, KB5009566 and KB5009543, although it should be noted that valid fixes for the latest vulnerabilities patched by Microsoft would also be removed.

More: https://docs.microsoft.com/en-us/windows/release-health/windows-message-center

Cisco security flaw allows attackers to gain root privileges

Cisco has released Cisco Redundancy Configuration Manager (RCM) version 21.25.4 for StarOS software, which fixes several security flaws. The most prominent vulnerability is identified as CVE-2022-20649 (CVSSv3 9.0), a critical flaw that allows unauthenticated attackers to execute remote code with root privileges on devices running vulnerable software. The source of the vulnerability is that debug mode has been improperly enabled for certain specific services. To exploit it, attackers do not need to be authenticated, but they do need access to the devices, so they would first have to perform detailed reconnaissance to discover which services are vulnerable. There is currently no evidence that the vulnerability is being exploited. In addition, Cisco has also patched a medium-severity information disclosure vulnerability, CVE-2022-20648 (CVSSv3 5.3).

Learn more: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-rcm-vuls-7cS3Nuq

Google fixes bugs in Chrome

Google has published a security advisory in which it fixes 26 vulnerabilities affecting its Chrome browser. A critical vulnerability stands out among the flaws: listed under identifier CVE-2022-0289, it was discovered on January 5th by the researcher Sergei Glazunov. The vulnerability resides in Google’s Safe Browsing service, which is responsible for alerting users when they are accessing a website that could carry an associated risk. If exploited, it could allow remote code execution. Most of the remaining vulnerabilities have been classified as high severity, with only five of medium risk. Google recommends updating to version 97.0.4692.99, where these flaws are fixed.

All the details: https://chromereleases.googleblog.com/2022/01/stable-channel-update-for-desktop_19.html

Deep Learning in Star Wars: May Computation Be With You

Santiago Morante Cendrero    20 January, 2022

Today we are going to talk about how Artificial Intelligence, and Deep Learning in particular, is being used in the filming of movies and series, achieving better results in special effects than any studio had ever achieved before. And is there any better way to prove it than with one of the best science fiction sagas, Star Wars? To paraphrase Master Yoda: “Read or do not read it. There is no try”.

Deepfake Facial Rejuvenation

Since the release of A New Hope in 1977 until now, a multitude of Star Wars films and series have been made, without following a continuous chronological order. This means that characters that were played when the actors were young have to be played many years later… by the same actors, who are no longer that young.

This is a problem that Hollywood has solved by using “classic” special effects, such as CGI, but the advance of Deep Learning has resulted in a curious fact, as fans with home computers have managed to match or improve the work of these studios.

One example is DeepFake technology. DeepFake is an umbrella term for neural network architectures trained to replace a face in an image with that of another person. Among the neural network architectures used are autoencoders and generative adversarial networks (GANs). Since 2018 this technology has developed rapidly, with websites, apps and open-source projects ready for development, lowering the barrier to entry for any user who wants to try it out.
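
The compress-then-reconstruct idea behind those autoencoders can be sketched in a few lines. This toy linear autoencoder has just four weights; real deepfake systems use deep convolutional encoders/decoders trained on faces (often a shared encoder with one decoder per face), so this only illustrates the principle:

```python
import random

random.seed(0)

# 2-D points that really live on a 1-D line (x, 2x), so a single
# bottleneck number can represent each point perfectly.
data = [(0.1 * i, 0.2 * i) for i in range(-10, 11)]

w = [random.random(), random.random()]  # encoder: 2-D point -> code z
v = [random.random(), random.random()]  # decoder: code z -> 2-D point
lr = 0.01

def loss():
    """Mean squared reconstruction error over the dataset."""
    total = 0.0
    for x1, x2 in data:
        z = w[0] * x1 + w[1] * x2        # encode
        r1, r2 = v[0] * z, v[1] * z      # decode
        total += (r1 - x1) ** 2 + (r2 - x2) ** 2
    return total / len(data)

start = loss()
for _ in range(300):                     # plain SGD on the error
    for x1, x2 in data:
        z = w[0] * x1 + w[1] * x2
        e1 = v[0] * z - x1               # reconstruction errors
        e2 = v[1] * z - x2
        gz = 2 * (e1 * v[0] + e2 * v[1])
        v = [v[0] - lr * 2 * e1 * z, v[1] - lr * 2 * e2 * z]
        w = [w[0] - lr * gz * x1, w[1] - lr * gz * x2]

print(loss() < start)  # True: reconstruction error has dropped
```

Swapping faces amounts to training decoders for two people on a shared code space, then decoding one person’s code with the other person’s decoder.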

And how does this relate to Star Wars? On December 18th, 2020, episode 8 of season 2 of The Mandalorian series was released, which included a scene with a “young” Luke Skywalker created by computer (the original actor, Mark Hamill, was 69 years old). Just 3 days later, the youtuber Shamook uploaded a video in which he compared the facial rejuvenation of Industrial Light&Magic (responsible for the special effects) with the one he had done himself using DeepFake.

As you have seen, the work of a single person, in 3 days, improved on the work of the special effects studio, which, in this case, had also used DeepFake in combination with other techniques. Moreover, Shamook did this using two open-source projects, DeepFaceLab and MachineVideoEditor.

The same author has made other substitutions in recent Star Wars films, such as that of Governor Tarkin in Rogue One (the original actor, Peter Cushing, died in 1994) or that of Han Solo in the film of the same name (where a new actor was hired instead of rejuvenating Harrison Ford), proving that the DeepFake technique generalises very well to other films.

These videos, which went viral, did not go unnoticed by Lucasfilm, who a few months later hired the youtuber as Senior Facial Capture Artist.

Outside of the Star Wars universe, Shamook has done face replacements in many other films, usually putting actors in films they have nothing to do with, with hilarious results.

Upscaling models to improve the quality of old videos

But rejuvenating faces is not the only use that Deep Learning can offer film studios. Another type of model, known as an upscaling model, is trained to improve the resolution of images and videos (and video games). This is useful when you want to remaster, for example, old films that were digitised at low resolution and do not lend themselves to easy upscaling.
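
For contrast with learned upscaling models, the naive baseline is easy to sketch: nearest-neighbour upscaling simply repeats each pixel, producing exactly the blocky result that super-resolution networks learn to improve on by inventing plausible detail. A pure-Python illustration (images as 2-D lists of pixel values):

```python
def upscale_nearest(image, factor=2):
    """Return a copy of `image` enlarged by `factor` in both dimensions,
    repeating each pixel -- no new detail is created."""
    out = []
    for row in image:
        # repeat each pixel horizontally...
        wide = [px for px in row for _ in range(factor)]
        # ...then repeat the whole row vertically (fresh copies)
        out.extend([list(wide) for _ in range(factor)])
    return out

print(upscale_nearest([[1, 2], [3, 4]]))
# [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

A trained upscaler is judged precisely by how much sharper its output looks than this pixel-repetition baseline at the same target resolution.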

Fans, again, have taken the lead and are improving the quality of old Star Wars video game trailers using these technologies.

Some have even dared to restore deleted scenes from the first films, and provide tutorials so that those who have the courage can continue to improve their results.

In short, we can see that Deep Learning models are changing the way many businesses are developed. The film industry has a wide range of options to improve its processes using Machine Learning that will allow us to enjoy increasingly realistic and spectacular effects.

Note: you can see the process of Luke Skywalker’s rejuvenation in The Mandalorian in episode 2 of season 2 (“Making of the Season 2 Finale”) of the docuseries “Disney Gallery: Star Wars: The Mandalorian”, available on Disney+.

Cyber Security Weekly Briefing 8–14 January

Telefónica Tech    14 January, 2022

Microsoft security bulletin

Microsoft has published its January security bulletin in which it has fixed a total of 97 bugs, including six 0-day vulnerabilities and nine bugs classified as critical. Regarding the 0-days, no active exploitation of these has been detected, but it should be noted that several of them have public proofs of concept, so it is likely that they will be exploited in the short term. Regarding the security flaws classified as critical, it is worth highlighting CVE-2022-21907 (CVSS 9.8), which affects the latest versions of Windows in its desktop and server versions. This is a vulnerability in the HTTP protocol stack, the exploitation of which would result in remote code execution and which has been labelled as “wormable”. The other flaw to note is another remote code execution in this case in Microsoft Office (CVE-2022-21840 CVSS 8.8), patched for Windows versions, but not yet for macOS devices. Similarly to what happened with the 0-days, according to Microsoft, no exploits have been detected for these two vulnerabilities either.

More info: https://msrc.microsoft.com/update-guide/releaseNote/2022-Jan

New JNDI vulnerability in H2 database console

Researchers at JFrog have discovered a critical unauthenticated remote code execution vulnerability in the H2 database console. The vulnerability shares its origin with the Log4Shell (JNDI remote class loading) vulnerability and has been assigned the identifier CVE-2021-42392. H2 is a popular open source Java SQL database widely used in various projects. Despite being a critical vulnerability and sharing features with Log4Shell, the researchers indicate that its impact is minor for several reasons. Firstly, this flaw has a direct impact because the server that processes the initial request is the same server that is affected by the flaw, making it easier to detect vulnerable servers. Secondly, the default configuration of H2 is secure, unlike with Log4Shell where default configurations were vulnerable. And finally, many vendors use the H2 database but not the console, so while there are vectors to exploit the flaw beyond the console, these other vectors are context-dependent and less likely to be exposed to remote attacks. Despite attributing less risk to this new flaw than to Log4Shell, the researchers warn that for anyone running an H2 console exposed to the LAN, the flaw is critical and they should upgrade to version 2.0.206 as soon as possible. The firm has also shared guidance for network administrators to check if they are vulnerable to the new flaw.

All the details: https://jfrog.com/blog/the-jndi-strikes-back-unauthenticated-rce-in-h2-database-console/

Five new URL parsing confusion flaws

Researchers at Team82 and Snyk have published a research paper studying in depth how different libraries parse URLs, and how these differences can be exploited by attackers through URL parsing confusion bugs. They analysed a total of 16 different URL (Uniform Resource Locator) parsing libraries and detected five kinds of inconsistencies present in some of them, which could be exploited to cause denial-of-service conditions, information exposure or even, under certain circumstances, remote code execution. The five inconsistencies observed are: scheme confusion, slash confusion, backslash confusion, URL-encoded data confusion and scheme mixup. In addition, they report eight vulnerabilities that directly affect different frameworks or even programming languages, all of which have already been patched except in some unsupported versions of Flask: Flask-Security (Python, CVE-2021-23385), Flask-Security-Too (Python, CVE-2021-32618), Flask-User (Python, CVE-2021-23401), Flask-Unchained (Python, CVE-2021-23393), Belledonne’s SIP Stack (C, CVE-2021-33056), Video.js (JavaScript, CVE-2021-23414), Nagios XI (PHP, CVE-2021-37352) and Clearance (Ruby, CVE-2021-23435). In their study, they stress the relevance of this type of error, using Log4Shell as an example: the bypass of Apache’s initial fix was possible because two different URL parsers take part in the JNDI lookup process, each parsing in a different way.
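
One of the five inconsistencies, backslash confusion, is easy to demonstrate with Python’s standard library (the hostnames below are invented). WHATWG-style parsers, as used by browsers, treat “\” like “/” and end the authority section there, while Python’s urlsplit does not, so the same string names two different hosts depending on who parses it:

```python
from urllib.parse import urlsplit

url = "https://good.example\\@evil.example/path"

# Python keeps everything before "/path" as the authority, and the
# text after the last "@" becomes the host:
print(urlsplit(url).hostname)   # evil.example

# A WHATWG-style parser would stop the authority at "\", seeing host
# good.example and path "/@evil.example" -- so an allow-list check done
# with one parser can be bypassed when another parser fetches the URL.
```

This is the general shape of the bugs in the paper: validation and fetching performed by two parsers that disagree.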

More: https://claroty.com/2022/01/10/blog-research-exploiting-url-parsing-confusion/
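A toy demonstration of two of these confusion classes, using only Python's standard `urllib.parse` (the specific vulnerable libraries from the report are not reproduced here; this is a generic sketch of the failure mode, not the researchers' code):

```python
from urllib.parse import urlparse

# Slash confusion: an extra slash after the scheme leaves the netloc
# empty, so a validator keyed on the parsed hostname sees nothing to
# block, while a more lenient parser or client may still resolve a host
# from the path.
u = urlparse("http:///evil.com/payload")
print(repr(u.netloc))  # '' -> host-based allow-list checks pass vacuously
print(repr(u.path))    # '/evil.com/payload'

# Scheme confusion: with no scheme at all, everything lands in `path`,
# so hostname-based filtering is bypassed entirely.
v = urlparse("evil.com/payload")
print(repr(v.scheme), repr(v.netloc), repr(v.path))  # '' '' 'evil.com/payload'
```

The danger the paper describes arises when the component that *validates* a URL and the component that *fetches* it disagree on results like these.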

MuddyWater: links to Iran and technical details

The Cyber National Mission Force (CNMF) of US Cyber Command has published a note linking the APT known as MuddyWater to Iran’s Ministry of Intelligence and Security (MOIS) and detailing some technical aspects associated with the group. MuddyWater was first identified in 2017, with targets located primarily in the Middle East, Europe and North America, in the telecommunications, government and oil sectors. The release identifies several open source tools used by this malicious actor, including PowGoop variants, samples of the Mori backdoor, and DLL side-loading to trick legitimate programmes into executing malware.

Learn more: https://www.cybercom.mil/Media/News/Article/2897570/iranian-intel-cyber-suite-of-malware-uses-open-source-tools/

0-day vulnerabilities detected in AWS CloudFormation and AWS Glue

Security researchers at Orca Security have detected two 0-day vulnerabilities in different Amazon Web Services (AWS) services. The first flaw, in the AWS CloudFormation service, was an XXE (XML External Entity) vulnerability that allowed threat actors to disclose confidential files located on the vulnerable service machine, as well as credentials for internal AWS infrastructure services. The second vulnerability affected the AWS Glue service and stemmed from an exploitable feature that allowed attackers to obtain the credentials needed to access the internal service’s API and escalate to administrator permissions. An AWS spokesperson stated that no customer data was affected by either vulnerability. It should be noted that both vulnerabilities were fixed by the AWS security team after being reported by the researchers.

All the details: https://orca.security/resources/blog/aws-glue-vulnerability/
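XXE flaws in general arise when an XML parser resolves entity declarations supplied by the attacker. A common generic mitigation is to refuse any untrusted document that carries a DTD at all before parsing. The sketch below is a hypothetical illustration of that idea using Python's standard library; it is not Orca's proof of concept nor AWS's fix.

```python
import xml.etree.ElementTree as ET

def parse_untrusted_xml(xml_text: str) -> ET.Element:
    """Reject documents declaring a DTD or entities (where XXE payloads
    live), then parse with the stdlib parser, which does not fetch
    external entities on its own."""
    if "<!DOCTYPE" in xml_text or "<!ENTITY" in xml_text:
        raise ValueError("DTD/entity declarations are not allowed")
    return ET.fromstring(xml_text)

# A benign document parses normally...
print(parse_untrusted_xml("<config><region>eu-west-1</region></config>").tag)
# ...while a classic XXE payload such as
#   <!DOCTYPE r [<!ENTITY x SYSTEM "file:///etc/passwd">]><r>&x;</r>
# is rejected before the parser ever sees it.
```

In production code a hardened parser (for example one configured to forbid DTDs outright) is preferable to string matching, but the principle is the same: external entity resolution must never be reachable from untrusted input.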

The risks of not having controlled exposure to information (I)

Susana Alwasity    12 January, 2022

Welcome to the digital era. This is an era in which we embrace technology and everything it offers us in almost every sphere of our lives. We carry a large part of our lives on our mobile phones, from our memories to online banking, and shop at the click of a button; technology makes our daily tasks so much easier.

Most of these services come to us for free, but we know that everything has a price: we benefit from this technology at the expense of our privacy.

Nowadays it is difficult for a person not to leave a trail of information in cyberspace. The advent of the internet and the euphoria of social networking have led to the emergence of new privacy risks. Often, when analysing the risks of exposing information on the internet, we may think “nobody is interested in me”, and this idea is a big mistake that leads us to neglect the control of the information exposed about us.

The vast majority of financial data thefts that result in monetary losses are perpetrated against the end user. That is, the ordinary customer, who could be any one of us.

The risks associated with our digital footprint

The trail we leave behind in cyberspace is known as a digital footprint. This footprint is what other users can know about an individual, based on the information that we ourselves have left on the network, whether by registering on a website, creating a profile on a social network, or through the posts we make on those networks.

The more information that is available about us and the better the profiling capability, the more sophisticated the fraud will become, giving the potential offender more tools to carry out more targeted attacks.

With our data exposed, we may be susceptible to identity theft, data leaks, extortion, scams or online fraud, not to mention potential physical security risks. We may also receive more spam and be exposed to malicious emails or targeted phishing, which in turn can lead to malware infections on our devices.

On the other hand, the exposure of an email address reveals which services are associated with it, i.e., where we have created a profile or account, and may be linked to leaked data such as IP addresses, geolocations and passwords.

On a physical level, the exposure of addresses and access to locations poses a risk. In some Latin American countries, in the case of a person of special interest due to his or her position or socio-economic status, it can even lead to risks of kidnapping or extortion. Similarly, it provides malicious actors with physical addresses to which to direct mail frauds, or additional information for identity theft.

Similarly, impersonation allows criminals to contract external services such as insurance, credit or gambling accounts. Against organisations, it enables online fraud such as CEO fraud or BEC fraud (Business Email Compromise), where cybercriminals impersonate a senior executive and send an email with the aim of obtaining unauthorised transfers or confidential information.

Finally, there may also be reputational risks, for example due to comments on social media that are unfortunate or inappropriate for some reason, such as ideological ones. These cases have a greater impact on profiles of great responsibility and influence (managers, senior officials, and public figures of special relevance due to their position or profession) but they can also affect any citizen at the work or professional level.

Do you want to know more? Next week we will tell you all about how to minimise the risks of our digital footprint.

Artificial Intelligence 2022: myths and realities

Carlos Martínez Miguel    11 January, 2022

We are only in the early stages of AI development, but its impact is already huge

When making predictions in the technological field, it is always advisable to start by clarifying some basic concepts. Doing so prevents misinterpretation from leading to exorbitant expectations that will undoubtedly be followed by profound disappointment.

In the case of Artificial Intelligence (AI), it is key to distinguish between its three fundamental types according to their capability:

  • Artificial Narrow Intelligence (ANI or Applied AI): this focuses on solving specific problems.  For example, predicting when a machine is going to stop working in order to anticipate its failure and avoid it.

  • Artificial General Intelligence (AGI): this is the one that is comparable to human intelligence in all aspects.  It would be an artificial intelligence that would have the same capabilities as a human being.

  • Artificial Super Intelligence (ASI): an artificial intelligence that is superior to human intelligence in all aspects.

Today we are in the era of Applied AI, which has made great progress in recent years thanks especially to deep learning. However, AGI is still at an early stage of development and expert predictions suggest that it will not become a reality until at least 2040 or even decades later, the biggest obstacle to its development being the lack of knowledge we still have about the human brain. Finally, ASI can still be considered “science fiction”. Therefore, to the disappointment of dystopia-loving readers, the possible arrival of super-intelligent robots with the ability to control and subjugate the human race is still a long way off.

AI will play a major role in the transformation of all economic sectors by 2022

Fortunately, the age of Applied AI has many more benefits than drawbacks and is enabling a very positive transformation of activity in major economic sectors.

  • For example, the tourism sector, perhaps the sector most affected by the pandemic, is taking advantage of AI to reinvent itself. This reinvention is based on a deeper understanding of the needs and interests of visitors, thus being able to personalise the services on offer in order to attract them and build loyalty. The use of multiple data sources (mobility, card payments, navigation, etc.) allows for the development of advanced analytical models that can predict demand and adapt service capacity dynamically and efficiently.
  • In the mobility sector, by 2022, we will see a consolidation of the use of AI models, powered by data from connected vehicles and other sources, to optimise routes, maximise road safety and minimise environmental impact. Smart logistics will continue to accelerate, spurred by the unstoppable growth of e-commerce, including trials of autonomous delivery vehicles and the consolidation of end-to-end asset traceability, thanks to IoT technologies.
  • In retail, the need to develop a customer experience that seamlessly connects the physical and online worlds will continue to drive the adoption of AI. Models will be developed to maximise the conversion of customer interactions into sales, combining multiple data sources from both worlds.
  • Finally, the industrial sector will undoubtedly be one of the most advanced in its transformation. The massive sensorisation of factories and their connection in minimum latency environments thanks to 5G private networks will be key in the deployment of AI use cases. Predictive maintenance, quality optimisation, minimisation of waste and residues, movement of materials with automated guided vehicles (AGVs), etc. are just a few examples. 

In Europe, and in Spain in particular, recovery funds will accelerate mass adoption of AI

In Europe, and especially in Spain, this transformation will be accelerated with the arrival of funds from the “Recovery, Transformation and Resilience Plan” approved by the EU. A significant part of these funds is aimed precisely at boosting the adoption of AI in all areas of economic activity.

These funds will contribute to the financing of projects for the adoption of Big Data and AI infrastructures, the development of use cases, the implementation of data governance models, training and capacity building in this field, etc.

In addition, these funds will enable SMEs to start using these technologies, thanks to the Digital Kit programme, which includes modules oriented towards intelligence and analytics.

2022 will undoubtedly be an exciting year in which we will continue to build realities and debunk myths around AI.

The digitisation of predictive technology: digital twins

Antonio Ramírez    10 January, 2022

I still remember talking to my colleague Miguel Mateo after his presentation at the Telefónica Tech stand at MWC21 (Mobile World Congress 2021) about how the union of Predictive Algorithmics and 5G has led to an unprecedented predictive maintenance revolution.

We both agreed that the immediate future of this new 5G-plus-predictive technology lay in digital twins and their application to many sectors of the economy and industry. And yet, what they can bring to those industries is still largely unknown.

Digital twins

In a simple and broadly understandable way, a digital twin has the ability to interpret information captured in the physical world and display it to humans in a graphical format, enabling them to do two things.

If you imagine the engines of a factory assembly line or those of a lift bridge, the entire piping network of a warship, or the rotor of a wind turbine, to give a few examples, you will understand what I mean by physical world.

The first thing a digital twin brings, apart from a digital image of the infrastructure and/or asset it represents, is the ability to see the status of that infrastructure in real time, showing you the important and/or critical information about all your assets in a simple way. At the same time, it can make decisions, activating internal processes, triggering alarms or other actions depending on the information collected, thanks to the machine learning algorithms it incorporates.

A tool for decision-making and cost savings

The second contribution, and in my opinion just as important as the first, is the ability to draw up different scenarios, as the user wishes, so that they can analyse them and make decisions based on what they see.

You can imagine the benefits a factory operations manager can get from being able to build performance and failure prediction scenarios for the assembly lines.

Or how much better the investment made every year in the maintenance of national infrastructures can be managed if engineers can see in their digital twin the current state of roads, tunnels, bridges, railways, ports and predict when any of them might have problems, depending on the scenarios under study.

These two scenarios are a mere sample, but now think of other sectors and industries that need to monitor the state of their infrastructure or assets and get the most out of them for optimal performance and better productivity and competitiveness.

Just to give you a quick figure: the estimated growth in the implementation of digital twins over the next 5 years is 41.5% (source: Researchdive.com).

But how does a digital twin work?

If you are one of those who are in doubt about which technologies support digital twins, think of 5G technology, IIoT (Industrial Internet of Things) platforms and predictive technology and you will have a clear idea of what they are.

The way it works, although it may seem simple to you, involves the work of many decades and the knowledge of many experts.

First you must monitor the assets and/or infrastructure you need a digital twin of, and for that you need to install sensors (pressure, vibration, temperature, oil, noise, or others).

These sensors collect information in real time and send it wirelessly, or via Edge Computing technology, to a cloud IIoT platform. For this we need the best possible communications, which is why 5G has become essential to digital twins and the benefits they deliver.

Once the information is on the IIoT platform, it is interpreted, analysed and compared with other databases. This is done using Big Data and predictive machine learning algorithms created by predictive analysts and data scientists, customised for the client and/or industry.

The final part, but just as important, is to represent it graphically and link it with AI (Artificial Intelligence) processes for decision making and/or scenario generation.
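The pipeline just described (sensors, IIoT platform, predictive models, alarms) can be sketched in a few lines. The `PumpTwin` class below is purely hypothetical, invented for illustration: it mirrors a sensor stream and raises an alarm when a reading drifts far from a rolling baseline, a stand-in for the far richer ML models a real IIoT platform would run.

```python
from collections import deque
from statistics import mean, pstdev

class PumpTwin:
    """Toy digital twin: ingests sensor readings and flags anomalies
    against a rolling statistical baseline (illustrative stand-in for
    a real predictive-maintenance model)."""

    def __init__(self, window: int = 20, sigmas: float = 3.0):
        self.history = deque(maxlen=window)  # recent readings
        self.sigmas = sigmas                 # alarm threshold in std devs
        self.alarms = []

    def ingest(self, reading: float) -> None:
        if len(self.history) >= 5:  # wait for a minimal baseline
            mu, sd = mean(self.history), pstdev(self.history)
            if sd > 0 and abs(reading - mu) > self.sigmas * sd:
                self.alarms.append(reading)  # trigger alarm / internal process
        self.history.append(reading)

# Stable temperature readings, then a spike the twin should flag.
twin = PumpTwin()
for temp in [70.1, 70.3, 69.9, 70.2, 70.0, 70.1, 95.0]:
    twin.ingest(temp)
print(twin.alarms)  # [95.0]
```

In a real deployment the `ingest` step would sit behind the IIoT platform's data pipeline, and the alarm would feed the graphical representation and decision-making processes the article describes.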

How do I know if I need a digital twin?

If you ask me how to decide whether or not you would benefit from a digital twin in your organisation, let me ask you a few questions that will help you think and come up with an answer.

  • Does creating scenarios of your business help you improve your strategies?
  • Does predicting what an asset will need, and when (maintenance or replacement), help you invest better?
  • Does having all the information related to your asset regardless of the source benefit you?
  • How long do you think it will take your competitor to integrate this technology?
  • Do you have another effective strategy to digitalise your infrastructure?
  • Are there experts who can help you?

As I don’t want to be a pain, I’ll stop asking you questions here, but I’m sure a few more are starting to cross your mind.

Digital Twins are a reality as true as mobile phones, the Internet, aeroplanes and computers were in their time.

Cyber Security Weekly Briefing 1–7 January

Telefónica Tech    7 January, 2022

Mail delivery failure on Microsoft Exchange on-premises servers

On 2 January, Microsoft released a workaround to fix a bug that interrupted email delivery on Microsoft Exchange on-premises servers. The bug is a “year 2022” flaw in the FIP-FS anti-malware scanning engine, a tool enabled since 2013 on Exchange servers to protect users from malicious mail. Security researcher Joseph Roosen said the cause was that Microsoft used a signed int32 variable to store the date, a type whose maximum value is 2,147,483,647. Dates in 2022 have a minimum value of 2,201,010,001, exceeding the maximum that can be stored, which causes the scanning engine to fail so that mail cannot be sent. The emergency patch requires user intervention (it is a script that must be executed following certain instructions) and Microsoft warns that the process may take some time. The firm is also working on an update that will solve the problem automatically.

More info: https://techcommunity.microsoft.com/t5/exchange-team-blog/email-stuck-in-exchange-on-premises-transport-queues/ba-p/3049447
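The overflow is easy to reproduce. Assuming the widely reported yymmddNNNN layout of the FIP-FS signature version number (the `fipfs_version` helper below is an illustrative reconstruction, not Microsoft's code), any date in 2022 exceeds the signed 32-bit maximum:

```python
INT32_MAX = 2**31 - 1  # 2,147,483,647: largest value a signed int32 holds

def fipfs_version(yy: int, mm: int, dd: int, build: int) -> int:
    """Pack a signature date into the reported yymmddNNNN numeric layout."""
    return int(f"{yy:02d}{mm:02d}{dd:02d}{build:04d}")

print(fipfs_version(21, 12, 31, 1))            # 2112310001 -> fits in int32
print(fipfs_version(22, 1, 1, 1))              # 2201010001 -> overflows
print(fipfs_version(22, 1, 1, 1) > INT32_MAX)  # True
```

Any 2021 date packs to at most 2,112,319,999, which fits, while the first 2022 signature already exceeds the limit, which is why the engine began failing at the turn of the year.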

Uber security flaw allows emails to be sent from its servers

Security researcher Seif Elsallamy has discovered a vulnerability in Uber’s email system that could allow a threat actor to send emails impersonating the company. The vulnerability lies in one of Uber’s email endpoints, which has been publicly exposed and would allow a third party to inject HTML code and send emails pretending to be Uber. The researcher sent the digital outlet Bleeping Computer an email from the address [email protected], containing a form asking the user to confirm their credit card details, information that would then be sent to a server controlled by Elsallamy. The email did not land in the spam folder because it came from Uber’s servers. The researcher reported the vulnerability to Uber through HackerOne’s bounty programme, but it was rejected on the grounds that exploiting it required social engineering. It is not the first time this problem has been detected: researchers Soufiane el Habti and Shiva Maharaj reported it some time ago. The researcher also notes that, due to the data leak Uber suffered in 2016, there are 57 million users at risk of receiving emails pretending to come from Uber. Bleeping Computer has contacted Uber but has not yet received a response.

Full details: https://www.bleepingcomputer.com/news/security/uber-ignores-vulnerability-that-lets-you-send-any-email-from-ubercom/

Out-of-band update for Windows Server bugs

Microsoft released an out-of-band update yesterday to resolve some bugs reported by Windows Server users. Some users of Windows Server 2019 and 2012 R2 were reportedly encountering excessive slowness or terminals showing a black screen. In some cases there could also be failures when accessing servers via remote desktop. The patch for these versions is not available in Windows Update and will not be installed automatically; instead, affected users should follow the instructions provided by Microsoft in its release. All other versions of Windows Server are expected to receive similar patches in the coming days.

Learn more: https://docs.microsoft.com/en-us/windows/release-health/windows-message-center#2772

Evasive techniques of Zloader malware

Researchers at Check Point Research have analysed the new evasive techniques of the Zloader banking malware in a campaign they attribute to the MalSmoke group, which they say has been running since November 2021. The infection begins with the installation of Atera, a legitimate IT remote monitoring and management tool, which is used to gain initial access stealthily. Besides using a legitimate tool, the actors employ malicious DLLs with a valid Microsoft signature to evade detection. To do so, they exploit CVE-2013-3900, a vulnerability known to Microsoft since 2013, whose patch is disabled by default and which allows an attacker to modify signed executables, adding malicious code without invalidating the digital signature.

Full information: https://research.checkpoint.com/2022/can-you-trust-a-files-digital-signature-new-zloader-campaign-exploits-microsofts-signature-verification-putting-users-at-risk/

Elephant Beetle: a group with financial motivations

Sygnia’s incident response team has published an article presenting its analysis of Elephant Beetle, a financially motivated group attacking multiple companies in Latin America, which it has been tracking for two years. Also tracked as TG2003, this group spends long periods analysing its victim and its transfer systems, going unnoticed by security systems by mimicking legitimate traffic and using an arsenal of more than 80 tools of its own. Elephant Beetle’s preferred entry vector is leveraging legitimate Java applications deployed on Linux systems. Sygnia highlights the exploitation of old, unpatched vulnerabilities such as CVE-2017-1000486 (PrimeTek PrimeFaces), CVE-2015-7450 (WebSphere), CVE-2010-5326 and EDB-ID-24963 (SAP NetWeaver). Once the victim has been studied, the group creates fraudulent transactions of small amounts that mimic the company’s legitimate movements. Although attribution is not yet clear, Sygnia explains that, after analysing multiple incidents involving Elephant Beetle in which it found patterns such as the word “ELEPHANTE” and multiple C2s located in Mexico, the group could have a connection with Spanish-speaking countries, more specifically Latin America, with Mexico as the possible area of origin.

More: https://f.hubspotusercontent30.net/hubfs/8776530/Sygnia-%20Elephant%20Beetle_Jan2022.pdf