New FARO Version: Create Your Own Plugin and Contribute to Its Evolution

Juan Elosua Tomé    4 March, 2021

We are pleased to announce the latest version of FARO, our open-source tool for detecting sensitive information, which we will briefly introduce in the following post.

Nowadays, any organisation can generate and manage a considerable amount of documentation directly related to its daily activity. It is common for a significant part of these documents to be of a strategic or confidential nature: contracts, agreements, invoices, profit and loss accounts, budgets, employees’ personal data, etc. These are all examples of documentation that, if poorly guarded, can pose a major reputational and security problem for the organisation.

From our cyber security R&D centre TEGRA in Galicia, we have developed a tool called FARO, capable of detecting and classifying sensitive information in different types of documents such as: office, text, zipped files, html, emails, etc. In addition, thanks to its OCR technology, it can also detect information in images or scanned documents. All this to contribute to greater control of the sensitive data of our organisation.

Figure 1 – Visual example of detected entities and FARO results

In this new version we continue to add new features and improvements, among which we would like to highlight the plugin system with multilingual support. It is now possible to create simple plugins so that FARO can detect new entities with sensitive information.

“FARO is a tool open to the community and invites anyone interested in its development or evolution to access the repository and leave their feedback or any other input that may contribute to its future development”.

How to Use FARO

To use FARO (after cloning it from Github and installing its dependencies), just launch it with the appropriate options.

FARO will generate an output folder in the root directory of the project with two output files (a minimal sketch for reading them follows the list below).

  • output/scan.$CURRENT_TIME.csv: execution summary file with the final score of each document and the number of occurrences of each entity type.
  • output/scan.$CURRENT_TIME.entity: JSON-format detail file with a list of the entities detected for each source document.
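As a minimal illustration (a sketch under assumptions: the timestamped file names below are placeholders and the exact CSV columns and JSON layout may vary between FARO versions), both output files can be inspected with a few lines of Python:

import csv
import json

# Hypothetical file names following the scan.$CURRENT_TIME.* scheme described above.
summary_path = "output/scan.20210304-100000.csv"
entities_path = "output/scan.20210304-100000.entity"

# Summary file: final score per document and number of occurrences of each entity type.
with open(summary_path, newline="") as f:
    for row in csv.DictReader(f):
        print(row)

# Detail file: entities detected for each source document (assumed here to be one
# JSON object per line; adjust if your FARO version writes a single JSON document).
with open(entities_path) as f:
    for line in f:
        if line.strip():
            print(json.loads(line))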

Multilingual Plugin System

Thanks to FARO’s new modular architecture and plugin system, it is possible to detect new sensitive information without any in-depth knowledge of the tool’s inner workings. It will only be necessary to focus on the definition of patterns for the detection of sensitive information and to incorporate configuration for validation and context.

Two types of patterns have been defined for each plugin. The first is used when the entity to be located is very specific and can therefore be detected with very high accuracy, generating a low false-positive rate.

"BITCOIN_P2PKH_P2SH_ADDRESS": r"[13][a-km-zA-HJ-NP-Z0-9]{26,33}"

Pattern 1 – Example of detection pattern for BTC addresses

The second pattern, however, is more generalist and could generate a higher number of false positives. Within each FARO plugin, a context can be added to increase the accuracy of the detection in order to avoid these false positives. This context is based on dictionaries of words that are searched before or after the potential entities detected, in order to confirm the decision.

"MOVIL_ESPAÑA": r"[67](\s+|-\.)?([0-9](\s+|-|\.)?){8}"

Pattern 2 – Example of mobile phone number detection pattern
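The following minimal Python sketch is not FARO's actual plugin code, just an illustration of the idea: a generalist pattern such as Pattern 2 above is only confirmed as an entity if a word from a (hypothetical) context dictionary appears near the candidate match:

import re

# Generalist pattern for Spanish mobile numbers, as in Pattern 2 above.
MOBILE_PATTERN = re.compile(r"[67](\s+|-|\.)?([0-9](\s+|-|\.)?){8}")

# Hypothetical context dictionary: words that, when found near a candidate,
# confirm it as a phone number and reduce false positives.
CONTEXT_WORDS = {"tel", "telefono", "teléfono", "móvil", "movil", "phone", "mobile"}

def find_mobiles(text, window=40):
    """Return candidate matches that have a context word within `window` characters."""
    confirmed = []
    for match in MOBILE_PATTERN.finditer(text):
        start, end = match.span()
        context = text[max(0, start - window):end + window].lower()
        if any(word in context for word in CONTEXT_WORDS):
            confirmed.append(match.group())
    return confirmed

print(find_mobiles("Contacto: teléfono 612 345 678, oficina de Madrid."))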

In addition, FARO plugins allow you to add automatic validation where one exists (for example, the check digit of a bank account number), considerably increasing the certainty that the match is the information we want to detect.
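As an illustration of this kind of validation (a generic sketch, not FARO's implementation), a candidate IBAN detected by a pattern can be confirmed with the standard mod-97 check before being reported as a sensitive entity:

from string import ascii_uppercase

def valid_iban(candidate):
    """Standard IBAN mod-97 check: move the first four characters to the end,
    map letters to numbers (A=10 ... Z=35) and verify the result mod 97 == 1."""
    iban = candidate.replace(" ", "").upper()
    rearranged = iban[4:] + iban[:4]
    digits = "".join(str(ascii_uppercase.index(ch) + 10) if ch.isalpha() else ch
                     for ch in rearranged)
    return int(digits) % 97 == 1

print(valid_iban("GB82 WEST 1234 5698 7654 32"))  # True: the check digits are consistent
print(valid_iban("GB82 WEST 1234 5698 7654 33"))  # False: one altered digit breaks the checksum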

Finally, each plugin can be defined for multiple languages by customising the context and the pattern to be localised according to the original language of the document.

In the wiki of the project you will find all the technical information for the development of plugins. We encourage all of you to participate, either by contributing new plugins to improve the tool or by testing FARO in your organisation and sending us feedback via Github.


TEGRA cybersecurity centre is part of the joint research unit in cyber security IRMAS (Information Rights Management Advanced Systems), which is co-financed by the European Union, within the framework of the Galicia ERDF Operational Programme 2014-2020, to promote technological development, innovation and quality research.

Telefónica Tech’s Cybersecurity Unit Becomes Part of The European Commission’s Cybersecurity Atlas

Innovation and Laboratory Area in ElevenPaths    2 March, 2021

Telefónica Tech’s Innovation and Laboratory Area in cyber security has been included as part of the European Commission’s Cybersecurity Atlas, a knowledge management platform that maps, classifies, visualises and analyses information on cyber security expertise in Europe. It aims to foster collaboration between European cyber security experts in support of the EU’s digital strategy.

This atlas and the EU cyber security taxonomy support Regulation COM / 2018/630, which calls for the establishment of a European Centre of Industrial, Technological and Research Competence in Cybersecurity and a Network of National Coordination Centres.

Objectives

Among the objectives of this platform are:

  • To facilitate the establishment of a cyber security research community at a European level.
  • To help identify with whom to collaborate on current and future programmes and projects.
  • To map Europe’s competences in the different cyber security domains.
  • To act as a knowledge management tool for the future European Centre of Competence in Cybersecurity.
  • To increase the visibility of expert stakeholders within the cyber security community.
  • To improve the coordination of European R&D efforts in cyber security.
  • To contribute to shaping the strategic orientations of EU programmes funding cyber security research, technology and capabilities.
  • To provide relevant information for cyber security policymaking in Europe.
  • To raise awareness in the cyber security community.
  • To support the European Commission in the management of work programmes and allocation of funds.

Benefits

The main benefits for organisations and researchers that make up this Cybersecurity Atlas include:

  • The opportunity to expand the research network and to get in touch with relevant peers across Europe.
  • Enhanced visibility of the organisation within the EU and the cyber security community, enabling participation in EU policies, programmes, events and sectoral activities.

The Hologram Concert – How AI is helping to keep music alive

Patrick Buckley    26 February, 2021

When Whitney Houston passed away in 2012, the world was shocked by the sudden and tragic news of her death. Fans gathered around the Beverly Hills hotel in Los Angeles not just to mourn the loss of a highly respected and talented artist, but also to come to terms with the fact that one of the greatest performers of the moment would never return to the stage. 

That was, until the powers of Artificial Intelligence (AI) and Computer Generated Imagery (CGI) came together to bring the star back to life for a revolutionary ‘hologram concert’ tour. This tour, ‘An Evening with Whitney Houston’, has been shown across the world from London to Los Angeles.

A promotional video for the ‘An Evening with Whitney’ Hologram Tour

Houston is not the only artist to have been digitally brought back to life. A Michael Jackson hologram shocked the world when it was featured at the Billboard Music Awards in 2014. Moreover, other late singers such as Amy Winehouse, Tupac and Roy Orbison have also been the subjects of Hologram recreations.

So how does it work?

It has not been revealed exactly how this so-called hologram works; after all, the best magicians never reveal their secrets. It is, however, likely that AI plays a key role in creating and enhancing the images we see projected.

The process starts by modelling a sequence of images based on the physical features and movements of a real-life human being. Of course, as Whitney herself is not available, a body double matching the star’s fundamental physical features is used.

These images are then digitally enhanced using Computer Generated Imagery (CGI). Here, AI and Machine Learning technologies play a key role, efficiently enhancing the characteristics of the relayed figure to make the hologram seem more like Whitney herself.

Before the age of AI, this enhancement would have had to be done manually, frame by frame. For a two-hour set, this would have been an almost impossible task, taking many years to produce. Now, parallel simulations combined with Machine Learning algorithms learn to recognise the common body movements of the character. These movements are then relayed throughout the set, helping the ‘fake’ Whitney to express the body language and subtle expressions of the star herself.

This digitally enhanced sequence is then projected by a 4K laser onto a flat surface to create the hologram effect that we see. According to its scientific definition, the end result is not actually a hologram, as it does not rely on the reflection of an image through a transparent medium. Rather, it is a two-dimensional projection which appears three-dimensional thanks to the high quality of the digitally enhanced image.

Does the Idea really have legs?

Thanks to high-quality digital projections and sophisticated CGI technology, so-called ‘hologram concerts’ are becoming increasingly viable. The very fact that a worldwide tour has been organised is a testament to the technological advancements that AI and Machine Learning permit. Whether fans can be convinced by what they see remains to be seen. If the tour is successful, it may well be the first of many hologram concerts to take place in our increasingly digitalised world.

To keep up to date with LUCA visit our website, subscribe to LUCA Data Speaks or follow us on Twitter, LinkedIn or YouTube.

How to Trick Apps That Use Deep Learning for Melanoma Detection

Franco Piergallini Guida    Carlos Ávila    22 February, 2021

One of the great achievements of deep learning is image classification using convolutional neural networks. In the article “The Internet of Health” we find a clear example where this technology, like Google’s GoogLeNet project (originally designed to interpret images for self-driving cars), is now used in the field of medical image analysis for the detection of melanoma and skin cancer.

Simply by searching the mobile app shops, we found several apps that, based on a photo of a spot or mole on your skin, predict whether it is a malignant melanoma or something completely benign. As we have seen in previous articles, these types of algorithms can be vulnerable to alterations in their behaviour. We selected some of these applications and performed a black-box attack, strategically generating noise on an image of a melanoma to see whether it was possible to invert the classification made by the applications’ internal neural networks. In other words, in this research scenario we had no access to, or information about, those internal networks.

Methodology

Given this situation, one possible path was to recreate our own trained models in the most intuitive way to address this type of problem and to generate attacks against them which, thanks to the property known as transferability, should also work on the applications we had selected. But we found an even simpler way: to save ourselves the step of training a neural network dedicated to melanoma detection in images, we simply looked for an open-source project on Github that addressed this problem and already had a trained neural network ready to use.

The transferability property was discovered by researchers who found that adversarial samples specifically designed to cause misclassification in one model can also cause misclassification in other, independently trained models, even when the two models are built on distinctly different algorithms or architectures.

To verify the baseline behaviour, we used one of the selected apps in the “normal” way from our device or Android emulator, loading melanoma images picked at random from Google in order to see the results. Indeed, we observed that the apps classified those images as melanomas with high confidence, as we can see in the following image:

Image 1: Classification of images as Melanomas

From there, we proceeded to recreate an adversarial attack. We assumed that all the victim applications used an approach similar to the one proposed in the Github repository. Therefore, using the neural network weights provided by the repository, we applied the Fast Gradient Sign Method (FGSM) technique, which we mentioned in another post, to generate the “white noise” needed to fool the neural networks. This noise, almost imperceptible to the human eye, is specifically crafted from the weights of the neural network to have the greatest possible impact on the classification probabilities assigned to the images and to completely change the prediction verdict.
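A minimal sketch of that FGSM step in TensorFlow may help to visualise it (the model file name and the input shapes are placeholders; the open-source repository we used may differ in its details):

import tensorflow as tf

# Hypothetical pre-trained melanoma classifier (e.g. the model published in the
# open-source repository mentioned above); the file name is a placeholder.
model = tf.keras.models.load_model("melanoma_model.h5")
loss_fn = tf.keras.losses.BinaryCrossentropy()

def fgsm_perturb(image, label, epsilon=0.01):
    """Fast Gradient Sign Method: add epsilon * sign(gradient of the loss w.r.t. the input)."""
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = loss_fn(label, prediction)
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)

# Usage (placeholder input): perturb a melanoma image batch so the verdict flips.
# adversarial_batch = fgsm_perturb(melanoma_batch, tf.constant([[1.0]]))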

And indeed, the images carefully generated with FGSM using the weights of the open-source neural network had the desired impact on the target applications. The transferability property is clearly fulfilled: we had no idea of the internal structure or weights of the applications’ networks, yet we were able to change the prediction for images that had previously been classified as melanomas with fairly high certainty, simply by adding “noise” to them.

Image 2: Analysed melanomas, but with a reduction in classification

We successfully recreated this type of attack on several apps we found in the Google and Apple shops. In some cases they behaved in a similar but not identical way, yet at the end of the tests we always got the same result: the neural network was tricked in its prediction.

In the following image we show the results for the same melanoma image uploaded to the same application, to which we increased the noise until we reached the point where the application’s internal network changed its prediction.

Cyber Security Weekly Briefing February 13-19

ElevenPaths    19 February, 2021

​​Privilege escalation vulnerability in Windows Defender

SentinelLabs researcher Kasif Dekel has discovered a new vulnerability in Windows Defender that may have been present for more than twelve years. The flaw, listed as CVE-2021-24092 with a CVSS of 7.8, would allow an unauthenticated attacker to perform privilege escalation on the vulnerable system, with low exploitation complexity. The vulnerability, fixed in the 9 February security updates, resides in BTR.sys, the driver responsible for deleting system resources, and is present in all versions of Windows Defender from 2009 onwards. Microsoft reports that no active exploitation has been detected and that users who have updated Windows Defender to the latest version are not affected.

All the information: https://labs.sentinelone.com/cve-2021-24092-12-years-in-hiding-a-privilege-escalation-vulnerability-in-windows-defender/

France links Russian group Sandworm to attacks on web hosting providers

The French National Cybersecurity Agency (ANSSI) has published a report linking the Russian group Sandworm to a series of attacks that occurred between 2017 and 2020 against several French technology entities, web hosting providers in particular. The campaign targeted internet-exposed servers running Centreon, an IT monitoring software. It is not yet known whether access was achieved through a supply chain compromise or by exploiting specific vulnerabilities in the software. Once the initial compromise was successful, the threat actor deployed the Exaramel and PAS Web Shell (also known as Fobushell) backdoors on the affected networks, using public and private VPN anonymisation services to communicate with the command and control server. ANSSI has published indicators of compromise for this threat in MISP JSON format, as well as YARA and Snort rules for detection.

More details: https://www.cert.ssi.gouv.fr/cti/CERTFR-2021-CTI-005/

​​QNAP fixes a vulnerability in Surveillance Station

QNAP has fixed a stack-based buffer overflow vulnerability affecting NAS devices running a vulnerable version of the Surveillance Station software. The flaw, listed as CVE-2020-2501 and assigned critical severity by the manufacturer, would allow attackers to execute arbitrary code and could also disrupt security services or anti-virus solutions running on the vulnerable device. QNAP has patched the vulnerability in Surveillance Station 5.1.5.4.3 for 64-bit operating systems and Surveillance Station 5.1.5.3.3 for 32-bit operating systems.

More details: https://www.qnap.com/en/security-advisory/qsa-21-07

​​RIPE NCC suffers credential stuffing attack

The Regional Internet Registry for Europe, the Middle East and parts of Central Asia, RIPE Network Coordination Centre (NCC), has issued a statement indicating that it has been the victim of a credential stuffing attack against its RIPE NCC Access single sign-on (SSO) service, which allows access to multiple applications or services with a single set of credentials. The organisation has reported that, despite some service disruption, the attack was successfully mitigated and that, after an initial investigation, no breached accounts have been detected. However, it indicates that investigations are still ongoing and that it will inform account holders individually if any affected accounts are detected. RIPE asks users to activate two-factor authentication to improve the security of their accounts.

All the information: https://www.ripe.net/publications/news/announcements/attack-on-ripe-ncc-access

Functional Cryptography: The Alternative to Homomorphic Encryption for Performing Calculations on Encrypted Data

Gonzalo Álvarez Marañón    19 February, 2021

— Here are the exact coordinates of each operative deployed in the combat zone.
— How much?
— 100,000.
— That is too much.
— And a code that displays on screen the updated position of each and every enemy soldier.
— Deal!

Video games are a very serious business. They move a market worth many billions of euros worldwide and attract all kinds of criminals.

For example, in an online multiplayer video game, each device needs to know the position of all objects on the ground in order to render them correctly in 3D. In addition, it needs to know the positions of other players, to render them if they are in sight of the local player or not to render them if they are hidden behind walls or rocks. The server faces a classic dilemma: if it provides the positions of the players to the other players, they can cheat; but if it does not provide them, the game will not know when to show the hidden players.

Instead of providing exact coordinates, it would be ideal to be able to provide information on whether or not a target is in view of the local player, but without revealing its position. This was hardly possible until the invention of functional cryptography.

Functional Cryptography, A Step Beyond Conventional Public-Key Cryptography

Despite all its benefits and wonders, public key cryptography has some practical limitations:

  • It provides all-or-nothing access to the encrypted data: either you decrypt the full plaintext, or you get no information about the plaintext at all.
  • Once the data is encrypted with the public key, there is only one private key capable of decrypting it.

In 2011, D. Boneh, A. Sahai and B. Waters proposed to go beyond conventional asymmetric encryption with their functional cryptography: a new approach to public-key encryption in which different decryption keys allow access to functions on the data in clear. In other words, functional cryptography makes it possible to deliberately leak information about the encrypted data to specific users.

In a functional encryption scheme, a public key, pk, is generated. Any user can encrypt a secret message, m, with it, so that c = E(pk, m). And here comes the twist: instead of using a conventional decryption key, a master secret key, msk, is created, known only to a central authority. When this authority receives the description of a function, f, it derives from msk a functional decryption key, dk[f], associated with f. Anyone using dk[f] to decrypt the encrypted data, c, will instead get the result of applying the function f to the data in clear, f(m), but no additional information about m. That is, D(dk[f], c) = f(m). Conventional public key cryptography is a particular case of functional cryptography where f is the identity function: f(m) = m.
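To make the flow of keys more tangible, here is a deliberately insecure toy sketch in Python of the interface just described: setup, encryption under pk, derivation of dk[f] from msk by the authority, and decryption that yields only f(m). It offers none of the guarantees of a real functional encryption scheme; it only illustrates the shape of the API:

import os
import hashlib

def setup():
    """Toy setup: msk is a random master secret; pk stands in for the public parameters."""
    msk = os.urandom(32)
    pk = hashlib.sha256(msk).hexdigest()
    return pk, msk

def encrypt(pk, m):
    """Toy encryption of an integer message m under pk (NOT secure)."""
    pad = int.from_bytes(hashlib.sha256(pk.encode()).digest()[:4], "big")
    return {"pk": pk, "blob": m ^ pad}

def keygen(msk, f):
    """The central authority derives a functional decryption key dk[f] from msk."""
    return {"f": f, "pk": hashlib.sha256(msk).hexdigest()}

def decrypt(dk, c):
    """Decryption with dk[f] returns f(m); in a real scheme, nothing else about m leaks."""
    if dk["pk"] != c["pk"]:
        raise ValueError("key was not derived for these public parameters")
    pad = int.from_bytes(hashlib.sha256(c["pk"].encode()).digest()[:4], "big")
    return dk["f"](c["blob"] ^ pad)

pk, msk = setup()
c = encrypt(pk, 42)
dk_parity = keygen(msk, lambda m: m % 2)  # dk[f] for f(m) = m mod 2
print(decrypt(dk_parity, c))              # 0: the parity of m is revealed, not m itself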

Applications of Functional Encryption

A multitude of use cases can be devised for functional encryption, anywhere encrypted data is required to be operated on, but not seen:

  • Spam filtering: A user does not trust his mail provider but wants it to clean up his spam messages. The user can implement a functional encryption system: he encrypts all his messages with pk and provides the server with a functional decryption key, dk[f], where f is a spam filtering function that returns 1 if a message m is spam and 0 otherwise. The server will use dk[f] to check if an encrypted message is spam, but without obtaining any additional information about the message itself.
  • Database searches: a cloud service stores billions of encrypted images. The police want to find all images containing a suspect’s face. The server provides a functional decryption key that decrypts the images containing the target face but does not reveal anything about other images.
  • Big data analytics: Consider a hospital that records its patients’ medical data and wants to make it available to the scientific community for research purposes. The hospital can delegate the encrypted storage of its sensitive patient data to a public cloud. It can then generate functional decryption keys that it distributes to researchers, enabling them to calculate different statistical functions on the data, without ever revealing individual patient records.
  • Machine Learning on encrypted data: after training a classifier on a clear dataset, a functional decryption key associated with this classifier can be generated and used to classify a set of encrypted data, so that in the end only the classification result is revealed, without filtering anything about the data in the set.
  • Access control: In a large organisation you want to share data between users according to different access policies. Each user can encrypt x = (P, m), where m is the data the user wants to share, and P is the access policy that describes how the user wants to share it. The functional decryption key dk[f] will check whether the user’s credentials or attributes match the policy and reveal m only if they do. For example, the policy P = (“ACCOUNTING” OR “IT”) AND “MAIN BUILDING” would return m to an accounting or IT department employee with an office in the organisation’s main building (see the sketch after this list).
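As a small illustration of that last access-control example (plain policy matching in Python, not an actual attribute-based or functional encryption scheme, where this check would be enforced cryptographically by dk[f]):

# Toy policy check for P = ("ACCOUNTING" OR "IT") AND "MAIN BUILDING".
# In a real deployment this check is enforced by the functional decryption key,
# not by application code like this.
def matches_policy(attributes):
    return (("ACCOUNTING" in attributes or "IT" in attributes)
            and "MAIN BUILDING" in attributes)

print(matches_policy({"IT", "MAIN BUILDING"}))          # True: m would be revealed
print(matches_policy({"ACCOUNTING", "BRANCH OFFICE"}))  # False: m stays hidden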

Differences Compared to Fully Homomorphic Encryption (FHE)

If you are familiar with the concept of fully homomorphic encryption (FHE), you may have thought of it when reading about functional encryption. The difference between the two is crucial: fully homomorphic encryption performs operations on the encrypted data and the result is still encrypted. To access the result of the computation on the encrypted data, decryption is needed, which can be inconvenient in certain use cases. The following schematic representation will help to visualise the difference between the two encryption schemes.

In the case of fully homomorphic encryption (FHE), the function f is computed on the encrypted data and the result is encrypted:

E(m1), E(m2), …, E(mn) –> E(f(m1, m2, …, mn))

Whereas with functional encryption, the result is directly accessible after the calculation of f:

E(m1), E(m2), …, E(mn) –> f(m1, m2, …, mn)

Another important difference is that in the case of FHE, anyone can perform the calculations on the encrypted data, so given the encrypted text of the result, there is no guarantee that the calculations have been performed correctly. FHE requires the use of zero-knowledge proofs to verify that the correct function was evaluated. On the other hand, in functional cryptography, only the holder of the functional decryption key can perform the calculations, which provides greater guarantees of correctness.

Functional Encryption Security

There is a wide variety of functional encryption schemes, based on different and very complex mathematical constructions. Simplifying a lot, a functional encryption scheme is considered secure if an adversary cannot obtain more information about m than f(m). Even if n parties in possession of the keys dk[f1], dk[f2], …, dk[fn] collude to attack m, they will not obtain more information than f1(m), f2(m), …, fn(m). The level of information about m revealed is fully controlled by whoever generates the functional decryption keys.

Super-Vitaminised And Super-Mineralised Public Key Cryptography for A Future in The Cloud

Functional encryption is still a very young discipline, which is receiving strong research momentum for its endless applications in cloud services and IoT. A particularly interesting application explored by the European FENTEC project is the possibility of moving decision-making based on end-to-end encrypted data from back-end systems to gateways in complex networks, which is called local decision-making. Enabling gateways to perform such local decision-making is a big step forward in securing IoT and other highly decentralised networks that may want to implement end-to-end encryption without losing too much decision-making capability at the gateway level.

If you want to try functional cryptography, you can do so thanks to several libraries published by the researchers of the FENTEC project. You can go directly to Github and start playing with CiFEr, a C implementation, and GoFE, implemented in Go. You can even try it out in your browser using WASM.

Functional encryption represents a further step towards a more powerful and versatile cryptography, capable of protecting users’ privacy in use cases that were previously unthinkable with conventional cryptography.

WhatsApp, Telegram or Signal, Which One?

ElevenPaths    17 February, 2021

In the world of smartphones, 2021 began with a piece of news that has left no one indifferent: the update of WhatsApp’s terms and conditions of use. This measure, which was set by Facebook to come into force on 8 February but has finally been delayed to 15 May, has generated a great deal of controversy on social networks given the impact it has had on users’ privacy.

As a consequence, migration to other messaging applications has increased significantly, as can be seen in the graphic below:

Source: Apptopia

Given the situation, in this article we will look at the main differences in terms of security and privacy between the green app, Telegram and Signal. We have discarded applications such as iMessage or Google Messages, because they are exclusive to iPhone and Android users respectively, as well as other minority applications less relevant to this comparison.

WhatsApp

WhatsApp has more than 2 billion users worldwide. It uses end-to-end encryption in all its chats, both individual and group. This cryptographic system protects messages so that only the sender and receiver can read them and no one else, not even the application itself. The cryptographic algorithms used are Curve25519/AES-256/HMAC-SHA256.

The large amount of data associated with your account that it requests is noteworthy: phone number, user ID, contacts, email, device ID, approximate location, advertising data, purchase history and payment information, product interaction, bug and performance reports, and customer support. The metadata it collects includes IP addresses, contacts, network operators, dates of use, location, phone model and device ID.

WhatsApp has some privacy options, such as hiding your name, last connection time, profile picture, info and status, and it offers two-step verification and a fingerprint unlock option.

Telegram

Telegram is WhatsApp’s main competitor due to the similarity of its functionalities. It currently has more than 500 million users around the world. This application also uses end-to-end encryption, but not in all its chats, only in secret chats. Standard chats use client-server encryption, which is nonetheless very robust; in Telegram’s secret chats, the end-to-end encryption layer is added on top.

The encryption algorithms are RSA 2048/AES 256/SHA-256 (SHA-1 has been dropped due to its insecurity). Telegram is an open-source app and anyone can review its source code, protocol and API.

The app asks for considerably less data associated with your account than WhatsApp does: phone number, user ID, phone contacts and your account name. In terms of metadata, it collects IP addresses, contacts and devices.

Telegram offers two-step verification (2FA), fingerprint unlocking and an incognito keyboard, and in secret chats there are additional functions such as blocking screenshots or having your messages self-destruct after they have been sent. In addition, if the account is abandoned, it self-destructs, automatically deleting all the information held on Telegram’s servers. The app allows you to use an alias so as not to reveal your identity, and likewise your phone number is not visible unless you allow it.

Telegram has bots, a functionality that allows the automation of a multitude of tasks within the application, for example, spam filtering, phishing detection, etc.

Signal

Signal has gone from 10 million to 50 million downloads in just a few days. This is a much more modest number than the two previous apps and its functionalities are more limited (although it has recently replicated several of WhatsApp’s), but the relevance of privacy in public opinion is making it gain popularity among users.

The end-to-end encryption used in all communications is the same as WhatsApp’s (or rather the opposite, as WhatsApp uses the Signal protocol developed by Open Whisper Systems), with the same encryption algorithms: Curve25519/AES-256/HMAC-SHA256. Signal is also open-source so that the developer community can contribute to improving its code.

Signal also includes two-step verification. Your name and profile picture are visible to your contacts, and this is not configurable. Other key features include sealed sender, which lets you send messages without sharing your profile, disappearing messages and screenshot blocking (like Telegram), and the option to relay calls through Signal’s servers to keep your IP address hidden.

The only information this app asks for is your phone number. That’s right, a phone number is enough to create a Signal account. Also, the only metadata it stores is the date of the last connection.

Let us recap what we have seen in the following table:

As can be seen, there are alternatives with less impact on users’ privacy. However, the strong network of users that WhatsApp has built up thanks to its popularity raises the question: how will I be able to talk to my contacts if they are still using WhatsApp? This question, along with the small differences between the apps’ functionalities, implies a decision that only users can make.

Robot Waiters – The future or just a gimmick?

Patrick Buckley    16 February, 2021

As we continue to battle the COVID-19 pandemic, the hospitality industry is looking to technology as a way to keep workers safe. Could robot waiters be the answer? In today’s post we bring you our perspective.

The evolution of the Robot Waiter

Robots have been used in hospitality settings for many years. The focus of the so-called ‘robot revolution’ in this industry has been mainly in China, where robots have been serving bemused customers since 2006.

Up until recently, the main value of this technology was in its novelty factor. In tech-crazed regions of China, enthusiasts seek out establishments with robot waiters in order to experience this technology first-hand. In the same way, cruise line Royal Caribbean International has installed robot bartenders on a number of ships with the purpose of providing entertainment to holidaymakers.

Aside from being a gimmick, employing robots in restaurants, bars and hotels makes a lot of financial sense. The cost of these systems can be as low as $500 USD per unit, which makes them good value for money in China, a country where the average human waiter can expect to make at least twice that amount each month.

Of course, even the most advanced systems can’t replace the experience of interacting with a human. Perhaps this is why service robots are yet to boom outside of China. It is likely that the novelty of an emotionless machine would wear off faster in Western countries, whose cultures promote more social experiences.

The Robot Waiter and the Pandemic

Due to the pandemic, certain areas of our economy are experiencing a premature digital transformation. According to Transparency Market Research (TMR), the robotics industry is projected to grow by 17.64% annually (CAGR) between now and 2024.

The hospitality sector is not exempt from this transition. The sector is experiencing a shift in customer preference towards a more distanced, impersonal style of service that can only be provided by machines.

Luxury in the service sector can be defined by the ability to provide the customer with what they desire at the time they want it. It is for this reason that we are increasingly seeing service robots being rolled out around the world, especially in luxury settings.   

Upmarket hotel chains such as Four Seasons, Marriott and Hilton Group have started to implement robot waiters in their establishments around the world. Systems powered by Artificial Intelligence are capable of answering customer enquiries, delivering room service and even enforcing social distancing rules. 

The benefits of the robot butler or waiter become even greater in a ‘quarantine hotel’ setting. Countries continue to impose mandatory hotel quarantine for international arrivals, and here it makes total sense for robots to take the place of human workers in order to avoid contagion among staff and quarantining guests. Hotels in Japan have already rolled out this technology across a variety of locations.

Final Thoughts

The robot waiter is no longer just a gimmick. The COVID-19 pandemic has propelled this technology to the forefront of the minds of the hospitality sector. Whether or not this technology will survive in a post-COVID-19 world is unclear and depends on the cultural importance placed on human interaction in different regions worldwide.

To keep up to date with LUCA visit our website, subscribe to LUCA Data Speaks or follow us on Twitter, LinkedIn or YouTube.

26 Reasons Why Chrome Does Not Trust the Spanish CA Camerfirma

Sergio de los Santos    15 February, 2021

Starting with the imminent version 90, Chrome will show a certificate error when a user tries to access any website with a certificate signed by Camerfirma. It may not be the most popular CA, but it is very present in Spain in many public organisations, for example the Tax Agency. If this “banning” by Chrome is not resolved before the income tax campaign, there may be problems accessing official websites.

Many other organisations in Spain (including the COVID vaccination campaign website, vacunacovid.gob.es) also depend on its certificates. But what happened, and why exactly did Chrome stop trusting this CA? Microsoft and Mozilla still trust it, but Chrome’s decision will of course create a chain effect that will most likely make it impossible to trust anything issued by this CA from the main operating systems and browsers.

Amid the coverage of this issue, there has been talk of Camerfirma’s failures and its inability to respond to and solve them, but to be fair, we need to know a little about the world of certificates. The first thing to understand is that all CAs make mistakes: a lot of them, always. Just take a look at Bugzilla. The world of cryptography is extremely complex, and so is the world of certificates.

Following the requirements is not always easy, which is why the CA/Browser Forum and many researchers are responsible for ensuring that certification authorities function properly and comply strictly and rigorously with these standards. They are therefore very used to failures, mistakes and oversights, and tolerate problems to a certain extent as long as they are reversed and corrected in time. It is a question of showing willingness and efficiency in management, rather than being perfect.

Incidents occur on a daily basis and are varied and complex, but CAs usually react by solving them and increasing vigilance, which improves the system on a daily basis. But sometimes, trust in a CA is lost because a certain limit is crossed in terms of its responses and reactions. In the case of Camerfirma, it seems that the key is that they have been making mistakes for years, some of them repeatedly, and that they have shown too many times that the remedies and resolution practices of this trusted authority cannot be trusted. Moreover, it seems that their excuses and explanations do not add up.

Chrome’s reaction thus demonstrates that cryptographic security must be taken seriously, and that it will not accept CAs that confess that they are understaffed, ignore specifications, etc. These moves are necessary. But with decisions like this, Chrome is on its way to becoming a de facto CA. We have already mentioned that traditional CAs are losing control of certificates. This could be one of the possible reasons why Chrome will have a new Root Store.

26 Reasons

We will describe the reasons very briefly and in (subjective) order of importance or relevance. The text in quotation marks is verbatim from what we have found in the Bugzilla tracker, which dwells on the fragility of Camerfirma’s excuses. To be fair, they have to be read in their full, particular context in order to be understood. But even so, what emerges on the one hand is a certain inability on the part of Camerfirma to do the job entrusted to it of being a serious CA capable of responding in time and form… and, on the other, a significant weariness on the part of those who ensure that this is the case.

  • One: In 2017, the world stopped trusting WoSign/StartCom as a CA for different reasons. Camerfirma still had a relationship with StartCom as a way to validate certain certificates, and it did so under the criteria of “other methods”, which is the strangest (and last) way to achieve this and, therefore, raises suspicions. The CA/Forum did not want these “other methods” to be used (which came from an outdated specification) and did not want certificate validation to be delegated to StartCom. Camerfirma did not rectify the situation and continued the relationship with StartCom without making it clear how.
  • Two: They did not respect the CAA standard. This DNS record should contain which CAs are the preferred CAs for a website. For example, I do not want CA X to issue a certificate for me ever… or I only want CA Y to issue certificates for my domain. Camerfirma thought that if certificate transparency existed, they could avoid respecting CAA standards, because “they were in a hurry and misunderstood the requirements”.
  • Three: OCSP responses (to quickly revoke) did not comply with the standards.
  • Four: It was discovered that the Subject Alternative Names fields of many of their certificates were wrong. When this was reported to Camerfirma, there was no response, because these reports “went to only one person”, who did not reply. Camerfirma never “intentionally” fixed certificates of this type and, even after revoking some of them, reissued them incorrectly.
  • Five: Intesa Sanpaolo, one of Camerfirma’s sub-CAs, also made several mistakes when it came to timely revocation. It even issued a certificate for “com.com” by “human error”.
  • Six: They made certain revocations by mistake, confusing serial numbers in valid and invalid certificates. Camerfirma decided to do a “de-revocation”, which is intolerable in the world of certificates, but they implemented it inconsistently. In the midst of all the trouble, they claimed that they would use EJBCA management software to mitigate this in the future, but then they didn’t… then they confirmed that they would develop their own software with similar features. As not much more was heard about this afterwards, they claimed that they were in “daily meetings to discuss these issues”.
  • Seven: Camerfirma infringed a rule related to the inclusion of the issuer’s name and serial number in the key ID field (which is not allowed). All Camerfirma certificates had been doing this wrong since 2003. They claimed they had got it wrong and fixed it at the end of 2019, but they did not revoke the previously issued certificates. In 2020 they reissued certificates that infringed this policy, which they did not revoke either.
  • Eight, nine and ten: They are not supposed to issue certificates with underscores in their names. Owing to “human error” in their issuance and detection processes, they were not able to detect them in time and some slipped through. The same happened with a domain name containing the character “:”, and with a domain that existed but was misspelled in the certificate.
  • Eleven: Camerfirma (and others) issued sub-CAs that could give OCSP responses for Camerfirma itself, because they had not included a suitable restriction in the certificate’s EKUs (EKUs are fields to limit the certificate’s power and use). They argued that they were not aware of this security flaw and did not revoke them in time. The reason for not revoking is that one of the sub-CAs was used in the healthcare smartcard sector and if they were to revoke them, these smartcards would have to be replaced. The problem was so important that they had to escalate the issue to higher bodies at a national level. On 2 October 2020, it appears that the keys on these cards were destroyed, but this destruction was neither supervised nor witnessed by a qualified auditor nor by Camerfirma itself.
  • Twelve: They issued a sub-CA for S/MIME use to the government of Andorra, which they did not audit. When they did, it was found that quite a few mistakes had been made. In the end they had to revoke it, and claimed that, as they were TLS certificates, they thought they were outside the scope of the audits. Again, the problem seemed to be that they did not have sufficient staff.
  • From thirteen to twenty-six: We have cheated here to group together all the other reasons, which are very similar: for example, dozens of technical failures in other certificate fields that they were unable to revoke in time. The excuses were varied, from local legislation obliging them to use certain formulas that did not comply with the standards (something they did not prove), to the claim that their system had worked well for 17 years but, as it grew too much, some internal controls failed. Sometimes there were no excuses; they simply did not respond to requests. In one incident, they were supposed to disclose the existence of a sub-CA within a week of its creation, but did not do so. What happened, according to them, was that “the person in charge was not available”. Neither was that person’s back-up. Camerfirma tried to solve this by saying that they would put in place “a backup for the backup person in charge of this communication”. To excuse other problems, they claimed that their staff was completely “overloaded” or “on holiday”. Basically, for all the common errors found in many certificates (insufficient entropy, incorrect extensions…), Camerfirma repeatedly failed to revoke the certificates in time and form.

Conclusions

It is not easy to be a CA. Camerfirma is not the first to be distrusted, nor will it be the last. Even Symantec suffered a setback in this respect. FNMT also had a hard time getting Firefox to include its certificate in its repository, and it took several years. At some points in that incredible story with FNMT, there is also a sense of stalling, where one senses a lack of adequate staff to meet Mozilla’s demands.

The world of certification is a demanding one. But it must be. The internet that has been built literally depends on the good work of the CAs. Tolerating the operation of a CA that deviates one millimetre from continuous vigilance, control and demand, or fails to respond in a timely manner, is like allowing condescension towards a policeman or a judge who commits any hint of corruption. It should not be tolerated for our own sake and because of the significant consequences it would entail.

Cyber Security Weekly Briefing February 6-12

ElevenPaths    12 February, 2021

Attempted contamination of drinking water through a cyber-attack

An unidentified threat actor reportedly accessed computer systems at the City of Oldsmar’s water treatment plant in Florida, US, and altered chemical concentrations to dangerous levels. The intrusion reportedly took place on Friday 5 February, when the attacker twice gained access to a computer system that was configured to allow remote control of water treatment operations. During the second intrusion, which lasted about five minutes, an operator monitoring the system detected the intruder as the mouse cursor moved across the screen and the software responsible for water treatment was accessed, changing the sodium hydroxide (lye) concentration from approximately 100 parts per million to 11,100 parts per million. City of Oldsmar staff have indicated that the attacker disconnected as soon as the levels were changed and that a human operator immediately reverted them back to normal, preventing contaminated water from being delivered to local residents. Authorities have not attributed the attack to any specific group or entity, although it is worth noting that the city of Oldsmar is located near Tampa, which hosted Sunday’s Super Bowl.

More information: https://www.zdnet.com/article/hacker-modified-drinking-water-chemical-levels-in-a-us-city/

Microsoft Security Newsletter

Microsoft has published its monthly security newsletter, in which it has fixed 56 vulnerabilities: 11 classified as critical, two as moderate and 43 as important. Among the flaws addressed is a Windows 0-day, listed as CVE-2021-1732, which was being exploited before the patches were published and which would allow an attacker or malicious programme to obtain administrative privileges. Among the other flaws fixed are two critical flaws (CVE-2021-24074 and CVE-2021-24094) in the Windows TCP/IP stack, which could enable remote code execution, as well as a third flaw (CVE-2021-24086) that could be used in DoS attacks to crash Windows devices. In addition, a critical remote code execution flaw in the Windows DNS server component (CVE-2021-24078) has also been fixed, which could be exploited to hijack domain name resolution operations within corporate environments and redirect legitimate traffic to malicious servers. Finally, Microsoft also fixed six previously disclosed vulnerabilities (CVE-2021-1721, CVE-2021-1727, CVE-2021-1733, CVE-2021-24098, CVE-2021-24106 and CVE-2021-26701).

All the information: https://msrc.microsoft.com/update-guide/releaseNote/2021-Feb

SAP Security Update Newsletter

SAP has published its monthly security update newsletter, in which it has addressed a critical vulnerability in SAP Commerce, among others. The critical flaw, listed as CVE-2021-21477 and with a CVSS of 9.9, affects SAP Commerce versions 1808, 1811, 1905, 2005 and 2011, and could allow remote code execution (RCE). The company reportedly fixed the flaw by changing the default permissions for new installations of the software, but additional manual remediation actions are required for existing installations. Such actions, according to security firm Onapsis, can be used as a complete workaround if the latest patches cannot be installed. In addition, updates to six other previously released security advisories have been included, among them a fix for flaws in the Chromium browser control shipped with the SAP Business Client, which has a CVSS score of 10 and affects version 6.5 of the client. Finally, a critical flaw (CVE-2021-21465), previously published and now updated, covering multiple issues in SAP Business Warehouse, a data warehousing product based on the SAP NetWeaver ABAP platform, has also been fixed. Users are strongly advised to upgrade to the latest versions of the affected products.

More information: https://wiki.scn.sap.com/wiki/pages/viewpage.action?pageId=568460543

Microsoft warns of increase in Webshell attacks

Microsoft has warned that the volume of monthly Webshell attacks has doubled since last year. Webshells are tools that threat actors deploy on compromised servers to gain and/or maintain access, as well as to remotely execute arbitrary code or commands, move laterally within the network or deliver additional malicious payloads. The latest data from Microsoft 365 Defender shows that this steady increase in the use of Webshells has not only continued but accelerated: every month from August 2020 through January 2021, an average of 140,000 of these malicious tools was found on compromised servers, nearly double the monthly average seen the previous year. In its publication, Microsoft also provides some advice on how to harden servers against attacks that attempt to download and install a Webshell. Likewise, it is worth recalling that the US National Security Agency, in a joint report issued with the Australian Signals Directorate (ASD) in April 2020, also warned that attacks on vulnerable web servers to deploy Webshell backdoors were intensifying. It should also be added that the NSA has a repository of tools that organisations and administrators can use to detect and block this type of threat.

More details: https://www.microsoft.com/security/blog/2021/02/11/web-shell-attacks-continue-to-rise/


If you want to receive more information in real time, subscribe to our cybersecurity news and reflections channel created by the ElevenPaths Innovation and Lab team. Visit the CyberSecurityPulse page.