What is AI-winter and how to avoid it

Nacho Palou    6 June, 2023

Throughout its history, Artificial Intelligence (AI) has experienced ups and downs: periods when it received more attention and resources, and periods of disillusionment and stagnation.

These periods of waning interest in Artificial Intelligence are known as AI-winters. Two AI winters have been identified in the past 50 years, and we are faced with the challenge of avoiding a third one.

Consequences of an AI-winter

During AI-winters, disillusionment with Artificial Intelligence translates into widespread disinterest. This disinterest leads to a reduction in attention and funding for research and development.

These periods last for years and see few significant advancements; the few that do occur are disappointing. Even after a moment of high enthusiasm, in an AI winter Artificial Intelligence fades from conversations and media coverage.

Currently, Artificial Intelligence is experiencing a “Cambrian explosion” that some describe as hype or even a bubble. In any case, we are in a period of high expectations, the complete opposite of an AI winter. And precisely because it is a known pattern, the inevitable question arises: Are we on the verge of a third AI winter?

AI-winters throughout history

Experts have identified two AI-winters. Both occurred after notable advancements and moments of industry, media, and public excitement:

  • First AI-winter, late 1970s and early 1980s: During this period, expectations for Artificial Intelligence were also high. However, the advancements did not live up to the exaggerated promises made by science fiction. As a result, there was a significant decrease in funding and interest in research and development of Artificial Intelligence.
  • Second AI-winter, late 1980s and early 1990s: After the first AI winter, interest in Artificial Intelligence resurfaced. Once again, expectations exceeded achievements. The lack of significant progress and the gap between expectations and reality led to another period of disillusionment and disinterest.

In 1996, IBM’s supercomputer Deep Blue won a game against reigning world chess champion Garry Kasparov for the first time; in 1997 it went on to defeat him in a full match.


Factors that could lead to a ‘Third AI Winter’

Some skeptics do not rule out the possibility of a third AI winter following the current period of high expectations, mainly because new variables come into play this time, including those related to privacy, security, and ethics.

This time, Artificial Intelligence is not only finding applications in business. It is also proving useful to the general public, as in the case of digital assistants or Generative Artificial Intelligence, two forms of AI accessible to end users.

This widespread applicability should make the next AI winter less likely, but several factors could still contribute to one. The first is fear.

  • Fear: As Artificial Intelligence becomes more advanced and capable, concerns spread about its impact on society: fear of uncontrolled AI, job loss, invasion of privacy, lack of transparency… and the ever-present dystopian scenario.

Mistrust and skepticism regarding technology can arise from these fears. However, there are more factors that could lead to a third AI winter:

  • Restrictive legislation around Artificial Intelligence: If excessively restrictive or ill-conceived regulations are implemented, it could hinder research and development of Artificial Intelligence, limiting innovation and progress.
  • Scarcity of high-quality data: Artificial Intelligence relies on large amounts of data, which are used to train and “teach” algorithms. If there is a lack of relevant and high-quality data in certain domains, or if the data does not consider demographic and social differences, it could hinder the development of reliable Artificial Intelligence models.
  • Technical limitations: Limited computational power, poor energy efficiency, lack of scalability of algorithms, or technical phenomena such as hallucinations in Artificial Intelligence could slow down its progress.

Keys to avoid AI-winter

  • Balanced legislation: Regulations should address legitimate concerns around Artificial Intelligence (including those related to privacy, security, and non-discrimination) without hindering its development and potential benefits. Collaboration between lawmakers, AI experts, and the industry is essential to achieving this balance.
  • Support education and technological advancements: Investing in research and development to drive significant technological advancements requires fostering academic research and collaboration between industry and institutions. It is also critical to educate children and young students.
  • Promote trustworthy Artificial Intelligence: It is essential to address concerns regarding privacy, security, and social impact. Ethics, transparency, explainability, along with responsibility and proper governance, are indispensable principles to avoid an AI-winter.

Although predicting the next AI winter is difficult, it is necessary to learn from the past and take measures to maintain sustainable progress in Artificial Intelligence.

Only in this way can we avoid a third AI winter and harness the full potential of the progress offered by this technology.

Featured image generated with Bing.

The power of sustainable digitalization in the fight against climate change

Nacho Palou    5 June, 2023

Climate change is considered the greatest challenge of our time. Its effects range from desertification and droughts to floods and rising sea levels. Some of its most visible consequences are natural disasters, disruptions in food production, and impacts on energy markets.

However, in this scenario, next-generation digital technologies offer solutions to combat climate change and mitigate its consequences. Artificial Intelligence, Cloud, Big Data, Internet of Things (IoT), and ultra-efficient 5G connectivity are some of the enabling innovations of sustainable digital transformation. They can help us decarbonize the economy, optimize the use of renewable energy, and protect natural resources.

Sustainable digitization for green transformation

At Telefónica Tech, we are committed to developing digital solutions that protect, optimize, and reuse natural resources with minimal environmental impact. Almost two out of three solutions in our portfolio carry the Eco Smart seal, verified by AENOR.

The Eco Smart seal identifies products and services designed to drive green digitization. This results in reduced water and energy consumption, promotion of the circular economy, and lower emissions that contribute to the greenhouse effect.

Furthermore, our solutions promote energy efficiency, sustainable agriculture, and efficient transportation.

Next-generation digital technologies can contribute to a 20% reduction in CO2 emissions by 2050.

Source: Accenture & World Economic Forum.

The impact of next-generation technologies

The application of next-generation digital technologies has the potential to reduce global emissions by 15% to 35% in the coming years.

These technologies enable improved management, control, and real-time action in various sectors, such as industry, energy, and public utilities.

Image by Freepik

For example, when our digital technologies are applied to:

  • Water supply: they can optimize distribution and reduce losses, which is particularly relevant in a context of water scarcity due to climate change.
  • Infrastructure, such as natural gas networks, to reduce leaks and greenhouse gas emissions; energy distribution, including renewable sources; or public lighting, among others.
  • The agricultural sector, which is highly exposed to the increasing impact of climate change. They enable smart and precision farming, optimizing the use of resources such as water and chemicals, thus reducing costs.
  • Workspaces, transforming the way we work: they reduce daily commutes, energy consumption, and associated CO2 emissions. Additionally, digitization of documents and reduced paper usage have a positive impact on the environment.

Towards a sustainable and resilient future in the face of climate change

Sustainable digitalization has emerged as a powerful tool to combat climate change and its consequences.

Through the implementation of these solutions, we can:

  • Protect our natural resources, such as water.
  • Optimize operations, logistics routes, and industrial and production processes.
  • Reduce polluting and greenhouse gas emissions.
  • Generate opportunities for progress for all.

Therefore, it is essential to continue driving technological innovation and fostering collaboration between the public and private sectors. This will enable a successful transition towards a more sustainable and resilient future in the face of climate change.

Featured photo: Nikola Jovanovic / Unsplash.

Cyber Security Weekly Briefing, 27 May – 2 June

Telefónica Tech    2 June, 2023

Backdoor discovered in hundreds of Gigabyte motherboards

Cybersecurity researchers at Eclypsium discovered a secret backdoor in the firmware of hundreds of Gigabyte motherboard models, a well-known Taiwanese manufacturer.

Every time a machine with one of these motherboards is rebooted, an update application downloaded and executed by the board’s firmware is silently activated, allowing the installation of other, possibly malicious, software.

The firmware on these systems drops a Windows binary to disk at operating system startup, then downloads and executes another payload from Gigabyte’s servers over an insecure connection without verifying the legitimacy of the file. A total of 271 different motherboard versions were identified as vulnerable.

Although the feature appears to be related to the Gigabyte App Center, it is difficult to rule out a malicious backdoor due to the lack of proper authentication and the use of insecure HTTP connections instead of HTTPS, which could allow man-in-the-middle attacks.

Even if Gigabyte fixes the issue, firmware updates may fail on users’ machines due to their complexity and difficulty in matching with the hardware. In addition, the updater could be used maliciously by actors on the same network to install their own malware.
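The core problem described above is fetching and running code without any integrity check. A standard mitigation is to download only over TLS and verify a cryptographic checksum or signature before execution. A minimal sketch of the checksum step (function name and payload values are illustrative):

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Compare a payload's SHA-256 digest against a known-good value
    published out of band; reject the payload on any mismatch."""
    return hashlib.sha256(data).hexdigest() == expected_sha256
```

The digest must come from a trusted channel separate from the download itself, otherwise an attacker who can tamper with the payload can tamper with the checksum too.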

More info

SharpPanda’s campaign against the G20

Cyble has published an investigation in which it shares its findings on the campaign currently being developed by the SharpPanda espionage group, allegedly backed by the Chinese government, against the member countries of the G20 (the international forum that brings together the world’s most industrialized countries along with organizations such as the UN or the World Bank).

As Cyble explains, the campaign starts with the distribution of emails to high-ranking officials of the targeted countries in which a .docx file supposedly generated by the G7 (a group of countries within the G20) is included.

This file downloads an RTF document built with the RoyalRoad malware kit. The exploit creates a scheduled task and executes a DLL malware downloader, which in turn executes another Command & Control (C2) DLL. RoyalRoad exploits a specific set of Microsoft Office vulnerabilities, including CVE-2018-0802, CVE-2018-0798, and CVE-2017-11882.

More info 

0-day vulnerability actively exploited in Email Security Gateway for months

Barracuda recently issued a statement warning customers about an actively exploited 0-day vulnerability in its Email Security Gateway asset.

The security flaw, identified as CVE-2023-2868, could allow a remote attacker to execute code on vulnerable systems. However, new information has since emerged indicating that exploitation of this vulnerability has been taking place since October 2022 using three different strains of malware: Saltwater, Seaspy, and Seaside.

Barracuda has not publicly released information about the victims, but it has identified evidence of data exfiltration in some of them, all of whom have been notified. It should be noted that this vulnerability affects versions 5.1.3.001 to 9.2.0.006 and was fixed on May 20 and 21.

More info

New analysis of BlackCat ransomware

The IBM research team has published an analysis in which it mentions new ransomware variants that enable better data exfiltration and evasion of security solutions. In particular, the experts note that the operators of the BlackCat/ALPHV ransomware continue to evolve the tool, especially from two perspectives.

On the one hand, the operators of this malware are reportedly using ExMatter malware in their operations, the function of which is to optimise file exfiltration processes.

On the other hand, IBM says it has analysed a new strain of BlackCat, which it has dubbed Sphynx, which stands out for having a series of capabilities that allow it to evade security solutions more effectively.

IBM points out that these ransomware evolutions show that the operators behind these threats are increasingly aware of the systems’ infrastructures and are trying to improve their operational efficiency.

More info

CISA has warned about two vulnerabilities in industrial control systems

CISA has issued a warning about two vulnerabilities affecting industrial control systems, specifically Moxa’s MXsecurity product.

Firstly, the vulnerability identified as CVE-2023-33235, with a CVSS score of 7.2, is a command injection flaw that can be exploited by attackers who have obtained authorisation privileges to escape the restricted shell and execute arbitrary code.

On the other hand, CVE-2023-33236, with CVSS 9.8, can be exploited to create arbitrary JWT tokens and bypass authentication of web-based APIs. Notably, Moxa has addressed these flaws with the update to v1.0.1.

For its part, CISA recommends that users implement defensive measures to minimise the risk of exploitation, such as minimising network exposure for devices and using firewalls and VPNs.

More info

Featured photo: DCStudio on Freepik.

How language puts business Cybersecurity at risk

Nacho Palou    1 June, 2023

Cybersecurity is crucial for businesses and organizations of all sizes and sectors. Cyberattacks can have severe or even fatal consequences for businesses, such as data loss, operational disruptions, or regulatory non-compliance.

They can also directly impact revenue, damage reputation, or undermine the trust of employees, clients, and suppliers. For these reasons, executives must understand the significance of Cyber Security for companies and take appropriate measures to protect their digital assets.

And the only way to achieve this is by speaking the same language. However, that is not always the case.

The language problem in Cybersecurity

On the contrary: according to a recent study, 44% of executives surveyed do not prioritize Cybersecurity in their companies because of the confusing language used in this field; specifically, because “the language used is confusing and hinders threat understanding,” as reported by Europa Press.

This is despite the fact that 45% of executives from large companies in Spain know cyber threats are the “greatest danger” their company can face.

One-third of the surveyed executives stated that they do not understand the meaning of ‘malware’ (malicious software), and almost another third do not comprehend the term ‘ransomware’ (data hijacking).

Cybersecurity is a complex and technical subject. However, the language barrier is “universal in other professions,” says Sergio de los Santos, Head of Innovation and Laboratory at Telefónica Tech. “If we aim to be precise, we risk being too technical and distant. If we are too simplistic, we may trivialize the problem. Finding a middle ground is challenging but possible,” he explains.

How to overcome the language barrier

To overcome the language barrier and professional jargon in cybersecurity, it is crucial to communicate effectively. This applies not only to executives but also to the general public, including young individuals.

  • Using clear, accessible, and honest language.
  • Avoiding technical terms and professional jargon.
  • Explaining concepts simply.
  • Using examples and real-life cases to illustrate the risks and importance of protecting company systems and data.

Furthermore, to enhance understanding and raise awareness about the importance of Cybersecurity, experts can conduct training courses and workshops targeted at executives. These sessions can provide:

  • Clear and practical risk information.
  • Security recommendations and best practices.
  • Real demonstrations of malicious actions and attacks.
  • Explanation of companies’ measures to protect themselves from cyberattacks.
  • Training on the solutions, tools, and resources available to companies.

Cyber Security experts can also collaborate with companies’ communication departments to create communication materials about cybersecurity that are clear and accessible. These materials can include texts, infographics, explanatory videos, and other resources that help executives and employees better understand risks and security measures.

Beyond language: Cyber Security importance for businesses

Cybersecurity is an essential element for protecting the data, information, processes, and operations of companies against malicious attacks. It also encompasses technological and computer systems, mobile devices, and communication networks.

Although 100% security is never possible, the most effective way to ensure maximum protection is through professional Cyber Security services. It is also critical to apply security best practices and provide training and awareness to employees to ensure the highest level of protection.

Above all, executives must understand the significance of cybersecurity beyond language, as it is relevant to all aspects, including business continuity, of the company. Cybersecurity must be a strategic and corporate priority.

Featured photo: Pressfoto on Freepik.

Cryptography, a tool for protecting data shared on the network 

Carlos Rebato    31 May, 2023

Cyber Security is nowadays an essential element in companies. However, new ways of undermining it are emerging every day. Many have asked themselves: How can companies securely store their data? How can credit card information be protected when making online purchases? The answer to these questions is cryptography.

Cryptography becomes vital, especially when there are so many computer security risks, such as spyware or social engineering.  

That is why the vast majority of websites employ it to ensure the privacy of their users. In fact, according to projections by Grand View Research (2019), from 2019 to 2025, the global encryption software market is expected to grow at an annual rate of 16.8%.  

How can cryptography be defined? 

This term is defined as the art of converting data into a format that cannot be read. In other words, messages containing confidential information are encrypted. This prevents unscrupulous people from accessing data that could be breached. Such encrypted data is designed with a key that enables access (Onwutalobi, 2011). 

To encrypt a message, the sender must manipulate the content using some systematic method, known as an algorithm. The original message, called plaintext, can be encoded by rearranging its letters into an unintelligible order or by replacing each letter with another. The result is known as ciphertext.
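The letter-replacement scheme described above is exactly the classic Caesar cipher. A toy implementation, for illustration only (it is trivially breakable and offers no real security):

```python
def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Replace each letter with the one `shift` positions later
    in the alphabet; non-letters pass through unchanged."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

def caesar_decrypt(ciphertext: str, shift: int) -> str:
    # Decryption is just shifting in the opposite direction.
    return caesar_encrypt(ciphertext, -shift)
```

Here the shift value plays the role of the key: anyone who knows it can recover the plaintext.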

Why should cryptographic systems be used?

1. They can be used in different technological devices 

Both an iPhone and an Android device, for example, can have their own encryption methods depending on the needs of the company.

Likewise, it is possible to encrypt content on an SD card or USB memory stick. There are several possibilities; you just have to find the most appropriate method.

2. Working remotely is more secure 

Organizations today often need to work securely when their employees communicate remotely. A 2018 North American Report published by Shred-It revealed that vulnerability risks are higher when working in this way. Therefore, working collaboratively through data encryption will prevent information from falling into the wrong hands.  

3. Cryptography protects privacy 

It is frightening to think how exposed data can be in companies. According to an article by Hern (2019) published in The Guardian newspaper, in 2018 more than 770 million email addresses and passwords were exposed. All of this resulted from vulnerabilities in accessing individual data. Therefore, data encryption can prevent sensitive details from being unknowingly published on the Internet.

4. Provides a competitive advantage 

Providing companies with cryptographic support for their various sources of information and data will give customers greater peace of mind. Organizations are aware of the essentials of this procedure. A study by the Ponemon Institute (2019) revealed that 45% of businesses have a consistently applied encryption strategy, while a further 42% have encryption strategies limited to certain applications or types of data.

Types of encryption 

The two most popular ways to apply cryptography are: shared secret key encryption and public key encryption, also known as symmetric and asymmetric encryption systems, respectively (Vasquez, 2015). 

Shared-key encryption

Also known as symmetric encryption, it uses the same key for both encryption and decryption. Any user who has this key can decrypt the message.

This method is fast and suitable for large volumes of data. It is often used with cipher block chaining, which combines a key and an initialization vector to turn a series of data blocks into ciphertext.

The user who wishes to decrypt the message must possess both the key and the initialization vector. Because distributing the shared key is itself a vulnerability, symmetric encryption is often combined with public key encryption.
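The chaining idea can be sketched with a toy block cipher, where a simple XOR stands in for a real cipher such as AES. This is a teaching sketch of the CBC structure only, not something to use for actual security:

```python
BLOCK = 8  # toy block size in bytes

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _toy_cipher(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher such as AES. XOR is its own
    # inverse, so the same function also "decrypts" a block.
    return _xor(block, key)

def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    pad = BLOCK - len(plaintext) % BLOCK        # PKCS#7-style padding
    plaintext += bytes([pad]) * pad
    prev, out = iv, []
    for i in range(0, len(plaintext), BLOCK):
        block = _toy_cipher(_xor(plaintext[i:i + BLOCK], prev), key)
        out.append(block)
        prev = block  # each ciphertext block chains into the next one
    return b"".join(out)

def cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    prev, out = iv, []
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        out.append(_xor(_toy_cipher(block, key), prev))
        prev = block
    data = b"".join(out)
    return data[:-data[-1]]                     # strip the padding
```

Note how both functions need the key and the initialization vector, exactly as described above, and how changing the IV changes every ciphertext block.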

Public key encryption 

Also called asymmetric encryption, it uses two different keys that share a mathematical link: one key is used for encryption (the public key) and another for decryption (the private key).

The public key can be shared with anyone, while the private key is kept only by its owner. Its computational cost is high.

Finally, cryptography gives companies the opportunity to strengthen the security of their customers. This will consolidate their reputation and deepen their commitment to innovation around the latest technologies.
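The mathematical link between the two keys can be illustrated with the classic textbook RSA example using small primes. Real deployments use keys of 2048 bits or more together with padding schemes; these numbers are purely didactic:

```python
# Textbook RSA with the small primes from the classic worked example.
p, q = 61, 53
n = p * q            # 3233; part of the public key
phi = (p - 1) * (q - 1)
e = 17               # public exponent; (n, e) is the public key
d = pow(e, -1, phi)  # private exponent: modular inverse of e mod phi

def rsa_encrypt(m: int) -> int:
    # Anyone holding the public key (n, e) can encrypt.
    return pow(m, e, n)

def rsa_decrypt(c: int) -> int:
    # Only the holder of the private exponent d can decrypt.
    return pow(c, d, n)
```

Encrypting the message 65 yields 2790, and only the private exponent recovers 65 from it, which is precisely the asymmetry described above.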

Featured photo: Christian Lendl / Unsplash

ChatGPT and Cloud Computing: A happy marriage 

Roberto García Esteban    30 May, 2023

ChatGPT (you may not know that it stands for Chat Generative Pre-Trained Transformer) has become the talk of the town for its impressive ability to generate text that looks like it was written by a human, using a combination of both Machine Learning and Deep Learning algorithms.  

New use cases for this technology are emerging every day and many businesses are looking to integrate ChatGPT into their normal workflows.  

This is not the only artificial intelligence application available. Google is constantly announcing improvements to its Bard application, already available (for now only in the US), which can generate longer texts than ChatGPT and, in its free version, links to the sources it used to generate its texts. This is useful for visiting the source page to expand on the information, or for including citations in papers.

The relationship between ChatGPT and Cloud 

The relationship between ChatGPT and Cloud Computing is very close. OpenAI, the company that developed the application, uses the Microsoft Azure Cloud for this purpose. ChatGPT is already available within the Azure OpenAI Service, following the partnership established between OpenAI and Microsoft.

ChatGPT can also be used in the Cloud to facilitate numerous tasks and processes. Both ChatGPT and Bard, or any other similar tool, can open up a huge range of possibilities in the development of Cloud services.

Here are some of them: 

Code generation 

ChatGPT has learned from millions of lines of code, which makes it a great tool for developers, although they should always keep in mind that the code it produces may not be 100% correct. However, it is certainly a good starting point. It can also be used to find bugs in code that has already been written, helping developers test and correct their work.

In addition, it can automatically generate notes and comments when developers create or make changes to their code to explain the latest updates. In the same way that real estate companies already use ChatGPT to write descriptions of their properties, the same approach is used by ecommerce websites to automatically create descriptions of the new items they add to their catalogue. 

Human interaction 

AI-based chatbots already exist in many customer services, but usually these chatbots handle pre-set responses that sound impersonal and unhelpful.

However, integrating ChatGPT with a company’s cloud services will facilitate customer and employee interactions with the company, allowing requests such as order cancellations, refunds, complaints, or returns to be handled.

Photo: Emiliano Vittoriosi / Unsplash

And beyond customer service, they can also be very useful in training for employees, for instance. 

Customisation 

The ability to generate content in real-time is valuable for brands that use personalisation as a differentiator in their marketing strategies.

Since ChatGPT stores a user’s previous queries, it can provide highly personalised responses and also allows you to take personalisation a step further by offering a cost-effective method of creating content that is unique to each user. This can generate fully personalised emails or purchase recommendations, for example.

It is essential to integrate ChatGPT with the CRM or ERP, usually Cloud-based, that most companies already use to achieve these improvements. 

Work optimisation 

ChatGPT can analyse large amounts of text and generate summaries, a valuable feature that transforms the use of cloud-based collaboration tools, allowing, for example, email threads to be summarised or meeting minutes to be generated.

It is also possible to summarise complex documents such as contracts, SLAs, or company policies: instead of having to go through all the documents stored in a cloud repository, ChatGPT can produce a summary in a matter of seconds.

Another use case for the integration of ChatGPT with Cloud services is to facilitate the summary of customer service tickets, integrating with the cloud CRM that each company has. 

However… 

It is wise not to overestimate ChatGPT. Its responses are logical and coherent and facilitate some tasks, but it also uses generic content that is not always perfectly customised. There have also been many errors in its responses, in addition to the possible security problems reported.  

It is a tool that needs to be used cautiously, but at the same time it clearly has great advantages. Integrating ChatGPT with the cloud services a company is using will transform virtually every business process.

The number of use cases is very large, so there is no doubt that the combination of Cloud and ChatGPT is leading us to one of the biggest technological revolutions of this century.

Featured image: D Koi / Unsplash.


The importance of access control: is your company protected?

Telefónica Tech    29 May, 2023

By David Prieto and Rodrigo Rojas

In an increasingly digitalized and complex world, information security is critical for businesses. As companies adopt more cloud technologies and services or allow access to their resources through a variety of devices and platforms, identity and access management has become more critical than ever.

How can enterprises ensure information security in this challenging environment?

The first step in addressing this issue is identity and access management within the enterprise. As companies become increasingly digital and complex, this management can no longer be handled manually by IT administrators; it requires advanced technology partners and solutions.

In this article we will highlight some of these solutions, as well as the capabilities Telefónica Tech offers to carry out access management and privileged account management projects.

We will also explain the importance of passwordless authentication, which is becoming increasingly important due to the greater security it offers compared to classic password authentication.

Passwordless, certainly now.

In previous posts we have already talked about the FIDO (Fast Identity Online) standard and its importance in extending passwordless authentication.

Google’s recent announcement of passkey support was a great leap forward in this strategy of eradicating passwords, as it allows password-free login based on the FIDO standard.

Given its importance and impact, we detail below the benefits for both the user and the company itself:

  • Increased security: Authentication data is stored in the security key and encrypted with public key cryptography. This makes it nearly impossible for attackers to steal or tamper with authentication information.
  • Improved user experience: FIDO2 passwordless authentication is easier to use and more convenient than traditional password-based authentication methods. Users only need to tap or insert the security key to authenticate.
  • Reduced fraud: FIDO2 passwordless authentication reduces the possibility that hackers can steal or guess passwords, which reduces the amount of phishing-related fraud.
  • Interoperability: FIDO2 is an open specification that is compatible with a wide variety of platforms and devices, enabling greater interoperability between different systems and service providers.

All in all, FIDO2 passwordless authentication provides a more secure, easy and convenient way to authenticate users online without the need for traditional passwords.
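The core idea behind FIDO2 is public-key challenge-response: the server sends a random challenge, the authenticator signs it with a private key that never leaves the device, and the server verifies the signature with the registered public key. A toy sketch of that flow, reusing textbook RSA numbers (a real authenticator uses strong elliptic-curve or RSA keys generated inside secure hardware, and the real protocol carries more data than a bare signature):

```python
import hashlib

# Toy key pair: (n, e) is the registered public key, d stays on-device.
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

def _digest(challenge: bytes) -> int:
    # Hash the challenge, reduced mod n so the toy numbers can handle it.
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n

def sign(challenge: bytes) -> int:
    """Authenticator side: prove possession of the private key d."""
    return pow(_digest(challenge), d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Server side: check the signature using only the public key (n, e)."""
    return pow(signature, e, n) == _digest(challenge)
```

Because the server only ever stores the public key, a database breach leaks nothing an attacker can log in with, which is the security gain the list above describes.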

How do we ensure information security?

The answer to the question we asked in the introduction is access management, a process that allows companies to control who has access to which resources and when. This management must include functionalities that enable complete and effective control of access to corporate resources and ensure the security of information systems, avoiding the risk of intrusions and unauthorized access.

Below, we provide an explanation of some of the functionalities that are available in the service that we offer from Telefónica Tech.

One of the most important is multi-factor authentication (MFA), which helps ensure data security by adding a second authentication step: users must provide additional information to verify their identity, which significantly reduces the risk of unauthorized access to information.
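A common way to implement that second factor is a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. A minimal sketch using only the standard library:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, digits=6, step=30):
    """RFC 6238 time-based one-time password using HMAC-SHA1."""
    if timestamp is None:
        timestamp = int(time.time())
    # The moving factor is the number of `step`-second intervals elapsed.
    counter = struct.pack(">Q", timestamp // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Server and device share the secret once at enrollment; afterwards each side derives the same short-lived code independently, so the code sent by the user proves possession of the second factor.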

Photo: Pixabay

Another key functionality is single sign-on (SSO), which allows users to access multiple resources with a single login. It not only saves time, but also reduces the need to remember multiple passwords, which can improve overall security.

In addition, Telefónica Tech’s solution features:

  • Passwordless authentication, which as we saw earlier allows users to access enterprise resources without having to type in a password, improving both security and convenience.
  • Role-based access control (RBAC) is another important functionality. It allows specific roles to be assigned to users based on their responsibilities or authorizations, and controls their access accordingly.
  • Auditing and reporting functionality is a critical element of compliance and monitoring capabilities. This functionality allows companies to track changes in access permissions and generate reports on user usage and activity.
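The RBAC idea above can be sketched in a few lines. The role and permission names here are invented for the example and are not part of any real product:

```python
# Minimal RBAC sketch: roles bundle permissions; users are assigned roles,
# never permissions directly.
ROLE_PERMISSIONS = {
    "finance": {"invoices:read", "invoices:write"},
    "hr": {"payroll:read", "payroll:write"},
    "auditor": {"invoices:read", "payroll:read"},
}

USER_ROLES = {
    "alice": {"finance"},
    "bob": {"auditor"},
}

def can_access(user: str, permission: str) -> bool:
    """True if any role assigned to the user grants the permission."""
    return any(
        permission in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

print(can_access("alice", "invoices:write"))  # True: granted via "finance"
print(can_access("bob", "invoices:write"))    # False: "auditor" is read-only
```

Keeping permissions attached to roles rather than to individual users is what makes audits and de-provisioning tractable: revoking a role revokes everything it granted.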

Not only does Telefónica Tech offer a complete and efficient access management solution, but it also has a team of experts in implementation, support and administration of the solution.

What if the accesses are to critical systems?

In this case, the answer is privileged access management, which refers to the management and control of access for users with elevated or privileged permissions.

This type of solution allows secure management of credentials and privileged access, as well as control and supervision of the actions performed by users with such access.

Among the functionalities included are the following:

  • Secure credential management: Enables secure and centralized management of the credentials required to access an organization’s critical systems and applications.
  • Privileged access control: Enables the control and supervision of privileged access.
  • Monitoring of actions performed: Records all actions performed by users with privileged access, allowing detection of possible malicious or unusual activity.

Telefónica Tech helps customers through a team of experts in privileged access management, with extensive experience in projects for the implementation of this type of solutions based on all types of technologies.

Conclusion

To sum up, access management is a critical process in the protection of confidential information and systems security in enterprise environments. Implementing modern solutions can help enterprises address the challenges associated with access management and minimize security risks.

Yet how can companies prepare for future challenges and stay protected? Telefónica Tech’s access and privileged access management services are end-to-end services that provide all the functionality needed to ensure security, and they are future-proofed by including the latest features, such as passwordless authentication, in their solutions.

Featured photo: iMattSmart / Unsplash.

Cyber Security Weekly Briefing, 22 – 26 May

Telefónica Tech    26 May, 2023

GitLab patches a critical vulnerability

GitLab has addressed a critical vulnerability affecting GitLab Community Edition (CE) and Enterprise Edition (EE) version 16.0.0. The flaw is tracked as CVE-2023-2825, with a CVSSv3 score of 10, and was discovered by a security researcher known as pwnie.

The flaw arises from a path traversal issue that could allow an unauthenticated attacker to read arbitrary files on the server when there is an attachment in a public project nested within at least five groups.

Therefore, exploitation of this vulnerability could trigger the exposure of sensitive data such as proprietary software code, user credentials, tokens, files and other private information. GitLab recommends that users update to the latest version, 16.0.1, to fix this security issue.

More info

Zyxel patches two critical vulnerabilities in its firewalls

Zyxel has issued a security advisory reporting two critical vulnerabilities affecting several of its firewall models. The first, registered as CVE-2023-33009 with a CVSSv3 score of 9.8, is a buffer overflow vulnerability in the notification function that could allow an unauthenticated malicious actor to perform remote code execution or launch a denial-of-service (DoS) attack.

Likewise, the bug assigned CVE-2023-33010, also with a CVSSv3 score of 9.8, is a buffer overflow vulnerability in the ID processing function, and its exploitation could lead to the same types of attack as the previous one.

Zyxel recommends its users to apply the corresponding security updates to reduce the risk of exploitation of these two vulnerabilities.

More info

BEC attacks spike in volume and complexity

In a recent report from Microsoft Cyber Signals, Microsoft’s CTI teams warn of a significant spike in BEC (Business Email Compromise) attacks between April 2022 and April 2023 that have resulted in $2.3 billion in losses according to FBI estimates. Among the most observed trends, two stand out: the use of BulletProftLink (a cybercriminal marketplace that provides all kinds of utilities to carry out phishing and spam campaigns) and the purchase of compromised residential IP addresses that are used as proxies to mask their social engineering attacks.

Among the most frequent targets are executives, managers and team leaders in finance and human resources departments with access to employees’ personal information.

Microsoft recommends mitigating the impact of these campaigns by maximizing mailbox security options, enabling multi-factor authentication and keeping staff informed and trained about these types of attacks.

More info

Volt Typhoon: Chinese APT targeting U.S. critical infrastructure

Both Microsoft Threat Intelligence and CISA have published a report on an APT allegedly backed by the Chinese government, which they have named Volt Typhoon and which they accuse of being behind a campaign of attacks against critical U.S. infrastructure such as government institutions, the military, telecommunications companies and shipping, among others.

Microsoft specifically claims that Volt Typhoon has tried to access U.S. military assets located on the island of Guam, a key territory in the event of conflict over Taiwan or in the Pacific. The entry vector was Internet-exposed FortiGuard devices, where the group exploits 0-day vulnerabilities to extract credentials that allow it to move laterally.

Microsoft points out that Volt Typhoon abuses the legitimate tools present in the attacked systems by camouflaging its activity as routine processes to try to go unnoticed, a technique known as Living Off The Land (LOTL).

More info

Vulnerability in KeePass allows master passwords to be recovered

Security researchers have published an article about a new vulnerability that allows master passwords to be recovered in the KeePass password manager.

The vulnerability has been classified as CVE-2023-32784 and affects KeePass versions 2.x for Windows, Linux and macOS. It is expected to be patched in version 2.54, and a PoC for this security flaw is already available. For exploitation, it does not matter where the memory comes from or whether the workspace is locked.

In addition, it is also possible to dump the password from RAM when KeePass is no longer running. It should be noted that successful exploitation of the flaw relies on the condition that an attacker has already breached the computer of a potential target and that the password is required to be typed on a keyboard and not copied from the device’s clipboard.

More info

Featured photo: Pankaj Patel / Unsplash

Will Rust save the world? (II)

David García    24 May, 2023

We saw in the previous article the problems of manual memory management, but also the pitfalls of automatic memory management in languages like Java.

But what if there were a middle ground? What if we could get rid of garbage collector pauses without having to manage memory manually? How do we do it?

Automatic management relies on the garbage collector, which runs at runtime, along with the program. Manual management falls to the programmer, who must do it at development time. If we discard the runtime collector and take away the programmer’s task during development, what are we left with? Easy: the compiler.

The third way: the compiler

The third way is to make the compiler, another of the elements in play, responsible for memory management. That is to say, to make it responsible for identifying who requests memory, how it is used and when it stops being used, in order to reclaim it.

The compiler, as an element of the development chain, has a broad view of all the elements that make up a program. It knows when memory is being requested and keeps track of the life of an object because it knows which symbol is being referenced, how and where. And of course, most importantly, when it stops being referenced.

This is what programming languages like Rust do, whose model is based on ownership with the following rules:

  • Each value in Rust must have an “owner”.
  • There can be only one “owner” at a time.
  • When the “owner” goes out of scope or visibility, the object is discarded and any memory it may contain is properly freed.

The rules are simple, but in practice they take some getting used to and a high tolerance for frustration, as the compiler will step in at any slip-up that inadvertently violates them.

The system used by the compiler is called the borrow checker. It is basically the part of compilation dedicated to checking that ownership of an object is respected: if an owner “lends” the object, the loan must be resolved when the scope changes, so that the single-owner rule still holds. Either the recipient of the object takes responsibility for it, or it returns ownership of the object.

An example:


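A minimal reconstruction of the snippet discussed below; the numbered comments match the line references in the text:

```rust
fn main() {                            // line 1
    let mut s = String::from("hello"); // line 2: "s" owns the string
    let t = s;                         // line 3: ownership moves to "t"
    s.push_str(" world");              // line 4: error[E0382]: borrow of moved value: `s`
}
```

This program does not compile: the compiler rejects line 4.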
If we look at the compiler’s complaints and the code, we see that the variable “s” owns the string “hello”. On line 3, we declare a variable called “t” that takes (and does not give back) ownership of the string “hello”.

Then, in line 4, we make “s” add a new string and complete the classic “hello world” sentence, but the compiler won’t let it: it’s an error and lets us know.

What has happened here?

The one-owner rule comes into play. The compiler has detected that “s” no longer owns the string, which now belongs to “t”, so it is illegal for “s” to use or attempt to modify the object it once owned, since it no longer belongs to it.

This is just the basics, but it is intended to give us an idea of how this “policing” of the rules by the compiler works.
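For contrast, here is a sketch of a version the borrow checker accepts: “t” only borrows the string through a reference, so ownership stays with “s”, which may be mutated again once the borrow ends.

```rust
fn main() {
    let mut s = String::from("hello"); // "s" owns the string
    let t = &s;                        // "t" borrows it; ownership stays with "s"
    println!("{}", t);                 // last use of the borrow
    s.push_str(" world");              // fine: the borrow has ended, "s" is sole owner
    println!("{}", s);
}
```

Because the loan is returned (the reference is no longer used) before “s” is modified, the single-owner rule is never violated and the program compiles.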

Image by Freepik.

By the way, where is the toll here? Of course, in very long compilation times compared with languages such as C or even Java.

Conclusions

Rust takes a third way in memory management and sacrifices only compile time, some programmer frustration while developing, and a fairly steep curve in getting used to the memory management rules (we have illustrated the basics, but Rust has other notions that complement borrowing, such as lifetimes).

Despite the price paid (and the compiler improves its timings with each new version), we can be almost certain that our program is free of the memory errors (and most of the race conditions) that could cause vulnerabilities.

Will Rust save the world? Time will tell, but the reception and adoption of the language is remarkable and will improve the outlook for memory error-based vulnerabilities, which is no small thing.

Featured image: Creativeart on Freepik.

Can Artificial Intelligence understand emotions?

Olivia Brookhouse    23 May, 2023

When John McCarthy and Marvin Minsky founded the field of Artificial Intelligence in 1956, they were amazed at how a machine could solve incredibly difficult puzzles quicker than humans.

However, it turns out that teaching Artificial Intelligence to win a chess match is actually quite easy. What truly presents a challenge is teaching a machine what emotions are and how to replicate them.

“We have now accepted after 60 years of AI that the things we originally thought were easy, are actually very hard and what we thought was hard, like playing chess, is very easy”

Alan Winfield, Professor of robotics at UWE, Bristol,

Social and emotional intelligence come almost automatically to humans; we react on instinct. Whilst some of us are more perceptive than others, we can easily interpret the emotions and feelings of those around us.

This base level intelligence, which we were partly born with and partly have learnt, tells us how to behave in certain scenarios. So, can this automatic understanding be taught to a machine?

Emotion Artificial Intelligence (Emotion AI)

Although the name may throw you off, Emotion AI does not refer to a weeping computer who has had a bad week. Emotion AI, also known as Affective Computing, dates back to 1995 and refers to the branch of Artificial Intelligence that aims to process, understand, and even replicate human emotions.

Photo: Lidya Nada / Unsplash

The technology aims to improve natural communication between man and machine to create an AI that communicates in a more authentic way. If AI can gain emotional intelligence maybe it can also replicate those emotions.

“How can [a machine] effectively communicate information if it doesn’t know your emotional state, if it doesn’t know how you’re feeling, it doesn’t know how you’re going to respond to specific content?”

Javier Hernández, research scientist with the Affective Computing Group at the MIT Media Lab,

In 2009, Rana el Kaliouby and Rosalind Picard founded Affectiva, an emotion AI company based in Boston which specializes in automotive AI and advertising research. With the customer’s consent, the user’s camera captures their reactions while watching an advertisement. Using “multimodal emotion AI”, which analyses facial expression, speech, and body language, they can gain a complete insight into the individual’s mood.

Their 90% accuracy levels are thanks to a diverse test set of 6 million faces from 87 different countries used to train deep learning algorithms. From such a diverse data set, the AI learns which metrics of body language and speech patterns coincide with different emotions and thoughts.

As with humans, machines can produce more accurate insights into our emotions from video and speech than just text.

Sentiment analysis or opinion mining

Sentiment analysis, or opinion mining, a subfield of Natural Language Processing, is the process of algorithmically identifying and categorizing opinions expressed in text to determine the writer’s attitude toward the subject.
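A toy illustration of the idea: real systems use trained models over large corpora, but even a hand-made lexicon (the word lists here are invented for the example) shows the positive/negative categorization at work.

```python
# Toy lexicon-based sentiment classifier: score = positive hits - negative hits.
POSITIVE = {"good", "great", "love", "excellent", "happy", "helpful"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad", "useless"}

def sentiment(text: str) -> str:
    """Classify text as positive, negative or neutral by word counting."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great product"))   # positive
print(sentiment("what a terrible experience"))  # negative
```

The limits of this approach are exactly the ones discussed next: word counting has no grasp of sarcasm, cultural reference or what is left unsaid.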

This use case can be applied in many sectors, such as think tanks, call centres, telemedicine, sales, and advertising, to take communication to the next level.

Whilst AI might be able to categorize what we say into positive or negative boxes, does it truly understand how we feel, or the subtext beneath? Even as humans we miss cultural references, sarcasm and nuance in language, which can completely alter the meaning and therefore the emotions displayed.

Sometimes it is the things we leave out and don’t say which can also imply how we are feeling. AI is not sophisticated enough to understand this subtext and many doubt if it ever will.

Can AI show emotion?

In many of these use cases, such as telemedicine chatbots and call centre virtual assistants, companies are investigating the development of Emotion AI not only to understand customers’ emotions but also to improve how these platforms respond to each individual.

Photo: Domingo Alvarez / Unsplash

Being able to simulate human like emotions gives these platforms and services more authenticity. But is this a true display of emotion?

AI and neuroscience researchers agree that current forms of AI cannot have emotions of their own, but they can mimic emotion, such as empathy. Synthetic speech also helps reduce the robotic tone many of these services operate with and conveys more realistic emotion. Google’s Tacotron 2 is transforming the field by simulating humanlike artificial voices.

So, if machines can, in many cases, understand how we feel and produce a helpful, even ‘caring’ response, are they emotionally intelligent? There is much debate within this field about whether a simulation of emotion demonstrates true understanding or remains artificial.

Functionalism argues that if we simulate emotional intelligence then, by definition, AI is emotionally intelligent. But experts question whether the machine truly “understands” the message it is delivering, in which case a simulation would not show that the machine is actually emotionally intelligent.

Artificial General Intelligence

Developing an Artificial General Intelligence, which possesses a deeper level of understanding is how many experts believe machines can one day experience emotions as we do.

Artificial General Intelligence (AGI), as opposed to narrow intelligence, refers to the ability of computers to carry out many different activities, as humans do. Artificial Narrow Intelligence, as the name suggests, aims to complete individual tasks, but with a high degree of efficiency and accuracy.

Photo: TengyArt / Unsplash

When we talk about emotional and social intelligence, forms of intelligence which are not necessarily related to a set task or goal, these fall under Artificial General Intelligence. AGI aims to replicate those qualities of ours which, to us, seem automatic. They are not tied to an end goal; we do them just because we do.

Conclusions

We are still many years away from an Artificial General Intelligence capable of replicating every action we can perform, especially those qualities we consider most human, such as emotions.

Emotions are inherently difficult to read, and there is often a disconnect between what people say they feel and what they actually feel. A machine may never reach this level of understanding, but who is to say that the way we process emotions is the only way? How we interpret each other’s emotions is full of bias and opinion, so maybe AI can help us get straight to the point when it comes to our emotions.

Featured photo: rawpixel.com on Freepik