Artificial Intelligence (AI) has become a frequent topic on this blog. Almost all predictions of technological trends for the coming years include it as one of the key advances.
In our previous article, we addressed the role these technologies can play in the creation and dissemination of disinformation and fake news. On that occasion the protagonists were image-generation tools such as DALL-E and GauGAN2, and although we already mentioned some text tools, at the end of the year a new one appeared on the scene that has been making headlines ever since: ChatGPT.
A few weeks ago, our colleague Mercedes Blanco introduced us to how ChatGPT works and some of its applications in the business world. This time, however, we will focus on what this tool, and others like it, can mean for cyber security.
As with any technological advance, it can serve security teams and, equally, those who exploit it for more questionable ends.
ChatGPT in security research
The tool itself, when asked, lists the many ways in which it can help threat intelligence services, which can be summarised as follows:
- Provide information and act as an advanced search tool.
- Support task automation, reducing the time spent on work that is more mechanical and requires less detailed analysis (see the sketch after this list).
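As an illustration of that second point, here is a minimal sketch of such automation, assuming the OpenAI Python client (openai >= 1.0) and an API key in the environment; the model name, prompt and report text are our own illustrative choices, not a prescribed workflow:

```python
# Minimal sketch: delegating a mechanical threat-intel task -- extracting
# indicators of compromise (IOCs) from a raw report -- to a chat model.
# Model name, prompt and report text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_report = """
Observed callbacks to 203.0.113.45 over TCP/443. The dropped file
invoice_2023.xls spawned powershell.exe with an encoded command.
Sender domain: secure-invoices[.]com.
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You are a threat-intelligence assistant. Extract the "
                    "indicators of compromise (IPs, domains, filenames, "
                    "process names) as a JSON list of {type, value} items."},
        {"role": "user", "content": raw_report},
    ],
)

print(response.choices[0].message.content)  # candidate IOCs for triage
```

An analyst would still need to validate what comes back; the point is only that the mechanical first pass can be delegated.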

Artificial intelligence has been making its way into cyber security tools for some time now. Examples could be seen at our Trending Techies event last November, such as the project presented by Álvaro García-Recuero for classifying sensitive content on the internet.
In the case of ChatGPT, Microsoft appears to be leading the integration effort across its services, from the Bing search engine and Azure OpenAI Service to, more squarely in cyber security territory, Microsoft Sentinel, where it could help streamline and simplify incident management.
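As a rough idea of what such an integration could look like, the sketch below asks an Azure OpenAI deployment to summarise a simplified, Sentinel-style incident for an analyst. The endpoint, key, deployment name and incident payload are placeholders of our own invention, not Microsoft's actual integration:

```python
# Sketch: summarising a Sentinel-style incident with Azure OpenAI Service.
# Endpoint, key, API version, deployment name and payload are placeholders.
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",
    api_key="<your-key-here>",
    api_version="2023-05-15",
)

incident = {  # a simplified stand-in for a real Sentinel incident object
    "title": "Suspicious sign-in followed by inbox rule creation",
    "severity": "High",
    "entities": ["user: j.doe", "ip: 203.0.113.7", "rule: 'RSS Feeds'"],
}

completion = client.chat.completions.create(
    model="example-gpt-deployment",  # the name of your Azure deployment
    messages=[
        {"role": "system",
         "content": "Summarise this security incident in three sentences "
                    "and suggest a sensible first response step."},
        {"role": "user", "content": json.dumps(incident)},
    ],
)

print(completion.choices[0].message.content)
```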
Other researchers are betting on its use to generate rules that detect suspicious behaviour, such as YARA rules.
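By way of example, the following sketch takes a deliberately simple, hypothetical model-drafted YARA rule and checks that it compiles and matches, using the yara-python package; a production rule would of course need to be far more discriminating:

```python
# Sketch: validating and testing a hypothetical model-drafted YARA rule
# with the yara-python package before it goes anywhere near production.
import yara

# A deliberately simple rule of the kind a model might draft on request.
draft_rule = r"""
rule Suspicious_PowerShell_Download
{
    strings:
        $a = "powershell" nocase
        $b = "DownloadString" nocase
    condition:
        $a and $b
}
"""

rules = yara.compile(source=draft_rule)  # raises yara.SyntaxError if invalid

sample = b"powershell -enc (New-Object Net.WebClient).DownloadString('...')"
matches = rules.match(data=sample)       # scan an in-memory buffer

for m in matches:
    print(m.rule)  # -> Suspicious_PowerShell_Download
```

Compiling the draft catches syntax errors early; whether the rule is actually useful, with meaningful coverage and a tolerable false-positive rate, still takes an analyst's judgement.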
Google, for its part, has opted to launch its own tool, Bard, which it plans to build into its search engine in the not-too-distant future.
ChatGPT in cybercrime
On the other side of cyber security we can also find multiple applications for tools such as ChatGPT, even though they are designed from the outset to prevent their use for illicit purposes.
In early January 2023, Check Point researchers reported the emergence of underground forum posts discussing methods of bypassing ChatGPT's restrictions to create malware, encryption tools or trading platforms on the deep web.
In terms of malware creation, researchers who have attempted proofs of concept have reached the same conclusion: ChatGPT can detect when a request asks directly for malicious code, but rephrasing the request more creatively evades these defences, producing polymorphic malware or, with some nuances, keyloggers. The generated code is neither perfect nor complete, and it will always be bounded by the material the artificial intelligence has been trained on, but it opens the door to models capable of developing this type of malware.

Another possible illicit use raised around ChatGPT is fraud and social engineering. Among the content these tools can generate are phishing emails designed to trick victims into downloading infected files or following links where their personal data, banking information and so on can be compromised. The author of a campaign no longer needs to master the languages it uses, or to write any of the messages by hand, and can automatically generate new themes on which to base the fraud.
Overall, whether or not the tool can deliver complete, ready-to-use code or content, what is certain is that the accessibility of programmes such as ChatGPT lowers the sophistication needed to carry out attacks that, until now, required more extensive technical knowledge or more developed skills. Threat actors previously limited to launching denial-of-service attacks could thus move on to developing their own malware and distributing it in phishing email campaigns.
Conclusions
New AI models such as ChatGPT, like any other technological advance, can be applied both to support progress and improve security, and to attack it.
Actual cases of such tools being used to commit crimes in cyberspace are anecdotal for now, but they give us a glimpse of the cyber security landscape that may take shape in the coming years. Once again, constantly updating one's knowledge becomes essential for researchers and professionals in the technology field.
“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
Eliezer Yudkowsky
Featured photo: Jonathan Kemper / Unsplash