Ghosts in the machine: does Artificial Intelligence suffer from hallucinations?

Javier Coronado Blazquez    20 February, 2023

Artificial Intelligence (AI) content generation tools such as ChatGPT or Midjourney have recently been making a lot of headlines. At first glance, it might even appear that machines “think” like humans when it comes to understanding the instructions given to them.

However, details that are elementary for a human being turn out to be completely wrong in these tools. Is it possible that the algorithms are suffering from hallucinations?

Science and (sometimes) fiction

2022 was the year of Artificial Intelligence: we saw, among other things, the democratisation of image generation from text and a Princess of Asturias Award, and the world went crazy talking to a machine: OpenAI’s ChatGPT.

Although it is not the aim of this article to explain how this tool works (that is covered in Artificial Intelligence in Fiction: The Bestiary Chronicles, by Steve Coulson; spoiler: written by the AI itself), we can say that, in short, it tries to imitate a person in any conversation. With the added bonus that it can answer almost any question we ask it, from what the weather is like in California in October to defending or criticising dialectical materialism in an essay (and it would tackle both positions with equal confidence).

Why browse through a few pages looking for specific information when we can simply ask questions in a natural way?

The same applies to AI image-generation algorithms such as Midjourney, Dall-e, Stable Diffusion or BlueWillow. These tools are similar to ChatGPT in that they take text as input, but from it they create high-quality images.

Examples of the consequences of mind-blowing Artificial Intelligence

Leaving aside the crucial ethical dimension of these algorithms (some of which have already been sued for using paid content without permission to train them), the content they generate may sometimes seem real, but only in appearance.

For instance, the “photos” of people that these tools generate can look convincingly real at first glance.

However, as the headline suggests, as soon as we start to look at them more closely we see details that don’t quite add up: mouths with more teeth than usual, hands with 8 fingers, limbs sticking out of unexpected places… none of these fake photos pass a close visual examination.

Artificial intelligence learns patterns and can reproduce them, but without understanding what it is doing.

This is because, at bottom, all the AI does is learn patterns; it doesn’t really understand what it is seeing. If we train it on 10 million images of people at parties, it will pick up many patterns: people are often talking, in various postures, holding glasses, posing with other people… but it never grasps that a human hand has five fingers, so when it has to create an image of someone holding a glass or a camera, it simply “messes up”.
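This “patterns without understanding” idea can be illustrated with the simplest possible language model, a bigram Markov chain. It is a toy sketch, vastly simpler than the models discussed here, but it shows the same failure mode: the output faithfully reproduces local patterns from the training text while the model has no notion of what any sentence means.

```python
import random

def train_bigrams(text):
    """Count, for each word, which words followed it in the training text."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """Walk the chain: each next word is drawn only from words that
    followed the current one in training -- pattern, not meaning."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat saw the dog on the mat")
model = train_bigrams(corpus)
print(generate(model, "the", 8))
```

Every two-word sequence in the output occurred somewhere in the training text, so locally it always “looks right”; globally it can still be nonsense, which is exactly the eight-fingered-hand problem in miniature.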

But perhaps we are asking too much of the AI with images. If you’re a drawing hobbyist, you’ll know how difficult it is to draw realistic hands holding objects.

Photo: Ian Dooley / Unsplash

What about ChatGPT? If it can write an article for this blog, surely it doesn’t make mistakes like that. And yet ChatGPT is tremendously easy to fool, which in itself is not particularly worrying. What matters is that it is also very good at fooling us without our realising it, and if the results of a web search are going to depend on it, that is far more concerning.

In fact, ChatGPT has been put through exams by hundreds of people all over the world, from early childhood education tests to university and professional entrance exams.

In Spain, it sat the History section of the EVAU (the Spanish university entrance exam), in which it got a pass mark. “Ambiguous answers”, “digressions into unrelated subjects”, “circular repetitions”, “incomplete”… were some of the comments professional examiners made about its answers.

A few examples:

  • If we ask what is the largest country in Central America, it might credibly tell us that it is Guatemala, when in fact it is Nicaragua.
  • It may also mix up two opposing concepts, so that if we wanted to understand the differences between them, it would mislead us. If, for example, we used this tool to find out whether a certain family of foods is safe to eat with diabetes and it gave us the wrong answer, we would have a very serious problem.
  • If we ask it to generate an essay and cite papers on the subject, it is very likely to mix real articles with invented ones, with no trivial way of telling them apart.
  • Or, if we ask about a scientific phenomenon that does not exist, such as the “inverted cycloidal electromagnon”, it will invent a convoluted explanation backed by completely non-existent articles, which may even make us doubt whether the concept actually exists. A quick Google search, however, would quickly reveal that the name is an invention.

In other words, ChatGPT suffers from what is called “AI hallucination”: a phenomenon analogous to hallucinations in humans, in which the model behaves erratically and asserts as valid statements that are completely false or irrational.

Do androids hallucinate electric sheep?

So, what is going on?

As we said before, the problem is that the AI is tremendously clever at some things but terribly stupid at others. ChatGPT copes very badly with lies, irony and other twists of language.

Ask it how the dinosaurs came to build their advanced civilisation in the Cretaceous and what evidence of it we have today, and it won’t question the validity of the premise; it will simply start expounding.

The problem, then, lies in keeping a critical eye and distinguishing what is real from what is not (much as with fake news today).

In short, the AI will never hold back: if the question we ask is direct, precise and about something real, it will give us a very good answer. If not, it will make one up with exactly the same confidence.

When asked for the lyrics to Bob Dylan’s “Like a Rolling Stone”, it will give us the full lyrics without any problem. But if we get the wrong Bob and claim the song is by Bob Marley, it will pull a brand-new song out of thin air.

A sane human being would reply “I don’t know that song”, or “isn’t that Dylan’s?”, or something similar. But the AI lacks that basic understanding of the question.

As language and AI expert Gary Marcus points out, “current systems suffer from compositionality problems, they are incapable of understanding a whole in terms of its parts”.

Platforms such as Stack Overflow, a question-and-answer forum on programming and technology, have already banned answers generated with this tool, as its solutions are in many cases incomplete, erroneous or irrelevant. Meanwhile, OpenAI has hundreds of programmers writing step-by-step solutions to build a training set for the tool.

The phenomenon of hallucination in Artificial Intelligence is not fully understood

Hallucination in Artificial Intelligence is not understood at a fundamental level. This is partly because the algorithms behind these tools are sophisticated deep learning neural networks.

Although extremely complex, at its core such a network is nothing more than billions of individual “neurons” that activate or not depending on their inputs, loosely mimicking the workings of the human brain. In other words: linear algebra, but on a grand scale.

The idea is to break down a very complicated problem into billions of trivial ones. The big advantage is that, once trained, the network gives us incredible answers; the cost is that we have no idea what is going on internally.
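The individual “neuron” described above really is trivial. A deliberately minimal sketch in Python (an illustration only, nothing like how production networks are implemented): each unit computes a weighted sum of its inputs and passes it through a nonlinear activation, and a layer is just many such units side by side.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs followed by a
    nonlinear activation (here, the logistic sigmoid)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # squashes z into (0, 1)

def layer(inputs, weight_matrix, biases):
    """A layer applies many neurons to the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# A toy 2-input, 3-neuron hidden layer feeding one output neuron.
# Weights here are arbitrary; in a real network they are learned.
hidden = layer([0.5, -1.2],
               [[0.1, 0.4], [-0.3, 0.8], [0.7, -0.2]],
               [0.0, 0.1, -0.1])
output = neuron(hidden, [0.5, -0.5, 0.25], 0.0)
```

Scale this up to billions of learned weights and the answers become remarkable, but no individual weight tells you *why* the network answered as it did, which is precisely the opacity the article describes.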

A Nature study, for example, showed that a neural network could tell whether an eye image belonged to a man or a woman, even though no anatomical differences between the two are known.

Or, a potentially very dangerous example: a model that classified people as heterosexual or homosexual from a single facial photo.

Who watches the watchmen?

So, if we cannot understand what is going on behind the scenes, how can we diagnose hallucination, and how can we prevent it?

The short answer is that we can’t right now.

And that’s a problem, as AI is increasingly present in our everyday lives. Getting a job, being granted credit by a bank, verifying our identity online, or being considered a threat by the government are all increasingly automated tasks.

If our lives are going to be so intimately bound up with AI, we’d better make sure it knows what it’s doing. Other text-generation and image-classification algorithms have had to be deactivated after turning out neo-Nazi, racist, sexist or homophobic… and they learned all of that from human biases.

In a scenario worthy of an Asimov story, imagine that, in an attempt to make politics “objective”, we let an AI make government decisions. We can imagine what would happen next.

Although some people point to a problem of lack of training data as the cause of hallucinations, this does not seem to be the case in many situations.

In fact, we are reaching a point where exhausting the datasphere (the total volume of relevant data available) is on the horizon. In other words, there will soon be little to gain simply by enlarging the training set.

The solution may then have to wait for the next revolution in algorithms, a new approach to the problem that is currently unimaginable. This revolution may come in the form of quantum computing.

Perhaps in the near future a machine will be able to really understand any question. Maybe not. It is very difficult and daring to make long-term technological predictions.

After all, the New York Times wrote in 1936 that it would be impossible to leave the earth’s atmosphere, and 33 years later, Neil Armstrong was walking on the moon. Who knows, maybe in a few decades it will be AI that diagnoses why humans “hallucinate”…


Featured photo: Pier Monzon / Unsplash
