Fake news: the Collins Dictionary word of the year for 2017, repeated endlessly both in the media and on social networks. We have even dedicated numerous posts to it on this blog, pointing out the possible risks derived from its use, as well as the role of technology in its detection. This time, the intention is to look at the problem from the other side: how technological development, including that of the very systems used to identify fake news, is actually making it more convincing every day.
The aim is to show, through examples, the process of creating a completely fake news story from scratch with as little human effort as possible, letting technology do the rest, without going into the specific technical workings of these algorithms.
Creating our main character
Every news item needs a protagonist, and every protagonist needs a context. Thanks to platforms of the “this X does not exist” family, in a couple of clicks we can have her face, her pet or her CV. None of the generated images existed until we clicked, and they cease to exist when we refresh the page.
If our imagination fails us when adding details such as name, nationality or residence, we can also turn to other free resources such as fakepersongenerator.com or fauxid.com. Yes, for the cat too.
The limitation of this approach is that we cannot build a complete identity from a single photograph, and given that Cassidy does not really exist, we cannot ask her to pose for more. To overcome this drawback, morphing techniques come into play, allowing us to obtain different angles from the same photograph, change its expression, increase or decrease the apparent age, etc.
These technologies are similar to those used by applications such as FaceApp, which a few years ago had thousands of users on social networks showing “what they would look like at 80”. They are also the cause of many headaches for border agents around the world, since the generated images are close enough to the original for the human eye to identify them as the same person, yet can evade biometric systems.
Now that we have enough photos of our protagonist, we can also add a background, a context. If we don’t want to worry about someone recognising the original image used in our montage, we can describe the landscape to DALL-E (a mini version is available on its website) or, if we prefer to bring out our artistic side, draw it in Nvidia’s GauGAN2.
A special mention goes to video game engines such as Unreal Engine 5 which, although they allow the creation of scenarios and environments capable of fooling anyone, demand much more effort from the creator than the examples presented in this post. A recent example is the recreation of the train station in Toyama, Japan, by the artist Lorenzo Drago.
Developing and sharing the news
Now that we have given Cassidy a face, it is time for her to fulfil her role as creator, disseminator or even protagonist of false content. If we’re not in the literary mood to write it ourselves, there are algorithms for that too.
Platforms such as Smodin.io can generate articles or essays of considerable length and quality from nothing more than a title. I may or may not have asked for its help in writing this post.
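Under the hood, these services rely on large language models trained on huge text corpora. As a toy illustration of the underlying idea of statistical text continuation (this is not how Smodin.io actually works; it is a deliberately simplistic sketch), a Markov-chain generator that predicts each next word from the two words before it can be written in a few lines:

```python
import random

def build_model(text, order=2):
    """Map each n-gram of words to the words observed right after it."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def generate(model, seed_words, length=30, rng=None):
    """Continue the seed by repeatedly sampling an observed next word."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = list(seed_words)
    for _ in range(length):
        key = tuple(out[-2:])
        choices = model.get(key)
        if not choices:  # dead end: this n-gram never continues in the corpus
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A tiny toy corpus; a real system trains on billions of words.
corpus = ("fake news spreads fast because fake news is easy to generate "
          "and fake news is easy to share on social networks")
model = build_model(corpus)
print(generate(model, ["fake", "news"]))
```

Given a seed such as “fake news”, the generator stitches together plausible-looking continuations it has seen in the corpus. Modern language models do something conceptually similar, but over learned representations rather than literal word counts, which is why their output reads so much more fluently.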
If our disinformation strategy were to focus on impersonating someone rather than creating a persona out of thin air, there are also systems trained to mimic writing styles. In 2017, a Harry Potter chapter generated by Botnik Studios in imitation of the original author’s style went viral.
If, instead of a full article, we want to run a disinformation campaign on social media, we can create short text snippets with the Inferkit.com demo, perfect for a tweet or a Facebook comment. What if Cassidy were to deny that man ever landed on the moon?
In many cases it is not even necessary to create an account on the networks to actually publish the content; a screenshot suggesting that it has been published is enough. It could be a WhatsApp conversation, a Facebook comment or even a Tinder profile.
Going for extra credit
After generating static images and text, if we want to go one step further in our creation of fake news, we can turn to video and sound. The well-known deepfakes are a very useful tool in both cases. This blog has previously discussed how they are used in film shoots, to impersonate someone’s identity or to carry out “CEO fraud”.
In addition to these techniques, which focus on impersonating or imitating an existing image or voice, there are platforms capable of creating entirely new voices: some from scratch, such as This Voice Does Not Exist; others that let us adjust previously created voices, such as Listnr.tech; and others that create new voices from our own, such as Resemble.ai.
While the threat of disinformation and fake news has existed for centuries, technological development now lets us generate a person’s image in one click, give them a pet, a job and a hobby in a couple more, instil certain ideas in them in another few, and finally give them a voice.
Tasks that used to require a great deal of manual effort from whoever wanted to create and disseminate the information can now be automated and carried out en masse. This also means that such campaigns are now within anyone’s reach, no longer limited to governments and large corporations.
As long as detection technology cannot keep up with what generation technology creates, the only possible defence rests on users’ awareness and critical thinking, which begins with knowing the threats they face.
“Our technological powers increase, but the side effects and potential hazards also escalate”. – Alvin Toffler. Future Shock (1970)