In this post we will talk about how to automatically create realistic environments in virtual worlds. As an example, we will use the video game No Man’s Sky, by Hello Games, released in 2016, which generates entire galaxies and planets at real scale from a single algorithm, all of them fully accessible and distinct from one another.
As if this were not enough, we can also add artificial intelligence to the equation, promising a revolution never before seen in the world of video games.
Infinite apes, endless worlds
A famous thought experiment known as the “infinite monkey theorem” says that, if we set an infinite number of monkeys typing for an infinite amount of time, at some point one of them will write Don Quixote. By pure and simple chance.
Any book, even Cervantes’ magnum opus, is just a very long string built from a finite set of characters, such as the letters of the alphabet. In other words, given infinite time, everything that can happen must eventually happen.
We can ask ourselves whether this experiment can be extrapolated to other types of content. One approach comes in the form of what is known as procedural generation: content is created by an algorithm rather than modelled by hand, and by feeding that algorithm a random seed, each execution produces a different result.
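As a minimal sketch of the idea (the names and parameters here are invented for illustration, not taken from any actual game), a procedural generator is just a deterministic function of a random seed: feed it a new seed and you get a new, internally consistent result.

```python
import random

def generate_world(seed):
    """Derive a small 'world' description entirely from one integer seed."""
    rng = random.Random(seed)  # deterministic: the same seed always yields the same world
    return {
        "climate": rng.choice(["arid", "frozen", "lush", "toxic"]),
        "gravity": round(rng.uniform(0.5, 2.0), 2),  # in Earth g's
        "n_species": rng.randint(0, 40),
    }

# A fresh random seed on every run gives a different world each time...
print(generate_world(random.getrandbits(64)))
# ...while a fixed seed reproduces exactly the same world, run after run.
assert generate_world(42) == generate_world(42)
```

The same trick scales up: a real engine would derive terrain, textures, fauna and flora from the seed instead of three dictionary fields.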
This type of “unexpected result” has been applied not only in science, but also in the arts such as music or painting. However, it is in the world of video games where it has found a special appeal.
A sandbox (i.e., open world) video game requires a tremendous scale of modelling. After all, we are trying to mimic an entire world in a virtual environment. Due to purely technical and developmental constraints, most sandbox games had a limited number of periodically repeated elements.
Just as Neo in The Matrix would spot a glitch in the form of a déjà vu (a black cat crossing the doorway twice), we would start to see the same textures, the same trees, the same faces over and over again throughout the game. The possibility of randomising these elements, just as in real life, was all too tempting.
Although there have been plenty of attempts at procedural generation since the 1980s, probably the prime example, owing to its sheer scale, is the video game No Man’s Sky, developed by Hello Games and released in 2016 for various platforms.
In this game, we wake up on an unknown planet with a broken spaceship, and our first mission is to find resources to repair it. So far, so conventional. We quickly realise that, unlike in other games, if we start walking in a straight line there are no invisible barriers, insurmountable obstacles, or anything that prevents us from leaving the modelled area. In fact, we could walk all the way around the planet if we wanted to, encountering extravagant fauna and flora everywhere.
When we manage to get off the planet, we see that it has a natural scale, i.e., comparable in size to Mars or Earth. In this strange Solar System, we find more planets and moons, to which we can travel by means of a fictitious warp drive (or “hyperspace”, as you prefer).
Landing on another of these worlds, we find a new place to explore, different from the previous one in climate, landscape, fauna, flora, possible intelligent civilisations, and so on. The final twist comes when we wonder if we can also leave this Solar System, or even the galaxy.
We then discover that the game contains 255 individual galaxies, with a total of 18,446,744,073,709,551,616 planets (exactly 2^64), all of them accessible and different from each other. The number, in case anyone doesn’t feel like counting commas, is about 18 quintillion. If 100 people each visited one planet per second, it would take nearly six billion years to see them all, comparable to the age of planet Earth itself.
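Those figures are easy to verify with a few lines of arithmetic:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31.6 million seconds

planets = 2 ** 64  # 18,446,744,073,709,551,616 planets
visitors = 100     # people, each visiting one planet per second
years = planets / visitors / SECONDS_PER_YEAR

print(f"{planets:,} planets")
print(f"about {years / 1e9:.1f} billion years to visit them all")  # about 5.8 billion years
```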
Hello Games managed to create an entire Universe with infinite possibilities, without having to explicitly model a single planet. It only used procedural generation to combine these individual elements in different ways.
No two planets are identical, nor do they have the same fauna, flora or civilisations. In fact, by implementing online play capabilities, each player can discover planets and name them, or visit a friend in the underwater base they have created in a particularly peculiar Solar System.
The planets are the same for everyone, because the generation algorithm is deterministic: all the apparent randomness comes from the initial seed assignment. As a curiosity, even the game’s soundtrack is procedurally generated, based on thousands of samples from the band 65daysofstatic.
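A minimal sketch of how that determinism can work (this is illustrative Python, not Hello Games’ actual code): hash each planet’s coordinates into a stable seed, so every player who reaches the same spot regenerates exactly the same world.

```python
import hashlib
import random

def planet_seed(galaxy, x, y, z):
    """Hash a planet's coordinates into a stable 64-bit seed."""
    key = f"{galaxy}:{x}:{y}:{z}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

def planet_name(galaxy, x, y, z):
    """Generate a pronounceable name deterministically from the coordinates."""
    rng = random.Random(planet_seed(galaxy, x, y, z))
    syllables = ["ka", "zor", "ith", "na", "vel", "ur", "qui", "os"]
    return "".join(rng.choice(syllables) for _ in range(3)).capitalize()

# Every player who visits galaxy 0 at (12, 34, 56) sees the same planet:
assert planet_name(0, 12, 34, 56) == planet_name(0, 12, 34, 56)
```

Nothing needs to be stored per planet: the coordinates are the data, and the generator simply reruns on demand whenever someone lands there.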
Ender’s (video) game
No Man’s Sky is an outstanding example of procedural generation in video games, but it’s been 6 years since it was released. How can we go further? This is where Artificial Intelligence (AI) comes in.
In video games, AI usually refers to the behaviour of NPCs (Non-Playable Characters), whether friends, enemies or neutrals. For example, in a racing game like Gran Turismo, it governs how the other cars react to the player’s actions: does the machine drive with excellent skill, or a mediocre one?
It is interesting to see how little AI has evolved in video games. Most actions become predictable as soon as we learn the pattern. Even combat games known for their high difficulty (such as Hollow Knight, Cuphead or Dark Souls) present conceptually very simple battles, where the only real challenge lies in our ability as humans to execute a specific sequence of commands on the controller or keyboard at exactly the right time.
The same goes for the realism of NPCs when talking to the player: they have a limited number of dialogue lines and animations, and it is typical to exhaust their entire repertoire within a few interactions, something that would never happen in the real world.
This will change radically with the application of AI, specifically Deep Learning. These algorithms will allow studios not only to have an invaluable programming tool for their works, but to autonomously generate concept art, dialogue or even entire games from scratch. In other words, procedural generation, but instead of being subject to a deterministic algorithm, it will be done organically and realistically, just as a human being would.
Character behaviour will be learned from our gameplay and implemented in real time. Realism will be extreme in terms of interaction with NPCs, as there will be infinite lines of dialogue. We will not be subject to choosing from a few predefined options but will be able to engage in natural conversations with any character. In addition, software such as StyleGAN, designed by NVIDIA and released open source in 2019, allows for the creation of photorealistic faces with a Generative Adversarial Network (GAN), exponentially increasing the immersion in the proposed narrative.
In a way, each person will play a different game, as the same piece of work will be configured according to that player.
Because the AI will always be learning, not only will it constantly generate new content for the game, but in a way the game will never be “finished”; only when we leave it will it stop building and updating itself. However, we must be cautious about applying Deep Learning to video games.
For example, an enemy that learns from our moves could quickly become invincible, spotting the weaknesses in our strategy and adapting its style, as is the case with Sophy, the new AI in Gran Turismo, which is capable of defeating professional drivers.
Only time will tell how far we can go in combining procedural generation and AI, but it’s clear that the future will be very realistic.