Today we are going to talk about how Artificial Intelligence, and Deep Learning in particular, is being used in the filming of movies and series, achieving better special effects than any studio had ever achieved before. And is there any better way to prove it than with one of the best science fiction sagas, Star Wars? To paraphrase Master Yoda: “Read or do not read it. There is no try”.
Deepfake Facial Rejuvenation
Since the release of A New Hope in 1977, a multitude of Star Wars films and series have been made, without following chronological order. This means that characters played when the actors were young have to be played many years later… by the same actors, who are no longer that young.
This is a problem that Hollywood has solved with “classic” special effects such as CGI, but the advance of Deep Learning has led to a curious situation: fans with home computers have managed to match or even improve on the work of these studios.
One example is DeepFake technology. DeepFake is an umbrella term for neural network architectures trained to replace a face in an image with that of another person. Among the architectures used are autoencoders and generative adversarial networks (GANs). Since 2018 this technology has developed rapidly, with websites, apps and open-source projects ready to use, lowering the barrier to entry for any user who wants to try it out.
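To make the autoencoder idea concrete, here is a toy sketch of the classic deepfake layout: one encoder shared between two identities, plus one decoder per identity. After training, encoding person B's face and decoding it with person A's decoder produces the swap. Everything here (dimensions, random data, linear layers) is an illustrative assumption, not any studio's or project's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: each "face" is a flattened 16-value patch,
# compressed into a 4-dimensional shared latent space.
DIM, LATENT, LR, STEPS = 16, 4, 0.05, 500

# Synthetic stand-ins for aligned face crops of persons A and B.
faces_A = rng.normal(size=(32, DIM))
faces_B = rng.normal(size=(32, DIM))

# One shared encoder, one decoder per identity.
W_enc = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_A = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_B = rng.normal(scale=0.1, size=(LATENT, DIM))

def mse(x, y):
    return float(np.mean((x - y) ** 2))

losses = []
for step in range(STEPS):
    # Alternate batches of A and B through the SAME encoder,
    # but reconstruct each identity with its OWN decoder.
    for faces, W_dec in ((faces_A, W_dec_A), (faces_B, W_dec_B)):
        z = faces @ W_enc              # encode
        recon = z @ W_dec              # decode
        err = recon - faces            # reconstruction error
        # Plain gradient descent on the squared error.
        W_dec -= LR * (z.T @ err) / len(faces)
        W_enc -= LR * (faces.T @ (err @ W_dec.T)) / len(faces)
    losses.append(mse(faces_A @ W_enc @ W_dec_A, faces_A))

# The face swap: encode B's faces, decode them with A's decoder,
# so B's pose/expression is rendered with A's appearance.
swapped = faces_B @ W_enc @ W_dec_A
```

Real deepfake tools replace these linear layers with deep convolutional networks and train on thousands of aligned face crops, but the shared-encoder/per-identity-decoder trick is the same.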
And how does this relate to Star Wars? On December 18th, 2020, episode 8 of season 2 of The Mandalorian was released, which included a scene with a “young” Luke Skywalker created by computer (the original actor, Mark Hamill, was 69 years old at the time). Just 3 days later, the youtuber Shamook uploaded a video comparing the facial rejuvenation by Industrial Light & Magic (the studio responsible for the special effects) with the one he had produced himself using DeepFake.
As you have seen, the work of a single person over 3 days improved on the work of the special effects studio, which, in this case, had also used DeepFake in combination with other techniques. What is more, Shamook did this using two open-source projects, DeepFaceLab and MachineVideoEditor.
The same author made other substitutions in recent Star Wars films, such as Governor Tarkin in Rogue One (the original actor, Peter Cushing, died in 1994) and Han Solo in the film of the same name (where a new actor was hired instead of rejuvenating Harrison Ford), proving that the DeepFake technique generalises very well to other films.
These videos, which went viral, did not go unnoticed by Lucasfilm, who a few months later hired the youtuber as Senior Facial Capture Artist.
Outside the Star Wars universe, Shamook has done face replacements in many other films, usually inserting actors into films they had nothing to do with, often with hilarious results.
Upscaling models to improve the quality of old videos
But rejuvenating faces is not the only use Deep Learning offers film studios. Another class of models, known as upscaling (or super-resolution) models, is trained to increase the resolution of images and videos (and video games). This is useful when remastering, for example, old films that were digitised at low resolution and do not allow for easy upscaling.
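To give an idea of what these models improve on, here is a minimal sketch of the naive baseline they compete against: nearest-neighbour upscaling, which just repeats pixels. Learned super-resolution networks (ESRGAN is a well-known open-source example) go further by predicting plausible high-frequency detail instead of repeating what is already there. The function name and toy frame below are illustrative, not from any real tool.

```python
import numpy as np

def upscale_nearest(img: np.ndarray, factor: int) -> np.ndarray:
    """Naive nearest-neighbour upscaling: repeat each pixel `factor`
    times along both axes. This is the blocky baseline that learned
    super-resolution models aim to beat."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A tiny 2x2 "frame" upscaled 2x becomes a 4x4 frame with no new detail.
frame = np.array([[0, 1],
                  [2, 3]])
big = upscale_nearest(frame, 2)
print(big.shape)  # (4, 4)
```

No matter the factor, this baseline adds zero information; a trained model, by contrast, has seen many high/low-resolution pairs and can sharpen edges and textures that interpolation alone cannot recover.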
Fans, again, have taken the lead and are improving the quality of old Star Wars video game trailers using these technologies.
Some have even dared to restore deleted scenes from the first films, and provide tutorials so that those who have the courage can continue to improve their results.
In short, Deep Learning models are changing the way many industries operate. The film industry has a wide range of options for improving its processes with Machine Learning, which will let us enjoy increasingly realistic and spectacular effects.
Note: you can see the process behind Luke Skywalker’s rejuvenation in The Mandalorian in episode 2 of season 2 (“Making of the Season 2 Finale”) of the docuseries “Disney Gallery: Star Wars: The Mandalorian”, available on Disney+.