Music is something that unites us. Go anywhere in the world and you will find music: someone singing, playing, rapping or dancing. For thousands of years it has formed a crucial part of every known society, even the most isolated tribal groups, and every culture has music that reflects its beliefs, current affairs and feelings. Our human intelligence, shaped by our experiences and our ability to learn from them, is what enables musicians to write music that resonates with a demographic the size of the population of India.
But what the majority of us don't appreciate is that music can be complex. If we look at the music industry from a commercial viewpoint, there are over 7 billion people in the world; that's a huge market. Trying to make music that will sell and appeal to everyone is an impossible task. But making music that appeals to a billion of them? That is attainable.
Watson BEAT, the music arm of IBM's Watson, is changing the game. It is a cognitive, cloud-based music program developed using AI and machine learning. Its music-generation algorithms analyse individual tracks and collect data on pitch, time and key signatures, and note sequences. Using this data, it works out what a listener might want, or what an artist may be inspired by. Of course, this does not immediately equal a smash hit that everyone will love, but it certainly has the potential to help producers and songwriters know their audience and find inspiration.
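To make the idea of "collecting data on pitch, timing and note sequences" concrete, here is a minimal sketch in Python. The melody representation and feature names are illustrative assumptions, not Watson BEAT's actual internals or API.

```python
# Toy illustration of extracting pitch, timing and note-sequence features
# from a melody. The (MIDI pitch, duration) representation is a hypothetical
# simplification, not Watson BEAT's real data format.

from collections import Counter

# A toy melody: (MIDI pitch number, duration in beats) pairs.
melody = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 1.0), (64, 1.0), (62, 2.0)]

def extract_features(notes):
    pitches = [p for p, _ in notes]
    durations = [d for _, d in notes]
    return {
        # How wide the melody ranges, in semitones.
        "pitch_range": max(pitches) - min(pitches),
        # Total length of the phrase, in beats.
        "total_beats": sum(durations),
        # The pitch the melody returns to most often.
        "most_common_pitch": Counter(pitches).most_common(1)[0][0],
        # The note-to-note interval sequence (melodic contour).
        "intervals": [b - a for a, b in zip(pitches, pitches[1:])],
    }

features = extract_features(melody)
print(features)
```

Features like these give a program a numerical picture of a track; aggregated over many tracks, that is the kind of data a system could use to characterise a style or an audience's taste.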
Video of music producer Alex da Kid collaborating with IBM Watson BEAT.
The project began with former IBM researcher Janani Mukundan. Under Janani's guidance it has been shaped by several different people, including musician Richard Daskas, who worked with IBM on its collaboration for the Red Bull F1 commercial. The BEAT code uses two methods of machine learning: reinforcement learning, which uses the assumptions of modern Western music theory to build reward functions, and a Deep Belief Network (DBN), in which the AI is trained on a single input melody to create a vibrant and complex melody layer.
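The reinforcement-learning idea above can be illustrated with a toy reward function. This sketch encodes two common Western music theory assumptions of my own choosing (in-scale notes are good, small melodic steps are preferred); it is a hypothetical example, not Watson BEAT's actual code.

```python
# Illustrative reward function in the spirit of "using Western music theory
# assumptions to create reward functions". The scale and scoring weights are
# assumptions for this sketch only.

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def reward(pitch: int, previous_pitch: int) -> float:
    """Score a candidate note: in-scale notes and stepwise motion score higher."""
    # Reward notes that belong to the scale, penalise notes outside it.
    score = 1.0 if pitch % 12 in C_MAJOR else -1.0
    leap = abs(pitch - previous_pitch)
    if leap <= 2:
        score += 0.5   # stepwise motion sounds smooth
    elif leap > 7:
        score -= 0.5   # large leaps are penalised
    return score

print(reward(64, 62))  # E after D: in scale, stepwise motion
print(reward(61, 60))  # C# after C: out of scale
```

In a reinforcement-learning loop, a reward like this would be summed over a generated phrase, steering the agent toward melodies that satisfy the theory's constraints.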
However, the technology has sparked a copyright debate. Many believe that a machine cannot own copyright, and that it must belong to the person who sees the spark and creates something with it. Yet entertainment lawyer Bjorn Schipper says that it is hard to give a conclusive answer because AI composition is so new.
Also joining the debate, Meindert Kennis, Lead Digital Strategist and CMO of Spinnin' Records, stated: “A lot of artists create music and come to us and say ‘Here’s my new track,’ but, if they start to use more AI instruments, it might be wise to record the actual recording of the music and the creative process more to show that it’s actually them.”
Although the technology can compose music using algorithms and data sets, it cannot guarantee a sound that will appeal to the masses. Echoing a theme we have frequently visited in our blogs, human intelligence is still the most vital piece of the puzzle. Predominantly, this AI is used to reduce the time composers and producers spend on repetitive tasks: once again, AI serves as a tool that augments our human capabilities.
Broadway musician Seth Rudetsky partners with IBM Watson to create a cognitive collaboration
Traditionally, people perceive AI as simply data and analytics. Watson BEAT harnesses the power of AI to turn data into creative expression. You may have already heard a song composed by AI; who knows, maybe your next favourite will be created by IBM's Watson BEAT!