Holly Herndon performs PROTO at Red Bull Music Festival New York
© Drew Gurian / Red Bull Content Pool

Is AI making it easier for everyone to become a musician?

Is musical AI helping people to cut corners or, in an era when access to traditional instrument learning is ever more restricted, is it a democratic step forward in making music accessible to all?
By April Clare Welsh
On her recent high-concept single, We Appreciate Power, Grimes conjures the impending rise and reign of AI. “Pledge allegiance to the world's most powerful computer/simulation: it's the future,” sings the Canadian avant-pop star, adding tongue-in-cheek wit and nu-metal grit to an industrial goth-rock stomper.
The AI takeover is still very much the stuff of science fiction, but from the nudging reminders in your Gmail inbox to the Spotify playlists built to suit your lifestyle, human beings are relying more and more on artificial intelligence for a multitude of automated daily tasks – often without realising it. Increasingly, even the music we listen to has AI’s fingerprints all over it, and recent years have seen the field of AI-generated music advance in leaps and bounds.
Grimes performing at Red Bull Music Festival in Los Angeles
In 2016, scientists at Sony CSL Research Laboratory in Paris unveiled the first-ever pop song written by artificial intelligence. The summery, Beatles-style Daddy’s Car was composed by an AI system called Flow Machines, which analysed a database of songs to create a new composition. The following year, US artist and YouTuber Taryn Southern went one step further, releasing I AM AI, the first LP by a solo artist composed and produced with AI.
Southern used four AI programs to co-write and co-produce the entire album: Amper Music, IBM’s Watson Beat, Google’s Magenta and AIVA. Amper’s AI composer technology enables people with any level of experience to create and customise original computer-generated music. Southern produced her moody chart-pop ballad Break Free using Amper, having chosen the genre and written the lyrics herself.
Elsewhere, researchers from MIT have developed an AI system called PixelPlayer, which identifies and isolates instrument sounds in videos using a neural network trained on MUSIC (Multimodal Sources of Instrument Combinations), a dataset of more than 60 hours of video footage. The deep-learning system is self-supervised, meaning that it doesn’t require any human annotations telling it what the instruments are or what they sound like, and programs like it could spare musicians hours of trawling YouTube for isolated instrument parts.
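Self-supervision sounds abstract, but the MIT team’s published “mix-and-separate” training strategy is easy to sketch: mix the soundtracks of two solo-instrument videos, then train the network to un-mix them, so the correct answer comes for free. The Python sketch below is a minimal illustration of that objective only – the signals are synthetic placeholders, the “network” is an untrained stand-in and the conditioning on video pixels is omitted.

```python
import numpy as np

# Two stand-in "solo recordings" (in the real system, soundtracks of
# single-instrument videos): one second of audio at 16kHz each.
t = np.linspace(0, 1, 16000)
violin = np.sin(2 * np.pi * 440 * t)   # placeholder "violin" tone
flute = np.sin(2 * np.pi * 880 * t)    # placeholder "flute" tone

# Self-supervision: summing two known solos yields a training mixture
# whose correct separation is already known -- no human labels needed.
mixture = violin + flute

def separate(mix):
    """Stand-in for the trained network; the real model predicts one
    mask per source, conditioned on the pixels of the video."""
    return 0.5 * mix, 0.5 * mix        # untrained guess: split evenly

pred_violin, pred_flute = separate(mixture)

# Training minimises the gap between predictions and the known solos,
# nudging the network until it can un-mix unseen recordings.
loss = np.mean((pred_violin - violin) ** 2) + np.mean((pred_flute - flute) ** 2)
print(f"reconstruction loss: {loss:.4f}")
```

So, is AI making it easier for everyone to become a musician? And as AI-based music becomes more developed, what does this all mean for human creativity?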
Computer-generated music is nothing new; it’s been around for as long as computers themselves. In 1951, the Ferranti Mark 1 – an early machine that Alan Turing helped to program – generated three melodies, resulting in an eerie recording of the British national anthem. In 1982, the Commodore 64 paved the way for computer music-making in the home – now a ubiquitous phenomenon.
And the notion of algorithmic music – that is, composing with sets of rules or formal processes – has been around even longer. Bach wrote more than 300 polyphonic hymn settings, known as chorales, that follow strict rules and feature a single melody accompanied by three harmony voices. Created by the scientists at Sony CSL in 2016, DeepBach uses deep learning – a subset of machine learning built on artificial neural networks, layered sets of algorithms loosely modelled on the human brain and designed to recognise patterns – to generate new harmonisations in the style of those chorales.
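To make “sets of rules” concrete, here is a deliberately crude sketch – a hypothetical illustration, not DeepBach – in which a fixed rule-set harmonises a melody with three lower voices at set intervals. Real chorale writing obeys far subtler voice-leading rules, which is precisely why having a network learn them from data is so appealing.

```python
# A toy rule-set for algorithmic harmonisation -- purely illustrative,
# not how Bach (or DeepBach) works. Notes are MIDI numbers.
melody = [60, 62, 64, 65, 67, 65, 64, 62]  # a simple line in C major

# The "rules": each voice shadows the melody a fixed interval below
# (a third, a fifth and an octave, measured in semitones).
VOICE_INTERVALS = {"alto": 4, "tenor": 7, "bass": 12}

def harmonise(melody_notes):
    """Apply the rule-set to produce a four-voice, chorale-like texture."""
    voices = {"soprano": list(melody_notes)}
    for name, interval in VOICE_INTERVALS.items():
        voices[name] = [note - interval for note in melody_notes]
    return voices

for name, notes in harmonise(melody).items():
    print(f"{name:>8}: {notes}")
```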
Holly Herndon performing with Akihiko Taniguchi at EMAF Tokyo during the Red Bull Music Academy in Tokyo.
Holly Herndon live in Tokyo
“The dramatic difference between AI and simple generative music, or algorithmic music, that we’ve seen in the past is that usually the composer would need to understand or choose a rule-set,” offers experimental electronic producer Holly Herndon. “If you wanted to write a fugue, for example, as a composer you would need to understand the rules of a fugue… with AI, you can just simply choose a canon of fugues and have the AI extract the ruleset for you”.
Earlier this year, Herndon released her third studio album, PROTO. Made in collaboration with AI expert Jules LaPlace and a sprawling cast of human artists, including footwork trailblazer Jlin, the record showcases Spawn: an AI “baby” that counts among PROTO’s crew of performers, or “ensemble”, as Herndon describes it. For the Godmother single with Jlin, Spawn was fed a diet of Jlin’s percussive workouts and attempted to reimagine them in Herndon’s voice. Elsewhere on the album, Spawn drew on a dataset of vocals recorded by various artists, including Colin Self.
Tehran-born, London-based producer Ash Koosha has also used AI to create a non-human, or “auxuman” musical collaborator. Koosha’s first auxiliary human, Yona, which blends generative AI software with CGI, was made in collaboration with digital creator Isabella Winthrop and introduced into the world via 2018’s C album. “I was hoping to replicate some processes that a human creator goes through and look at the result from the outside to say this process has a name, has an artificial intuition, represents human data and ultimately will become a new breed of entertainers/beings,” he explains.
Now we can automate, save time, therefore use more creative control and curation.
Ash Koosha
Koosha believes that AI can potentially streamline the music-making process. “What interests me about AI is that it has always been about the ability to enhance our creative processes, where we spend a lot of time crafting the ‘labour’ part of something we make and now we can automate, save time, therefore use more creative control and curation,” he offers.
Ash Koosha's 'auxuman' creation Yona
“I started implementing automation methods in the way I produced music years ago, which led to bigger questions, such as: ‘What have we been spending time on as creators in the past, and how will that change in the future?’ and ‘Am I going to be curating multiple creative algorithms as a human in the future?’” Koosha says AI helps by throwing out relevant ideas or melodies, which he can then weigh against his usual decision-making and choices.
Flow Machines, the system behind that pioneering 2016 single Daddy’s Car, is billed as an AI-assisted composing system that helps artists in their creative process. The melody and harmony were composed by the AI, which drew on a database of songs and combined elements of many different tracks into a unique result; French composer Benoît Carré (aka Skygge) – who has written songs for Françoise Hardy and other high-profile artists – then produced and mixed the track and wrote its lyrics.
Benoît Carré aka Skygge
In 2018, Carré released Hello World – the world’s first multi-genre, multi-artist album composed with AI, specifically Flow Machines. The album featured a range of artists, including Canadian singer-songwriter Kiesza, and Carré is keen to stress the human element of Hello World. “During each step of the creative process, like composition, the machine is fed the notes and chords of several pieces of music… The material is just taken as a source of inspiration. And that’s what is most interesting to us; we wanted to create a great song, but a song that we couldn’t have made by ourselves,” he says.
He continues: “Overall, AI accelerates the process of creation. For example, the tool that researchers and I have been developing recently creates harmonisations; imagine a young musician who wants a string section on a track but doesn’t know how to arrange the strings for orchestra – the system would be able to do that kind of thing. It can enhance creation.”
AI is also helping to further develop the relationship between human and machine. “We were seeing Spawn as a performer – she’s not a composer. I’m still writing things and then I’m asking Spawn to perform them,” explains Herndon. For her, the most interesting thing about working with an inhuman intelligence is learning something about our own intelligence. “Or trying to gain some sort of insight. It’s not just kind of automating a creative process. To me that kind of creates an aesthetic cul-de-sac where there’s no real progress – but maybe I’m too progress-oriented.”
“I think sometimes AI can help musicians with ideas and with some new ways of thinking,” offers Joseph Kamaru, aka Nairobi-based sound artist KMRU. Kamaru, whose experimental practice often uses field recording, recently participated in GAMMA Festival’s GAMMA_LAB AI project in St. Petersburg, which offered artists the opportunity to experiment with various machine learning models, including SampleRNN – a model that generates audio one sample at a time. One endless AI jazz improvisation, also built with SampleRNN, was trained on John Coltrane’s back catalogue.
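Sample-by-sample generation is simple to sketch, if not to train. Schematically – and this is a stand-in, not the real SampleRNN, which stacks recurrent tiers trained on actual audio – the model repeatedly predicts a probability distribution over the next quantised amplitude given the waveform so far, draws a value, and appends it:

```python
import numpy as np

rng = np.random.default_rng(0)

def next_sample_distribution(history):
    """Stand-in for a trained network: returns probabilities over 256
    quantised amplitude levels given the recent samples. A real model
    would be a recurrent network; here the output is just noise."""
    logits = rng.normal(size=256)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def generate(n_samples, context=1024):
    samples = [128]  # start at the 8-bit midpoint, i.e. near-silence
    for _ in range(n_samples - 1):
        probs = next_sample_distribution(samples[-context:])
        samples.append(int(rng.choice(256, p=probs)))  # draw next amplitude
    return np.array(samples, dtype=np.uint8)

audio = generate(16000)  # roughly one second of audio at 16kHz
print(audio[:10])
```

Trained on hours of Coltrane instead of noise, the same loop keeps emitting plausible next samples indefinitely – hence the endless improvisation.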
Participants at GAMMA_LAB AI studied three musical genres – baroque, jazz and techno – comparing their traditions, among other things, in order to create new works or reconstruct lost ones using machine learning technologies. “The idea was not to imitate or plagiarise the art of past artists, but instead pass the material through the black box of a neural network to try to find a new understanding of a known music and establish a new connection on a more metaphorical and abstract level,” explains Kamaru. The artists then incorporated their own musical ideas into the project at an improvised performance in July.
Projects like GAMMA_LAB AI, PROTO, C, Hello World and I AM AI demonstrate the potential of machine learning as a source of, or tool for, inspiration, helping musicians working in different genres to generate ideas they might never have reached alone. It can also be a time-saving device that simplifies the workflow: the average internet user or amateur music-maker can now generate customised instrumental tracks at the click of a button. Amper Score, Amper Music’s AI composition tool for content creators, “enables enterprise teams to compose custom music in seconds and reclaim the time spent searching through stock music,” according to its website.
But the extent to which AI-generated music will benefit big companies wanting to cut corners is a crucial concern. Technological innovations are continuously making it easier for artists to create music, but as major labels like Warner Music, streaming services and tech companies pump more and more cash into building AI-generated music products for the masses, what does this mean for creativity?
Often one of the issues people have with AI is they think the artist is just pushing the buttons – and that’s not the case.
Benoît Carré
Amper Music offers a “Fast, quality, royalty-free service with creative control”. But who actually owns Taryn Southern’s Amper-created songs: Amper or Southern? This question of copyright and ownership is a massive grey area with, as yet, no legal precedent.
Looking ahead, Carré suggests that the next step in AI-generated music will involve delving further into deep-learning techniques, while on a personal level he hopes to develop a live show based around his work with AI. “I want to show that there is an artist behind the project. Often one of the issues people have with artificial intelligence is that they think the artist is just pushing the buttons – and that’s not the case.”
Ultimately, artificial intelligence doesn’t currently have the context of human intelligence. “I think music is becoming almost like playing a video game. It can be this communal thing, which is where it all started… I can see AI helping people to have a more active role in playing, but there’s still a role for composers to be pushing that envelope forward, because at the moment AI is only capable of doing what has happened before.”