KRAFTWERK 3D MoMA

As machines and robots become ever more present in our everyday life, paired with increasing digitization, we are well on our way towards Kurzweil's Singularity. But how is this materializing in music in general, and in electronic music in particular? We have previously covered how virtual reality and 3D may play an important role in music production and consumption, and now it is time to take a deep dive into Artificial Intelligence (AI). AI is present everywhere today, but how far have we come in having robots or computers make music? Not that far, it would seem. Sony's Computer Science Laboratory in Paris has shared a pair of tracks created with the assistance of software called Flow Machines. The program analyzes a database of existing songs to "learn" musical styles and identify commonalities, then uses combinations of style transfer, optimization, and interaction techniques to synthesize original music. Researchers can tailor the process to produce tunes that sound like the work of a particular artist, for example "Daddy's Car," which is intended to emulate the style of the Beatles. Check it out below:

However the song wasn’t entirely composed by artificial intelligence, nor did the tool write the words. Instead the Flow Machines takes good help from the French musician Benoît Carré who arranged and produced the songs, and wrote the lyrics. The music-bots analyze works by flesh-and-blood composers and then synthesize original output with many of the same distinguishing characteristics.  Every work of music contains a set of instructions for creating different but highly related replications of itself. According to Sony the Flow Machine can also be used for interactive compositions. So apart from generating songs, the tool can also be used as a tool for musicians. In the following video, French singer and composer Benoît Carré uses FlowComposer as a personal, intelligent assistant who helps him to compose a new song.

David Cope has designed EMMY, an emulator named for the acronym of Cope's "Experiments in Musical Intelligence" project at UC Santa Cruz and elsewhere. EMMY spools out miles of convincing music: from Bach chorale to Mozart sonata to Chopin mazurka, Joplin rag, and even a work in the style of her creator, Cope. He explains: "My rationale for discovering such instructions was based, in part, on the concept of recombinancy. Recombinancy can be defined simply as a method for producing new music by recombining extant music into new logical successions. I describe this process in detail in my book Experiments in Musical Intelligence (1996). I argue there that recombinancy appears everywhere as a natural evolutionary and creative process. All the great books in the English language, for example, are constructed from recombinations of the twenty-six letters of the alphabet. Similarly, most of the great works of Western art music exist as recombinations of the twelve pitches of the equal-tempered scale and their octave equivalents. The secret lies not in the invention of new letters or notes but in the subtlety and elegance of their recombination."
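To make the flavour of recombinancy concrete, here is a deliberately naive sketch in Python: chop source melodies into short fragments, then chain together fragments whose boundary notes agree. The note names, fragment size and helper functions are invented for illustration only; this is nothing like the depth of analysis behind EMMY.

```python
# A deliberately naive illustration of recombinancy (not Cope's EMMY):
# chop source melodies into short fragments, then chain fragments whose
# boundary notes agree. Note names here are plain strings for clarity.
import random
from collections import defaultdict

def fragments(melody, size=4):
    """Split a melody (list of note names) into fragments of `size` notes."""
    return [melody[i:i + size] for i in range(0, len(melody) - size + 1, size)]

def build_index(melodies, size=4):
    """Index fragments by their first note, so we can chain on boundaries."""
    index = defaultdict(list)
    for melody in melodies:
        for frag in fragments(melody, size):
            index[frag[0]].append(frag)
    return index

def recombine(index, start_note, n_fragments=8):
    """Chain fragments: the next fragment must start on the note the
    previous one ended on -- a crude stand-in for 'logical succession'."""
    note = start_note
    output = []
    for _ in range(n_fragments):
        candidates = index.get(note)
        if not candidates:
            break
        frag = random.choice(candidates)
        output.extend(frag)
        note = frag[-1]
    return output

corpus = [
    ["C", "D", "E", "F", "G", "F", "E", "D", "C", "E", "G", "E"],
    ["G", "A", "B", "C", "B", "A", "G", "E", "C", "D", "E", "C"],
]
print(recombine(build_index(corpus), start_note="C"))
```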

Of course, simply breaking a musical work into smaller parts and randomly combining them into new orders almost certainly produces gibberish. Effective recombination requires extensive musical analysis and very careful reassembly to work at even an elementary level, let alone a highly musical one. In reality, the dance music genre is mediated, if not ruled, by machines; its power and sheer volume would not be possible without computers. Below is an example of a Bach-inspired piece as conceived by EMMY:

Regardless of what we think of the AI tunes coming out of Flow Machines, it is quite clear that not even the music industry is safe from automation, and it's not the first time a computer has given songwriting a try. In April, a 20-year-old developed an A.I. music maker that worked with Google Deep Dream to create jazz pieces. The deepjazz framework itself is a two-layer LSTM, a kind of artificial neural network architecture. Given an initial seed sequence of musical notes, it assigns probabilities to possible next notes and generates the following note based on those probabilities. For example, if you feed the program the scale A, B, C, there is a high probability that the next note deepjazz generates will be D.
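To make the idea concrete, here is a minimal sketch of a two-layer LSTM next-note predictor written with Keras (which, to my understanding, deepjazz also builds on). It is not the deepjazz code; the vocabulary size, sequence length and helper functions below are assumptions for illustration.

```python
# A minimal sketch of a two-layer LSTM next-note predictor (not the
# actual deepjazz code). Notes are assumed to be integer indices into a
# small vocabulary; VOCAB_SIZE, SEQ_LEN and the helpers are invented here.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

VOCAB_SIZE = 128   # e.g. MIDI pitch numbers (assumption)
SEQ_LEN = 16       # how many previous notes the model sees at once

model = Sequential([
    LSTM(128, return_sequences=True, input_shape=(SEQ_LEN, VOCAB_SIZE)),
    LSTM(128),                                 # second LSTM layer
    Dense(VOCAB_SIZE, activation="softmax"),   # probability of each next note
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
# In practice you would first call model.fit() on one-hot windows cut
# from a training corpus; the generation loop below assumes a trained model.

def one_hot(seq):
    """Encode a list of note indices as a (len(seq), VOCAB_SIZE) array."""
    x = np.zeros((len(seq), VOCAB_SIZE))
    x[np.arange(len(seq)), seq] = 1.0
    return x

def generate(seed, n_notes=32):
    """Extend a seed (at least SEQ_LEN notes long) by sampling the model's
    next-note probabilities, one note at a time."""
    notes = list(seed)
    for _ in range(n_notes):
        window = one_hot(notes[-SEQ_LEN:])[np.newaxis, ...]
        probs = model.predict(window, verbose=0)[0].astype("float64")
        probs /= probs.sum()
        notes.append(int(np.random.choice(VOCAB_SIZE, p=probs)))
    return notes
```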

deepjazz has been featured in The Guardian, Aeon Magazine, Inverse, Data Skeptic, the front page of HackerNews, and GitHub’s trending showcase (1200+ stars). It has led to the most popular “AI” artist on SoundCloud with 172,000+ listens. Currently, deepjazz is being used as reference material for the course “Interactive Intelligent Devices” at the University of Perugia.

[Image: ASIMO in a conducting pose, 2008]

Not even Christmas carols are safe from the robots. Researchers in Toronto have used a technology called "neural karaoke" to teach a computer to write a song after looking at a photo.

The karaoke technology first listened to 100 hours of Christmas music to figure out a simple melody, to which it then added chords and drums. It then viewed pictures and composed lyrics based on the words associated with those pictures.

“We are used to thinking about AI for robotics and things like that. The question now is what can AI do for us?” said Raquel Urtasun, an associate professor in machine learning and computer vision at Toronto’s computer science lab. “You can imagine having an AI channel on Pandora or Spotify that generates music, or takes people’s pictures and sings about them,” adds her colleague, Sanja Fidler. “It’s about what can deep learning do these days to make life more fun?”

Neural karaoke emerged from a broader research effort to use computer programs to make music, write lyrics and even generate dance routines. Taking music creation as a starting point, Hang Chu, a PhD student at the lab, trained a neural network on 100 hours of online music. Once trained, the program can take a musical scale and melodic profile and produce a simple 120-beats-per-minute melody. It then adds chords and drums.
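As a rough idea of what "take a scale, produce a 120-beats-per-minute melody, then add chords" could look like in code, here is a toy sketch; the scale, the step-wise preferences and the single block triad per bar are assumptions of mine, not the Toronto group's neural model.

```python
# A toy illustration of the general idea (not the Toronto group's model):
# pick notes from a given scale with a preference for small steps, lay them
# out at 120 BPM, then add one block chord per bar. Every constant and
# function name here is an assumption made for the sake of the sketch.
import random

BPM = 120
SECONDS_PER_BEAT = 60.0 / BPM               # 0.5 s per beat at 120 BPM
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, one octave

def melody(scale, n_beats=16):
    """One note per beat as (pitch, start_seconds, duration_seconds)."""
    idx = 0
    notes = []
    for beat in range(n_beats):
        step = random.choice([-2, -1, -1, 0, 1, 1, 2])   # mostly step-wise
        idx = max(0, min(len(scale) - 1, idx + step))
        notes.append((scale[idx], beat * SECONDS_PER_BEAT, SECONDS_PER_BEAT))
    return notes

def chords(scale, n_beats=16, beats_per_bar=4):
    """A block major triad on the scale's first degree, held for each bar."""
    out = []
    for bar_start in range(0, n_beats, beats_per_bar):
        root = scale[0]
        out.append(([root, root + 4, root + 7],
                    bar_start * SECONDS_PER_BEAT,
                    beats_per_bar * SECONDS_PER_BEAT))
    return out

print(melody(C_MAJOR)[:4])
print(chords(C_MAJOR)[:1])
```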

But it doesn’t stop here even complex works that combine music, acting, and writing have been automated. Thomas Middleditch and Elisabeth Gray starred in a film in June written entirely by A.I. The resulting film, Sunspring, wasn’t really that good by human standards. The A.I., Benjamin, was fed sci-fi scripts until he was predicting words that could fit in the script, working in a similar way to iOS’ predictive word tool.