We have reported on the use of artificial intelligence (AI) in music before, but things are about to change, and the change appears to be positive. To be honest, a lot of the AI-generated music we've heard so far has been far from good or innovative: either computer-generated attempts at conventional classical, dance, or pop tunes, or far-out, spacey-sounding things that tend to be more experimental than listenable. Now a new generation of bands and acts is active in the AI space, with the difference that they have a more deliberate strategy for incorporating AI into their music-creation process.
So this new generation of musicians is engaging creatively with algorithmic processes to make some of the most futuristic and genre-bending music coming out right now. Whether writing code to invent weird digital instruments or using neural networks to shatter the Ramones into 130 fractal pieces, these musicians are producing strange and striking music across a whole range of genres: post-industrial, looping techno, and beyond. So even as plenty of pessimists predict that human creativity will eventually be made obsolete by robots, a growing wave of artists is using AI and algorithms to take their own music in new and exciting directions. Some use machine learning to teach software to compose music they later play themselves, while others use live coding to program electronic music that's improvisational, unpredictable, and surprisingly human.
BBC Future has a feature on some of the interesting hybrid forms of composition that have emerged from the coded mind of the computer. It explores the sounds of the Iamus and Melomics109 programs, which mimic the process of natural selection:
“It takes a fragment of music (itself generated at random), of any length, and mutates it. Each mutation is assessed to see whether it conforms to particular rules—some generic, such as that the notes have to be playable on the instrument in question, others genre-specific, so that features like the melodies and harmonies fit with what is typical for that style. Little by little, the initial random fragment becomes more and more like real music, and the ‘evolutionary process’ stops when all the rules are met. In this way, hundreds of variants can be generated from the same starting material.”
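What the passage describes is essentially an evolutionary algorithm. Here is a minimal Python sketch of the idea; the pitch range and the three rules are invented stand-ins for illustration, since Melomics' actual rule sets are not public:

```python
import random

PITCH_RANGE = range(60, 73)     # C4..C5: a stand-in "playable on this instrument" rule
SCALE = {0, 2, 4, 5, 7, 9, 11}  # C major pitch classes: a stand-in genre rule

TOTAL_RULES = 3

def rules_met(melody):
    """Count how many of our toy rules the melody satisfies."""
    score = 0
    # Generic rule: every note must be playable (inside the pitch range).
    score += all(p in PITCH_RANGE for p in melody)
    # Genre rule: notes should belong to the scale.
    score += all(p % 12 in SCALE for p in melody)
    # Genre rule: no melodic leap larger than a fifth (7 semitones).
    score += all(abs(a - b) <= 7 for a, b in zip(melody, melody[1:]))
    return score

def mutate(melody):
    """Nudge one random note up or down by a semitone or two."""
    m = list(melody)
    i = random.randrange(len(m))
    m[i] += random.choice([-2, -1, 1, 2])
    return m

def evolve(length=8):
    # Start from a random fragment, as the quoted passage describes.
    melody = [random.randint(55, 80) for _ in range(length)]
    # Hill-climb: keep each mutation only if it is at least as rule-conforming,
    # and stop once every rule is met.
    while rules_met(melody) < TOTAL_RULES:
        candidate = mutate(melody)
        if rules_met(candidate) >= rules_met(melody):
            melody = candidate
    return melody

print(evolve())  # e.g. a short in-scale, in-range melody as MIDI pitches
```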
At the heart of recent artificial intelligence breakthroughs are machine learning algorithms: programs that find patterns in large sets of data. Machine learning has made it possible to automate both cognitive and physical processes that are difficult or impossible to define through rule-based programming, including tasks such as image classification, speech recognition, and translation.
When given musical data, machine learning algorithms can find the patterns that define each style and genre of music. But there's more to it than classification and copyright protection: as researchers have shown, machine learning algorithms are also capable of creating their own unique musical scores.
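As a toy illustration of what "finding the patterns that define each style" means in practice, here is a nearest-neighbour classifier over two invented summary features. The tempo and brightness values below are made up for the example; real systems learn from far richer audio representations:

```python
from sklearn.neighbors import KNeighborsClassifier

# Each track is summarized as (tempo in BPM, spectral brightness 0..1).
# These values are fabricated purely for illustration.
tracks = [(128, 0.80), (124, 0.75), (70, 0.30), (66, 0.35)]
genres = ["techno", "techno", "ambient", "ambient"]

# Fit a simple pattern-finder: new tracks get the genre of their neighbours.
model = KNeighborsClassifier(n_neighbors=3).fit(tracks, genres)
print(model.predict([(120, 0.70)]))  # likely ['techno']
```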
An example is Google's Magenta, a project that is looking for ways to advance the state of machine-generated art. Using Google's TensorFlow platform, the Magenta team has already managed to create algorithms that generate melodies as well as new instrument sounds. Magenta is also exploring the broader intersection of AI and the arts, delving into other fields such as generating paintings and drawings. None of the research teams claim their algorithms will replace musicians and composers. Instead, they believe AI algorithms will work in tandem with musicians and help them become better at their craft, boosting their efforts and assisting them in ways that weren't possible before. For instance, an AI algorithm can provide composers with a starting point by generating the basic structure of a song and letting them handle the fine-tuning and adjustments.
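Magenta's melody models are recurrent neural networks, but the core idea (learn from a corpus which note tends to follow which, then sample new sequences from that distribution) can be sketched with a simple Markov chain. The training melody below is invented for illustration and is not Magenta's actual method or data:

```python
import random
from collections import defaultdict

# A tiny "corpus": one melody as MIDI pitches, invented for illustration.
corpus = [60, 62, 64, 62, 60, 64, 65, 67, 65, 64, 62, 60]

# Learn transition counts: which pitches tend to follow which.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start=60, length=16):
    """Sample a new melody from the learned next-note distribution."""
    melody = [start]
    for _ in range(length - 1):
        prev = melody[-1]
        # Fall back to the start pitch if we land on an unseen state.
        melody.append(random.choice(transitions.get(prev, [start])))
    return melody

print(generate())  # a new melody in the "style" of the corpus
```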
CONTEMPORARY ARTISTS USING AI IN THEIR MUSIC MAKING
To get you started in the field of AI music, a good place to begin is Bandcamp; below is a collection of ten acts that are truly pushing the boundaries of AI music making.
Belisha Beacon
This Is Fine
A good example of live coding, Belisha Beacon's debut This Is Fine uses the ixi lang programming language to create minimal, looping techno. Its five tracks build gradually, the result of Beacon writing one line of code, allowing the pattern it generates to set the spectral mood, and then writing another. This process is wielded to powerful effect on the album's opener, "Wishful Sinking." Over 15 deceptively intense minutes, Beacon layers brisk plinky riffs, glassy beats, and insistent rhythms into one dizzying mix, making it as ideal for meditative headphone listening as for cutting loose at an Algorave.
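Without reproducing ixi lang itself, that one-line-at-a-time workflow can be sketched in plain Python: each "line of code" the performer types registers one looping layer, and every active layer keeps cycling as new ones are added:

```python
import itertools

# A Python sketch of the live-coding workflow described above
# (not ixi lang itself).
layers = []

def add_layer(name, pattern):
    """One 'line of code': start a new looping pattern."""
    layers.append((name, itertools.cycle(pattern)))

def play(steps):
    """Let the performance run: every active layer emits its next event."""
    for step in range(steps):
        events = "  ".join(f"{name}:{next(pat)}" for name, pat in layers)
        print(f"step {step}: {events}")

add_layer("riff", ["C", "E", "G", "E"])          # the first line written...
play(4)                                          # ...sets the mood,
add_layer("beats", ["kick", "-", "snare", "-"])  # then another line is added
play(4)                                          # and both loop together
```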
Happy Valley Band
ORGANVM PERCEPTVS
The 11 tracks on their debut were transcribed by a custom-made machine learning program that was taught to “unmix” its source material and then jigsaw it back together into musical notation. Yet rather than being carbon copies of Madonna or James Brown, what the Happy Valley Band end up performing via rich orchestration is skewed, jittery cacophony. It’s equal parts bewildering and inspiring, highlighting how AI can help humanity see the familiar from a fresh perspective.
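The band's "unmixing" software is custom-built and not public, but the transcription half of such a pipeline can be roughly sketched with off-the-shelf tools, here using librosa's pitch tracker to turn audio frames into note names. The file path is a placeholder, and this is only a crude stand-in for the band's actual system:

```python
import librosa

# Load a recording and track the dominant pitch in each analysis frame.
# "recording.wav" is a placeholder path.
y, sr = librosa.load("recording.wav")
pitches, mags = librosa.piptrack(y=y, sr=sr)

notes = []
for frame in range(pitches.shape[1]):
    strongest = mags[:, frame].argmax()   # pick the loudest pitch candidate
    hz = pitches[strongest, frame]
    if hz > 0:                            # skip silent/unpitched frames
        notes.append(librosa.hz_to_note(hz))

print(notes[:20])  # a crude frame-by-frame "score" of the recording
```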
Iván Paz
Visions of Space
Iván Paz is another artist who makes extensive use of live coding, yet the Mexican composer’s unsettling Visions of Space from May is also inspired by techniques employed in AI research. The album’s droning yet often harsh electronic soundscapes were put together using musical algorithms whose parameters Paz varies sequentially through time, in much the same way that the parameters controlling an artificial intelligence are altered by the process of learning.
Daniel M Karlsson
Expanding and overwriting
Perfectly encapsulating the hope/fear that AI could serve as a catalyst for human development, Swedish producer Daniel M Karlsson is a transhumanist and singularitarian whose glitchy IDM is based heavily on algorithmic composition. His LP from last June, Expanding and overwriting, hits the listener with chaotic volleys of beats, keys, and samples. All of them are frantic and fractured, made all the more inhumanly complex by the TidalCycles live coding language that allows Karlsson to cue patterns together simply by typing text.
The RAiMONES
I’m Alive!
The result of a Swiss engineer training an artificial neural network on 130 songs recorded by the actual Ramones, the single and its B-side “Mental Case” are AI-generated imitations of the kinds of track the seminal punk band might have produced had they not broken up in 1996. Their simple three-chord verses and two-note riffing may hold few musical surprises, yet their infectiousness suggests that in the future our musical idols will increasingly “record” beyond the grave.
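The engineer's exact setup isn't detailed here, but the general recipe (train a small network to predict the next token in a corpus, then sample new sequences from it) can be sketched in PyTorch. The three-chord "corpus" below is an invented placeholder, not actual Ramones data:

```python
import torch
import torch.nn as nn

vocab = ["A", "D", "E", "|"]  # chords plus a bar separator (placeholder data)
stoi = {s: i for i, s in enumerate(vocab)}
corpus = "A A D E | A A D E | D D A E |".split()
data = torch.tensor([stoi[s] for s in corpus])

class ChordRNN(nn.Module):
    """Tiny recurrent model that predicts the next chord token."""
    def __init__(self, n_vocab, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(n_vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_vocab)

    def forward(self, x, state=None):
        h, state = self.rnn(self.embed(x), state)
        return self.out(h), state

model = ChordRNN(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train to predict token t+1 from tokens up to t.
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
for _ in range(200):
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample a new chord sequence from the trained model.
tok, state, song = data[:1].unsqueeze(0), None, []
for _ in range(16):
    logits, state = model(tok, state)
    probs = torch.softmax(logits[0, -1], dim=0)
    tok = torch.multinomial(probs, 1).unsqueeze(0)
    song.append(vocab[tok.item()])
print(" ".join(song))  # a new three-chord "song" in the corpus's style
```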
sevenism
red blues
In contrast to records composed by AI or algorithms, red blues by U.K. producer sevenism is a little different, in that the role of artificial intelligence revolved around synthesizing entirely new instruments. The album was made using NSynth, a Google-made program that uses machine learning and neural networks to fuse samples of instruments into new sounds. This gives the 16-track LP an otherworldly, alien atmosphere, as the program combines the “underlying qualities” of a vibraphone and clarinet, for example, to create swirling, cavernous drones.
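NSynth itself learns its sound embeddings with a WaveNet autoencoder; as a crude, runnable stand-in, the sketch below morphs two tones by interpolating their magnitude spectra rather than simply crossfading the waveforms. This is a rough spectral morph, not NSynth's actual method:

```python
import numpy as np

def encode(audio):
    """Stand-in 'embedding': the sound's magnitude and phase spectra."""
    spec = np.fft.rfft(audio)
    return np.abs(spec), np.angle(spec)

def fuse(a, b, mix=0.5):
    """Blend two sounds in spectral space rather than summing waveforms."""
    (mag_a, phase_a), (mag_b, _) = encode(a), encode(b)
    mag = (1 - mix) * mag_a + mix * mag_b   # interpolate the magnitudes
    return np.fft.irfft(mag * np.exp(1j * phase_a))

# Placeholder tones standing in for instrument samples.
t = np.linspace(0, 1, 16000, endpoint=False)
tone_a = np.sin(2 * np.pi * 220 * t)                  # a sine "clarinet"
tone_b = 0.5 * np.sign(np.sin(2 * np.pi * 330 * t))   # a square "vibraphone"
hybrid = fuse(tone_a, tone_b)                         # an in-between timbre
```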
Miri Kat
Pursuit – تلاش
A rising fixture in London’s underground live coding scene, Miri Kat released her debut EP Pursuit – تلاش at the beginning of December. Not only was its restless post-industrial beauty live coded using a variety of open-source programs, but it also benefits from Kat’s experience as an engineer of electronic musical instruments. This means that such tracks as “d1574n7” and “fl33” shift and unfurl with the manic energy of much algorithmic music, yet they also have an ethereal quality that rewards closer listening.
Algobabez
Burning Circuits
Despite the tongue-in-cheek name, the U.K. duo’s Burning Circuits album from last April is a cerebral effort that marries pounding EDM with knotty braindance. Its two 20-minute tracks use live coding to generate convulsive epics, showing that despite the abstract coldness of its method, algorithmic music can be highly visceral.
Automatonism
AUTOMATONISM #2
Automatonism is the most recent pseudonym for Swedish producer Johan Eriksson, a PhD student who's focused on "making music with self-playing machines." It's also the name of the modular synthesizer Eriksson has engineered, which he showcases via his two self-titled releases from last year. On AUTOMATONISM #1 and #2, generative algorithms create spiky IDM that bubbles and froths electronically, its scatter-gun beats evoking a world where the products of machines become too much for humans to fully process.
WK569
Omaggio a Marino Zuccheri
WK569 are an Italian trio who in October released Omaggio a Marino Zuccheri, an homage to the sound engineer of the same name who worked at the hallowed RAI Electronic Music Studio in Milan. Focused on the "interaction of man and machine," the single-track EP sees the trio feed the self-generating output of structured algorithms into a number of vintage synths and samplers. The result is at times obliquely beautiful and at others almost terrifying, as jagged electronic notes flutter, collect, and scatter unpredictably over the course of 18 gripping minutes.