London’s Barbican recently launched an exhibition called AI: More Than Human, which showcases the latest developments in creative and scientific applications of AI. One of the installations you may come across features Mimic, an AI platform that lets you create and remix sounds.

At the installation, Massive Attack’s Mezzanine album is fed into Mimic’s AI network, which then creates new sounds based on the iconic record. Visitors to the Mimic booth at AI: More Than Human will also get to physically interact with the Mezzanine project.

Mimic, or Musically Intelligent Machines Interacting Creatively, grew out of a UK sound research project led by Professor Mick Grierson of the University of the Arts London. And to further demonstrate the platform’s machine-learning possibilities, a four-way collaboration was struck between Grierson, students from UAL and Goldsmiths College, Andrew Melchior of Third Space Agency, and Massive Attack’s Robert Del Naja, aka 3D.

MIMIC is a web platform for the artistic exploration of musical machine learning and machine listening. We have designed this collaborative platform as an interactive online coding environment, engineered to bring new technologies in AI and signal processing to artists, composers, musicians and performers all over the world. The MIMIC platform has a built-in audio engine, plus machine learning and machine listening tools that make it easy for creative coders to get started using these techniques in their own artistic projects. The platform also includes various examples of how to integrate external machine learning systems for sound, music and art making. These examples can be forked and further developed by the users of the platform.
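To give a flavour of what browser-based machine listening can look like, the sketch below uses only the standard Web Audio API: it estimates the spectral centroid of the live microphone input each frame and uses it to steer a filter on a simple synth. This is a generic illustration under assumed parameters, not MIMIC’s own audio engine or API.

```typescript
// Generic browser machine-listening sketch (Web Audio API only):
// estimate the spectral centroid of the microphone signal and use it
// to drive the cutoff of a lowpass filter on a sawtooth oscillator.
const ctx = new AudioContext();
const analyser = ctx.createAnalyser();
analyser.fftSize = 2048;

// A sawtooth oscillator through a lowpass filter, as something to control.
const osc = ctx.createOscillator();
osc.type = 'sawtooth';
const filter = ctx.createBiquadFilter();
filter.type = 'lowpass';
osc.connect(filter).connect(ctx.destination);
osc.start();

async function start() {
  // Route the microphone into an AnalyserNode for feature extraction.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  ctx.createMediaStreamSource(stream).connect(analyser);
  requestAnimationFrame(listen);
}

function listen() {
  const bins = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(bins);

  // Crude spectral centroid: amplitude-weighted mean bin frequency in Hz.
  const binHz = ctx.sampleRate / analyser.fftSize;
  let num = 0;
  let den = 0;
  bins.forEach((v, i) => { num += v * i * binHz; den += v; });
  const centroid = den > 0 ? num / den : 0;

  // "Brighter" input opens the filter; darker or quieter input closes it.
  filter.frequency.setTargetAtTime(Math.max(100, centroid), ctx.currentTime, 0.05);
  requestAnimationFrame(listen);
}

start();
```

In a fuller project, the extracted feature would typically feed a learned mapping rather than a fixed rule, which is where the platform’s machine learning tools come in.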

Find out details about the exhibition at barbican.org.uk.

Background on MIMIC: Musically Intelligent Machines Interacting Creatively
This project is a direct response to significant changes taking place in the domain of computing and the arts. Recent developments in Artificial Intelligence and Machine Learning are leading to a revolution in how music and art are being created by researchers (Broad and Grierson, 2016). However, this technology has not yet been integrated into software aimed at creatives. Due to the complexities of machine learning, and the lack of accessible tools, such approaches are currently usable only by experts. In order to address this, we will create new, user-friendly technologies that enable the lay user – composers as well as amateur musicians – to understand and apply these new computational techniques in their own creative work.

The potential for machine learning to support creative activity is increasing at a significant rate, both in terms of creative understanding and practical applications. Emerging work in the field of music and sound generation extends from musical robots to generative apps, and from advanced machine listening to devices that can compose in any given style. By leveraging the internet as a live software ecosystem, the proposed project examines how such technology can best reach artists, and live up to its potential to fundamentally change creative practice in the field. Rather than focussing on the computer as an original creator, we will create platforms where the newest techniques can be used by artists as part of their day-to-day creative practices.

Current research in artificial intelligence, and in particular machine learning, has led to an incredible leap forward in the performance of AI systems in areas such as speech and image recognition (Cortana, Siri etc.). Google and others have demonstrated how these approaches can be used for creative purposes, including the generation of speech and music (DeepMind’s WaveNet and Google’s Magenta), images (Deep Dream) and game intelligence (DeepMind’s AlphaGo). The investigators in this project have been using Deep Learning, Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Long Short-Term Memory networks (LSTMs), and other approaches to develop intelligent systems that can be used by artists to create sound and music. We are already among the first in the world to create reusable software that can ‘listen’ to large collections of sound recordings, and use these as examples to create entirely new recordings at the level of audio. Our systems produce outcomes that outperform many other previously funded research outputs in these areas.
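As an illustration of the kind of recurrent model referred to here, the sketch below trains a small LSTM to predict the next audio sample from a short window of previous samples, then feeds its own predictions back in to generate new material. It is a toy reconstruction using TensorFlow.js with assumed parameters (window length, layer sizes, training schedule), not the investigators’ actual systems.

```typescript
import * as tf from '@tensorflow/tfjs';

// Toy next-sample predictor: an LSTM reads a window of past audio samples
// and predicts the following one. Feeding predictions back in repeatedly
// lets the model generate new audio in the style of its training material.
const WINDOW = 64; // assumed context length, in samples

function buildModel(): tf.Sequential {
  const model = tf.sequential();
  model.add(tf.layers.lstm({ units: 128, inputShape: [WINDOW, 1] }));
  model.add(tf.layers.dense({ units: 1, activation: 'tanh' })); // next sample in [-1, 1]
  model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });
  return model;
}

// Slice a mono recording (Float32Array of samples in [-1, 1]) into
// (window, next-sample) training pairs.
function makeDataset(audio: Float32Array) {
  const xs: number[][][] = [];
  const ys: number[] = [];
  for (let i = 0; i + WINDOW < audio.length; i += WINDOW) {
    xs.push(Array.from(audio.slice(i, i + WINDOW), v => [v]));
    ys.push(audio[i + WINDOW]);
  }
  return { xs: tf.tensor3d(xs), ys: tf.tensor2d(ys, [ys.length, 1]) };
}

async function trainAndGenerate(audio: Float32Array): Promise<number[]> {
  const model = buildModel();
  const { xs, ys } = makeDataset(audio);
  await model.fit(xs, ys, { epochs: 5, batchSize: 32 });

  // Generate new samples by feeding predictions back into the input window.
  const out = Array.from(audio.slice(0, WINDOW));
  for (let i = 0; i < 4096; i++) {
    const input = tf.tensor3d([out.slice(-WINDOW).map(v => [v])]);
    const pred = model.predict(input) as tf.Tensor;
    out.push(pred.dataSync()[0]);
    pred.dispose();
    input.dispose();
  }
  return out.slice(WINDOW); // newly generated audio samples
}
```

Systems such as WaveNet work at a much larger scale and with more sophisticated architectures, but the underlying idea of learning to continue audio from examples is the same.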

In this three-year project, we will develop and disseminate creative systems that can be used by musicians and artists in the creation of entirely new music and sound. We will show how such approaches can affect the future of other forms of media, such as film and the visual arts. We will do so by developing a creative platform using the most accessible public forum available: the World Wide Web. We will achieve this through the development of a high-level live coding language for novice users, with simplified metaphors for the understanding of complex techniques including deep learning. We will also release the machine learning libraries we create for more advanced users who want to use machine learning technology as part of their creative tools.
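To make the idea of “simplified metaphors” concrete, the sketch below shows one possible shape for a novice-facing wrapper: a handful of plainly named calls that hide the tensors, losses and training loops underneath. The names (`record`, `train`, `map`) and the nearest-neighbour mapping are hypothetical illustrations, not the project’s eventual language or API.

```typescript
// Hypothetical novice-facing wrapper: the kind of simplified metaphor a
// high-level live coding language could expose, hiding the underlying
// machine learning machinery behind three calls.
class Learner {
  private examples: { input: number[]; output: number[] }[] = [];

  // "Show" the learner an input/output pair, e.g. a sound feature and a synth setting.
  record(input: number[], output: number[]): void {
    this.examples.push({ input, output });
  }

  // In a real system this would fit a regression model or neural network;
  // in this sketch the mapping is computed lazily at query time instead.
  train(): void {}

  // Map a new input to an output by returning the nearest recorded example.
  map(input: number[]): number[] {
    const dist = (a: number[], b: number[]) =>
      Math.hypot(...a.map((v, i) => v - b[i]));
    const best = this.examples.reduce((p, c) =>
      dist(c.input, input) < dist(p.input, input) ? c : p);
    return best.output;
  }
}

// A novice's entire script might then read:
const learner = new Learner();
learner.record([0.1, 0.9], [440]); // bright, loud input -> high pitch
learner.record([0.8, 0.2], [110]); // dark, quiet input  -> low pitch
learner.train();
console.log(learner.map([0.2, 0.7])); // -> [440]
```

The point of such a wrapper is pedagogical: the same three-verb metaphor can later be backed by progressively more capable models without changing the code a novice writes.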

The project will involve end-users throughout, incorporating graduate students, professional artists, and participants in online learning environments. We will disseminate our work early, gaining the essential feedback required to deliver a solid final product and outcome. The efficacy of such techniques has been demonstrated with systems such as Sonic Pi and Ixi Lang, within a research domain already supported by the AHRC through the Live Coding Network (AH/L007266/1), and by the EC in the H2020 project RAPID-MIX. Finally, this research will strongly contribute to dialogues surrounding the future of music and the arts, consolidating the UK’s leadership in these fields.