We’ve reported on artificial intelligence in music before, as well as Google’s attempts in the field, and now we are seeing some concrete results. Google’s new A.I.-powered synthesizer, the NSynth Super, is a device that can fuse the sounds of up to four different instruments to create a completely new sonic palette. Check out the video below, in which London-based producer Hector Plimmer explores new sounds generated by the NSynth machine learning algorithm.

For this experiment, 16 original source sounds across a range of 15 pitches were recorded in a studio and then fed into the NSynth algorithm to precompute the new sounds. The outputs, over 100,000 new sounds, were then loaded into the experience prototype. Each dial was assigned 4 source sounds. Using the dials, musicians can select the source sounds they would like to explore between, then drag a finger across the touchscreen to navigate the new sounds, which combine the acoustic qualities of the 4 source sounds. NSynth Super can be played via any MIDI source, such as a DAW, sequencer, or keyboard.
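To make the touchscreen navigation concrete, here is a minimal sketch assuming a unit-square touch surface with one source sound pinned to each corner and standard bilinear weighting. The real device looks up precomputed NSynth outputs rather than mixing live, so the function below is purely illustrative and not part of the NSynth Super codebase.

```python
# Hypothetical sketch, not NSynth Super firmware. It assumes a
# unit-square touchscreen with one source sound at each corner and
# uses bilinear weighting to turn a finger position into four blend
# weights (the actual device navigates precomputed NSynth outputs).

def corner_weights(x: float, y: float) -> dict:
    """Map a touch position (x, y), each in [0, 1], to weights for the
    four corner sources. The weights always sum to 1."""
    if not (0.0 <= x <= 1.0 and 0.0 <= y <= 1.0):
        raise ValueError("touch position must lie on the unit square")
    return {
        "bottom_left":  (1 - x) * (1 - y),
        "bottom_right": x * (1 - y),
        "top_left":     (1 - x) * y,
        "top_right":    x * y,
    }

# A touch dead centre weights all four sources equally (0.25 each);
# dragging into a corner fades the other three sources out.
print(corner_weights(0.5, 0.5))
print(corner_weights(1.0, 0.0))  # pure bottom-right source
```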

NSynth, the algorithm that drives the hardware, is a product of Magenta, Google’s research project for exploring how machine learning can inform artistic and musical tools. According to Google, “NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds.” That means you could be listening to one part guitar and one part trumpet at the same time. For the time being, NSynth is an open-source project: you can find all the details and equipment needed to build it yourself here. But you won’t be finding this synth on shop shelves anytime soon.

Google has also released a more in-depth video explaining the thinking behind NSynth and the Magenta project.

NSynth Super is part of an ongoing experiment by Magenta: a research project within Google that explores how machine learning tools can help artists create art and music in new ways. Technology has always played a role in creating new types of sounds that inspire musicians—from the sounds of distortion to the electronic sounds of synths. Today, advances in machine learning and neural networks have opened up new possibilities for sound generation. Building upon past research in this field, Magenta created NSynth (Neural Synthesizer). It’s a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds, and then create a completely new sound based on these characteristics. Rather than combining or blending the sounds, NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds—so you could get a sound that’s part flute and part sitar all at once.
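To illustrate that distinction, the sketch below contrasts plain waveform mixing with interpolation in a learned embedding space. The encode and decode functions are hypothetical stand-ins for a trained neural audio autoencoder in the spirit of NSynth; they are not real Magenta API calls.

```python
import numpy as np

def encode(audio: np.ndarray) -> np.ndarray:
    # Stand-in for a trained encoder that maps audio to a learned
    # embedding of its acoustic qualities (not a real Magenta call).
    raise NotImplementedError("placeholder for a trained model")

def decode(embedding: np.ndarray) -> np.ndarray:
    # Stand-in for the matching decoder that renders an embedding
    # back into audio.
    raise NotImplementedError("placeholder for a trained model")

def naive_mix(a: np.ndarray, b: np.ndarray, t: float = 0.5) -> np.ndarray:
    # Plain waveform blending: both sounds simply play at once, like
    # two faders on a mixing desk. No new timbre is created.
    return (1 - t) * a + t * b

def nsynth_style_blend(a: np.ndarray, b: np.ndarray,
                       t: float = 0.5) -> np.ndarray:
    # NSynth-style synthesis: interpolate between the *embeddings* of
    # the two sounds, then decode the result. The output is a single
    # new sound sharing acoustic qualities of both inputs (part flute,
    # part sitar), not two sounds layered on top of each other.
    z = (1 - t) * encode(a) + t * encode(b)
    return decode(z)
```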

Since the release of NSynth, Magenta has continued to experiment with different musical interfaces and tools to make the output of the NSynth algorithm more easily accessible and playable.

As part of this exploration, they’ve created NSynth Super in collaboration with Google Creative Lab. It’s an open-source experimental instrument that gives musicians the ability to make music using completely new sounds generated by the NSynth algorithm from 4 different source sounds. The experience prototype (pictured above) was shared with a small community of musicians to better understand how they might use it in their creative process.
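As a rough illustration of the “playable via any MIDI source” idea mentioned earlier, here is a hypothetical host-side sketch using the mido Python MIDI library. The play_precomputed_sound function and the default input port are placeholders, not part of any actual NSynth Super interface.

```python
import mido  # real, widely used Python MIDI library

def play_precomputed_sound(note: int, velocity: int) -> None:
    # Placeholder: in a real instrument this would trigger the
    # precomputed NSynth output for the given pitch.
    print(f"trigger precomputed sound for MIDI note {note} "
          f"at velocity {velocity}")

# Listen on the default MIDI input port and trigger a sound for each
# incoming note-on message (velocity 0 note-ons are note-offs).
with mido.open_input() as port:
    for msg in port:  # blocks, yielding incoming MIDI messages
        if msg.type == "note_on" and msg.velocity > 0:
            play_precomputed_sound(msg.note, msg.velocity)
```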

About Magenta:

Magenta is a research project exploring the role of machine learning in the process of creating art and music. Primarily this involves developing new deep learning and reinforcement learning algorithms for generating songs, images, drawings, and other materials. But it’s also an exploration in building smart tools and interfaces that allow artists and musicians to extend (not replace!) their processes using these models. Magenta was started by some researchers and engineers from the Google Brain team, but many others have contributed significantly to the project. We use TensorFlow and release our models and tools in open source on our GitHub.