The Rubik’s Cube Sequencer. A camera and light are mounted over a game board, and a color-recognition algorithm plays sounds in a Chrome web sequencer. The horizontal axis is a 16-beat loop; the vertical axis runs from low pitch to high pitch, or selects different drum sounds. There are 16 cubes (4×4×4) in different colors, each color representing a musical instrument: white is drums, green is bass, orange is percussion, red is synth 1, yellow is synth 2, blue is synth 3.
To compose: place the right cube in the right box, then twist the cube to get the desired combination. Changing one instrument affects the other instruments. Composing music becomes a puzzle, a very difficult puzzle. But what is the difference between playing a game and playing music? And who is the winner?
The Rubik’s Cube Sequencer
Concept and sound design by Håkan Lidbo, programming and visual design by Per-Olov Jernberg, game board by Romeo Brahasteanu.
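A hypothetical sketch of the data the color-recognition step might hand to the sequencer (the function, grid layout, and names are illustrative assumptions, not the project’s code): each detected cube-face color maps to an instrument, with the column giving the beat position and the row the pitch.

```python
# Color-to-instrument mapping as described in the text.
INSTRUMENT = {
    "white": "drums", "green": "bass", "orange": "percussion",
    "red": "synth1", "yellow": "synth2", "blue": "synth3",
}

def read_board(colors):
    """colors: a grid (rows of color names) seen by the camera.

    Returns, per instrument, the list of (column=beat, row=pitch) cells
    where that instrument's color was detected. The 4x4 layout here is
    a simplification of the real board.
    """
    hits = {}
    for row, line in enumerate(colors):
        for col, color in enumerate(line):
            hits.setdefault(INSTRUMENT[color], []).append((col, row))
    return hits
```
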
Description: A guitar is designed to be strummed; piano keys are pressed; drum pads are tapped; violins are bowed. But what if a single instrument could be played with any of these techniques? That’s exactly what we’re creating – one instrument that lets you be the whole band.
- Play any instrument, style, and sound with a single device that connects directly to your smartphone, tablet, or computer.
- Our patented multi-instrument technology transforms the INSTRUMENT 1 into a guitar, violin, bass, piano, drum machine… it’s any instrument you want it to be.
- Plug in and play hundreds of apps like GarageBand with universal musical gestures: strumming, tapping, bowing, sliding, and more.
- Digital string-like interface works with any MIDI-compatible software.
- The unique ergonomic design can be held in multiple positions, and is fully ambidextrous.
- Design new instruments and custom tunings via the Artiphon companion app.
- It’s compact, portable, durable, self-powered, and simple.
- Designed and engineered in Nashville, TN.
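The MIDI compatibility mentioned above means the device speaks the standard MIDI protocol. As an illustration only (not Artiphon’s code), a note-on event is three bytes: a status byte carrying the channel, then the note number and velocity.

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI note-on message: status 0x90 | channel,
    then 7-bit note number and 7-bit velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel, note):
    """Note-off uses status 0x80 | channel; velocity 0 is conventional."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])
```
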
The sound of empty space explores relationships between microphones, speakers, and surrounding acoustic environments through controlled, self-generating microphone feedback.
By amplifying and aestheticizing the acoustic inactivity between technological “inputs” and “outputs” – stand-ins for their corporeal correlates, the ear and the mouth – the notion of a causal sound-producing object is challenged, and questions are posed as to the status of the ‘amplified’.
By building flawed technological systems and nullifying their intended potential for communication, the ear is turned towards the empty space between components; to the unique configurations of each amplifying assemblage.
In each of the interrelated works – pieces which are equal parts banal, inventive, and absurd – sound is revealed not as a distinct object or autonomous event, but rather as a mutable product of interdependent networks of physical, cultural and economic relations.
Windows\system32\shell32.dll played as audio data.
When binary files (like EXE or DLL) are imported into an audio editor (such as Sound Forge) as raw audio data, they sometimes turn into weird and interesting music tracks.
In this case, shell32.dll became a 17.5-minute ambient progressive-noise track (I’ve removed silent gaps and some noise). Here are the logical parts of the track:
0:00–0:25 – “intro”
0:25–1:35 – flanger-like repeating pattern
1:45–4:00 – the interesting part with many different patterns
4:00–8:10 – an amusing pattern of a series of notes which repeats itself many times in different “arrangements”
8:30–14:25 – a chilly Decepticon-style pattern
14:25–15:00 – “outro”
Overall, such “music” will probably be a hit among the robots in the future (after humanity is exterminated, of course).
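The trick generalizes to any file. A minimal Python sketch (file names are placeholders) that interprets a file’s raw bytes as unsigned 8-bit mono PCM, the same way an editor’s “import raw data” mode does:

```python
import wave

def binary_to_wav(src_path, dst_path, rate=22050):
    """Interpret any file's raw bytes as unsigned 8-bit mono PCM
    and wrap them in a playable WAV container."""
    with open(src_path, "rb") as f:
        raw = f.read()
    with wave.open(dst_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)      # 1 byte per sample -> WAV treats it as unsigned
        w.setframerate(rate)
        w.writeframes(raw)
    return len(raw)            # number of samples written

# e.g. binary_to_wav("shell32.dll", "shell32_as_audio.wav")
```
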
Inspired by: http://www.youtube.com/watch?v=2xZgCV…
Just made a really weird and awesome discovery.
If you import an EXE file into an audio program as audio data, you hear all kinds of cool stuff. The most awesome by far for me was MS Paint.
It’s probably one of the coolest things I’ve ever heard from something like this. All I did was master the audio slightly to make it less harsh on the ears, and remove a long section of noise.
Here it is on SoundCloud:
This is the Windows 7 x64 edition of mspaint.exe. I used Adobe Audition to import and edit the audio, but one could just as easily use Audacity’s “Import Raw Data” feature. I imported this as 8-bit, 22,050 Hz stereo audio. I faded in the beginning and removed a long section of noise partway through. Here’s how it was before any editing: http://www.youtube.com/watch?v=zd5PyY…
acrylic, conductive paint, colored pencil, nails, MaKey MaKey electronics, wood panel
Touch the painting to release its music. Slide your finger across it to play melodies, play chords with your palm, improvise a duet. We’ve combined traditional painting techniques with conductive paint and capacitive touch sensing. The result is a new form of visual music, combining composition and instrument into a playable score.
This project is a collaboration with Eric Rosenbaum.
“For everyone who asked how this was done, I finally put together an Instructable about it: instructables.com/id/Touch-Sensitive-Musical-Painting/. Enjoy!”
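A MaKey MaKey registers touches as ordinary keyboard events, so wiring painted regions to notes can be as simple as a key-to-pitch table. A hypothetical sketch (the key names and note choices are illustrative, not the artists’ code):

```python
# Hypothetical mapping from MaKey MaKey key events to MIDI notes.
# The MaKey MaKey shows up as a USB keyboard, so each conductive
# region of the painting emits a normal key press when touched.
NOTE_FOR_KEY = {
    "left": 60,   # C4
    "right": 62,  # D4
    "up": 64,     # E4
    "down": 65,   # F4
    "space": 67,  # G4
}

def midi_to_freq(note):
    """Convert a MIDI note number to frequency in Hz (A4 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def on_key(key):
    """Return the frequency to sound for a touch, or None if unmapped."""
    note = NOTE_FOR_KEY.get(key)
    return None if note is None else midi_to_freq(note)
```
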
“moDernisT” was created by salvaging the sounds and images lost to compression by the MP3 and MP4 codecs. The audio consists of the material discarded when the song “Tom’s Diner” was compressed to MP3 –
a song famously used as one of the main controls in the listening tests that developed the MP3 encoding algorithm.
Here we find the form of the song intact, but the details are just remnants of the original. The video was created by Takahiro Suzuki in response to the audio track and then run through a similar process after being compressed to MP4. Thus, both audio and video are the “ghosts” of their respective compression codecs. Version one.
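Conceptually, the “lost” material is a difference signal: subtract the codec’s decoded output from the original, sample by sample. A toy sketch of that idea (coarse quantization stands in for a real MP3 round-trip here, and real codecs add a decoder delay that must be aligned before subtracting):

```python
def compression_residual(original, decoded):
    """Return what the codec threw away: original minus decoded.

    Both inputs are sequences of samples and must be time-aligned;
    the result is the 'ghost' signal the piece is built from.
    """
    n = min(len(original), len(decoded))
    return [o - d for o, d in zip(original[:n], decoded[:n])]

# Toy example: a 'codec' that just quantizes to the nearest 0.5.
orig = [i / 4 - 1.0 for i in range(8)]           # ramp from -1.0 to 0.75
lossy = [round(s * 2) / 2 for s in orig]         # stands in for MP3
ghost = compression_residual(orig, lossy)
```
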
Tristan Perich: Microtonal Wall
1,500 speakers, each playing a single microtonal frequency, collectively spanning 4 octaves. Commissioned in part by Rhizome, with additional support from the Addison Gallery.
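With 1,500 speakers spanning four octaves, each neighbor is 4/1499 of an octave apart. A sketch of that frequency layout (the 220 Hz base frequency is an assumption for illustration; the source does not state it):

```python
def microtonal_frequencies(n=1500, base=220.0, octaves=4):
    """Divide `octaves` octaves into n logarithmically even steps,
    one frequency per speaker."""
    return [base * 2 ** (octaves * k / (n - 1)) for k in range(n)]
```
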
Video walkthrough from the exhibition “Microtonal Array” (with work by Tristan Perich and Sarah Rara)
A city as a living entity, a bold splash of raw color framed by an austere moon-rise and moon-set.
Interactive robot orchestra
more info and images – vtol.cc/filter/works/nayral-ro
The orchestra consists of 12 robotic manipulators of various designs, each equipped with a speaker. Combined, the manipulators form a single multi-channel electronic sound orchestra. Through the constant displacement of the speakers in space, the changing direction of the sound, and the algorithms that generate the compositions, the orchestra creates a dynamic soundscape. Interaction with the orchestra happens through a Leap Motion controller, which allows the robots and the sound to be controlled with simple hand gestures in the air – much like conducting an orchestra.
The project is based on the idea of combining modern musical, computational, interactive, and robotic concepts and approaches to create works of art. In many ways it is inspired by well-known works presented in the recent past, such as Pendulum Choir (2011) and Mendelssohn Effektorium (2013). However, Nayral Ro differs from these projects in many ways. Its algorithmic system, in which sound and musical composition are produced, runs in real time, and the acoustic environment changes simultaneously with the process of creating the musical piece. The whole process is also fully subordinate to the “conductor”, whose role combines those of composer, performer, and operator at the same time.
More sophisticated versions – more subtly exploiting the Leap Motion’s sensitivity to movement and changes in sound – are planned for future development.
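A hypothetical sketch of the kind of gesture-to-sound mapping a Leap Motion enables (the axis assignments and ranges are assumptions for illustration, not the project’s actual code):

```python
def gesture_to_params(x, y, z):
    """Map a normalized hand position (each axis in [0.0, 1.0]) to
    orchestra parameters: x selects which of the 12 robots leads,
    y sets pitch over a 3-octave range from A2, z sets amplitude.
    """
    speaker = min(11, int(x * 12))     # index of the leading robot
    pitch = 110.0 * 2 ** (y * 3)       # 110 Hz (A2) up to 880 Hz (A5)
    amplitude = max(0.0, min(1.0, z))  # clamp to [0, 1]
    return speaker, pitch, amplitude
```
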
video by Nikolai Zheludovich
About the synth below:
“The Wavetable Synthesizer utilizes what I have dubbed ‘creative synthesis’. Instead of indirectly affecting the waveform shape with envelopes, LFOs, and oscillators, the Wavetable Synthesizer lets the user directly control the waveform shape using 12 sliders and two knobs. Eight of the sliders control the overall shape of the wave (acting much like ‘attractors’ on a line), while two knobs control how the points are interpolated (smooth, triangular, or square) and at what resolution (from fine to coarse). The four sliders labelled ‘A’, ‘D’, ‘S’, and ‘R’ are used for attack, decay, sustain, and release respectively (more information on that here). Users can access saved waveforms with a bank of buttons and, when one is selected, can watch the controller transform automatically to those settings. The rightmost knob controls the transition speed between presets.”
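The A, D, S, and R sliders describe a standard envelope. As a sketch of how such an envelope evaluates over time (a generic ADSR, not the synth’s own code):

```python
def adsr(t, attack, decay, sustain, release, note_off):
    """Generic ADSR envelope: amplitude in [0, 1] at time t (seconds).

    `sustain` is a level; `attack`, `decay`, and `release` are durations;
    `note_off` is when the key is released.
    """
    if t < attack:                       # ramp 0 -> 1
        return t / attack
    if t < attack + decay:               # ramp 1 -> sustain level
        frac = (t - attack) / decay
        return 1.0 + frac * (sustain - 1.0)
    if t < note_off:                     # hold while the key is down
        return sustain
    frac = (t - note_off) / release      # ramp sustain -> 0
    return max(0.0, sustain * (1.0 - frac))
```
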
“The software for the Wavetable is where all sound synthesis takes place. Due to familiarity and available resources, Processing (Java) was used. The first challenge in creating the software was to create a smooth waveform from only eight distinct points. In order to accomplish this, an interpolation function needed to be utilized. While the math to perform these interpolations is readily accessible, Java also has libraries available to aid in this. The image to the right shows comparisons of the Apache Lagrange (white), spline (green), and linear (red) interpolations. Ultimately, the spline and linear interpolations were used in conjunction with a “square wave” interpolation.
In order to then play the waveform, the Minim wavetable function was implemented. The MidiBus library handled all MIDI communication.”
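The interpolation step described above can be sketched as follows (a generic Python version rather than the project’s Java, with cosine easing standing in for the spline):

```python
import math

def interpolate_wave(points, resolution, mode="linear"):
    """Expand a handful of slider values into a full wavetable.

    `points` are the slider heights (e.g. the 8 shape sliders),
    `resolution` is the output table length, and `mode` controls how
    samples between sliders are filled in: "linear" (triangular),
    "smooth" (cosine ease, standing in for a spline), or "square".
    """
    n = len(points)
    table = []
    for i in range(resolution):
        pos = i * (n - 1) / (resolution - 1)   # position along the sliders
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        t = pos - lo                           # fraction between two sliders
        if mode == "square":
            table.append(points[lo] if t < 0.5 else points[hi])
        else:
            if mode == "smooth":
                t = (1 - math.cos(math.pi * t)) / 2
            table.append(points[lo] * (1 - t) + points[hi] * t)
    return table
```
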