acrylic, conductive paint, colored pencil, nails, MaKey MaKey electronics, wood panel
Touch the painting to release its music. Slide your finger across it to play melodies, play chords with your palm, improvise a duet. We’ve combined traditional painting techniques with conductive paint and capacitive touch sensing. The result is a new form of visual music, combining composition and instrument into a playable score.
This project is a collaboration with Eric Rosenbaum.
“For everyone who asked how this was done, I finally put together an Instructable about it: instructables.com/id/Touch-Sensitive-Musical-Painting/. Enjoy!”
“moDernisT” was created by salvaging the sounds and images lost to compression by the MP3 and MP4 codecs. The audio is composed of material discarded during MP3 compression of the song “Tom’s Diner”, famously used as one of the main controls in the listening tests that developed the MP3 encoding algorithm.
Here we find the form of the song intact, but the details are just remnants of the original. The video was created by Takahiro Suzuki in response to the audio track and then run through a similar algorithm after being compressed to MP4. Thus, both audio and video are the “ghosts” of their respective compression codecs. Version one.
Tristan Perich: Microtonal Wall
1,500 speakers, each playing a single microtonal frequency, collectively spanning 4 octaves. Commissioned in part by Rhizome, with additional support from the Addison Gallery.
Video walkthrough from the exhibition “Microtonal Array” (with work by Tristan Perich and Sarah Rara)
A city as a living entity, a bold splash of raw color framed by an austere moon-rise and moon-set.
Interactive robot orchestra
more info and images – vtol.cc/filter/works/nayral-ro
The orchestra consists of 12 robotic manipulators of various designs, each of which is equipped with a speaker. Combined, the manipulators form a single multi-channel electronic sound orchestra. Because the speakers are constantly displaced in space, changing the direction of the sound, and because the compositions are generated algorithmically, the orchestra creates a dynamic soundscape. Interaction with the orchestra happens through a Leap Motion controller, which lets a performer control the robots and the sound with simple hand gestures in the air, much like conducting an orchestra.
The project is based on the idea of combining modern music with computational, interactive, and robotic concepts and approaches to the creation of works of art. In many ways it is inspired by well-known works presented in the recent past, such as Pendulum Choir (2011) and Mendelssohn Effektorium (2013). However, Nayral Ro differs from these projects in several respects. Its algorithmic system, which produces the sound and the musical composition, runs in real time, and the acoustic environment changes simultaneously with the creation of the musical piece. The whole process is also fully subordinated to the “conductor”, whose role combines those of composer, performer, and operator at the same time.
More sophisticated versions, revealing more subtly the potential of the Leap Motion for mapping movement to changes in sound, are planned for future development.
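As a loose illustration of the gesture-to-sound interaction described above (this is not the project's actual code), here is a minimal Java sketch of how a conductor's palm position might be mapped to orchestra-wide parameters. In the real installation the coordinates would come from the Leap Motion SDK; here they are plain arguments, and every range and constant is an assumption made for the example.

```java
// Hypothetical sketch: mapping a conductor's palm position to sound parameters
// for twelve robots. All ranges and constants are illustrative assumptions.
public class ConductorMapping {
    static final int ROBOTS = 12;   // twelve manipulators, as described above

    // Map palm height (mm above the sensor) to a master volume of 0..1.
    static double volumeFromHeight(double palmHeightMm) {
        double v = (palmHeightMm - 100.0) / 300.0;   // assumed useful range: 100-400 mm
        return Math.max(0.0, Math.min(1.0, v));
    }

    // Map lateral palm position (mm) to how strongly each robot is emphasized,
    // sweeping attention across the orchestra like a conducting gesture.
    static double[] emphasisFromX(double palmXmm) {
        double centre = (palmXmm + 200.0) / 400.0 * (ROBOTS - 1); // assumed range: -200..200 mm
        double[] emphasis = new double[ROBOTS];
        for (int i = 0; i < ROBOTS; i++) {
            double d = Math.abs(i - centre);
            emphasis[i] = Math.exp(-d * d / 2.0);    // robots nearest the hand respond most
        }
        return emphasis;
    }

    public static void main(String[] args) {
        System.out.println("volume at 250 mm: " + volumeFromHeight(250));
        System.out.println("emphasis of robot 0 with hand far left: " + emphasisFromX(-200)[0]);
    }
}
```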
video by Nikolai Zheludovich
About the synth below:
“The Wavetable Synthesizer utilizes what I have dubbed ‘creative synthesis’. Instead of indirectly affecting waveform shape with envelopes, LFOs, and oscillators, the Wavetable Synthesizer allows the user to directly control the waveform shape using 12 sliders and two knobs. Eight of the sliders control the overall shape of the wave (acting much like ‘attractors’ on a line), while two knobs control how the points are interpolated (smooth, triangular, or square) and at what resolution (from fine to coarse). The four sliders labelled ‘A’, ‘D’, ‘S’, and ‘R’ are used for attack, decay, sustain, and release respectively (more information on that here). Users can access saved waveforms with a bank of buttons, and when selected, can watch the controller transform automatically to these settings. The rightmost knob allows for control of the transition speed between presets.”
“The software for the Wavetable is where all sound synthesis takes place. Due to familiarity and available resources, Processing (Java) was used. The first challenge in creating the software was to create a smooth waveform from only eight distinct points. In order to accomplish this, an interpolation function needed to be utilized. While the math to perform these interpolations is readily accessible, Java also has libraries available to aid in this. The image to the right shows comparisons of the Apache Lagrange (white), spline (green), and linear (red) interpolations. Ultimately, the spline and linear interpolations were used in conjunction with a “square wave” interpolation.
In order to then play the waveform, the Minim wavetable function was used. The MidiBus library handled all MIDI communication.”
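As a rough sketch of the interpolation step described above (assuming the “spline” mentioned refers to Apache Commons Math's spline interpolator; the table size, slider values, and class name are illustrative rather than the project's own), eight slider positions could be expanded into a playable wavetable like this:

```java
import org.apache.commons.math3.analysis.interpolation.SplineInterpolator;
import org.apache.commons.math3.analysis.polynomials.PolynomialSplineFunction;

// Minimal sketch: turn eight slider positions into a 512-sample wavetable
// via cubic spline interpolation, roughly as the post describes.
public class WavetableSketch {
    static final int TABLE_SIZE = 512;        // assumed table resolution
    static final float[] SLIDERS = {          // eight "attractor" points, range -1..1
        0.0f, 0.8f, 0.3f, -0.5f, -1.0f, -0.2f, 0.6f, 0.0f
    };

    public static void main(String[] args) {
        double[] x = new double[SLIDERS.length];
        double[] y = new double[SLIDERS.length];
        for (int i = 0; i < SLIDERS.length; i++) {
            x[i] = i / (double) (SLIDERS.length - 1);  // spread the control points over [0, 1]
            y[i] = SLIDERS[i];
        }

        PolynomialSplineFunction spline = new SplineInterpolator().interpolate(x, y);

        float[] table = new float[TABLE_SIZE];
        for (int i = 0; i < TABLE_SIZE; i++) {
            double t = i / (double) (TABLE_SIZE - 1);
            table[i] = (float) spline.value(t);        // one audio sample per table slot
        }

        System.out.println("first samples: " + table[0] + ", " + table[1] + ", " + table[2]);
    }
}
```

The resulting float array could then be handed to a wavetable oscillator (for example Minim's Wavetable/Oscil, as the maker describes) and regenerated whenever a slider or knob moves.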
Sound Installation, 44 speakers, wood, black laminate, archival inkjet print on transparent, Variable dimensions.
This installation, composed of two different parts, touches upon the multifaceted political, social, and cultural aspects of conflict. The first part of the installation consists of dozens of amplifiers whose fronts are covered with a photographic print of drum leather. The sounds emanating from the amplifiers are those of voices repeating the words “yes” and “no” in different languages. The effect is that of a cultural and aesthetic clash between elements suspended in a state of continuous struggle and confrontation, while building up to a final conflict.
The second part of the installation consists of embroidered ink drawings. Although the ink stains are arbitrarily created and cannot be controlled, their outlines are demarcated by the act of embroidery, which comes to define and domesticate them. In this context, the embroidery appears as a violent act, which injures the canvas in a desperate attempt to give form to the inherently formless stains.
Whereas the circular stretches of drum leather are made to fit into a square frame, the square canvas that forms the support for the ink stains is forced into circular embroidery hoops. These inverse relations between the supports and frames constitute a formal conflict. At the same time, the ink stains resembling maps of charted territories allude to additional, political and social conflicts.
The Metaphase Sound Machine is a kind of homage to the ideas of the American physicist Nick Herbert, who in the 1970s created both the Metaphase Typewriter and the Quantum Metaphone (a speech synthesizer). These were among the first attempts to put the phenomenon of quantum entanglement into practice, and among the first steps towards the creation of a quantum computer. The experimental devices, however, did not confirm the theoretical research, and Herbert’s obsession with metaphysics resulted in the publication of several of his works on the metaphysical in quantum physics, which led to a serious loss of interest in the ideas of quantum communication. At one point in the course of his experiments, Herbert hacked into a university computer in an attempt to establish contact with the spirit of the illusionist Harry Houdini on the centenary of his birth.
In his device, in order to achieve a quantum-entangled state, Herbert used radioactive thallium as a source, monitored by a Geiger radiation counter. The time interval between pulses was chosen as the conversion code. Several psychics participated in the experiments, trying to influence the endless stream of random anagrams emerging from the typewriter, or to cause a “ghost voice” to be heard from the metaphone. The scientists also conducted sessions to summon the “spirit” of a colleague who had recently died and who had known about the typewriter. In 1985 Herbert wrote a book about the metaphysical in physics. In general, his invention and articles rather severely compromised the ideas of quantum communication in the eyes of potential researchers, and by the end of the 20th century no substantial progress in this direction had been made.
The Metaphase Sound Machine is an object with six rotating disks. Each disk is equipped with an acoustic sound source (a speaker) and a microphone. Each microphone is connected, via a computer and the rotary axis, to the speakers on the disks. A Geiger-Müller counter is also placed at the center of the installation, detecting ionizing radiation in the surrounding area. The intervals between detected particles influence the rotation velocity of each disk. Essentially, the object is an audio-kinetic installation in which sound is synthesized from the feedback produced by the microphones and speakers on the rotating disks. Feedback whistles are used as triggers for more complex sound synthesis. Additional harmonic signal processing, as well as the volatility of the dynamic system, leads to endless variations of sound. The form of the object refers to the common symbolic notation of quantum entanglement as a biphoton: crossing disks of orbits.
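A minimal sketch of the Geiger-to-rotation mapping described above (purely illustrative: the ranges, smoothing factor, and class name are assumptions, not the artist's code):

```java
// Illustrative sketch: one plausible way the interval between Geiger counter
// pulses could be mapped to the rotation velocity of the six disks.
public class MetaphaseRotation {
    static final int DISKS = 6;
    static final double MIN_RPM = 5.0;    // assumed slowest rotation
    static final double MAX_RPM = 60.0;   // assumed fastest rotation
    static double[] rpm = new double[DISKS];

    // Shorter intervals between detected particles -> faster target rotation.
    static double rpmFromInterval(double intervalSeconds) {
        double scaled = Math.min(1.0, intervalSeconds / 2.0);  // assume 0-2 s is the useful range
        return MAX_RPM - scaled * (MAX_RPM - MIN_RPM);
    }

    // Each new pulse nudges one disk toward its new target speed,
    // so the system drifts rather than jumps.
    static void onGeigerPulse(int diskIndex, double intervalSeconds) {
        double target = rpmFromInterval(intervalSeconds);
        rpm[diskIndex] += 0.2 * (target - rpm[diskIndex]);
    }

    public static void main(String[] args) {
        onGeigerPulse(0, 0.15);  // short interval: disk 0 drifts toward a fast target
        onGeigerPulse(1, 1.80);  // long interval: disk 1 drifts toward a slow target
        System.out.println("disk 0: " + rpm[0] + " rpm, disk 1: " + rpm[1] + " rpm");
    }
}
```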
more info – vtol.cc/filter/works/metaphase-sound-machine
Cymatics is the science of visualizing sound waves.
From the album ‘Solar Echoes’.
Download the video in 4k. All of the science experiments in the video are real. Watch behind the scenes and see how it was made.
Directed by ShahirDaud.com
Cinematographer: Timur Civan
The Rosetta mission has detected a mysterious signal coming from Comet 67P/Churyumov-Gerasimenko.
The mission has five instruments in the Rosetta Plasma Consortium (RPC) that measure the plasma environment surrounding the comet.
Plasma is a charged gas, and the RPC is tasked with understanding variations in the comet’s activity, how 67P’s jets of vapour and dust interact with the solar wind, and the dynamic structure of the comet’s nucleus and coma.
But when recording signals in the 40-50 millihertz frequency range, the RPC scientists stumbled on a surprise — the comet was singing, they report.
Through some kind of interaction in the comet’s environment, 67P’s weak magnetic field seems to be oscillating at low frequencies. In an effort to better understand this unique ‘song’, mission scientists have increased the frequency 10,000 times to make it audible to the human ear.
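Scaled up by that factor, the 40-50 millihertz oscillations come out at roughly 400-500 hertz, comfortably within the range of human hearing.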
First detected in August as Rosetta approached the comet from a distance of 100 kilometres, this magnetic oscillation has continued.
Rosetta scientists speculate that the oscillations may be driven by the ionisation of neutral particles from the comet’s jets.
As the neutral particles are released into space, they collide with high-energy particles from interplanetary space and become ionised. Because it is electrically charged, the resulting plasma then interacts with the cometary magnetic field, causing oscillations. But to draw any conclusions about this, further work is needed.
“This is exciting because it is completely new to us,” says Karl-Heinz Glaßmeier, head of Space Physics and Space Sensorics at the Technische Universität Braunschweig, Germany.
“We did not expect this and we are still working to understand the physics of what is happening.”
Rosetta is currently lining up to deploy its robotic Philae lander to the comet at 20.00 AEDT.
During landing manoeuvres, the RPC is expected to help track Philae’s descent to the comet’s surface. The time between separation and landing is expected to take around seven hours.
It takes 28 minutes and 20 seconds for signals to travel at the speed of light from Rosetta to mission control in Germany.