A city as a living entity, a bold splash of raw color framed by an austere moon-rise and moon-set.
Interactive robot orchestra
more info and images – vtol.cc/filter/works/nayral-ro
The orchestra consists of 12 robotic manipulators of various designs, each equipped with a sound-emitting speaker. Combined, the manipulators form a single multi-channel electronic sound orchestra. Because the speakers are constantly displaced in space, the direction of the sound changes along with the algorithms generating the compositions, and the orchestra creates a dynamic soundscape. A Leap Motion controller is used to interact with the orchestra: it lets the user control the robots and the sound with simple hand gestures in the air, much like conducting an orchestra.
The project is based on the idea of combining modern musical, computational, interactive and robotic concepts and approaches to create works of art. In many ways it is inspired by well-known recent works such as Pendulum Choir (2011) and Mendelssohn Effektorium (2013). However, Nayral Ro differs from these projects in several respects. Its algorithmic system, which produces the sound and the musical composition, runs in real time, and the acoustic environment changes simultaneously with the process of creating the piece. The whole process is also entirely controlled by the “conductor”, whose role thus combines those of composer, performer and operator at the same time.
Future development plans include more sophisticated versions that more subtly exploit the potential of Leap Motion for mapping movement to changes in sound.
video by Nikolai Zheludovich
About the synth below:
The Wavetable Synthesizer utilizes what I have dubbed ‘creative synthesis’. Instead of indirectly affecting the waveform shape with envelopes, LFOs, and oscillators, the Wavetable Synthesizer allows the user to control the waveform shape directly using 12 sliders and two knobs. Eight of the sliders control the overall shape of the wave (acting much like ‘attractors’ on a line), while two knobs control how the points are interpolated (smooth, triangular, or square) and at what resolution (from fine to coarse). The four sliders labelled ‘A’, ‘D’, ‘S’, and ‘R’ are used for attack, decay, sustain, and release respectively. Users can access saved waveforms with a bank of buttons and, when a preset is selected, can watch the controller transform automatically to those settings. The rightmost knob controls the transition speed between presets.
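The A/D/S/R sliders described above map naturally onto a piecewise-linear envelope. A minimal sketch of that idea, written in Python rather than the Processing (Java) used in the project; the function name and timings are illustrative, since the actual slider scaling is not documented:

```python
def adsr(t, a, d, s, r, note_len):
    """Piecewise-linear ADSR envelope value at time t (seconds).

    a, d, r are stage durations, s is the sustain level (0..1),
    note_len is when the key is released. All values illustrative.
    """
    if t < a:                                   # attack: ramp 0 -> 1
        return t / a
    if t < a + d:                               # decay: ramp 1 -> s
        return 1.0 - (1.0 - s) * (t - a) / d
    if t < note_len:                            # sustain: hold s
        return s
    if t < note_len + r:                        # release: ramp s -> 0
        return s * (1.0 - (t - note_len) / r)
    return 0.0                                  # note finished
```

Multiplying each output sample of the wavetable oscillator by this envelope value gives the shaped note.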
The software for the Wavetable is where all sound synthesis takes place. Processing (Java) was chosen out of familiarity and for its available resources. The first challenge in creating the software was to build a smooth waveform from only eight distinct points, which requires an interpolation function. While the math to perform these interpolations is readily accessible, Java libraries are also available to help. An accompanying image compared the Apache Commons Math Lagrange (white), spline (green), and linear (red) interpolations. Ultimately, the spline and linear interpolations were used in conjunction with a “square wave” interpolation.
To play the waveform, Minim’s wavetable function was used, and the MidiBus library handled all MIDI communication.
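As a rough illustration of the interpolation step, here is a minimal sketch (in Python, not the project’s actual Processing/Java code) that expands eight control points into a single-cycle wavetable. Only the linear and step (“square wave”) modes are shown; the function name and table size are assumptions:

```python
def build_wavetable(points, size=2048, mode="linear"):
    """Expand a few control points into a single-cycle wavetable.

    mode "linear" connects points with straight lines;
    mode "square" holds each point (step interpolation).
    A stand-in for the spline/linear/square modes in the post.
    """
    n = len(points)
    table = []
    for i in range(size):
        pos = i * n / size            # position in control-point space
        k = int(pos)
        frac = pos - k
        a = points[k % n]
        b = points[(k + 1) % n]       # wrap around for a seamless cycle
        if mode == "square":
            table.append(a)           # hold the current point
        else:
            table.append(a + (b - a) * frac)  # linear blend to the next
    return table

# Eight sliders set to a rising ramp produce a sawtooth-like table:
saw = build_wavetable([-1.0, -0.71, -0.43, -0.14, 0.14, 0.43, 0.71, 1.0])
```

In the real synth the resulting table would be handed to Minim’s wavetable oscillator for playback.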
Sound installation, 44 speakers, wood, black laminate, archival inkjet print on transparency, variable dimensions.
This installation, composed of two different parts, touches upon the multifaceted political, social, and cultural aspects of conflict. The first part consists of dozens of amplifiers whose fronts are covered with a photographic print of drum leather. The sounds emanating from the amplifiers are those of voices repeating the words “yes” and “no” in different languages. The effect is that of a cultural and aesthetic clash between elements suspended in a state of continuous struggle and confrontation, while building up to a final conflict.
The second part of the installation consists of embroidered ink drawings. Although the ink stains are arbitrarily created and cannot be controlled, their outlines are demarcated by the act of embroidery, which comes to define and domesticate them. In this context, the embroidery appears as a violent act, which injures the canvas in a desperate attempt to give form to the inherently formless stains.
Whereas the circular stretches of drum leather are made to fit into a square frame, the square canvas that forms the support for the ink stains is forced into circular embroidery hoops. These inverse relations between the supports and frames constitute a formal conflict. At the same time, the ink stains resembling maps of charted territories allude to additional, political and social conflicts.
The Metaphase Sound Machine is a kind of homage to the ideas of the American physicist Nick Herbert, who in the 1970s created both the Metaphase Typewriter and the Quantum Metaphone (a speech synthesizer). These were among the first attempts to put the phenomenon of quantum entanglement into practice, and among the first steps towards the creation of a quantum computer. The experimental devices, however, failed to confirm the theoretical research, and Herbert’s obsession with metaphysics resulted in the publication of several of his works on the metaphysical in quantum physics, which led to a serious loss of interest in the ideas of quantum communication. At one point in the course of his experiments, Herbert even hacked into a university computer in an attempt to establish contact with the spirit of the illusionist Harry Houdini on the centenary of his birth.
To achieve a quantum-entangled state, Herbert’s device used radioactive thallium as a source, monitored by a Geiger counter. The time interval between pulses served as the conversion code. Several psychics participated in the experiments, trying to influence the endless stream of random anagrams emerging from the typewriter, or to cause a “ghost voice” to be heard from the metaphone. The scientists also conducted sessions to summon the “spirit” of a recently deceased colleague who had known about the typewriter. In 1985 Herbert wrote a book on the metaphysical in physics. On the whole, his inventions and articles rather severely compromised the ideas of quantum communication in the eyes of potential researchers, and by the end of the 20th century no substantial progress in this direction had been made.
The Metaphase Sound Machine is an object with six rotating discs. Each disc is equipped with an acoustic sound source (a speaker) and a microphone, and each microphone is connected via a computer and the rotary axis to the speakers on the discs. A Geiger-Müller counter at the centre of the installation detects ionizing radiation in the surrounding area, and the intervals between detected particles determine the rotation velocity of each disc. Essentially, the object is an audio-kinetic installation in which sound is synthesized from the feedback produced by the microphones and speakers on the rotating discs. Feedback whistles serve as triggers for more complex sound synthesis, and additional harmonic signal processing, together with the volatility of the dynamic system, leads to endless variations of sound. The form of the object refers to the conventional symbolic notation of quantum entanglement as a biphoton: intersecting orbital discs.
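The mapping from Geiger pulses to disc speed could be as simple as an inverse relation: shorter inter-pulse intervals (more radiation) mean faster rotation. A hedged sketch in Python; the function name, speed range and scaling constant are all illustrative, not taken from the installation:

```python
def interval_to_rpm(interval_s, rpm_min=5.0, rpm_max=60.0, scale=2.0):
    """Map the time between two Geiger-Mueller pulses to a disc speed.

    Shorter intervals -> faster rotation, clamped to a safe range.
    All constants are illustrative assumptions.
    """
    speed = rpm_max / (1.0 + interval_s * scale)   # inverse mapping
    return max(rpm_min, min(rpm_max, speed))       # clamp to range
```

Each of the six discs would call this with the interval it last observed, so the whole kinetic system drifts with the ambient radiation.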
more info – vtol.cc/filter/works/metaphase-sound-machine
Cymatics is the science of visualizing sound waves.
From the album ‘Solar Echoes’.
Download the video in 4k. All of the science experiments in the video are real. Watch behind the scenes and see how it was made.
Directed by ShahirDaud.com
Cinematographer: Timur Civan
The Rosetta mission has detected a mysterious signal coming from Comet 67P/Churyumov-Gerasimenko.
The mission has five instruments in the Rosetta Plasma Consortium (RPC) that measure the plasma environment surrounding the comet.
Plasma is a charged gas, and the RPC is tasked with understanding variations in the comet’s activity, how 67P’s jets of vapour and dust interact with the solar wind, and the dynamic structure of the comet’s nucleus and coma.
But when recording signals in the 40-50 millihertz frequency range, the RPC scientists stumbled on a surprise — the comet was singing, they report.
Through some kind of interaction in the comet’s environment, 67P’s weak magnetic field seems to be oscillating at low frequencies. In an effort to better understand this unique ‘song’, mission scientists have increased the frequency 10,000 times to make it audible to the human ear.
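The scaling is simple arithmetic: multiplying the 40-50 millihertz oscillation by 10,000 lands it in the 400-500 Hz band, comfortably within human hearing. A quick check (function name is an invention for this sketch):

```python
def audible_frequency(f_mhz, factor=10_000):
    """Scale an oscillation given in millihertz up by `factor`, in Hz."""
    return f_mhz * factor / 1000.0  # mHz -> Hz, then scale

# The comet's 40-50 mHz magnetic oscillations become 400-500 Hz tones.
low = audible_frequency(40)   # 400.0 Hz
high = audible_frequency(50)  # 500.0 Hz
```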
First detected in August as Rosetta approached the comet from 100 kilometres, this magnetic oscillation has continued.
Rosetta scientists speculate that the oscillations may be driven by the ionisation of neutral particles from the comet’s jets.
As the neutral particles are released into space, they collide with high-energy particles from interplanetary space and become ionised. Because the resulting plasma is electrically charged, it interacts with the cometary magnetic field, causing oscillations. But further work is needed to draw any firm conclusions.
“This is exciting because it is completely new to us,” says Karl-Heinz Glaßmeier, head of Space Physics and Space Sensorics at the Technische Universität Braunschweig, Germany.
“We did not expect this and we are still working to understand the physics of what is happening.”
Rosetta is currently lining up to deploy its robotic Philae lander to the comet at 20.00 AEDT.
During landing manoeuvres, the RPC is expected to help track Philae’s descent to the comet’s surface. The time between separation and landing is expected to be around seven hours.
It takes 28 minutes and 20 seconds for signals to travel at the speed of light from Rosetta to mission control in Germany.
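That one-way light time pins down the spacecraft’s distance at the moment of landing. A worked check using standard physical constants (the variable names are my own):

```python
C_KM_S = 299_792.458            # speed of light, km/s
AU_KM = 149_597_870.7           # one astronomical unit, km

one_way_s = 28 * 60 + 20        # 28 min 20 s = 1700 s
distance_km = C_KM_S * one_way_s
distance_au = distance_km / AU_KM

# Roughly 510 million km from Earth, about 3.4 AU.
```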
Yaybahar is an electricity-free, entirely acoustic instrument designed by Görkem Şen. The vibrations from the strings are transmitted via coiled springs to the frame drums, whose membranes turn them into sound that echoes back and forth along the springs. The result is a unique listening experience with a hypnotic surround sound.
What you hear in this performance is captured in real time, without any additional effects or post audio processing.
For contact: firstname.lastname@example.org
Performance: Görkem Şen
Video: Levent Bozkurt
Video Editing: Olgu Demir
Sound Mix: Mert Aksuna
Place: Alişler Yurdu
It’s not the music that triggers dance, it’s the dance that creates music!
The challenge was to get the museum’s visitors dancing. But music alone wouldn’t be enough to spark a spontaneous groove, so we turned the tables and invented the STEPSEQUENCER – a tool that creates sound and music out of human movements.
Sounds and beats are created by the STEPSEQUENCER for as long as you interact with the exhibit’s stations: a round floor-projected “instrument” and three physical tools on the side – one for jumping, one for twisting and one for seesawing.
The pads of the round floor projection are activated by touch. If you mark a field with your hands or feet, a sound plays each time the rotating pointer passes over that field. Depending on the marked pad’s position, different sounds arise, each variously combinable with the others. Once the movements stop, the sounds fall silent.
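The rotating-pointer behaviour is essentially a radial step sequencer: one sweep of the pointer visits every pad in order and fires the sounds of the marked ones. A minimal sketch (names and pad count are assumptions, not taken from the exhibit):

```python
def run_rotation(marked_pads, n_pads=16):
    """Simulate one full pointer rotation over a ring of pads.

    marked_pads maps pad index -> sound name (a hypothetical mapping
    standing in for the visitor marking fields with hands or feet).
    Returns the (pad, sound) events in the order the pointer fires them.
    """
    return [(pad, marked_pads[pad])
            for pad in range(n_pads)   # pointer sweeps pads 0..n-1
            if pad in marked_pads]     # unmarked pads stay silent
```

Each rotation replays the same pattern until the visitor marks or clears fields, which is what makes the interaction feel like programming a drum machine with your body.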
Concept / Design / Code:
Johannes Timpernagel, Ingolf Heinsch, Sebastian Huber, Robert Pohle – schnellebuntebilder.de
Moritz Haberkorn – morast.at/
Jan Bernstein – quadrature.co/
büroberlin – bueroberlin.net/
Prof. Dr. Axel Buether
– – –
– – –
tanz! Wie wir uns und die Welt bewegen (dance! How we move ourselves and the world)
Oktober 2013 – Juli 2014
– – –
Rosenpictures – rosenpictures.com/
Excerpt from the audiovisual work Morpheme by Electric Indigo and Thomas Wagensommerer; original duration 28 minutes.
All sounds are derived from a nine-second audio recording of one phrase:
“To let noise into the system is a kind of fine art in both cybernetic terms and in terms of making music, too.” [Sadie Plant]
The source text is deconstructed and reassembled on a sonic, musical, linguistic and graphical level.
Morpheme can be performed as a multichannel audiovisual concert.