BIRL wind instrument prototype by Snyderphonics.
Development by Jeff Snyder and Danny J. Ryan.
A little explanation:
Super quick rough edit to show to some people I know, I’ll edit a real demo together at some point.
It is designed with a very basic wind-instrument key layout, kind of like a recorder/flute/saxophone, with octave keys for the thumb. It lacks many of the special-purpose saxophone keys (i.e. the multiple pinky keys and whatnot).
The cool thing is that you can “train” it to use any fingering (within the limits of the physical keys) that you want. As in: put your fingers in some pattern and tell the software “this should be an Eb”. Then you can store those as presets in the instrument, and it will remember and recall them. It’s using a neural net to learn what you want.
The other neat feature is that I am trying to get a much better sense of embouchure. Right now it’s in an early stage, but you can see it working in the video. You can put your mouth in some particular position (say, tightening your lips) and say “when I do this I want the sound to get buzzier”. The neural net learns these things too. Then it creates a large space of possibilities: I’m trying to approach the wealth of sound and technique options that something like a saxophone has, so that you can really shape the tone and get microtones and squeals and fluttertongue and everything. The idea is that every new “training” you do will have a whole world of extended techniques alongside the expected behavior you have trained.
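The trainable-fingering idea can be sketched in code. The BIRL uses a neural net; as a stand-in, this hypothetical Python sketch uses nearest-neighbour matching over key states (all names and values here are my own assumptions, not the actual software):

```python
# Hypothetical sketch of trainable fingerings: the BIRL trains a neural net;
# here a nearest-neighbour lookup over key states stands in for it.

def hamming(a, b):
    """Number of differing key states between two fingerings."""
    return sum(x != y for x, y in zip(a, b))

class FingeringMap:
    def __init__(self):
        self.presets = {}  # fingering tuple -> note name

    def train(self, keys, note):
        """Put your fingers in some pattern and say "this should be an Eb"."""
        self.presets[tuple(keys)] = note

    def recall(self, keys):
        """Return the note whose trained fingering is closest to the input."""
        best = min(self.presets, key=lambda p: hamming(p, tuple(keys)))
        return self.presets[best]

fm = FingeringMap()
fm.train([1, 1, 1, 0, 0, 0], "Eb")
fm.train([1, 1, 1, 1, 1, 1], "C")
print(fm.recall([1, 1, 1, 0, 0, 1]))  # closest trained preset wins -> Eb
```

A real neural net would generalize more smoothly between patterns, which is what makes the embouchure training possible too.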
For synthesis, it’s using some simple FM stuff and a physical modelling patch (which is amazing with it).
I’m working on a schedule to try and come out with it as a small-run product via a kickstarter campaign by the end of summer.
Let me know if you’re interested and I’ll put you on the list to get updates!
Jeff at snyderph
The next module by the Italian company Soundmachines might BLOW YOUR MIND in more ways than one…
With their latest announced module you will be able to control your modular (or your synth) with your brain.
The Mindwave Mobile was chosen for its low cost and rather good EEG capabilities.
The headset is connected to the module via a Bluetooth link, so that NO wires are running from the modular to your head.
Soundmachines is expecting to release this module around June 2014.
The Ototo is an experimental PCB-based synthesizer, created by design and invention studio Dentaku.
Ototo allows you to combine sensors, inputs and touchpads to create your own electronic musical instrument. Ototo is designed to let anyone unpack a kit and interact with sound however they want to, no soldering or coding required.
- 12 key capacitive touch keyboard (1 octave) with connectors
- 4 sensor inputs (5V analog)
- Onboard speaker and 3.5mm headphone output
- Powered by 2 x AA batteries or micro USB
- No coding required
- 128 Mbit Flash memory
Theometrica (first prototype)
99 prepared needles, distance rangefinder, sound-generation software.
Oscar Palou & Alexander Müller-Rakow.
Exhibited at ImageTransfer, Universität der Künste, Berlin, November 2013.
Inspired by acupuncture, this sculptural instrument is designed to control sound elements in real time by fixing specific pins into a spinning disk. 99 prepared needles, a distance rangefinder and Max/MSP sound generation software work together to let you use the pins to create geometric shapes that are translated into sound.
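As a rough illustration of that pipeline, here is a hypothetical sketch (the sensor range, note range, and mapping are my assumptions, not details of the actual Max/MSP patch) that turns distance readings into pitches:

```python
# Hypothetical mapping, not the sculpture's actual patch: convert rangefinder
# distance readings (pin heights passing the sensor as the disk spins) into
# pitches, the way the description above suggests.

def distance_to_midi(mm, near=50, far=300, low=48, high=84):
    """Linearly map a distance in millimetres onto a MIDI note range."""
    mm = max(near, min(far, mm))                   # clamp to the sensor's range
    frac = (mm - near) / (far - near)
    return round(low + (1 - frac) * (high - low))  # closer pin = higher note

readings = [60, 120, 290]                          # one revolution of the disk
print([distance_to_midi(r) for r in readings])     # -> [83, 74, 49]
```

Different pin patterns on the disk would then trace out repeating melodic shapes, one note per pin per revolution.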
Quite amusing actually
*No copyright infringement intended. All rights belong to THE PRODIGY except the sound design. Created for parody purposes.
Ryan McGee has released a new sound design app for iOS, VOSIS, that synthesizes sound based on the greyscale image pixel data from photos or live video input.
VOSIS is an interactive image sonification interface that creates complex wavetables by raster scanning greyscale image pixel data.
Using a multi-touch screen to play image regions of unique frequency content rather than a linear scale of frequencies, it becomes a unique performance tool for experimental and visual music. A number of image filters controlled by multi-touch gestures add variation to the sound palette. On a mobile device, parameters controlled by the accelerometer add another layer of expressivity to the resulting audio-visual montages.
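The raster-scanning idea is simple enough to sketch. This hypothetical Python fragment (not code from VOSIS itself) flattens a greyscale image region into one wavetable cycle:

```python
# Illustrative sketch (not VOSIS's actual code): raster-scan a greyscale
# image region into a wavetable, as the app's description suggests.

def image_to_wavetable(pixels):
    """Flatten rows of 0-255 greyscale values into one [-1, 1] wave cycle."""
    flat = [p for row in pixels for p in row]  # raster scan, row by row
    return [p / 127.5 - 1.0 for p in flat]     # centre the values around zero

# A tiny 4x4 "image": a bright bar on a dark background.
region = [
    [0, 0, 0, 0],
    [255, 255, 255, 255],
    [255, 255, 255, 255],
    [0, 0, 0, 0],
]
wave = image_to_wavetable(region)
print(len(wave), min(wave), max(wave))  # 16 samples spanning -1.0 to 1.0
```

Looping such a table at an audible rate turns the image's brightness pattern directly into timbre, which is why different image regions have "unique frequency content".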
Animated/Directed by Ambar Navarro
Music by Hyperbubble
Additional Animation by
Quique Rivera Rivera
Isabela Dos Santos
Post-Prod done by Julian Petschek
Shot at BE∆RD H∆US
In this experiment with the Jasuto Modular synth (http://www.jasuto.com/main/), Mark has a Noise node modulating the FM of a triangle wave and a sample. The Noise node is moving slowly via motion modulation. He added an Accelerometer device, which modulates the LFO FM of a Square node and the delay time. His iPad is mounted on a mic stand with The Gig Easy Mount (https://thegigeasy.com/), which lets him move the iPad around pretty aggressively without worrying about a drop. The video also illustrates that motion is relative: even though he manually moves the Noise node, it keeps moving with the recorded motion relative to where he drags it.
Heineken are pairing scent with music to enhance the experience of dance events. With the Scenthesizer, the two claim to be ‘pushing boundaries in music and scent’.
In a nutshell, the system allows on-the-fly mixing of scents, which are then delivered on cue into the audience. The idea is to take DJing, which already incorporates sound and light, and add controlled scents to the experience.
The accelerating oscillations of the washing machine spin cycle are mimicked with the Korg Monotribe. The dual-speed setting on the LFO lets the performer push the rate up into the audio range, producing FM-synthesis-like textures. This piece was shown, along with similar work, at my MFA thesis exhibition in May 2013.
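The technique described above, an LFO accelerated into the audio range to create FM textures, can be sketched in code. This is a minimal illustration only; all rates and the modulation index are assumptions, and nothing here is Monotribe firmware:

```python
import math

# Hedged sketch of the spin-cycle idea: frequency-modulate a carrier while
# sweeping the modulator from LFO speed up into the audio range.

def fm_sweep(carrier_hz=220.0, mod_start_hz=2.0, mod_end_hz=200.0,
             mod_index=50.0, seconds=1.0, sr=8000):
    out = []
    mod_phase = 0.0
    total = int(seconds * sr)
    for n in range(total):
        t = n / total
        mod_hz = mod_start_hz + (mod_end_hz - mod_start_hz) * t  # accelerate
        mod_phase += 2 * math.pi * mod_hz / sr
        # Slow modulator = vibrato; audio-rate modulator = FM sidebands.
        out.append(math.sin(2 * math.pi * carrier_hz * n / sr
                            + mod_index / mod_hz * math.sin(mod_phase)))
    return out

samples = fm_sweep()
print(len(samples))  # 8000 samples, all within [-1, 1]
```

Early in the sweep the result sounds like accelerating vibrato; once the modulator crosses into the audio range the sidebands fuse into a new timbre, which is the effect the Monotribe's dual-speed LFO makes playable.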
NOTE: This is a binaural recording mixed with a monophonic, analog, synthesizer performance. Please use headphones to experience the binaural effect. For more info please visit http://audiocookbook.org/duet-for-syn…
This video created by Caleb Coppock (vimeo.com/calebcoppock) illustrates the time scope (from dusk until dawn) of the “In Habit: Living Patterns” performance at Northern Spark, June 2012. The music for the video was composed by John Keston for the sixteenth and final vignette in the sequence titled, “Energy.”
In the video I have focused on illustrating how one might use two iPad synthesizer apps and a hardware synthesizer together: Cassini, Sunrizer, and the MKS-80. The BS3X serves as both the iPad interface and the MKS-80 controller. No computer is required, but a simple change of cable allows a computer to be integrated into this setup, because the MOTU UltraLite interface and standalone mixer has MIDI I/O. In other words, two MIDI interfaces are still necessary with a computer, but prior to this experiment I was only using the BS3X as a controller for the MKS-80 and bypassing its class-compliant USB MIDI interface functionality. Since the USB hub was required, I also added the QuNexus to the setup, dedicated to feeding notes into the arpeggiator in Cassini. The keyboard controller was split so that in the low end I could play the MKS-80 effect and then tweak it with the BS3X knobs and sliders as it decayed. In the upper end of the same keyboard I played a lead sound programmed in Sunrizer.
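The keyboard split described above amounts to simple note routing. As a hedged sketch, assuming a split point and channel numbers that are my own choices rather than the settings used in the video:

```python
# Illustrative sketch of a keyboard split like the one described above;
# the split point and channel numbers are assumptions, not actual settings.

SPLIT_POINT = 60  # middle C: notes below go to the MKS-80, at/above to Sunrizer

def route_note(note, velocity):
    """Return (destination, MIDI channel) for a note from the split keyboard."""
    if note < SPLIT_POINT:
        return ("MKS-80", 1)   # low end: play the effect, tweak as it decays
    return ("Sunrizer", 2)     # upper end: the lead sound

print(route_note(48, 100))  # ('MKS-80', 1)
print(route_note(72, 100))  # ('Sunrizer', 2)
```

In hardware this routing happens inside the controller's split/zone feature rather than in software, but the logic is the same: each zone transmits on its own MIDI channel.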