Short version: it's something that will help you 'see' and 'feel' music.
An immersive environment where user-generated audio is not just visualized but also sent back to the body as haptic feedback. I've been working with Phan on visualizing audio for ICM projects and decided to step it up a notch by combining the PComp and ICM finals to build the sensory audio enhancer.
Input: a controller/sequencer, operating a little like the Monome, that lets the user trigger and control pre-built layers of audio. A rough sketch of how that controller might talk to the laptop is below.
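To make that concrete, here's one way the grid could work. This is purely illustrative — the grid size, pin assignments, baud rate, and serial message format are all assumptions at this stage, not the final design. Each button toggles a step in one audio layer, Monome-style, and reports the change to the laptop as a three-byte serial message:

```cpp
// Illustrative Arduino sketch for a 4x8 button-grid sequencer.
// Assumptions: rows on pins 2-5, columns on pins 6-13 with internal
// pullups, and a [layer, step, on/off] byte triple sent per toggle.
const int ROWS = 4;                     // one row per audio layer
const int COLS = 8;                     // eight steps per layer
const int rowPins[ROWS] = {2, 3, 4, 5};
const int colPins[COLS] = {6, 7, 8, 9, 10, 11, 12, 13};
bool state[ROWS][COLS];                 // current on/off state of each step

void setup() {
  Serial.begin(57600);
  for (int r = 0; r < ROWS; r++) pinMode(rowPins[r], INPUT);   // idle rows float
  for (int c = 0; c < COLS; c++) pinMode(colPins[c], INPUT_PULLUP);
}

void loop() {
  for (int r = 0; r < ROWS; r++) {
    pinMode(rowPins[r], OUTPUT);
    digitalWrite(rowPins[r], LOW);      // activate this row for scanning
    for (int c = 0; c < COLS; c++) {
      if (digitalRead(colPins[c]) == LOW) {           // button pressed
        state[r][c] = !state[r][c];                   // toggle that step
        Serial.write(r);
        Serial.write(c);
        Serial.write(state[r][c] ? 1 : 0);
        delay(200);  // crude debounce; real code should track press/release
      }
    }
    pinMode(rowPins[r], INPUT);         // release the row (high impedance)
  }
}
```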
Output: an immersive visual environment plus haptic/tactile feedback strapped onto the body, one feedback channel per layer of audio. Here's a reference for the kind of animation we're going to be coding in Processing:
Here is the basic Processing sketch that responds to music. It needs to be beefed up and made more responsive, but the skeletal code is in place.
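For anyone curious, the core of an audio-reactive sketch like this is small. Below is a minimal version assuming the Minim library (installable through Processing's library manager) and the computer's line-in as the audio source — the actual project sketch may use a different library or input:

```processing
// Minimal audio-reactive sketch using Minim: FFT the incoming audio
// and draw one bar per frequency band.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup() {
  size(1024, 600);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.STEREO, 1024);  // listen to the line-in
  fft = new FFT(in.bufferSize(), in.sampleRate());
}

void draw() {
  background(0);
  fft.forward(in.mix);                       // analyze the current buffer
  stroke(255);
  for (int i = 0; i < fft.specSize(); i++) {
    // one vertical line per band, height scaled by the band's amplitude
    line(i * 2, height, i * 2, height - fft.getBand(i) * 8);
  }
}
```

Mapping those same frequency bands onto the haptic layers later should follow the same pattern: analyze once per frame, then route each band's amplitude to a different output.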
Here's the vibration motor responding to a trigger from Max.
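On the Arduino side, the trigger handling can stay tiny. Here's a sketch of the idea, assuming Max's [serial] object sends one byte per trigger and the motor sits on PWM pin 9 behind a transistor — the pin number, baud rate, and pulse length are all placeholders:

```cpp
// Arduino side of the Max -> motor link: each incoming byte fires a
// short, non-blocking vibration pulse whose intensity is the byte value.
const int MOTOR_PIN = 9;
const unsigned long PULSE_MS = 150;   // how long each buzz lasts

unsigned long pulseStart = 0;
bool pulsing = false;

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  Serial.begin(57600);                // must match the baud rate set in Max
}

void loop() {
  if (Serial.available() > 0) {
    int b = Serial.read();
    if (b > 0) {                      // any nonzero byte = trigger; the byte
      analogWrite(MOTOR_PIN, b);      // value doubles as vibration intensity
      pulseStart = millis();
      pulsing = true;
    }
  }
  // stop the motor once the pulse window elapses, without blocking the loop
  if (pulsing && millis() - pulseStart > PULSE_MS) {
    analogWrite(MOTOR_PIN, 0);
    pulsing = false;
  }
}
```

Keeping the loop non-blocking matters here: a `delay()` while the motor buzzes would stall serial reads and add exactly the kind of lag we're trying to avoid.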
This is going to be tricky, because the Gods of Processing/Arduino/Max/MadMapper all need to cooperate without introducing noticeable latency.