Jack Kalish

Sound Affects

An interface that creates music from emotional expression.

http://vimeo.com/33702287

Classes
New Interfaces for Musical Expression


The affective states of the user are extracted, analyzed, amplified, and sonified; emotional expression becomes a soundscape. Sound Affects was originally developed as a performance piece for New Interfaces for Musical Expression.

Background
I have been doing research on the nature of emotion over the past year. Emotion can be defined as the conscious recognition of physiological changes in the body that occur subconsciously in response to external stimuli. There are many ways in which such physiological states can be quantitatively measured. In this project I decided to explore a handful of these: facial expression, heart rate, and galvanic skin response.

Audience
Sound Affects was originally designed as a performance piece featuring two on-stage performers. For the ITP Winter show, I decided to create an interactive version of this project to allow others to create music with their emotions. It is meant for everyone to play with!

User Scenario
A user is presented with a screen, a GSR sensor, a pair of headphones, and a keyboard. The user sees their own reflection in the computer screen. When the user makes a face at the screen (a smile, for example), different sounds are triggered in response to different facial expressions. Placing a hand on the GSR sensor produces notes in response to the user's skin conductivity. The user also hears my heartbeat as it is amplified through a microphone, and playing the keyboard changes the pitch and timbre of the heartbeat. The user may be able to attach a stethoscope to hear their own heartbeat as well.

Implementation
Sound Affects uses ofxFaceTracker, an open-source face-tracking addon for openFrameworks, to read facial expressions.

To get the heartbeat sound, a stethoscope head is outfitted with a microphone. The signal is then filtered and distorted using Max/MSP.

I also built a simple GSR sensor using some custom electronics, an Arduino, and Processing. The Arduino reads the GSR signal and sends it to Processing, which does some averaging and then outputs MIDI, which GarageBand captures to trigger the musical notes. Rough sketches of the face-tracking and GSR-reading code are below.
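The following is a minimal sketch of how the face-tracking side could trigger a sound, following the usual ofxFaceTracker example pattern. The gesture names come from the addon, but the smile heuristic, threshold, and sound file are illustrative assumptions, not the exact mapping used in the piece.

// ofApp.h -- sketch only; thresholds and sample names are placeholders
#pragma once
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();

    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    ofSoundPlayer smileSound;
};

// ofApp.cpp
#include "ofApp.h"

void ofApp::setup() {
    cam.setup(640, 480);
    tracker.setup();
    smileSound.load("smile.wav"); // placeholder sample
}

void ofApp::update() {
    cam.update();
    if (cam.isFrameNew()) {
        tracker.update(ofxCv::toCv(cam));
        if (tracker.getFound()) {
            // A wide mouth relative to its height reads roughly as a smile.
            float smile = tracker.getGesture(ofxFaceTracker::MOUTH_WIDTH)
                        - tracker.getGesture(ofxFaceTracker::MOUTH_HEIGHT);
            if (smile > 14 && !smileSound.isPlaying()) { // hand-tuned threshold
                smileSound.play();
            }
        }
    }
}

void ofApp::draw() {
    cam.draw(0, 0);       // the user's "reflection"
    tracker.draw();       // face mesh overlay as feedback
}

On the GSR side, a hedged sketch of the Arduino half might look like the code below: it just reads the sensor's voltage divider and streams raw values over serial for Processing to smooth and convert to MIDI. The pin, baud rate, and sample interval are assumptions; the original circuit details aren't documented here.

// Arduino: read the GSR voltage divider and stream values over serial.
const int GSR_PIN = A0;   // assumed analog input for the sensor

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(GSR_PIN);  // 0-1023, higher = more conductive skin
  Serial.println(raw);            // Processing averages these readings and
  delay(50);                      // maps them to MIDI notes for GarageBand
}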

Conclusion
I learned a lot about real-time audio signal processing and audio hardware, areas I had little experience with before.