I created a system that seamlessly integrates music, visuals, and depth-sensor data so that a performer can act as a conductor, moving through a space to conduct music samples and generative visuals.
I made an immersive experience that requires no handheld device or VR headset. Having tried a VR headset myself, I felt disoriented and extremely uncomfortable, and I do not want to work in that medium. It is also important that the performer touch nothing: just as a conductor does not play a specific instrument in an orchestra but directs the performance, the performer directs the system through movement alone. My own experience with piano performance is that it is not very physically expressive; the piano is stationary, the performer is seated at the keyboard, and there is little room for showmanship or stage presence. I have long felt the need to perform my music in another way.

I explored how to make an immersive experience, including music and visual effects, using a touch-free interface that lets modern composers perform room-scale installations. A Kinect camera gathers depth-sensor data, which Processing passes into a system I built with Max/MSP/Jitter, a visual programming language that patches and connects the data to musical and visual elements. A Matrox TripleHead2Go drives a three-projector display to create a room-scale visual experience.
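At the heart of a system like this is a mapping from raw depth readings to normalized control values that can drive musical and visual parameters. The following is only an illustrative sketch, not the actual patch logic (which lives in Processing and Max/MSP/Jitter); the class name, method, and depth range are assumptions, written in Java, the language underlying Processing:

```java
// Hypothetical sketch: normalize a Kinect depth reading (in millimeters)
// into a 0.0-1.0 control value -- the kind of signal a Max/MSP patch
// might receive to drive a sample's volume or a visual parameter.
public class DepthMapper {
    // Assumed usable depth range for a first-generation Kinect,
    // roughly 500 mm (near) to 4000 mm (far).
    static final int NEAR_MM = 500;
    static final int FAR_MM  = 4000;

    // Clamp the reading to the usable range, then linearly scale it to
    // [0.0, 1.0], with a nearer performer producing a higher value.
    static double toControl(int depthMm) {
        int clamped = Math.max(NEAR_MM, Math.min(FAR_MM, depthMm));
        return 1.0 - (double) (clamped - NEAR_MM) / (FAR_MM - NEAR_MM);
    }

    public static void main(String[] args) {
        System.out.println(toControl(500));   // performer at the near limit
        System.out.println(toControl(4000));  // performer at the far limit
        System.out.println(toControl(2250));  // midway through the range
    }
}
```

In practice, a value like this would typically travel from Processing to Max/MSP as an OSC or MIDI message, where the patch routes it to the sampler or the Jitter visuals.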
Algorithmic Composition, Choreographic Interventions, Thesis