Max Horwich

A voice-controlled web VR experience invites you to sing along with a robotic choir


Join is an interactive musical experience for web VR. A choir of synthesized voices sings from all sides in algorithmically generated four-part harmony, while the user changes the environment by raising their own voice in harmony.

Inspired by the Sacred Harp singing tradition, the music is generated in real time from Markov chains derived from the original Sacred Harp songbook. Each of the four vocal melodies is played from one of the four corners of the virtual space toward the center, where the listener experiences the harmony in head-tracked 3D audio. A microphone input allows the listener to change the VR landscape with sound, transporting them as they join in song.
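The melodic generation described above can be sketched as a first-order Markov chain over solfege syllables. This is an illustrative reduction, not the project's actual code; the training phrases, function names, and chain order here are all assumptions.

```javascript
// Minimal first-order Markov chain over solfege syllables -- an
// illustrative sketch of the transition model described above.

// Build a transition table: for each note, count which notes follow it.
function buildChain(sequences) {
  const chain = {};
  for (const seq of sequences) {
    for (let i = 0; i < seq.length - 1; i++) {
      const from = seq[i], to = seq[i + 1];
      (chain[from] = chain[from] || {})[to] = (chain[from][to] || 0) + 1;
    }
  }
  return chain;
}

// Sample the next note in proportion to observed transition counts.
function nextNote(chain, current, rand = Math.random) {
  const options = chain[current];
  if (!options) return null;
  const total = Object.values(options).reduce((a, b) => a + b, 0);
  let r = rand() * total;
  for (const [note, count] of Object.entries(options)) {
    r -= count;
    if (r <= 0) return note;
  }
  return null;
}

// Walk the chain from a starting note to generate a new melody.
function generateMelody(chain, start, length) {
  const melody = [start];
  while (melody.length < length) {
    const next = nextNote(chain, melody[melody.length - 1]);
    if (!next) break;
    melody.push(next);
  }
  return melody;
}

// Example: train on two toy solfege phrases, then generate a new one.
const chain = buildChain([
  ['do', 're', 'mi', 'fa', 'sol'],
  ['sol', 'fa', 'mi', 're', 'do'],
]);
console.log(generateMelody(chain, 'do', 8));
```

In the actual piece, four such chains (one per vocal part) would run in parallel; constraining them to consonant intervals is what turns independent walks into four-part harmony.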

While the choir is currently programmed to sing only in solfege (as songs in the Sacred Harp tradition are traditionally sung for the first verse), I am in the process of teaching the choir to improvise lyrics as well as melodies. Using text also drawn from the Sacred Harp songbook, I am training a similar set of probability algorithms on words as well as notes. From there, I will use a sawtooth oscillator playing the MIDI Markov chain as the carrier, and a synthesized voice reading the text as the modulator, combining them into one signal to create a quadraphonic vocoder that synthesizes hymns in real time.
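The carrier/modulator idea can be reduced to a single band for illustration: the modulator's amplitude envelope shapes a sawtooth carrier. A real vocoder repeats this per frequency band; this sketch is a hedged, self-contained toy, with made-up frequencies and smoothing values, not the project's signal chain.

```javascript
// Single-band reduction of the vocoder idea: a sawtooth carrier is
// amplitude-shaped by the envelope of a modulator signal.

const SAMPLE_RATE = 44100;

// Naive sawtooth oscillator: ramps from -1 to 1 once per period.
function sawtooth(freq, n) {
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const phase = (i * freq / SAMPLE_RATE) % 1;
    out[i] = 2 * phase - 1;
  }
  return out;
}

// Envelope follower: rectify, then smooth with a one-pole lowpass.
function envelope(signal, smoothing = 0.995) {
  const out = new Float32Array(signal.length);
  let level = 0;
  for (let i = 0; i < signal.length; i++) {
    level = smoothing * level + (1 - smoothing) * Math.abs(signal[i]);
    out[i] = level;
  }
  return out;
}

// Scale the carrier sample-by-sample by the modulator's envelope.
function vocodeBand(carrier, modulator) {
  const env = envelope(modulator);
  return carrier.map((s, i) => s * env[i]);
}

// Example: one second of a 110 Hz saw shaped by a slow 2 Hz pulse.
const carrier = sawtooth(110, SAMPLE_RATE);
const modulator = sawtooth(2, SAMPLE_RATE).map(Math.abs);
const out = vocodeBand(carrier, modulator);
```

A full vocoder would split both signals through a bank of band-pass filters and apply one envelope per band, which is what makes the carrier "speak" the modulator's words.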

For this show, I present Join in a custom VR headset: a long, quilted veil affixed to a Google Cardboard. Rather than strapping across the wearer's face, this headset is draped over the head and hangs down, completely obscuring the face and much of the body. After experiencing the virtual environment, participants are invited to decorate the exterior of the headset with patches, fabric pens, or in any other way they see fit — leaving their own mark on a piece that hopefully left some mark on them.


Algorithmic Composition, Electronic Rituals, Oracles and Fortune-Telling, Expressive Interfaces: Introduction to Fashion Technology, Interactive Music, Open Source Cinema


Jesse Simpson

Plop is a physics-based sound application that generates unique musical scores from sound synthesis and samples.



Plop uses the matter.js library to create a physics world in which body collisions trigger a series of pre-recorded samples and synthesized sounds. The project is built entirely in JavaScript/p5.js and was used in my final performance for Algorithmic Composition, accompanied by another Max-based application that employed algorithmic techniques.
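In matter.js, a handler registered for the `collisionStart` event receives the colliding body pairs, and a small mapping function can turn each impact into playback parameters. The mapping below is hypothetical — the constants and function names are illustrative, not Plop's actual code.

```javascript
// Hypothetical collision-to-sound mapping for a matter.js world:
// heavier colliding pairs pick lower-indexed (e.g. deeper) samples,
// and faster impacts play louder. Constants are illustrative.
function collisionToPlayback(pair, sampleCount) {
  const mass = pair.bodyA.mass + pair.bodyB.mass;
  const speed = Math.hypot(
    pair.bodyA.velocity.x - pair.bodyB.velocity.x,
    pair.bodyA.velocity.y - pair.bodyB.velocity.y
  );
  const sampleIndex = Math.min(
    sampleCount - 1,
    Math.floor(mass / 10)            // assumed: 10 mass units per sample step
  );
  const volume = Math.min(1, speed / 20);  // assumed: full volume at speed 20
  return { sampleIndex, volume };
}

// In a p5.js sketch this would be wired to the real matter.js event:
// Matter.Events.on(engine, 'collisionStart', (event) => {
//   for (const pair of event.pairs) {
//     const { sampleIndex, volume } = collisionToPlayback(pair, samples.length);
//     samples[sampleIndex].setVolume(volume);  // p5.SoundFile
//     samples[sampleIndex].play();
//   }
// });
```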


Algorithmic Composition, The Nature of Code


Brandon Kader

I created a system that seamlessly integrates music, visuals, and depth-sensor data so that a performer can act as a conductor, moving through a space to conduct music samples and generative visuals.


I wanted to make an immersive experience without a handheld device or VR headset. Having tried a VR headset myself, I felt disoriented and extremely uncomfortable, and I knew I did not want to work in that medium. It is important that the performer not touch anything: just as a conductor does not play a specific instrument in the orchestra but directs the performance.

In my experience, piano performance is not very physical; the piano is stationary and the performer is seated at the keyboard, leaving little room for showmanship or stage presence. I have felt the need to perform my music in another way. I explored how to make an immersive experience, including music and visual effects, using a touch-free interface that lets modern composers perform room-scale installations. I used a Kinect camera with Processing to gather depth-sensor data, and mapped it to music and generative visuals in a system I built with Max/MSP/Jitter, a visual programming language that patches and connects the data to musical and visual elements. A TripleHead2Go drives a three-projector display to create a room-scale visual experience.
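The core of such a pipeline is a mapping from a raw depth reading to a control value that Max/MSP can route to a musical or visual parameter. This is a hedged sketch of that step only; the millimeter range and the 0–127 control scale are assumptions, not the project's actual calibration.

```javascript
// Map a raw depth reading (e.g. from a Kinect, in millimeters) to a
// 0-127 control value. Readings outside the performance area
// [nearMm, farMm] are clamped. Range constants are illustrative.
function depthToControl(depthMm, nearMm = 500, farMm = 4000) {
  const clamped = Math.min(farMm, Math.max(nearMm, depthMm));
  const normalized = (clamped - nearMm) / (farMm - nearMm);
  return Math.round(normalized * 127);
}
```

In practice one such mapping would run per tracked joint or region, each feeding a different parameter of the music or the generative visuals.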


Algorithmic Composition, Choreographic Interventions, Thesis