The concept for this project comes from a desire to change, or at least diversify, the way music composition is approached. Lucas Zavala and I are exploring visual feedback for compositional choices as a means of inspiring a songwriter.
To control musical choices we built a Monome, an 8 x 8 LED/button matrix that sends Open Sound Control messages to Max/MSP. Lucas composed seven different loops that, when layered together, complete an entire song while still giving the user control over how many and which loops are turned on. Only four loops can play at a time, so the user shapes the arrangement by swapping loops in and out. For the sake of getting this first prototype functioning, the loops and samples are predetermined, but the ultimate goal is for the user to have full control over what is being written with this interface.
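The loop-selection rule described above (at most four of the seven loops playing at once, with presses toggling loops in and out) can be sketched as a small state machine. This is a minimal illustration with hypothetical names, not the project's actual Max/MSP patch; here a press that would activate a fifth loop is simply ignored until a slot is freed.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Hypothetical sketch of the loop-toggling logic: seven loops, at most
// four active at once. A button press toggles its loop; activating a
// fifth loop is ignored until another loop is toggled off.
public class LoopSelector {
    static final int NUM_LOOPS = 7;
    static final int MAX_ACTIVE = 4;
    private final Set<Integer> active = new LinkedHashSet<>();

    // Returns true if the press changed the state (loop toggled on or off).
    public boolean press(int loop) {
        if (loop < 0 || loop >= NUM_LOOPS) return false;
        if (active.contains(loop)) {
            active.remove(loop);                       // toggle off
            return true;
        }
        if (active.size() >= MAX_ACTIVE) return false; // already four playing
        active.add(loop);                              // toggle on
        return true;
    }

    public Set<Integer> activeLoops() { return active; }

    public static void main(String[] args) {
        LoopSelector s = new LoopSelector();
        for (int i = 0; i < 5; i++) s.press(i);        // fifth press ignored
        System.out.println(s.activeLoops());           // four loops active
        s.press(0);                                    // free a slot
        s.press(4);                                    // now loop 4 can start
        System.out.println(s.activeLoops());
    }
}
```

In a full version, each toggle would also light the corresponding Monome LED and unmute the matching audio loop.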
For every compositional choice the user makes, there is a simple visual response that builds a narrative as the user builds a song. For this installment, we chose to build a narrative using the memories of the Barstow family of Wethersfield, Connecticut. The family has made their home movies available on archive.org, and we chose footage of their family vacations, spanning 1957-1961, to build an imagined story. We picked seven small moments that felt both memorable and emotionally affecting. Since the user can play four loops at a time, four parts of the story can likewise be projected at once, and triggering different music loops swaps parts of the story in and out. The video is projected inside picture frames to underline that these are a family's memories, and the loops are kept short and somewhat frantic, much like the way pieces of a memory form in our minds.
Our biggest technological challenge was getting the Monome to communicate serially with both Max/MSP and Processing at the same time. We had to admit defeat in this area and use key commands, rather than the Monome, to trigger the videos in Processing (although we do have a sketch built and ready to go that uses the Monome to trigger the visuals, in the event we solve this problem in the near future). The 'videos' are actually arrays of still images cycled through in Processing (using the Monomic library). This was both a way around Processing's limited video capabilities and a way to create a noisy, stop-motion effect that further distorts the footage, so that it looks the way a memory does in our minds. Both Lucas and I draw on past experiences and memories to inspire artistic choices, and we chose the name 'mnemonic' to play on the project's memory-aiding properties.
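The image-array technique amounts to showing one frame of a short clip per draw() call and wrapping back to the start. The sketch below illustrates that cycling logic with hypothetical names; in the real Processing sketch the frames would be PImage objects drawn to the screen, but plain strings stand in here so the looping itself is clear.

```java
// Hypothetical sketch of the image-array "video" trick: each clip is an
// array of still frames, and every tick shows the current frame, then
// wraps the index so the short clip loops endlessly with a stop-motion feel.
public class FrameCycler {
    private final String[] frames; // stands in for Processing's PImage[]
    private int index = 0;

    public FrameCycler(String[] frames) {
        if (frames == null || frames.length == 0)
            throw new IllegalArgumentException("need at least one frame");
        this.frames = frames;
    }

    // Advance one tick: return the current frame and step to the next,
    // wrapping to frame 0 at the end of the array.
    public String nextFrame() {
        String f = frames[index];
        index = (index + 1) % frames.length;
        return f;
    }

    public static void main(String[] args) {
        FrameCycler clip = new FrameCycler(new String[] {"f0", "f1", "f2"});
        for (int tick = 0; tick < 7; tick++) {
            System.out.println(clip.nextFrame()); // f0 f1 f2 f0 f1 f2 f0
        }
    }
}
```

Swapping which four clips are on screen is then just a matter of which FrameCycler instances the draw loop reads from, triggered by key presses (or, eventually, the Monome).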