The title of this final project is yet to be determined. I decided that I will combine my ICM and PCOMP finals to make this project possible. This project will be the first step towards a much bigger and more involved idea that I have been thinking of for a while and that I hope to see come to reality during my time at ITP.
The idea is to give you immediate visual feedback on the compositional choices you make, resulting in a visual composition to go along with your musical composition.
The concept for this idea comes from the desire to try and change or diversify the way music composition and performance is approached. Every musician, songwriter or composer has their own way of inspiring themselves. I know that I personally respond well to film. I will frequently get flooded with ideas and motivation when watching films. I consider this final project to be the beginning of another way that people can inspire themselves and use that inspiration to compose a piece of music.
So, what will be happening here is there will be a controller that will allow you to trigger loops and samples at your discretion. There will be basic controls like volume and panning, start and stop. The controller itself for this iteration will be a Monome. The Monome is an LED/button matrix. Mine will be an 8×8 matrix, giving me 64 buttons to work with. I will be constructing this Monome controller from scratch with the Monome build group that is taking place at ITP this month. The loops and samples will all be composed by myself and will be triggered by the Monome through an application such as Live (I’m leaning towards Live at the moment because it seems to be the most versatile when it comes to external control). For the sake of getting this first prototype functioning, the loops and samples will be pre-determined by me and will not be changeable, but the ultimate goal is to have full control over what is being made or written in this interface.
Now, the visual aspect of this project comes from the choices you make in terms of the loops and samples you decide to use. There will be a screen that displays an “object”. This “object’s” environment will be changed by you and your musical choices. Every trigger on the Monome will be set to display something in the “object’s” environment when activated. These visuals will be done in Processing. The objective is to have your musical and compositional choices interact with the visual environment of this “object” and provide feedback about the choices you’re making, so that in the end you have a “scene” to go along with your composition.
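To make that mapping a little more concrete, here’s a very rough Processing sketch of the kind of grid-to-environment relationship I’m imagining. Mouse clicks stand in for monome presses since the controller isn’t built yet, and the grid size and visuals are just placeholders, not the real design.

```java
// Rough sketch of the planned mapping: an 8x8 grid of triggers, each one
// adding something to the environment around a central "object".
// Mouse clicks stand in for monome presses; the visuals are placeholders.
int cols = 8;
int rows = 8;
boolean[][] active = new boolean[cols][rows];

void setup() {
  size(640, 640);
  noStroke();
}

void draw() {
  background(20);
  // Each active cell contributes one element to the "object's" environment.
  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      if (active[x][y]) {
        fill(map(x, 0, cols, 60, 255), map(y, 0, rows, 60, 255), 200, 120);
        ellipse((x + 0.5f) * width / cols, (y + 0.5f) * height / rows, 60, 60);
      }
    }
  }
  // The "object" itself sits in the middle and swells as the scene gets busier.
  fill(255);
  float d = 80 + countActive() * 3;
  ellipse(width / 2, height / 2, d, d);
}

int countActive() {
  int n = 0;
  for (int x = 0; x < cols; x++) {
    for (int y = 0; y < rows; y++) {
      if (active[x][y]) n++;
    }
  }
  return n;
}

void mousePressed() {
  // Toggle the cell under the mouse, the way a monome press would toggle a loop.
  int x = constrain(mouseX * cols / width, 0, cols - 1);
  int y = constrain(mouseY * rows / height, 0, rows - 1);
  active[x][y] = !active[x][y];
}
```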
In the end (after this final) I would like to port this interaction over to an immersive touch screen environment where you are the “object” and you are affecting your environment as you’re writing your song.
I’ll be posting my progress as it happens.
Here are some things that inspired me:
So the Monome is built! The build was fun but a bit tedious. Putting all the parts together was not difficult, but it was definitely time consuming to solder SO much. The 64 teeny tiny diodes were particularly difficult because of their size, requiring the use of tweezers and steady hands. This experience was definitely a crash course in soldering, which I was lacking experience in.
The more difficult part was probably debugging the thing. Once I was able to upload the monome firmware to the ATmega32, I was hit with the challenge of finding out why two rows of LEDs would not light up and one button would not respond. I was able to download a MaxMSP patch that would test the response of all the buttons. All buttons were sending messages except for one, and two rows failed to light up. So, it was off to test all the solder points that had any connection to these rows’ power path. Morgen Fleisig was extremely helpful with this and finally helped me realize that I had a couple of cold solder joints at the headers that sent power from the logic board. I desoldered those points and re-soldered them and success! Troubleshooting this was actually extremely helpful in understanding the flow of power and data within the LED board and to the logic board. Next was finding out why that single button lit up but would not send a button message. Because the problem was isolated to that one button, I decided to first troubleshoot the only thing dedicated to that button alone: its diode. I desoldered and re-soldered its diode and once again, success! Once that was done I finally had a fully working matrix of LED buttons. From here I began to play with various MaxMSP patches that were made for the monome.
Ok, finally got the monome talking to Processing. We found a Processing library built for communication with and control of the monome called Monomic. As always, the library lacks documentation, which resulted in a lot of confusion, frustration and trial and error. BUT we succeeded in our quest for the night. We were determined to have the monome trigger some video in Processing and we figured it out in the end. There is still a lot of understanding that needs to happen with that Monomic library, but we at least got some basic communication down. The next challenge is figuring out whether it’s possible for the monome to communicate with two separate applications. We’re a little skeptical of this, because part of the reason we were struggling is that the Processing sketches would not work while the MonomeSerial application was open. We had that application open at all times because we were told it needs to be open in order for the Monome to speak to anything. That turned out not to be true in our case, since it seems that MonomeSerial actually needs to be closed for any Processing sketch to work with the device. Anywho, here’s a video of Calli triggering an array of images in “motion”.
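For anyone curious, here’s roughly the image-flipping half of that sketch. Since we’re still wrapping our heads around the Monomic calls themselves, a key press stands in here for the monome press we were actually reading, and the frame file names are placeholders.

```java
// The image-sequence half of the demo: an array of frames that only advances
// while a trigger is held down. A key press stands in for the monome press
// we were actually reading through Monomic; the file names are placeholders.
PImage[] frames;
int numFrames = 24;
int currentFrame = 0;
boolean playing = false;

void setup() {
  size(640, 480);
  frameRate(12);
  frames = new PImage[numFrames];
  for (int i = 0; i < numFrames; i++) {
    // Expects frame_00.jpg through frame_23.jpg in the sketch's data folder.
    frames[i] = loadImage("frame_" + nf(i, 2) + ".jpg");
  }
}

void draw() {
  background(0);
  if (playing) {
    currentFrame = (currentFrame + 1) % numFrames;  // step the "motion" forward
  }
  image(frames[currentFrame], 0, 0, width, height);
}

void keyPressed() {
  playing = true;   // button down: the images start moving
}

void keyReleased() {
  playing = false;  // button up: freeze on the current frame
}
```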
Well, after a bit of a bumpy road we finally came to a “completed” first step towards this idea of trying to re-invent how music composition is approached and thought of. Since the last update we ran into a few roadblocks that really changed how we approached this project. The first, and really the biggest one, was the fact that the Monome can only speak through one port at a time, which means that only one application can receive messages from it at any given moment. This quickly changed our original plan of having the Monome speak to Processing and Live at once. We realized that we would have to create a chain where the input from one app triggers output to another, in this case between the Monome, Live and Processing.

Well, this was also a challenge. In the end we decided to use MaxMSP as the interface for the Monome to control the audio loops that would be used. We decided on Max because it seemed to be the application that the Monome has the strongest relationship with. We used a pre-built interface called “mlr” which allows for the control of 7 loops. We used this patch because it was pretty straightforward and provided enough visual feedback to at least let the user know that loops are playing and follow their progression.

The next step was to get the Monome to trigger the visuals, which were to be done in Processing. Having decided on Max, we quickly realized that using Jitter to trigger the visuals would be by far the easiest solution, but because we needed to use Processing to satisfy the ICM component of this project, we had to find a way to make it work. This brought us back to the issue of having a single Monome event trigger two events in different applications. Ideally we would simply output OSC messages to Processing with every trigger of a loop, and Processing would interpret those messages as cues to start a video. Unfortunately, neither of us was versed enough in Max to make sense of the mlr patch and figure out where to place the OSC output functions. We spoke to many people, and they all came to the same conclusion of “I don’t know,” which I couldn’t really blame them for. Digging deeper and deeper into the patch revealed how complicated and convoluted it was. So after hours and hours of research and many unsuccessful workarounds, we finally compromised and decided that, for the time being, we would just trigger the video with key presses in Processing.
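For reference, here’s a stripped-down sketch of that key-press workaround using Processing’s built-in video library, with keys 1 through 7 toggling seven clips to mirror the seven loops in mlr. The file names and the layering here are simplified placeholders, so treat it as the shape of the approach rather than our exact final sketch.

```java
import processing.video.*;

// Stripped-down version of the key-press workaround: keys 1-7 toggle seven
// video loops, mirroring the seven audio loops in the mlr patch.
// The clip file names are placeholders and should sit in the data folder.
Movie[] clips = new Movie[7];
boolean[] active = new boolean[7];

void setup() {
  size(640, 480);
  for (int i = 0; i < clips.length; i++) {
    clips[i] = new Movie(this, "clip" + (i + 1) + ".mov");
  }
}

void draw() {
  background(0);
  // Layer whichever clips are currently toggled on, semi-transparent so they blend.
  for (int i = 0; i < clips.length; i++) {
    if (active[i]) {
      tint(255, 150);
      image(clips[i], 0, 0, width, height);
    }
  }
  noTint();
}

void movieEvent(Movie m) {
  m.read();  // pull in new frames as they become available
}

void keyPressed() {
  if (key >= '1' && key <= '7') {
    int i = key - '1';
    active[i] = !active[i];
    if (active[i]) {
      clips[i].loop();   // start (or resume) looping this clip
    } else {
      clips[i].pause();
    }
  }
}
```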
The music loops were created by me using Logic Pro. The video loops were chosen and composed by Calli Higgins. She found an old family vacation video from 1957 on archive.org. We both immediately grew attached to it and decided to use it as the visuals that would accompany our music loops. Calli scrubbed through and found 7 clips to go with our 7 loops. The aesthetic of the video naturally led us to the theme of memory as our narrative. So with each music loop that you choose to add to the mix you also get a memory, and together in your mind these memories begin to form a narrative of this family’s relationship with one another.
The relationship between the music loops and their respective video loops is really what we wanted to focus on for this very basic first step towards the grander idea described at the beginning of this post. We wanted to try and achieve that moment where you decide to use a particular music loop because you’ve grown attached to the video loop and the narrative it’s helping build in your mind.