Interaction (no real progress but lots of ideas)

I haven’t made much concrete progress since last week. I spent the early part of this week getting a camera feed into Three.js, which I succeeded in doing. I started experimenting with a Processing sketch for tracking the center of change, but I couldn’t figure out how to convert the whole thing to p5 before the rest of the week caught up with me and I had to put it down for the time being.
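
In case it’s useful to anyone stuck on the same conversion, here is roughly how I think the center-of-change idea translates to p5.js: frame-difference the webcam against the previous frame, then average the coordinates of the pixels that changed. This is an untested sketch, and the threshold is a guess I’d still have to tune.

```javascript
// Rough p5.js sketch of "center of change": compare each webcam frame to the
// previous one and average the coordinates of pixels that changed.
let cam;
let prev;

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
  prev = createImage(640, 480);
}

function draw() {
  image(cam, 0, 0);
  cam.loadPixels();
  prev.loadPixels();

  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < cam.height; y++) {
    for (let x = 0; x < cam.width; x++) {
      const i = 4 * (y * cam.width + x);
      // Color difference between this frame and the last one.
      const diff = Math.abs(cam.pixels[i] - prev.pixels[i]) +
                   Math.abs(cam.pixels[i + 1] - prev.pixels[i + 1]) +
                   Math.abs(cam.pixels[i + 2] - prev.pixels[i + 2]);
      if (diff > 90) {   // change threshold, tuned by eye
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }

  // Draw a dot at the centroid of all the changed pixels.
  if (count > 0) {
    fill(255, 0, 0);
    noStroke();
    circle(sumX / count, sumY / count, 20);
  }

  // Store the current frame for next time.
  prev.copy(cam, 0, 0, cam.width, cam.height, 0, 0, cam.width, cam.height);
}
```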

However, I am confident and excited moving forward. I’ve started pulling together the pieces for my final project, which is also the final project for every other class I’m in; after this weekend, it will be the only thing I have to work on for the rest of the semester.

From this gem of a web 1.0 site, I’ve scraped .mus files (an esoteric musical file format similar to MIDI) for over 100 songs from the Sacred Harp Songbook. I’ll use these files to build a Markov chain that generates music in this style algorithmically, in real time. Because the music is inherently “spatialized,” with sound coming from all corners of the room, it is perfectly suited to a 3D environment like Three.js.
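
To make the Markov chain idea concrete, here is a stripped-down sketch of what I have in mind, assuming I can first parse each .mus file into a plain array of notes per voice part (that parsing step is still ahead of me, so the note names below are placeholders). It only looks one note back; the real version will probably need more context than that.

```javascript
// Build a first-order Markov chain: for each note, record every note that
// ever followed it across the whole corpus.
function buildChain(sequences) {
  const chain = {};
  for (const seq of sequences) {
    for (let i = 0; i < seq.length - 1; i++) {
      const current = seq[i];
      const next = seq[i + 1];
      (chain[current] = chain[current] || []).push(next);
    }
  }
  return chain;
}

// Walk the chain: from the current note, pick a successor with probability
// proportional to how often it followed that note in the corpus.
function generate(chain, start, length) {
  const out = [start];
  let current = start;
  for (let i = 1; i < length; i++) {
    const options = chain[current];
    if (!options || options.length === 0) break;
    current = options[Math.floor(Math.random() * options.length)];
    out.push(current);
  }
  return out;
}

// Toy data standing in for parsed .mus note sequences.
const chain = buildChain([
  ['C4', 'E4', 'G4', 'E4', 'C4'],
  ['C4', 'G4', 'C5', 'G4', 'E4'],
]);
console.log(generate(chain, 'C4', 8));
```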

I have more specific ideas regarding execution that I’ll talk about in class — I don’t think I have time to get them all out in the next 25 minutes — but I’d like to put some time into explaining why I want to do this. Beyond a desire to create something transcendently beautiful (which I believe this will be), it serves as a useful proof of concept in developing a more universal system.

The field of Cantometrics seeks to provide a quantitative analysis of all the musics of the world, studying what makes the music of each culture unique and, by extension, what unites them all. It’s a mid-twentieth-century concept that has blossomed recently with the aid of computerized analysis, but to my knowledge it has not yet sought to actually generate anything with all those data points. This could be a first step in driving the field toward creation. If such a system could be developed for music, it stands to reason that it could be developed for any form of expression.

In pursuit of this goal, I’ve been diving into several JavaScript libraries that deal with generative text and speech synthesis. I also used Python for the first time to pull the .mus files (I didn’t even really know what Python did at the beginning of this week, but wow, Python is amazing). I have a lot to learn, and this is going to be a lot of hard work. I can’t wait.
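
Before committing to any particular library, the browser’s built-in Web Speech API turned out to be enough to start making noise; the shape-note syllables below are just filler text for testing.

```javascript
// Quick test of browser speech synthesis (built-in Web Speech API, no library).
// Some browsers only allow audio after a user interaction, e.g. a click handler.
const utterance = new SpeechSynthesisUtterance('Fa sol la fa sol la mi fa');
utterance.rate = 0.9;   // slightly slower than the default speaking rate
utterance.pitch = 1.0;
window.speechSynthesis.speak(utterance);
```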