I haven’t made much concrete progress since last week. I spent the early part of this week getting a camera feed into Three.js, which I succeeded in doing. I started experimenting with a Processing sketch for tracking the center of change, but I couldn’t figure out how to convert the whole thing into p5 before the rest of the week caught up with me and I had to put it down for the time being.
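The core of that tracking idea can be sketched without Processing or p5 at all: diff two grayscale frames and average the coordinates of whichever pixels changed. This is a minimal sketch under my own assumptions — the frames here are plain arrays of 0–255 values (in p5 you’d read these out of `pixels[]`), and `threshold` is a hypothetical tuning parameter, not anything from the original sketch.

```javascript
// "Center of change": compare two grayscale frames pixel by pixel and
// return the centroid of the pixels whose brightness changed by more
// than `threshold`. prev/curr are flat arrays of 0-255 values, row-major.
function centerOfChange(prev, curr, width, threshold = 30) {
  let sumX = 0, sumY = 0, count = 0;
  for (let i = 0; i < curr.length; i++) {
    if (Math.abs(curr[i] - prev[i]) > threshold) {
      sumX += i % width;              // column of this pixel
      sumY += Math.floor(i / width);  // row of this pixel
      count++;
    }
  }
  // No motion detected: return null rather than a meaningless point.
  return count === 0 ? null : { x: sumX / count, y: sumY / count };
}
```

The centroid jitters frame to frame, so in practice you’d probably smooth it (e.g. a running average) before driving anything visual with it.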
However, I am confident and excited moving forward. I’ve started pulling together the pieces for my final project, which doubles as the final project for every other class I’m in; after this weekend, it will be the only thing I have to work on for the rest of the semester.
From this gem of a web 1.0 site, I’ve scraped .mus files — an esoteric musical file format similar to MIDI — for over 100 songs from the Sacred Harp songbook. I will use these files to build a Markov chain that generates this style of music algorithmically, in real time. Because the music is inherently “spatialized,” with sound coming from all corners of the room, it is perfectly suited to a 3D environment like Three.js.
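The Markov chain part is small enough to sketch now. This is a first-order chain under one big assumption: each song has already been parsed out of its .mus file into an array of MIDI-style note numbers (the parsing step is not shown — `buildChain` and `generate` are my own illustrative names, not anything from an existing library).

```javascript
// Build a first-order Markov chain from parsed songs. Each song is an
// array of note numbers. For each note we record every note that ever
// followed it; duplicates in the list encode transition probability.
function buildChain(songs) {
  const chain = new Map();
  for (const notes of songs) {
    for (let i = 0; i < notes.length - 1; i++) {
      const from = notes[i], to = notes[i + 1];
      if (!chain.has(from)) chain.set(from, []);
      chain.get(from).push(to);
    }
  }
  return chain;
}

// Walk the chain from a starting note, picking each next note uniformly
// from the recorded followers. `rand` is injectable so runs can be
// reproduced; it defaults to Math.random.
function generate(chain, start, length, rand = Math.random) {
  const out = [start];
  let note = start;
  for (let i = 1; i < length; i++) {
    const options = chain.get(note);
    if (!options || options.length === 0) break; // dead end: stop early
    note = options[Math.floor(rand() * options.length)];
    out.push(note);
  }
  return out;
}
```

A real version would probably key on (note, duration) pairs or on two-note histories to better capture the shape of the source tunes, but the data structure is the same.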
I have more specific ideas regarding execution that I’ll talk about in class — I don’t think I have time to get them all out in the next 25 minutes — but I’d like to put some time into explaining why I want to do this. Beyond a desire to create something transcendently beautiful (which I believe this will be), it serves as a useful proof of concept in developing a more universal system.
The field of Cantometrics seeks to provide a qualitative analysis of all the musics of the world, studying what makes the music of each culture unique, and by extension, what unites them all. It’s a mid-twentieth century concept that has blossomed recently with the aid of computerized analysis, but to my knowledge has not yet sought to actually generate anything with all those data points. This could be a first step driving this field of study toward creation. If such a system could be developed for music, it stands to reason that it could be developed for any form of expression.
I really felt the split between the conceptual and technical parts of this class this week. I walked away from Remix feeling inspired to go reinvent the way people communicate, only to spend an entire week trying and failing to upload a dancing alien to a website.
Having spent most of the week trying and failing to make one change after another to my previous code, I realized the problem was that there was too much going on in the code that I didn’t understand. So I started a new script from scratch to figure out what each individual part of the code was doing. I’m most of the way there:
It doesn’t do much yet, but it sure looks cool. Can’t wait to see what it looks like when I know what I’m doing a little better.
Having done a bit of work in VR last semester, I’ve been interested for a while in how to cut between scenes in a completely immersive environment. I’m attracted to the hyper-reality created by the gradual fade between a recognizable environment (a path in the woods), a semi-recognizable environment (that same image inverted) and something completely abstract (the fractal patterns created by the interplay of trees, ground and sky as the two images cross-fade). I look forward to continuing to explore and develop this environment as I gain a better understanding of how to navigate Three.js.
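The blend math behind that fade is simple to pin down, even while the Three.js plumbing is still a work in progress. This is a minimal sketch of one per-pixel channel: mix the original value with its inverted copy, so t = 0 gives the recognizable scene, t = 1 the inverted one, and values in between produce the abstract interplay. In Three.js this would live in a fragment shader; here it’s plain JavaScript over 0–255 channel values so the arithmetic is easy to check.

```javascript
// Cross-fade a single 0-255 channel value toward its inversion.
// t = 0 returns the original value; t = 1 returns 255 - value;
// intermediate t linearly interpolates between the two.
function crossFadePixel(value, t) {
  const inverted = 255 - value;
  return (1 - t) * value + t * inverted;
}
```

Note that every channel passes through the same midpoint (127.5) at t = 0.5, which is part of why the halfway frame reads as flattened and abstract rather than as either image.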
“The act of musicking establishes in the place where it is happening a set of relationships, and it is in those relationships that the meaning of the act lies. They are to be found not only between those organized sounds which are conventionally thought of as being the stuff of musical meaning but also between the people who are taking part, in whatever capacity, in the performance; and they model, or stand as metaphor for, ideal relationships as the participants in the performance imagine them to be: relationships between person and person, between individual and society, between humanity and the natural world and even perhaps the supernatural world.”
— Christopher Small, Musicking: The Meanings of Performing and Listening