## December 03, 2007

### Someone stop me before I markov again

Since I had already written the necessary code for my last mapping project, here's the Thesis Title Generator. I scraped the ITP thesis pages for titles, dumped them all into a text file, and trained my Markov algorithm on them. The program generates strings that very much resemble, but are not (or at least, rarely) identical to, thesis titles from years past. Some recent favorites:

• massive Narrative, and their interactive Shoes
• Be My Father
• The Spectacular Interactive Network City

### Mapping: Weekly Response Project

A few weeks ago, Rachel had us fill in a sheet of "responses" to the content of every week of the course. The space allotted to each response was small: one line, maybe one or two sentences (more if your handwriting was small). Later, she directed us to use these responses as the raw material for a map. The class was divided into groups, each group taking responses from a different week (or combination of weeks). I worked with Riddhima, mapping the responses for weeks two and three.

The resulting map is here.

Okay, so it's only a map in a very abstract sense: It's a program that generates text from the weekly responses using a Markov chain algorithm. Here's how it works: the program parses all of the source text (in this case, the student "responses"), and breaks it into groupings of n letters; these are called n-grams (or k-grams). It then calculates, for each n-gram, the probability of every letter that can follow it. For example, given this source text:

and
animal
androgynous
animosity
anchor

The n-gram an would have a 40% probability of being followed by d, a 40% probability of being followed by i, and a 20% probability of being followed by c. The program above then does a random walk through the map, printing out letters according to their probability, then feeding the next n-gram back into the algorithm; the result is a generated text that outwardly shares many of the surface features of the source text, while not being identical with any portion of it.
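The whole technique fits in a few lines. Here's a sketch in Python rather than my Perl original; the function names and the choice of n=2 are illustrative, not the original code:

```python
import random
from collections import defaultdict

def build_model(text, n=2):
    """Map each n-gram to the list of letters that follow it in the source."""
    model = defaultdict(list)
    for i in range(len(text) - n):
        model[text[i:i+n]].append(text[i+n])
    return model

def generate(model, n=2, length=40):
    """Random-walk the model: emit a letter, then slide the n-gram window."""
    gram = random.choice(list(model.keys()))
    out = gram
    for _ in range(length):
        followers = model.get(gram)
        if not followers:  # dead end: restart from a random n-gram
            gram = random.choice(list(model.keys()))
            out += gram
            continue
        out += random.choice(followers)
        gram = out[-n:]
    return out

source = "and animal androgynous animosity anchor"
model = build_model(source, n=2)
print(generate(model, n=2))
```

Because `random.choice` picks uniformly from the follower list (which keeps duplicates), the letter frequencies fall out of the data for free: the list for `an` above contains d twice, i twice, and c once, which gives exactly the 40/40/20 split described in the text.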

In essence, the program is building a probability map of the raw text. In the process, it reveals the lexical and structural similarities in all of our responses. The resulting texts are humorous (or at least, I think they are!), but the process of generating them is subversive: just like a well-made map, it subjects the underlying topography to new readings.

Source code is available on request (if you really need to see a trivial implementation of a Markov chain in Perl...). The original transcription is here.

## October 25, 2007

### Algorithmic Composition: Final Project Idea

One of my interests is how we structure text in order to make it computer-legible. Probably the most pervasive way of structuring text on the Internet today is HTML, which has several interesting properties that make it relevant to music-making.

First, it's recursive: most elements can themselves contain other elements. In this sense, an HTML document resembles an L-System, and it's possible to draw a tree from the flattened data structure that looks very much like a classical L-System visualization. (See, for example, Websites as Graphs.)

In addition to being recursive, HTML is also repetitive: think of menu items in an unordered list, or table cells in a table. Most HTML documents contain small, repeating (though not necessarily identical) structures like this.

So here's my idea for a final project: a piece of software that sonifies HTML structure.

Specifically, I'd like to build a web browser plug-in/extension that generates sound from the web page that the user is currently looking at. This structure might be very simple or highly complex; it might change in the course of viewing the page (given the possibility of modifying the page in real-time with JavaScript). The idea is to provide another layer to the experience of browsing the web—an experience that HTML suggests, but is not (generally) planned for in the design of a web page. In this way, the piece will serve to expose and make more transparent the structure of the underlying data.
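To give a feel for what the plug-in might do, here's a toy sketch using Python's standard `html.parser`: it walks a page's element structure and emits one timed note event per tag, mapping nesting depth to a scale degree. The scale, the time step, and the depth-to-pitch mapping are all my placeholder assumptions, not a design decision for the actual project:

```python
from html.parser import HTMLParser

class StructureSonifier(HTMLParser):
    """Emit one (time, pitch, tag) event per element as the parser descends.

    Mapping (hypothetical): nesting depth picks a degree from a pentatonic
    scale, wrapping up an octave each time the scale runs out."""
    SCALE = [60, 62, 64, 67, 69]  # MIDI pitches; an arbitrary choice

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.time = 0.0
        self.events = []

    def handle_starttag(self, tag, attrs):
        octave, degree = divmod(self.depth, len(self.SCALE))
        self.events.append((round(self.time, 2), self.SCALE[degree] + 12 * octave, tag))
        self.time += 0.25  # fixed step per element
        self.depth += 1

    def handle_endtag(self, tag):
        self.depth -= 1

page = "<html><body><ul><li>one</li><li>two</li></ul></body></html>"
s = StructureSonifier()
s.feed(page)
for ev in s.events:
    print(ev)
```

Note how the two sibling `li` elements land on the same pitch: the repetitive structures mentioned above come out as literal repetition in the sound.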

Challenges and extra credit after the jump.

### Meditation #4

In this meditation, we were directed to sonify the string of an L-System that draws a space-filling curve. I chose the Sierpinski-Square L-System, as illustrated on p. 88 of The Computational Beauty of Nature:

```
Axiom: F-F-F-F
Rule: F=FF[-F-F-F]F
```

(I suppose that this isn't technically a space-filling curve, but I think it'll do for the purposes of the assignment.)

This Processing applet displays the curve and will also (if you download it and run it on your own computer) generate the score, according to the algorithm given below. (Here's the original csound file, including sample score and instrument definitions.)

A note is generated and time advances in the score every time an F is found in the string. The - character moves the current note up one step (using a pre-selected scale); [ and ] push and pop note values off a stack. The real trick of this piece is that all generations of the L-System are played simultaneously: the duration of each note is equal to the (predetermined) length of the song divided by the number of Fs in that generation's string. For the axiom/ruleset given above, this leads to the notes of generation 0 being six times as long as those of generation 1, which are in turn six times the length of generation 2, etc. This strategy leads to a sort of rhythmic play between generations, which I think does a good job of conveying the fractal nature of the underlying data.
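The expansion and scoring steps described above can be sketched like this (in Python rather than the Processing original; the 150-second song length matches the 2'30" piece, but the pentatonic scale and base pitch are stand-ins):

```python
def expand(axiom, rule, generations):
    """Rewrite every F according to the rule, once per generation."""
    s = axiom
    for _ in range(generations):
        s = s.replace("F", rule)  # replacements are not re-scanned, as required
    return s

def score(lsys, total_len=150.0, scale=(0, 2, 4, 7, 9)):
    """One note per F; '-' steps up the scale; '[' / ']' push and pop."""
    dur = total_len / lsys.count("F")   # every generation fills the same span
    t, step, stack, notes = 0.0, 0, [], []
    for ch in lsys:
        if ch == "F":
            octave, degree = divmod(step, len(scale))
            notes.append((round(t, 3), dur, 60 + 12 * octave + scale[degree]))
            t += dur
        elif ch == "-":
            step += 1
        elif ch == "[":
            stack.append(step)
        elif ch == "]":
            step = stack.pop()
    return notes

gen1 = expand("F-F-F-F", "FF[-F-F-F]F", 1)
print(gen1.count("F"))  # 24: six times the axiom's 4 Fs
```

Since the rule contains six Fs, each generation has six times as many notes as the last, and because every generation's notes must fill the same total length, each note is one sixth as long: exactly the 6:1 duration ratio described above.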

A thirty-second excerpt of the piece is embedded below, or you can download the whole thing (192kbps MP3, 2'30").

### Meditation #3

For this meditation, I made a fairly simple sonification of global earthquake data (obtained from here). The score contains a note for every earthquake with a magnitude of five or higher in the past ten years, played in a compressed amount of time (about 200ms for every hour). The depth of the earthquake corresponds to the note's pitch (deeper depths equate to lower notes), and the magnitude corresponds to the number of overtones. I used csound's adsynt opcode, so the whole thing basically sounds like a mess of Tibetan singing bowls.
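The mapping can be sketched in a few lines. This is not my actual score-generating script; the field layout, the pitch curve, and the overtone formula here are all assumptions chosen to illustrate the idea (deeper quake, lower note; bigger quake, more partials):

```python
# Hypothetical rows of (hours_since_start, depth_km, magnitude).
def quake_score(quakes, ms_per_hour=200, max_depth=700.0):
    """Turn quake records into (start_sec, pitch_hz, n_overtones) events."""
    events = []
    for hours, depth_km, mag in quakes:
        start = hours * ms_per_hour / 1000.0             # compressed timeline
        pitch = 100 + (1 - depth_km / max_depth) * 800   # deeper -> lower Hz
        overtones = max(1, int(mag) - 3)                 # bigger -> more partials
        events.append((round(start, 3), round(pitch, 1), overtones))
    return events

sample = [(0, 10.0, 5.1), (3, 600.0, 7.2)]
for e in quake_score(sample):
    print(e)
```

Each resulting event would become one line of csound score, with the overtone count driving the partial table handed to `adsynt`.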

Here's the csound file and the python script I used to generate the score. Here's the data file I used (somewhat massaged from the output of the original USGS search).

An excerpt from the piece is included below. You can download the entire piece (192kbps MP3, 2'53") here.

## October 18, 2007

### Meditation #2

In Meditation #2, we were directed to make a musical piece that recreates a random "walking tour" of the Seven Bridges of Königsberg. In my piece, each island and bridge is treated as a separate node and each node is associated with a note. The notes are played in quick succession—around 20 per second—and have a long decay time. The idea was to create something that illustrates the structure of the graph, something architectural: a wash that contains many events, but appears to move slowly (or not at all).
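The walk itself is simple to sketch. Here's a Python version of the node scheme described above (my actual script may differ; the bridge names and the note assignment are placeholders). Since every bridge is its own node, each step of the tour visits a bridge node and then the land mass on its far side:

```python
import random

# Königsberg: four land masses (A-D), seven bridges, each bridge its own node.
BRIDGES = {
    "b1": ("A", "B"), "b2": ("A", "B"), "b3": ("A", "C"),
    "b4": ("A", "C"), "b5": ("A", "D"), "b6": ("B", "D"), "b7": ("C", "D"),
}
# One note per node: land masses first, then bridges.
NOTES = {n: 60 + i for i, n in enumerate(["A", "B", "C", "D"] + sorted(BRIDGES))}

def walk(steps, start="A", seed=None):
    """Random walk: from each land mass, pick any incident bridge and cross it."""
    rng = random.Random(seed)
    here, path = start, [start]
    for _ in range(steps):
        bridge = rng.choice([b for b, ends in BRIDGES.items() if here in ends])
        ends = BRIDGES[bridge]
        here = ends[1] if here == ends[0] else ends[0]
        path += [bridge, here]  # sound the bridge node, then the far bank
    return [NOTES[n] for n in path]

print(walk(5, seed=1))
```

At ~20 notes per second with long decay, a run like this collapses into the slow-moving wash described above.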

This Python script generates the score, and this csound file renders the output (it includes some sample score data). You can play a sample run of the program (1200 steps, about 0:55) below (or download it here).

## October 10, 2007

### Meditation #1

Procedure: Using Babelfish, translate a text from English into the language (other than English) with the largest number of speakers in the United States. Take the resulting translation, translate it back into English, and use this as the source text when repeating the process, this time using the language with the second largest number of speakers. Repeat until the text is mangled to your satisfaction. (If a language is missing from Babelfish, you can skip it.)

Suitable texts for this piece: any legislation or constitutional amendment that would make English the "official language" of the jurisdiction in question (municipal, state, federal). This is the text I used for my performance of the piece—an amendment to the United States Constitution, proposed during the 107th Congress:

The English language shall be the official language of the United States. As the official language, the English language shall be used for all public acts including every order, resolution, vote, or election, and for all records and judicial proceedings of the Government of the United States and the governments of the several States.

Here's the result:

Language of office S.U.A. It English. They entire danger of motions, is which it it does obtain by close one and that it benefits, language and the English witness of office are assumed? You will air order including/understanding differently, dissolution, voice or government S.U.A. of the document of the government of danger landscape architecture and also situation and a certain compensation everything certifyd it.

The translation chain: English -> Spanish -> Chinese -> French -> German -> Italian -> Korean -> Russian.

## September 27, 2007

### Algorithmic Composition: My first stochastic canon

This is a short little exercise that uses a linear distribution of random numbers to select notes and note durations that appeal to me. These are then arranged into an AABA pattern that gets played over itself in double and quadruple time. Here's a schematic representation:

```
AABAAABAAABAAABA
A A B A A A B A
A   A   B   A
```

Here is the csound file, and here is the python script that generates the score. The output from a run of the program (with a 16 second pattern length) is included below (download here).

## September 20, 2007

### Algorithmic Composition: csound is my friend

Here's my first experiment with csound: Conway's Game of Life made audible. This Processing applet creates the score. The essential mapping is this: if a cell is alive at a particular generation, it sounds a note; the cell's y-coordinate in the grid determines the note's pitch, and its x-coordinate determines the cutoff frequency of a lowpass filter on the note. All cells are sounded as a chord for each generation, and the generations are played in sequence.
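Here's the essential mapping sketched in Python (the original applet is in Processing; the glider seed and grid width here are just for illustration):

```python
from collections import Counter

def life_step(live):
    """One Game of Life generation over a set of (x, y) live cells."""
    counts = Counter((x + dx, y + dy) for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

def generation_chord(live, gen, width=16, base_cutoff=200.0):
    """Score rows for one generation: every live cell sounds at once.
    y picks the pitch (an index into the 16-note sample); x sets the cutoff."""
    return [(gen, y % 16, base_cutoff * (1 + x / width))
            for x, y in sorted(live)]

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
score = []
for gen in range(4):
    score.extend(generation_chord(cells, gen))
    cells = life_step(cells)
print(score[:5])
```

Each tuple is (generation, pitch index, cutoff frequency); the generation number becomes the note's start time, so every chord sounds in sequence.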

The csound instrument itself is very simple: it just plays a sample, starting at an offset determined by the score. The sample contains sixteen notes in a pentatonic scale, so the offset effectively controls the pitch. (Download this sample in aiff format here.) Another parameter in the score controls the cutoff frequency of the lowpass filter (created with the `lowpass2` opcode). (Download the instrument and example score here.)

Sound samples after the jump.