A few weeks ago, Rachel had us fill in a sheet of "responses" to the content of every week of the course. The space allotted to each response was small: one line, maybe one or two sentences (more if your handwriting was small). Later, she directed us to use these responses as the raw material for a map. The class was divided into groups, each group taking responses from a different week (or combination of weeks). I worked with Riddhima, mapping the responses for weeks two and three.
The resulting map is here.
Okay, so it's only a map in a very abstract sense: It's a program that generates text from the weekly responses using a Markov chain algorithm. Here's how it works: the program parses all of the source text (in this case, the student "responses") and breaks it into overlapping groupings of n letters; these are called n-grams (or k-grams). It then calculates, for each n-gram, the probability of each letter occurring immediately after it. For example, given a source text like this:

    the animal and the anise danced and swayed

the n-gram an would have a 40% probability of being followed by d, a 40% probability of being followed by i, and a 20% probability of being followed by c.
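Building that map takes only a few lines of Perl. Here's a minimal sketch, not the program linked above; the n-gram length, variable names, and hash-of-hashes representation are my own choices:

    #!/usr/bin/perl
    use strict;
    use warnings;

    my $n = 2;   # n-gram length (an assumption; pick whatever n you like)
    my $text = "the animal and the anise danced and swayed";

    # For each n-gram, count how often each character follows it:
    # $map{ngram}{next_char} = count
    my %map;
    for my $i (0 .. length($text) - $n - 1) {
        my $gram = substr($text, $i, $n);
        my $next = substr($text, $i + $n, 1);
        $map{$gram}{$next}++;
    }

    # For the text above, $map{"an"} now holds ( d => 2, i => 2, c => 1 ),
    # i.e. the 40/40/20 split described in the example.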
The program then does a random walk through the map, printing out letters according to their probabilities and feeding the most recent n-gram back into the algorithm; the result is a generated text that outwardly shares many of the surface features of the source text while not being identical to any portion of it.
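And the random walk itself, continuing the sketch above (the seed n-gram, the 200-character limit, and the dead-end handling are all guesses at reasonable behavior, not details of the actual program):

    # Pick a next character at random, weighted by observed frequency.
    sub weighted_pick {
        my ($counts) = @_;                 # hashref: char => count
        my $total = 0;
        $total += $_ for values %$counts;
        my $roll = int(rand($total));
        for my $char (keys %$counts) {
            $roll -= $counts->{$char};
            return $char if $roll < 0;
        }
    }

    my $gram   = "an";                     # seed n-gram from the source text
    my $output = $gram;
    for (1 .. 200) {                       # generate up to 200 characters
        last unless exists $map{$gram};    # stop at a dead end
        $output .= weighted_pick($map{$gram});
        $gram = substr($output, -$n);      # slide the window forward
    }
    print "$output\n";

Small n-grams (n = 1 or 2) produce near-gibberish that only resembles the source statistically; larger ones reproduce longer and longer runs of the original text verbatim.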
In essence, the program is building a probability map of the raw text. In the process, it reveals the lexical and structural similarities in all of our responses. The resulting texts are humorous (or at least, I think they are!), but the process of generating them is subversive: just like a well-made map, it subjects the underlying topography to new readings.
Source code is available on request (if you really need to see a trivial implementation of a Markov chain in Perl...). The original transcription is here.