Above is a portion of a screenshot from one of my thesis devices. I'm calling it the Entropic Text Editor.
How it works: an analog value is read from a repurposed expression pedal. (This one, specifically.) The pedal's position is fed into a text editing application that intercepts the user's keystrokes. A randomization algorithm is applied to the input on a character-by-character basis as the keystrokes occur; the further the pedal is pressed down, the more random the text gets.
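The core of the idea fits in a few lines. Here's a minimal sketch, assuming the pedal value arrives normalized to the range 0.0 (up) through 1.0 (fully pressed); the choice of replacement alphabet is illustrative:

```python
import random
import string

REPLACEMENTS = string.ascii_lowercase + " "

def anomalize(ch, pedal):
    """Return the typed character unchanged, or a random character,
    with probability equal to the pedal position (0.0 to 1.0)."""
    if random.random() < pedal:
        return random.choice(REPLACEMENTS)
    return ch
```

With the pedal up (`pedal = 0.0`) every keystroke passes through untouched; fully pressed (`pedal = 1.0`), every character is replaced.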
The Entropic Text Editor is the simplest implementation of what I see as a class of devices: prepared (augmented and/or constrained) computer keyboards. The hands are free to engage in the familiar act of typing, but another channel of information is added that modifies how the typing works. The artifacts that result from the Entropic Text Editor incorporate not just the literal content of the text, but also a history of the user's gestures.
Here's a PDF exported from a session with the Entropic Text Editor, during which I transcribed Jabberwocky. I coaxed the pedal to the maximum value up until the end of the second stanza, then gradually eased off until the end of the fourth; the fifth stanza is full out, pedal-to-the-metal randomness, and the last stanza has no randomness at all.
See below the cut for images from prototype versions of the software.
Click the image below for a video of my latest thesis prototype (or download the full-size version here).
The working title is Generate/Modulate. It's essentially an interactive Markov chain generator, based on word probabilities. As the source text (Genesis 1 from the KJV in the video above) is parsed, the program makes a list of all two-word collocations and every word that can follow each collocation; for example, the collocation "in the" can be followed by any of the tokens in this list: image, open, midst, seas, earth, etc. Pressing the "A" button on the controller looks at the last two words on the screen and displays the word most likely to follow them. If more than one word is possible, the word is displayed in blue, and you can use the joystick to move between alternatives. If only one word is possible, it's displayed in red. Pressing "A" again generates the next word in the chain, using the most recently generated word along with the word that directly precedes it.
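The collocation table described above can be sketched as a dictionary mapping each two-word pair to a frequency count of its followers (function names here are mine, for illustration):

```python
from collections import defaultdict, Counter

def build_table(text):
    """Map each two-word collocation to a Counter of the words
    that follow it in the source text."""
    words = text.split()
    table = defaultdict(Counter)
    for w1, w2, w3 in zip(words, words[1:], words[2:]):
        table[(w1, w2)][w3] += 1
    return table

def most_likely_next(table, w1, w2):
    """The word most likely to follow the collocation (w1, w2),
    or None if the collocation never occurs."""
    followers = table.get((w1, w2))
    if not followers:
        return None
    return followers.most_common(1)[0][0]
```

Pressing "A" corresponds to calling `most_likely_next` on the last two words displayed; the other entries in the Counter are the joystick alternatives.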
A wordy explanation, but I'm actually kind of happy with the intuitiveness of the interaction. You're building a text that retains the semantic and rhythmic characteristics of the original, but with unexpected syntax and lexical juxtapositions. The interface constantly presents you with choices that are immediately meaningful, but also strongly suggest the shape of future choices. It's kinda fun to watch, too.
The Xbox 360 controller isn't what the final interface will look like, of course—I was just using it to prototype the software and the interaction as quickly as possible. I've had a couple of ideas relating to the final interface. Here's my favorite so far:
Behold the first physical prototype of the Text Drum. I turned the practice pad in the lower right-hand corner of the photograph into a drum trigger by outfitting it with a piezo sensor (I followed these instructions, though I used the bottom of a can of Danish butter cookies instead of a disc of galvanized steel). The pad worked so well that I decided I needed a second sensor, so I glued a second piezo to the side of a block of wood I scavenged from the shop.
Both the pad and the block are connected to my Arduino, which sends data from the piezos (using code adapted from todbot's tutorial) over serial to the Semantic Anomalizer (pictured on the screen, in the process of mutating Pride and Prejudice).
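On the software side, the trigger logic amounts to a threshold plus a short refractory period so the piezo's ringing isn't counted as multiple hits. A sketch of that logic (the threshold and refractory values here are illustrative, not the ones in my actual code):

```python
def detect_hits(readings, threshold=100, refractory=5):
    """Given a sequence of analog piezo readings (0-1023),
    return the indices where a hit begins. After each hit,
    ignore the next `refractory` samples."""
    hits = []
    cooldown = 0
    for i, value in enumerate(readings):
        if cooldown > 0:
            cooldown -= 1
            continue
        if value >= threshold:
            hits.append(i)
            cooldown = refractory
    return hits
```

Each detected hit gets sent over serial to the Semantic Anomalizer, which responds by emitting the next word.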
Overall, I'm pleased: I'm getting reliable, well-timed readings from the drum triggers, and using them along with the software I've been prototyping was gratifying. As I mentioned above, playing with the prototype made it obvious that more than one trigger was needed; I programmed the second trigger (the wood block) to insert a line break into the text, which adds a few new expressive and structural possibilities.
Problems: I cut the foam inside the practice pad unevenly, and the metal that the piezo is attached to is slightly warped. As a result, the trigger's response varies across its surface. The trigger fires reliably; it just doesn't give reliable data about how hard it has been hit. Right now, I don't need that data—I just want a digital trigger. But this is definitely an avenue for future improvement.
Also, I'm not sure how well my original idea for the interface will work—i.e., a mapping between your rhythmic accuracy and the amount of randomness in the order of the words that the program outputs. For the most part, I just enjoyed hitting stuff and making words come out. I'm not sure if subtly varying your timing is the best way to be expressive with this thing. More experimentation is needed.
For my A2Z midterm, I'm going to implement a portion of my thesis. I'm calling my thesis New Interfaces for Textual Expression; it consists of a series of devices and interfaces intended to make the act of creating text more like a performance. These devices augment or replace the keyboard (and other literal means of input); they're designed to be intuitive (for both the user and the observer) yet still create unique (baffling, nonsensical, even touching) and readable texts.
My midterm project will be one of the devices I need to prototype for my thesis. I call this one the Text Drum:
Here's how it's supposed to work. The Text Drum allows you to "play" a source text. Playing a perfectly steady rhythm will output the source text (word by word) in its original order. As you syncopate the beat, however, the words will be scrambled, with an amount of entropy proportional to your distance from the beat.
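One possible mapping from timing error to entropy: instead of always emitting the next word in sequence, draw from a window around the current position whose radius grows with the error. This is an illustrative sketch, not a settled design:

```python
import random

def next_word(words, index, timing_error, rng=random):
    """Return the word to output at position `index`. With no timing
    error, this is the source word itself; otherwise a word is drawn
    from a window around `index` whose radius scales with the error
    (0.0 = dead on the beat, 1.0 = maximally off)."""
    radius = int(timing_error * len(words))  # illustrative mapping
    if radius == 0:
        return words[index]
    lo = max(0, index - radius)
    hi = min(len(words) - 1, index + radius)
    return words[rng.randint(lo, hi)]
```

A perfectly steady beat keeps `timing_error` at zero and reproduces the source text word for word; syncopation widens the window and scrambles the output.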
The implementation will consist of a hardware component and a software component. The hardware presents the main technical stumbling block, since I have no idea how to build something like this. Piezos may be involved. The software will consist of some kind of receiver that reads data from the controller and forwards it to my Semantic Anomalizer—a WebKit-based text editor that responds to OSC (more details here).
Depending on time, I may end up presenting a prototype that includes only the software portion, with the drum emulated by keystrokes. We'll see.
The above is a screenshot of the very first prototype of the software I'm writing for my thesis. I'm calling the software the Semantic Anomalizer. It's a subclass of Apple's WebKit HTML renderer that I've hacked to respond to OSC. The screenshot above depicts the result of a live typing session in which the size of the font is being modulated by a sine wave signal coming from a Processing applet. It's the simplest possible application, but the results are promising.
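The sine-wave modulation is just a function of time mapped onto a size range. A sketch of the signal the Processing applet sends (the frequency and size range here are illustrative, not the actual values I used):

```python
import math

def font_size(t, freq=0.5, min_size=12, max_size=72):
    """Map a sine wave at time t (seconds) to a font size in points."""
    s = (math.sin(2 * math.pi * freq * t) + 1) / 2  # normalize to 0..1
    return min_size + s * (max_size - min_size)
```

Each OSC message carries the current value of this function, and the Anomalizer applies it as a CSS font-size on the text being typed.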
More as it develops.
Here's a PDF of the presentation on methodology I gave on Wednesday. My new thesis idea is New Interfaces for Textual Expression: real-time interfaces for generating text. Some highlights from the deck (click on any of the links below for larger versions):
The first diagram represents what I see as the current state of digital poetry (and interactive text in general): the author designs a text, and an interface around that text, which the user must then discover and uncover and generally get annoyed with. The second diagram represents a new model: the interface comes between the author and the text, proposing new ways for texts to come into existence. The audience is a witness to this action. In this way, watching someone write becomes more like watching any other kind of performance (music, dance, theater, video games...).
Some of my prototype ideas:
The "Text Button" is the simplest of my ideas. It's a button. When you push it, text appears on the screen. When you release the button, the text stops. Simple, probably not effective, but nevertheless the first place to start.
This is one idea for an "augmented keyboard" - it's a keyboard with a foot pedal attached. The foot pedal can change aspects of the text the author is creating as he or she types, such as the size of the text, font weight, line spacing, etc.
The Text Drum allows you to "play" a source text. Playing a perfectly steady rhythm will output the source text (word by word) in its original order. As you syncopate the beat, however, the words will be scrambled, with an amount of entropy proportional to your distance from the beat.
More prototype ideas forthcoming.
The assignment this week in Living Art was to "make random." I made this:
Essentially, it's a ball that has words glued to the outside of it. When you ink it up and roll it across paper, it creates a composition: fragments of words, spread across the page in a way that is responsive to the gestures of the artist, but retains some amount of unpredictability.
The printing sphere is made from a 4 lb Everlast medicine ball, to which are glued 20 or so words hand-carved in Speedball "Speedy Carve" medium. The typeface is News Gothic (bold).
Andy Miller took some excellent photos of my presentation of the Sphere in Living Art.
Rationale and more images after the jump.
(Edit: This is no longer my thesis idea! Although it is still an awesome idea.)
Originally a 2D game:
... and so fans were apprehensive when the 3D sequel was announced. But then it turned out that everyone liked it!
A "zero player" RPG:
It maintains all of the mechanics of a role-playing game (e.g., World of Warcraft, Final Fantasy), but has a very different means of interaction. It's an example of a game made in response to another game (or genre of games). Part of the recent Zero Gamer show.
A "performance" of Ulysses on Twitter, in celebration of Bloomsday. They're taking an already pretty ludic text and adapting it to a different genre—importantly, for my purposes, a text-based genre. It works as commentary not just on the source material but on the medium itself (though I don't agree with Bogost's sentiments about Twitter at all).
More information about the project. I'll be giving a demo in class.
I still haven't gone through the bibliography of Twisty Little Passages in detail. Montfort talks at some length about interactive fiction adaptations of novels, which will be a useful reference.