Category Archives: Social Software

Comicable

Check it out at: http://comicable.com/

“The Never-Ending Break-Up Make-Up Storytelling Comic Book Generator” (or NEGBUMCSBG, as we like to call it) arose from an interest in storytelling and comics, and from the desire to create a shared space that not only allowed participants to create stories but also let individuals locate and connect with similar stories, or with similarly minded storytellers.

Xena Footnotes

This is a chat environment that allows fans to congregate around their favorite parts of the Xena television show. The interface differs from Time Space Chat in that the clip is broken down into segments, each depicted by a bar in a bar chart; the height of a bar maps to the number of comments for that segment. Comments are distributed synchronously to other people logged on concurrently and are also stored for future users. The software works for all types of streaming media. Special moderators can log on and edit the chat, and chat is automatically censored for profanity. There is also a sniffer that detects the appropriate version of the clip to play for the user’s connection speed. I worked with Sharleen Smith and Yaron Ben-Zvi at Oxygen.
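
The original implementation isn’t shown here, but the bar-chart mapping is easy to sketch: count the comments in each segment of the clip and scale the busiest segment to the full chart height. A minimal Java sketch, with invented names:

import java.util.Arrays;

public class SegmentBars {
    /** Returns a pixel height for each clip segment, scaled so the
     *  busiest segment fills the available chart height. */
    static int[] barHeights(int[] commentCounts, int maxBarPixels) {
        int max = 1; // avoid division by zero when there are no comments yet
        for (int c : commentCounts) max = Math.max(max, c);
        int[] heights = new int[commentCounts.length];
        for (int i = 0; i < commentCounts.length; i++) {
            heights[i] = commentCounts[i] * maxBarPixels / max;
        }
        return heights;
    }

    /** Maps a playback time to the segment (bar) it falls in. */
    static int segmentFor(double seconds, double clipLength, int numSegments) {
        int i = (int) (seconds / clipLength * numSegments);
        return Math.min(i, numSegments - 1);
    }

    public static void main(String[] args) {
        int[] counts = {3, 0, 12, 7};
        System.out.println(Arrays.toString(barHeights(counts, 100))); // [25, 0, 100, 58]
        System.out.println(segmentFor(42.0, 120.0, 4));               // segment 1
    }
}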


Booty Cam

In most chat environments we rely solely on text; expression, intonation, and gesture do not get conveyed. In Goffman’s terms, text carries only what you “give,” not what you “give off.” This project is an experiment in bringing the added context of a person’s expression into a 3D virtual chat environment. In this world, the user can shake their booty in front of the camera and capture an animation that is then sent to all other clients. It also looks at the idea of navigation by similarity. Most 3D worlds use a travel metaphor for navigation; in this world you move yourself into the company of other people with similar booty moves by moving your booty or by stealing other people’s booty moves. This was a collaboration with Lili Cheng and Sean Kelly of Microsoft’s social computing group.

Technical Notes: I wrote the video-tracking software for capturing booty movement as a Java applet, and I did the scripting for the Microsoft VWorlds software in JavaScript.
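
The applet’s source isn’t reproduced here, but the kind of frame-differencing such video tracking might use can be sketched in a few lines of Java. Names and thresholds are illustrative, not the original code:

public class MotionDetector {
    /** Fraction of pixels that changed by more than `threshold`
     *  between two same-sized grayscale frames (pixel values 0-255). */
    static double motionAmount(int[] prev, int[] curr, int threshold) {
        int changed = 0;
        for (int i = 0; i < prev.length; i++) {
            if (Math.abs(curr[i] - prev[i]) > threshold) changed++;
        }
        return (double) changed / prev.length;
    }

    public static void main(String[] args) {
        int[] a = {10, 10, 10, 10};
        int[] b = {10, 200, 10, 190};
        System.out.println(motionAmount(a, b, 30)); // half the pixels moved -> 0.5
    }
}

Sampling this motion measure over time, frame by frame, yields the animation data that gets sent to the other clients.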

Space of Faces

SEE VIDEO

This is a conceptual space of all possible faces. As the user contorts a cartoon face, they navigate through the space, and related faces are displayed next to the user’s face. The user can quickly navigate toward another person’s face by clicking on, and thus stealing, its features. This is an interface experiment that tries to improve on the travel metaphor used in most spatial interfaces, for instance the 2D Macintosh desktop or 3D virtual environments. These break down at the scale and interactivity of the Internet, where more people are able to contribute rather than merely view material. In traditional interfaces, people place themselves at one spot along the dimensions of x, y, and z; they are related to their neighbors only along those two or three dimensions, or along some category mapped to those dimensions. Searching for other people becomes as tedious as traveling on foot through an enormous and growing city. Internet search engines avoid this problem by allowing documents to be related along as many dimensions as there are words in the document, but they abandon the spatial interface altogether and look more like the old DOS interface. Space of Faces tried to take the best from these two interface techniques.

Technical Notes: I wrote the web client as a Java applet and the server as a Java application connected via JDBC to an MS-SQL database.
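
The applet itself isn’t shown, but the navigation-by-similarity idea can be sketched: treat each face as a vector of feature parameters, find neighbors by distance, and let “stealing” a feature copy one parameter from a neighbor into your own vector, moving you through the space. A hypothetical Java sketch, with invented names:

import java.util.Arrays;
import java.util.List;

public class FaceSpace {
    /** Euclidean distance between two faces' parameter vectors. */
    static double distance(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) sum += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(sum);
    }

    /** The face in `others` closest to `mine`, shown beside the user. */
    static double[] nearest(double[] mine, List<double[]> others) {
        double best = Double.MAX_VALUE;
        double[] result = null;
        for (double[] f : others) {
            double d = distance(mine, f);
            if (d < best) { best = d; result = f; }
        }
        return result;
    }

    /** "Stealing" a feature: copy one parameter from a neighbor's face. */
    static void steal(double[] mine, double[] theirs, int feature) {
        mine[feature] = theirs[feature];
    }

    public static void main(String[] args) {
        double[] me = {0.2, 0.9, 0.1}; // e.g. eye size, mouth curve, nose width
        List<double[]> others = List.of(new double[]{0.8, 0.8, 0.1},
                                        new double[]{0.3, 0.7, 0.2});
        steal(me, nearest(me, others), 1);   // adopt the neighbor's mouth curve
        System.out.println(Arrays.toString(me)); // [0.2, 0.7, 0.1]
    }
}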

Time Space Chat

SEE VIDEO

This is a computer conferencing system for watching movies. Many people may be watching or listening to the same linear media, but only the comments of people viewing a frame close in time to the one you are viewing are displayed prominently; other comments blur into the periphery. Each person has normal control over the movie (pause, fast-forward, rewind) or can slave themselves to another user’s controls.

This software associates a person’s comments with the given moment in the linear media that they were viewing when they made the comment and sends the comment out over the network. The software then filters comments coming in over the network according to how close they are chronologically or thematically to the frame in the linear media that the user is watching. The association works both ways. People who are experiencing the same moment of a clip would naturally be more interested in each other’s comments. Conversely people who like talking about a particular topic might want to be represented by a particular moment of the clip.

This is an improvement over techniques like chat rooms, channels, or threads in conventional conferencing software because: (1) users form groups without much conscious effort, merely by showing a preference for one part of the media; and (2) associations fall along a continuum, without the discrete boundaries of chat rooms, channels, and threads.
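
A minimal Java sketch of the filtering idea, assuming a simple linear falloff (the actual weighting, and the thematic component, are not specified here):

public class TimeSpaceFilter {
    /** Prominence in [0,1]: 1 when comment and viewer share a frame,
     *  fading to 0 once they are more than `window` frames apart. */
    static double prominence(long commentFrame, long viewerFrame, long window) {
        long gap = Math.abs(commentFrame - viewerFrame);
        return Math.max(0.0, 1.0 - (double) gap / window);
    }

    public static void main(String[] args) {
        System.out.println(prominence(1000, 1010, 100)); // nearby -> 0.9, drawn sharply
        System.out.println(prominence(1000, 1090, 100)); // distant -> 0.1, blurred
    }
}

Because prominence varies continuously with the gap, groups of viewers shade into one another rather than being partitioned into rooms.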

Technical Notes: I wrote this in Macromedia Director/Lingo, exported it to Shockwave, and networked it using Macromedia’s Multiuser Server. NYU is pursuing a patent on this idea.

MindChime

This was an experiment in using a narrative to constrain a virtual environment. I worked with a writer to develop a story about a futuristic environment and an underground resistance. After going through the story, the user has to prove his/her membership in the resistance by taking a test called the “MindChime.” The user will have a different appearance in the chat environment as a result of his/her test score.

This was a collaboration with the playwright Michael Barnwell and Lili Cheng of the Virtual Worlds Group at Microsoft Research.

Technical Notes: This was created in Macromedia Director/Lingo.

Near There

SEE VIDEO

This is a web application where a person creates a collage using colors, circles, squares, lines, and text. As an individual makes their collage on the left side of the screen, the most similar collage from previous users shows up on the right side. As users begin to intuit the matching mechanism, they deliberately alter their drawings to place them near another person’s. They thus navigate through the vast space of all possible drawings by making one of their own.

This is an interface experiment accommodating applications where many individuals are making contributions. It tries to make the act of contributing a drawing also an act of navigation towards other people.

In addition to standard browser functions like bookmarks, histories and searches, the application also allows people to see who has seen (matched) their collage, and to send email.

Technical Notes: This was created with Beta Shockwave software and Perl/CGI programming using Unix DBMs.
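
The matching mechanism isn’t documented here, but one plausible sketch reduces a collage to a feature vector (counts of each primitive plus average color, an assumption on my part) and re-finds the nearest previous collage after every edit. A hypothetical Java illustration:

import java.util.List;

public class NearThere {
    /** Invented representation: circles, squares, lines, text items, avg R/G/B. */
    static double[] features(int circles, int squares, int lines, int texts,
                             double r, double g, double b) {
        return new double[]{circles, squares, lines, texts, r, g, b};
    }

    /** Nearest previous collage; redrawn on the right after every edit. */
    static double[] bestMatch(double[] mine, List<double[]> previous) {
        double best = Double.MAX_VALUE;
        double[] match = null;
        for (double[] p : previous) {
            double d = 0;
            for (int i = 0; i < mine.length; i++) d += (mine[i] - p[i]) * (mine[i] - p[i]);
            if (d < best) { best = d; match = p; }
        }
        return match;
    }

    public static void main(String[] args) {
        double[] mine = features(3, 1, 0, 2, 200, 40, 40); // red, circle-heavy collage
        List<double[]> previous = List.of(
            features(4, 0, 1, 1, 190, 60, 50),
            features(0, 5, 2, 0, 20, 20, 220));
        System.out.println(bestMatch(mine, previous)[0]);  // matches the red one -> 4.0
    }
}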

Collage Narrative

SEE VIDEO

An interactive collaging tool for combining QuickTime movies and QuickTime VR movies into a narrative and sharing them over a network. The idea was to allow people to make stories and move closer to people making similar stories. The interface made extensive use of contextual pie menus.

Technical Notes: This was written in Macromedia Director/Lingo and connected to a server directly via modem (this was before Internet technologies were widely used).
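
Pie-menu hit testing is simple to sketch: the angle from the menu’s center to the pointer selects one of n wedges. A small Java illustration of the general technique, not the original Lingo code:

public class PieMenu {
    /** Index of the wedge under the pointer, 0..n-1, with wedge 0 centered
     *  at 12 o'clock and indices increasing clockwise (screen coordinates). */
    static int wedgeAt(double cx, double cy, double px, double py, int n) {
        double angle = Math.atan2(px - cx, cy - py); // 0 at 12 o'clock, clockwise
        if (angle < 0) angle += 2 * Math.PI;
        double wedge = 2 * Math.PI / n;
        return (int) ((angle + wedge / 2) / wedge) % n;
    }

    public static void main(String[] args) {
        System.out.println(wedgeAt(0, 0, 0, -10, 8)); // straight up -> wedge 0
        System.out.println(wedgeAt(0, 0, 10, 0, 8));  // to the right -> wedge 2
    }
}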

Dan’s View

I made a panning, time-lapse photograph out a window over a period of 24 hours. The television audience could adjust, with their touch-tone telephones, the angle of the pan or the time displayed. People at home could also replace the image at any particular time and pan angle by sending in a computer graphic over a modem.

Technical Notes: The television interface was programmed using CanDo software on an Amiga attached to an Alpha Products box that decoded the touch-tones. An A-Talk 3 script accepted the transmitted graphics files.
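
A hypothetical sketch of the control logic, treating the footage as a grid of frames indexed by pan angle and hour. The key assignments and frame counts here are invented, not the original CanDo program:

public class DansView {
    static final int PANS = 16, HOURS = 24;   // assumed resolution of the grid
    int panIndex = 0, hourIndex = 12;         // current frame coordinates

    /** Applies one DTMF digit: 4/6 pan left/right, 2/8 step time back/forward. */
    void press(char digit) {
        switch (digit) {
            case '4': panIndex  = (panIndex  + PANS  - 1) % PANS;  break;
            case '6': panIndex  = (panIndex  + 1)         % PANS;  break;
            case '2': hourIndex = (hourIndex + HOURS - 1) % HOURS; break;
            case '8': hourIndex = (hourIndex + 1)         % HOURS; break;
        }
    }

    /** The frame to display for the current pan angle and time of day. */
    int frameNumber() { return hourIndex * PANS + panIndex; }

    public static void main(String[] args) {
        DansView v = new DansView();
        v.press('6');                          // pan right once
        v.press('8');                          // step one hour forward
        System.out.println(v.frameNumber());   // 13 * 16 + 1 = 209
    }
}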

Yorb

SEE VIDEO

This was an investigation of interactive television in which the viewers create the programming content instead of choosing among professionally produced alternatives. Urban design, architecture, and interior design became interface tools as Manhattan cable viewers were invited to navigate around a virtual world using the buttons on their telephones. Within the world, viewers encountered pictures, sounds, and video that had been sent in by other viewers via modem, fax, telephone, and Ethernet; the messages seen on TV were also available for distribution over these networks. I conceived, designed, and programmed this automated television program. The work was widely written up and presented at numerous conferences, such as Imagina in Monte Carlo and the New York Interactive Association. It aired three nights a week on Manhattan Cable Television, was sponsored by NYNEX, and became a showpiece for the department.

I worked with Red Burns, Lili Cheng, Nai Wai Hsu, Eric Fixler and about a million others.

Technical Notes: VPL’s virtual reality software on an SGI machine rendered the imagery in real time. The SGI machine was located at the Medical School, so the video output had to travel across Manhattan over several different technologies, with an Ethernet connection going back to control the SGI. The system also made use of a Video Toaster on an Amiga for mixing video, and various boxes for telephone voice and touch-tone input. A Macintosh running HyperCard was the main program for serving up media.

Mayor For A Minute

SEE VIDEO

This was a pie chart in which each viewer could shift the consensus on how conflicting priorities should be resolved. A computer-graphic pie chart filled the screen, with each slice representing a portion of the city. A viewer could pick a slice and reallocate resources from one slice to another. While the viewer worked on a slice, video bites advocating spending in that area showed through the slice. The idea was to have this video sent in by community groups on a daily basis, forming a computer-based television network. After a while, a video face would come out of the pie and editorialize on the reallocation. NYU and Apple Computer cosponsored the project. A prototype was demonstrated at SIGCHI ’92.

Technical Notes: This used a videodisc, and later QuickTime, controlled by HyperCard. The touch-tones were decoded by a Black Box DTMF-to-ASCII converter.
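
The reallocation rule implied by the description is that resources moved into one slice must come out of another, so the pie always sums to the whole budget. A small Java sketch with illustrative names, not the original HyperCard logic:

import java.util.Arrays;

public class CityBudget {
    double[] slices; // each viewer-adjustable slice's share of the budget

    CityBudget(double[] slices) { this.slices = slices; }

    /** Moves `amount` from slice `from` to slice `to`, clamped so no
     *  slice goes negative; the total always stays the same. */
    void reallocate(int from, int to, double amount) {
        double moved = Math.min(amount, slices[from]);
        slices[from] -= moved;
        slices[to]   += moved;
    }

    public static void main(String[] args) {
        CityBudget b = new CityBudget(new double[]{0.25, 0.25, 0.25, 0.25});
        b.reallocate(0, 2, 0.10); // shift 10% of the city budget between slices
        System.out.println(Arrays.toString(b.slices)); // [0.15, 0.25, 0.35, 0.25]
    }
}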