I have a Master’s Degree in Interactive Telecommunications, so I get a lot of calls to help people with their technical systems. These calls are mostly from my Mom wanting to watch TV, and they mostly result from her not understanding the universal remote. My life on the family AV squad would be easier if I could see what she sees when she calls, so I thought it would be nice to give her a hat with a camera in it. My first thought was one of those swell network cameras. They are small and wireless, and I would not have to do any of the networking. Unfortunately, my Mom does not have an Internet connection. My next thought was to use some of the solutions that the lifecasting folks are using, like Qik, but I don’t think that can stream to another phone, and she usually needs help when I am out and about. I also kind of wanted to learn how to program for Android, so I decided to do it myself on one of those. I did it as a one-day project for ITP’s 4in4.
A series of experiments in combining television and Internet technologies.
1) Roaming Close Up
This is a client for viewing streaming video. First, a high-resolution still image of the studio background is displayed. A small rectangle of full-motion streaming video, a zoomed-in shot from the studio camera, is then superimposed on the wide shot. The rectangle of video moves around on the client’s machine depending on where the studio camera is aimed. Another version had the video leave a trail so that the background would always be updated by the video passing over it.
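The placement of the zoomed-in rectangle follows from the camera’s pan and tilt relative to the wide shot. The original client was not written in Python and its exact math isn’t documented, but the idea can be sketched roughly like this (the field-of-view constants and function name are illustrative assumptions):

```python
def zoom_window_position(pan_deg, tilt_deg, still_w, still_h,
                         fov_h_deg=60.0, fov_v_deg=40.0):
    """Map the studio camera's pan/tilt (degrees, relative to the
    center of the wide shot) to the pixel on the high-resolution
    still where the zoomed-in video rectangle should be drawn."""
    # Normalize pan/tilt to a 0..1 fraction across the wide shot.
    x_frac = 0.5 + pan_deg / fov_h_deg
    y_frac = 0.5 + tilt_deg / fov_v_deg
    # Clamp so the video window stays on the still.
    x_frac = min(max(x_frac, 0.0), 1.0)
    y_frac = min(max(y_frac, 0.0), 1.0)
    return int(x_frac * still_w), int(y_frac * still_h)
```

The “trail” version simply blits each video frame into the still at that position instead of erasing it, so the background refreshes wherever the camera has looked.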
2) Video Cursor
This is a software system for broadcasting video commentary about very precise parts of web pages. It allows one person to stream an image of themselves talking in a small window that floats above the web pages of a large audience. The speaker can move this small video window to a particular area of the web page and have it appear over that same part of the page for everyone in the audience. It is a little bit like a video cursor.
3) Real Person, Virtual Background
This software connects the actual camera in the studio to the virtual camera in the 3D environment. This allows chroma-keyed backgrounds to move realistically as the camera pans in front of them. As the camera pans right to follow the character, the background pans left.
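The core of the effect is just an inverted, calibrated mapping from the physical camera’s pan to the background’s shift. A minimal sketch (the calibration constant and function name are assumptions, not the original implementation):

```python
def virtual_background_pan(camera_pan_deg, px_per_degree=12.0):
    """Pixels to shift the chroma-keyed background for a given camera
    pan. A positive (rightward) camera pan moves the background left,
    hence the negative sign. px_per_degree is a calibration constant
    relating camera rotation to on-screen background motion."""
    return -camera_pan_deg * px_per_degree
```

In practice the same inversion applies to tilt, and the constant is tuned until real foreground and virtual background appear locked together.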
4) Real Head, Virtual Body
This software streams video of a person in the studio into a virtual world displayed on the audience’s machines. The video is texture-mapped onto the head of a character in the virtual world. Movements of the person’s head in the studio are tracked using a combination camera/orientation-sensor bicycle helmet and are passed over the network to control the movements of the character in the virtual world. A 3D chat interface was also part of this project.
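Each tracker sample only needs a few numbers on the wire. The original wire format is not documented; a compact, assumed layout of three 32-bit floats in network byte order could be packed like this in Python:

```python
import struct

# Assumed datagram layout: yaw, pitch, roll in degrees as three
# big-endian (network byte order) 32-bit floats.
ORIENTATION_FMT = "!fff"

def pack_orientation(yaw, pitch, roll):
    """Serialize one helmet-sensor sample for sending over the network."""
    return struct.pack(ORIENTATION_FMT, yaw, pitch, roll)

def unpack_orientation(payload):
    """Recover (yaw, pitch, roll) on the receiving client, where the
    values drive the rotation of the character's head."""
    return struct.unpack(ORIENTATION_FMT, payload)
```

At 12 bytes per sample, even a high sensor update rate costs almost nothing next to the video texture stream.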
Project Description: This is a videoconferencing system that can connect up to twelve people across the Internet. Each user has a camera aimed toward them. The software digitizes an outline of that individual and relays it to the other people in the conference. The outline captures the gesture and general appearance of the user without the intrusive image detail that many users dislike about ordinary video conferencing systems, which transmit the full video image. Outlines have the advantage that they can be overlaid without obscuring one another. Instead of the usual split-screen grid (as seen, for example, in the opening credits of The Brady Bunch), this allows for more interesting interfaces where users can express themselves with the placement of their outline. Finally, this type of conferencing uses much less bandwidth, allowing more people to participate at better frame rates; users can therefore participate over slower connections. I have also worked on versions that allow the level of detail to be adjusted from mere outlines to internal edges and to cartoon colorization.
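The digitizing step boils down to edge detection on each captured frame. The actual applet’s detector isn’t documented, but a toy stand-in that marks a pixel as outline wherever it differs sharply from a neighbor looks like this (threshold value and function name are illustrative):

```python
def outline(gray, threshold=30):
    """Return a binary outline mask from a grayscale frame, given as a
    list of rows of 0-255 values. A pixel belongs to the outline if it
    differs sharply from its right or lower neighbor. Lowering the
    threshold pulls in internal edges, the higher-detail mode."""
    h, w = len(gray), len(gray[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            if (abs(gray[y][x] - gray[y][x + 1]) > threshold or
                    abs(gray[y][x] - gray[y + 1][x]) > threshold):
                mask[y][x] = 1
    return mask
```

A 1-bit mask like this is also what makes the bandwidth savings possible: it is tiny compared to a full color frame, and sparse enough to compress well.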
Technical Notes: I wrote the client or end-user software as a Java Applet using QuickTime for the video digitization. I wrote the UDP server as a Java application. NYU is pursuing a patent on this idea.
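The UDP server’s job is essentially fan-out: remember who is in the conference and forward each outline packet to everyone else. The original server was a Java application; this Python sketch shows only the routing logic, with socket I/O omitted and all names invented for illustration:

```python
class OutlineRelay:
    """Routing core of a UDP relay for the outline conference:
    remember each sender's address and report which peers a
    datagram should be forwarded to."""

    def __init__(self, max_clients=12):
        self.max_clients = max_clients  # the twelve-person limit
        self.clients = []               # (host, port) tuples

    def route(self, sender):
        """Register the sender if new; return forwarding targets."""
        if sender not in self.clients:
            if len(self.clients) >= self.max_clients:
                return []  # conference is full; drop the packet
            self.clients.append(sender)
        # Forward to everyone except the sender itself.
        return [c for c in self.clients if c != sender]
```

A real server would wrap this in a `socket.recvfrom()` loop, calling `sendto()` for each address that `route()` returns.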
In most chat environments we rely solely on text. Expression, intonation, and gesture do not get conveyed; as Goffman says, “only what you give out, not what you give off.” This project is an experiment in bringing the added context of a person’s expression into a 3D virtual chat environment. In this world, the user can shake their booty in front of the camera and capture an animation that is then sent to all other clients. It also explores the idea of navigation by similarity. Most 3D worlds use a travel metaphor for navigation; in this world, you move yourself into the company of other people with similar booty moves by moving your booty or by stealing other people’s booty moves. This was a collaboration with Lili Cheng and Sean Kelly of Microsoft’s social computing group.
Project Description: This is a webcam that only sends outlines of fast-moving objects in its view. Only after an object has been still for a while is it added as an image to the background. Viewers can move their mouse to express interest in different areas of the image. The cursor coordinates are sent to the other users and show up as highlighted areas of interest.
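The motion/background split can be sketched per pixel: anything that changed since the previous frame counts as moving (and would go out as outline data), while a pixel that stays unchanged for long enough gets folded into the stored background still. This is an assumed reconstruction, not the original Java code, and the threshold and settle values are illustrative:

```python
def update(frame, prev, background, still_count, threshold=25, settle=30):
    """One step of the motion/background split for grayscale frames
    (lists of rows of 0-255 values). Mutates `background` and the
    per-pixel `still_count` grid in place; returns the motion mask."""
    h, w = len(frame), len(frame[0])
    moving = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if abs(frame[y][x] - prev[y][x]) > threshold:
                moving[y][x] = True        # would be sent as outline
                still_count[y][x] = 0
            else:
                still_count[y][x] += 1
                if still_count[y][x] >= settle:
                    # Still long enough: merge into the background.
                    background[y][x] = frame[y][x]
    return moving
```

With this split, the fast-moving foreground is cheap to stream while the slowly refreshed background is sent only occasionally.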
I created several versions of this application. One version automatically refreshed the image under the user’s mouse. The webcam was set up in the lobby at ITP with a display, so people in the lobby could see the areas of interest to web viewers. It also allowed web viewers to shout out to people in the lobby using text-to-speech.
Technical Notes: The video digitizing software and the server were written as Java applications.
An interactive television show that allowed viewers to virtually navigate through my apartment by speaking commands into their phone. This began as an experiment looking for better interface metaphors than the desktop. By putting it on television, it took on a strange voyeuristic quality. It developed a large cable following and provided me with my 15 minutes of worldwide attention. I did this work as a student at ITP, NYU.
Technical Notes: I shot all pathways through my apartment onto video and pressed them to a laserdisc. I built a box using a Voice Recognition chip from Radio Shack and connected it via the parallel port of an Amiga to AmigaVision software that controlled the graphics and the laserdisc.