March 19, 2006
With a week off to play and catch up on missed opportunities, I return this week with a preview and primer of things to come.
The video at right was accomplished using a Jitter patch from an article written by bi-coastal artist and Res Magazine contributor Perry Hoberman. Implemented with two iSights and a common evolutionary trait known as stereoscopic vision, all you need is two eyes and a pair of standard 3-D anaglyph red/blue or red/cyan glasses (got mine from St. Marks Comics) and you're in business.
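For anyone curious about the mechanics: this is not Hoberman's Jitter patch, just a minimal sketch of the red/cyan idea in Java, with class and method names I made up for illustration. Take the red channel from the left-eye frame and the green and blue channels from the right-eye frame.

import java.awt.image.BufferedImage;

// Sketch only: fuse left/right eye frames into a red/cyan anaglyph.
// Assumes both frames are the same size. Not the Jitter patch from the article.
public class Anaglyph {
    public static BufferedImage combine(BufferedImage left, BufferedImage right) {
        int w = left.getWidth(), h = left.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int l = left.getRGB(x, y);
                int r = right.getRGB(x, y);
                // red from the left eye, green + blue (cyan) from the right eye
                out.setRGB(x, y, (l & 0x00FF0000) | (r & 0x0000FFFF));
            }
        }
        return out;
    }
}

Put on the glasses and the left eye sees mostly the left frame, the right eye mostly the right frame, and the brain does the rest.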
In the article (Double iSight: EZ 3-D Filmmaking, Res Magazine, Jan/Feb 2006) I am happy to hear Hoberman comment, "At last the technological infrastructure may be in place to allow 3-D to become part of mainstream cinematic practice...However, 3-D will remain nothing more than a gimmick without the development of a new cinematic language..." A defense of content. It is always nice to hear.
Posted by andrew schneider at 10:52 PM
March 02, 2006
1.) Out of ContextCam >>> Capture triggered by sound. A small camera attached to my face captures video triggered by a sound threshold. Talk and it records. Shut up and it stops. I am interested here in capturing meaningful (or not) moments throughout a day. Compile the footage and we see and hear one side of a conversation completely out of context. What are the implications for generating creative content? Will Richard Foreman want one? What other applications does this system lend itself to? I've experimented with the camera trained just on the mouth, as well as looking outward from the face. The video shows a compiled example of each.
2.) Requiem JumpCut >>> Using the JumpCutCam in Java, which inherits from Dan O'Sullivan's MotionDetectorCam, I've algorithmically edited the film "Requiem For a Dream" down to only the first frames following a cut. Certain patterns can be seen, such as strobing between two characters in particular scenes, as well as this film's quick-cutting technique for portraying drug use. I hope to further refine the Cam using "seed planting" video-tracking techniques, which are much less computationally expensive. I will also explore the juxtaposition between the rapid and constant cutting in this film and a slower-paced film such as Roy Andersson's "Songs From the Second Floor."
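The actual JumpCutCam class is not reproduced here; the gist, sketched below under the assumption that frames arrive as int[] arrays of packed RGB pixels (the threshold numbers are placeholders I picked for illustration), is to keep only the first frame after the inter-frame difference spikes.

import java.util.ArrayList;
import java.util.List;

// Sketch of the jump-cut idea, not the actual JumpCutCam class:
// keep only the first frame after the difference between consecutive frames spikes.
public class CutDetector {
    static final double CUT_THRESHOLD = 0.35; // fraction of pixels that must change (placeholder)

    // returns indices of the first frame following each detected cut
    public static List<Integer> firstFramesAfterCuts(List<int[]> frames) {
        List<Integer> kept = new ArrayList<Integer>();
        boolean inCut = false;
        for (int i = 1; i < frames.size(); i++) {
            double diff = frameDifference(frames.get(i - 1), frames.get(i));
            if (diff > CUT_THRESHOLD && !inCut) {
                kept.add(i);   // first frame after the cut
                inCut = true;  // ignore the rest of a multi-frame transition
            } else if (diff <= CUT_THRESHOLD) {
                inCut = false;
            }
        }
        return kept;
    }

    // fraction of pixels whose color changed noticeably between two same-sized frames
    static double frameDifference(int[] a, int[] b) {
        int changed = 0;
        for (int i = 0; i < a.length; i++) {
            int dr = Math.abs(((a[i] >> 16) & 0xFF) - ((b[i] >> 16) & 0xFF));
            int dg = Math.abs(((a[i] >> 8) & 0xFF) - ((b[i] >> 8) & 0xFF));
            int db = Math.abs((a[i] & 0xFF) - (b[i] & 0xFF));
            if (dr + dg + db > 60) changed++; // per-pixel change threshold (arbitrary)
        }
        return (double) changed / a.length;
    }
}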
Posted by andrew schneider at 07:08 PM
February 25, 2006
Tracking Blobs|MIDTERM WORKSHOP
Assignment: Make a controller.
Make a midterm.
way down in the code: I've been working with various instances of this code that I first pulled from Chris Adamson's QuickTime for Java: A Developer’s Notebook. In starting to modify the code, I quickly realized what a blessing Dano's vxp has been. I am now in the land of a different sort of QuickTime. I am at a point where I need to start from scratch with the Adamson book. Something is not working in getting the code to check sound levels to control the start and stop of capture. It seems like a relatively simple thing. I've been working on it now for three weeks.
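For comparison, and as a sanity check that the gating logic itself is simple, here is a minimal sketch using the standard javax.sound.sampled API rather than QuickTime for Java. The threshold value and the startCapture()/stopCapture() hooks are placeholders, not code from the Adamson book.

import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.TargetDataLine;

// Sketch: gate capture on microphone level using Java Sound, not QTJ.
// startCapture()/stopCapture() are placeholder hooks for whatever does the grabbing.
public class SoundGate {
    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(22050f, 16, 1, true, false); // 16-bit mono, little-endian
        TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
        line.open(fmt);
        line.start();

        byte[] buf = new byte[2048];
        double threshold = 0.05;  // placeholder RMS threshold, 0..1
        boolean recording = false;

        while (true) {
            int n = line.read(buf, 0, buf.length);
            double level = rms(buf, n);
            if (level > threshold && !recording) { recording = true;  /* startCapture(); */ }
            if (level <= threshold && recording) { recording = false; /* stopCapture();  */ }
        }
    }

    // root-mean-square level of 16-bit little-endian samples, normalized to 0..1
    static double rms(byte[] buf, int n) {
        double sum = 0;
        int count = n / 2;
        for (int i = 0; i + 1 < n; i += 2) {
            int sample = (buf[i + 1] << 8) | (buf[i] & 0xFF);
            sum += (double) sample * sample;
        }
        return count == 0 ? 0 : Math.sqrt(sum / count) / 32768.0;
    }
}

In practice the gate would want a short hold time so capture doesn't stop between words, but the RMS check is the core of it.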
Posted by andrew schneider at 03:39 PM
February 18, 2006
Lighting, Manual Cameras, IR, Polarization, Retroreflective IR flood lights
Grouping Pixels: Points, Rectangles
Posted by andrew schneider at 09:16 PM
February 04, 2006
Make an electronic glass.
Reading: Mirror Neurons
>>>me in your life
For the “electronic-glass” project, I am interested in “augmenting reality” by placing something within the “reflected frame” of an individual. Things are not what they seem. The technical aspect seems straightforward enough. I plan to install the cam-screen setup in an indoor hallway to control my background as well as to limit my depth noise. Background deletion and edge detection could then be used as a way to paint “pre-recorded” pixels onto a buffered image and then display it to the screen. This is where I may have trouble wrapping my head around this thing.
For the sake of discussion, let’s say that what I am inserting into the scene is a prerecorded loop of myself (in profile, with a cupped hand, whispering to the moving objects (people) in the frame). This means I have two sources:
1. the live video from an external camera with the same background as source #2
2. the pre-recorded loop of myself against the same background as source #1
It is my instinct to segment out just the “me” pixels from the pre-recorded loop of myself and paint those specific pixels at a certain location on the live “buffered image” based on movement within the frame (i.e. a person).
As a proof of concept, perhaps it would make sense to first “paint” a still image onto the live buffered image. Let’s start there.
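A minimal sketch of that proof of concept, assuming the still of “me,” a reference shot of the empty hallway, and the live frame are all available as BufferedImages (backgroundRef and the threshold are names and values I am making up for illustration):

import java.awt.image.BufferedImage;

// Sketch: paint only the "me" pixels of a prerecorded still onto a live frame.
// backgroundRef is a shot of the same hallway with no one in it (placeholder name).
public class GlassComposite {
    public static void paintMe(BufferedImage live, BufferedImage still,
                               BufferedImage backgroundRef, int threshold) {
        int w = Math.min(live.getWidth(), still.getWidth());
        int h = Math.min(live.getHeight(), still.getHeight());
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int s = still.getRGB(x, y);
                int b = backgroundRef.getRGB(x, y);
                // a still pixel that differs enough from the empty background is a "me" pixel
                if (colorDistance(s, b) > threshold) {
                    live.setRGB(x, y, s);
                }
            }
        }
    }

    // sum of absolute channel differences between two packed RGB pixels
    static int colorDistance(int c1, int c2) {
        int dr = Math.abs(((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF));
        int dg = Math.abs(((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF));
        int db = Math.abs((c1 & 0xFF) - (c2 & 0xFF));
        return dr + dg + db;
    }
}

Moving the painted region around based on tracked movement in the frame would come later; this only handles the fixed-location case.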
Slight trouble with the code, among other things, slowed the completion of this project for a while. I still have a very strong desire to complete the thing.
In the meantime, I've toyed around with another 'webcam project' I had in mind: capturing every jump cut on a single television station across a 24-hour period. Here is an excerpt from Fox5 showing just over 600 jump cuts in just over 30 minutes.
This is simple differencing from one frame to the next: if the percentage of change between two frames is great enough (as in a standard, jarring jump cut), snap a picture.
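The differencing itself is just a per-pixel comparison; here is a sketch, assuming frames come in as int[] arrays of packed RGB pixels, with thresholds that are placeholders rather than the values I actually used:

// Sketch of the simple frame differencing described above.
public class FrameDiff {
    // percentage (0..100) of pixels that changed noticeably between two same-sized frames
    public static double percentChanged(int[] prev, int[] curr, int perPixelThreshold) {
        int changed = 0;
        for (int i = 0; i < prev.length; i++) {
            int dr = Math.abs(((prev[i] >> 16) & 0xFF) - ((curr[i] >> 16) & 0xFF));
            int dg = Math.abs(((prev[i] >> 8) & 0xFF) - ((curr[i] >> 8) & 0xFF));
            int db = Math.abs((prev[i] & 0xFF) - (curr[i] & 0xFF));
            if (dr + dg + db > perPixelThreshold) changed++;
        }
        return 100.0 * changed / prev.length;
    }
    // e.g. for a jarring jump cut: if (percentChanged(prev, curr, 60) > 40) { /* snap a picture */ }
}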
Posted by andrew schneider at 11:30 AM
February 01, 2006
Extend the WebCam class. Make a camera for taking still photos of a space. What spaces are interesting to capture: buildings, doorways, skies, rooms, highways, power plants, your neighbors' apartments, mountains of Afghanistan, every possible perspective in the world? How is it triggered: timelapse, sound, movement, a physcomp rig, or the mouse clicks of unemployed people? How are they displayed: a sequence, a blending, a collage of sub-images and master images, a panorama, or a cubist assembly of many people's perspectives of the same thing? Where are they published: back in the space, on the web, on a phone, or on the wall?
It has been my experience that most instances of interactive video and "webcam" projects deal primarily within the realm of the event: capturing an event / an event triggering an event / an event initiating the action. In today's media-saturated and hyper-everythinged world, blah blah blah, it is a rare public event when things become still.
For my first webcam project I have decided to explore the notion of the non-event: stagnation.
Extending the WebCam class, I came up with the StagnantCam. StagnantCam uses methods implemented in MotionDetectorCam and WebCam. The basic logic: measure the percentage of change between the previous frame and the current frame. If that percentage of change is below a certain threshold, start a timer and begin the process again, grabbing a new frame of video so that the old "current" frame becomes the previous frame. If the percentage of change stays below the threshold, keep the timer running. If the timer has run long enough, take a picture and start over. If it has not run long enough, keep it running and start over. If the percentage of change rises above the threshold, reset the timer and start over. This pseudo-code assures "stillness" before a picture is taken, as opposed to earlier incarnations of the code which only required stillness across two frames of video (about 1/15th of a second).
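Restated as a minimal sketch (not the actual StagnantCam/WebCam code; percentChanged is assumed to come from a differencing step like the one in MotionDetectorCam, and the threshold and duration values are placeholders):

// Sketch of the stillness logic described above, not the actual StagnantCam class.
public class StillnessTimer {
    static final double CHANGE_THRESHOLD = 2.0;   // percent change still counted as "still" (placeholder)
    static final long REQUIRED_STILL_MS = 10000L; // how long stillness must last (placeholder)

    long stillSince = -1; // -1 means the timer is not running

    // call once per grabbed frame; returns true when a picture should be taken
    public boolean update(double percentChanged, long nowMillis) {
        if (percentChanged < CHANGE_THRESHOLD) {
            if (stillSince < 0) stillSince = nowMillis;          // start the timer
            if (nowMillis - stillSince >= REQUIRED_STILL_MS) {
                stillSince = -1;                                 // start over after the picture
                return true;                                     // take a picture
            }
        } else {
            stillSince = -1;                                     // movement: reset the timer
        }
        return false;
    }
}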
In this way, a picture will only be taken during moments of extreme stillness, stagnation. With moderate fine tuning, the code can be adapted to any environment.
Running the code on the floor of ITP, I've noticed that the camera doesn't necessarily trip only when there are no individuals in the frame. The camera trips most often when the things in the frame are at least semi-permanent fixtures in that environment. Whereas an individual passing down a hallway at a distance of 100 feet will probably not be captured, an individual who passes down the hallway, stops to get a drink at the drinking fountain, and moves on probably will be captured.
Other ideas for multiple iterations of this project include an interface which fades slowly between the display of the second to last captured frame and the display of the last captured frame.
A link to the StagnantCam code can also be found here.
Posted by andrew schneider at 12:47 PM
January 26, 2006
1: Hello Class
* Hello Java: main
* Hello Eclipse: new project, new class, run
* Hello CVS: share > new repository, update, commit
* Extra Credit: HelloProcessing, HelloWindow
* Find Example
* Reading: Golan, Head First Java p 1-150
The transition into the Eclipse environment was a smooth one perhaps only because I never picked up any steam as an even semi-proficient programmer in the Processing environment. Great for me...clean slate.
I won't bore you here with the details of a "Hello World" in a new environment. I'll save the "boring you" for the exciting stuff.
In regard to the Golan reading, and starting to think about the webcam assignment, I find Suicide Box by the Bureau of Inverse Technology (Natalie Jeremijenko and Kate Rich) to be very intriguing for various reasons. The actual content of the piece is quite interesting, as are the technical aspects I imagine were used to implement it. I think I may use similar methods for a mouth-tracking project, which could turn into next week's webcam project.
Posted by andrew schneider at 11:50 AM