February 26, 2006
Tonight marks the first post to something called the "k.log". It is a project I have wanted to implement for some time now, and it will reinforce a daily writing routine as well.
what is it?
a notebook, scanned, and posted on the internet.
Posted by andrew schneider at 12:53 AM
February 25, 2006
Tracking Blobs|MIDTERM WORKSHOP
Assignment: Make a controller.
Make a midterm.
way down in the code: I've been working with various instances of this code that I first pulled from Chris Adamson's QuickTime for Java: A Developer's Notebook. In starting to modify the code, I quickly realized what a blessing Dano's vxp has been. I am now in the land of a different sort of QuickTime, and I am at a point where I need to start from scratch with the Adamson book. Something is not working in getting the code to check sound levels to control the start and stop of capture. It seems like a relatively simple thing; I've been working on it for three weeks now.
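The gating I'm after is simple to state even if the QuickTime plumbing isn't. Here's a language-agnostic sketch of the logic in Python; the threshold names and values are my own placeholders, not anything from the Adamson book or the actual Java code:

```python
# Hypothetical sketch of sound-gated capture: start recording when the
# level rises past one threshold, stop when it falls below another
# (a simple hysteresis gate). Thresholds here are made-up values.

START_THRESHOLD = 0.2   # begin capturing above this level
STOP_THRESHOLD = 0.05   # stop capturing below this level

def step(capturing, level):
    """Given the current capture state and a sound level in [0, 1],
    return the new capture state."""
    if not capturing and level > START_THRESHOLD:
        return True     # sound got loud enough: start capture
    if capturing and level < STOP_THRESHOLD:
        return False    # sound died down: stop capture
    return capturing    # otherwise keep doing what we were doing
```

The two-threshold trick keeps the capture from chattering on and off when the level hovers near a single cutoff.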
Posted by andrew schneider at 03:39 PM
margin of error
Why did you do it?
Why did you do it, New York Times? I was reading your daily news, the stuff that's fit to print anyway, and you turned on a little TV. Right there. Right next to the article "Taking Spying to Higher Level, Agencies Look for More Ways to Mine Data." Panasonic's new 'book of toughness' has come out with a TV spot on the internet. Full video quality. Dante's Inferno in Latin, that's a Toughbook. Computers are not books, Panasonic. Stop jump-cutting in the margins of my paper. Well... there's my problem. A screen is not a newspaper.
Posted by andrew schneider at 10:09 AM
February 20, 2006
Last week ended up as a critique week for the generative methods projects. We were to discuss numbers, beautiful math, and geometric construction: Fibonacci series in nature, the Golden Section. This week, however, has been dedicated to completing working proofs of concept.
Posted by andrew schneider at 03:44 PM
February 18, 2006
Lighting, Manual Cameras, IR, Polarization, Retroreflective IR Flood Lights
Grouping Pixels: Points, Rectangles
Posted by andrew schneider at 09:16 PM
February 13, 2006
We begin to look at the methods found in works that are considered to be "Generative". Here we will examine the rules that govern and define these methods.
Sampling, Mapping, and the Manipulation of Attributes.
Apply one or more of the methods
The idea was to map video output brightness and composition to audio coalescence. Sending prerecorded and/or live video input through an upended video monitor and reading the brightness levels via photoresistors, the values would be translated through a PIC 18F452 and finally sent to a MIDI synth and output speakers. This may turn into a midterm project, so I'll leave a more detailed description for a later time.
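The core of the translation step is just a range mapping: an analog reading from a photoresistor scaled into a MIDI note number. A minimal sketch of that mapping, with assumed ranges (the real PIC firmware may scale differently):

```python
# Hypothetical sketch of the brightness-to-MIDI mapping a PIC 18F452
# might perform: scale a 10-bit ADC reading from a photoresistor into
# a MIDI note number. The note range chosen here is an assumption.

ADC_MAX = 1023                  # 10-bit analog-to-digital converter
NOTE_LOW, NOTE_HIGH = 36, 96    # map dark..bright to MIDI C2..C7

def brightness_to_note(adc_value):
    adc_value = max(0, min(ADC_MAX, adc_value))   # clamp to ADC range
    span = NOTE_HIGH - NOTE_LOW
    return NOTE_LOW + (adc_value * span) // ADC_MAX
```

Darker parts of the image would then land on lower notes and brighter parts on higher ones; inverting the mapping is a one-line change.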
Using two store-bought wall clocks, I ripped the mechanism from the face of one clock and inverted it onto the other. The hands were also inverted so as not to be able to distinguish one set of hands from the other. The clock now inexorably marches in both directions.
Posted by andrew schneider at 10:45 AM
February 04, 2006
Make an electronic glass.
Reading : Mirror Neurons
>>>me in your life
For the “electronic-glass” project, I am interested in “augmenting reality” by placing something within the “reflected frame” of an individual. Things are not what they seem. The technical aspect seems straightforward enough. I plan to install the cam-screen setup in an indoor hallway to control my background as well as to limit my depth noise. Background deletion and edge detection could then be used to paint “pre-recorded” pixels onto a buffered image, which is then displayed to the screen. This is where I may have trouble wrapping my head around the thing.
For the sake of discussion, let’s say that what I am inserting into the scene is a prerecorded loop of myself (in profile, with a cupped hand, whispering to the people moving through the frame). This means I have two sources:
1. the live video from an external camera, with the same background as source #2
2. the pre-recorded loop of myself, against the same background as source #1
It is my instinct to segment out just the “me” pixels from the pre-recorded loop of myself and paint those specific pixels at a certain location on the live “buffered image” based on movement within the frame (i.e. a person).
As a proof of concept, perhaps it would make sense to first “paint” a still image onto the live buffered image. Let’s start there.
Slight trouble with the code amongst other things slowed the completion of this project for a while. I still have a very strong desire to complete the thing.
In the meantime, I've toyed around with another 'webcam project' I had in mind: to capture every jump cut on a single television station across a 24-hour period. Here is an excerpt from Fox5 showing just over 600 jump cuts in just over 30 minutes.
This is simple differencing from one frame to the next: if the percentage of change between two frames is great enough (as in a standard, jarring jump cut), snap a picture.
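That differencing can be restated compactly. A minimal sketch in Python, assuming grayscale frames as flat lists of pixel values (the actual project pulled frames from a webcam, and the two thresholds here are illustrative):

```python
# Sketch of the jump-cut detector: count the fraction of pixels that
# changed significantly between consecutive frames, and call it a cut
# when that fraction is large. Threshold values are assumptions.

PIXEL_DELTA = 30      # per-pixel change needed to count as "changed"
CUT_FRACTION = 0.5    # fraction of changed pixels that signals a cut

def is_jump_cut(prev_frame, this_frame):
    changed = sum(
        1 for a, b in zip(prev_frame, this_frame)
        if abs(a - b) > PIXEL_DELTA
    )
    return changed / len(prev_frame) > CUT_FRACTION
```

Gradual motion changes a few pixels per frame; a hard cut changes most of them at once, which is what makes this crude measure work.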
Posted by andrew schneider at 11:30 AM
February 01, 2006
Extend the WebCam class. Make a camera for taking still photos of a space. What spaces are interesting to capture: buildings, doorways, skies, rooms, highways, power plants, your neighbors' apartments, the mountains of Afghanistan, every possible perspective in the world? How is it triggered: time lapse, sound, movement, a physcomp rig, or the mouse clicks of unemployed people? How are they displayed: a sequence, a blending, a collage of sub-images and master images, a panorama, or a cubist assembly of many people's perspectives of the same thing? Where are they published: back in the space, on the web, on a phone, or on the wall?
It has been my experience that most instances of interactive video and "webcam" projects deal primarily within the realm of the event: capturing an event / an event triggering an event / an event initiating the action. In today's media-saturated and hyper-everythinged world, blah blah blah, it is a rare public event when things become still.
For my first webcam project I have decided to explore the notion of the non-event: stagnation.
Extending the WebCam class, I came up with the StagnantCam. StagnantCam uses methods implemented in MotionDetectorCam and WebCam. The basic logic: measure the percentage of change between the previous frame and the current frame. If that percentage is below a certain threshold, start a timer, grab a new frame, and repeat the comparison (the current frame becomes the previous frame). As long as the change stays below the threshold, keep the timer running; if the change rises above the threshold, reset the timer and start over. Once the timer has run long enough, take a picture and start over. This assures "stillness" before the taking of a picture, as opposed to earlier incarnations of the code, which would merely take stillness across two frames of video (about 1/15th of a second).
In this way, a picture will only be taken during moments of extreme stillness, stagnation. With moderate fine-tuning, the code can be adapted to any environment.
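The logic above can be restated as a small sketch. The original extends a Java WebCam class; this is a standalone Python paraphrase, counting frames instead of running a wall-clock timer, with assumed threshold values:

```python
# Sketch of the StagnantCam logic: a picture is taken only after the
# scene has stayed below a change threshold for a sustained stretch
# of consecutive frames. Constants here are illustrative.

CHANGE_THRESHOLD = 0.02   # max fraction of changed pixels to count as "still"
STILL_FRAMES_NEEDED = 10  # consecutive still frames before snapping

class StagnantCam:
    def __init__(self):
        self.still_count = 0   # the "timer", in frames
        self.pictures = 0      # how many pictures have been taken

    def new_frame(self, change_fraction):
        if change_fraction < CHANGE_THRESHOLD:
            self.still_count += 1              # still: keep the timer running
            if self.still_count >= STILL_FRAMES_NEEDED:
                self.pictures += 1             # long enough: take a picture
                self.still_count = 0           # ...and start over
        else:
            self.still_count = 0               # motion: reset the timer
```

Any motion at all resets the count, which is exactly why the person who stops at the drinking fountain gets captured while the person walking straight through does not.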
Running the code on the floor of ITP, I've noticed that the camera doesn't necessarily trip only when there are no individuals in the frame. The camera trips most often when the things in the frame are at least semi-permanent fixtures in that environment. Whereas an individual passing down a hallway at a distance of 100 feet will probably not be captured, an individual who passes down the hallway, stops to get a drink at the drinking fountain, and moves on probably will be.
Other ideas for further iterations of this project include an interface that fades slowly between the second-to-last captured frame and the last captured frame.
A link to the StagnantCam code can also be found here.
Posted by andrew schneider at 12:47 PM
Finite State Machines:
A finite state machine (FSM) consists of a set of states, input events, output events, and a state transition function.
Build a finite state machine.
Unfortunately, during the final stages of this mock-up, the VCR that I've spent a good portion of my first year hacking into temporarily died. I say temporarily as a case of wishful thinking.
Synopsis: the case contains a hacked-into VCR that, when closed, fast-forwards through a physically spliced 30-second loop of video tape containing images of the construction of the World Trade Center. When the case is opened the tape pauses while the rotating play head keeps spinning, eventually leaving subtle wear in the places the loop is paused.
The lid of the case is physically attached by a thin wire to a box housing a monitor on which the images from the tape-loop play. When the case is opened and the loop is paused, the front of the housing cracks open enough for a viewer to peer in. This state-dependence is reversed as the case is closed - the tape-loop begins to play as the housing slowly closes, obscuring the image from the viewer.
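Seen as an FSM, the piece has just two states driven by two input events. A hedged sketch; the state, event, and output names below are my paraphrase of the description, not code from the actual installation:

```python
# The piece as a two-state FSM: the case is either closed (tape
# playing, housing shut) or open (tape paused, housing cracked open).
# All names here are illustrative labels, not the real control code.

TRANSITIONS = {
    # (state, input event) -> (next state, output actions)
    ("closed_playing", "case_opened"): ("open_paused", ["pause tape", "crack housing open"]),
    ("open_paused", "case_closed"):    ("closed_playing", ["play tape", "close housing"]),
}

def transition(state, event):
    # Unlisted (state, event) pairs leave the machine where it is,
    # producing no output (e.g. opening an already-open case).
    return TRANSITIONS.get((state, event), (state, []))
```

Writing the table out makes the reversal in the description explicit: each input event simply swaps the machine between its two states and undoes the other's outputs.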
I finally settled on the content of the World Trade Center construction on the physical tape-loop as a way to properly tie in to the tangibility of memory and the effect of media on memory, as well as to give the piece an inherent feeling of inevitability.
I hope to revive the VCR and continue with the piece.
Posted by andrew schneider at 12:44 PM