I’ve been mulling over how to generate 3D sculptures from Kinect depth data. I want to start from Dan Shiffman’s Point Cloud example and figure out how to record depth data for a specific gesture over time. Then I want to collapse the data into one instance, and see what abstractions appear once it’s 3D printed.
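As a first pass at the “collapse over time” idea, here is a minimal sketch of one way it could work: keep the closest depth reading seen at each pixel across all frames, so a gesture smears into a single surface. This assumes Dan Shiffman’s openkinect library (kinect.getRawDepth() returning 11-bit values, with 0 meaning no reading); method names vary between library versions, and minDepth is my own variable.

```java
import org.openkinect.processing.*;

Kinect kinect;
int[] minDepth;  // closest depth reading seen at each pixel so far

void setup() {
  size(640, 480);
  kinect = new Kinect(this);
  kinect.start();
  kinect.enableDepth(true);
  minDepth = new int[640 * 480];
  // start "infinitely" far away; 11-bit depth maxes out at 2047
  for (int i = 0; i < minDepth.length; i++) minDepth[i] = 2048;
}

void draw() {
  int[] depth = kinect.getRawDepth();
  for (int i = 0; i < depth.length; i++) {
    // 0 means the Kinect got no reading for that pixel, so skip it
    if (depth[i] > 0 && depth[i] < minDepth[i]) minDepth[i] = depth[i];
  }
  // preview the accumulated surface as grayscale (closer = brighter)
  loadPixels();
  for (int i = 0; i < minDepth.length; i++) {
    pixels[i] = color(map(minDepth[i], 0, 2048, 255, 0));
  }
  updatePixels();
}
```

From there, the minDepth array would be the single collapsed “instance” to mesh and export for printing.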
Here are some links and video of what people are doing with the Kinect and 3D printing at this point.
This happened at the Tangible, Embedded, and Embodied Interaction conference. How do I get to go to this next year? via Joystiq.
Here the Fabricate Yourself team describes their process in more detail.
They seem to be outputting STL files. I will have to research those a bit more, as well as what 3D printer I’m going to use. I hear they’ve been doing a lot of it at MakerBot so I may try and check that out.
This post by FaceCube seems to be reacting to the fact that Fabricate Yourself didn’t publish their methods. Here is a tutorial with links to a library he used (made?) to render out the 3D. I’m just not sure which language he’s using.
These tutorials will also probably prove helpful. It seems like Jose Sanchez at genware.org has already figured out how to export the data from a point cloud. Here is also a page of links on algorithms for programming natural behaviors.
The camera I carry everywhere with me is 8 megapixels. In 5 years it will probably be 20 megapixels with a retractable lens that yields the same quality as a Canon G11. This camera will also make calls, check email, pay my bills, make sure I didn’t forget to turn my bathroom light off, and remind me to send my mom an (electronic) birthday “card.” The calls I make on this camera (phone!) will almost always be “face to face,” since all smart phones will have a camera on either side.
I’m okay with all of this. I enjoy the ability to take pictures and video of my surroundings, and to send these images to people I know who might enjoy them. I like documenting my life, and the advancements in technology have made it much easier to do so. I mean, I carry my cell phone with me pretty much everywhere, and only remember to bring my digital camera with me every so often. Cameras attached to a cellular network also make it possible to immediately share images with other people, with a “social network.” It has taken me some time to get used to this as well, but now it seems normal to upload a picture of what I’m eating, a rooftop view, or a guy walking around with a parrot on his shoulder.
Color, a new app that just came on the market and has been getting some buzz on the internets, seems to be a direction that networked cameras might be heading. The basic premise (as I understand it) is that the app automatically uploads the pictures you take to an aggregate that displays all pictures taken within a certain radius of your location. People with the app may view each other’s images as long as their physical locations are within a certain distance. Instead of sharing images with your social network (i.e. friends who may or may not live near you), you are sharing images with strangers who happen to be in the same place at the same time as you. In addition, the app’s algorithm analyzes the images and gives hierarchy to the most “interesting” ones.
For me, there are a few disconcerting implications of an app like this. What sort of real-life behavior would it enable? Plain voyeurism, or perhaps stalking? And what sort of feelings do you get viewing other people’s photos in real time? Envy that they seem to be having more fun, perhaps at a cooler bar or party with more “friends”? Depression, lower self-esteem? I guess part of me understands wanting to share bits of your life with people you already know, but I don’t quite understand the urge to share them with strangers. It seems to feed the exhibitionist and voyeuristic tendencies that have made reality television so popular. I’m curious whether these images actually engender real interaction between people, or whether the exchange just stays on a virtual, voyeuristic plane.
I came across another project recently where the omnipresence of cameras was used for a very ambitious goal – to map the development of language in an MIT scientist’s young son. His entire house was wired with cameras, and the subsequent footage was algorithmically analyzed to see how certain sounds developed into words, and how the locations where certain words are spoken affect language acquisition. The scope of the project is pretty impressive, and shows how recording and analyzing behavior can allow us to better understand human development. Maybe it’s the pursuit of science, or the fact that this family now has an amazing record of this child’s first words, that makes me comfortable with the parameters of the project.
When Josh Harris spoke at ITP last Friday (an event that deserves a whole post of its own), one thing he said that seemed really prescient was that the battle for privacy is no longer at play. It has been decided, in favor of diminished privacy in exchange for ease of use. He predicted the new battle would be for individuality. That is what I fear might be lost as what people consume becomes more and more tailored to their categorized marketing profile, and as memes make the collective consciousness instantly aware of the same thing at the same time. The Color app seems to point to this as well: my fear is that instead of delighting in the diversity of what people choose to photograph (and by extension what information they put out about themselves), people will see what others have done and do something similar, if not the same.
Here are a few links that I’ve found to get me started on the Solar BEAM Robot project for Sustainable Energy. I am thinking of making something that is nocturnal, meaning it stores up energy from the sun during the day and releases it at night, usually as light but possibly as sound. The BEAM robot term for this is a “Pummer,” and a common circuit can be found here. I also have an idea to make some sort of winged creature which flaps its “wings” every so often. I’m thinking I’ll need a servo for each wing (since they’ll be moving in opposite directions), though I’d also like to experiment with electromagnets.
I love Björn Schülke’s work, and am curious how he makes his circuits. It’s also amazing what a little white paint will do to turn a “robot” into “art.”
The concept for our Kinetic Energy project evolved from an interest in generating sound by hand. What we ended up creating was an AM receiver that picks up all the frequencies in its bandwidth at once. It doesn’t yet have tuning capabilities; instead, it is sensitive to every signal in its immediate vicinity.
In order to make the receiver we worked from this circuit to create the portable “radio.” Rather than using a Walkman PCB, though, we substituted an LM380N to amplify the signal.
The most complicated part of the process was generating enough energy to power our circuit by hand. We used a geared stepper motor from a printer as our generator. The stepper produced a decent amount of AC power, and we put its coils in series to increase the supply, then ran the output through a bridge rectifier to convert it to DC. A 2200 µF, 50 V capacitor, large resistors, and signal diodes helped store, smooth, and regulate our output voltage.
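As a rough sanity check on the smoothing cap (my own back-of-envelope figures, not measurements from our rig): with full-wave rectification the peak-to-peak ripple is approximately ΔV ≈ I / (2fC). If the crank produced, say, 60 Hz AC and the speaker drew 50 mA, the 2200 µF cap would sag about 0.05 / (2 × 60 × 0.0022) ≈ 0.19 V between peaks, so the larger the capacitance, the steadier the output.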
The gearbox was helpful for increasing the revolutions on the stepper, and we attempted to attach a pull start to make the revolutions more continuous. The mechanics of attaching the pull start to the gearbox proved difficult, since there aren’t any exposed gears we could easily attach to. We tried to create our own gears, but they weren’t precise enough to line up with the teeth on the gearbox.
We then attached a hand crank, which was easier to turn by hand but didn’t give as continuous a motion as the pull start. We still had trouble making a secure connection between the hand crank and the gear we attached it to, though. This made it difficult to generate as much power as we had hoped for, despite trying a variety of capacitors in an attempt to store energy before delivering it to the speaker.
Our ultimate goal is to transmit our own signal and tune the receiver. We’d like to do this on an AM frequency, and also explore the possibilities of transmitting a signal to a TV. Pirate radio/TV!
I decided to try implementing the OpenCV – Kinect blob detection sketch with steering forces governing the particle / edge attraction, instead of the attraction / repulsion gravity force method that wasn’t quite giving me what I wanted in my previous iteration. At this point I’ve succeeded in populating the edges of the OpenCV blobs with target PVectors that the particles are attracted to. So far it’s really just a mashup of Dan Shiffman’s steering example and the Kinect and OpenCV blob edge detection code I was using before.
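The core of the steering side is the “arrive” behavior from Shiffman’s Nature of Code examples. A stripped-down version of what I mean (class and variable names are my own, and the 100 px slowing radius is arbitrary):

```java
class Vehicle {
  PVector location, velocity, acceleration;
  float maxspeed = 4;
  float maxforce = 0.1;

  Vehicle(float x, float y) {
    location = new PVector(x, y);
    velocity = new PVector(0, 0);
    acceleration = new PVector(0, 0);
  }

  // steer toward a target PVector on the blob edge, decelerating on approach
  void arrive(PVector target) {
    PVector desired = PVector.sub(target, location);
    float d = desired.mag();
    desired.normalize();
    if (d < 100) {
      desired.mult(map(d, 0, 100, 0, maxspeed));  // ease in inside 100 px
    } else {
      desired.mult(maxspeed);
    }
    PVector steer = PVector.sub(desired, velocity);
    steer.limit(maxforce);
    acceleration.add(steer);
  }

  void update() {
    velocity.add(acceleration);
    velocity.limit(maxspeed);
    location.add(velocity);
    acceleration.mult(0);  // clear forces each frame
  }
}
```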
Here are some screen videos of how it’s behaving right now:
And here it is with the OpenCV image visible to give an idea of how it’s following the body:
The next step I’d like to take is to bring the depth information into the particles as a z-vector that would affect their speed traveling to the edges of the body. I haven’t quite been able to make this work yet. I’d also like to add a repulsion force between the particles so they’re not so closely spaced. Although the effect I have going now isn’t what I was intending, it is sort of interesting, so I might keep going in this direction.
Here is the progress of my midterm project I worked on for Computational Cameras and Nature of Code. I wanted to create a system where a viewer would see themselves abstractly made up of particle or molecule type objects. This was inspired by a lifesize graphite drawing I saw a while back by Nathaniel Price.
This is the only image I could find of the piece
In order to make particle objects populate around a person’s body, I decided to use the Kinect camera to set a threshold. Using that threshold, I made a PImage and drew black pixels for anything in front of the threshold and white pixels for anything behind it. OpenCV then did blob detection on the image and returned an array of blobs. Next I used the OpenCV “points” function to step through the array of pixels that made up the perimeter of each blob. I divided that count by the number of attractor objects I wanted the system to have, which gave me an interval for dispersing them regularly around the blob. Once I drew the attractor objects, the particle objects had an embedded force calculation to determine which attractor they were closest to, and they became attracted to that attractor.
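In sketch form, the dispersal and nearest-attractor steps look roughly like this, assuming the hypermedia OpenCV library for Processing (where Blob.points holds the perimeter as an array of java.awt.Points); numAttractors, the Particle class, and attractTo() are my own placeholders:

```java
// scatter attractors evenly along each blob's perimeter
Blob[] blobs = opencv.blobs(100, width * height / 2, 20, false);
ArrayList<PVector> attractors = new ArrayList<PVector>();
int numAttractors = 50;
for (int n = 0; n < blobs.length; n++) {
  int interval = max(1, blobs[n].points.length / numAttractors);
  for (int i = 0; i < blobs[n].points.length; i += interval) {
    attractors.add(new PVector(blobs[n].points[i].x, blobs[n].points[i].y));
  }
}

// each particle finds and follows its closest attractor
for (Particle p : particles) {
  PVector closest = null;
  float record = Float.MAX_VALUE;
  for (PVector a : attractors) {
    float d = PVector.dist(p.location, a);
    if (d < record) {
      record = d;
      closest = a;
    }
  }
  if (closest != null) p.attractTo(closest);
}
```

That nested loop is also the main performance cost: every particle checks every attractor, on every frame.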
Here are a few screen videos of the program running. The first shows how it’s running as of now (a bit slower due to the video capture). Even without capturing video it’s still fairly slow, because OpenCV can be slow and because I’m constantly updating the attractor locations, which in turn means every particle has to recalculate which attractor to be attracted to. There is a better way to do this, but my program kept breaking when I tried to change the for loop, so that’s something I can revisit if I decide to keep moving forward with this direction.
The next video shows the same program, but draws the attractor objects as gray dots to give an idea of how they’re finding the edges of the blobs; here they update every 15 frames.
The last video also shows the Kinect / OpenCV Image that is having the blob detection run on it:
I’ve been having some problems figuring out how to make the attractors populate the center of the body, not just the perimeter. It’s a matter of finding the center point and then radiating outward, I believe, but my code keeps breaking when I try to implement that. When I spoke with Dan Shiffman, he encouraged me to leave the attraction / repulsion gravity force model I’d been using and see if I could give the particles a steering force instead, so that they’d “arrive” at target points dispersed around the perimeter of the body.
For the computer vision portion of my midterm project for Comp Cameras and Nature of Code, I decided to use the Kinect camera to separate people from a background, and then run OpenCV Blob Detection to find their contours. Eventually, I am going to populate the outlines of peoples’ bodies with Attractor objects, so that Particle objects start to delineate their bodies. For more about the concept and goals of the piece, please read more here.
In order to see if I could initiate attractor objects around the edge of a person, my first step was to start with a static image. I used the Blob Detection library in Processing to trace the edge around a silhouette of a woman. Next I asked for the array of edge vertices that made up the outline, divided the number of edge vertices by the number of attractor objects I wanted to draw to get the spacing interval, and stepped through the edge vertices by that amount each time.
Drawing the edge in green, the bounding box in red, and the attractor objects in gray
It was definitely tricky figuring out the correct way to write the ‘for loop’ for stepping through all the vertices and drawing an ellipse at specific intervals. For a long time I could only see one ellipse, since I was always drawing it in the same place. After a bit of guessing and checking, and then trying to think logically, I finally got what I wanted.
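For reference, the loop ended up looking roughly like this (using the v3ga BlobDetection library, whose edge vertices come back normalized 0–1, hence the scaling by width and height; numAttractors is arbitrary):

```java
int numAttractors = 50;
for (int n = 0; n < theBlobDetection.getBlobNb(); n++) {
  Blob b = theBlobDetection.getBlob(n);
  if (b == null) continue;
  // step through the edge vertices at a fixed interval
  int interval = max(1, b.getEdgeNb() / numAttractors);
  for (int i = 0; i < b.getEdgeNb(); i += interval) {
    EdgeVertex v = b.getEdgeVertexA(i);
    if (v != null) {
      // one attractor per step, scaled from normalized coords to pixels
      ellipse(v.x * width, v.y * height, 6, 6);
    }
  }
}
```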
The next step was to put some particle objects into the system as well and see how they were attracted to the outline of the figure, to see if I was going in the right direction for what I want the behavior to do.
Particles and Attractor location based on Edge Detection
Not drawing the image, but still doing the edge detection, gives a little more of an idea of how it looks with only the particles visible. The attractors are drawn lightly in gray here, but will eventually be turned off.
Without drawing the silhouette image or edge detection
After I got my code working on a static image, I decided to try a moving one. I also wanted to use the Kinect as my camera input, since the depth map makes it so simple to separate a person from the background, and later I want to see if I can use the z-axis data to affect the particles in some way as well. I decided to switch blob detection libraries and move on to OpenCV, since it seems to be more powerful and has so many other capabilities in addition to blob tracking. I owe a big thanks to Craig Kapp and Saul Kessler for their help getting me up and running with the Kinect talking to OpenCV.
One issue I spent a while trying to figure out was why the blob detection in OpenCV didn’t seem to be working. Starting with Dan Shiffman’s RGB Depth Test, I was taking depth map information, and based on a threshold, telling the computer to display that information as a PImage, with the pixels in back of the threshold white, and the ones in front gray. Next I wanted OpenCV to do blob detection on the PImage, and then populate the edges with attractor objects. For the longest time I was only able to draw attractors in the four corners of the screen, as you can see below.
It turns out OpenCV can only “see” in black and white, namely it can only do blob detection on images whose pixels are entirely 0 or 255. My problem was that I was drawing the information in front of the threshold as gray, which to OpenCV wasn’t appearing at all. By default, OpenCV decided I wanted to do blob detection around the entire PImage itself, since all it could find was a large rectangle of white pixels. Once I changed that, I was able to draw ellipses around the edge of the moving figure.
Not pretty but much more functional!
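The fix, in sketch form: hand OpenCV nothing but pure black and white. Something along these lines, where the threshold value is arbitrary and getRawDepth() assumes Dan Shiffman’s openkinect library:

```java
PImage depthImg = createImage(640, 480, RGB);
int threshold = 700;  // raw 11-bit depth value; tune to the space
int[] depth = kinect.getRawDepth();
depthImg.loadPixels();
for (int i = 0; i < depth.length; i++) {
  // in front of the threshold: pure white; behind it (or no reading): pure black
  if (depth[i] > 0 && depth[i] < threshold) {
    depthImg.pixels[i] = color(255);
  } else {
    depthImg.pixels[i] = color(0);
  }
}
depthImg.updatePixels();
opencv.copy(depthImg);  // now blob detection sees a clean binary image
```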
Since I have most of the computer vision side worked out at this point (or so I believe) I’m going to switch gears for a bit and focus on getting my particle objects to look and feel the way I want them to. More on my progress soon.
A while back I had the chance to visit the diRosa Preserve, a museum of sorts in Napa, CA showcasing an amazing collection of contemporary art collected by Rene and Veronica di Rosa. I strongly encourage anyone visiting wine country to check it out. It truly is a wonderful place. But I digress.
I saw a lot of wonderful work that day, including a beautiful installation of Chartres Bleu by Paul Kos (it’s worth visiting just to see this piece), but a piece that keeps recurring in my mind is by an artist named Nathaniel Price, called “Another Matter XIII,” made in 2002. In the lifesize drawing, circular marks abstractly delineate the figure of a person hung upside down. There are surprisingly few resources about Price’s work on the internet, but here is a link to a Kenneth Baker review of the show where I believe the diRosas bought the drawing. I remember the piece as graphite on paper, but it seems that he used hot circular objects to burn into the paper.
This is the only image I could find of the piece
For my midterm, I propose to make an installation where a person’s body is abstractly formed from particle objects. When no one is in front of the installation, the particles will move around the screen. When someone approaches, attractor objects will initiate around their outline, so that the particles slowly start to move towards their figure (neither the attractor objects nor the person’s video image will be drawn, so the only way they will see themselves “represented” is through the particles moving towards them). In terms of quality, I want the installation to respond to stillness: the particles will move fairly slowly, and only settle when the person is standing still.
In terms of what the particles look like, I am going to start with drawing them as outlined circles, but if I have time I would like to experiment with importing textures so that the particles appear more “hand drawn.”
I decided to see how I could combine some of the sketches I’ve been doing for Nature of Code with the tracking capabilities of the Kinect. At this point it is still pretty rough, but basically I’m attaching a Particle System to the Average Point Tracking sketch in place of the ellipse. I left the IR image of my hand and body in front of the threshold visible to show what is happening: the Particle System follows the average point of whatever region is in front of the threshold.
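Condensed, the averaging step works something like this (after Shiffman’s Average Point Tracking example; the threshold value is arbitrary, and ps stands in for a Nature of Code-style particle system):

```java
int threshold = 700;
int[] depth = kinect.getRawDepth();
float sumX = 0;
float sumY = 0;
int count = 0;
for (int x = 0; x < 640; x++) {
  for (int y = 0; y < 480; y++) {
    int d = depth[x + y * 640];
    // only count pixels in front of the threshold
    if (d > 0 && d < threshold) {
      sumX += x;
      sumY += y;
      count++;
    }
  }
}
if (count > 0) {
  // the average point of everything in front of the threshold
  PVector avg = new PVector(sumX / count, sumY / count);
  ps.addParticle(avg.x, avg.y);  // emit particles at the tracked point
}
```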
My attempts to combine the particle system with sketches that track one spot haven’t been successful yet, but I will keep working on it this week since I would like to continue developing this project for the midterm.