I.C.U. is an eye-tracking game that enables the user to explode objects on a screen simply by looking at them.
Here's how it works: User dons rad, red eye-tracking glasses, stands directly in front of a projection screen and then looks at 12 calibration points that are projected on the screen, one by one, while a brief calibration is performed. Then the game begins. An object, perhaps a banana, appears on the screen, and the user must stare at that object until it explodes into yellow particles that fly all over the screen. Then another object, perhaps a strawberry, appears, and the user shifts his/her gaze to the strawberry, which blows up in a cloud of red strawberry particles. A satisfying "splat!" sound accompanies each explosion.
I.C.U. was created entirely in Processing. I loved the idea of blowing up objects by looking at them - but being a chill, peaceful type of person, I wanted to make the explosions as comical and kid-friendly as possible. So instead of the epic explosions I'd originally planned, I decided to work again with Box2D, an open-source physics engine written with game developers in mind (used here via its Processing wrapper), to create explosions of colorful particles complete with weight, velocity, gravity simulation and collision detection.
When the user's gaze rests on an object, the object jiggles, then quickly disappears, replaced by an explosion of particles. The initial explosive force sends a burst of particles flying in all directions; they then decelerate and fall downwards. I added boundaries at the sides of the screen for the particles to bounce off, so they scatter like confetti before settling on the ground.
I chose Box2D so the particles could pile up, letting the user build a big fruit salad that fills the entire screen. I've discovered, however, that Box2D probably wasn't the most efficient choice for this particular project. The video tracking involved in the eye-tracking component takes up a huge amount of processing power, and when combined with Box2D the frame rate drops significantly -- from 60 to 15 fps, in some cases -- once a few hundred particles are on the screen. Rewriting the code to use a single lightweight particle system for the explosions will likely be a significant improvement.
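A single particle system along those lines can be sketched in plain Java (Processing is Java underneath): no physics engine, just position, velocity, gravity and a bounce off the side walls. This is a standalone illustration, not the actual I.C.U. code - all names and constants here are made up.

```java
import java.util.ArrayList;
import java.util.List;

// A sketch of the "single particle system" idea: a burst of particles flies
// outward, decelerates under gravity, bounces off the side walls like
// confetti, and comes to rest on the floor.
public class ParticleBurst {
    static final double GRAVITY = 0.3;   // per-frame downward acceleration
    static final double DAMPING = 0.8;   // energy lost bouncing off a wall
    static final double WIDTH = 640, HEIGHT = 480;

    static class Particle {
        double x, y, vx, vy;
        Particle(double x, double y, double vx, double vy) {
            this.x = x; this.y = y; this.vx = vx; this.vy = vy;
        }
        void update() {
            vy += GRAVITY;
            x += vx;
            y += vy;
            if (x < 0)      { x = 0;      vx = -vx * DAMPING; }  // left wall
            if (x > WIDTH)  { x = WIDTH;  vx = -vx * DAMPING; }  // right wall
            if (y > HEIGHT) { y = HEIGHT; vx = 0; vy = 0; }      // rest on the floor
        }
    }

    // spawn n particles flying outward in all directions from (cx, cy)
    static List<Particle> explode(double cx, double cy, int n) {
        List<Particle> ps = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            double angle = 2 * Math.PI * i / n;
            double speed = 5 + (i % 5);  // varied speeds (deterministic for the demo)
            ps.add(new Particle(cx, cy, speed * Math.cos(angle), speed * Math.sin(angle)));
        }
        return ps;
    }

    public static void main(String[] args) {
        List<Particle> ps = explode(320, 100, 200);
        for (int frame = 0; frame < 300; frame++)
            for (Particle p : ps) p.update();
    }
}
```

Because each particle is just four doubles and a few additions per frame - no collision solving between particles - a few hundred of them cost almost nothing compared to a Box2D world. The trade-off is that the particles no longer pile up on each other.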
The eye-tracking component of the project can be challenging and finicky at times, but when it works, it works really well. The eye-tracking Processing code relies on the relative positions of the pupil and the glint to tell where the user's eye is focused on the screen. It's important to get a good image of the eye, especially during the calibration stage - the LED (only one LED, mind you, or there will be multiple glints, which will not do) needs to be positioned so that it illuminates the whole eye, and kept off to the side, out of the camera's view.
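To illustrate why the glint matters (this is a simplified stand-in, not the actual EyeWriter tracking code): the gaze estimate comes from the *offset* between pupil and glint, which tolerates small head movements far better than the raw pupil position would, and calibration fits a mapping from that offset to screen coordinates. Here's a hypothetical two-point linear version of that mapping:

```java
// Simplified gaze mapping sketch: a per-axis linear map, fitted from two
// calibration samples, converts the pupil-minus-glint offset vector into
// screen coordinates. All numbers and names are invented for illustration.
public class GazeMapper {
    final double sx, ox, sy, oy; // per-axis scale and offset

    // Calibrate: eye offset d1 was measured while looking at screen point p1,
    // and offset d2 while looking at p2. Arrays are {x, y}.
    GazeMapper(double[] d1, double[] p1, double[] d2, double[] p2) {
        sx = (p2[0] - p1[0]) / (d2[0] - d1[0]);
        ox = p1[0] - sx * d1[0];
        sy = (p2[1] - p1[1]) / (d2[1] - d1[1]);
        oy = p1[1] - sy * d1[1];
    }

    // Map a pupil/glint detection to a screen coordinate.
    double[] toScreen(double pupilX, double pupilY, double glintX, double glintY) {
        double dx = pupilX - glintX;  // the head-movement-tolerant offset vector
        double dy = pupilY - glintY;
        return new double[] { sx * dx + ox, sy * dy + oy };
    }

    public static void main(String[] args) {
        // offset (-10,-5) seen while looking at screen (0,0); (10,5) at (640,480)
        GazeMapper m = new GazeMapper(new double[]{-10, -5}, new double[]{0, 0},
                                      new double[]{10, 5},  new double[]{640, 480});
        double[] gaze = m.toScreen(105, 62, 100, 60); // pupil vs. glint -> offset (5, 2)
        System.out.println(gaze[0] + ", " + gaze[1]);
    }
}
```

The project's calibration uses 12 points rather than two, which allows a much better fit across the screen - but the principle is the same: it's the pupil-to-glint offset, not the raw pupil position, that gets mapped.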
The rad, red eye-tracking glasses are based on the EyeWriter design, the full directions for which can be found on Instructables. Our glasses consist of a hacked PS3 Eye camera fitted with an infrared filter to block out all but IR light, a pair of sunglasses from the ever-wonderful St. Marks Place, some alligator clips, a battery pack and an infrared LED to illuminate the eye. The glasses cost about $50 to build. They're a little on the large side and tend to slip down my nose (which is a little on the small side), but they're awesome, uber-nerdy and lots of fun to wear.
The Processing sketch and code can be found here. Feel free to bug me for my eye-tracking glasses, or if you have about $50 and an hour or two to spare, try making your own EyeWriter using the Instructables.
There are many cool possibilities for future eye-tracking projects, and I plan to continue working with the technology. Once I figure out how to make the eye-tracking system more robust onscreen, I'd like to move off-screen to create a physical moving object that the user can control with their eyes. It's the next best thing to telekinesis, methinks.
Also, lasers. I want to put lasers on the glasses so the user can have awesome superhero laser vision.
My work with eye-tracking to date led to a few more random observations:
- Eye-tracking is a good way for me to drive other people bananas by controlling what they can see, based on what I'm looking at.
- Eye-tracking is a good way for other people to have a laugh at my expense, depending on what's happening on the part of the screen I'm not looking at.
- The eye is an inefficient cursor for controlling objects in a larger context, because you can only "see" what you're looking at.
Below are some pictures from the project. Click on the thumbnails for a larger image.
I'm working with Scott Wayne Indiana on a project that enables kids to use their eyes to control objects on a screen.
Out of the many computer vision techniques we've looked at in the Hospitable Room class this semester, eye-tracking appealed to us because it can cater to kids with a wide range of physical abilities, from mild to severe impairments. Eye-tracking also seems like a more direct route to the brain than any other camera-tracking technique we've played with, and could allow the user to do things that appear impossible - even magical.
In preliminary testing and research of eye-tracking, we've learned that any interface we develop most likely cannot depend upon the eye as a reliable cursor. Also, when a user is looking at an object, they tend to lose awareness of everything else, to the point where that object is pretty much the only thing they can see.
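One common way to tame a jittery gaze point enough to use it at all (a technique we'd have to adapt, not something from our code) is exponential smoothing: blend each raw reading into a running estimate, trading a little lag for a lot of stability. A minimal sketch:

```java
// Exponential smoothing of a noisy gaze point: each new raw sample pulls the
// estimate a fraction `alpha` of the way toward it. Smaller alpha = smoother
// cursor, but more lag. Names here are hypothetical.
public class GazeSmoother {
    final double alpha;   // 0..1 blending factor
    double x, y;
    boolean started = false;

    GazeSmoother(double alpha) { this.alpha = alpha; }

    // feed one raw gaze sample; returns the smoothed {x, y}
    double[] add(double rawX, double rawY) {
        if (!started) {
            x = rawX; y = rawY; started = true; // seed with the first sample
        } else {
            x += alpha * (rawX - x);
            y += alpha * (rawY - y);
        }
        return new double[] { x, y };
    }
}
```

With alpha around 0.5 the estimate closes half the remaining gap every frame, so it converges on a steady fixation within a handful of frames while averaging away frame-to-frame jitter.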
What compelling activities can result from knowing what someone is looking at - or perhaps what they aren't looking at? Ideas included laser vision, X-ray vision and more, but we kept returning to two, both of which lend themselves to play:
- Telekinesis. Moving objects, seemingly with your mind. But really, your eyes would move it. This could be screen-based at first, and then made physical (think magnets under the table, and other "magic" tricks).
- Blowing Stuff Up. Who doesn't like to blow things up? Staring at an object until it explodes would be fun. This firework show done in Processing looks pretty cool, and this Pixel Explosion example may be a good way to begin playing with exploding existing images.
The destruction of Alderaan is an epic explosion that could be worth trying to replicate in code.
We've received some pretty positive feedback on the idea of blowing stuff up, so we're exploring that idea first. In my mind, here is how it will work: User sees object, then stares at it. As the user stares at the object, the object begins to shake or grow as an indication that something is about to happen, and the shaking/growth gains momentum the longer the user stares at it. At the max time, the object explodes.
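The stare-until-it-explodes logic described above boils down to a dwell timer: it charges while the gaze stays on the object (driving the shake/grow animation), defuses when the gaze leaves, and fires at the threshold. A sketch, with all names hypothetical:

```java
// Dwell-to-detonate fuse: call update() once per frame with whether the gaze
// is on the object. The returned value (0.0 to 1.0) can drive the shake/grow
// effect, and `exploded` flips once the stare lasts long enough.
public class DwellFuse {
    final int framesToExplode;
    int charge = 0;
    boolean exploded = false;

    DwellFuse(int framesToExplode) { this.framesToExplode = framesToExplode; }

    double update(boolean gazeOnObject) {
        if (exploded) return 0;
        if (gazeOnObject) {
            charge++;                       // shaking gains momentum each frame
            if (charge >= framesToExplode) exploded = true;
        } else {
            charge = 0;                     // looking away defuses it
        }
        return (double) charge / framesToExplode;
    }
}
```

At 60 fps, `new DwellFuse(120)` would give a two-second stare before detonation, with the shake intensity ramping linearly over that time.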
At this point, here are some of the most pressing questions:
- Would staring at something until it explodes be compelling to kids?
- What kinds of objects should be blown up? What would kids like to detonate? Should we encourage kids to detonate things?
- What is the best way to make an epic explosion in Processing? Light effects of some sort might be cool.
- Blow up an area of a large image, or blow up a separate smaller image?
In terms of hardware, we're currently using a baseball cap with a camera and IR LED mounted on it. We will build a pair of eye-tracking glasses from the EyeWriter Instructables, which uses a hacked PS3 Eye camera and a pair of sunglasses to build the hardware for about $50.
I had fun playing around with the Box2D library a few weeks ago, and also enjoyed working with pendulum motion for this sketch. A string of pearls seemed to combine these ideas nicely - so, for the Nature of Code midterm assignment, I set out to recreate the motion and behaviors of a string of pearls.
The "big idea" is a string of pearls suspended from above, swinging from side to side until it collides with an object and snaps in half, allowing the pearls to fall to the ground. Once the pearls hit the ground, they will bounce and roll around on the ground until they eventually come to rest.
The string of beads is composed of bead objects attached via joints. My first task was to create the appropriate joints to attach the bead objects to one another. I researched several joint types in the Box2D library and went with the revolute joint, which forces each pair of beads to share a common anchor point around which the two bodies can rotate.
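Box2D handles the joint solving internally; to show what's going on, here's a standalone approximation of the same string-of-beads behavior (not the jbox2d API) using Verlet integration plus a distance constraint between neighboring beads, with the top bead pinned:

```java
// A hanging bead chain without Box2D: Verlet integration (velocity implicit
// in position minus previous position) plus repeated distance-constraint
// passes that keep neighboring beads `spacing` apart - a rough stand-in for
// a chain of revolute joints.
public class BeadChain {
    final int n;
    final double spacing;
    final double gravity = 0.2;   // per-step downward acceleration
    final double damping = 0.99;  // bleeds off energy so the swing settles
    double[] x, y, px, py;        // current and previous positions

    BeadChain(int n, double spacing) {
        this.n = n;
        this.spacing = spacing;
        x = new double[n]; y = new double[n];
        px = new double[n]; py = new double[n];
        for (int i = 0; i < n; i++) { x[i] = i * spacing; px[i] = x[i]; } // start horizontal
    }

    void step() {
        // integrate every bead except bead 0, which stays pinned at the top
        for (int i = 1; i < n; i++) {
            double vx = (x[i] - px[i]) * damping;
            double vy = (y[i] - py[i]) * damping;
            px[i] = x[i]; py[i] = y[i];
            x[i] += vx;
            y[i] += vy + gravity;
        }
        // enforce the "joints": several relaxation passes over each neighbor pair
        for (int pass = 0; pass < 5; pass++) {
            for (int i = 0; i < n - 1; i++) {
                double dx = x[i + 1] - x[i], dy = y[i + 1] - y[i];
                double dist = Math.max(1e-9, Math.sqrt(dx * dx + dy * dy));
                double corr = (dist - spacing) / dist;
                if (i == 0) {  // bead 0 is pinned, so bead 1 takes the full correction
                    x[1] -= dx * corr; y[1] -= dy * corr;
                } else {       // otherwise split the correction between the pair
                    x[i] += dx * corr / 2;     y[i] += dy * corr / 2;
                    x[i + 1] -= dx * corr / 2; y[i + 1] -= dy * corr / 2;
                }
            }
        }
    }
}
```

Snapping the string in half, as planned for the collision step, would amount to skipping the constraint pass for one chosen pair so the two halves fall independently.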
The working Processing sketch and code can be found here.
My next step will be to make the beads collide with the boundary, snap the joints and make the beads fall to the ground.
Our first assignment for Nature of Code was to select an example of real-world "natural" motion and develop a set of rules for the movement of an object based on that motion. Since it's January, I attempted to warm things up a little (in my mind, at least) by replicating the motion of these palm trees blowing in the wind.
Initially, I'd like to mimic the swaying motion of the tree trunks, which appear to slow down as they swing farther to the right or left. As a first step, I've gotten the square to move mostly left/right, with a lesser degree of up/down motion.
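That fast-through-the-middle, slow-at-the-extremes quality is exactly what simple harmonic motion gives you; a minimal sketch of driving a position with a sine (invented numbers, not my actual sketch code):

```java
// Simple harmonic sway: x = amplitude * sin(angle), stepped each frame.
// The position changes fastest near the center and slowest near the
// extremes, just like a swaying trunk.
public class Sway {
    final double amplitude, speed; // speed = radians advanced per frame
    double angle = 0;

    Sway(double amplitude, double speed) {
        this.amplitude = amplitude;
        this.speed = speed;
    }

    // advance one frame and return the new x offset
    double next() {
        angle += speed;
        return amplitude * Math.sin(angle);
    }
}
```

Adding a smaller, slightly out-of-phase sine to the y position would give the lesser up/down component, and using a different speed per trunk keeps a row of them from swaying in lockstep.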
I'm continuing to try and wrap my head around vectors...