The Kinect is a motion-sensing input device by Microsoft, originally designed for the Xbox 360 video game console. The primary sensor employed by the Kinect is an IR camera: an IR projector casts a pattern of laser dots into the room, and the IR camera reads how that pattern lands on surfaces to calculate the distance to each point (this is how depth is detected; see below). Additionally, the Kinect has an RGB camera (a webcam, basically) with a maximum resolution of 640 x 480.

This is how it "knows" where your hand is and what gives you the capability to use your body as a controller. Obviously, 3D scanning capabilities are not new; governments, NASA, architects, and other high-end institutions that could afford the technology have been using them for years. What is ground-breaking about the Kinect is that it lets you do 3D sensing for about $150. That is crazy cheap.

The Kinect relies on middleware developed by PrimeSense, the company behind the 3D-sensing hardware inside the Kinect (the same hardware also powers the ASUS Xtion Pro, a similar 3D camera). OpenNI is the framework for talking to the sensor, and NITE is PrimeSense's proprietary middleware that runs the complex skeleton-tracking algorithm. The Kinect reads the 3D depth image, OpenNI NITE works out where each of the user's limbs is in space, and that data can then be used in a front-end software platform like Processing to do any number of things that use your body as a controller.
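
As a concrete (and minimal) sketch of that pipeline, here is roughly what reading the depth image in Processing with Simple OpenNI looks like. Method names follow the older Simple OpenNI releases described in this article and may differ in later versions of the library:

  import SimpleOpenNI.*;

  SimpleOpenNI context;

  void setup() {
    size(640, 480);
    context = new SimpleOpenNI(this);  // connect to the Kinect through OpenNI/NITE
    context.enableDepth();             // ask the sensor for the 3D depth image
  }

  void draw() {
    context.update();                   // grab the next frame from the Kinect
    image(context.depthImage(), 0, 0);  // draw the depth map as a grayscale image
  }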

Sources

Microsoft Kinect for Windows (Windows compatible) | $249
Microsoft Kinect for XBox (currently Mac, Windows, Linux compatible) | $149
Processing | Free - Open Source
Simple OpenNI for Processing | Free - Open Source
OpenNI NITE | Free
Microsoft Visual Studio 2010 | $799
Microsoft Visual Basic | Free (requires Visual Studio)

Accessories

NYKO Zoom for Kinect | $16

Applications

Kinect Abnormal Motion Assessment System

Technical Characteristics

Microsoft Kinect for XBox

  • Communicates serially via USB
  • Standard frame rate: 30fps
  • Resolution: 640x480

Microsoft Kinect for Windows

  • Communicates serially via USB
  • Standard frame rate: 30fps
  • Resolution: 640x480

Hardware Requirements

Kinect for Windows SDK

  • 32-bit (x86) or 64-bit (x64) processor
  • Dual-core 2.66-GHz or faster processor
  • Dedicated USB 2.0 bus
  • 2 GB RAM
  • A Microsoft Kinect for Windows sensor

Software Requirements

Kinect for Windows SDK

  • Windows 7
  • Microsoft Visual Studio 2010 (Express or above)
  • .NET Framework 4.0

Kinect for XBox

  • Processing
  • Windows, Mac, or Linux

Setting up your Kinect with Simple OpenNI and Processing

The Simple OpenNI for Processing project home includes all support files and download versions.

1. Install OpenNI NITE. This is what allows us to access all of the 3D data from the Kinect for use in Processing. It is the software written by PrimeSense that lets us communicate with the Kinect, and it is the only proprietary software used with the Kinect. To install, download the file, then launch the Terminal (Applications → Utilities → Terminal). Change into the downloaded directory by typing “cd ” and dragging the unzipped folder onto the Terminal window, which fills in the correct file path. Hit return, then type “sudo ./install.sh” to run the installer.

2. Install Simple OpenNI. This is the library that passes data from OpenNI into Processing. To install it, download and unzip the file linked above, then drag the folder into your libraries folder in Processing and restart Processing.

  • Note: In the early days of Kinect hacking, it was necessary to use OSC to relay skeleton data from the Kinect to Processing on OS X. Skeleton data, or “skeletonization,” is the reading of where your skeleton (joints/limbs) is in 3D space; it is what allows you to designate various parts of your body as a “controller.” Fortunately, PrimeSense released their software (OpenNI NITE) to serve as the middleware that performs the skeletonization, which makes things much easier.

3. Launch Processing, plug in your Kinect and get started coding!
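
To give a sense of what the skeletonization described above looks like in code, here is a sketch of skeleton tracking with the older Simple OpenNI API (the version that requires the calibration pose discussed under Peculiarities). Callback names changed in later releases of the library, so treat this as an outline rather than a definitive example; it also assumes a single user (id 1):

  import SimpleOpenNI.*;

  SimpleOpenNI context;

  void setup() {
    size(640, 480);
    context = new SimpleOpenNI(this);
    context.enableDepth();
    // Turn on user tracking with full skeleton data
    context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
  }

  void draw() {
    context.update();
    image(context.depthImage(), 0, 0);
    if (context.isTrackingSkeleton(1)) {   // assumes a single user with id 1
      // Joint positions come back in real-world coordinates (millimeters)
      PVector hand = new PVector();
      context.getJointPositionSkeleton(1, SimpleOpenNI.SKEL_LEFT_HAND, hand);
      PVector screenPos = new PVector();
      context.convertRealWorldToProjective(hand, screenPos);
      fill(255, 0, 0);
      ellipse(screenPos.x, screenPos.y, 20, 20);  // mark the hand on screen
    }
  }

  // OpenNI NITE calls these as users appear and calibrate
  void onNewUser(int userId) {
    context.startPoseDetection("Psi", userId);    // wait for the calibration pose
  }

  void onStartPose(String pose, int userId) {
    context.stopPoseDetection(userId);
    context.requestCalibrationSkeleton(userId, true);
  }

  void onEndCalibration(int userId, boolean successful) {
    if (successful) {
      context.startTrackingSkeleton(userId);      // calibration worked: start tracking
    } else {
      context.startPoseDetection("Psi", userId);  // try again
    }
  }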

Code Samples

You can find all of Greg Borenstein’s Making Things See examples on his GitHub page.

Skeletal Tracking

Synapse

Max/MSP/Jitter

"jit.freenect.grab" by Jean-Marc Pelletier
depth sensor & RGB camera only
"2 methods for undistorting the Kinect depth map in Max/Jitter"

Books

Making Things See by Greg Borenstein, published by O’Reilly
Meet the Kinect: An Introduction to Programming Natural User Interfaces by Sean Kean, Jonathan Hall, and Phoenix Perry, published by Apress

Peculiarities

  • Natural sunlight: If you are having trouble calibrating, check the amount of sunlight in the room. Sunlight contains infrared light, which throws off the Kinect's ability to read the 3D space. On the other hand, because it uses IR rather than visible light to read depth, the Kinect does not require indoor lighting to work
  • Reflective surfaces: These can interfere with the infrared sensor, particularly with skeleton tracking
  • Calibration: The Windows SDK version does not require calibration for skeleton tracking, but the XBox version does. This means that to use skeleton tracking, you have to have the user stand in the calibration pose (arms raised in a “psi” position) before using it. This takes ~5-10 seconds
  • Distance: The Kinect can only sense you within a certain range. If you are too close or too far, it cannot establish a readable depth image. The Kinect for Windows has a “Near Mode” which allows closer sensing; using the NYKO Zoom also helps. Microsoft publishes a chart of the distance limitations. One way to visualize the usable range is shown in the sketch after this list
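
To get a feel for the working range, a sketch along these lines (again using Simple OpenNI, which reports depth in millimeters) shows which pixels the Kinect can actually read. The 500 mm and 4000 mm cutoffs below are illustrative round numbers, not Microsoft's official limits:

  import SimpleOpenNI.*;

  SimpleOpenNI context;

  void setup() {
    size(640, 480);  // matches the sensor's 640 x 480 depth resolution
    context = new SimpleOpenNI(this);
    context.enableDepth();
  }

  void draw() {
    context.update();
    int[] depthValues = context.depthMap();  // one reading per pixel, in mm
    loadPixels();
    for (int i = 0; i < depthValues.length; i++) {
      int d = depthValues[i];
      if (d == 0) {
        pixels[i] = color(255, 0, 0);  // no reading: too close, too far, or IR washed out
      } else if (d > 500 && d < 4000) {
        pixels[i] = color(map(d, 500, 4000, 255, 0));  // usable band, brighter = closer
      } else {
        pixels[i] = color(0, 0, 255);  // a reading, but outside the illustrative band
      }
    }
    updatePixels();
  }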

Application Notes

How to get set up with the Kinect
NYKO Zoom specs
Kinect Abnormal Motion Assessment System