Jung Min Hong
Hand Gesture Recognition
http://reginahjm.wordpress.com/icm/
Classes
Introduction to Computational Media
Using Processing and a Kinect, I am working on hand gesture recognition.
My project specifically focuses on a bird hand gesture.
When the user stands 2-3 ft in front of the Kinect and makes the bird hand gesture (ten fingers spread like a bird in flight), a realistic bird image is released.
The animated bird flies around and lands on the user's head.
I would like to project these images onto a brick wall. Please see my prototype work on my website (http://reginahjm.wordpress.com/icm/)
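The core recognition step is spotting the ten-fingers-spread pose. One way to sketch it, assuming a blob-detection library (such as Blobscanner or OpenCV for Processing) has already supplied the hand contour and palm center, is to count fingertips as local maxima of the contour's distance from the palm. The function names, the contour representation, and the radius threshold below are all illustrative assumptions, not the project's actual code.

```java
// Hypothetical fingertip counter. A fingertip is modeled as a contour
// point that is a local maximum in distance from the palm center and
// sticks out past a minimum radius. A real sketch would get the contour
// from a blob-detection or OpenCV library rather than a raw array.
public class BirdGesture {

    // Count fingertip candidates on one closed hand contour.
    static int countFingertips(double[][] contour, double cx, double cy,
                               double minRadius) {
        int count = 0;
        int n = contour.length;
        for (int i = 0; i < n; i++) {
            double d  = dist(contour[i], cx, cy);
            double dp = dist(contour[(i - 1 + n) % n], cx, cy);
            double dn = dist(contour[(i + 1) % n], cx, cy);
            // Local maximum that protrudes far enough from the palm.
            if (d > dp && d > dn && d > minRadius) count++;
        }
        return count;
    }

    static double dist(double[] p, double cx, double cy) {
        double dx = p[0] - cx, dy = p[1] - cy;
        return Math.sqrt(dx * dx + dy * dy);
    }

    // The "bird" pose: both hands open, five fingertips each.
    static boolean isBirdGesture(int leftFingertips, int rightFingertips) {
        return leftFingertips == 5 && rightFingertips == 5;
    }
}
```

In the draw loop, `isBirdGesture` would gate the release of the bird animation.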
Background
Inspiration from:
Golan Levin's "Manual Input Sessions"
http://www.youtube.com/watch?v=3paLKLZbRY4
Examples of hand tracking:
http://www.youtube.com/watch?v=K1dP0k3n_LI
and hand tracking with Processing:
http://www.youtube.com/watch?v=_q4VdaJyX0Y
Processing libraries I studied:
-Blob Detection
-Blobscanner
-ezGestures
-FigureTracker
-Leap Motion for Processing
-OpenCV for Processing
-Open Kinect
-PeasyCam
-SimpleOpenNI
Audience
I am targeting audiences of all ages, from kids to adults.
User Scenario
1. The user approaches and sees the instructions.
2. The instructions say, "Pose the bird hand gesture 2-3 ft in front of the Kinect sensor to release a bird from your hands."
3. The user sees their hand silhouette on the projected wall.
4. When the user makes the bird hand gesture, a realistic bird image is released and an animation shows the bird flying around.
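The scenario above can be sketched as a small per-frame state machine: the scene idles with instructions and silhouette, releases the bird when the gesture is seen, and lands it once it has hovered near the user's head long enough. The state names and the frame threshold are illustrative assumptions, not taken from the actual sketch.

```java
// Hypothetical interaction state machine for the user scenario:
// IDLE (instructions + silhouette) -> RELEASED (bird flies around)
// -> LANDED (bird sits on the user's head).
public class BirdScene {
    enum State { IDLE, RELEASED, LANDED }

    State state = State.IDLE;
    int framesNearHead = 0;
    static final int LANDING_FRAMES = 30;  // ~1 s at 30 fps (assumed)

    // Called once per frame with the latest tracking results.
    void update(boolean birdGestureSeen, boolean birdNearHead) {
        switch (state) {
            case IDLE:
                if (birdGestureSeen) state = State.RELEASED;
                break;
            case RELEASED:
                // Count consecutive frames the bird hovers near the head.
                framesNearHead = birdNearHead ? framesNearHead + 1 : 0;
                if (framesNearHead >= LANDING_FRAMES) state = State.LANDED;
                break;
            case LANDED:
                break;  // stays landed until the scene resets
        }
    }
}
```

Keeping the flow in one explicit state machine makes it easier to add a reset (e.g. when the user walks away) without tangling the draw code.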
Implementation
I would like to project the imagery onto a brick wall to create a natural scene.
Conclusion
I learned about and experimented with many Processing libraries to find the most useful one for my project.
Kinect figure detection was the closest fit, and it works fairly accurately.
Currently the interaction works with only one person at a time; I plan to make it work with two or more people at once.