William Jennings, Erica Newman, Becky Kazansky
Since February, we have been in contact with an individual with severe motion and speech constraints and partial blindness. This individual has never had the capacity to write an email without the assistance of others. He loves jazz and NPR, but has no consistent access.
Having the chance to design with a specific person in mind is a beautiful opportunity. Constraints take on greater importance. Expectations are raised.
We began by creating a simple piece of software that, with the use of a web cam, allows a user to open browser targets with a swipe of the hand across two regions on a screen.
As of April 23, we have a rudimentary gesture interface for the Kinect that allows Winston (or any user with a Kinect hooked up) to open and scroll through iTunes with up or down hand motions. This is just the beginning: a full-scale gestural alphabet is the goal.
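The up/down detection can be sketched as a simple classifier over the hand's vertical position across a gesture window. This is a minimal illustration, not the actual implementation; the function name and the displacement threshold are assumptions.

```python
def classify_vertical_gesture(y_positions, threshold=0.15):
    """Classify a trace of hand heights as 'up', 'down', or None.

    y_positions: normalized hand heights (0 = bottom of frame, 1 = top),
    sampled over the gesture window. threshold is the minimum net
    displacement needed to count as a deliberate swipe rather than drift.
    """
    if len(y_positions) < 2:
        return None
    delta = y_positions[-1] - y_positions[0]
    if delta > threshold:
        return "up"
    if delta < -threshold:
        return "down"
    return None
```

Classifying on net displacement rather than instantaneous velocity makes the gesture forgiving of slow or shaky motion, which matters for a user with limited movement.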
Erica is taking the project to Maker Faire in San Francisco at the end of May.
Create an interface that allows Winston to have more autonomy in his day-to-day life. Specifically, the ideal goal was to create a system for him to compose emails without the help of an aide.
-Geography: hard to test consistently
-Has to be PC-based
-Winston has limited vision and poor movement
-What does Winston like?
-What can we provide Winston access to that he currently doesn’t have access to?
-Focusing in on a few possible deliverables
Create a scalable system with simple gestures
-Kinect/camera: A simple gestural navigation system utilizing the right and left arms that would allow Winston to launch an interface and use a number of preselected applications
-Xylophone-like interface: incorporate color and sound into mapped quadrants
-Physical interface: A board with a wireless connection to Winston’s computer that has four or so physical buttons, whether they come in the form of force-sensing resistors (FSRs) or a multi-touch screen
-Making buttons dynamic to account for user testing
-Two initial targets in mind: NPR and iTunes
-Camera lighting issues. Adjusting threshold.
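One way to handle the lighting problem is to derive the motion-detection threshold from the frame's overall brightness instead of hard-coding it. This is a hedged sketch of that idea; the `base` and `scale` constants are tuning assumptions, not measured values.

```python
def motion_threshold(frame, base=25, scale=0.1):
    """Pick a pixel-difference threshold from the frame's mean brightness.

    frame: flat list of grayscale pixel values (0-255). Brighter scenes
    tend to be noisier, so they get a higher threshold; this keeps
    ambient light changes from being mistaken for a swipe.
    """
    mean = sum(frame) / len(frame)
    return base + scale * mean
```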
-Are the buttons easy to reach? How will Winston’s gestures translate?
-Will a gestural system tire him out?
-How scalable is the button system for other applications?
Webcam-based motion tracking with two colored buttons on screen, each controlling a different function in iTunes.
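The core of this prototype can be sketched as frame differencing over two screen regions: whichever half of the image sees the most changed pixels is the "button" that was swiped. A minimal illustration, assuming flat row-major grayscale frames; the thresholds are placeholders for tuning.

```python
def detect_region_motion(prev, curr, width, threshold=30, min_changed=50):
    """Compare two grayscale frames (flat lists, row-major) and report
    which half of the image, 'left' or 'right', saw the most motion.

    Returns None when fewer than min_changed pixels differ by more than
    threshold, so small flickers do not trigger a button.
    """
    left = right = 0
    for i, (a, b) in enumerate(zip(prev, curr)):
        if abs(a - b) > threshold:
            if i % width < width // 2:  # column index decides the region
                left += 1
            else:
                right += 1
    if left + right < min_changed:
        return None
    return "left" if left >= right else "right"
```

Each on-screen button then just binds a region name to an iTunes action (play/pause, next track, and so on).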
-Back to the Kinect:
-Turns out Winston’s computer can handle it
-We need depth
-Initial target is, again, iTunes
-Keep it simple with four directional gestures (up, down, left, right). Directions are more scalable than buttons: they can be put into different combinations as we expand the software for use with other applications
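The scalability argument can be made concrete with a lookup table from direction sequences to commands: four primitives yield sixteen two-gesture combinations, and so on. The specific bindings below are hypothetical; the real mapping would come out of user testing with Winston.

```python
# Hypothetical bindings from gesture sequences to application commands.
GESTURE_MAP = {
    ("up",): "volume_up",
    ("down",): "volume_down",
    ("left",): "previous_track",
    ("right",): "next_track",
    ("up", "up"): "launch_itunes",  # two-gesture combos extend the vocabulary
}

def lookup_command(directions):
    """Return the command bound to a sequence of directional gestures,
    or None if the sequence is unbound."""
    return GESTURE_MAP.get(tuple(directions))
```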
-Account for gestural anomalies with averaging
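The averaging step can be sketched as a moving-average filter over the raw hand positions, run before gesture classification so that jitter and involuntary movements are damped out. The window size is an assumption to be tuned per user.

```python
def smooth(samples, window=5):
    """Moving average over raw hand positions (one coordinate axis).

    Each output value averages the current sample with up to window-1
    preceding samples, damping one-frame spikes without adding much lag.
    """
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```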