A collaboration with Alvin Chang
see Alvin’s post here
Soliloquy is a non-visual interactive system that implements simple body tracking, spatial audio, and airflow feedback.
Initially built with a Kinect and the OSCeleton library in Processing, the system combines constrained skeleton tracking (limited to the upper body of a seated individual) with directional airflow feedback from a specialized radial fan rig: eight 12 V DC fans modulated by an Arduino microcontroller. Spatial audio from ToxicLibs is delivered through noise-canceling headphones. The airflow system is powered by three modified PC power supplies providing a total of 33 A to the fans, with each fan drawing a maximum of 7 A.
The user is blindfolded and seated at the center of an adjustable perimeter of directional fans. The array delivers variable-intensity airflow from the front, back, left, and right. The intensity of each fan's flow, along with a directional 'wind' sound effect, is mapped to the orientation of the user's head and shoulders: if the user gradually leans forward, the frontal airflow intensifies along with the wind sound, giving the user feedback that their movement and position are active and reactive within the system. From there the user can explore a two-dimensional auditory space populated with sound objects.
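The lean-to-airflow mapping can be sketched roughly as follows. This is an illustration, not the project's actual code: the lean vector representation, the 0-255 PWM range, and all names here are our own assumptions about how head/shoulder orientation might drive the four fan groups and the wind sound gain.

```java
// Sketch of the lean-to-airflow mapping described above.
// Assumption: lean is a 2D vector (x = left/right, y = front/back)
// in the range [-1, 1], derived from the tracked head and shoulders.
public class FanMapping {
    // Map a lean vector to PWM duty cycles (0-255) for the
    // front, back, left, and right fan groups.
    public static int[] fanLevels(double leanX, double leanY) {
        int front = (int) Math.round(255 * Math.max(0, leanY));
        int back  = (int) Math.round(255 * Math.max(0, -leanY));
        int right = (int) Math.round(255 * Math.max(0, leanX));
        int left  = (int) Math.round(255 * Math.max(0, -leanX));
        return new int[] { front, back, left, right };
    }

    // The wind sound gain tracks the overall magnitude of the lean,
    // clamped to 1.0 so audio and airflow saturate together.
    public static double windGain(double leanX, double leanY) {
        return Math.min(1.0, Math.hypot(leanX, leanY));
    }
}
```

Leaning fully forward would then drive the front fan group to maximum while the other three stay off, and the wind gain would rise smoothly with the lean.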
Spatial Audio and Sound Objects
Each sound object is located in the auditory space as a looping monaural sound file. The 3D spatial audio capability from ToxicLibs provides logarithmic intensity falloff and Doppler shift. At present we can only render simulated spatial audio using distance attenuation, without any filtering. This means that without OpenAL support or 5.1 surround capability (on a MacBook Pro), the user's perception of front and back is difficult to discern. This can make spatial navigation difficult, though some users have been able to adapt and accurately locate sound objects. Optimally we would like to add true binaural rendering to the system with a device like Creative Labs' Tactic3D gaming headset, which would allow true 3D spatial audio.
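A minimal sketch of distance-only attenuation makes the front/back problem concrete. The class and parameter names below are our own, not the ToxicLibs API; we assume an inverse-distance ("logarithmic" in dB terms) rolloff with a reference distance at which the gain is 1.0.

```java
// Minimal sketch of distance-only attenuation for a looping mono source.
// Hypothetical names; not the ToxicLibs audio API.
public class SoundObject {
    final double x, y;       // position of the source in the 2D auditory space
    final double refDist;    // distance at which gain == 1.0

    public SoundObject(double x, double y, double refDist) {
        this.x = x; this.y = y; this.refDist = refDist;
    }

    // Inverse-distance rolloff: the gain halves for each doubling of
    // distance, and is clamped to 1.0 inside the reference distance.
    public double gainAt(double listenerX, double listenerY) {
        double d = Math.max(refDist, Math.hypot(x - listenerX, y - listenerY));
        return refDist / d;
    }
}
```

Because the gain depends only on distance, a source four units in front of the listener and one four units behind produce identical output, which is exactly the front/back ambiguity described above.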
The system is experimental, exploring the capabilities of non-visual spatial navigation only, though a more cohesive audio-spatial environment is an eventual goal.
An alternate configuration under consideration would replace the Kinect with a simple color-tracking system that tracks one or more points on the user's headset from above. This approach could offer the same, if not greater, accuracy of control in the current configuration. While it would limit the addition of further gestural interaction later on, it would eliminate the need to calibrate each user for skeleton tracking with the Kinect, a process that has not been seamless due to the required lower-than-normal position of the Kinect and its difficulty recognizing a user within the obstructing airflow array.
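The proposed overhead tracker could be as simple as the following sketch: scan a frame for pixels near a target marker color and take their centroid as the head position. This is a hypothetical illustration; the packed-RGB pixel layout matches Processing's `pixels[]` array, but the names and tolerance scheme are our own.

```java
// Hypothetical sketch of the proposed overhead color tracker:
// find the centroid of all pixels close to a target marker color.
public class ColorTracker {
    // pixels: packed RGB (0xRRGGBB in the low 24 bits), row-major, width w.
    // Returns {x, y} centroid of matching pixels, or null if none match.
    public static double[] centroid(int[] pixels, int w, int h,
                                    int targetRGB, int tolerance) {
        long sx = 0, sy = 0, n = 0;
        for (int i = 0; i < pixels.length; i++) {
            int dr = Math.abs(((pixels[i] >> 16) & 0xFF) - ((targetRGB >> 16) & 0xFF));
            int dg = Math.abs(((pixels[i] >> 8) & 0xFF) - ((targetRGB >> 8) & 0xFF));
            int db = Math.abs((pixels[i] & 0xFF) - (targetRGB & 0xFF));
            if (dr + dg + db <= tolerance) {
                sx += i % w;   // column of matching pixel
                sy += i / w;   // row of matching pixel
                n++;
            }
        }
        return n == 0 ? null : new double[] { (double) sx / n, (double) sy / n };
    }
}
```

Tracking one marker this way needs no per-user calibration, which is the main appeal over Kinect skeleton tracking; two markers would additionally recover head rotation from the line between their centroids.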