Let’s Dance with Music

Qian Zhang

Advisor: Yuliya Parshina-Kottas

An AI-powered system that detects emotional expression in dance and generates responsive sound in real time, transforming movement into music.

[Image: A dancer in motion on a dim stage, surrounded by soft projected light and overlaid sound visualizations representing emotion]

Project Description

This project explores the emotional dialogue between human movement and generative sound. By training a custom machine learning model on dancers’ movements, the system detects three core emotional expressions—Sadness & Inner Struggle, Conflict & Tension, and Freedom & Liberation—and responds with generative sound textures mapped to each emotional layer. Inspired by contemporary dance practice and therapeutic movement exploration, this project empowers performers to “speak” through their bodies and let their emotions shape the sonic space. It challenges the traditional performer–music hierarchy by making sound reactive to the dancer’s internal state, not just their external actions. The result is a dynamic, live emotional instrument that blurs the line between choreography and composition.

Technical Details

The system uses the ml5.js neuralNetwork with time-series input (position, velocity, and acceleration) derived from body-tracking data to classify movement into the three emotional categories. Real-time audio feedback is generated with Tone.js: layered ambient textures, string glissandos, and percussion respond dynamically to the dancer's movement. The model was trained on annotated motion clips and runs in a browser-based interface built with p5.js and webcam capture.
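As a rough illustration of the classification pipeline, the sketch below shows how keypoint positions, velocities, and accelerations could be assembled into a feature vector and passed to an ml5.js neural network with three emotion labels. It is a minimal sketch, not the project's actual code: the use of ml5.bodyPose for tracking, the ml5.js 1.x callback style, the label names, the model file paths, and the exact feature layout are all assumptions made for illustration.

```javascript
// Minimal p5.js + ml5.js sketch (assumes ml5.js 1.x and p5.js are loaded in the page).
// Labels, model paths, and feature layout are illustrative placeholders.
let video, bodyPose, classifier;
let poses = [];
let prevPos = null, prevVel = null;
const LABELS = ['sadness_struggle', 'conflict_tension', 'freedom_liberation'];

function preload() {
  bodyPose = ml5.bodyPose();                 // MoveNet-based body tracking
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  bodyPose.detectStart(video, results => { poses = results; });

  classifier = ml5.neuralNetwork({ task: 'classification' });
  // In practice the model is trained offline on annotated motion clips and
  // loaded here; these paths are placeholders.
  classifier.load({
    model: 'model/model.json',
    metadata: 'model/model_meta.json',
    weights: 'model/model.weights.bin'
  }, () => console.log('emotion model loaded'));
}

function draw() {
  image(video, 0, 0);
  if (poses.length === 0) return;

  // Position: normalized (x, y) of each keypoint of the first detected body.
  const pos = poses[0].keypoints.flatMap(k => [k.x / width, k.y / height]);

  // Velocity and acceleration as frame-to-frame differences of that vector.
  const vel = prevPos ? pos.map((v, i) => v - prevPos[i]) : pos.map(() => 0);
  const acc = prevVel ? vel.map((v, i) => v - prevVel[i]) : vel.map(() => 0);
  prevPos = pos;
  prevVel = vel;

  // Classify a few times per second rather than on every frame.
  if (frameCount % 10 === 0) {
    classifier.classify([...pos, ...vel, ...acc], gotEmotion);
  }
}

function gotEmotion(results) {
  // ml5.js 1.x passes results directly; each entry has { label, confidence }.
  const top = results[0];
  if (LABELS.includes(top.label)) {
    // Hand the detected emotion to the sound engine (see the Tone.js sketch below).
    // setEmotion(top.label, top.confidence);
  }
}
```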
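The emotion-to-sound mapping could then look roughly like the Tone.js sketch below: each emotional layer gets its own texture (an ambient pad, a gliding string-like synth, and percussion), each behind a gain node, and the classifier's output crossfades between them. The specific synth settings, note patterns, fade times, the `setEmotion` helper, and the `#start` button are assumptions for illustration, not the project's actual sound design.

```javascript
import * as Tone from 'tone';   // or load Tone.js via a <script> tag in the page

// One gain node per emotional layer so the classifier can crossfade between them.
const gains = {
  sadness_struggle:   new Tone.Gain(0).toDestination(),
  conflict_tension:   new Tone.Gain(0).toDestination(),
  freedom_liberation: new Tone.Gain(0).toDestination(),
};

// Layer 1: slow ambient pad through a long reverb (Sadness & Inner Struggle).
const pad = new Tone.PolySynth(Tone.Synth, {
  envelope: { attack: 2, release: 4 },
}).connect(new Tone.Reverb(6).connect(gains.sadness_struggle));

// Layer 2: string-like glissandos via portamento (Conflict & Tension).
const strings = new Tone.Synth({
  oscillator: { type: 'sawtooth' },
  portamento: 0.3,
}).connect(gains.conflict_tension);

// Layer 3: percussion that follows movement energy (Freedom & Liberation).
const drum = new Tone.MembraneSynth().connect(gains.freedom_liberation);

// Simple loops keep each layer running; the gain levels decide what is heard.
new Tone.Loop(t => pad.triggerAttackRelease(['C3', 'G3', 'E4'], '2n', t), '1m').start(0);
new Tone.Loop(t => strings.triggerAttackRelease(Math.random() > 0.5 ? 'A3' : 'E4', '4n', t), '2n').start(0);
new Tone.Loop(t => drum.triggerAttackRelease('C2', '8n', t), '4n').start(0);

// Called with the classifier's top label and confidence (see the sketch above).
function setEmotion(label, confidence) {
  for (const [name, gain] of Object.entries(gains)) {
    const target = name === label ? Math.min(confidence, 1) : 0;
    gain.gain.rampTo(target, 1.5);            // smooth 1.5 s crossfade
  }
}

// Browser audio must start from a user gesture (e.g. a "start performance" button).
document.querySelector('#start')?.addEventListener('click', async () => {
  await Tone.start();
  Tone.Transport.start();
});
```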

Research/Context

This work draws on dance therapy, Laban movement analysis, and emotion recognition in computer vision. Inspired by personal experiences of reconnecting with the body through dance, the project aims to create an expressive platform where bodily emotion is not only seen, but heard.