Even the tiniest fragment of digital sound (especially music) holds a multiplicity of hidden information. Using audio analysis techniques, this data can be distilled into a vast array of characteristics describing various features of the sound, such as loudness, pitch, or the spectrum of frequencies present. Through additional analysis, these data points can be used to detect higher-level musical features like tempo, rhythm, and melody. Furthermore, sound and music information can be used to train deep learning models that make accurate predictions (e.g., what a sound is, what genre a song belongs to, what mood a song evokes). Or we can use machine learning for generative purposes, using the data to guide the creation of new sounds, synthesizers, or even entire songs. These activities fall under the areas of digital signal processing, music information retrieval, and machine learning, a trifecta that forms the technological foundation for the research area known as machine listening. (For a first taste of what this looks like in code, see the sketch at the end of this description.)

With a focus on ambient sound and music, this class will explore how tools and techniques from the field of machine listening can become a powerful aspect, or even strategy, in the realm of creative applications. This course will not cover, nor will it assume knowledge of, the underlying technical aspects of machine listening or music theory. Resources for further pursuit of each week's topics will be provided but will not be required for class. Instead, our aim will be to understand what these techniques are doing, when and where to apply them, and how to access and apply them effectively through powerful software libraries. This high-level approach will keep our efforts directed toward creative experimentation without becoming bogged down.

Ultimately, students will synthesize the semester's work into their own creative application involving sound. Here are some examples of the types of projects this class could support:

- An app that visualizes audio through graphics or DMX/LED lighting to create synesthesia-like effects
- An automatic system for transcribing music based on a recording or real-time input
- A music remixing system where tracks are automatically selected, spliced, processed, and rearranged
- A musical instrument that adapts to its player based on real-time analysis of the played sound
- A synthesizer that uses machine learning to optimize and tune its parameters
- Music education software that visualizes rhythm and melody for the purpose of instruction
- A rhythm game that derives its gameplay from music information (e.g., Guitar Hero, Rock Band, DDR)
- A tool that analyzes the health of a machine based on its sound through a contact microphone

The course will be taught in JavaScript, with ICM-level programming experience recommended. No formal training in sound or music is expected or required. This course will be a great fit for any student who is interested in sound and wants to explore it more deeply. Please feel free to reach out to me via email with any questions about the class.
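To give a concrete sense of the analysis described above, here is a minimal sketch in JavaScript that measures loudness and a frequency spectrum from live microphone input using the browser's built-in Web Audio API. This is an illustrative example, not course material: the RMS loudness measure and the `startListening` function name are choices made for this sketch, and it assumes a browser environment.

```js
// Minimal sketch: estimate loudness (RMS) and capture a frequency
// spectrum from the microphone with the Web Audio API.
// Call this from a user gesture (e.g. a button click) so the browser
// allows the AudioContext to start.
async function startListening() {
  // Ask the browser for microphone access.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });

  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(stream);

  // The AnalyserNode exposes time-domain samples and an FFT spectrum.
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048; // samples per analysis frame
  source.connect(analyser);

  const timeData = new Float32Array(analyser.fftSize);
  const freqData = new Uint8Array(analyser.frequencyBinCount);

  function analyze() {
    analyser.getFloatTimeDomainData(timeData);
    analyser.getByteFrequencyData(freqData);

    // Loudness: root-mean-square of the time-domain samples.
    let sumSquares = 0;
    for (const sample of timeData) sumSquares += sample * sample;
    const rms = Math.sqrt(sumSquares / timeData.length);

    // freqData now holds the magnitude of each frequency bin (0-255),
    // spanning 0 Hz up to half the sample rate.
    console.log(`loudness (RMS): ${rms.toFixed(4)}`);

    requestAnimationFrame(analyze); // re-analyze every animation frame
  }
  analyze();
}
```

Higher-level features like the tempo, pitch, and mood analyses mentioned above are typically accessed through software libraries that build on this kind of raw analysis, which is the level at which the course will operate.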
ITPG-GT.3018.1 | Instructor: Michael Simpson | Fri 3:20pm to 5:50pm | Meeting Pattern: 7-First Half