Yotam Mann – ITP Spring Show 2018

35 Years of Mass Shootings in the US

Mary Notari

A piece of music uses a comprehensive dataset of the past 35 years of mass shootings in the US, compiled by Mother Jones, as its score, in an attempt to answer the question: how can music move us?

http://www.marynotari.com/2018/03/28/interactive-music-final-project/

Main Project Image

Description

Since first publishing its report in 2012, Mother Jones has continuously updated a spreadsheet of dates, locations, casualties, and other metrics for every mass shooting in the US since 1982. This project was a result of Yotam Mann's class, “Interactive Music,” in which we were challenged to imagine and perform a completely novel musical score: one that did not use traditional music notation. To that end, I conceived of this dataset as my musical score. Each column of data points corresponds to a different audio event within a browser-based sketch. I use the Tone.js and Moment.js libraries to generate the audio and time it out, and Mozart's “Requiem” provides the basis for the chord progression.

The audio events occur in concert with a p5 animation over a map of the US, with the shootings visualized as ellipses sized according to casualty counts. Users may hover over each ellipse to read a detailed description of the shooting as provided by Mother Jones. After 3 minutes of continuous play, a button with the words “Stop this” begins to fade in. The song and animation loop infinitely unless the user clicks the button, which leads them to a 5calls.org page about anti-gun-violence advocacy. Like the shootings in real life, nothing will change unless those who are able take action.

The reasoning for this feature is to connect the function of the piece to its conceptual core: what is the point of aestheticizing data this fraught? Can there be a tangible connection between aesthetics and action? How can music make subjects that might otherwise be paralyzing and overwhelming accessible and knowable? Put another way: how can music move us?
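As a rough illustration of the data-to-sound mapping described above, here is a minimal sketch of how dataset rows might be scheduled with Tone.js and drawn with p5. The field names (date, fatalities, lat, lng), the sample rows, the year-to-second time scaling, and the pitch/velocity mapping are illustrative assumptions, not the project's actual columns or code.

```javascript
// Illustrative sketch only (not the project's code). Assumed field names and
// sample rows stand in for the Mother Jones spreadsheet.
const data = [
  { date: "1982-08-20", fatalities: 8,  lat: 37.7, lng: -122.4 },
  { date: "2012-12-14", fatalities: 27, lat: 41.4, lng: -73.3  },
  { date: "2017-10-01", fatalities: 58, lat: 36.2, lng: -115.1 },
];

const synth = new Tone.Synth().toDestination();
const start = moment(data[0].date);

// Arbitrary scaling: one second of playback per year of real time.
data.forEach((row) => {
  const offsetSec = moment(row.date).diff(start, "years", true);
  // Assumed mapping: deadlier events trigger lower, louder notes.
  const note = Tone.Frequency(72 - Math.min(row.fatalities, 24), "midi").toNote();
  Tone.Transport.scheduleOnce((time) => {
    synth.triggerAttackRelease(note, "8n", time, Math.min(1, row.fatalities / 30));
  }, offsetSec);
});
Tone.Transport.start();

// p5: draw each shooting as an ellipse sized by its casualty count.
function setup() { createCanvas(800, 500); noLoop(); }
function draw() {
  background(240);
  data.forEach((row) => {
    const x = map(row.lng, -125, -67, 0, width);  // rough continental-US bounds
    const y = map(row.lat, 49, 25, 0, height);
    ellipse(x, y, sqrt(row.fatalities) * 6);
  });
}
```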

Classes

Interactive Music

Join

Max Horwich

A voice-controlled web VR experience invites you to sing along with a robotic choir.

https://wp.nyu.edu/maxhorwich/2018/04/30/join/

Main Project Image

Description

Join is an interactive musical experience for web VR. A choir of synthesized voices sings from all sides in algorithmically generated four-part harmony, while the user changes the environment by raising their own voice in harmony.

Inspired by the Sacred Harp singing tradition, the music is generated in real time from Markov chains derived from the original Sacred Harp songbook. Each of the four vocal melodies is played from one of the four corners of the virtual space toward the center, where the listener experiences the harmony in head-tracking 3D audio. A microphone input allows the listener to change the VR landscape with sound, transporting them as they join in song.
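A minimal sketch of how one voice part could be generated and spatialized, assuming a first-order Markov chain over note names; the tiny training set, the Panner3D coordinates, and the sequencing details are made-up stand-ins for melodies parsed from the songbook, not the project's actual code.

```javascript
// Build a first-order Markov transition table from example melodies.
function buildTransitions(melodies) {
  const table = {};
  melodies.forEach((notes) => {
    for (let i = 0; i < notes.length - 1; i++) {
      const from = notes[i], to = notes[i + 1];
      (table[from] = table[from] || []).push(to);
    }
  });
  return table;
}

// Sample a phrase by walking the table from a starting note.
function generate(table, start, length) {
  const out = [start];
  while (out.length < length) {
    const options = table[out[out.length - 1]];
    if (!options) break;                       // dead end: stop early
    out.push(options[Math.floor(Math.random() * options.length)]);
  }
  return out;
}

// Tiny made-up training set standing in for songbook melodies.
const trebleTable = buildTransitions([
  ["C4", "E4", "G4", "E4", "C4"],
  ["C4", "D4", "E4", "G4", "C5"],
]);
const phrase = generate(trebleTable, "C4", 8);

// Play the generated phrase from one corner of the virtual space.
const panner = new Tone.Panner3D(-3, 0, -3).toDestination();
const voice = new Tone.Synth().connect(panner);
new Tone.Sequence((time, note) => {
  voice.triggerAttackRelease(note, "4n", time);
}, phrase, "4n").start(0);
Tone.Transport.start();
```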

While the choir is currently programmed to sing only in solfege (as songs in the Sacred Harp tradition are traditionally sung for the first verse), I am in the process of teaching the choir to improvise lyrics as well as melodies. Using text also drawn from the Sacred Harp songbook, I am training a similar set of probability algorithms on words, treating them as notes. From there, I will use a sawtooth oscillator playing the MIDI Markov chain as the carrier and a synthesized voice reading the text as the modulator, combining them into one signal to create a quadraphonic vocoder that synthesizes hymns in real time.
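One way the carrier/modulator combination could be wired up is sketched below as a crude channel vocoder in Tone.js. The band frequencies, the follower smoothing, and the use of a microphone input in place of the synthesized voice are all assumptions, and the quadraphonic routing is omitted.

```javascript
// Crude channel-vocoder sketch (assumptions, not the project's code):
// carrier = sawtooth that would play the Markov-generated pitches;
// modulator = microphone, standing in for the synthesized voice.
const carrier = new Tone.Oscillator(110, "sawtooth").start();
const modulator = new Tone.UserMedia();
modulator.open();

const bands = [200, 400, 800, 1600, 3200];    // a handful of analysis/synthesis bands
bands.forEach((freq) => {
  // Analyze the modulator's energy in this band...
  const analysis = new Tone.Filter(freq, "bandpass");
  const follower = new Tone.Follower(0.05);
  modulator.connect(analysis);
  analysis.connect(follower);

  // ...and use its envelope to gate the carrier in the same band.
  const synthesis = new Tone.Filter(freq, "bandpass");
  const vca = new Tone.Gain(0);
  carrier.connect(synthesis);
  synthesis.connect(vca);
  follower.connect(vca.gain);                 // envelope drives the band's gain
  vca.toDestination();
});
```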

For this show, I present Join in a custom VR headset: a long, quilted veil affixed to a Google Cardboard. Rather than strapping across the user's face, this headset is draped over the head and hangs down, completely obscuring the wearer's face and much of their body. After experiencing the virtual environment, participants are invited to decorate and inscribe the exterior of the headset with patches, fabric pens, or in any other way they see fit, leaving their own mark on a piece that hopefully left some mark on them.

Classes

Algorithmic Composition, Electronic Rituals, Oracles and Fortune-Telling, Expressive Interfaces: Introduction to Fashion Technology, Interactive Music, Open Source Cinema
