Join

Max Horwich

A voice-controlled web VR experience invites you to sing along with a robotic choir

https://wp.nyu.edu/maxhorwich/2018/04/30/join/

Description

Join is an interactive musical experience for web VR. A choir of synthesized voices sings from all sides in algorithmically-generated four-part harmony, while the user changes the environment by raising their own voice in harmony.

Inspired by the Sacred Harp singing tradition, the music is generated in real time, based on Markov chains derived from the original Sacred Harp songbook. Each of the four vocal melodies is played from one of the four corners of the virtual space toward the center, where the listener experiences the harmony in head-tracked 3D audio. A microphone input allows the listener to change the VR landscape with sound, transporting them as they join in song.
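
The melody generation described above can be sketched as a first-order Markov chain over pitches. The Python below is a hypothetical illustration, not the project's code: the toy corpus and solfege vocabulary stand in for transition tables actually derived from the Sacred Harp songbook.

```python
import random
from collections import defaultdict

def build_transitions(melodies):
    """Count which pitch followed which across a corpus of melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for current, nxt in zip(melody, melody[1:]):
            table[current].append(nxt)
    return table

def generate(table, start, length, rng=random):
    """Random-walk the chain: each next pitch is drawn from the pitches
    that followed the current one somewhere in the corpus."""
    melody = [start]
    for _ in range(length - 1):
        followers = table.get(melody[-1]) or [start]  # dead end: restart
        melody.append(rng.choice(followers))
    return melody

# Toy corpus: solfege syllables standing in for MIDI pitches.
corpus = [
    ["fa", "sol", "la", "fa", "sol", "la", "mi", "fa"],
    ["la", "sol", "fa", "mi", "fa", "sol", "la", "la"],
]
table = build_transitions(corpus)
line = generate(table, "fa", 8)  # one eight-note voice
```

Running four independent walks from the same table yields four related but distinct lines, which is essentially the four-part texture described above.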

While the choir currently sings only in solfege (as songs in the Sacred Harp tradition are usually sung for the first verse), I am in the process of teaching the choir to improvise lyrics as well as melodies. Using text also drawn from the Sacred Harp songbook, I am training a similar set of probability algorithms on words, treated the same way as notes. From there, I will use a sawtooth oscillator playing the MIDI Markov chain as the carrier, and a synthesized voice reading the text as the modulator, combining them into one signal to create a quadraphonic vocoder that synthesizes hymns in real time.
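
As a rough sketch of the carrier/modulator idea: a channel vocoder splits both signals into frequency bands, follows the modulator's energy in each band, and uses those envelopes to shape the matching carrier bands. The NumPy example below is an illustrative approximation (FFT band masks and a moving-average envelope follower), not the project's implementation; the band edges and stand-in "voice" signal are arbitrary assumptions.

```python
import numpy as np

SR = 16000  # sample rate (Hz)

def sawtooth(freq, dur):
    """Naive sawtooth oscillator in [-1, 1]."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * (t * freq % 1.0) - 1.0

def band_split(signal, edges):
    """Split a signal into frequency bands using FFT masks."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / SR)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        bands.append(np.fft.irfft(spectrum * mask, len(signal)))
    return bands

def envelope(signal, win=256):
    """Crude envelope follower: moving average of the rectified signal."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def vocode(modulator, carrier, edges):
    """Shape each carrier band by the modulator's energy in that band."""
    out = np.zeros_like(carrier)
    for m_band, c_band in zip(band_split(modulator, edges),
                              band_split(carrier, edges)):
        out += c_band * envelope(m_band)
    return out

edges = [100, 400, 800, 1600, 3200]  # four analysis bands (arbitrary)
# Stand-in "voice": noise with a slowly varying amplitude contour.
voice = np.random.default_rng(0).standard_normal(SR) * \
        np.abs(np.sin(np.linspace(0, 6 * np.pi, SR)))
saw = sawtooth(220.0, 1.0)
hymn = vocode(voice, saw, edges)
```

A real-time browser version would do the same thing with streaming filters per band rather than whole-signal FFTs.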

For this show, I present Join in a custom VR headset — a long, quilted veil affixed to a Google Cardboard. Rather than strapping across the user’s face, this headset is draped over the head and hangs down, completely obscuring the face and much of the body. After experiencing the virtual environment, participants are invited to decorate and inscribe the exterior of the headset with patches, fabric pens, or in any other way they see fit — leaving their own mark on a piece that hopefully left some mark on them.

Classes

Algorithmic Composition, Electronic Rituals, Oracles and Fortune-Telling, Expressive Interfaces: Introduction to Fashion Technology, Interactive Music, Open Source Cinema

ONE ZERO

Dongphil Yoo, You Jin Chung

Two laptops play a hypothetical ping-pong game.

https://

Description

How does a machine perceive itself or others? This project started from that simple but deep question. We hope it prompts people to reconsider perception, the objective and the subjective, and reality in a philosophical way.

Classes

The World, Pixel By Pixel

An Egg and a Banana but Abstract (Sound Object Series)

Katya Rozanova

These are objects that emit sounds when set in motion.

http://www.katyarozanova.com/blog-1/2017/10/23/sound-sculptures

Description

The sound playground is a group of objects that emit sound when set into motion, whose sonic behavior changes over time and whose sonic interactions cannot be predicted with certainty. The algorithms only allow for some control of the outcomes. The addition of unpredictable sonic interactions – giving some of the objects a mind of their own – aims to bring them to life just enough to stay in the intersection of object and entity. The goal is to invite play and discovery.

Classes

Project Development Studio

Mimosa

Ridwan Madon

Mimosa is wearable-technology jewellery that helps women deflect creepy stares at their bustline.

https://www.ridwanmadon.com/single-post/2018/03/23/Project-Development-Progress

Description

Inspired by the mechanics of the mimosa plant, the project reflects on the sole purpose of the wearable piece. Mimosa was made for women who are confident in their bodies but feel uncomfortable when they receive stares at their bust line. Mimosa is triggered and controlled by the user as and when she feels her space is being invaded. Using only her phone and Bluetooth to trigger a servo motor, the piece creates an extension of the user’s ability to shield herself from this unjustified behaviour.

Classes

Expressive Interfaces: Introduction to Fashion Technology, Project Development Studio

MaiSpace

Mai Arakida Izsak

MaiSpace invites you to wander through a virtual place that is accessible to a global community on the VRChat platform, and to explore future possibilities of fostering personal connections in a fully digital universe.

https://

Description

VR has so far largely been treated as an extension of film – a solitary experience offered up at exclusive festivals by acclaimed directors with a point of view. A new wave of platforms blends VR with gaming and social media, creating a virtual community that can teleport to connect with one another, and conjure up real-world tools, impossible spaces, and even bodies to inhabit.

MaiSpace is my personal home within VRChat, one of the leading social VR platforms. Users can choose their avatars and traverse the world through portals and rides, experiencing pieces of my identity in the process. The space includes a 360 video shot at the Dead Sea, a dance floor of swaying palm trees, and a giant bubble structure that encompasses a bird’s-eye view of the environments within it. Other users will be able to join remotely and roam this little corner of the virtual multiverse.

As spaces are shaped by the activity of those who occupy them, and in turn, spaces shape people’s experiences, MaiSpace aims to provide a positive atmosphere for its new virtual community. It is imperative that we participate and help set the tone for our public virtual spaces to enable genuine interaction and connection. This project explores the potential for VR to provide fluidity and freedom of identity, as well as agency and representation through the sensations of embodiment and telepresence.

Classes

Open Source Cinema, Synthetic Architectures, The Poetics of Space

Digital Fern

Lucas Chung

This project is a robotic limb, actuated with hydraulic cylinders, that uses a neural network to move toward points of light.

http://chung.work/blog/2018/04/digital-fern-branch-prototype/

Description

The Digital Fern is a robotic limb that uses syringes as hydraulic cylinders and mimics the behavior of plants by unfurling in the presence of light. Users will be invited to interact with it by controlling a light and seeing how the Digital Fern behaves as a reaction.

The Digital Fern is controlled by a simple trained neural network being run on a laptop nearby.
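
As an illustration of how such a network might map light readings to motion, here is a minimal sketch: a one-hidden-layer network trained with plain gradient descent on synthetic data, steering a single joint toward the brighter of two light sensors. The sensor setup, target mapping, and network size are all assumptions for demonstration, not details of the actual Digital Fern.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: two light-sensor readings in [0, 1]; the target
# joint angle leans toward the brighter sensor (-1 = full left,
# +1 = full right). This mapping is a stand-in, not the real one.
X = rng.uniform(0.0, 1.0, (200, 2))
y = (X[:, 1] - X[:, 0]).reshape(-1, 1)

# One hidden layer with tanh activation, trained by batch gradient descent.
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.1

for _ in range(2000):
    h = np.tanh(X @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - y
    dW2 = h.T @ err / len(X)        # backpropagate the error
    db2 = err.mean(0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(0)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

def joint_angle(left, right):
    """Predict a joint angle from two light readings."""
    h = np.tanh(np.array([left, right]) @ W1 + b1)
    return (h @ W2 + b2).item()
```

In the physical piece the output would drive the syringe actuators; here it is just a number in roughly [-1, 1].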

Classes

The Nature of Code, XYZ

BOOM!

Ji Young Chun, Namsoo Kim

A playful AR app that brings any object with a detectable 'face' to life by placing the user's speech balloon next to the face.

https://

Description

BOOM! is a playful AR app that detects a face of any kind (a real human face, a face in a poster or picture, a face drawing, etc.) and places a speech balloon next to it, filled in by the user's voice. It uses face-detection technology to find a face, and speech-to-text technology to recognize speech and convert it into text. When users open the app and find any object with a detectable face, they press a button to start. A speech balloon appears next to the detected face, and as the users speak, their speech is turned into text and placed inside the balloon. They can then save the video and share it to social media. Pictures, posters, and drawings can have a life too!

Classes

Mobile Lab

Programmable Air

Amitabh Shrivastava

A controllable air source that sucks as well as it blows.

https://github.com/tinkrmind/programmable-air

Description

The field of inflatable robotics is still in its infancy, so there is plenty of low-hanging fruit up for grabs. The more people involved in active experimentation, the faster the field will mature.

Working with inflatables involves a lot of prototyping as a part of the design process. To do any repeatable experiments with inflatables, a controllable air source is required.

I feel there isn't an affordable, easy-to-use programmable air source available to makers right now, and that is a barrier to entry. Programmable Air is my attempt to create a bottom-of-the-line air source that is cheaper and easier to use than anything currently available.

During the spring show, I will have a few fun configurations of Programmable Air ready for users to play with: a balloon, a vacuum gripper, an SMD pick-and-place, and a soft silicone robot. I'm looking for user feedback on what they'd want to use it for, and for collaborators to contribute to the project.

Classes

Soft Robots and Other Engineered Softness

byte

Arnav Wagh, Lauren Race, Lucas White

Social media is predominantly visual. Byte is a social media platform that is purely auditory, designed specifically for the low vision community.

https://lgr277.itp.io/

Description

Social media platforms are mainly visual and reactive, in that they provide access to people with low vision only after succeeding with the sighted user market. Byte is a social platform for all, built in consideration of users with low vision. By recording an audio byte, users can share their lives on a feed, along with posts from friends and family.

Classes

Looking Forward 2: Design for Accessibility

Picasso in 2018

Lu Wang

Breaking apart a Picasso masterpiece in the computer age.

https://www.acomposer.me/copy-of-2

Description

The piece uses projection mapping to project a painting by Picasso onto a canvas, and a Kinect to track visitors' body movements; those movements trigger distortions of the painting.

Classes

The World, Pixel By Pixel