Thanks to augmented reality, new layers of perception can be built on top of the world. AR technologies have now been integrated into mobile devices, enabling users to see, hear, and move around elements that don't actually exist in the physical world but coexist with us in temporal instances of virtual worlds.
“sonAR” explores the construction of an imaginary sound landscape built from basic elements virtually represented in 3D space. Through the field of view of the mobile screen, the user can discover and interact with these sound elements in space.
Inspired by 'Water Pachinko' by Hara Kenya, 'White Mountain, Black Water' is an attempt at bringing an ordinary everyday substance to life.
In this project, we study the characteristics of water movement and its potential to create sound. As water drops travel down a white canvas with small pins attached to it as obstacles, each drop can change direction, split into several drops, or combine with other drops every time it hits a pin. This makes the drops move around vividly, bringing them to life.
A microphone installed next to the bowl below the canvas collects the sound of the dripping water as one of the sound sources. A camera tracks the pace and position of each water drop, converting them into numbers that are translated into MIDI notes. Each time water is dropped onto the canvas it makes a different sound, emphasizing the unique movement of each drop.
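The position-to-MIDI translation could be sketched roughly as below. This is a hypothetical illustration, not the project's actual mapping: the function name, the pentatonic scale choice, and the normalized coordinates are all assumptions.

```javascript
// Hypothetical sketch: map a tracked drop's vertical position to a MIDI note.
// y is normalized (0 = top of canvas, 1 = bottom); a pentatonic scale echoes
// the far-east instrumental style mentioned above.
const PENTATONIC = [0, 2, 4, 7, 9]; // scale degrees in semitones

function positionToMidiNote(y, baseNote = 60) {
  // Clamp to the canvas, then pick a scale step: higher drops -> higher notes
  const clamped = Math.min(Math.max(y, 0), 1);
  const step = Math.floor((1 - clamped) * (PENTATONIC.length * 2 - 1));
  const octave = Math.floor(step / PENTATONIC.length);
  const degree = PENTATONIC[step % PENTATONIC.length];
  return baseNote + octave * 12 + degree;
}

// A drop reaching the bottom of the canvas plays the base note:
console.log(positionToMidiNote(1.0)); // 60
```

In a real setup the note number would then be sent out as a MIDI note-on message to the synth.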
Users will be able to use water as a musical instrument, performing with it while enjoying the organic sounds created by water, mixed with electronic melodies in the style of far-east instrumental music.
Palette is a digital installation that provides a space for people to share and appreciate different languages. Users speak into the microphone while watching the projection and hearing the music generated by their own and others' words.
CONCEPT: Superfície is a live performance that discusses identity and memory through the materiality of a simultaneously physical and digital living human body. Audiovisual textures that compose the performer's existence in space and time are triggered by her movements, leading the audience into an experience of scanning through her skin. This work dialogues with the idea of a body somewhere between the stage and the digital world, co-existing in these two spaces.
TECHNICAL DESCRIPTION: Superfície uses a Kinect to gather data from the performer's movements, transforming them into sound, video, and light. The sound is a translation of movement, governed by a set of rules that trigger different instruments and effects (e.g., the distance between the left foot and the right hand dictates the reverb of a specific synthesizer). The main video output is an on-skin projection showing textures from the performer's body and memories, as well as manipulations of the image gathered by the IR camera. All elements are triggered and manipulated in the same way the sound is.
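A rule like the foot-to-hand reverb example could be sketched as follows. This is a minimal illustration under assumed names and ranges; the joint coordinates stand in for whatever the Kinect skeleton data provides.

```javascript
// Hypothetical sketch of one movement-to-sound rule: the distance between
// the left foot and the right hand (joint positions in meters) is
// normalized into a 0..1 reverb wet/dry amount for a synthesizer.
function distance3d(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Map an assumed distance range (0.2 m .. 2.0 m) onto 0..1.
function reverbAmount(leftFoot, rightHand, min = 0.2, max = 2.0) {
  const d = distance3d(leftFoot, rightHand);
  const t = (d - min) / (max - min);
  return Math.min(Math.max(t, 0), 1); // clamp to the effect's range
}

// A hand stretched 2 m from the foot drives the reverb fully wet:
console.log(reverbAmount({ x: 0, y: 0, z: 0 }, { x: 0, y: 2, z: 0 })); // 1
```

The resulting amount would then be sent as a control value to the reverb effect, and analogous rules could drive the video and light parameters.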
Note: I am signing up to perform live. I am also available to help with production of the event.
If the live performance ends up not happening, I would like to present documentation of this process in some form, since ITP hasn't given me the chance to perform it.
The thumb-piano app is a browser-based thumb piano that can be played with facial gestures, the mouse, or a touchscreen device. It is targeted at users who have weak hand muscles but are able to use a smart device to engage in music.
Inspired by the album “Hailu Mergia & His Classical Instrument” by Ethiopian artist Hailu Mergia, I built a musical wand that mimics Mergia's distinctive mode of playing. The wand controls musical pitch, aided by a visual component to increase understanding and engagement.
Diting is a mythical creature who is good at listening; Qiuniu and Pulao are two sons of the Chinese dragon who are good at music. The physical part of the project is a panel, which is the centerpiece of my final performance for three classes: Designing Interfaces for Live Performance, The Code of Music, and Networked Media. The whole project is a live performance in which I control the physical panel while the audience joins the show by visiting my website; the server collects their data and triggers sound effects. The link above contains a sketch of the project and of how the panel will look. My current plan is to build a Weiqi chessboard and trigger beat, melody, and harmony effects by placing stones on it. The sensors I plan to use include buttons, distance sensors, an accelerometer, and a potentiometer. The sound will be made in Ableton Live through Max, and every sound effect will produce a visual effect built with p5.js. The server will use Node.js and jQuery to collect and manipulate data. The physical panel will be about the size of a laptop.
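One way the chessboard-to-sound mapping could work is sketched below. Everything here is an assumption for illustration: the band layout, layer names, and step count are not the project's confirmed design.

```javascript
// Hypothetical sketch: turn a placed Weiqi stone into a trigger message for
// the Max/Ableton sound engine. The 19x19 board is split into three
// horizontal bands, one per musical layer (assumed mapping).
const LAYERS = ['beats', 'melody', 'harmony'];

function stoneToTrigger(row, col, boardSize = 19) {
  // Which band of the board was the stone placed in?
  const band = Math.min(
    Math.floor((row / boardSize) * LAYERS.length),
    LAYERS.length - 1
  );
  return {
    layer: LAYERS[band],
    // Column position chooses a step within a 16-step pattern
    step: col % 16,
  };
}

console.log(stoneToTrigger(0, 3));  // { layer: 'beats', step: 3 }
console.log(stoneToTrigger(18, 4)); // { layer: 'harmony', step: 4 }
```

In the full piece, a message like this would be sent from the Node.js server to Max, which would fire the corresponding sound effect and its p5.js visual.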
Designing Interfaces for Live Performance (UG), Networked Media (UG), The Code of Music (UG)