[VIOLASTREAM 2.0 LIVE]
Wednesday, December 16, 8-10pm EST
Thursday, December 17, 8-10pm EST
To participate, go to: https://violola.herokuapp.com/
To watch stream, go to: twitch.tv/violola/
>>>>>>>>>>>>>>>>>>>
VIOLASTREAM 2.0 is an online interactive performance where I hand control of my actions over to my audiences, who will collectively vote for my next tasks, movements, emotions, and when to do them.
Audiences have access to a webpage with an embedded livestream, action choices, and a comment section. This page is connected to my stream, which displays those inputs in real time. While a computer voice reads all of your commands aloud to me, only the highest-voted task will be acted out. To stop the current action and trigger the next one, you need to vote for “stop task” until it surpasses the vote count that triggered the current task.
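For the curious, here is a minimal sketch of how a tally like this could be wired up with socket.io on a Node server. It is an illustration only, not the actual VIOLASTREAM code: the event names, task handling, and stop rule are assumptions.

```js
// Hedged sketch: collect votes, act out the highest-voted task,
// and switch tasks once "stop task" overtakes the running task's count.
const http = require('http').createServer();
const io = require('socket.io')(http);

const votes = {};            // task name -> current vote count
let currentTask = null;      // task being performed
let currentTaskVotes = 0;    // votes that triggered the current task

io.on('connection', (socket) => {
  socket.on('vote', (task) => {
    votes[task] = (votes[task] || 0) + 1;

    // "stop task" only wins once it surpasses the count that started the task.
    if (task === 'stop task' && votes[task] > currentTaskVotes) {
      startNextTask();
    }
    io.emit('tally', votes);   // mirror the live tally to every viewer
  });
});

function startNextTask() {
  // Pick the highest-voted task other than "stop task".
  const [next, count] = Object.entries(votes)
    .filter(([t]) => t !== 'stop task')
    .sort((a, b) => b[1] - a[1])[0] || [null, 0];
  currentTask = next;
  currentTaskVotes = count;
  votes['stop task'] = 0;
  io.emit('task', currentTask);  // the stream page reads this and speaks it aloud
}

http.listen(3000);
```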
With performance artists Tehching Hsieh and Marina Abramovic in mind, I'm using web technologies and streaming platforms to explore my own body and identity performance in relation to others. While the audience acts as commander and spectator, Viola's body performs the role of object and machine, creating a cybernetic relationship with the webcam and livestream as its medium.
Bharatanatyam is a form of classical Indian dance that involves using complex footwork, hand gestures, and facial expressions to tell stories. The dance is traditionally accompanied by Carnatic music and an orchestra consisting of a mridangam drum, a flute, cymbals, and other instruments. Net-Natyam uses three ml5.js machine learning models (PoseNet, Handpose, and Facemesh) and a webcam to detect the movements of a Bharatanatyam dancer and trigger a corresponding sequence of electronically composed sounds.
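As a rough illustration of the detection-to-sound pipeline (not the actual Net-Natyam code), a minimal p5/ml5 sketch might look like the following. The sample file and the wrist-above-nose trigger are placeholder assumptions; Handpose and Facemesh follow the same load-and-listen pattern as PoseNet here.

```js
// Hedged sketch: ml5.js PoseNet watches the webcam and a pose condition
// triggers a sound with p5.sound. Thresholds and files are placeholders.
let video, poseNet, poses = [];
let drumSample, played = false;

function preload() {
  drumSample = loadSound('mridangam.mp3'); // placeholder sample
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();

  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', (results) => { poses = results; });
  // ml5.handpose(video) and ml5.facemesh(video) can be set up the same way.
}

function draw() {
  image(video, 0, 0);
  if (poses.length > 0) {
    const pose = poses[0].pose;
    // Example trigger: right wrist raised above the nose plays a drum hit once.
    if (pose.rightWrist.y < pose.nose.y) {
      if (!played) { drumSample.play(); played = true; }
    } else {
      played = false;
    }
  }
}
```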
zoom link:
https://nyu.zoom.us/j/91942675074?pwd=Sm13YzQyVm9abEFUOXlFdUsrSk8rUT09
GO TO
https://midi-sender.herokuapp.com/
OR CLICK PROJECT WEBSITE TO PLAY THE DRUMS YOURSELF!
I have collected, arranged, and hung 7 percussive and sonic objects in an array around the listener's ear, be it a human or electronic eardrum. To each object is attached a solenoid motor that strikes it, and I control this striking both live with buttons and by creating rhythmic MIDI clips in Ableton Live. I'll then explore the vocabulary of sounds possible with my room-sized instrument, incorporating it into musical performance: on its own, played and manipulated by multiple people, and alongside other sound sources, for instance a pitch-detecting harmonizer I created or an acoustic instrument like the bass clarinet. If I have time and luck, I'll make it possible for spectators to trigger the sculpture over the web.
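One possible way the web-trigger idea could work, sketched with the browser's Web MIDI API: a page button sends a MIDI note that a DAW (such as Ableton Live) or a microcontroller bridge maps to a solenoid strike. The note numbers, port routing, and button markup are assumptions, not the piece's actual setup.

```js
// Hedged sketch: one button per hanging object, each click sends a MIDI note.
let midiOut;

navigator.requestMIDIAccess().then((access) => {
  // Use the first available MIDI output (e.g. a virtual port routed into Ableton).
  midiOut = Array.from(access.outputs.values())[0];
});

function strikeObject(index) {
  if (!midiOut) return;
  const note = 36 + index;               // objects mapped from MIDI note 36 upward (assumed)
  midiOut.send([0x90, note, 127]);       // note on, full velocity
  setTimeout(() => midiOut.send([0x80, note, 0]), 50); // note off after 50 ms
}

// Example: seven buttons on the page, one per hanging object.
document.querySelectorAll('button.object').forEach((btn, i) => {
  btn.addEventListener('click', () => strikeObject(i));
});
```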
This project grew out of two questions: how can people exist, and how can that existence be proven? I combine the idea of long exposure in photography with a p5 sketch and set the installation in a dark space. When an audience member triggers the sketch, it starts to capture their movement and draws a light trace on the dark canvas; when they leave, the trace disappears.
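A minimal p5 sketch of the long-exposure idea might look like this, assuming simple frame differencing and a fade-out delay; the thresholds and timing are placeholders, not the installation's exact values.

```js
// Hedged sketch: pixels that change between webcam frames leave a bright trace,
// and the trace fades back to black once motion stops.
let video, prevFrame;
let lastMotionAt = 0;

function setup() {
  createCanvas(640, 480);
  pixelDensity(1);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  background(0);
}

function draw() {
  video.loadPixels();
  if (prevFrame && prevFrame.length === video.pixels.length) {
    let motion = false;
    for (let i = 0; i < video.pixels.length; i += 4) {
      const diff = Math.abs(video.pixels[i] - prevFrame[i]); // red-channel difference
      if (diff > 60) {                                       // assumed motion threshold
        const px = (i / 4) % width;
        const py = Math.floor(i / 4 / width);
        stroke(255, 40);                                     // faint "light" mark
        point(px, py);
        motion = true;
      }
    }
    if (motion) lastMotionAt = millis();
  }
  prevFrame = video.pixels.slice();

  // No motion for a few seconds: let the trace dissolve back into darkness.
  if (millis() - lastMotionAt > 3000) {
    background(0, 10);
  }
}
```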
In the spirit of COVID-19 remote learning, we wanted to create synchronous entertainment done remotely – the first ITP Sensor Battle. We are a group of 5 members spread across time zones 13 hours apart, and we created a live sensor battle connected via a UDP server. It is a match among 3 players to determine the best and strongest sensor: three players in the USA battle with three different sensors – joystick, gesture, and muscle sensors – connected to the battle (UDP) server to control their respective RC cars on the battleground in Beijing.
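A minimal sketch of the relay, assuming a Node dgram server and a simple "playerId:command" packet format (the addresses, ports, and packet format are placeholders, not the project's actual protocol):

```js
// Hedged sketch: sensor readings sent from the US side are forwarded to the
// matching RC car in Beijing.
const dgram = require('dgram');
const server = dgram.createSocket('udp4');

// Assumed mapping from player id to that player's car (address, port).
const cars = {
  joystick: { address: '10.0.0.11', port: 9001 },
  gesture:  { address: '10.0.0.12', port: 9001 },
  muscle:   { address: '10.0.0.13', port: 9001 },
};

server.on('message', (msg, rinfo) => {
  const [playerId, command] = msg.toString().split(':'); // e.g. "gesture:FORWARD"
  const car = cars[playerId];
  if (!car) return;
  server.send(command, car.port, car.address);            // relay to the car in Beijing
  console.log(`${rinfo.address} -> ${playerId}: ${command}`);
});

server.bind(8000); // battle server listens for sensor packets on port 8000
```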
The team ideated 4 different gameplay modes, and after putting them to a vote with the ITP class of 2022, we decided on the Paintball mode. Each RC car has a canvas and a paint gun mounted, and players eliminate each other by shooting paint onto the other players' canvases.
Technicalities aside, we imagined ourselves as extraterrestrial beings representing different sensor communities. The Sensor Battle takes place at the Star-dium stadium on the remote frontier planet of Batuu. Hope you enjoy! *Vulcan Salute*
How to use:
1. Put your left hand on the hand shape on the sphere. Starting from your thumb and moving to your little finger, press down on the sensor at each fingertip, one at a time.
2. Pressing down a sensor activates one set of LEDs inside the sphere and also triggers an animation on the monitor.
3. Each sensor controls one set of LEDs and one clip of the animation. When you have pressed down all five of them, you complete the “Kame Hame Ha.” (A rough sketch of this mapping follows below.)
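Here is a rough p5 sketch of the five-sensor mapping, with keyboard keys 1-5 standing in for the fingertip sensors; the physical piece reads real sensors and drives real LEDs instead, so this is an illustration only.

```js
// Hedged sketch: each pressed "sensor" lights one group and the fifth completes
// the final clip.
const pressed = [false, false, false, false, false]; // thumb .. little finger

function setup() {
  createCanvas(500, 200);
}

function draw() {
  background(0);
  // One rectangle per LED group / animation stage.
  for (let i = 0; i < 5; i++) {
    fill(pressed[i] ? color(80, 170, 255) : color(40));
    rect(20 + i * 95, 60, 80, 80);
  }
  if (pressed.every(Boolean)) {
    fill(255);
    textAlign(CENTER);
    text('KAME HAME HA!', width / 2, 30); // final clip plays when all five are down
  }
}

function keyPressed() {
  const i = parseInt(key, 10) - 1;       // keys 1-5 map to the five sensors
  if (i >= 0 && i < 5) pressed[i] = true;
}

function keyReleased() {
  const i = parseInt(key, 10) - 1;
  if (i >= 0 && i < 5) pressed[i] = false;
}
```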
In Sound and Color Bender, we explore the relationship between the movements of our bodies, color, form, and sound. This project is the beginning of what we would like to become a tool for performers to create music and visual art simultaneously. What is the connection between the gesture of an arm moving, the frequency of a melody, and a visual pattern on a screen? Although we are based on opposite sides of the country, Natalie and I worked together to create a glove that responds to the movement of the user's hand. We used the microcontroller's built-in accelerometer and gyroscope to measure the tilt and acceleration of the hand and sent those values to audio and visual software. The project's current state provides a meditative space, with stimulating visuals and an airy, atmospheric audio experience.
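A minimal sketch of the tilt-to-sound-and-color mapping, with the mouse standing in for the glove's accelerometer/gyroscope readings and assumed frequency and hue ranges (not the project's actual values):

```js
// Hedged sketch: tilt values drive an oscillator's frequency and the on-screen color.
let osc;

function setup() {
  createCanvas(640, 480);
  colorMode(HSB, 360, 100, 100);
  osc = new p5.Oscillator('sine');
  osc.start();
  osc.amp(0.3);
}

function draw() {
  // Stand-ins for tilt: x tilt from mouseX, y tilt from mouseY.
  const tiltX = map(mouseX, 0, width, -90, 90);
  const tiltY = map(mouseY, 0, height, -90, 90);

  const freq = map(tiltX, -90, 90, 110, 880);   // tilt -> pitch (A2 to A5, assumed range)
  const hue  = map(tiltY, -90, 90, 180, 300);   // tilt -> color

  osc.freq(freq, 0.1);                          // glide to the new frequency
  background(hue, 60, 40);
  ellipse(width / 2, height / 2, map(freq, 110, 880, 50, 300));
}

function mousePressed() {
  userStartAudio(); // browsers require a user gesture before audio can start
}
```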
A DJ stands in the middle of a set of LED pillars. A DJ control panel is attached to the headphones; panning the panel changes the electronic effects applied to the music, as well as the shifting gradient of the LED pillars. We want the main material of the pillars to be half-transparent, with several LEDs at the bottom of each pillar. The device is expected to be played in a fairly dark place: as the lights glow, they shine through the half-transparent material and gradually change over time.
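As a rough illustration (not the actual build), a p5 sketch of the pan-to-gradient mapping might look like this, with the mouse standing in for the physical pan control and an assumed pillar count and color range:

```js
// Hedged sketch: a pan value in [-1, 1] shifts a color gradient across a row of pillars.
const PILLARS = 6;

function setup() {
  createCanvas(600, 300);
  colorMode(HSB, 360, 100, 100);
  noStroke();
}

function draw() {
  background(0);
  const pan = map(mouseX, 0, width, -1, 1);   // stand-in for the physical pan control
  for (let i = 0; i < PILLARS; i++) {
    // Each pillar's hue drifts with the pan value, offset by its position.
    const hue = (200 + pan * 60 + i * 15 + 360) % 360;
    fill(hue, 70, 80);
    rect(30 + i * 95, 40, 60, height - 80);   // one glowing pillar
  }
}
```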
Animal Talk is a web-based application created for the Connections Lab class in the Fall 2020 term by Christina Lan & Dorian Janezic. Our collaboration started with a midterm project, where we had to use socket.io to create an interactive website for multiple users.
We are both very interested in sound, so we decided to explore the p5 sound library and created a pitch-matching game where users try to click as close as possible to the frequency emitted by an oscillator. We were happy with the final output and the feedback, so we decided to continue our collaboration for the final project. Our ideas shifted toward a performative interactive piece in which multiple users communicate using animal sounds, creating their own encrypted language by mapping letters, words, or phrases to specific sounds. Right now the project is at the stage where users can select the animal sound that will carry their message to other users; when you receive a message, you hear it in a converted, transformed, or encrypted language. Each sound is paired with an animation that plays while listening to the messages. Users can also preview the animal sounds they have selected by clicking on the canvas.
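A minimal sketch of the letter-to-animal-sound playback over socket.io might look like the following; the sound files, the "message" event name, and the pacing are placeholders, not the project's actual values.

```js
// Hedged sketch: each character of an incoming message maps to a short animal
// clip and the clips are played back in sequence.
const socket = io();          // connect to the same server that serves the page
let sounds = {};

function preload() {
  // Placeholder clips; the real project lets each user pick their animal.
  sounds = {
    a: loadSound('dog.mp3'),
    b: loadSound('cat.mp3'),
    c: loadSound('bird.mp3'),
    // ...one clip per letter in the full mapping
  };
}

function setup() {
  createCanvas(400, 200);
  socket.on('message', (text) => playEncoded(text));
}

function playEncoded(text) {
  // Schedule one clip per character, spaced 400 ms apart (assumed pacing).
  [...text.toLowerCase()].forEach((ch, i) => {
    const clip = sounds[ch];
    if (clip) setTimeout(() => clip.play(), i * 400);
  });
}

function keyPressed() {
  if (keyCode === ENTER) socket.emit('message', 'cab'); // demo: send a short message
}
```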
We want to further develop the project and add new features that would create new interactions between users. One of the next steps is to improve the UX and add an option for users to click on a text message, listen to it, and try to decode it, or at least guess which animal the sender chose.
Special thanks to Christina for the collaboration, to Craig and Mathura for making Connections Lab an unforgettable experience, and to all the residents and the IMA Low Res team.
ZOOM PASSWORD: ITP
Every 10 minutes the performance resets. Using the audio and motion-capture data of the renditions before, the live actor responds to the ghosts of eir past experience, building on top of itself until it becomes a cacophonous crowd. The actor cannot escape eir past as it continues to present itself; ey can only respond... and respond to the response. If ey cannot change what has happened and is happening, can ey still find resolve? Does the moment of resolve then become another reflection? A study of presence.