We created an experiential project with dynamic visual effects and sounds from the deep universe. Everything in the universe comes from a singularity, a first cause. The player acts as the creator of the universe, giving it its first driving force and participating deeply in its construction.
Net-Natyam is a hybrid training system and performance platform that explores the relationships between music, machine learning, and movement through electronic sound composition, pose estimation techniques, and classical Indian dance choreography.
Bharatanatyam is a form of classical Indian dance that involves using complex footwork, hand gestures, and facial expressions to tell stories. The dance is traditionally accompanied by Carnatic music and an orchestra consisting of a mridangam drum, a flute, cymbals, and other instruments. Net-Natyam uses three ml5.js machine learning models (PoseNet, Handpose, and Facemesh) and a webcam to detect the movements of a Bharatanatyam dancer and trigger a corresponding sequence of electronically composed sounds.
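A hedged sketch of how pose data might be turned into sound triggers: a detected keypoint (e.g. a wrist from a pose model) is mapped to one of several sound zones across the canvas. The function name, zone names, and the 0.5 confidence threshold are illustrative assumptions, not Net-Natyam's actual code.

```javascript
// Illustrative trigger logic: map a detected keypoint to a sound zone.
// The zone list and confidence threshold are assumptions.
function soundZoneForKeypoint(keypoint, canvasWidth, zones) {
  // Ignore low-confidence detections from the pose model.
  if (keypoint.score < 0.5) return null;
  // Divide the canvas into equal vertical bands, one per sound.
  const bandWidth = canvasWidth / zones.length;
  const index = Math.min(zones.length - 1, Math.floor(keypoint.x / bandWidth));
  return zones[index];
}

// Example: a wrist at x = 500 on a 640px canvas lands in the last band.
const zones = ["mridangam", "flute", "cymbal", "drone"];
console.log(soundZoneForKeypoint({ x: 500, score: 0.9 }, 640, zones)); // "drone"
```

In a real sketch, the keypoint would come from an ml5.js model's pose callback and the returned zone name would select which electronically composed sound to play.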
The Mr. Scribbles Dancing Drawing Robot was created to help people feel more comfortable about their bodies, about their movements, and about being weird sometimes. Mr. Scribbles is a drawing robot controlled using dance poses.
OR CLICK PROJECT WEBSITE TO PLAY THE DRUMS YOURSELF!
I have collected, arranged, and hung 7 percussive and sonic objects in an array around the listener's ear, be it a human or electronic eardrum. To each object is attached a solenoid motor which will strike the object, and I will control this striking both live with buttons and by creating rhythmic MIDI clips in Ableton Live. I'll then explore the vocabulary of sounds possible with my room-sized instrument, incorporating it into musical performance: perhaps on its own, played and manipulated by multiple people, or with other sound sources, for instance a pitch-detecting harmonizer I created, or an acoustic instrument like the bass clarinet. If I have time and luck, I'll make it possible for spectators to trigger the sculpture over the web.
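One plausible way to route the MIDI clips to the solenoids (a sketch under assumptions, not the actual setup): assign one MIDI note per object and fire the matching solenoid on each note-on message. The note numbers and object names here are hypothetical.

```javascript
// Hypothetical note-to-solenoid map: one note per hung object.
const SOLENOID_FOR_NOTE = {
  60: 0, // C4  -> object 0
  61: 1, // C#4 -> object 1
  62: 2, // D4  -> object 2
  63: 3, 64: 4, 65: 5, 66: 6,
};

// A raw MIDI message is [status, note, velocity]; return which solenoid
// to fire, or null if the message shouldn't trigger anything.
function handleMidiMessage([status, note, velocity]) {
  const NOTE_ON = 0x90;
  // Only fire on note-on messages with nonzero velocity.
  if ((status & 0xf0) !== NOTE_ON || velocity === 0) return null;
  const solenoid = SOLENOID_FOR_NOTE[note];
  return solenoid === undefined ? null : solenoid;
}

console.log(handleMidiMessage([0x90, 62, 100])); // 2
```

The returned index would then be sent to whatever microcontroller drives the solenoids.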
This web instrument allows you to make music by planting and watering different kinds of “audio seeds” that grow into lush melodies and textures.
Watering the seeds causes them to grow both visually and sonically, and distinct areas in the garden cause the plants to behave in different ways.
Composing using this interface is more spatial than linear. Plants emanate sound that you navigate through using the mouse, so moving through the space influences the mix of sounds.
The implementation represents different types of sound using basic geometric forms and generates growth patterns algorithmically using L-Systems — a way of modeling generational systems. These patterns are at times also used to produce melodies.
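A minimal L-system rewriter in the spirit described above. The axiom and rules here are Lindenmayer's classic algae example (A → AB, B → A), not the garden's actual rules.

```javascript
// Rewrite every symbol of the current string in parallel, once per
// generation; symbols with no rule are kept unchanged.
function lSystem(axiom, rules, generations) {
  let current = axiom;
  for (let i = 0; i < generations; i++) {
    current = [...current].map((c) => rules[c] ?? c).join("");
  }
  return current;
}

// Lindenmayer's original algae system: A -> AB, B -> A.
const rules = { A: "AB", B: "A" };
console.log(lSystem("A", rules, 4)); // "ABAABABA"
```

To produce a melody from such a pattern, each symbol can simply be mapped to a note, e.g. `[...result].map((c) => (c === "A" ? "C4" : "E4"))`.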
The musical garden invites exploration, and can be found at https://musical-garden.netlify.app/
A letter block board with text-to-speech engine designed for tangible learning.
Smart Tiles is a block board that recognizes wooden letter tiles to generate speech and play games, and it operates entirely offline. The project was designed with my 4-year-old niece and nephew in mind. They both enjoy playing with educational smartphone apps, but I had a hard time finding any truly compelling tangible toys for learning letters and words. This is my attempt to fill that gap. The letter tiles are designed to be familiar, like the wooden alphabet blocks I had growing up, and are CNC-milled from eco-friendly Green-T Birch plywood. I have pushed to make the technology as “invisible” as possible to bring some magic to the user experience. Both English Braille and print characters are engraved on each block to be inclusive of tangible learners.
This project comes from two questions: how can people exist, and how can existence be proven? So I combine the idea of long exposure in photography with a p5 sketch and set the installation in a dark place. When the audience triggers the sketch, it starts to capture the audience's movement and draw a light trace on the dark canvas; when the audience leaves, the trace disappears.
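One way the fading trace could work (an assumption about the sketch's internals, not its actual code): keep a list of captured points and let each point's brightness decay every frame until it vanishes.

```javascript
// Each captured movement point fades a little every frame; fully faded
// points are dropped, so the trace disappears after the audience leaves.
function fadeTrail(points, decay = 5) {
  return points
    .map((p) => ({ ...p, brightness: p.brightness - decay }))
    .filter((p) => p.brightness > 0);
}

let trail = [{ x: 10, y: 20, brightness: 12 }];
trail = fadeTrail(trail); // brightness 7
trail = fadeTrail(trail); // brightness 2
trail = fadeTrail(trail); // point removed
console.log(trail.length); // 0
```

In the p5 draw loop, the sketch would append the newest detected position each frame and render each surviving point at its current brightness.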
A microhabitat is a virtual open-door to a tiny habitat that can be seen only through a microscope.
It welcomes audiences into the tiny room of a person living in NYC. The tour is not as big and fancy as those in many of YouTube's Open Door videos. Comparing the size of this room to the ones in those videos, it may feel like looking at microscope slides, where details are only visible through a microscope.
The work highlights the housing problem that people in their 20s and 30s are facing. Finding a home has become more difficult: the closer you get to a city center such as NYC or Seoul, the more expensive rent gets, the tougher it is to find a place, and the smaller the rooms become. However, no matter how small the room may be, there lives a person with their own unique story and a big dream.
The audience peeks into the small room of a person through a microscope that has two controllers. Using the knob on the right side, the stage controller, the audience can look around the room by rotating the camera situated at the center of the microhabitat. Using the knob on the left side, the coarse adjustment, the user can look into the details of specific objects located in the room. Each object contains a personal story of the person living there, much like the product details YouTube shows for objects in an Open Door video.
Have you ever dreamed of becoming a magician and having the power to make magic? Both Alina and I love all things magical. Taking inspiration from Card Captor Sakura, we planned to build a magic circle generator. The basic interaction is that users input their birthday and the generator creates a unique magic circle for them. Every magic circle is formed from four parts: a rotating 3D magic stone, a unique pattern, a special song, and a sentence of the day. We hope you enjoy this sparkling, glowing visual effect.
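One way a birthday could deterministically seed a circle (an illustrative sketch, not the generator's actual algorithm): hash the date into a number, then derive the pattern parameters from it, so the same birthday always yields the same circle. Every name and constant here is an assumption.

```javascript
// Hash a birthday into a seed, then derive magic-circle parameters.
function birthdayToCircle(year, month, day) {
  // Simple deterministic hash of the date.
  const seed = (year * 372 + month * 31 + day) % 360;
  return {
    hue: seed,              // base color of the glow, 0-359
    points: 5 + (seed % 8), // number of points in the star pattern
    songIndex: seed % 12,   // which of 12 songs to play
  };
}

console.log(birthdayToCircle(2000, 4, 1));
```

Because the hash is deterministic, re-entering the same birthday reproduces the same stone, pattern, and song.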
As humans, our existence is defined by different emotional states. When we feel an emotional impulse, it's like a ripple is dropped inside of us. This ripple flows outward and is reflected in how we perceive the world around us, as well as how we act within it.
For this project, we wanted to visualize emotional states using colors, shapes, and sounds in a poetic way.
The first thing we did was divide all emotion words into six categories: happy, content, sad, angry, shocked, and afraid. We then used p5.speech to recognize words instead of training words ourselves in Teachable Machine because it is far more accurate; for now, the project can recognize over 110 emotion words.
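The word-to-category step could look like the lookup below (abridged word lists; the real project maps over 110 words, and p5.speech would supply the recognized word).

```javascript
// Abridged emotion lexicon: each category lists a few of its words.
const EMOTION_CATEGORIES = {
  happy:   ["happy", "joyful", "delighted", "excited"],
  content: ["content", "calm", "peaceful"],
  sad:     ["sad", "gloomy", "heartbroken"],
  angry:   ["angry", "furious", "annoyed"],
  shocked: ["shocked", "stunned", "astonished"],
  afraid:  ["afraid", "scared", "terrified"],
};

// Return the category for a recognized word, or null if unknown.
function classifyWord(word) {
  const w = word.toLowerCase();
  for (const [category, words] of Object.entries(EMOTION_CATEGORIES)) {
    if (words.includes(w)) return category;
  }
  return null;
}

console.log(classifyWord("Furious")); // "angry"
```

The returned category would then select which colors, shapes, and audio filters to apply.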
We create a flowing 3D object and use the sin() function to generate a beautiful ripple. More importantly, we generate multiple filters for one song in response to different emotions, and the amplitude of the song affects the frequency of the ripple. For the visual part, we believe matching colors and custom shapes to different emotion words, based on color and shape psychology, gives people an immersive experience.
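A minimal version of the ripple idea: displace each point with sin(), and let the song's current amplitude scale the ripple frequency. The constants here are illustrative, not the project's tuned values.

```javascript
// Height of the ripple at a given distance from the center: louder
// song amplitude (0-1) means a higher spatial frequency, i.e. tighter
// ripples; time makes the ripple travel outward.
function rippleHeight(distance, time, songAmplitude) {
  const baseFrequency = 0.05; // spatial frequency at silence
  const frequency = baseFrequency * (1 + 4 * songAmplitude);
  return 10 * Math.sin(distance * frequency - time);
}

console.log(rippleHeight(20, 1, 0.5));
```

In p5, `songAmplitude` would typically come from a p5.sound amplitude analyzer each frame, and the function would displace the vertices of the flowing 3D object.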