We created an experiential project with dynamic visual effects and sounds from the deep universe. Everything in the universe comes from the singularity, the first cause. The player acts as the creator of the universe, giving it its first driving force and deeply participating in its construction.
This web instrument allows you to make music by planting and watering different kinds of “audio seeds” that grow into lush melodies and textures.
Watering the seeds causes them to grow both visually and sonically, and distinct areas in the garden cause the plants to behave in different ways.
Composing with this interface is more spatial than linear. Plants emanate sound that you navigate through with the mouse, so moving through the space changes the mix of sounds.
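A distance-based mix like this can be sketched roughly as follows. This is an assumed mechanism, not the project's actual code: each plant's gain falls off linearly with its distance from the listener position (the mouse), and the `radius` parameter is hypothetical.

```javascript
// Gain for one sound source given the listener (mouse) position.
// Volume is 1 at the source and falls off with distance, reaching
// 0 beyond "radius". A linear falloff keeps the behavior predictable.
function spatialGain(listenerX, listenerY, srcX, srcY, radius = 300) {
  const d = Math.hypot(srcX - listenerX, srcY - listenerY);
  return Math.max(0, 1 - d / radius);
}

// Moving the mouse re-weights the mix across all plants:
const plants = [
  { x: 100, y: 100 },
  { x: 400, y: 250 },
];
const gains = plants.map((p) => spatialGain(250, 250, p.x, p.y));
```

Each frame, the gains would be applied to the plants' audio players, so walking the cursor across the garden crossfades between them.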
The implementation represents different types of sound with basic geometric forms and generates growth patterns algorithmically using L-systems, a grammar-based way of modeling plant-like growth through iterative rewriting. These patterns are at times also used to produce melodies.
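As a rough illustration of the technique (not the project's code), an L-system expands a start string by repeatedly applying rewrite rules, and the resulting symbol string can drive both branching graphics and note sequences. A minimal sketch in JavaScript, with a hypothetical rule set:

```javascript
// Minimal L-system: rewrite every symbol in the current string
// according to a rule table, n times. Symbols without a rule
// (here "+", "-", "[", "]") are copied through unchanged.
function lSystem(axiom, rules, iterations) {
  let s = axiom;
  for (let i = 0; i < iterations; i++) {
    s = [...s].map((ch) => rules[ch] ?? ch).join("");
  }
  return s;
}

// Hypothetical rule for a branching plant: "F" = grow a segment,
// "[" / "]" = push/pop a branch, "+" / "-" = turn.
const pattern = lSystem("F", { F: "F[+F]F[-F]" }, 2);

// The same string could be read as music, e.g. "F" -> play a note,
// "+" -> step up the scale, "-" -> step down.
console.log(pattern);
```

The appeal for a garden instrument is that one rule table yields both the branching drawing and a self-similar melody.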
The musical garden invites exploration, and can be found at https://musical-garden.netlify.app/
An immersive DJ experience with dancing light art
Wyatt Zhu, Shinnosuke Komiya, Weiwei Zhou
A DJ stands in the middle of a ring of LED pillars. Control panels are attached to the DJ's headphones; panning the panels changes the electronic effects applied to the music as well as the shifting color gradients of the LED pillars. The pillars' main material is semi-transparent, with several LEDs at the base of each pillar. The piece is meant to be performed in a fairly dark space: as the lights glow, they shine through the semi-transparent material and gradually change over time.
As humans, our existence is defined by different emotional states. When we feel an emotional impulse, it's like a ripple is dropped inside of us. This ripple flows outward and is reflected in how we perceive the world around us, as well as how we act within it.
For this project, we wanted to visualize emotional states using colors, shapes, and sounds in a poetic way.
The first thing we did was divide all emotion words into six categories: happy, content, sad, angry, shocked, and afraid. We then used p5.speech to recognize words, rather than training our own model in Teachable Machine, because it is far more accurate; the project currently recognizes over 110 emotion words.
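The classification step can be thought of as a lookup from a recognized word to one of the six categories. A small sketch of that idea in plain JavaScript; the word lists here are illustrative stand-ins, not the project's actual 110-word vocabulary:

```javascript
// Illustrative mapping from emotion words to the six categories;
// the real project covers a much larger vocabulary.
const EMOTION_WORDS = {
  happy:   ["happy", "joyful", "delighted", "cheerful"],
  content: ["content", "calm", "peaceful", "relaxed"],
  sad:     ["sad", "gloomy", "miserable", "heartbroken"],
  angry:   ["angry", "furious", "irritated", "enraged"],
  shocked: ["shocked", "stunned", "astonished"],
  afraid:  ["afraid", "scared", "anxious", "terrified"],
};

// Invert the table once so each spoken word maps to its category.
const wordToCategory = {};
for (const [category, words] of Object.entries(EMOTION_WORDS)) {
  for (const w of words) wordToCategory[w] = category;
}

// Classify a recognized word; returns null for unknown words.
function classify(word) {
  return wordToCategory[word.toLowerCase()] ?? null;
}

console.log(classify("Furious")); // -> "angry"
```

In a p5.speech setup, `classify` would be called inside the recognizer's result callback, and the returned category would select the colors, shapes, and audio filter.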
We create a flowing 3D object and use the sin() function to generate a ripple. More importantly, we generate multiple audio filters for one song, each responding to a different emotion, and the song's amplitude affects the frequency of the ripple. For the visual part, we match colors and custom shapes to different emotion words based on color and shape psychology, which we believe gives people an immersive experience.
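One common way to build such a ripple (a sketch of the general idea, not the project's code) is to displace each vertex by a sine of its distance from the center, with the song's amplitude scaling the spatial frequency; the parameter names and defaults below are assumptions:

```javascript
// Ripple displacement at distance d from the center at time t.
// amplitude (0..1, e.g. from p5.sound's p5.Amplitude.getLevel())
// raises the spatial frequency, so louder passages make tighter ripples.
function rippleHeight(d, t, amplitude, baseFreq = 0.05, height = 20) {
  const freq = baseFreq * (1 + 4 * amplitude);
  return height * Math.sin(d * freq - t);
}

// Example: offset one vertex 100px from the center at t = 2.0
// while the song's current level is 0.3.
const h = rippleHeight(100, 2.0, 0.3);
```

In a p5.js draw() loop, each vertex of the 3D object would be offset by something like `rippleHeight(dist(x, y, cx, cy), frameCount * 0.1, level)`.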
☕️ Lo-Fi Player is a virtual room in your browser that lets you play with the chilling VIBE!
“Lo-Fi Player” is a virtual room in your browser that lets you play with the BEAT! Try tinkering around with the objects in the room to change the music in real-time. For example, the view outside the window relates to the background sound in the track, and you can change both the visual and the music by clicking on the window.
Check out the blog: https://magenta.tensorflow.org/lofi-player
A musical performance synthesizing two humans – a dancer and a singer – into one digital avatar
A “live” musical performance, using two performers to create a digital avatar – a singer animating the face, and a dancer animating the body.
The dancer's movement is recorded through a motion-capture bodysuit, and the singer's facial expressions are captured via the iPhone's ARKit. Both are streamed live into an Unreal Engine game environment. The video image of both performers is also streamed and forms part of the virtual performance space, showing the physical and digital bodies side by side.
Jordan Rutter – singer
Stacy Grossfield – dancer
Cold Genius aria from King Arthur by Henry Purcell
When no one is engaging, this project selects a field recording based on the International Space Station's longitude and altitude and on the sun's altitude, distance, and zenith angle. When the webcam recognizes people in the environment, the distortion of the field recording becomes more vibrant, creating a symphony between humans and nature.
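The presence-to-distortion mapping could be sketched as follows. This is an assumption about the mechanism, not the project's actual code: the number of detected people sets a target distortion amount in [0, 1], smoothed per frame so the sound changes gradually, and the `smoothing` and saturation values are hypothetical.

```javascript
// Smoothly map the number of people detected by the webcam
// (e.g. via a pose-detection model) to a distortion amount in [0, 1].
// "smoothing" keeps the transition gradual rather than abrupt.
function updateDistortion(current, peopleCount, smoothing = 0.05) {
  const target = Math.min(1, peopleCount / 3); // saturates at 3 people
  return current + (target - current) * smoothing;
}

// With no one present the amount decays toward 0 (pure field recording);
// with people present it rises toward the saturated target.
let amount = 0;
for (let frame = 0; frame < 120; frame++) {
  amount = updateDistortion(amount, 2);
}
```

The resulting `amount` would drive a wet/dry mix on the distortion effect each animation frame.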