We created an experiential project with dynamic visual effects and sounds from the deep universe. Everything in the universe traces back to the singularity, the first cause. The player acts as the creator of the universe, giving it its first driving force and deeply participating in its construction.
The Kaleidoscope Band is a shared experience built around a new kind of toy that combines a kaleidoscope and a music box.
The Kaleidoscope Band uses an analog music box as input to change a projected kaleidoscope pattern drawn in p5.js. As the player turns the knob of the music box, a potentiometer reads the rotation, and the projected kaleidoscope pattern evolves over time along with the music playing from the box.
Combining technology and craft, we fabricated the box from wood, aiming to deliver the nostalgic, familiar feel of a traditional toy while generating the kaleidoscope pattern on a digital platform.
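The knob-to-pattern mapping described above can be sketched as a small pure function. This is an illustrative sketch, not the project's actual code: the function names and the 0–1023 range (an Arduino `analogRead` convention) are assumptions, and in the real sketch the reading would arrive in p5.js over a serial connection.

```javascript
// Hypothetical mapping from the music box's potentiometer reading to
// kaleidoscope parameters. Names and ranges are illustrative assumptions.

// Linearly remap a value from one range to another (like p5's map()).
function mapReading(raw, inMin, inMax, outMin, outMax) {
  return outMin + ((raw - inMin) * (outMax - outMin)) / (inMax - inMin);
}

// Derive a rotation angle (radians) and a hue (degrees) from one reading.
function kaleidoscopeParams(raw) {
  return {
    rotation: mapReading(raw, 0, 1023, 0, Math.PI * 2),
    hue: mapReading(raw, 0, 1023, 0, 360),
  };
}
```

In a p5.js draw loop, the latest serial reading would be fed through `kaleidoscopeParams` each frame so that turning the knob rotates and recolors the mirrored segments smoothly.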
I never liked the Mona Lisa. Or van Gogh's Café Terrace at Night.
You know what else I never liked? The pressure to like a piece of art because it's supposed to be 'all that'. Oh, and the judgement when you don't get it. Blasphemy.
Everyone's experiences and personalities are different. What they like, feel, think, and believe are different. How everyone experiences a piece of art is different. No two people look at the same painting the same way – it speaks to each person differently.
The painting here quizzes the viewer, and the answers (or the viewer's personality) shape what they see and hear. You might not like what you see, but I might love how I see the same thing.
We just want people to listen to themselves, not to base their judgement on art critics or on accolades given by the ‘gatekeepers’ of art.
We will place the screen in a fancy golden frame, and we want people to have an intimate conversation with the painting when they come stand in front of it. It’ll ask the viewer questions, and they respond ‘Yes’ or ‘No’. Their responses (and their Spotify data) change what they see. We want them to see how THEY inform the painting.
Yes, I love Hopper’s Nighthawks and Duncan loves the Mona Lisa. To each their own.
Try it out here: https://editor.p5js.org/rajshree.s/present/Xv8iT8w19
So much of how we view the world depends on what we see. But is the world really an objective truth that everyone will eventually see? This is something for the audience to think about while exploring the diversity of animal vision. From finding out how your pet sees you to learning the extraordinary ways different animals perceive signals we cannot observe, it's a fun experience that explores how we all see the world a little differently.
*This project aims to show the diversity of color vision. However, since some animals perceive aspects of light that we cannot see, and may perceive color with a higher dimension of complexity, these visualizations are my speculation about what could be.
Space-Mapper Car is a tiny car––less than 5×9″––that uses IR distance sensors to detect walls and objects in its surroundings and map them out on screen. Using two N20 gear motors with encoders, the car can accurately track its movement speed and heading and transmit them over serial communication to p5.js, which combines that information with data from its two Sharp IR distance sensors to gradually piece together a rough map of the space.
The user can choose between two modes, each curating a different interaction between the physical space and the on-screen map. In the first mode, the car fills out a map of its surrounding space on screen, as described above. In the second, more entertaining mode, a path is drawn on screen for the robot to follow, along with a series of lines parallel to the prescribed path. The user must place a series of objects along the car's physical path so that the resulting map matches the on-screen one as closely as possible. The two modes therefore allow for bi-directional interaction between the car's physical space and its on-screen replication: the physical space dictates the on-screen map in the first mode, and vice versa in the second.
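The map-building step can be illustrated with standard differential-drive odometry: each serial update from the two encoders advances the car's estimated pose, and each IR range reading is projected from that pose into map coordinates. This is a minimal sketch of that technique, not the project's actual code; the function names and the wheelbase value are assumptions.

```javascript
// Hypothetical odometry update, as the p5.js side might run it on each
// serial packet from the two N20 encoders. Distances are per-wheel travel
// since the last update; wheelBase is the distance between the wheels.
function odometryStep(pose, leftDist, rightDist, wheelBase) {
  const d = (leftDist + rightDist) / 2;              // center travel
  const dTheta = (rightDist - leftDist) / wheelBase; // heading change
  const midTheta = pose.theta + dTheta / 2;          // midpoint heading
  return {
    x: pose.x + d * Math.cos(midTheta),
    y: pose.y + d * Math.sin(midTheta),
    theta: pose.theta + dTheta,
  };
}

// Project one Sharp IR range reading into map coordinates, given the
// sensor's mounting angle relative to the car's heading.
function irPoint(pose, range, sensorAngle) {
  return {
    x: pose.x + range * Math.cos(pose.theta + sensorAngle),
    y: pose.y + range * Math.sin(pose.theta + sensorAngle),
  };
}
```

Accumulating the points returned by `irPoint` over many updates is what gradually fills in the rough on-screen map of walls and obstacles.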
Haunt that House is a many-versus-one ghostly arcade game where several ghosts in a digital space compete against one human in a physical space. As the ghosts float around and interact with digital objects, they trigger real-world effects like changing the color of a lamp to a ghastly green or making household objects rattle and shake.
The human, meanwhile, has a handy-dandy ghost detector that they can point at objects to detect the presence of the paranormal, and that can be used to ZAP the ghosts and kick them out of the objects. The ghost detector communicates with the haunted household objects, which enables a very reliable “scanning” effect that reveals the presence of the paranormal.
The ghosts have 3 minutes to haunt 2 of the 3 objects in the human's house, and the human is just trying to keep all the ghosts at bay! Still in development are a fancier UI for the ghosts and clearer feedback for the ghost hunter, but the underlying communication and interactions are really solid. We are very happy with how reliable the interactions between the humans and the ghosts are; now it's just a matter of adding a little polish, which we are confident we can do in the time remaining!
While the game is on the surface quite silly, we were really drawn to questions like “how can you make a game that is playable across the digital and physical divide?” With so many ITP students in various parts of the globe, we hope that our spooky little experience can bring folks together in a new and interesting way.
“Soundrop” is a poetic interactive painting of rain that reflects the viewer in splendid colors. In this piece we focused on ‘rain’ and ‘meditation’, and through conversation we decided to write a poem about rain as part of the making process. We worked hard to create a relaxing, meditative visual-sound experience for the viewer. Using PoseNet and a webcam, we capture the viewer’s position and render the viewer’s shape with rain. When the viewer stands in front of the p5.js painting, with rain falling from the top, the white raindrops turn rainbow-colored as soon as they pass through the viewer’s body outline. The colors of the rain update with the viewer’s body movement, and the speed of the rain and the volume of the rain sound change according to the position of the viewer’s body. We want people to experience rain in a new way, as if they are standing in the rain and listening to our self-written poem. (Co-created by Junoh Yu and Bei Hu)
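The drop-recoloring idea can be sketched as a small helper: PoseNet returns keypoints with pixel positions, and a falling drop takes on a rainbow hue only while it overlaps the viewer. This is an illustrative simplification of the technique, not the project's code; it reduces the body outline to the bounding box of the keypoints, and all names are assumptions.

```javascript
// Hypothetical helpers in the spirit of Soundrop. PoseNet keypoints have
// the shape { position: { x, y } }; here the viewer's outline is
// simplified to their bounding box.
function bodyBox(keypoints) {
  const xs = keypoints.map((k) => k.position.x);
  const ys = keypoints.map((k) => k.position.y);
  return {
    minX: Math.min(...xs), maxX: Math.max(...xs),
    minY: Math.min(...ys), maxY: Math.max(...ys),
  };
}

// Return a 0-360 hue for a drop inside the body, or null to stay white.
function dropHue(drop, box) {
  const inside = drop.x >= box.minX && drop.x <= box.maxX &&
                 drop.y >= box.minY && drop.y <= box.maxY;
  if (!inside) return null;
  // Sweep the rainbow across the body from left to right.
  return ((drop.x - box.minX) / (box.maxX - box.minX)) * 360;
}
```

Each frame, the sketch would recompute the box from the latest pose and color every drop whose `dropHue` is non-null, which is why the rainbow tracks the viewer's movement.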
The Shining Tree combines PoseNet, which detects the viewer, with p5.js, which draws the growing tree itself. Our purpose is to remind people of the magic and beauty of nature and its amazing arrangement of the circle of life, and at the same time to make it a calm, peaceful process that people can experience by themselves as a form of meditation and reflection. In the chaotic time of Covid-19, not only are social interactions partially impeded, but so is our connection to nature. A shining tree seems a fitting answer to the anxiety we are facing now, and can help us reconnect with the most basic yet wonderful experience of living and the hope it carries within the circle of life.
This is my semester-long project from Fall 2020 at ITP, built around eye symbols and graphics and expressing the theme: looking for a witness of life. It was inspired by Katy Perry’s fifth studio album, an underrated pop record that kept me alive, got me thinking in depth, and stimulated my soul and energy.
I deconstructed and restructured the storyline and meaning of each track on the album, collected the lyrics that resonated with me, and visualized them in different forms: Unity animation, video and audio, projection mapping, installation, and graphic art. It was not easy to convey that sincerity and intimacy, let alone to translate those memories, emotions, and feelings into programmed visual language, physical interactions, and fabrication.
These works speak for me, and through them I want to open more potential connections and communications in the future – not only connections to others who can be by your side and ride the journey with you, but also to the self.
Aside from the visual composition, I bring more personality, narrative, and intimacy into the piece by etching my own monologue into it. Visit the project website for more details about SEEN.
https://jeeyoonhyun.github.io/WordEater/
Ever felt confused by all the words floating around the Internet?
WordEater is a browser-based game that lets you gobble up a bunch of meaningless words in order to make another meaningless sentence, eventually removing all the words you see on the screen.
It doesn't matter if you don't understand what the words or sentences are trying to say – after all, they are going to be swallowed and eaten anyway. All you need to do is find some peace of mind by consuming all the disturbing, shattered pieces of information that make complete nonsense. The goal of the game is to make your web browser cleaner by scavenging fragmented data with your mouth. After all, your web browser also needs some refreshment from the gibberish it encounters every day!
WordEater uses the Facemesh model in ml5.js to detect your mouth through your webcam. You can play the mouse version if you can't use your webcam – for example, if you are wearing a mask.
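A typical way to turn Facemesh output into a "gobble" event is to measure the gap between the inner lips. This is a hedged sketch of that approach, not WordEater's actual code: the function names and the threshold are assumptions, though indices 13 and 14 are the inner upper and lower lip points in the MediaPipe face mesh topology that Facemesh exposes as `scaledMesh` landmarks.

```javascript
// Hypothetical mouth-openness check in the spirit of WordEater.
// A landmark is an [x, y, z] (or [x, y]) array in pixel coordinates.
function mouthOpenness(upperLip, lowerLip) {
  const dx = upperLip[0] - lowerLip[0];
  const dy = upperLip[1] - lowerLip[1];
  return Math.hypot(dx, dy); // pixel gap between the inner lips
}

// MediaPipe face mesh: index 13 = inner upper lip, 14 = inner lower lip.
// The 10-pixel threshold is an illustrative assumption.
function isMouthOpen(mesh, threshold = 10) {
  return mouthOpenness(mesh[13], mesh[14]) > threshold;
}
```

In the game loop, a word would be "eaten" when it overlaps the mouth position while `isMouthOpen` is true, then removed from the screen.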