Bharatanatyam is a form of classical Indian dance that uses complex footwork, hand gestures, and facial expressions to tell stories. The dance is traditionally accompanied by Carnatic music and an orchestra consisting of a mridangam drum, a flute, cymbals, and other instruments. Net-Natyam uses three ml5.js machine learning models (PoseNet, Handpose, and Facemesh) and a webcam to detect the movements of a Bharatanatyam dancer and trigger a corresponding sequence of electronically composed sounds.
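As a rough sketch of how such a pipeline can be wired together (assuming p5.js with p5.sound; the sound file, the wrist-above-nose trigger, and the thresholds are illustrative assumptions, not the project's actual movement-to-sound mappings):

```javascript
// Minimal sketch: three ml5.js models watching one webcam feed,
// with one example movement trigger. Requires p5.js, p5.sound, and ml5.js.
let video, poseNet, handpose, facemesh;
let poses = [], hands = [], faces = [];
let bellSound; // hypothetical placeholder sample

function preload() {
  bellSound = loadSound('bell.mp3'); // placeholder file name
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();

  // Each ml5 model streams its detections through an event callback.
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', results => poses = results);

  handpose = ml5.handpose(video, () => console.log('Handpose ready'));
  handpose.on('predict', results => hands = results);

  facemesh = ml5.facemesh(video, () => console.log('Facemesh ready'));
  facemesh.on('predict', results => faces = results);
}

function draw() {
  image(video, 0, 0);
  // Example trigger: play a sound when the right wrist rises above the nose.
  if (poses.length > 0) {
    const { rightWrist, nose } = poses[0].pose;
    if (rightWrist.y < nose.y && !bellSound.isPlaying()) {
      bellSound.play();
    }
  }
}
```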
The Mr. Scribbles Dancing Drawing Robot was created to help people feel more comfortable about their bodies, about their movements, about being weird sometimes. Mr. Scribbles is a robot controlled by dance poses.
Men all have a well-framed dating app bio. Behind each one, however, there is a weird and frustrating dating story. No matter what happened on our date, there is only one ending: we never talked again.
The project explores the use of existing datasets found on the Internet and raises the audience's awareness of dating data privacy.
Swipe! Use it like regular Tinder. If you swipe right, you will get my dating story with the man you matched. The chat stores all the men you have connected with and the dating stories behind them.
Liminal spaces are undefined, transitional spaces, often devoid of spatial cues and context. What has AI learned about our experiences in liminal spaces, and how do AI-generated spaces reflect our conversations and images of liminality? Using media generated entirely by AI and machine learning programs, liminal mind is a WebVR experience composed of three liminal spaces featuring soundscapes, a voiceover generated by a neural voice, and equirectangular photos created from GAN images.
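The project's exact stack isn't described here, but as one hedged illustration of the underlying technique, an equirectangular GAN image can be mapped onto the inside of a sphere to form a WebVR/WebXR scene, for example with three.js (the file name and parameters below are assumptions):

```javascript
// Sketch of an equirectangular panorama viewer, assuming three.js.
import * as THREE from 'three';
import { VRButton } from 'three/addons/webxr/VRButton.js';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
renderer.xr.enabled = true; // enables the immersive VR session
document.body.appendChild(renderer.domElement);
document.body.appendChild(VRButton.createButton(renderer));

// Map the equirectangular photo onto the inside of a large sphere,
// so the viewer stands "inside" the generated liminal space.
const texture = new THREE.TextureLoader().load('liminal_space.jpg'); // placeholder
const material = new THREE.MeshBasicMaterial({ map: texture, side: THREE.BackSide });
scene.add(new THREE.Mesh(new THREE.SphereGeometry(10, 60, 40), material));

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```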
https://jeeyoonhyun.github.io/WordEater/
Ever felt confused by all the words floating around the Internet?
WordEater is a browser-based game that lets you gobble up a bunch of meaningless words to make another meaningless sentence, eventually removing every word you see on the screen.
It doesn't matter if you don't understand what the words or sentences are trying to say – after all, they are going to be swallowed and eaten anyway. All you need to do is get some peace of mind by consuming all the disturbing, shattered pieces of information that make complete nonsense. The goal of the game is to make your web browser cleaner by scavenging fragmented data with your mouth. Your web browser needs some refreshment from the gibberish it encounters every day, too!
WordEater uses the Facemesh API in ml5.js to detect your mouth through your webcam. If you can't use your webcam (for example, if you are wearing a mask), you can play the mouse version instead.
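A minimal sketch of that mouth detection, assuming p5.js alongside ml5.js; the lip keypoint indices (13 and 14 in MediaPipe's face mesh topology) and the openness threshold are assumptions, not the game's actual values:

```javascript
let video, facemesh;
let predictions = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  facemesh = ml5.facemesh(video, () => console.log('Facemesh ready'));
  facemesh.on('predict', results => predictions = results);
}

function draw() {
  image(video, 0, 0);
  if (predictions.length > 0) {
    const mesh = predictions[0].scaledMesh; // array of [x, y, z] keypoints
    const upperLip = mesh[13];
    const lowerLip = mesh[14];
    const openness = lowerLip[1] - upperLip[1]; // vertical gap in pixels
    if (openness > 15) {
      // The mouth is open: a word overlapping it could be "eaten" here.
      text('chomp!', 20, 30);
    }
  }
}
```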
Weather Journals is an attempt to put people in touch with their surroundings and with each other. The whole thing is powered by OpenAI's GPT-2, a machine learning model that generates human-like text.
First, submit a reflection on the weather where you are in the box on the top left. This can be whatever you want: what do the clouds look like? How does the weather make you feel? Does it remind you of another time?
After you've put in your reflection, adjust the length and the creativity of the text you'd like the weather to write you in return. Then hit the button and wait.
At the end of the day, everything that everyone has written for that day is used to retrain the machine learning model. This means that, every day into the future, the model reflects all the reflections of all the days before it, growing and evolving with the weather.
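A hypothetical sketch of the browser side of this loop follows; the '/generate' endpoint and its parameter names are stand-ins for whatever server actually wraps the GPT-2 model, and only the idea of a prompt going in, tunable length and creativity, and generated text coming back is taken from the project:

```javascript
// Hypothetical front-end call to a GPT-2 text-generation server.
async function askTheWeather(reflection, length, creativity) {
  const response = await fetch('/generate', { // assumed endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: reflection,
      max_length: length,      // the "length" control
      temperature: creativity, // the "creativity" control: higher = stranger
    }),
  });
  const data = await response.json();
  return data.text; // the weather's reply
}

// Example usage:
// askTheWeather('The clouds look like wet wool today.', 120, 0.9)
//   .then(reply => console.log(reply));
```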
Inspired by a tendency to take meditation way too seriously, Mindful Breathing challenges users to accumulate breaths, add upgrades, and wager their progress on the journey to transcendence. Using ml5 and PoseNet, participants' bodies are tracked in order to log breaths and transform what is, at first, a simple interface into a claustrophobic cacophony of 'mindfulness enhancers.' After a certain amount of progress, players are able to measure their success against the self-actualization of others and bet their breaths for the chance to surpass the competition.
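One plausible way to log breaths from PoseNet is sketched below, assuming p5.js: track the smoothed shoulder height and count rise-and-fall cycles. The project's actual detection logic and thresholds may differ.

```javascript
let video, poseNet;
let shoulderY = null, baseline = null, inhaling = false, breaths = 0;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video, () => console.log('PoseNet ready'));
  poseNet.on('pose', results => {
    if (results.length === 0) return;
    const pose = results[0].pose;
    const y = (pose.leftShoulder.y + pose.rightShoulder.y) / 2;
    shoulderY = shoulderY === null ? y : lerp(shoulderY, y, 0.2); // smooth jitter
    if (baseline === null) baseline = shoulderY;
    baseline = lerp(baseline, shoulderY, 0.01); // slow-moving resting level
    if (!inhaling && shoulderY < baseline - 5) inhaling = true; // shoulders rise
    if (inhaling && shoulderY > baseline + 2) { // shoulders fall back: one breath
      inhaling = false;
      breaths++;
    }
  });
}

function draw() {
  image(video, 0, 0);
  textSize(24);
  text(`breaths: ${breaths}`, 20, 40);
}
```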
This project is inspired by Universal Paperclips by Frank Lantz. Big thanks to Mathura, Craig, Ellen, Lisa Jamhoury, Lisa Sokolov, and all my classmates for their help!
My family sold the house I grew up in this year, which was very sudden, but for the best. I had never really been that connected to my home state, but when I discovered I might never go back there, I realized that I had come to really appreciate it as a place to grow up. We ended up renting a house not too far from where I grew up to get through the pandemic, and it made me think about what had caused me to become so nostalgic. What kept popping into my head was that it is a beautiful place. It has lovely forests, beautiful colors, coastal towns, and even a few mountains. Then I stumbled across a site that shows the elevation in Connecticut using colors.
This inspired me: I created an ElevationGAN and used Runway's hosted model feature to grab the generated images. I then used p5.js to downsample the images into a pixel grid. From there, I used serial communication to send the elevation data to an Arduino, which actuated pixels in and out to reflect those values (a sketch of this step follows below). This is a proof-of-concept piece that could be scaled up to create 1:1 representations of elevation maps, ultimately creating wooden topographies. I plan to elaborate on this ML model and try new things.
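A minimal sketch of the downsample-and-send step, assuming the p5.serialport library; the grid size, port name, file name, and one-byte-per-cell protocol are all assumptions rather than the project's actual setup:

```javascript
let serial, img;
const GRID = 16; // 16x16 grid of actuated "pixels" (assumed size)

function preload() {
  img = loadImage('elevation_gan_output.png'); // placeholder file name
}

function setup() {
  createCanvas(GRID * 20, GRID * 20);
  serial = new p5.SerialPort();
  serial.open('/dev/tty.usbmodem14101'); // assumed Arduino port

  for (let gy = 0; gy < GRID; gy++) {
    for (let gx = 0; gx < GRID; gx++) {
      // Sample one pixel per grid cell and treat brightness as elevation.
      const px = floor(map(gx, 0, GRID, 0, img.width));
      const py = floor(map(gy, 0, GRID, 0, img.height));
      const elevation = floor(brightness(img.get(px, py))); // 0-100
      serial.write(elevation); // one byte per cell, read by the Arduino
      fill(elevation * 2.55); // preview the grid on screen
      rect(gx * 20, gy * 20, 20, 20);
    }
  }
}
```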
As humans, our existence is defined by different emotional states. When we feel an emotional impulse, it's like a ripple is dropped inside of us. This ripple flows outward and is reflected in how we perceive the world around us, as well as how we act within it.
For this project, we wanted to visualize emotional states using colors, shapes, and sounds in a poetic way.
The first thing we did was divide all emotion words into six classifications: happy, content, sad, angry, shocked, and afraid. We then used p5.speech to recognize words, rather than training them ourselves in Teachable Machine, because it is far more accurate; for now, the project can recognize over 110 emotion words.
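A minimal sketch of that recognition step with p5.speech; the word lists below are trimmed examples standing in for the project's 110+ words:

```javascript
const emotionClasses = {
  happy:   ['happy', 'joyful', 'delighted'],
  content: ['content', 'calm', 'peaceful'],
  sad:     ['sad', 'gloomy', 'heartbroken'],
  angry:   ['angry', 'furious', 'annoyed'],
  shocked: ['shocked', 'stunned', 'surprised'],
  afraid:  ['afraid', 'scared', 'anxious'],
};

let recognizer;

function setup() {
  noCanvas();
  recognizer = new p5.SpeechRec('en-US', gotSpeech);
  recognizer.continuous = true; // keep listening between utterances
  recognizer.start();
}

function gotSpeech() {
  const word = recognizer.resultString.trim().toLowerCase();
  for (const [emotion, words] of Object.entries(emotionClasses)) {
    if (words.includes(word)) {
      console.log(`heard "${word}" -> ${emotion}`);
      // ...switch the colors, shapes, and audio filter for this emotion here
    }
  }
}
```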
We create a flowing 3D object and use the sin() function to generate a beautiful ripple. More importantly, we generate multiple filters for one song in response to different emotions, and the song's amplitude affects the frequency of the ripple. For the visual part, we match colors and custom shapes to different emotion words based on color and shape psychology, which we believe gives people an immersive experience.
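And a hedged sketch of an amplitude-driven sin() ripple in p5.js (WEBGL mode with p5.sound); the song file and mapping constants are placeholders for the project's own values:

```javascript
let song, amp;

function preload() {
  song = loadSound('song.mp3'); // placeholder file
}

function setup() {
  createCanvas(600, 600, WEBGL);
  amp = new p5.Amplitude(); // tracks the song's current loudness (0-1)
  song.loop();
}

function draw() {
  background(0);
  rotateX(PI / 3);
  stroke(255);
  noFill();
  // The song's amplitude scales the ripple frequency, as described above.
  const freq = map(amp.getLevel(), 0, 1, 0.02, 0.2);
  for (let r = 20; r < 280; r += 10) {
    beginShape();
    for (let a = 0; a < TWO_PI; a += 0.1) {
      const z = 30 * sin(r * freq - frameCount * 0.05);
      vertex(r * cos(a), r * sin(a), z);
    }
    endShape(CLOSE);
  }
}
```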
Tell me your feeling in one word.
I hear you, I feel you.
This clock is inspired by The Order of Time by Carlo Rovelli, who theorizes that time as we usually imagine it only exists because of a “blurred macroscopic perspective of the world that we encounter as human beings [and that] the distinction between past and future is tied to this blurring and would disappear if we were able to see the microscopic molecular activity of the world.” Along with the thought that humans create time itself, we are tying in quantum mechanics and the idea that nothing exists in a determinate state until an interaction occurs or a measurement is taken. In our clock, the time is indeterminate and blurred until we measure it (by looking at the clock), which causes an exact time to become visible.