The Kaleidoscope Band is a shared experience delivered through a new kind of toy that combines a kaleidoscope and a music box.
The Kaleidoscope Band uses an analog music box as input to change a kaleidoscope pattern projected through P5.js. As you turn the knob of the music box, a potentiometer reads the rotation, and the projected kaleidoscope pattern evolves over time along with the music played by the box.
Combining technology and craft, we fabricated the box from wood, aiming to deliver the nostalgic traditional toy everyone is familiar with while creating the kaleidoscope pattern on a digital platform.
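The project's code isn't published here, but a minimal P5.js sketch of the idea might look like the following. It assumes the potentiometer reading arrives from the music box (for example over serial) as a normalized `potValue` between 0 and 1; the variable name and the mapping are illustrative assumptions, not the project's actual implementation.

```javascript
// Minimal illustrative p5.js kaleidoscope (not the project's actual code).
// `potValue` is assumed to be a 0..1 reading from the music box's
// potentiometer, updated elsewhere, e.g. over serial.
let potValue = 0.5; // hypothetical normalized potentiometer reading

function setup() {
  createCanvas(600, 600);
  noStroke();
}

function draw() {
  background(0, 20); // translucent background leaves slow-fading trails
  const segments = 8; // mirror symmetry of the kaleidoscope
  translate(width / 2, height / 2);
  for (let i = 0; i < segments; i++) {
    push();
    rotate((TWO_PI / segments) * i);
    // The knob position steers how far from the center each shape sits
    // and its color, so turning the knob morphs the whole pattern.
    const r = map(potValue, 0, 1, 50, 250);
    fill(map(potValue, 0, 1, 0, 255), 120, 200, 150);
    ellipse(r, 0, 30 + 20 * sin(frameCount * 0.05), 30);
    pop();
  }
}
```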
This web instrument allows you to make music by planting and watering different kinds of “audio seeds” that grow into lush melodies and textures.
Watering the seeds causes them to grow both visually and sonically, and distinct areas in the garden cause the plants to behave in different ways.
Composing with this interface is more spatial than linear. Plants emanate sound that you navigate using the mouse, so moving through the space influences the mix of sounds.
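The write-up doesn't include the mixing code, but a minimal sketch of the idea, assuming each plant keeps a looping p5.sound file and that the names `plants`, `plant.sound`, and the 400-pixel falloff are all illustrative assumptions, could look like this:

```javascript
// Hypothetical spatial mix: each plant's loudness falls off with the
// cursor's distance from it.
function updateMix(plants, listenerX, listenerY) {
  for (const plant of plants) {
    const d = dist(listenerX, listenerY, plant.x, plant.y);
    // Linear falloff: full volume at the plant, near-silence beyond ~400 px.
    const gain = constrain(1 - d / 400, 0, 1);
    plant.sound.setVolume(gain); // p5.sound per-sound volume control
  }
}
```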
The implementation represents different types of sound using basic geometric forms and generates growth patterns algorithmically using L-Systems, a grammar-based way of modeling generational growth. These patterns are at times also used to produce melodies.
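The L-System mechanic itself is easy to sketch. The rewrite rule below is a textbook plant grammar rather than the garden's actual grammar; one could imagine each watering advancing a seed by a generation:

```javascript
// A minimal L-system expansion (textbook rule, not the garden's own grammar).
const rules = { F: "F[+F]F[-F]F" }; // classic plant-like rewrite rule

function expand(axiom, generations) {
  let s = axiom;
  for (let g = 0; g < generations; g++) {
    // Each generation replaces every symbol that has a rule.
    s = [...s].map((ch) => rules[ch] ?? ch).join("");
  }
  return s;
}

// The same walk that draws branches (interpreting F, +, -, [, ]) could also
// emit notes, which matches the idea of patterns producing melodies.
console.log(expand("F", 2));
```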
The musical garden invites exploration and can be found at https://musical-garden.netlify.app/
Text2Video is a software tool that converts text to video for a more engaging learning experience.
I started this project because I was given many reading assignments this semester and grew frustrated reading long texts. For me, learning through reading was very time- and energy-consuming. So I wondered, “What if there were a tool that turned text into something more engaging, such as a video? Wouldn't that improve my learning experience?”
I did some research and found a number of articles and studies suggesting that video can be more effective than text for learning for many people, including the following claims:
– The human brain can process visuals 60,000 times faster than text.
– Viewers retain 95% of a video’s message compared to 10% when reading text.
– 65% of people consider themselves to be visual learners.
I created a prototype web application that takes text as input and generates a video as output.
I plan to keep working on the project, targeting college students aged 18 to 23, because, according to a survey I found, they tend to prefer learning through videos over books.
The technologies I used for the project are HTML, CSS, JavaScript, Node.js, CCapture.js, ffmpegserver.js, Amazon Polly, Python, Flask, gevent, spaCy, and the Pixabay API.
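As a hedged sketch of one pipeline step, a Node.js helper could fetch an illustrative image for an extracted keyword from the Pixabay API. The function name and parameters are assumptions, not the project's actual code; the real pipeline also runs spaCy for text processing and Amazon Polly for narration.

```javascript
// Hypothetical pipeline step: look up an image for a keyword on Pixabay.
// Requires Node 18+ for the built-in fetch; `apiKey` is your Pixabay key.
async function fetchImageForKeyword(keyword, apiKey) {
  const url =
    `https://pixabay.com/api/?key=${apiKey}` +
    `&q=${encodeURIComponent(keyword)}&image_type=photo&per_page=3`;
  const res = await fetch(url);
  if (!res.ok) throw new Error(`Pixabay request failed: ${res.status}`);
  const data = await res.json();
  // Take the first hit's web-sized image URL, if any result came back.
  return data.hits.length > 0 ? data.hits[0].webformatURL : null;
}

// Usage: fetchImageForKeyword("ocean", PIXABAY_KEY).then(console.log);
```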
Application link: https://text-to-video.herokuapp.com/
Demo Video: https://vimeo.com/489223504
Github repository: https://github.com/cuinjune/text2video
Another ITP student, Wendy Wang, and I created a maze designed with visually impaired people in mind. The player is guided through the maze by sound alone. For players whose vision is not affected, the game offers an opportunity to understand how difficult it can be to rely only on hearing. At first, navigating the maze is challenging and confusing, but once you get used to it, it becomes a fun and interesting challenge.

The maze is generated randomly every time, and its complexity can be easily changed in the code. The player moves with the arrow keys on the keyboard. Sounds play as a queue, and the player can't move until they have finished playing. All the sounds are easily distinguishable, and the left and right cues are mixed in stereo so that they play in the left or right ear respectively.

To make the game easier to understand, greetings and instructions play when the game first loads and can be replayed at any point by pressing the Shift key.
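A minimal sketch of the stereo cue idea with p5.js and p5.sound follows; the file names and structure are assumptions for illustration, not the game's actual code.

```javascript
// Hypothetical directional cues: the "left" and "right" sounds are each
// panned fully to one ear, and movement is locked while a cue plays.
let leftSound, rightSound;

function preload() {
  leftSound = loadSound("left.mp3"); // assumed asset names
  rightSound = loadSound("right.mp3");
}

function setup() {
  leftSound.pan(-1); // -1 = fully left ear
  rightSound.pan(1); //  1 = fully right ear
}

function keyPressed() {
  // Ignore input until the current cue finishes, as the write-up describes.
  if (leftSound.isPlaying() || rightSound.isPlaying()) return;
  if (keyCode === LEFT_ARROW) leftSound.play();
  if (keyCode === RIGHT_ARROW) rightSound.play();
}
```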