Most of us are accustomed to navigating the world using our eyeballs. If we think of the sense of sight as a muscle, we get regular, rigorous visual exercise as we stare into screens, navigate public spaces, and snap photos with our smartphones.
But what about our other senses?
This project came out of a desire to exercise and explore two of those underappreciated, underutilized senses: the sense of smell and the sense of sound.
As ITP alum Alex Kauffman wrote, “Smell is subjective, it’s ephemeral, and it’s not binary.” Interactions that involve smell are qualitatively different from interactions that involve our eyes.
Much has been made of the relationship between smell and memory, as well as between smell and pleasure, so we use scent as a positive feedback mechanism to encourage vocal students to practice singing.
Here's how it works:
1. Select the note you want to practice by touching one of the circles. You'll hear a recording of the selected note and a recording of the tanpura for you to sing along to.
2. Sing! As you sing into the microphone, the device determines whether your voice is within the frequency range for the note you selected. When you're within range, the device dispenses a delicious smell.
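The device's actual pitch-tracking code isn't shown here, but the in-range check described in step 2 can be sketched as follows. The MIDI note numbering, equal-temperament tuning, and 50-cent tolerance are all illustrative assumptions, not details from the project:

```python
import math

A4_HZ = 440.0  # reference pitch for A4 (MIDI note 69)

def note_to_hz(midi_note: int) -> float:
    """Convert a MIDI note number to its frequency in Hz (equal temperament)."""
    return A4_HZ * 2 ** ((midi_note - 69) / 12)

def cents_off(detected_hz: float, target_hz: float) -> float:
    """Distance between two pitches in cents (100 cents = one semitone)."""
    return 1200 * math.log2(detected_hz / target_hz)

def in_range(detected_hz: float, target_midi: int, tolerance_cents: float = 50.0) -> bool:
    """True when the sung pitch is within tolerance of the selected note."""
    return abs(cents_off(detected_hz, note_to_hz(target_midi))) <= tolerance_cents
```

For example, singing 445 Hz while practicing A4 is only about 20 cents sharp, so a check like this would trigger the scent dispenser; singing 480 Hz would not.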
Car horns can be incredibly loud and equally frivolous. At major intersections, these short, seemingly ephemeral sounds can turn into a 24-hour cacophony. With modern sound insulation and the frustration of navigating a traffic bottleneck, however, individual drivers have little reason to concern themselves with their impact on surrounding neighborhoods. Local residents, on the other hand, may be very frustrated but lack any effective means of communicating this to drivers. Honk Box can be displayed in these areas to communicate with drivers on behalf of local residents. As honking increases, Honk Box's face sours and it tallies the number of honks so drivers can understand their collective impact on the soundscape. In addition, Honk Box's sensitivity can be adjusted, and it should run on most browser-enabled devices. All classified honks are noted in a running data log that can be exported and referenced.
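Honk Box's real detector runs in the browser and isn't reproduced here, but its tally-and-log behavior could be sketched conceptually like this. The threshold-based classification, the dB units, and the CSV export format are assumptions for illustration:

```python
import csv
import io
import time

class HonkLog:
    """Conceptual sketch: count loud events as honks and keep an exportable log."""

    def __init__(self, threshold_db: float = -20.0):
        self.threshold_db = threshold_db  # adjustable sensitivity
        self.entries = []                 # (timestamp, level_db) for each classified honk

    def process(self, level_db: float, timestamp=None) -> bool:
        """Classify one audio level reading; tally it if it counts as a honk."""
        if level_db >= self.threshold_db:
            when = timestamp if timestamp is not None else time.time()
            self.entries.append((when, level_db))
            return True
        return False

    @property
    def count(self) -> int:
        return len(self.entries)

    def export_csv(self) -> str:
        """Export the running log as CSV for later reference."""
        buf = io.StringIO()
        writer = csv.writer(buf)
        writer.writerow(["timestamp", "level_db"])
        writer.writerows(self.entries)
        return buf.getvalue()
```

Raising or lowering `threshold_db` plays the role of the sensitivity adjustment described above; quieter sounds below the threshold are simply ignored rather than logged.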
Backyard Labyrinth is a cardboard-and-iPad semi-VR game for two players: a VR player and an iPad player. The VR player wears a Google Cardboard headset for an immersive first-person view, while the iPad player sees the scene from a top-down "god view." Throughout the game, the two work together to complete a series of tasks. Because each player has different information about the surrounding environment, they must communicate frequently and share what they know in order to work effectively.
Through the American Zeitgeist Almanac, I wanted users to be able to explore different years of American history. Nowadays, when people want to know what happened in a given year, they look it up on Wikipedia and read through a list of events. But I have always found a visual experience much more intuitive and powerful. With a collage of cultural touchstones (including the most popular song of each year, playing as you browse), you can compare years and finally understand the difference between 1954 and 1956!
Much of the early history of computer graphics was defined by the challenge of creating beauty and art within the severe limits of early computer systems and display hardware. While this meant making certain compromises, it also enabled a wealth of creativity, as artists and designers sought innovative ways of working within constraints. However, as televisions grow larger and our mobile devices gain ever-higher resolutions, these concerns have become less relevant, and viewers risk losing touch with the material properties of a screen as a grid of discrete colored lights. The continued influence of pixel art as a nostalgic style speaks to this somewhat, but does the art form a disservice by presenting it devoid of context.
For my final project in Homemade Hardware, I've been developing a custom Arduino-based video monitor for presenting short, 128×128-pixel video loops on a low-resolution OLED screen. At approximately one inch square, the screen is designed to foster an intimate viewing experience of a piece of video art, requiring close watching. The screens can be loaded with simple animated GIFs transferred over a USB serial connection. Because the screens use a well-established animation format, and all of the image decoding is done on the device itself, they will hopefully make it easy for video artists to present work in a novel way.
For the winter show, I intend to hang four or five screens on a wall, displaying various short video and animation loops created by myself and other ITP students. I'm also planning to have a laptop and another screen (perhaps an earlier prototype) on hand so I can demonstrate the circuitry and the complete video-loading process to anyone who's interested. The entire installation will take up about two feet of wall space and requires only a power outlet and a power strip.
Pattern generator based on patterns seen in nature.
This project creates art out of the math and physics we see around us every day without realizing it. For instance, it uses fractals and the motion of gas in air to create art. The interaction is touch-based, and users have the option of creating art and uploading or emailing it to themselves.
An interactive installation that provides a space for two people to slow down and have a locked-in connection with one another. Two friends, strangers, family members, or people in love are invited to crawl into a “fort” made of stretched spandex. As they sit down, the space fills with color from projectors located outside the structure. Users are invited to have a conversation. Small webcams and microphones within the space monitor the emotional expression on their faces and the frequency and amplitude of their voices, and the colors and light shift around them based on the type of connection they are having. The goal is to mimic the moment and feelings of a very present connection, where the noise around you seems to disappear, and to capture the unseen parts of connection that we throw at one another.
Introduction to Computational Media, Introduction to Physical Computing
Interactive modular origami lights that respond to changes in physical arrangement.
The project is a modular origami structure with lighting inside. The origami model is designed so that its spatial arrangement can be changed by moving it manually. Each movement results in a different 'circuit': a different arrangement leads to a different circuit connection, which in turn changes the behavior of the light inside the model. The main interaction is physical, in the sense that the model has to be handled and played with to see changes in the lighting.
Introduction to Physical Computing
DOPPELCAM is a digital camera that only displays images ‘visually similar’ to those taken with it. It sends the source photo through an image-drop search engine and displays the top result.
Every two minutes, we take more pictures than the whole of humanity in the 1800s. Doppelcam operates within the photographic redundancy generated by this mass of photographic media.
Doppelcam refers back to previous iterations of photographic technology while subverting the art form’s intentions. It puts the mystery back into picture taking. Since the advent of digital cameras with preview screens, we’ve been able to see our photographs immediately after we take them.
No longer do you aim your lens and generally know what image you’ll get in return – it will be similar but not exact.
Introduction to Computational Media, Introduction to Physical Computing
My project aims to leverage emerging technologies to produce new forms of meaning and meaning-making. By 'plugging' text sources into the nebulous web of knowledge accessible through the mass of multimedia and resources available online, we work to create an interactive and interconnected form of comprehension that reflects the dynamic and rhizomatic structures of information and meaning in our world today.
The project uses jQuery for a card-flipping effect, allowing users to dive into each sentence in greater, interconnected detail. Currently, the types of interactions available are supplemental video, animated sequences produced in Adobe After Effects, interactive activities connected to the reading and built on CodePen, SVG animations using the Vivus.js library, and user text-input boxes built with jQuery. In addition to the card-flipping animation, a secondary way of accessing additional information or interactions is an accordion-style drop-down, also coded with jQuery.