A Physical Interface for Digital Sculpting

Gabriel Weintraub

In designing software to simulate physical activities we must consider how to make the experiences authentic to their real world analogs. This project attempts to recreate the sensation of working in clay in the context of a 3D modeling environment.



Today, most design work is conducted using computerized tools. There are, of course, significant conveniences gained by forgoing traditional physical media, but in the transition we have severed the special connection a designer shares with their materials. Many artists and designers choose to augment their computing experiences by using tools like digital graphics tablets, but the sensation is not far removed from working with a mouse. These solutions are effective in a two dimensional workspace, but once a third dimension is involved, the tools at hand become inadequate. My project attempts to remedy those inadequacies by simulating the haptic sensations of manipulating a block of clay in the context of a 3D modeling environment.




Maria Fang

Experience my dreams in VR



My thesis is called Awaken. It is an experimental art project in which I recreate my personal dreams for others to experience in a Gear VR headset.

It is a dream sequence around 8.5 minutes long, from opening to ending, with four dreams and various transitions. The user wears headphones and, ideally, sits in a comfortable armchair.




Aaron Montoya-Moraga, Corbin Ordel

Machine learning is demonstrated as we teach a piano to score films and TV shows according to the colors displayed on screen.



Piano Vision: Dark Side of The Moon is a machine learning and music composition project. Simply put, this is a piano that watches a television screen playing movies and composes an original score. A Max/MSP patch reads the RGB values from the screen and plays music according to the intensity of each color: fire (more red) plays fast, water (more blue) plays slower, and trees (more green) fall somewhere in the middle. Using a machine learning program called Wekinator, the project teaches the computer musical responses that correspond to color values. The responses are recorded, or *learned*, and then used to score different films.

The instrument is an upright piano with the keyboard and hammers removed, exposing its internal *guts*. The piano is played with solenoids that hit the strings and servos that pluck them. This project will be the orchestra to all of your movies: haunting and serene, glitchy and humorous. In this iteration, our piano watches the trailer for the movie Die Hard. Searching for explosions and fire, it plays compositions familiar to the action movie genre.

The intention of this piece is to juxtapose the absurdity of the destruction constantly displayed in our consumable entertainment with the incredible technology used to bring these sequences to life. When we pair a half-destroyed piano with a film conveying over-the-top destruction, can we sense the connections between the two? Have we become so desensitized to what we watch that cities falling to the ground no longer bother us? Does this not bother you? Does a broken piano trying to play music bother you? Both a piano using machine learning and a big-budget Hollywood movie utilize incredible technologies, but in the end, what do they make us feel, if anything?

We want to demonstrate the connection – that our ability to use these incredible technologies is a beautiful dance between art, creativity, and imagination.
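The actual mapping lives in a Max/MSP patch trained with Wekinator, but the core color-to-tempo idea can be sketched in Python. The BPM values and the pixel-list frame format here are illustrative assumptions, not details of the installation:

```python
def color_to_tempo(frame, slow_bpm=60, fast_bpm=180):
    """Map a frame's dominant color channel to a tempo in BPM.

    `frame` is a list of (r, g, b) tuples with values 0-255.
    Red-heavy frames (fire, explosions) map to fast playing,
    blue-heavy frames (water) to slow playing, and green-heavy
    frames (trees) to somewhere in the middle.
    """
    n = len(frame)
    avg_r = sum(p[0] for p in frame) / n
    avg_g = sum(p[1] for p in frame) / n
    avg_b = sum(p[2] for p in frame) / n

    # Pick the channel with the highest average intensity.
    dominant = max((avg_r, "red"), (avg_g, "green"), (avg_b, "blue"))[1]
    if dominant == "red":
        return fast_bpm
    if dominant == "blue":
        return slow_bpm
    return (slow_bpm + fast_bpm) // 2

# A mostly red frame (an explosion, say) plays fast:
print(color_to_tempo([(230, 40, 20)] * 10))  # 180
```

Wekinator's role in the real piece is to learn a richer mapping than this hard-coded three-way split, so this sketch shows only the untrained baseline idea.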


Machine Learning for Artists, Readymades

Potato glasses

Manxue Wang

Potato Glasses is a wearable device designed to ease anxiety during public speaking. Biosensor feedback triggers the device, which shows humorous visualizations while a pair of digital eyes maintains visual contact with the audience.



Social behavior is a very important part of our daily lives. For example, conversation connects us with others. At ITP, the two areas I'm interested in are helping people create better social interactions and visualizing people's personal data. I'm interested in these topics because I'd like to know how conversations shape us and affect our audience.

Potato Glasses is a headset worn by a speaker, which contains a mobile phone, microcontroller, and two battery-operated OLED screens. It’s connected to a pulse sensor, which collects heart rate data. A high pulse, associated with nervousness, activates the device.
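The trigger logic can be sketched as follows. The threshold BPM and smoothing window are illustrative guesses rather than values from the actual device, which runs on a microcontroller with a pulse sensor rather than in Python:

```python
from collections import deque

class PulseTrigger:
    """Decide when to activate the glasses from heart-rate readings.

    A rolling average smooths out single noisy beats; the device
    activates once the average exceeds a "nervousness" threshold.
    """

    def __init__(self, threshold_bpm=100, window=5):
        self.threshold = threshold_bpm
        self.readings = deque(maxlen=window)  # keep only recent readings

    def update(self, bpm):
        """Record one pulse-sensor reading; return True when the
        smoothed heart rate indicates the wearer is nervous."""
        self.readings.append(bpm)
        average = sum(self.readings) / len(self.readings)
        return average > self.threshold
```

Calm readings (around 70 BPM) keep the glasses off; a sustained climb past the threshold switches the potato view on.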

Inside the headset, an augmented reality application detects the audience’s faces and replaces them with potatoes. It also displays the scripted presentation. The exterior OLED screens show a pair of digital eyes that stand in for the user’s.

The device enhances both the speaker’s and the audience’s experience – the speaker eases their communication anxiety by avoiding direct eye contact and focusing on their script, and the audience gets a better presentation.



Aware Chair

Chanwook Min

Aware Chair is an interactive object about human sitting posture. Most people spend a large part of their time sitting in a chair, but we forget about our posture. The project helps people become aware of their posture in a humorous way.



In modern society, a majority of people spend a large part of their time sitting in a chair. However, they often forget about their bad sitting posture, which can cause serious problems. How can we become aware of our bad posture in a humorous way?

Aware Chair helps people become aware of their bad sitting posture. It visualizes posture on screen through the mouse pointer, interfering with computer use whenever the sitter slouches. It can also push the sitter off balance. This approach, which makes people laugh or annoys them, can be more effective than traditional information visualization.




Hub Uy

CropTXT is a network of sensors that aims to help farmers conserve water and at the same time, make farming more convenient.


Rice yield potential has been slowly decreasing in the Philippines and other neighboring Southeast Asian countries in recent years due to climate change. The decline in yield is mainly attributable to the El Niño phenomenon, a drought affecting arable land for a minimum of 3 months, making water for agriculture increasingly scarce.

Rice culture requires a tremendous amount of water compared to other crops. According to the International Rice Research Institute (IRRI), it takes an average of 5,000 liters of water to produce 1 kg of rice. It is estimated that by 2025, 15-20 million hectares of irrigated rice will suffer from some degree of water scarcity, resulting in rice shortages across Southeast Asia. This shortage is a big issue for the Philippines and other neighboring developing countries that depend on rice – as a food staple, a source of income, or both.

CropTXT is a network of sensors that aims to help farmers conserve water and at the same time, make farming more convenient.

First, the farmer sticks wireless, GSM-based sensors on the ground. From there, the sensors analyze soil conditions and the amount of water the crops need. After irrigation, the water level will gradually decrease. When the water level has dropped to 15 cm below the soil surface, CropTXT will alert the farmer to re-flood the field. Once re-flooding of 5 cm is reached, CropTXT will again send another SMS to alert the farmer to stop irrigating.

According to IRRI, allowing the water in the field to drop 15 cm below the surface before irrigating again is called “Safe Alternate Wetting-Drying (Safe AWD).” The Safe AWD process that CropTXT follows will not cause any yield decline, since the roots of the rice plants can still take in water from the saturated soil, while resulting in 25% water savings.
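The two Safe AWD thresholds described above (alert at 15 cm below the surface, stop at 5 cm of flooding) suggest simple control logic. A minimal sketch in Python; the function name and SMS wording are invented for illustration, and the real system runs on GSM-based sensor hardware:

```python
def awd_alert(water_level_cm, irrigating):
    """Return the SMS to send for a water-level reading, or None.

    `water_level_cm` is measured relative to the soil surface
    (negative means below the surface). Following Safe AWD: alert
    the farmer to re-flood at 15 cm below the surface, and to stop
    once the field is flooded 5 cm deep.
    """
    if not irrigating and water_level_cm <= -15:
        return "Water 15 cm below surface: start irrigating."
    if irrigating and water_level_cm >= 5:
        return "Field flooded to 5 cm: stop irrigating."
    return None  # within the safe band, no SMS needed
```

Between the two thresholds the field is left to dry down naturally, which is where the 25% water savings come from.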



Liminal Space

Sergio Mora

Liminal Space explores the creation of a temporary environment through collective motion, lights and sounds, transporting participants from their everyday reality to a meaningful shared experience.



In rituals and sacred experiences, liminality refers to a threshold of consciousness, the boundary between ordinary and alternate reality. It’s that place where our non-rational, imaginative, open-hearted self takes over.

Liminal Space seeks to create an experience that sacralizes space, time and the connection among people through their active participation. It is composed of an arrangement of vertical translucent fabrics, dynamic lights and sounds that react to people’s gestures using their mobile phones.

The device we carry in our pocket acquires a new meaning throughout the experience. It allows people to interact with the dynamic environment and with one another, improvising a ritual that creates, for a moment, an imaginary shared world.


Live Web, Thesis

Muggles' Pensieve

Shan Jin

Shake your memories from your mobile phone into the Pensieve!


Pensieve is a concept from Harry Potter, where wizards can pull memories out of their heads, put them into the Pensieve, and view them from a third-person perspective. In our non-magic, muggles’ world, I want to recreate this experience with the mobile phone as our wand. An app reads the photos on the phone and projects a list of places the user has been to. The user can choose a place and shake the photo into the Pensieve!
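One way to derive the list of places from the phone's photos is to cluster their GPS coordinates. A minimal sketch, assuming the app has already extracted (latitude, longitude) pairs from photo metadata; a real app would likely use reverse geocoding to give each place a name:

```python
def places_from_photos(photo_coords, precision=1):
    """Collapse photo GPS coordinates into a list of distinct places.

    `photo_coords` is a list of (latitude, longitude) pairs. Rounding
    to one decimal place (roughly 10 km) is a crude stand-in for the
    reverse geocoding a real app would use to name each place.
    """
    places = []
    seen = set()
    for lat, lon in photo_coords:
        key = (round(lat, precision), round(lon, precision))
        if key not in seen:  # keep first-visited order, drop repeats
            seen.add(key)
            places.append(key)
    return places
```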



Of Aedes and Anopheles

Viniyata Pany

A virtual space to experience and interact with mosquitoes.



As cities grow, native species become endangered, or even locally extinct. The most prominent – and rather singular – view of urbanization does not take into consideration our coexistence with wildlife. Of Aedes and Anopheles is a conversation centered around people's aversion to wildlife in urban environments.

The experience emulates, in VR, the natural occurrence of mosquitoes circling over one's head. What used to be a common occurrence has become almost unfamiliar to urban residents.


The Nature of Code

Private Data

Leon Eckert

Your private data is up for grabs – it is used to the benefit of corporations or people you have never heard of. In the case of my project, it is used for art.




The project consists of three parts, in each of which a different kind of sensitive data is used as a medium for creation.

[part 1]

Our devices are extremely sociable and – when they are not connected to a wifi network – very loud: they constantly shout out the names of their past partners, the networks they have been connected to before, in the hope of reuniting:

“I have been connected to Starbucks Wifi, are you anywhere out there? I have also been connected to nyuguest, are you anywhere out there? I have also…”

Not only is that a very private and sensitive kind of information about their user (a list of network names can say a lot about a person), but – in my opinion – it is also a very poetic one.

Air Poems is a program that listens to devices in its surroundings and forms poems from their words in real time – poems 100% extracted from the air.

At the show, this would run on a screen and use show visitors' network names as the medium for poetry.
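Capturing probe requests requires a wireless card in monitor mode (with a sniffer such as tcpdump or scapy); the poem-forming step itself can be sketched in Python, taking the overheard network names as a plain list. The phrasing echoes the "shouting" example above; everything else is illustrative:

```python
import random

def air_poem(network_names, lines=4):
    """Form a short poem from overheard network names.

    `network_names` stands in for the SSIDs extracted from nearby
    devices' probe requests; each chosen name becomes one line,
    echoing the devices' own calls to their past networks.
    """
    chosen = random.sample(network_names, min(lines, len(network_names)))
    body = "\n".join(f"I have been connected to {name}," for name in chosen)
    return body + "\nare you anywhere out there?"
```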

[part 2]

Another screen-based project shows live streams from insecure IP cameras across the world. This might show a street junction in Japan or a tennis court in California. Live streams seem real to us: honest, unfiltered information. In my case, the image is manipulated in real time, altering that “reality” and making the people in front of the cameras create drawings on my screen without their knowledge (video to follow soon). There are ways to interact with the video and flick through different drawing algorithms.
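One of the drawing algorithms could be as simple as frame differencing, turning anyone who moves in front of a camera into line work. A minimal sketch on plain nested lists of grayscale values; in the installation the frames would come from a public camera's MJPEG stream:

```python
def motion_drawing(prev_frame, frame, threshold=30):
    """Mark every pixel that changed noticeably between two frames.

    Frames are nested lists of 0-255 grayscale values; the result is
    a same-sized grid of 0s and 1s, where the 1s trace whatever (or
    whoever) moved between the two frames.
    """
    return [
        [1 if abs(a - b) > threshold else 0 for a, b in zip(prev_row, row)]
        for prev_row, row in zip(prev_frame, frame)
    ]
```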

[part 3]

The last part is another poetry project I am finalising right now. Like Air Poems, the output depends on time (who passes by when) and space. In this case, however, the space is defined by the physical device the program is executed on. The program creates poetic output using the computer owner's iMessage database as source material. The output is intimate yet not revealing: the algorithm processes words and brings them together in new ways, creating fictional narratives that never actually occurred in the message history. This piece would be shown as printouts of poetry created prior to the show, since visitors are unlikely to bring their own laptops with them.
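A minimal sketch of the recombination step in Python. On macOS the database lives at ~/Library/Messages/chat.db, with message bodies in the `text` column of the `message` table, though the schema varies across OS versions; the line structure here is invented for illustration and is much cruder than the actual algorithm:

```python
import random
import sqlite3

def imessage_poem(db_path, lines=4, words_per_line=5):
    """Recombine words from a message database into fictional lines.

    Reads every message body, shuffles the individual words, and
    deals them back out into short lines that never occurred in the
    real history.
    """
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT text FROM message WHERE text IS NOT NULL")
    words = [word for (text,) in rows for word in text.split()]
    conn.close()

    random.shuffle(words)
    return "\n".join(
        " ".join(words[i * words_per_line:(i + 1) * words_per_line])
        for i in range(lines)
    )
```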