Weather in a Jar

Chunhan Chen, Tianyi Xie

Put the real-time weather from your hometown into a jar and carry it with you wherever you go.


Design Concept:

Have you ever lived far away from home and gotten homesick? What if an object could 'physically' put your hometown's real-time weather into a jar on your table, letting you see your hometown's weather at a glance, anytime?

People who live far from home often bring something from home as a reminder or representation of their connection with their hometown, and the goal of 'Weather in a Jar' is to make that connection even stronger. A city's real-time weather reflects a very specific moment and location, which can create a unique connection between a person and their hometown regardless of physical distance and time zone.


Right now, the 'weather jar' and the Pepper's Ghost effect are working, assembled, and ready to show. Inspired by Chunhan Chen's Pepper's Cone ICM final project, we hope to combine our projects and display the real-time weather effect in 3D.

Next Step:

Technologies we are attempting:

To augment the visual display, we developed a web version of Pepper's Cone (originally created by Xuan Luo et al. in Unity) to produce a 360-degree hologram at lower cost. Technically, Pepper's Cone for Web uses custom GLSL shaders, pre-distortion via image processing, 3D scene building with three.js, and development in pure JavaScript. To render the scene to a distorted texture in real time, a buffered scene stores the models and environment settings as a buffer texture; vertex and fragment shaders then warp that scene using an encoded distortion map.
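The pre-distortion step can be illustrated with a minimal sketch: each output pixel on the flat display under the cone is mapped back, via polar coordinates, to a pixel in the rendered buffer. The function below is a hypothetical, uncalibrated version of that remap (the constants and mapping are illustrative assumptions, not the project's actual encoded map):

```javascript
// Hypothetical sketch of a Pepper's Cone pre-distortion lookup:
// an output pixel (u, v), in centered coordinates in [-1, 1], is mapped
// back to a source UV in the rendered buffer via polar coordinates.
// In the real system this mapping would live in a GLSL fragment shader
// and be read from an encoded map texture; here it is plain JavaScript.
function coneRemap(u, v) {
  const r = Math.sqrt(u * u + v * v); // radial distance from the cone apex
  const theta = Math.atan2(v, u);     // angle around the cone axis
  // Unwrap the cone: angle becomes horizontal position in the buffer,
  // radius becomes vertical position.
  const srcU = (theta + Math.PI) / (2 * Math.PI); // 0..1 around the cone
  const srcV = Math.min(r, 1);                    // 0..1 up the cone surface
  return [srcU, srcV];
}

// Example: a pixel straight to the right of center, halfway out.
console.log(coneRemap(0.5, 0)); // [0.5, 0.5]
```

In the web version this lookup runs per-fragment on the GPU, sampling the buffered scene texture, which is what keeps the distortion real-time.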


Introduction to Computational Media


Heather Kim, Katie Krobock

CyberScamp creates a unique, playful experience where interaction with a pup reaches into both the physical and digital worlds.


CyberScamp is a project connecting physical input to digital output. Based on the user's interaction with a physical stuffed animal, an animated dog responds in a p5.js sketch. These animations would likely be made in an outside program, then exported into p5.js. The project uses an Arduino, a stuffed animal dog, a force-sensitive resistor, and p5.js; the resistor sits in the back of the stuffed animal.

By assigning values to the levels of pressure exerted on the dog, we break those readings into ranges. The first range is very low pressure, to which the animated dog does not respond at all; he stays neutral. This animation might show the dog looking eagerly at the user, waiting for some kind of attention; because it is displayed when no pressure is being exerted (no user interaction), this scene encourages someone to come and interact with the project in the first place.

The second range is medium to high pressure, achieved by petting or patting the stuffed animal, which the animated dog responds to well: he would be very happy, possibly rolling on his back with his tongue out.

The third range is very high pressure, in case the user punches or squeezes the stuffed animal too hard. While this isn't an ideal interaction with the project, we feel it's necessary to add an output that addresses it: the animated dog would react poorly, looking sad, upset, and hurt. This upset reaction may last a bit longer than the happy one, but would eventually fade back and reset to neutral.
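The pressure-to-mood mapping described above can be sketched as a simple threshold function. The cutoff values below are placeholder assumptions (an FSR on an Arduino analog pin reads 0–1023, and the real ranges would need tuning against the actual sensor):

```javascript
// Classify a force-sensitive resistor reading (0-1023 from an Arduino
// analog pin) into the three interaction ranges described above.
// LOW_CUTOFF and HIGH_CUTOFF are illustrative assumptions, not calibrated.
const LOW_CUTOFF = 100;  // below this: no meaningful touch
const HIGH_CUTOFF = 800; // above this: squeezing or punching

function dogMood(fsrValue) {
  if (fsrValue < LOW_CUTOFF) return "neutral"; // waiting eagerly for attention
  if (fsrValue < HIGH_CUTOFF) return "happy";  // petting or patting
  return "upset";                              // too much pressure
}

console.log(dogMood(50));   // "neutral"
console.log(dogMood(400));  // "happy"
console.log(dogMood(1000)); // "upset"
```

In the p5.js sketch, the returned mood string would select which exported animation to play each frame.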


Creative Computing

Space between us

Elvin Xingyu Ou

"Space between us" is a spatial boundary that allows people from two separate space to communicate and interact through light.


“Space between us” is an exploration of human interaction at an architectural scale, focused on transforming individual experience into collaborative connection through light. The project consists of two screens suspended back to back with a light matrix embedded between them. The lights are driven by data from two cameras on opposite sides of the screens that capture movement, which is live-processed and displayed on the respective screens. Users are physically separated by the panels but visually perceive the other side's movement, as if looking through a filtered window.
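One simple way the camera-to-light pipeline could work is frame differencing: compare the current camera frame to the previous one and light up only where something moved. The sketch below assumes grayscale frames stored as flat byte arrays and a hypothetical brightness threshold; the project's actual processing may differ:

```javascript
// Frame differencing: mark pixels whose brightness changed by more than a
// threshold between two consecutive grayscale frames. The mask could then
// drive the corresponding cells of the light matrix.
// Frame format (flat Uint8Array) and threshold are assumptions for this sketch.
function motionMask(prev, curr, threshold = 30) {
  const mask = new Uint8Array(curr.length);
  for (let i = 0; i < curr.length; i++) {
    mask[i] = Math.abs(curr[i] - prev[i]) > threshold ? 1 : 0;
  }
  return mask;
}

// Example: one pixel brightens sharply between frames.
const prevFrame = Uint8Array.from([10, 10, 10, 10]);
const currFrame = Uint8Array.from([10, 200, 10, 10]);
console.log(Array.from(motionMask(prevFrame, currFrame))); // [0, 1, 0, 0]
```

Running this per camera yields two motion masks, one for each side of the boundary, each displayed to the viewer on the opposite side.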


Introduction to Computational Media, Introduction to Physical Computing