
Final Project Documentation



google drive:


In this final project, I tried to use as many of the tools I learned this semester as I could. I used tracery to form the structure, along with rhymes, the corpora database, turtle, and a markovify model.

There are five outcomes, which I named “cat”, “pig”, “rose”, “snow”, and “sakura”. Each outcome is accompanied by a turtle drawing related to the text. The outcomes take the form of diaries; most of them are delightful, but there is one that will (probably) make people feel sad.

I’m more like an invisible person who can see all those diaries and the emotions and feelings their writers convey. Every time I refresh the cells, it’s like picking out one diary (kind of old-fashioned). You won’t know what comes next, and I find myself a little excited wondering which outcome it will be. I think it would be more interesting with more outcomes, which would bring much more variety.
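To show the kind of structure tracery gives you, here is a minimal sketch of a grammar expander in plain Python. The grammar, diary wording, and word lists below are illustrative stand-ins, not the actual project rules:

```python
import random

# Illustrative grammar in the spirit of tracery: each "#symbol#" in a rule
# is replaced by a random expansion until no symbols remain.
grammar = {
    "origin": ["Dear diary, today I saw a #animal# near the #place#."],
    "animal": ["cat", "pig"],
    "place": ["rose garden", "snowy hill", "sakura tree"],
}

def expand(symbol, grammar):
    rule = random.choice(grammar[symbol])
    # Replace each #symbol# reference with its own random expansion.
    while "#" in rule:
        start = rule.index("#")
        end = rule.index("#", start + 1)
        inner = rule[start + 1:end]
        rule = rule[:start] + expand(inner, grammar) + rule[end + 1:]
    return rule

diary = expand("origin", grammar)
```

Re-running `expand("origin", grammar)` behaves like refreshing the cells: each call picks a fresh diary.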


PS: Peppa’s code comes from here:


# 6

github:

.ipynb document in google drive:

This assignment took me a long time to finish, and it still doesn’t seem that good… I wanted to make a fairy-tale generator and make its form look like a poem (so that I could apply rhyme to it). I combined my #5 assignment, which used a markovify model, with word vectors. First I generated some texts and printed them out like last time. The problem I ran into is that after using a “for” loop to get a printed output, I couldn’t turn that output into a list or a string. Because of this, I had to give up my original idea, which was “replacing each noun with a rhyming synonym”. Instead, I inserted the pronunciation step before the final text was printed. Even though the word vectors seem to change the poem (story?) a lot, they actually occupy only two lines of the entire code.
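The print-only problem described above can usually be avoided by collecting the sentences first and printing later. Here is a sketch in plain Python, with a stand-in generator function where the markovify model’s `make_sentence()` call would go:

```python
import random

# Stand-in for something like model.make_sentence() from markovify.
def make_sentence():
    return random.choice([
        "The prince wandered the forest.",
        "A spell fell over the castle.",
    ])

# Collect the generated text into a list first...
lines = [make_sentence() for _ in range(5)]

# ...then it can still be printed, but also joined, filtered, or rewritten.
poem = "\n".join(lines)
for line in lines:
    print(line)
```

Because `lines` is a real list, each sentence can still be post-processed (for example, swapping nouns) before anything is printed.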

I got a very strange feeling while writing this code: word vectors seem to have millions of uses, but in practice I find them hard to use. Probably because they cannot satisfy “meaning” and “rhyming” at the same time. If I want the text to be readable and make sense, it’s hard to find a rhyming word; on the other hand, if I just focus on the rhyming part, the text becomes less logical.

There is so much going on in “word vectors”; I think the most important thing for me is how to make good use of them. Right now each of the functions is more or less isolated from the others. I’m very frustrated by this work… I’m so sad.


github:

google drive ( .ipynb document):

I actually thought a lot about how to combine two different things, but it frustrated me. Several of my ideas were rejected by the program…

In this assignment, I combined spaCy with a Markov model. For the Markov model, I used three texts: Cinderella, Sleeping Beauty, and Genesis. Then I used spaCy to identify the adjectives in my generated sentences and replace them with “fat”. I got very confused by a bug when I ran the spaCy step first: the Markov model couldn’t accept my new “fat” version of the story, so I moved spaCy to the second part of the program. Furthermore, my final output came out with each word on its own line, and I couldn’t make a list of it because I had not changed the doc. When I tried to use the replace function, it threw an error. I wish I could turn my output into a fluent single-line sentence.
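Both the adjective swap and the “fluent single line” goal can be done on plain strings once the tagging is finished. A sketch with hypothetical (token, part-of-speech) pairs standing in for spaCy’s output:

```python
# Hypothetical tagged tokens, standing in for
# [(tok.text, tok.pos_) for tok in doc] from spaCy.
tagged = [
    ("the", "DET"), ("beautiful", "ADJ"), ("princess", "NOUN"),
    ("slept", "VERB"), ("in", "ADP"), ("a", "DET"),
    ("tall", "ADJ"), ("tower", "NOUN"),
]

# Replace every adjective with "fat", then join into one fluent line
# instead of printing one word per line.
words = ["fat" if pos == "ADJ" else text for text, pos in tagged]
sentence = " ".join(words)
print(sentence)  # -> the fat princess slept in a fat tower
```

Joining the word list with `" ".join(...)` is what turns the one-word-per-line output into a single sentence.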


For efficient production, I’m staying faithful to Ableton’s software and interface, as the layout allows for quick changes in notes and layering, whereas Chrome Music Lab’s version is based on a wheel of major and minor notes. The addition of the drum machine below is also a great way to begin producing music. For experimentation, on the other hand, Chrome Music Lab’s version is great, as it lets you try out different notes on the fly without committing to them in your timeline. The Piano Keyboard wasn’t working for me on Chrome or Safari, so I didn’t really get a chance to experiment with it. From what I saw, there was no way to save your choices; as with Chrome Music Lab, both programs are on-the-fly experimentation tools, whereas Ableton is focused on production and layering. By saving the notes you choose and playing them over and over again, you’re able to layer and create more intricate and detailed harmonies and music overall.


Attached below are my two experimental harmonies. I’m especially proud of the Ableton one, and I can genuinely see it being implemented into future work. The ability to “Export to Live”, the downloadable software, is also a plus: you can play around, create something you like, and instantly get to work in the full-fledged software to produce music.



Assignment #4 3.12

github:

.ipynb document in google drive:


Sample poems:



Lightened immersive rancher
Eventually it deadened

Firing  humorously decelerates preparatory nervousness
Angelic infamy, however
Teared a bicycle that never sell

Bonus accelerate a transmitter that never increase
Unhappiness worsen the freestyle
Newest success
Nonsense legislation
Yearly  reload under the closeness



Happily  toughened
Unanimously  erased
Graciously  harmonized a mediator that never unbutton

Freestyle  accidentally rewinds a storey
Adjustable boasting, however
Thickened the standpoint

Drained in bolstered widget
Opposing tablespoon erodes



Kookily  widen the brainstorming that entwines
Improve the schoolboy
Standing Tuesday
Spiky fragmentation

Footing  supremely sickens a hoarding
Amorphous specification, however
Tired the terry that despawns

Crack a biology
The remorseless habitation

This acrostic program uses tracery. Originally I wanted to break down the words in the Bible to see what the poems would look like, but the text was too long, so I gave up and used the word lists we got in class, then googled some adverb lists to satisfy the needs of the poems. In these poems, the first and third stanzas (“I” and “FAT”) are fixed, but the second and fourth stanzas change. The second stanza is behaviors, and the fourth stanza is animals. I had to pick the kinds of words I wanted and limit the word choice using the “.startswith()” function. So I think this is a combination of tracery and a “fake” acrostic.
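The `.startswith()` filtering can be sketched like this in plain Python. The word list below is an illustrative stand-in for the class lists and googled adverb lists:

```python
import random

# Illustrative word list standing in for the lists used in the assignment.
verbs = ["freeze", "accelerate", "tighten", "fasten", "argue", "toughen"]

def acrostic(word, wordlist):
    """Pick, for each letter of `word`, a list word starting with that letter."""
    lines = []
    for letter in word.lower():
        candidates = [w for w in wordlist if w.startswith(letter)]
        lines.append(random.choice(candidates))
    return lines

poem = acrostic("FAT", verbs)
```

Each line of the result is constrained to begin with the right letter, which is the “fake” acrostic part; tracery then supplies the surrounding stanza structure.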

I think if a human did this same work, there would be fewer grammar mistakes, but it would take longer to finish the acrostic, because finding appropriate words takes time.

#3 2/26 mixed poem

.ipynb document:

GitHub:

I mixed three documents in this poem: “nouns.txt”, “adjs.txt”, and “sea_rose.txt”. Originally I wanted to do the “austen” one; however, I couldn’t break down the long sentences, and I don’t think the words would make sense broken into single words (there are long paragraphs and conversations going on, and the quotation marks are also hard to get rid of).

In some of the lines I randomly combine the adjectives with the nouns. Those words can be refreshed if users rerun the lines before them. I don’t want to make it a fixed poem, because I think the uncertainty, randomness, expectation, and curiosity that readers feel when they refresh the lines are more important.
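The random adjective–noun pairing can be sketched in a few lines of plain Python. The word lists here are illustrative stand-ins for the contents of “adjs.txt” and “nouns.txt”:

```python
import random

# Illustrative stand-ins for words read from "adjs.txt" and "nouns.txt".
adjs = ["pale", "salted", "drowned", "quiet"]
nouns = ["rose", "leaf", "harbour", "stem"]

def fresh_line():
    # Each call re-rolls the pairing, so rerunning the cell
    # gives the reader a new combination.
    return random.choice(adjs) + " " + random.choice(nouns)

line = fresh_line()
```

Calling `fresh_line()` again is the “refresh” the readers experience: the poem never settles into one fixed text.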



My original format is “A House of Dust”. I replaced the word choices in the list and inserted my personal interest, fat animals, into the poem. I also added some other word categories in forms similar to the given ones, such as clothes and wearing.
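The “A House of Dust” slot style boils down to nested random choices. A minimal sketch in plain Python, where the word lists (including the fat-animal and clothing categories) are illustrative stand-ins for the ones I swapped in:

```python
import random

# Word lists in the "A House of Dust" slot style; the fat-animal and
# clothing categories are illustrative stand-ins for my additions.
materials = ["dust", "snow", "straw"]
animals = ["fat cat", "fat pig", "fat hamster"]
clothes = ["a tiny scarf", "woolen socks", "a velvet coat"]

def stanza():
    return "\n".join([
        "a house of " + random.choice(materials),
        "     inhabited by a " + random.choice(animals),
        "          wearing " + random.choice(clothes),
    ])

print(stanza())
```

Adding a new category is just adding another list and another indented line to the stanza.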


Copy and paste to a .txt file

                                                                                                  |||||||||                                     |||||||||
                                                                                            |||||||||                                                  |||||||||    
                                                                                       |||||||                                                                 ||||||
                                                                                  ||||||                                                                           ||||||
                                                                               |||||                          |||||||||||||||||||||||||                                  |||||
                                                                            |||||                         |||||                      ||||                                 |||||
                                                                          |||||                         |||                              \\                                 |||||
                                                                        |||||                         //》》                   《《《 \\__________                |||||  
                                                                      |||||              ______// 》》                ______ ========]                     |||||  
                                                                     |||||             [_______ =========                                                        |||||
                                                                    |||||                                       @                 ____             ____                      |||||
                                                                   |||||                    // === \            ===#          //           //      \\                 |||||
                                                                   |||||                  //            |       ||           //        //            \\____// \ _            |||||
                                                                   |||||                //________/        ||       //          #===--                                  |||||
                                                                   |||||              //                       ||      #===--                                                |||||
                                                                    |||||           //                                                           //                               |||||
                                                                     |||||        //                           ||       ||                 ==//==                           |||||
                                                                      |||||                                   ||   ___#_     |     |        //                                 |||||
                                                                         |||||                          ==#~     ||        \__/\_    /                                  |||||
                                                                           |||||                            ||       ||                       ________                     |||||
                                                                              |||||                                  _________----------------                   |||||
                                                                                 ||||||                  =====----------                                     ||||||
                                                                                   |||||||                                                                          ||||||
                                                                                        |||||||                                                               |||||||
                                                                                             |||||||||                                               |||||||||
                                                                                                      ||||||||||||                           ||||||||||||

Inspiration for Our Final Project – We Got Sole

Our inspiration for this project was the Nike Air Mag — a shoe popularized by its appearance in the movie Back to the Future and reissued by Nike in 2015, with Michael J. Fox being the first to get a pair.

These shoes gained popularity for their unique, futuristic, out-of-this-world look and their unbelievable price tag.


What I love about the shoes is how the lights are so pretty and flow with the shoe, giving it a hollow-like feeling within; they are integrated into the design. This is something that Christshon and I strive for. We hope to make our project interactive by giving the user the ability to choose the colors displayed on their feet — hopefully through a button or potentiometer.


For our project, Christshon and I created a pair of shoes that are both fashionable and interactive. We were inspired by sneaker culture and wearable technology.

Out of all clothing items, we feel that shoes have the most potential when it comes to wearable technology. That being said, the only shoes that integrate fashion and technology are extremely overpriced and largely unobtainable for the average person. We feel that it’s time interactive fashion became accessible, and to prove it we made this pair of shoes, which illustrates how possible it is for designs like this to be made in a cost-efficient way. Although the amount of interaction between the wearer and the shoe is not very extensive at the moment, we feel that this is a good place to build from, and I am now much more aware of the possibilities within the realm of wearable tech.

Our Process:

Due to issues with shipping, our process was pushed back quite a few times, but we still found time to get it all done. The first step was making our user interaction diagram. This was our plan for how the user would get output and what output they would receive (LEDs). Next, I bought the shoes from a thrift store for only $4. After purchasing them from the Salvation Army, we saw that the shoes were pretty beat up, so we had to clean and then paint them in order for them to appear new. They came out beautiful, all thanks to Christshon. I entrusted him with this task because he used to paint shoes a lot back home.


His inspiration for the colors and shape was the “heavy-duty sneaker” look that many high-end brands like Balenciaga and Gucci have been popularizing lately. After the painting was finished and we uploaded our code to our Arduinos, we began wiring the LEDs and Arduinos to the shoes.




1. Arduino Uno

2. Jumper Wires

3. 10k potentiometer

4. New Balance Shoes

5. Neopixel RGB LED Ring

6. Velcro

7. Paint

8. 9V Battery

9. Solderable Breadboard

It took a lot of time and even more troubleshooting, but eventually we were able to get both shoes to operate smoothly. Then, we “created” a spot on the heel where the batteries would attach so they would not be in the way. We placed them behind the end of the shoe. Finally, we used wires to mimic shoelaces, as we felt it added to the futuristic vibe we were striving for. Now that the physical component was done, we put our focus on creating the video that we would play as part of our presentation.

The idea was to create a video that demonstrated the importance of sneaker culture while also highlighting the progress sneaker design has made since sneakers first came onto the scene. The video was created in Premiere Pro, and our plan was to build a makeshift screen to project it onto; however, we did not budget enough time to create the screen, so I decided to play the video off of Christshon’s laptop instead. We also went with this option because the video lost some of its quality through projection.

Overall, I feel that we did a very good job despite the adversities Christshon and I had to endure. To be honest, he has been one of the best partners I’ve had on a project, so thank you David for letting us team up. Christshon and I will probably try to dive into some more wearable tech, but we’ll see in the future. It was cool to see how Christshon’s knowledge of shoes and my knowledge of wearable tech bloomed together. I look forward to adding onto this project along with our given feedback, and to exploring more possibilities for art installations and wearable tech.

Collage of the process of the interaction:


Perceptron (Machine Learning)

For my machine learning project I created a perceptron, which is essentially code that functions similarly to a single neuron of a brain. After being given an X and Y input, the perceptron decides which group those coordinates belong to. In the canvas window there are dots sorted into two groups, which are the groups the perceptron attempts to divide the information into.
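To show the idea in miniature, here is a perceptron sketched in plain Python. This is not the project code: the data is a hypothetical stand-in for the canvas dots, labeled by which side of the line y = x they fall on:

```python
import random

random.seed(1)

# Hypothetical labeling rule standing in for the two dot groups on the canvas:
# group 1 if the point lies above the line y = x, else group -1.
def target(x, y):
    return 1 if y > x else -1

weights = [random.uniform(-1, 1) for _ in range(3)]  # w0 (bias), wx, wy
lr = 0.01  # learning rate

def predict(x, y):
    # Weighted sum of inputs, thresholded to a group label: the "neuron".
    s = weights[0] + weights[1] * x + weights[2] * y
    return 1 if s >= 0 else -1

def train(x, y, label):
    # Nudge the weights toward the correct answer when we guess wrong.
    error = label - predict(x, y)
    weights[0] += lr * error
    weights[1] += lr * error * x
    weights[2] += lr * error * y

points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
for _ in range(50):  # a few passes over the data
    for (x, y) in points:
        train(x, y, target(x, y))

correct = sum(predict(x, y) == target(x, y) for (x, y) in points)
```

After training, `predict` sorts most points into the right group, which is exactly what the dots-and-line visualization shows on the canvas.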

Here is my code




This is the link to my work. It’s not the most expressive, but I feel that it has the potential to be expanded upon in more interesting ways:

Auto-Adjusting Volume Headphones

-What is it?

It is a pair of headphones with an automatic volume controller and an LED showing the outside noise level.

This project has two parts: the p5 part and the LED part. The p5 part mainly controls the volume of the music playing in the headphones. For example, in a super noisy environment, the volume would automatically become higher to let the user listen more clearly, while in a quiet environment, the volume would become lower to avoid disturbing others.
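The core behavior is a mapping from ambient mic level to playback volume. Here is that mapping sketched in plain Python; the threshold values are illustrative assumptions, and the real project would feed in p5’s mic level:

```python
def auto_volume(noise_level, quiet=0.1, loud=0.8, min_vol=0.2, max_vol=1.0):
    """Map an ambient noise level (0..1) to a playback volume (min_vol..max_vol)."""
    # Clamp, then scale linearly: louder surroundings -> louder playback,
    # quiet surroundings -> quieter playback so others aren't disturbed.
    level = max(quiet, min(loud, noise_level))
    t = (level - quiet) / (loud - quiet)
    return min_vol + t * (max_vol - min_vol)
```

For example, a mid-level reading of 0.45 lands at a volume of 0.6, halfway between the floor and the ceiling.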

-For whom?

Everyone who frequently uses headphones.


Avoiding awkward situations and making life easier

Code Part:

p5- Full screen:

p5- Code:


LED & Arduino:

Step 1:

Step 2:

Step 3:

Food Have Feelings Too

All the documentation we have accumulated is located on this website link:

What is the project?

This is an interactive storytelling piece that includes anthropomorphic food made out of clay, photo sensors linked to an Arduino, and the p5.js editor. Users interact with the clay food, and the interaction, triggered by the Arduino and light sensors, activates pre-made animations made in After Effects.

Who is it for?

Everyone who enjoys food, grumpy old men, annoying teenagers, sad little boys, and our visually appealing world. Also, those who can relate to the heart-aching pain that comes when someone leaves or is taken out of one’s life.

Why Have We Made It?

We made this because we wanted to continue with the idea that was born for our hyper cinema projection mapping project. We also wanted to tell funny yet sad stories in a playful way using the skills we’ve learned so far. We wanted to say something about that inevitable truth, but do so in a playful and implicit way that seemingly skims over the true pain that it can cause an individual. We want to explore a new interaction with well-known and well-loved foods using the skills we have learned this semester.

How Does It Work?

We have three anthropomorphic animated food characters, created for the interaction to complement.

Storyline: Our three characters are a young, sad boy, a hormonal and annoying teenage girl, and a grumpy, old man. Using these archetypes of people in society, we are going to make scenarios using animation to create the reactions of these characters as the ones they like and love leave or are taken away.
Roger (Doughnut): A grumpy old man who’s super bitter about everything and very mad at humans picking up his family and brothers in arms, because it reminds him of his impending doom.
Raechel (Pizza): An annoying teenager who wants to do nothing but talk about her boyfriends and acts like she doesn’t care if you take her boyfriends away. But now, she has to deal with the harsh reality of life and loneliness.
Ronnie (Dumpling): A sad, lonely boy who has encountered too much loss in his life when it comes to his friends leaving. He has become jaded and thinks that inevitably everybody will leave him.
In the 30-second video, you will see our interaction working with the dumpling and the light sensors. The clip at the end is a snippet of one of our animations, which will be linked to one of the dumpling’s sensors.

So the interaction should work like this: the user is prompted by a sign that says something along the lines of “Pick up the food one at a time if you dare.” The user can start at any prototype and put on the headphones. It is pretty self-explanatory from there. One by one, the user will watch all of the animations and move on to the next prototype if they wish. Hopefully, nothing is broken after the user is done. There is a failsafe in the code so the interaction doesn’t get messed up if two foods are picked up at the same time.

Video of the Pizza Prototype Working:

Video of User Interaction With Pizza Prototype:

Video of Dumpling Prototype Working:


As we set up for the final presentation, we came across a lot of problems we didn’t expect. Our wires somehow kept breaking even with the soldering, and some stripped wires and kit wires kept breaking as well. As we tried to fix this, the dumpling and donut prototypes stopped working. We got the dumpling prototype working again, but it’s very fragile, and we might need to replace all the loose wires before the show to ensure it works correctly.

Final Code:

Arduino Code:

void setup() {
  Serial.begin(9600); // open the serial port the p5 sketch reads from
}

void loop() {
  int valueone = map(analogRead(A0), 0, 1023, 300, 500);
  int valuetwo = map(analogRead(A1), 0, 1023, 300, 500);
  int valuethree = map(analogRead(A2), 0, 1023, 300, 500);
  int valuefour = map(analogRead(A3), 0, 1023, 300, 500);

  // One comma-separated line per reading, matching split(',') on the p5 side.
  Serial.print(valueone); Serial.print(",");
  Serial.print(valuetwo); Serial.print(",");
  Serial.print(valuethree); Serial.print(",");
  Serial.println(valuefour);
}
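Arduino’s map() rescales each raw 0–1023 analog reading into the 300–500 range used for the thresholds. For reference, the same integer formula written out in Python:

```python
def map_range(value, in_min, in_max, out_min, out_max):
    # Same linear rescaling as Arduino's map(): a raw 0..1023 analog
    # reading becomes a value in the 300..500 threshold range.
    return (value - in_min) * (out_max - out_min) // (in_max - in_min) + out_min
```

So a sensor reading of 512 (about half the range) lands around 400, which is exactly the threshold the sketch compares against.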
p5.js/Atom Code:


var Ben;
var Julian;
var Jose;
var Camron;
let serial;
var options = {
  baudrate: 9600
};
var xData;
var yData;
var jData;
var sData;
var videoplay = false;
var benisplaying = false;
var julianisplaying = false;
var joseisplaying = false;
var camronisplaying = false;

function preload() {
  Ben = createVideo("");
  Julian = createVideo("");
  Jose = createVideo("");
  Camron = createVideo("");
}

function setup() {
  createCanvas(400, 400);

  serial = new p5.SerialPort();

  // Let's list the ports available
  var portlist = serial.list();

  // Assuming our Arduino is connected, let's open the connection to it
  // Change this to the name of your arduino's serial port"/dev/cu.usbmodem14101");

  // Register some callbacks

  // When we connect to the underlying server
  serial.on('connected', serverConnected);

  // When we get a list of serial ports that are available
  serial.on('list', gotList);

  // When we get some data from the serial port
  serial.on('data', gotData);

  // When or if we get an error
  serial.on('error', gotError);

  // When our serial port is opened and ready for read/write
  serial.on('open', gotOpen);
}

// We are connected and ready to go
function serverConnected() {
  print("We are connected!");
}

// Got the list of ports
function gotList(thelist) {
  // thelist is an array of their names
  for (var i = 0; i < thelist.length; i++) {
    // Display in the console
    print(i + " " + thelist[i]);
  }
}

// Connected to our serial device
function gotOpen() {
  print("Serial Port is open!");
}

// Ut oh, here is an error, let's log it
function gotError(theerror) {
  print(theerror);
}

// There is data available to work with from the serial port
function gotData() {
  var currentString = serial.readStringUntil("\r\n");

  if (currentString) {
    let values = currentString.split(',');
    xData = int(values[0]);
    yData = int(values[1]);
    jData = int(values[2]);
    sData = int(values[3]);
    // console.log(values[0]);
  }
}

function sensordetect() {
  if (videoplay == false) {
    if (xData >= 400) {;
      videoplay = true;
      benisplaying = true;
    } else if (yData >= 400) {;
      videoplay = true;
      julianisplaying = true;
    } else if (sData >= 400) {;
      videoplay = true;
      camronisplaying = true;
    } else if (jData >= 400) {;
      videoplay = true;
      joseisplaying = true;
    }
  }

  if (videoplay == true) {
    // Draw the frame of whichever video is playing
    if (benisplaying == true) image(Ben, 0, 0, width, height);
    if (julianisplaying == true) image(Julian, 0, 0, width, height);
    if (joseisplaying == true) image(Jose, 0, 0, width, height);
    if (camronisplaying == true) image(Camron, 0, 0, width, height);

    // Failsafe: reset only once every sensor reads "food put back down"
    if (xData < 400 && yData < 400 && sData < 400 && jData < 400) {
      videoplay = false;
      benisplaying = false;
      julianisplaying = false;
      joseisplaying = false;
      camronisplaying = false;
    }
  }
}

function draw() {
  sensordetect();
}



Final Project Documentation

The attached link is documentation of my project. Currently, there is stock footage used; however, I will be shooting my own footage later. There is also a sample spoken-word poem, which I will be swapping out for my own. I am currently waiting for my “voice actor” to record the poem so that I can input it.

This is still a work in progress, and thus I apologize for the various spelling errors and strange pacing.

This is a screen recording of what the user would see on the computer screen.



Prototyping update

Here is a link to the current prototype of my final:

I added the opening text, changed the photo of me to actual webcam footage, and added a partially complete ending.

Areas of success:

  • I cropped the webcam footage to focus less on the entire body. This helped direct user attention and in turn, offers a more impactful experience
  • I cropped the animation so that it goes outside of the screen borders, helping reduce “sticker effect”
  • Audio is better than last week, but can be a bit more powerful


Issues discovered:

  • Transition between the opener (context) and the story (picking up the phone) is extremely weak and needs to be worked on
  • If users are wearing headphones, picking up the phone will be confusing: an issue that needs to be resolved
  • A smoother transition is needed from the panic attack into closure
  • Veins are now animated to grow onto the body; however, it is not noticeable

Posenet w/ Gudetama

For this week’s assignment I decided to play with Posenet and the sample code given on the ml5 website.

Here is the code:

let video;
let poseNet;
let poses = [];
let skeletons = [];
var img;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  poseNet = ml5.poseNet(video, modelReady);
  poseNet.on('pose', function (results) {
    poses = results;
  });
  // Hide the video element, and just show the canvas
  video.hide();
  // loadImage (rather than createImg) so image() can draw it on the canvas
  img = loadImage("gudetama.png");
}

function modelReady() {
  select('#status').html('Model Loaded');
}

function draw() {
  image(video, 0, 0, width, height);
  background(244, 211, 255); // drawn after the video, so it covers the webcam
  drawKeypoints();
}

function drawKeypoints() {
  for (let i = 0; i < poses.length; i++) {
    for (let j = 0; j < poses[i].pose.keypoints.length; j++) {
      let keypoint = poses[i].pose.keypoints[j];
      if (keypoint.score > 0.2) {
        image(img, keypoint.position.x, keypoint.position.y, 250, 200);
      }
    }
  }
}

There were some issues. I couldn’t figure out how to use the specific points for each feature, like the left eye or right eye. It kept glitching and not working out, unfortunately. I also could not get the image I replaced the dots with to match perfectly to my face. I think there was something wrong with the png file I used that made it geared towards the bottom right of the screen.

Here is a video of the project with the webcam:

Here is what I did to cover up the issue:

So, as you can see, there’s something wrong with the png that alters the placement of the points that PoseNet defines. But I covered it up, and it’s cool that it still moves when you move your body.


@Stoker for helping with my code

@ml5js library examples for being the baseline of my example’s code


Algorithms run our lives because algorithms are in everything. Algorithms are just the equations that decide behavior. Every decision you make is based on your personal algorithm. Other algorithms, like the ones that predict weather and traffic, also run our lives because we make decisions based on what they say, whether or not they are always correct.

Especially with the rise of machine learning, algorithms become more and more important so that computers can learn how to do things, for example recognizing faces, as mentioned in the Joy Buolamwini TED talk. As machines learn to do more and more, it is important to look at how people decide what counts as what. The people who make these algorithms are not immune to the biased tendencies all human beings have, so it is important that they choose the data sets they use to teach machines carefully, with an eye toward keeping everything diverse.

They make these algorithms by setting up parameters and giving machines sets with things the machine has to identify and things it is not supposed to identify. The machine goes through the set and is told whether it identified things correctly or not. Then it learns from before, and when given a new set, it tries again with more success. With more diverse sets, computers will learn to recognize more, which is important for including everyone in these new technologies. This is what Joy Buolamwini was talking about when she advocated for taking selfies and sending them in, so people could make bigger sets for face-identifying robots. Through a community of people who want to make technology more inclusive, we can create sets that better teach computers.


progress on final project

Since I pivoted my idea, I realize that I am slightly behind schedule, but here is what I have completed so far.

I found images of lungs online, photoshopped them to avoid copyright issues, and then uploaded them to After Effects. There, I was able to create an animation of lungs during a panic attack. The best part is that the animation is transparent, so I can easily apply it to my PoseNet code. Here is the animation:


Here is where everything fell apart. While I am able to export the animation as a transparent video, the file is too large for p5. When I try to compress the file, it is no longer transparent. Furthermore, I am missing something in my PoseNet code and as a result, I cannot test with it. I keep getting an error that states “ml5 is not defined”.

Unsure what to do, I decided to test out the lungs animation by putting an image of myself behind it. I showed possible users this very strange prototype to get insights on what other design elements I need. Here is a gif of the “prototype” (ignore the weird photo of me)


What I discovered was that the lung design needed to be simplified. In particular, the part that connects to the throat needs to be removed, since users found themselves fixated on it and its placement. Furthermore, more design work is needed to connect the lungs to the rest of the body so they do not feel “glued” on like a sticker.

I am currently playing around with veins as a way to tie in the rest of the user’s body:


We Got Sole Diagram – Holly & Christshon


Here’s our user block diagram! So far, we have been coding through Arduino, making various patterns for our RGB LEDs and using a button to set them off. We decided not to use single-colored LEDs so we can make our own different colors. We also thought about adding a fingerprint scanner so the lights cannot be changed or activated by anyone other than the user who owns the shoes.

To take our RGB LEDs to the next level, maybe we could have potentiometers for the LEDs on the shoe and have the colors change depending on the range of the potentiometer.
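One way to sketch the potentiometer idea is to treat the reading as a hue and convert it to RGB. This is an illustrative Python sketch of the mapping, not the Arduino code we would actually run:

```python
import colorsys

def pot_to_rgb(reading, in_max=1023):
    """Map a 0..in_max potentiometer reading to an (r, g, b) color, 0..255 each."""
    hue = reading / in_max  # sweeping the pot sweeps the whole color wheel
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (int(r * 255), int(g * 255), int(b * 255))
```

Turning the knob from one end to the other walks the LED through the full hue circle, starting and ending at red.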

Also, for the design aspect, we are hoping to deconstruct a shoe and put the components inside.

Next thing on our agenda is to think of different textures for our video mapping that would look cool such as colorful amoeba or a sunset.

Volume Adjusting Part

My project is basically divided into two parts: the volume-adjusting part and the LED part. This week, I finished the volume-adjusting part in p5, since most music players, including mobile phones and laptops, have mics themselves. However, for the LED part, due to the bad weather, the LED screen I bought online is still shipping and is expected to be delivered by Monday. Therefore, I will do that part this week.



I have had a really busy week and did not get everything I wanted to get done for my prototype, but I still made progress.

These are my animatic clips for my projections:


I did not get a chance to finish up and record my poem yet, which is the basis of my project so without that everything else is at a halt.

The block diagram for this project is rather straightforward. There are not many options the user has: the story either pauses, if the interaction is not met, or it continues. The interactions will be based on different sensors and inputs the user will trigger.

This is my block diagram:



Search Giphy Get Images

For my project this week, I decided to use the giphy API. It was a bit complicated and I didn’t come out with what I thought I would. I wanted to use the search API to allow my user to search for anything in the giphy arsenal and have 3 gifs come up in my p5.js sketch. But I ran across a few problems. One problem I came across is that the JSON viewer in Chrome didn’t read the API response as images; it gave out specific URLs instead, much to my chagrin.


Because of this, I had trouble trying to extract the actual gif from the URL that was given to me. The loadImage function didn’t load the actual animated gifs, only the first still frame of each gif, because loadImage requires the link or image to be in ‘quotes’, and that changed the link.


So I got the images to load, but they all loaded on top of each other, because I had to draw them somewhere and (0, 0) was my best option. I think a nested loop would help place the images one after the other down the canvas, but I’m not sure how to do that specifically.
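One way to keep the three gifs from stacking at (0, 0) is to give each one its own y offset inside the loop; in the p5 sketch that would mean calling `image(img[i], 0, ys[i])`. Below is a minimal sketch of the offset math as a plain helper function (`layoutYOffsets` is a hypothetical name, not from the original code):

```javascript
// Compute a y offset for each image so they stack top-to-bottom
// instead of all landing at (0, 0). In a p5 draw() loop you would
// then call image(img[i], 0, ys[i]).
function layoutYOffsets(count, imageHeight, gap) {
  const ys = [];
  for (let i = 0; i < count; i++) {
    ys.push(i * (imageHeight + gap)); // each image starts below the previous one
  }
  return ys;
}

// With the 400x900 canvas, three 280px-tall gifs with a 10px gap fit:
console.log(layoutYOffsets(3, 280, 10)); // [0, 290, 580]
```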

I also found that the first image of the cat on the laptop does not change with the multiple searches no matter how many times I search for something different.


As you can see, the wood picture is the same throughout all the images I embedded.

Here is the code!

var pics = [];
var img = [];
var link = [];
var pig;

function setup() {
  createCanvas(400, 900);
  input = select('#keyword');
  var button = select('#submit');
  button.mousePressed(ask);
}

function ask() {
  var first = ''; // base search URL (omitted in the post)
  var rest = '&limit=3&offset=0&rating=G&lang=en';
  var url = first + input.value() + rest;
  // print(url);
  loadJSON(url, gotData);
}

function gotData(info) {
  pig = info;
  for (var i = 0; i < 3; i++) {
    pics[i] =[i].id; // (object path before [i] omitted in the post)
    // pics[i] = 'link';
    link[i] = '' + pics[i] + '/giphy.gif'; // (gif URL prefix omitted in the post)
    img[i] = loadImage(link[i]);
  }
}

function zhanxian(pic) {
  // pic.size(width, height);
  // rect(100, 100, 100, 100);
  image(pic, 0, 0);
}

function draw() {
  if (pig) {
    for (var i = 0; i < 3; i++) {
      zhanxian(img[i]);
    }
  }
}

This is a video of this working!

Here is the link!


@Cass for MAJORLY helping with my code!

@Helen for helping me debug my code!


Current Progress: Prototyping

After my materials finally arrived //  over a week later >:(    //, I was able to finally build a prototype of my lungs. I decided to use red latex balloons to create a “lung” since I thought the latex material would be the easiest to “blow up”, having the least resistance to my fan. I purchased a 12V fan off Amazon and while many customers claimed it to be strong, I knew that it might not be as strong as I needed it to be. Turns out, I was right! It was not strong enough at all.


With the help of Professor David, I was able to learn how to wire my 12V fan to a more powerful power supply. However, the fan was still not powerful enough to inflate or deflate my prototype.

Thus, I decided to pivot my idea and take this physical concept into AR. Earlier this week, Ellen had asked me if I was incorporating my love for webcams and gifs in my final project, and I began to wonder why I had not. I started to brainstorm ways my project could translate into a webcam form and thought of a digital mirror in which a user could see their “lungs” and “heart”. When the user did a task, they could see how these organs biologically changed due to anxiety (heart rate increase, etc.).

I decided to then prototype what that digital mirror could look like. Since I am going home for Thanksgiving, I knew I had a lot of time to create the animation during my 6 hour flights (12 hours total).

I wanted to learn exactly what the animation needed to look like. I used p5 to write a simple sketch that mapped on a heart and then a lung. While p5 claims that it can “tint” images to be transparent, the tint option does not work on gifs. Luckily, there are many online resources that can turn a gif transparent.

Here is my output (i cropped out my face cause it was not very flattering haha)


(above is a gif, click to play)


I learned that the organs need to be connected in some way that creates an implicit storyline, otherwise it feels weird. For instance, the heart just seems strange, and perhaps this is simply because it is reflected. Nonetheless, steps should be taken to create a more complete look. I also know that the design must be high quality and look as close to a real “organ” as possible. The heart seems to be more impactful than the lungs, which were lower quality and less realistic.

I tested it out with a user and they provided similar feedback. I will create an animation on my flight on Tuesday and be able to test during the break with more users!

Here is my block diagram for my new concept:


Prior Art – Tangible Course Search

There are two major categories to examine when considering prior art: physical interface inspiration and course search inspiration.

For the course search generally, I’m taking general organizational cues from Rensselaer Polytechnic Institute’s YACS student-designed course search as well as the existing NYU Class Search.


For physical filter interfaces and buttons, there is a more diverse field of prior art available. One of the most direct inspirations for the concept of a physical interface for digital search was the ITP project “Search Divides Us” made by Asha Veeraswamy, Anthony Bui, and Keerthana Pareddy. It has a higher quality physical construction than I can hope to achieve with my current skill set, but nonetheless the same concept is still there. I noticed it at the Maker Faire while volunteering a month ago.

Glowing Buttons

Prior Art: Final Project w/ Yulin & James

Terraform Table

“Tellart’s Terraform table enables users to ‘play God’. Located at London’s V&A Museum, projection mapping turns the giant sandpit into a rugged landscape, with mountains, valleys and lakes. Here’s the cool bit: thanks to a machine learning algorithm, the Table is able to read the height of the sand and respond to any changes. In short, this means you can dig a hole to form a lake, raise a hill to create a snowy peak, or smooth a river over to expand a forest.”

This is related to our project because this is an example of the user input and interactivity that we want to incorporate into our project as well. Although this is with VR and not video mapping, the interactivity is the same.


“Using two walls, a treadmill, and some nifty projection, director Filip Sterckx creates a virtual world for the musician Willow’s music video. As with most projection mapping projects, it’s the technique that charms here.

Singer Pieter-Jan Van Den Troost gropes at doors that aren’t really there, trots on the spot down imaginary stairs, and kneels pretending to be paddling in the sea. It’s all surprisingly lo-tech, and all the better for it.”

This is another example of immersive interactivity. While we might not go to that extreme, the interactivity portion of our project that will trigger the video-mapped animation is a vital part to make this project more interesting. And this one uses video mapping too! It’s really cool to have examples like this one to motivate us into doing great work!

Prior Art

Collectives & Projects dealing with gentrification

Most of these projects and collectives mainly deal with visual art and I thought that these visuals would serve as great inspiration for my animation aspect. It makes me think of how I can really be specific in the detail in regards to my experience in my community and what type of style I’m going for.

Poets on gentrification:

These are some spoken word pieces I found that talk about gentrification. I really found inspiration in the first one the most. I want to incorporate that performance aspect in the voice as much as I can so the effect can be greater on the listener.


I feel that there’s something sinister about globalization, about its historical context and the implications that it has for the present and future. Because even if imperialism and colonizing are out of fashion, we still live in a Western-centric world. English is the lingua franca. Language is a powerful instrument of control, and it was utilized by colonizers in Kenya to cause dissociation of students, whose indigenous language was deliberately undervalued in school, from their own cultural environment. Women of color still bleach their skin or get their jaws shaved down and their noses fitted with silicone to look like white women, to fit global, Eurocentric beauty standards. The desire for a global market has contributed to the emergence and continuance of unethical (to say the least) working conditions, for both Lithium miners and Amazon warehouse workers. Things are changing, sure, but globalization has taken its toll. I think that at least now the goal of globalization isn’t inherently malicious, but we tend to be detached from the situation, and might conclude that the ends justify the means.

One part of the globalization of technology in particular that I see as a positive is the spread of open-source hardware and software. This makes plain the inner workings of technology, a bit of an antithesis to the plain, enigmatic shell of the Amazon Echo. Arduino boards, for example, are open-source, available for anyone to recreate or modify in their own designs. It invites the consumer to take things into their own hands, to investigate and tinker, to figure out what makes their machines tick and to build their own for personalized or public use. The consumer is then elevated; they can become sovereign over technology. I believe in putting knowledge and power in the hands of the many, and I think that globalization that involves the unbiased sharing of information, such as open-source code, plays a pivotal role in achieving this.


Following the tutorial, I connected an array with the objects. I also assigned random colors to the bubbles. However, the problem I met is that when I tried to change the mode of the bubbles’ movement, there were always some problems, as I note in the code comments. Besides, I want to figure out how to change the color of the bubbles one by one when the mouse is pressed.
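The "change the color one bubble at a time" idea could be sketched like this: give each bubble a hit test, and in `mousePressed()` recolor only the bubble under the cursor. This is a minimal, illustrative sketch (the `Bubble` fields and `recolorUnderMouse` helper are assumptions, not the tutorial's code; the distance check mirrors p5's `dist()`):

```javascript
// Each bubble stores its own color and can test whether a point
// (e.g. the mouse) is inside it.
class Bubble {
  constructor(x, y, r, color) {
    this.x = x; this.y = y; this.r = r; this.color = color;
  }
  contains(mx, my) {
    const dx = mx - this.x, dy = my - this.y;
    return Math.sqrt(dx * dx + dy * dy) < this.r; // inside the circle?
  }
  recolor(c) { this.color = c; }
}

// Called from p5's mousePressed(): recolor only the bubble under
// the cursor, leaving the rest of the array untouched.
function recolorUnderMouse(bubbles, mx, my, newColor) {
  for (const b of bubbles) {
    if (b.contains(mx, my)) { b.recolor(newColor); return b; }
  }
  return null; // clicked empty space
}
```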


full screen:


Array and for loop

The game already uses both of them.

The array stores the bullets and enemies.

The for loop checks each of them every draw() call and tells them to act.

let i_fly;
let i_bb;
let i_em;

function setup() {
  createCanvas(400, 400);
  hardslider = createSlider(1, 3, 1);
  textAlign(CENTER, CENTER);
  input = createInput();
  input.position(135, 300);
  button = createButton('start');
  button.position(input.x + input.width, 300);
  i_fly = loadAnimation('images/spaceship.png', 'images/spaceship2.png');
  i_bb = loadAnimation('images/bullet.png');
  i_em = loadAnimation('images/enemy.png');
}

let score = 0;
let hname = "none";
let hscore = 0;
let page = 1;
let x = 200;
let y = 350;
let b = false;
let W;
let A;
let S;
let D;
let J;
let timer1 = 0;
let timer2 = 0;
let timer3 = 0;
let action = 5;
let pspeed = 4;
let espeed = 3;
let gspeed = 60;
let sspeed = 20;
let army = [];
let army_s = [];
let bullets = [];
let bullets_s = [];
let hp = 3;
let hard = 1;
let avatar;
let aangle;

function keyPressed() {
  if (key === "w") {
    W = true;
  }
  if (key === "a") {
    A = true;
  }
  if (key === "s") {
    S = true;
  }
  if (key === "d") {
    D = true;
  }
  if (key === "j") {
    J = true;
  }
}

function keyReleased() {
  if (key === "w") {
    W = false;
  }
  if (key === "a") {
    A = false;
  }
  if (key === "s") {
    S = false;
  }
  if (key === "d") {
    D = false;
  }
  if (key === "j") {
    J = false;
  }
}

function actionCheck() {
  if (A === true) {
    if (W === true) {
      action = 7;
    } else if (S === true) {
      action = 1;
    } else {
      action = 4;
    }
  } else if (D === true) {
    if (W === true) {
      action = 9;
    } else if (S === true) {
      action = 3;
    } else {
      action = 6;
    }
  } else {
    if (W === true) {
      action = 8;
    } else if (S === true) {
      action = 2;
    } else {
      action = 5;
    }
  }
}

function enemy(x, y, number) {
  this.speed = espeed;

  this.move = function() {
    if (army_s[number].position.y > 390) {
      hp -= 1;
      // print('hp = ' + hp)
    }
    for (let i = 0; i < bullets.length; i++) { // collision check
      /* let j = bullets[i].checkx()
      let w = bullets[i].checky()
      if (j > x-20 && j < x + 20 && w < y && w > y-40) */
      // rect(x-10, y-10, 20, 20);
      // y = y + this.speed;
    }
  };

  this.destroy = function() {
    army.splice(this.number, 1);
  };
}

function bullet(x, y, number) {
  // bullets_s[number].addAnimation('1', i_bb)
  this.speed = 10;
  this.move = function() {
    // ellipse(x, y, 10, 10);
    /* print('length ' + bullets_s.length)
    print('indext ' + number) */
    if (bullets_s[number].position.y < 0) {
    }
    // y = y - this.speed
  };
  /* this.checkx = function() {
    return x;
  }
  this.checky = function() {
    return y;
  } */
  this.destroy = function() {
    bullets.splice(this.number, 1);
  };
}

function start() {
}

function draw() {
  actionCheck(); // control start
  if (page === 1) { // in menu
    text("use WASD to control the spaceship", 200, 50);
    text("press J to shot", 200, 100);
    text("If enemy hit the ground three times,", 200, 150);
    text("you die", 200, 200);
    text("Please enter your name and start", 200, 250);
    text("High score :", 100, 350);
    text("Drag the bar to change difficulty", 200, 375);
  }
  if (page === 2) { // if in game

    if (action === 4) {
      // avatar.position.x = avatar.position.x - pspeed;
    }
    if (action === 1) {
      // avatar.position.x = avatar.position.x - sqrt(2*pspeed);
      // avatar.position.y = avatar.position.y + sqrt(2*pspeed);
    }
    if (action === 3) {
      // avatar.position.x = avatar.position.x + sqrt(2*pspeed);
      // avatar.position.y = avatar.position.y + sqrt(2*pspeed);
    }
    if (action === 7) {
      // avatar.position.x = avatar.position.x - sqrt(2*pspeed);
      // avatar.position.y = avatar.position.y - sqrt(2*pspeed);
    }
    if (action === 9) {
      // avatar.position.x = avatar.position.x + sqrt(2*pspeed);
      // avatar.position.y = avatar.position.y - sqrt(2*pspeed);
    }
    if (action === 6) {
      // avatar.position.x = avatar.position.x + pspeed;
    }
    if (action === 8) {
      // avatar.position.y = avatar.position.y - pspeed;
    }
    if (action === 2) {
      // avatar.position.y = avatar.position.y + pspeed;
    }
    if (J === true) {
      if (b === false) {
        b = true;
        bullets.push(new bullet(avatar.position.x, avatar.position.y - 15, bullets.length));
      }
    }

    if (avatar.position.x < 0) {
      avatar.position.x = 0;
    }
    if (avatar.position.x > 400) {
      avatar.position.x = 400;
    }
    if (avatar.position.y < 0) {
      avatar.position.y = 0;
    }
    if (avatar.position.y > 365) {
      avatar.position.y = 365;
    } // control end

    background(70); // character and ground generation
    // triangle(x - 10, y + 10, x + 10, y + 10, x, y - 20);
    fill(255, 0, 0);
    rect(0, 375, 400, 25);

    fill(255, 255, 0); // bullet generation
    for (let i = 0; i < bullets.length; i++) {
    }
    if (b === true) { // bullet reload
      if (timer1 < sspeed) {
        timer1 += 1;
      } else {
        timer1 = 0;
        b = false;
      }
    }

    fill(0, 0, 255); // enemy generation
    for (let i = 0; i < army.length; i++) {
    }

    timer2 += 1; // enemy reload
    if (timer2 === gspeed) {
      army.push(new enemy(random(40, 360), 0, army.length));
      timer2 = 0;
    }
    fill(0, 255, 0); // draw hp
  } // end of gaming loop

  if (page === 3) { // game over
    text("Game over", 200, 200);
    text("Your score is", 200, 250);
  }
}

Final Project – Interactive Shoes (Christshon & Holly)

For our final project we will be creating an interactive pair of shoes that allows for our wearer to express themselves through the colors (and potentially the sounds) produced by their shoes remotely. The goal is to also give the wearer the ability to program the LEDs to flash in different patterns that they see fit. Much of our focus will be put into the design of the shoes  and the placement of the components because we want them to look like something that is functional rather than a bulky science experiment. Our goal is to create a pair of shoes that allows the wearer to share a little bit of themselves in a fun and creative way so style and individualism will be big factors in our production. For this project we are going to have to figure out interesting ways to connect the components of our circuits in ways that don’t compromise the functionality of the shoes themselves. We also plan on using video mapping  to project a short video onto our shoes explaining our goals and highlighting the different possibilities.


Final Proposal


This is a video version of my material for presentation. My ppt version will have higher quality.

Basically, my project is a book with foldable paper stages between every page, so it can be video mapped. It is a story, a comic book, or a show using multiple techs. For now, it has two versions. The first one is to insert an Arduino into the physical body, which allows the acts to change when people turn pages. The second one is to use buttons to transition between chapters. The former one is cooler but also much harder.


My project basically has six parts, which are shown in the ppt. Some are motivated by my personal interest, and the book idea is from a video mapping show that was briefly introduced in Com Lab class. My initial ideas for background sound and animation are from a playful comic book called “Florence,” which is easy to get through yet tells a clear and beautiful story like a comic book. The connection part is there simply because it is needed in this case. With this form, the book can be used as a tool for education, a storyteller, or a gift. Based on the six parts of the book, I made a weekly schedule in the ppt. The estimated cost is also attached.

Orange Particles

Happy Halloween everyone!

For my project, I watched a couple of videos and tried to play around with them. Eventually, I just made a particle effect where circles fly out of one concentrated area and float upwards.

I wanted to be able to make it aim at the direction of the mouse, but I could not figure that out, so I just opted for the mouse to have an ellipse cursor.

I’m gonna have to work with ellipses and their speed, so learning how to make them come out at a certain speed is important knowledge to have.

Fall and Bounce Right Back Up

For my project I actually wanted to play around with one of my previous projects.

To jog your memory, this sketch basically just has a ball that changes velocity every time it hits a wall.

For this project I wanted to play with the idea of velocity and use a potentiometer to adjust the speed of the ball at any given time. However, I was unable to connect my Arduino as a controller, which halted my progress on the project. I tried for hours to get this down, but ultimately, no matter what I did, I am missing something that allows it to work.

I will keep working with Serial communication to understand this further, as I want to use it for my final project.

Brainstorming


1. Arduino-controlled game

Using both Arduino and p5 to make a game. The game appears in p5, and the player uses the Arduino to control it. The Arduino may use buttons, a tilt sensor, or a distance sensor.

E.g. 1. A golf game. Based on how long the player presses the button on the Arduino, the travel distance of the ball in p5 will change. Only if the player hits the ball into a specific area will he win.

E.g. 2. Using a distance sensor to control the spaceship in p5, trying to avoid all the obstacles and get points.
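The golf idea above boils down to one mapping: hold time in, travel distance out, capped at a maximum, plus a check for the winning zone. A minimal sketch, assuming illustrative numbers (the function names and the 2000 ms / 300 px constants are ours, not a design decision yet):

```javascript
// Map how long the Arduino button was held (ms) to the ball's
// travel distance, clamped at a maximum hold time.
function holdToDistance(holdMs, maxHoldMs = 2000, maxDist = 300) {
  const t = Math.min(holdMs, maxHoldMs) / maxHoldMs; // clamp and normalize to 0..1
  return Math.round(t * maxDist);
}

// The player wins only if the ball lands inside the target area.
function isWinningShot(distance, targetStart, targetEnd) {
  return distance >= targetStart && distance <= targetEnd;
}
```

On the hardware side the hold time would come from timing the button press in Arduino and sending it over serial.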


2. Online piano

This also relates Arduino and p5, but this time p5 controls the Arduino. There will be a piano keyboard on the screen. Once the person clicks a piano key, the beeper on the Arduino will begin making sounds. The beeper will make different sounds using different frequencies. Meanwhile, a small character will dance in p5 as the keys are pressed.


3. Puzzle

This one is only for p5. A picture will show up for 5 seconds, and then it will be divided into 9 or 16 pieces. Then you have 30 seconds to put them where they are supposed to be. The player doesn’t have to place the pieces exactly; as long as a piece is placed near its field, it will automatically snap into place.

4. Looking at me

I’ll select some famous artworks and put them together on the screen. Their emotions and eye focus will change as the player moves the mouse. If it is possible, there will be a camera to detect people’s faces, so all the figures in the artworks will look at the player.

5. Super fat cat

Interact with fat cats on the screen. Press the screen, and the direction the cat looks will change. The cat will behave differently depending on which part of its body the players press. Players can also feed the cat, etc.

Brainstorming for Final Project

  • Laser-light-activated instrument:

The instrument can be an electric board with photoresistors or some other types of light sensors built inside. People can use a laser pointer/pen to operate this instrument from a distance.

Idea #1

  • AR project–Stranger Things (ITP Floor ver.)

We can 3D scan the whole floor (or maybe just a part of the floor) and reconstruct a 3D setting, the “Upside Down,” based on reality. Then we can make this into an AR project so that people can explore the “Upside Down” of the ITP floor using an electronic device like an iPhone or iPad.

  • Using multiple iPhones to create a path for a character in a game.

Idea #3

I wanted to be a mean love guru, but being evil is hard

I decided to recreate an arcade game that I loved playing in middle school for this week’s assignment, putting a small cynical twist on it in honor of this TRICK or treat season.

Here is a version of the arcade game I am referring to. arcade game

Essentially, a user would place their hand upon one of the two hand prints on the console and their partner would place their hand upon the other hand print. The game would then “calculate” how strong their love was for one another.

There were other versions of this game that were one-player, in which the player would put their hand down on a similar hand print to determine if their crush liked them back or how hot they were.

Overrun by puberty, I loved games like this in middle school. I mean, for just 50 cents I could see if Jordan from 3rd period and I were really going to get married! (Of course, we did not haha).

I wanted to create a version of the one-player game, in which a user could determine how their current partner really feels about them. They would place their hand upon a box with a hand print and discover the answer on the computer screen.

Here is the box:


In the middle of the “hand print” is a light sensor. Once a hand covered the light sensor, the computer screen would show, through p5, how much that user’s lover actually cares about them. The box was easy to make; however, the following parts of the process are where things became complicated.

Before I begin discussing my process, I want to provide an overview of what my p5 code needed to have.

  1. Take in the light sensor data
  2. Recognize that the light sensor was covered
  3. Have an array of possible sentences (fortunes)
  4. Display a random sentence when the light sensor is covered

Getting the light sensor data was not a problem, and recognizing when the sensor was covered was not difficult either. The problem that emerged was having an array of sentences.

When I finally put together my initial code, I realized that when I covered the light sensor, p5 was receiving a range of numbers. It was considered covered anywhere from 30 all the way to 100. Yet the issue was that the reading would bounce around within this range even though my hand was still and covering the sensor; the light sensor must have been detecting small amounts of light. As a result, the code would continuously loop and display different sentences (fortunes). To better understand what I am referring to, watch the video below.

In order to tackle this issue, I decided that I needed to take the whole range of numbers associated with the covered sensor and treat it as a single value.

My initial thought was to declare the range in my p5 if statement, but that did not work. Then I decided to try to divide up the range, and that did not work either. I realized I needed to convert “not covered” into one number and “covered” into another number before the data even entered p5. Specifically, I needed to take analog data and make it digital. I went into my Arduino code and turned the reading into a boolean. This allowed me to have the desired covered/not-covered data input into p5. Here is the Arduino code:

arduino code
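The same analog-to-boolean idea can be sketched in JavaScript on the p5 side: collapse the noisy reading into a single covered/not-covered flag, and only pick a fortune on the moment the sensor *becomes* covered, which also stops the looping-fortunes problem. The threshold of 100 comes from the covered range described above; the function names are illustrative, not the project's actual code:

```javascript
// Covered readings sat at 100 or below; pick the threshold from
// your own sensor's observed range.
const THRESHOLD = 100;

function isCovered(analogReading) {
  return analogReading <= THRESHOLD;
}

// Edge detection: return true only on the transition from
// uncovered to covered, so a fortune is picked exactly once
// per hand placement instead of every frame.
let wasCovered = false;
function shouldPickFortune(reading) {
  const covered = isCovered(reading);
  const pick = covered && !wasCovered;
  wasCovered = covered;
  return pick;
}
```

Doing the thresholding in Arduino (as the post describes) and the edge detection in p5 splits the work cleanly between the two sides.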

Now that I was able to simplify how p5 was reading the light sensor, I decided to go back to getting the random fortunes to display.

However, my own luck and fortune were not on my side. p5 crashed, meaning that I had to redo my code (which was not that heartbreaking). However, for some reason, when I redid the code, my serial controller no longer wanted to stay on the serial port I was using and kept returning to my Bluetooth headphones. As a result, I kept getting an error that my serialEvent function was not working properly.

This resulted in a quick one-hour cry of frustration. Luckily, I was able to simply restart my computer, delete my serial controller app, redownload a new one, and get it working again. Still, I needed a break, and since this was day two of working on this project, I was very emotionally exhausted.

After the break, I was able to finally get it working!

Yet, as I kept playing around with it and testing with different graphics for the display, something terrible happened.

Basically, my light sensor started to think it was always receiving light. When I checked the serial monitor, it always showed “1”. Watch the video to see what I mean (also sorry for my gross room in the background):

I tried everything. I took the light sensor out of the box and tried covering it; nothing worked. At this point, I am assuming that either this is karma for trying to make people sad about love or (more likely) my light sensor came from a knock-off Arduino kit, so it may simply not be the best quality. I could add a new light sensor, but it was past midnight and the shop was closed, so I would be unable to solder it.

Update: I spent all night thinking about what went wrong, and perhaps it was because my Arduino code made “no light” false. That meant that if there was ANY source of light, it would report true. However, maybe it should be flipped, so that any amount of darkness reads as covered. It is currently 6am, but I’ll retry this after class. I really want to make this box work, and I have faith that I can be an evil love guru!

current fullscreen:

current edit:

current arduino code (now that I am iterating upon it):

new code


Human thinking is anything but linear. Making decisions as simple as what to eat for dinner can sometimes leave us stuck. We consider multiple choices and their consequences before making our decisions. My thought process is typically very skewed. I could start a task just for my mind to wander in every direction except the task at hand.

Hypertext has revolutionized convenience. Anything we want can be accessed with the click of a button; it’s almost too easy. Instead of flipping through book pages at a library or web pages on a computer, so much www information is already perfectly categorized and waiting to be found. Hypertext represents linear thinking. Every hyperlink is direct and straightforward, ready to take you from one place to the other, already predecided and predictable. Unlike human thinking, there is no exterior consideration of the consequences and outcomes of clicking the hypertext.

In a way, everything online is already accessible by URL. Every time we click the “enter” key or a button, that key or button is connected to a link that brings us to the website or place we are trying to reach. A way to make URLs more accessible is to make technology and technological knowledge more accessible and easier to learn.

Digitizing everything

I am definitely not a linear thinker. One second I could be thinking about getting some Dunkin Donuts coffee after school, then a random thought about kids choice award winner Willem Defoe pops up, and the next thing you know I am writing a screenplay about Willem Defoe waking up as a donut and becoming the Donut Man. This type of stream of consciousness is what makes me and billions of humans unique. Random events and topics popping in and out of consciousness are what make creativity a beautiful thing. And hypertext is great for that reason. Sometimes we don’t need to get to point A before we get to point B; sometimes we just need to get to point B without any stops, and hypertext is great for that. And since the plan is to digitize everything, hypertext is the best way to go. Although I don’t think everything can be digitized, it’s still our best bet.

Calling the Future

For this week, I made a slider controlled face that can represent a person’s mood!

I wanted to build off my last week’s self portrait, but couldn’t figure out how to get the slider controlled mouth to stay in front of the bounce. I also wanted to use the same slider to control the smiley face color, but couldn’t figure out how to use the same input for two outputs.
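Two notes on the problems above, sketched under assumptions (the `moodOutputs` helper and its color scheme are illustrative, not the sketch's real code). In p5, draw order decides stacking, so drawing the mouth after the bouncing element keeps it in front. And one slider can drive two outputs: read its value once per frame and derive each output from that single number:

```javascript
// Derive two outputs (mouth curve and face color) from one
// 0-100 slider reading.
function moodOutputs(sliderValue) {
  const t = Math.min(Math.max(sliderValue, 0), 100) / 100; // normalize to 0..1
  return {
    mouthCurve: 2 * t - 1,            // -1 = frown, +1 = smile
    faceColor: {                      // blue when sad, yellow when happy
      r: Math.round(255 * t),
      g: Math.round(255 * t),
      b: Math.round(255 * (1 - t)),
    },
  };
}
```

In the p5 `draw()` loop this would look like `const m = moodOutputs(slider.value());` followed by using `m.faceColor` for `fill()` and `m.mouthCurve` for the mouth's arc.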

Calling the future


full screen

calling the future


Try typing in other words.

Try holding your mouse.



// wall variables
var f = 0;
var c = [0, 10, 20, 30, 40, 50, 60, 70];
var frame = 0;
var a = 0;
var i = 0;
var bright = 100;

let info;
let button;
let displayText;
var textbright = 40;

function setup() {
  createCanvas(940, 875);
}

function updateText() {
}

function draw() {
}

function leftWall() {
  for (var i = 0; i < 940 / 50; i++) {
  }
}

function rightWall() {
  for (var i = 940 / 33; i < 940 / 20; i++) {
  }
}

function timer() {
  if (frameCount - frame == 5) {
  }
}

function display() {
}

Bouncing game

Initial model

However, it can only bounce once.

Second model

It turned out the mistake was that I confused pX with pY.

Third model

I added everything, but the score would not show.


I realized I did not add a function for updating scores.

However, the ball still gets stuck in the boards sometimes, which causes miscounting of the score.

I narrowed the Y range of positions that can trigger the bounce to reduce the bug, but in theory it is still possible for the ball to get stuck.

Thanks to Wuji for helping me find another bug in my simple game: because of the stroke weights of the lines, the visible lengths and the real lengths didn’t match. So I rewrote the real lengths to make the ball interact with the visible lengths.
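The two fixes described, folding the stroke weight into the paddle's effective edges and keeping the bounce trigger to a narrow y band, can be sketched like this (the `hitsPaddle` helper and the paddle fields are our illustration of the idea, not the sketch's actual code):

```javascript
// Collision test that uses the paddle's *visible* extent:
// the drawn length plus half the stroke weight on each side,
// and a narrow y band so the ball can't sink into the board.
function hitsPaddle(ballX, ballY, ballR, paddle) {
  const half = paddle.strokeWeight / 2;
  const left = paddle.x - half;                  // visible left edge
  const right = paddle.x + paddle.length + half; // visible right edge
  const band = ballR + half;                     // narrow trigger band in y
  return ballX >= left && ballX <= right &&
         Math.abs(ballY - paddle.y) <= band;
}

const paddle = { x: 150, y: 380, length: 100, strokeWeight: 8 };
console.log(hitsPaddle(200, 376, 6, paddle)); // true: inside the band and span
console.log(hitsPaddle(300, 376, 6, paddle)); // false: past the visible edge
```

On a hit, the sketch would flip the ball's y velocity and nudge the ball out of the band so the same frame can't re-trigger the bounce.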

Linear & Non Linear Thinking

Personally, I don’t think I am an exact linear thinker, but it is true that linear thinking often appears in my daily life. People like to interpret things simply based on the evidence they have, without considering other factors that may also play a role. Linear thinking is usually straightforward and clear. In subjects like math or physics, things happen exactly as they should, so linear thinking is very useful. However, in other fields such as history or psychology, there is no absolute connection between one factor and one result. My middle and high school teachers always told us to consider things from different aspects in order to avoid linear thinking.


Of course, computers are linear thinkers. Or in other words, they won’t consider factors outside the data we give them. Hypertext is a representation of linear thinking. Once you click the link, it leads you directly to the page. It is useful and convenient and doesn’t require a lot of thinking to use. Even though we need non-linear thinking in our real life, we also need things that are clear and easy to use in our online world.


Although a link is linear thinking, I don’t think the collection of URLs is linear. If someone searches for something on the internet, there will be millions of URLs to choose from. There will also be a lot of related information that may not exactly match the words he/she searched. Search engines are not linear thinkers.

Follow the Path

If I think something, I’ll usually pursue what I’m thinking. I’ll look into the topic, talk about it, do what I’m thinking about, etc…

In that sense, I believe I am a linear thinker. If I think about something I’ll find a way of pursuing it further, I’m honestly not even sure if I can describe any other way of thinking.

I feel that computers themselves, while not being living beings, are also linear thinkers. You give an input, they give an output; there is no in-between. I also believe that this is beneficial in general: it allows us to think with and use computers as a thinking tool. An example of this is the use of URLs. URLs also support a linear way of thinking: with a simple click, they can provide you with an article, a download, or a video related to what the URL implied to you. In a way, through URLs we have most information given to us with ease. We can go into a search engine (Google is the best) and look up a topic we’re interested in, and after we input it, we get thousands of URLs related to that single topic, providing us with information, downloads, videos, and text posts on what we wanted to know. I feel like this is one of the most useful tools we have on the internet, and I personally don’t see anything additional it would need. I believe that if it were NOT linear, it would be more of an obstacle than anything: it would make it harder for us to immediately access the information we crave, and perhaps we would lose interest before we could even obtain it.

I’m interested to see if any of my classmates feel differently, but personally, as a “Linear Thinker”, I find URLs a great tool that helps us out.

Digital Selfie



function setup() {
  createCanvas(400, 400);
}

function draw() {
}









Rube Goldberg – Ethan and Sama


As arranged with the other groups, our input and output were both going to be servos. Because of the simpler nature of our input and output, we wanted to build something really cool for the in-between stage. After more than an hour and a half of brainstorming and ideating, we came up with the idea of a sailboat race inspired and powered by a blower fan, seen below in its final position, that we found on the discard shelf.

Blower Fan

Build Process

We started by figuring out what we could use as a water basin. After testing a couple of different things we found on the junk shelf, we settled on my Arduino case, as it had waterproof plastic ‘lanes’ that we could use as channels for each boat. Next, we started creating boats that could both float on the water and be properly pushed by the air. This was an especially long and laborious process, as it seemed that any design we came up with either sank, flipped over in the wind, or otherwise stopped working in some way. Eventually we came up with the boats seen below.


While Sama and a few helpers we found around the shop (namely Zoe) worked on boat development, I worked on the electronic components of the project. These included two ultrasonic sensors, LEDs, and a servo. The ultrasonic sensors were there to determine the winner of the boat race and light up the corresponding LED before triggering the next stage of the race. The plan was originally to use one ultrasonic sensor to detect both boats, but that turned out to be quite inaccurate, so we used two sensors. The final sensor and LED configuration can be seen below.

Ultrasonic Sensors

The programming was a fairly simple comparison of the two values, triggering the next stage of the Rube Goldberg if it detects one of the two boats to be close enough.

//Library Includes
#include <Servo.h>

//Pin Declaration
const int trigPin1 = 9;
const int echoPin1 = 10;
const int trigPin2 = 7;
const int echoPin2 = 8;
const int LED2 = 12;
const int LED1 = 13;
const int servoOutput = 6;
Servo servo;

//Variable Declaration
long duration;
int boat1Distance;
int boat2Distance;
bool raceOver = false;

int servoAngle = 0;

void setup() {
  //Ultrasonic Pins
    pinMode(trigPin1, OUTPUT); // Sets the trigPin1 as an Output
    pinMode(echoPin1, INPUT); // Sets the echoPin1 as an Input
    pinMode(trigPin2, OUTPUT); // Sets the trigPin2 as an Output
    pinMode(echoPin2, INPUT); // Sets the echoPin2 as an Input
  //LED Pins
    pinMode(LED1, OUTPUT);
    pinMode(LED2, OUTPUT);
  //Servo Attachment
    servo.attach(servoOutput);
  //Open Serial Port
    Serial.begin(9600); // Starts the serial communication
  //Put servo in base position
    servo.write(servoAngle);
}

void loop() {
  if (!raceOver) {
    //BOAT ONE Distance Sensing
        // Clears the trigPin1
        digitalWrite(trigPin1, LOW);
        delayMicroseconds(2);
        // Sets the trigPin1 on HIGH state for 10 micro seconds
        digitalWrite(trigPin1, HIGH);
        delayMicroseconds(10);
        digitalWrite(trigPin1, LOW);
        // Reads the echoPin1, returns the sound wave travel time in microseconds
        duration = pulseIn(echoPin1, HIGH);
        // Calculating the distance
        boat1Distance = duration * 0.034 / 2;
    //BOAT TWO Distance Sensing
        // Clears the trigPin2
        digitalWrite(trigPin2, LOW);
        delayMicroseconds(2);
        // Sets the trigPin2 on HIGH state for 10 micro seconds
        digitalWrite(trigPin2, HIGH);
        delayMicroseconds(10);
        digitalWrite(trigPin2, LOW);
        // Reads the echoPin2, returns the sound wave travel time in microseconds
        duration = pulseIn(echoPin2, HIGH);
        // Calculating the distance
        boat2Distance = duration * 0.034 / 2;
      Serial.print("Boat 1 Distance: ");
      Serial.println(boat1Distance);
      Serial.print("Boat 2 Distance: ");
      Serial.println(boat2Distance);
      //If the race is over
        if (boat1Distance < 9 || boat2Distance < 9) {
          raceOver = true;
          if (boat1Distance < boat2Distance) {
            digitalWrite(LED1, HIGH);
            Serial.println("Boat 1 Wins!");
          } else {
            digitalWrite(LED2, HIGH);
            Serial.println("Boat 2 Wins!");
          }
        }
  } //End if race not over
}

/* Sources for Code:
Ultrasonic Sensor HC-SR04 and Arduino Tutorial */

Finally, we worked the input and output into the mechanism. For the input, we had a block fall down, triggered by an Arduino servo, which completed a circuit to turn on the fan. For the output, we had a servo that moved a piece of wood to hit the joystick that the next group was using as an input.


In the end, however, all this work was put in jeopardy because the power supply unit we were using suddenly decided that the fan wanted 26 volts of DC power instead of the 12 volts it had been happily providing for the seven hours beforehand. As far as I can tell, it shorted the internal circuitry of the fan just as we were doing our final tests of the night. We have a few different options for where to go from here. One is to buy a new fan ($8-15 on Amazon) and have it shipped here before Wednesday. Another is to find two smaller fans that we could possibly incorporate. Or we can use one large, constantly blowing fan and two servos to hold the boats back from going before the circuit has been completed.


Thanks to Zoe, Josh, Andri, Katie, Ruyi, and some ITP people for their help throughout the process.

Our Goldberg Piece (Aproova and I :) )

We created a machine that when given a light input, will roll a ball down a slide.

We started with a drawing:


At first we were going to have a pulley system, but then we decided that a flag would not be a sufficient output, so we created a system in which, when the motor turned, a little flap on the end would swing back and then forward to knock the ball ahead.

Here is the sketch!


And then we started testing it out!

First, to make sure that we got the light sensor plugged in correctly, we tested out the light sensor by seeing if we could light up an LED.

After we got the light sensor working, we hooked it up so that when the light sensor was activated, the motor would turn.


After that, we created a little flap that attached to the motor so that something would come in contact with the ball. At the same time we worked on creating a slide so that the ball would have something to roll down. After engineering all our parts together, we were able to create our piece of the Rube Goldberg Machine!

Here is our slide and the platform it would be later attached to.


And here is our video of our fun chicken slide in action!

Alison & Amin: Group 9

For our piece of the Rube Goldberg Machine, we decided to use an ultrasonic sensor for our input and an LED for our output. After hearing that group 8 was doing something that “pops up” and group 10 was using a light sensor, we decided the best way to go would be an ultrasonic sensor for motion and an LED for light.

First, we created a schematic for our machine.


Next, we assembled our machine.


Then, we created the code. We used pins 7, 11, and 13 for the LEDs, trigger pin, and echo pin respectively. The trig pin receives a 10 µs signal from the Arduino to begin the ultrasonic sensor ranging. Next, the sensor sends out a cycle of 8 bursts at 40 kHz and assumes its maximum radius. The echo line is the width of an echo pulse, which is equivalent to the distance to the object. Before detecting an object, the echo line starts at the sensor’s maximum detectable radius. After detecting an object, the sensor lowers its echo line to give a pulse width in µs.

Pulse width is then converted to distance with the formula: distance = (traveltime/2) x speed of sound or cm = (duration/2)/29.1. Travel time/duration is divided by two because the wave had to be sent out, hit the object, then return to the sensor.

We chose a distance of 10 cm to set off the LED because it was close enough that other items would not be likely to affect the sensor, but far enough that the item would still be easily detectable. Our if/else statement says that if an item is farther than 10 cm, the LED is off; if an item is closer than 10 cm, the LED is on.
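The conversion and threshold described above can be sketched as two small functions. This is an illustrative C++ sketch (the function names are mine, not from our actual Arduino code), using the formula cm = (duration/2)/29.1:

```cpp
// Convert an ultrasonic echo pulse width (microseconds) to distance in cm.
// Divide by 2 for the round trip, then by 29.1 us per cm of sound travel.
float pulseToCm(unsigned long durationMicros) {
    return (durationMicros / 2.0f) / 29.1f;
}

// The LED turns on only when the detected object is closer than 10 cm.
bool ledShouldBeOn(unsigned long durationMicros) {
    return pulseToCm(durationMicros) < 10.0f;
}
```

For example, a 582 µs pulse works out to exactly 10 cm, right at the edge of the threshold.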

Lastly, we set our results to be printed in the serial monitor.


Here’s the device!

As you just saw in the video, as the hand gets closer, the serial monitor shows the cm decreasing. When the hand gets farther, the cm count increases.


Complete Guide for Ultrasonic Sensor HC-SR04 with Arduino



The Body

A few weeks ago, in a previous reading, we discussed how our hands contribute to our understanding of the world around us. This week we are expanding on this discussion and evaluating our entire body as an information synthesizer. While this week’s reading focused on examples like embodied cognition, there are many other ways in which our bodies directly provide us knowledge about our surrounding environment. Our bodies are constantly taking in information to create a spatial map of our surroundings. In order to read my writing, you need to look at a screen (a computer or smartphone); yet, as you look upon this screen, you can be confident that the space behind you has not changed.

How can you be confident about that? Your eyes are fixated on the screen and you literally cannot see what is behind you. Our bodies are constantly collecting sensory information that our minds evaluate to create a map of our surroundings. Since the sensory information your body is collecting has not dramatically changed, your mind can assume that your surroundings, even the things behind you, have not.

By collecting sensory information, our bodies allow us to navigate our world more efficiently. We do not need to stop before crossing the street to determine if the sidewalk is stable. Instead, if the motion with which our feet hit the ground does not align with our mental definition of a “stable” surface (for example, it wobbles or moves), our feet will inform our brains that the road may not be safe. As I write about the relationship of the mind and body, specifically how the body influences the mind, it becomes more and more evident that designers must take into account how our bodies gather information when creating powerful interaction designs. For the past two years I have studied cognitive science and simply assumed that what I know about memory is all I could apply to my UX designs. While it is important to recognize the limitations of a user’s memory, a designer must also be aware that reading and listening are not the only ways a user gathers information about a product.

Designers must also focus on the influence of emotion. Perhaps the most fascinating piece of information I have learned in my background in cognitive science is that memories are dependent on emotions. Unfortunately, your memory of an experience is never accurate. Instead, every time you remember an experience, you are actually simply remembering how you recalled the experience last time. Thus, whatever emotions you were feeling about the experience wash over the memory, greatly altering the memory. For example, childbirth is the most painful experience a woman will have to endure in her lifetime. However, during childbirth and continuously afterward, the new mother’s brain is flooded with dopamine. Why would the brain do that? It is because as the mother starts recalling this extremely painful experience, the extra dopamine ensures that she remembers it less negatively. As a result, the new mother would no longer remember childbirth as extremely painful and maybe even traumatic, and now will be willing to have another child.

By taking emotions into account, designers can be more aware of, and can better control, the experiences their users are having with their designs. For instance, if a designer wants the user to continue to use a product, they can add things that would spike the user’s dopamine level (like a pleasant sound or a funny meme) during the experience. They can tailor the notifications a user receives about a product (such as an app) to resonate with positive emotions, further ensuring that the user remembers the product positively.

One of the most valuable things I learned while taking a course on design thinking is that designers need to truly listen to users and at times even read between the lines, because users may not be able to vocalize what they are experiencing or need. It would be a shock if a user could vocalize what their body or mind was experiencing as they used a product, and thus, by having knowledge on the mind and body, designers can truly understand their users.

Expressivity Machine

For my assignment I attempted to create a device that would work similarly to an alarm clock, playing a pretty annoying tune while also flashing a light in unison.




Although I learned how to play the tune on command and to flash the light without using the “delay” command, I was not able to figure out a way to successfully combine these two into one piece of code, so that is something I will explore more.


The only way I could think of to sync the blinks and the flashes was to give them a simultaneous delay time; however, that’s exactly what I was trying to avoid, so I decided to look for another way.
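One common approach I've read about since is the millis() pattern: instead of blocking with delay(), every pass of loop() compares the current time to the last time each output changed, so the tune and the light can advance independently. Below is a sketch of the core of that pattern as plain C++, with the time passed in as a parameter so it can be tested off the Arduino; the `Blinker` name is mine, not from any library.

```cpp
// State for one non-blocking blinking output.
struct Blinker {
    unsigned long lastToggle = 0; // time (ms) of the last state change
    bool isOn = false;            // current output state
};

// Call this every pass of loop() with the current time (millis() on Arduino).
// Returns true if the output was toggled on this call.
bool updateBlinker(Blinker& b, unsigned long now, unsigned long intervalMs) {
    if (now - b.lastToggle >= intervalMs) {
        b.lastToggle = now;
        b.isOn = !b.isOn;
        return true;
    }
    return false;
}
```

With two `Blinker`s on different intervals, one could drive the LED and the other could step through the notes of the tune, without either blocking the other.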


On the left is my code for the flashing light, and on the right is my code for the alarm, which is triggered by a button press.

Musical LED

To create something expressive, I first attempted to build something with a potentiometer. I was going to have it so that when it hit a certain frequency, the LED would flash to the tune of “Twinkle, Twinkle”, a fun song that I really like because it was the first song I learned how to play on the cello.


Here is the soldering I did for the potentiometer:


I plugged in everything, but I realized I could not get the potentiometer to work, so I decided to scrap that idea and try with a button.


Here are some of the wirings I tried with the button:

Wiring 1

Wiring 2

I’m not sure what went wrong, because I tried a lot of troubleshooting but I could not get the LED to flash when I wanted it to. I was able to wire it so that the window showed me when I was pressing the button and displayed me changing the potentiometer. I’m still trying to see what I need to fix, but here is the code that I implemented.


Lazy people’s LED

My purpose is to make the LED light up when the surroundings get dark, so that people don’t need to turn on the lights themselves.

Video of the result–>  Lazy people’s led

And here is the code.


I classify the value of sensorReading into 5 levels of brightness in order to tell the Arduino that it should turn on the light when the brightness detected by the photocell is at, or lower than, level 3 (which is a little bit dark but not completely dark).
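The classification idea can be sketched like this. This is an illustrative C++ sketch, not my actual Arduino code: it assumes the usual 0-1023 analog reading range and splits it into five roughly equal levels (the exact thresholds in my sketch may differ):

```cpp
// Map an analog photocell reading (0-1023) to a brightness level,
// 1 (darkest) through 5 (brightest). Thresholds are illustrative.
int brightnessLevel(int sensorReading) {
    if (sensorReading < 205)  return 1;
    if (sensorReading < 410)  return 2;
    if (sensorReading < 615)  return 3;
    if (sensorReading < 820)  return 4;
    return 5;
}

// The LED turns on when it is at least a little bit dark (level 3 or below).
bool ledOn(int sensorReading) {
    return brightnessLevel(sensorReading) <= 3;
}
```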


I tried to have a disco but I failed

I began this week’s assignment with a vision to create a disco ball. My plan was to attach an RGB LED to a light sensor, so that when it was dark (the light sensor covered), the RGB LED would light up and change colors, alternating between green, red, and blue.

I began by first creating a system in which a regular LED would turn on when it is dark (or when the light sensor is covered). With help from sources online, I was able to create the required code.

I was fortunately successful in developing this simple dark activated LED. Here is a video of my LED working!

(Just realized how terrible this documentation is, and I have learned my lesson! My hand is forming a “cup” to cover the light sensor.)

Confident, I decided to create my little disco ball!

I added an RGB LED, rewiring my board so that each prong had a resistor and was attached to an individual output on the Arduino. I then added to my code, establishing the RGB LED and stating that when the light sensor detects that it is dark, the RGB LED will light up red, then delay, green, then delay, and then blue. I thought this would create a disco ball effect.

Here is a photo of the new set up (up close).


Here is the code I made! I was inspired by Sama’s code! So thank you Sama!


However, when I plugged in my Arduino, the RGB LED lit up for a brief moment and then did not light up again. It was then that I discovered I had accidentally shorted my RGB LED.

Without another RGB LED, I realized my disco dreams had to come to an end for now. I will try again later this week and hopefully have more success! Nevertheless, I was able to properly incorporate digital and analog I/O, which is a bit of a victory.



What matters to us depends on us as individuals. Each person has gone through a particular sequence of things leading up to this point, and all of those things have had an impact on their experience, thus altering what each person values. I may value time with my family; the person next to me may despise it. Both views are right for each of us because of our upbringings.

Who makes the decisions? The controversial subject of the self driving car can help understand what I mean.

As Patrick Lin says, the outcomes of all foreseeable accidents will be determined years before they even happen. Programmers will dictate what happens when two lives are in danger. So I don’t think it’s machines making decisions; it’s machines following orders. And the difference those orders can make is the difference between premeditated homicide and an instinctual reaction.

Then the legal implications come in: is the programmer of the code that instructs the car to save you responsible for the death of the person it couldn’t save? It’s all more complicated than we think it is.

A Switch that Requires No Hands

Since the challenge was to create a switch that could be activated without hands, I wanted to further push myself and see if I could create a switch that could be activated with another state of matter, specifically gas. I wanted to see if I could get the switch to be turned on by using air in some way. Below is my schematic.

My original idea was to create a pinwheel that would have a piece of copper on one of the wings. When air was blown onto the pinwheel, it would rotate the wing with the copper tape. The goal was that enough air would be blown to rotate the copper wing so that it could touch another piece of copper, which was attached to the rest of the circuit, in turn lighting up the LED. Here is a picture of the very flimsy pinwheel and the diagram I made for it.


However, after I constructed the pinwheel with a paper plate and hot glue, I realized that it was difficult to control the amount the pinwheel would rotate. Furthermore, the pinwheel was very flimsy and did not always rotate accurately. I then decided to pivot my design and create a wind tunnel.

The user would blow through the tunnel. At the end of the tunnel was a copper flap, which would fly upwards when air was blown through. The copper flap would then hit another copper panel taped to another surface, and this copper panel was connected to the breadboard. This can be seen in the photograph and videos below.

In the end, I was able to successfully blow through the tunnel and light up the LED. I feel like a similar system could be implemented in various medical devices, particularly those created for lung function tests.

I also soldered something! Here is an image of the two wires I was able to solder together! It may seem like a small task, but I am very proud of myself!


Using my leftovers’ aluminum foil as a switch…

After eating lunch, I realized that the aluminum foil used to wrap my sandwich is conductive and would make a great reusable resource for my switch.

This is my schematic diagram and visual diagram for my idea.

I created a lever that connects an aluminum foil path for the electrons to flow through when stepped on.

The materials I used for the lever were cardboard and aluminum. I wrapped the aluminum over the cardboard.

I found a flat, slightly raised board to use as a fulcrum and attached putty to the edge. The putty will keep the lever in place when stepped on.

After following my schematic and visual diagram, this was the outcome.

I attached the wires by poking them into the aluminum as shown above.

When the lever is at rest, it does not connect.

This was the final result. (click link above to see the video)

Look ma, no hands!



Like people, computers can only know about someone what that person has told them. My daily searches for the route to 721 Broadway, what food places are open at 1 am, and how to do laundry have probably led my computer to the conclusion that I am just one technologically dependent fish in a sea of technologically dependent web surfers. Society’s heavy reliance on technology has reached a point where some have begun to question whether technology is helping or enabling us. In less fortunate parts of the world where technology use is not as prominent, people are still able to survive. However, many are often less educated, less worldly, and less connected; many are unaware of the world beyond the town they were born in. Easier access to technology would change the world by giving every person a chance to learn and by creating a global wealth of knowledge from every background.

To me, a more inclusive device would have to be a part of the body: something that everyone is entitled to at birth and something that cannot be taken away at any point. However, this may not be ideal, as technology would then become a literal part of us. There is no ideal device because there is no standard for ideal. An ideal inclusive device would impact the lives of millions of people, and not everyone would be ready for that. The concept of a single technological device that would appeal to and be welcomed by every audience without any trade-offs can never exist.