All posts by Sama Srinivas

Food Have Feelings Too

All the documentation we have accumulated is located on this website link: http://samasrinivas.hosting.nyu.edu/category/food-have-feelings-too/

What is the project?

This is an interactive storytelling piece that includes anthropomorphic food characters, food made out of clay, photo sensors linked to an Arduino, and the p5.js editor. Users interact with the clay food, and the interaction, triggered by the Arduino and light sensors, activates pre-made animations created in After Effects.

Who is it for?

Everyone who enjoys food, grumpy old men, annoying teenagers, sad little boys, and our visually appealing world. Also, those who can relate to the heart-aching pain that comes when someone leaves or is taken out of one’s life.

Why Have We Made It?

We made this because we wanted to continue with the idea that was born in our Hypercinema projection mapping project, and to tell funny yet sad stories in a playful way using the skills we’ve learned this semester. We wanted to say something about that inevitable truth, but do so in a playful and implicit way that seemingly skims over the true pain it can cause an individual, while exploring a new interaction with well-known and well-loved foods.

How Does It Work?

We created three anthropomorphic animated food characters for the interaction to complement.

Storyline: Our three characters are a young, sad boy, a hormonal and annoying teenage girl, and a grumpy, old man. Using these archetypes of people in society, we are going to build scenarios, using animation to create the reactions of these characters as the ones they like and love leave or are taken away.
Roger (Doughnut): A grumpy, old man who’s super bitter about everything and is very mad at humans picking up his family and brothers in arms, because it reminds him of his impending doom.
Raechel (Pizza): An annoying teenager who wants to do nothing but talk about her boyfriends and acts like she doesn’t care if you take her boyfriends away. But now, she has to deal with the harsh reality of life and loneliness.
Ronnie (Dumpling): A sad, lonely boy who has encountered too much loss in his life when it comes to his friends leaving. He has become jaded and thinks that inevitably everybody will leave him.
In the 30-second video, you will see our interaction working with the dumpling and the light sensors. The clip at the end is a snippet of one of our animations that will be linked to one of the sensors for the dumpling.

The interaction should work like this: the user is prompted by a sign that says something along the lines of “Pick up the food one at a time if you dare.” The user can start at any prototype and put on the headphones; it is pretty self-explanatory from there. One by one, the user watches all of the animations and moves on to the next prototype if they wish. Hopefully, nothing is broken after the user is done. There is a failsafe in the code so the interaction doesn’t break if two foods are picked up at the same time.

Video of the Pizza Prototype Working:

Video of User Interaction With Pizza Prototype:

Video of Dumpling Prototype Working:

Problems:

As we set up for the final presentation, we came across a lot of problems we didn’t expect. Our wires somehow kept breaking even with the soldering, and some stripped wires and kit wires kept breaking as well. While we tried to fix this, the dumpling and donut prototypes stopped working. We got the dumpling prototype working again, but it’s very fragile, and we might need to replace all the loose wires before the show to ensure it works correctly.

Final Code:

Arduino Code:

void setup() {
  Serial.begin(9600);
}

void loop() {
  // Read all four photo sensors and send the raw values as one CSV line
  int reading = analogRead(A0);
  int secondreading = analogRead(A1);
  int thirdreading = analogRead(A2);
  int fourthreading = analogRead(A3);
  Serial.print(reading);
  Serial.print(',');
  Serial.print(secondreading);
  Serial.print(',');
  Serial.print(thirdreading);
  Serial.print(',');
  Serial.print(fourthreading);
  Serial.println();
}
p5.js/Atom Code:


var Ben;
var Julian;
var Jose;
var Camron;
let serial;
var options = {
  baudrate: 9600
};
var xData;
var yData;
var jData;
var sData;
var videoplay = false;
var benisplaying = false;
var julianisplaying = false;
var joseisplaying = false;
var camronisplaying = false;

function preload() {
  Ben = createVideo("Ben.mov");
  Ben.hide();
  Julian = createVideo("Julian.mov");
  Julian.hide();
  Jose = createVideo("Jose.mov");
  Jose.hide();
  Camron = createVideo("Camron.mov");
  Camron.hide();
}

function setup() {
  createCanvas(400, 400);

  serial = new p5.SerialPort();

  // Let's list the ports available
  var portlist = serial.list();

  // Assuming our Arduino is connected, let's open the connection to it
  // Change this to the name of your Arduino's serial port
  serial.open("/dev/cu.usbmodem14101", options);

  // Register some callbacks

  // When we connect to the underlying server
  serial.on('connected', serverConnected);

  // When we get a list of serial ports that are available
  serial.on('list', gotList);

  // When we get some data from the serial port
  serial.on('data', gotData);

  // When or if we get an error
  serial.on('error', gotError);

  // When our serial port is opened and ready for read/write
  serial.on('open', gotOpen);
}

// We are connected and ready to go
function serverConnected() {
  print("We are connected!");
}

// Got the list of ports
function gotList(thelist) {
  // thelist is an array of their names
  for (var i = 0; i < thelist.length; i++) {
    // Display in the console
    print(i + " " + thelist[i]);
  }
}

// Connected to our serial device
function gotOpen() {
  print("Serial Port is open!");
}

// Uh oh, here is an error, let's log it
function gotError(theerror) {
  print(theerror);
}

// There is data available to work with from the serial port
function gotData() {
  var currentString = serial.readStringUntil("\r\n");

  if (currentString) {
    let values = currentString.split(',');
    xData = int(values[0]);
    yData = int(values[1]);
    jData = int(values[2]);
    sData = int(values[3]);
  }
}

function sensordetect() {
  // Nothing is playing yet: whichever sensor crosses the threshold first wins
  if (videoplay == false) {
    if (xData >= 400) {
      Ben.show();
      Ben.play();
      videoplay = true;
      benisplaying = true;
    } else if (yData >= 400) {
      Julian.show();
      Julian.play();
      videoplay = true;
      julianisplaying = true;
    } else if (sData >= 400) {
      Camron.show();
      Camron.play();
      videoplay = true;
      camronisplaying = true;
    } else if (jData >= 400) {
      Jose.show();
      Jose.play();
      videoplay = true;
      joseisplaying = true;
    }
  }

  // Failsafe: while one video plays, keep the other three hidden and stopped,
  // even if a second food is picked up at the same time
  if (videoplay == true) {
    if (benisplaying == true) {
      Jose.hide();
      Julian.hide();
      Camron.hide();
      Jose.stop();
      Julian.stop();
      Camron.stop();
    }
    if (julianisplaying == true) {
      Jose.hide();
      Ben.hide();
      Camron.hide();
      Jose.stop();
      Ben.stop();
      Camron.stop();
    }
    if (joseisplaying == true) {
      Camron.hide();
      Ben.hide();
      Julian.hide();
      Camron.stop();
      Ben.stop();
      Julian.stop();
    }
    if (camronisplaying == true) {
      Jose.hide();
      Ben.hide();
      Julian.hide();
      Jose.stop();
      Ben.stop();
      Julian.stop();
    }
  }

  // All foods are back in place: reset everything for the next interaction
  if (xData < 400 && yData < 400 && sData < 400 && jData < 400) {
    videoplay = false;
    benisplaying = false;
    Ben.stop();
    julianisplaying = false;
    Julian.stop();
    joseisplaying = false;
    Jose.stop();
    camronisplaying = false;
    Camron.stop();
  }
}

function draw() {
  sensordetect();
}

ALL PROTOTYPES WORKING!

Posenet w/ Gudetama

For this week’s assignment I decided to play with Posenet and the sample code given on the ml5 website.

Here is the code:

let video;
let poseNet;
let poses = [];
let skeletons = [];
var img;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  poseNet = ml5.poseNet(video, modelReady);
  poseNet.on('pose', function(results) {
    poses = results;
  });
  // Hide the video element, and just show the canvas
  video.hide();
  img = createImg("gudetama.png");
}

function modelReady() {
  select('#status').html('Model Loaded');
}

function draw() {
  image(video, 0, 0, width, height);
  // The background is drawn over the video on purpose, to cover it up
  background(244, 211, 255);
  drawKeypoints();
  //drawSkeleton();
}

function drawKeypoints() {
  for (let i = 0; i < poses.length; i++) {
    for (let j = 0; j < poses[i].pose.keypoints.length; j++) {
      let keypoint = poses[i].pose.keypoints[j];
      if (keypoint.score > 0.2) {
        fill(0);
        noStroke();
        // Stamp the gudetama png on every detected keypoint
        image(img, keypoint.position.x, keypoint.position.y, 250, 200);
      }
    }
  }
}

There were some issues. I couldn’t figure out how to use the specific points for each feature, like the left eye or right eye; it kept glitching and not working out, unfortunately. I also could not get the image I replaced the dots with to match up to my face. I think there was something wrong with the png file I used that made it drift toward the bottom right of the screen.
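Here is a rough, untested sketch of how I think the single-feature version should go, using the same poses[] structure as my code above: each keypoint carries a part name like 'nose' or 'leftEye', so you can filter for one feature and center the image on it.

function drawOnNose() {
  for (let i = 0; i < poses.length; i++) {
    for (let j = 0; j < poses[i].pose.keypoints.length; j++) {
      let keypoint = poses[i].pose.keypoints[j];
      // only react to the keypoint named 'nose'
      if (keypoint.part === 'nose' && keypoint.score > 0.2) {
        imageMode(CENTER); // center the png on the point instead of its corner
        image(img, keypoint.position.x, keypoint.position.y, 250, 200);
        imageMode(CORNER); // restore the default for everything else
      }
    }
  }
}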

Here is a video of the project with the webcam:

Here is what I did to cover up the issue:

So as you can see, there’s something wrong with the png that alters the placement of the points that PoseNet defines. But I covered it up, and it’s cool that it still moves when you move your body.

Attributions:

@Stoker for helping with my code

@ml5js library examples for being the baseline of my example’s code

“Automatic If Statement”

Algorithms run our lives just like society runs our lives. Many people never stop to think how much society’s rules and ideals have changed the way we think and act, and are ingrained into our way of life. I think algorithms can work the same way. We never stop to think about the ingrained stereotypes that are present in our daily lives.

Snapchat is a very popular app in my generation, and a lot of people enjoy using the filters that are offered. But I notice, and when I point it out to people, they notice it too, that the filters try to make your skin lighter, your eye color lighter. They feed into this type of white supremacist ideal that hasn’t really disappeared from our lives; it’s just changed its shape a bit. The belief that lighter skin and lighter eyes make you more beautiful is disguised in everything we see, so we don’t even notice that we feed into that belief as well. It’s in magazines and ads, and businesses know they can take advantage of the fact that people don’t question this anymore. It’s quite sad, really, when I see it and still feed into it. When you grow up in an environment that fosters a way of thinking, it’s not like you can just turn it off when you’re older. Things don’t work like that. Impressions and influences don’t just switch off. And it’s quite scary, I think.

Unfortunately, this same thing relates to Joy Buolamwini’s findings about the facial recognition system she was experimenting with at the MIT Media Lab. The program just wouldn’t register her face. It would register the face of everyone whose skin was lighter than hers, but not hers. This ingrained thing, I don’t even have the right word for it, leaves us at a loss. Because like I said, it’s hard to question and change your way of thinking when the society you live in deems it okay, and even something to be celebrated in some respects.

The people who make things like Snapchat or the algorithms in facial recognition systems are, I have to guess, predominantly white. We live in a predominantly white society, and usually the 1%ers and the politicians in our society are predominantly white as well. This ultimately affects, and has affected, how our society functions when it comes to academia and progress.

Search Giphy Get Images

For my project this week, I decided to use the Giphy API. It was a bit complicated, and I didn’t come out with what I thought I would. I wanted to use the search API to let my user search for anything in the Giphy arsenal and have 3 gifs come up in my p5.js sketch. But I came across a few problems. One problem is that the JSON viewer in Chrome didn’t read the API response as images; it gave out specific URLs, much to my chagrin.


Because of this, I had trouble extracting the actual gif from the URL that was given to me. Using the loadImage function, I could only load the first still frame of each gif rather than the animation, because loadImage requires the link to be passed as a string in ‘quotes’, and that changed the link.


So I got the images to load, but they all loaded on top of each other, because I had to call them somewhere and 0, 0 was my best option. I think offsetting each image inside the loop would help place the images one after the other, but I’m not sure how to do that specifically; a rough sketch of the idea is below.
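This is a sketch of the offset idea, assuming the loaded gifs end up in an array like the img variable in my code below; shifting each image down by its index should lay them out one after the other instead of stacking them.

function drawStacked() {
  for (var i = 0; i < img.length; i++) {
    if (img[i]) {                // skip gifs that haven't finished loading
      image(img[i], 0, i * 300); // one 300px-tall row per gif
    }
  }
}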

I also found that the first image, the cat on the laptop, does not change across searches, no matter how many times I search for something different.


As you can see, the wood picture is the same throughout all the screenshots I embedded.

Here is the code!

var pics = [];
var img = [];
var link = [];
var pig; // last JSON response, so draw() knows when data has arrived
var input;

function setup() {
  createCanvas(400, 900);
  input = select('#keyword');
  var button = select('#submit');
  button.mousePressed(ask);
}

// https://api.giphy.com/v1/gifs/search?api_key=OJy4WfGXwJS5T4PCn3HB0MwyzxJko9I0&q=&limit=25&offset=0&rating=G&lang=en

function ask() {
  var first = 'https://api.giphy.com/v1/gifs/search?api_key=OJy4WfGXwJS5T4PCn3HB0MwyzxJko9I0&q=';
  var rest = '&limit=3&offset=0&rating=G&lang=en';
  var url = first + input.value() + rest;
  img = []; // clear out the gifs from the previous search
  loadJSON(url, gotData);
  // print(url);
}

function gotData(info) {
  pig = info;
  for (var i = 0; i < 3 && i < info.data.length; i++) {
    // build the direct media URL from each result's id
    pics[i] = info.data[i].id;
    link[i] = 'https://media.giphy.com/media/' + pics[i] + '/giphy.gif';
    print(link[i]);
    loadImage(link[i], zhanxian);
  }
}

function zhanxian(pic) {
  img.push(pic); // store each gif as it finishes loading
}

function draw() {
  background(220);
  if (pig) {
    for (var i = 0; i < img.length; i++) {
      image(img[i], 0, 0); // they still stack at (0, 0)
    }
  }
}



This is a video of this working!

Here is the link!

https://editor.p5js.org/samasrinivas/full/H1ScdK10m

Attributions:

@Cass for MAJORLY helping with my code!

@Helen for helping me debug my code!


Final Project Progress

This week I started to draw out the final food for our animation, so we can start animating that one.

(Image: my drawing of the dumpling)

I also started to make the pizza and donut foods out of clay, but they are not done yet. We decided to make four slices of pizza and four donuts to make our lives easier.

Here are a few pictures that I am using as reference.

(Reference image: life-sized donuts made out of clay)

(Reference image: pizza made out of clay)

Photo Booth

What I decided to do for my project this week was play with the live webcam that was introduced in last class. What I did was really simple: following a Coding Train tutorial, I created a button that allows ‘snapshots’, or paused images, of the live cam.

Here’s the code:

let capture;
let button;

function setup() {
  createCanvas(640, 480);
  capture = createCapture(VIDEO);
  capture.size(320, 240);
  capture.hide();
  button = createButton("PAUSE");
  button.mousePressed(takesnap);
}

function takesnap() {
  image(capture, 0, 0, width, height); // freeze the current frame on the canvas
  filter(THRESHOLD, 0.3);              // high-contrast black and white effect
}

Things I wanted to try but for some reason didn’t work:

  • try to add another video on the same canvas
  • try to make the live cam have another effect where it distorts the image / plays with the pixels (a rough sketch of one approach is after this list)
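For the pixel distortion, here is a rough, untested sketch of one approach I might try next, called from a draw() loop: draw the capture in thin horizontal strips and nudge each strip sideways by a random amount, using the nine-argument version of image(), which takes a destination rectangle and then a source rectangle.

function drawGlitched() {
  for (let y = 0; y < capture.height; y += 8) {
    let offset = int(random(-15, 15)); // horizontal jitter for this strip
    // image(img, dx, dy, dWidth, dHeight, sx, sy, sWidth, sHeight)
    image(capture, offset, y, capture.width, 8, 0, y, capture.width, 8);
  }
}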

I’ll keep trying this stuff out as I watch more tutorials! Look out for an updated version!!

full screen link: https://editor.p5js.org/samasrinivas/full/B1cVgXLTQ

@attributions

@The Coding Train! I followed the tutorial to add a button to the live video to create the photobooth effect!

@the p5.js library!

Prior Art: Final Project w/ Yulin & James

https://www.creativebloq.com/video/projection-mapping-912849

Terraform Table

“Tellart’s Terraform table enables users to ‘play God’. Located at London’s V&A Museum, projection mapping turns the giant sandpit into a rugged landscape, with mountains, valleys and lakes. Here’s the cool bit: thanks to a machine learning algorithm, the Table is able to read the height of the sand and respond to any changes. In short, this means you can dig a hole to form a lake, raise a hill to create a snowy peak, or smooth a river over to expand a forest.”

This is related to our project because it is an example of the user input, or interactivity, that we want to incorporate into our project as well. Although this maps projections onto sand rather than onto objects like ours, the interactivity is the same.

Sweater

“Using two walls, a treadmill, and some nifty projection, director Filip Sterckx creates a virtual world for the musician Willow’s music video. As with most projection mapping projects, it’s the technique that charms here.

Singer Pieter-Jan Van Den Troost gropes at doors that aren’t really there, trots on the spot down imaginary stairs, and kneels pretending to be paddling in the sea. It’s all surprisingly lo-tech, and all the better for it.”

This is another example of immersive interactivity. While we might not go to that extreme, the interactivity portion of our project that will trigger the video-mapped animation is a vital part to make this project more interesting. And this one uses video mapping too! It’s really cool to have examples like this one to motivate us into doing great work!

“Global Village”

I think the New York Times article made an interesting point: “The history of labor shows that technology does not usually drive social change. On the contrary, social change is typically driven by decisions we make about how to organize our world. Only later does technology swoop in, accelerating and consolidating those changes.”

I think that in our world today, the ‘first’ world we are in at least, a lot of people believe that technology is the guiding force that leads all the change in society. So why does this New York Times writer think differently? He uses many examples to make his point, but I think the most relevant for this topic is when he refers to today’s digital age. He says it is not as nuanced as everyone thinks it is; it is only a second industrial revolution that has been under way for the past forty years, just not as up front until now. I don’t know if I agree or disagree with his point that technology consolidates the changes humans have already made in society, but I think there is merit in it.

Globalization is not the result of technology; it is the result of human greed. I know, it’s quite a pessimistic way of looking at it, but it’s true. Globalization truly started when people decided they wanted to conquer the world, namely the European powers in early modern times, but dating all the way back to the Romans in ancient times. What does this quest for world domination give us? It gives us countries and continents that were taken from their people and given new rules by outsiders who don’t share any of their values, let alone their skin color. America is an example of this, but places like Africa and India are even more striking examples. While I digress about people’s greed in this world and their poor choices and opinions, I do have to admit that much of our technology has come out of it. Nuclear weapons, or nuclear anything, wouldn’t have come to fruition without the need for one country to one-up the other, just in case they needed to blow up the other side of the planet. Much of America’s and Europe’s technological advancement was due to wartime needs. So no, I don’t think technology has caused globalization. It is definitely a byproduct, and a tool to keep the wheel going.

Look at today’s companies, for example. One of the readings was all about Amazon and its poor worker quality of life, but those people stay in those jobs because consumers aren’t going to lose their buying momentum for a long time to come. Technology was a byproduct of the need to make tasks easier, such as shopping for necessities on the internet. But now it’s a tool for all these big companies to keep luring people into buying their products and into getting hired, to increase their footprint on the world.

Trial & Error Objects/Arrays

For my assignment, I tried out a bunch of things that sometimes did and sometimes did not work. I followed The Coding Train’s videos pretty closely to get most of my code, but I pieced stuff together and tried to incorporate new things into the code to make it my own.

I started by first introducing myself to object-oriented programming. I did a basic bouncing ball example but used an object to organize my code. I did try to incorporate an image to replace the ball, but for some reason, it did not work; a sketch of how I think it should go follows the code below.

This is the code for that example:

var ball = {
  x: 300,
  y: 200,
  xspeed: 4,
  yspeed: -3
};

function setup() {
  createCanvas(windowWidth, 400);
}

function draw() {
  background(0);
  move();
  bounce();
  display();
}

function move() {
  ball.x = ball.x + ball.xspeed;
  ball.y = ball.y + ball.yspeed;
}

function bounce() {
  if (ball.x > width || ball.x < 0) {
    ball.xspeed = ball.xspeed * -1;
  }
  if (ball.y > height || ball.y < 0) {
    ball.yspeed = ball.yspeed * -1;
  }
}

function display() {
  stroke(255);
  strokeWeight(4);
  noFill();
  ellipse(ball.x, ball.y, 24, 24);
}

The next thing I tried, with the help of the videos, is object communication. I wanted to see if I could make something happen when many bubbles touched or overlapped. I got the color to change, and then, while trying to incorporate an image, my code kind of went haywire and the image overlapped the bubbles. The repeated images were quite concentrated on one side of the screen, though I do not know why. I also wanted to make something happen when the mouse was released, like maybe resetting the sketch, but I was unable to find the right syntax to accomplish this; a possible syntax is sketched after the example below.

This is the code for this example. I commented out the code that made the changes in the video; it currently shows the last ‘state’ of the sketch that appears in the video.

let bubbles = [];

let gudetama;

function preload() {
  gudetama = loadImage("gudetama.png")
}

function setup() {
  createCanvas(600, 400);
  for (let i = 0; i < 50; i++) {
    let x = random(width);
    let y = random(height);
    let r = random(10, 40);
    bubbles[i] = new Bubble(x, y, r);
  }
}

//function mousePressed() {
//}

function draw() {
  background(0);

  //   if (bubble1.intersects(bubble2)) {
  //   background(200, 0, 100);
  //   }

  //   for(let i = 0; i < bubbles.length; i++) {
  //   bubbles[i].show();
  //     bubbles[i].move();
  //   }


  for (var b of bubbles) {
    b.show();
    b.move();
    let overlapping = false;
    for (var other of bubbles) {
      if (b !== other && b.intersects(other)) {
        overlapping = true;
      }
    }
    if (overlapping) {
      b.changeColor(255);
    } else {
      b.changeColor(0);
    }
  }
}



class Bubble {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.brightness = 0;
    this.xspeed = 1;
    this.yspeed = -1;
  }

  intersects(other) {
    let d = dist(this.x, this.y, other.x, other.y);
    return (d < this.r + other.r);
    //if (d < this.r + other.r);
  }

  changeColor(bright) {
    this.brightness = bright;
  }

  // contains(px, py) {
  //   let d = dist(px, py, this.x, this.y);
  //   if (d < this.r) {
  //     return true;
  //   } else {
  //     return false;
  //   }
  // }

  move() {
    this.x = this.x + random(-6, 6);
    this.y = this.y + random(-6, 6);
  }

//   bounce() {
//     if (this.x > width || this.x < 0) {
//       this.xspeed = this.xspeed * -1;
//     }
//     if (this.y > height || this.y < 0) {
//       this.yspeed = this.yspeed * -1;
//     }

//   }

  show() {

    image(gudetama, this.x, this.y, 550, 500);
    // stroke(255);
    // strokeWeight(4);
    // fill(this.brightness, 125);
    // ellipse(this.x, this.y, this.r * 2);
  }
}

^^https://editor.p5js.org/samasrinivas/full/rJQHDU6h7
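For the mouse-release reset I mentioned above, the syntax might just be p5’s built-in mouseReleased() callback, which runs whenever the mouse button is let go. A rough sketch, reusing the Bubble class from this example:

function mouseReleased() {
  bubbles = []; // throw away the current bubbles
  for (let i = 0; i < 50; i++) {
    bubbles.push(new Bubble(random(width), random(height), random(10, 40)));
  }
}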

The last example I did is a combination of the examples that Daniel Shiffman showed in his Coding Train videos. I tried to add an image to the code, but because of the changes that get triggered, I couldn’t find a way to incorporate it into the sketch successfully.

This is the code:

let bubbles = [];

let gudetama;
function preload(){
  gudetama = loadImage("gudetama.png");
}

function setup() {
  createCanvas(600, 400);
  for (let i = 0; i < 50; i++) {
    let x = random(width);
    let y = random(height);
    let r = random(10, 50);
    let b = new Bubble(x, y, r);
    bubbles.push(b);
  }
}

function mouseDragged() {
  let x = random(width);
  let y = random(height);
  let r = random(30, 50);
  let b = new Bubble(mouseX, mouseY, r);
  bubbles.push(b);
}

// function mouseReleased(){
// locked = false;

// }

function mousePressed() {
  for (let i = bubbles.length - 1; i >= 0; i--) {
    if (bubbles[i].contains(mouseX, mouseY)) {
      bubbles.splice(i, 1);
    }
  }
}

function draw() {
  background(0);
  //   for (let i = 0; i < bubbles.length; i++) {
  //     if (bubbles[i].contains(mouseX, mouseY)){
  //     bubbles[i].changeColor(200, 0, 100);
  //     } else {
  //     bubbles[i].changeColor(0);
  //     }

  //     bubbles[i].move();
  //     bubbles[i].show()
  //   }

  for (var b of bubbles) {
    b.show();
    b.move();
    // if (b.contains (mouseX, mouseY)){
    // fill(255);
    // } else {
    // fill(0);
    // }
    let overlapping = false;
    for (var other of bubbles) {
      if (b !== other && b.intersects(other)) {
        overlapping = true;
      }
    }
    if (overlapping) {
      b.changeColor(200, 0, 100);
    } else {
      b.changeColor(0);
    }
  }
  //add if adding mouse dragged
  if (bubbles.length > 50) {
    bubbles.splice(0, 1);
  }
  // locked = true;
}

class Bubble {
  constructor(x, y, r) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.brightness = 0;
  }
  intersects(other) {
    let d = dist(this.x, this.y, other.x, other.y);
    return (d < this.r + other.r);
  }
  changeColor(bright) {
    this.brightness = bright;
  }

  contains(px, py) {
    let d = dist(px, py, this.x, this.y);
    if (d < this.r) {
      return true;
    } else {
      return false;
    }
  }

  move() {
    this.x = this.x + random(-5, 5);
    this.y = this.y + random(-5, 5);
  }

  show() {
    image(gudetama, this.x, this.y);
    stroke(255);
    strokeWeight(4);
    fill(this.brightness, 0, 0, 255);
    ellipse(this.x, this.y, this.r * 2);
  }
}

^^https://editor.p5js.org/samasrinivas/full/B1tVSB6hm

A lot of things I tried didn’t work out, but I’m going to keep working on my code to get it to do what I wanted it to do.

Attributions:

@The Coding Train, all of my trials and errors came to fruition because of those videos and the p5.js library.

Final Project Brainstorm

FINAL PROJECT DRAFTS

#1

One idea I had, which came to mind after Andri brought it up in class, has a multitude of parts to it. Basically, I wanted to 3D print a brain and a heart to be my “human” for this project. I would be stripping away things like gender, race, sexuality, etc. Somehow I want to connect a large monitor that has a self-esteem meter and a list of compliments and insults, with responses from the ‘human’. I wanted to talk only about how words can affect a human psyche and self-esteem. People tend to build walls, suffer in silence with no outward scars, and no longer believe the compliments given to them. I wanted to have an LED on the brain go off once a compliment or insult is chosen: a blue LED for a compliment, a red LED for an insult. The self-esteem meter would start at full; every time an insult is chosen it would go down, but compliments would make it go up again. After a certain number of insults, though, the ‘human’ no longer accepts the compliments, and the self-esteem meter keeps going down. For example, if that point on the meter was already reached and someone chose “You’re beautiful” as a compliment, the ‘human’ would respond “I’m not, and I never will be”. Every compliment and insult will then have two responses, depending on the self-esteem meter that is controlled by user input. (A hypothetical sketch of this logic is below.)
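To make the response logic concrete, here is a hypothetical sketch of the meter; the starting value, threshold, step sizes, and the second insult response are all placeholders.

let selfEsteem = 100; // the meter starts at full

function receive(message, isCompliment) {
  if (isCompliment) {
    if (selfEsteem > 40) {
      selfEsteem = min(selfEsteem + 5, 100); // compliments still land
      return "Thank you.";
    } else {
      selfEsteem -= 2; // past the threshold, compliments are rejected
      return "I'm not, and I never will be.";
    }
  } else {
    selfEsteem = max(selfEsteem - 10, 0); // every insult chips away at the meter
    return "Why would you say that...";
  }
}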

>>I wanted some action to be done to the heart based on whether a compliment or insult is chosen, like a needle piercing the heart for every insult? But I don’t know if that’s feasible, and I am looking for other representations of pain.


#2

The second idea I had is to take my “booty bumper” idea further. I’d want to create a system that lets a person monitor the physical therapy exercises they need to do at home, something that can measure their progress on mobility, for example. I could incorporate a distance sensor. I don’t really know if p5 has the tools I need to create something like this.


#3

The third idea I had is a collaborative project with James and Yulin (if we are allowed to do so). We are already working on a video mapping project in Hypercinema, and we want to take it even further by adding interaction using Arduino and p5. We are also adding more boxes! We are currently adding a pizza box and storyline, and we are thinking about Chinese dumplings as well. We were picturing four different foods, with a different interaction in each corner of a room. <<Still currently brainstorming the larger project.

We want to add donuts where, if someone takes one out, it elicits a reaction from the donut man; we could do that with a light sensor. We want to add sound as well.

Space Explorer

For my “Stupid Pet Trick”, I decided to use a potentiometer to trigger movement in p5.js in some way.


^^This is the wiring of the two potentiometers I used to move my spaceship across the screen.

My original idea was to create a maze and use one potentiometer to move a ‘ball’ up and down and the other potentiometer to move it side to side, to navigate the maze. But as I was learning how to use serial communication and code this, it became a project where I had bitten off more than I could chew. So in the end, I only got the potentiometers to move the object side to side. It was a happy accident that one potentiometer was much more sensitive than the other and moved the object across the screen much faster, way off of my canvas in p5.js. But I decided to use that to my advantage and create this little space explorer world.


^^I used pictures from online, used a function to create the background, and replaced what was originally a red circle with a spaceship.


^^This is the code on my Arduino to read my potentiometers.

var serial;
var latestData = "waiting for data";
var img1;
var img2;

function setup() {
  createCanvas(700, 500);
  img1 = loadImage("assets/Ship.png");
  img2 = loadImage("assets/Space.png");
  // Instantiate our SerialPort object
  serial = new p5.SerialPort();
  serial.open("/dev/cu.usbmodem14621");
  serial.on('data', gotData);
}

function gotData() {
  var currentString = trim(serial.readLine()); // trim returns the cleaned string
  if (!currentString) return;
  latestData = int(currentString);
  console.log(latestData);
}

function draw() {
  image(img2, 0, 0, img2.width / 2, img2.height / 2);
  // map the 0-1023 potentiometer reading across the canvas width
  var data1 = map(latestData, 0, 1023, 0, width);
  image(img1, data1, 200, 150, 150);
  fill(255, 255, 255);
  textSize(30);
  text(data1, 10, 30);
  textSize(20);
  fill(255, 0, 0);
  text('Left for Warp Speed!', 10, 480);
  text('Right for Normal Speed!', 475, 480);
}

^^This is the code I used on p5.js. I did use Professor O’Sullivan’s code as a platform for my code, because he also used a potentiometer to move an object on the screen.

Troubleshooting: It was a little hard to write the code to make the spaceship move in the direction I wanted. At first, it moved in a diagonal direction, but after a little debugging, I got the spaceship to move from left to right.

^^Success! There are two settings for the speed of the spaceship: normal speed on the right and warp speed on the left!

Attribution!!

@Josh for helping me A LOT with my code!!

@Grace for giving me the spaceship idea!

@Professor O’Sullivan for the basic code to help me with my project!

@the p5.js and arduino libraries!

“Hyper”

What interested me about the reading this week is the constant acknowledgment that technological advancement is on a very steep upward trajectory. I don’t think that trajectory is linear, because progress as a human species isn’t linear. But there is no denying that advancement is going up and will keep going up. I’m sort of bewildered by this fact, and it is also kind of intimidating. I know that my thinking on this subject isn’t linear either. I appreciate all the technological advancements that have been made, but I also don’t like how fast the industry changes. It makes me wary about how I will fare in the job market in the future, and whether what I am currently learning will help me at all. This worry stretches across different fields, but it is especially glaring in the technology industry. What are the next steps the industry is taking? Do I need to become talented at predicting these things to make sure I can navigate the market at that time? Will I have to be in a perpetual state of worry my whole life, or settle for something less volatile? Is the next big race in technology going to involve hypertext and making information related to what someone searched even more accessible? I didn’t even know it was a thing before this assignment, and that worries me. How much do I need to predict to stay on top of the industry, to prove my worth? The trajectory isn’t stopping, and it seems like it’s going to pierce straight through me and leave me wounded and lost as it moves on without me.

“Calling The Future”

https://editor.p5js.org/full/rkz3e_Zi7

var rSlider, gSlider, bSlider;
var button;
var canvastext;
var input;
var title;

function setup() {
  // create canvas
  createCanvas(windowWidth, windowHeight);
  textSize(15);
  noStroke();

  // create sliders
  rSlider = createSlider(0, 255, 100);
  rSlider.position(20, 20);
  gSlider = createSlider(0, 255, 0);
  gSlider.position(20, 50);
  bSlider = createSlider(0, 255, 255);
  bSlider.position(20, 80);

  input = createInput();
  input.position(windowWidth / 2, windowHeight / 2);

  button = createButton('submit');
  button.position(input.x + input.width, windowHeight / 2);
  button.mousePressed(greet);

  title = createElement('h1');
  title.position(windowWidth / 2, windowHeight / 3);
}

function draw() {
  var r = rSlider.value();
  var g = gSlider.value();
  var b = bSlider.value();
  background(r, g, b);
  text("red", rSlider.x * 2 + rSlider.width, 35);
  text("green", gSlider.x * 2 + gSlider.width, 65);
  text("blue", bSlider.x * 2 + bSlider.width, 95);
}

function greet() {
  var canvastext = input.value();
  title.html(canvastext);
  input.value('');
}

Troubleshooting:

The process of getting this callback working was a little tedious because I had to figure out a concept that was new to me from the p5.js library: getting text from an input to appear on the page with a button. I also learned from the example in the library how to use RGB sliders to change the background color of my canvas.

Sources and Attribution:

https://p5js.org/examples/dom-input-and-button.html

https://p5js.org/examples/dom-slider.html

@Stoker — thanks so much for helping me debug my code!

“Cultural Evolution”

When one looks at different communication systems like pictograms, tallying, and counting, there is always an underlying goal of universality. Some, like tallying, can be seen as impractical, because counting one by one could be tedious; it evolved into a system where groups were represented by different symbols. I think future communication will be something along the lines of communicating with our minds. Most probably, that will be our minds connected to our technology.

Cultural evolution is different from biological evolution because cultural evolution is affected by more than just survival. It’s affected by the people of that time, the rules that were made, the beliefs that were most prevalent, and even the systems of communication. For example, when mail and communication were limited to traveling by horse, the culture of that time was vastly different from what we have today, with technology sending emails in a split second.


“Embodiment”

Our body is an important channel that we use to take in all sorts of information. The amount of information that is taken in is huge, but our bodies and minds do not process it all. Whatever we deem important has to go through a process to be stored into short term and long term memory. The process includes many steps, but wondrously happens instantaneously. Our bodies allow us to take in everything around us.

Emotions can drive many parts of our psyche. They are a huge part of our cognition, and they are definitely the fuel of many of our decisions. Emotions can help us think a certain way about things. When our minds process information and memories, the intake is influenced by factors such as emotions. Emotions can affect how you remember something, and that in itself can teach us things about ourselves, shape decisions regarding present experiences, and either cloud or clear our thinking about something.

Computers can reach your body quite easily because they are a tool. Our bodies can use them, so computers need to be usable to reach our bodies. For our emotions, it might be a bit more complicated, because computers have to be used in a meaningful way to affect our emotions. ‘Meaningful’ could mean so many different things as well, because people’s experiences are so different, and the emotions that come with them are too.

“What To Change”

‘Morals’ are something that everyone has obviously or subconsciously. They can be influenced and changed by different experiences such as the environment one was raised in, the life one has lived, and things one has lost. Each person has a different set of morals. Sometimes people share a moral, maybe it has to do with religion or lack thereof, but many of our actions and reactions as humans are controlled by our morals whether we realize it or not.

Sometimes the morals we have go against the things we have learned are ‘good’ and ‘bad’. But the terms ‘good’ and ‘bad’ are just as ambiguous as the word ‘moral’ is. And I don’t like that. They seem tentative but in reality are just vague. When humans all share morals and morally fueled decisions, the second machines come into the picture, questions and arguments arise. “Should machines make moral decisions?” In my opinion, I don’t know if I can have an opinion on it. This question reminds me of my favorite movie ‘I, Robot’ with Will Smith. In that movie, robots were capable of living alongside humans as really helpful tools but were not at the stage where they could make decisions for themselves. As the movie goes on, there is a single robot capable of making morally and emotionally fueled decisions, which was still a crazy reality in that futuristic world. That one robot brought down the evil artificial intelligence that was about to take control of all the mindless machines and overthrow the humans. Though seemingly far off, this can be related to our reality now. Our dependence on technology is so monumental that when artificial intelligence comes into the picture, who’s to say that humans won’t willingly surrender to keep all the ‘benefits’ of having these tools? Some might argue that technology is turning humans into mindless robots themselves.

Expressivity: Distance

For my task to code expressivity, I decided on expressing distance.

Things I used:

Arduino Uno R3

Breadboard

Ultrasonic Sensor

10K Ohm Potentiometer

LCD 1602 Module (with pin header)

Female to Male Dupont Wires

Jumper wires (generic)

Wiring:



^^Wiring of the LCD Display and Potentiometer

Code:


^^The code had a few components we hadn’t gone over in class because of the LCD display and the fact that I used 12 out of the 16 pins. To make the centimeters and inches display on the screen, I had to use a formula that I got from the sources below: the standard HC-SR04 conversion, where the echo pulse duration in microseconds divided by 58 gives centimeters, and divided by 148 gives inches.

The goal for this project was to use the ultrasonic sensor to read the distance of an object blocking its waves and use the LCD Display to display the distance of that object from the sensor. I used the potentiometer to control the brightness of the screen of the LCD display, to make sure that I can see the numbers that will appear after coding my Arduino.

^^My sensor was not working at this point, which I fixed later, but I needed to make sure I could see what was expressed on the screen, so this is the test of the potentiometer on the LCD Display.

^^The reason my sensor was not working was that the wire that was supposed to connect to GND on the breadboard was actually plugged into the same row as the echo connection, which made the sensor unable to read the distance. So it was stuck at 0 centimeters and 0 inches.

After I fixed the wiring, my sensor started working! And everything came to completion!

ATTRIBUTIONS!

For help with wiring and coding the sensor, LCD display, and potentiometer, I give credit to these sources!

Ultrasonic Sensor HC-SR04 and Arduino Tutorial

http://mertarduinotutorial.blogspot.com/2016/11/arduino-tutorial-15-ultrasonic-sensor.html

Real World Application:

I’m pretty sure this exact type of invention is used in cars nowadays. Cars have sensors that let you see, on a backup camera, how far you are from another object or person. My expressive device is the bare essentials of this exact technology. I think it’s super cool that I built my own from scratch and actually understand how it works.

‘Universal Machine’

Computational media can be defined as the new and improved version of traditional media. The difference between the two is the technology and interactions of these media.

A story can be defined as: an account of incidents or events, and in this case interactions. So a story is made up of interactions. A single interaction can pivot the story on its head, or continue the storyline that is being portrayed at the moment. The two concepts do not just intersect, in some cases you cannot have one without the other.

These media forms are produced and consumed based on necessity, or rather usefulness. Like Bret Victor said, new forms of interactions are created by using and producing new tools that humans can use. I think that new media is created and becomes popular when it has a use that fills a gap mankind didn’t realize was there in the first place.

A universal machine can be defined as something that gives a user everything they need, at any given time. To be honest, it seems like some kind of pipe dream in my mind, because humans get bored. Humans always want more. Enough is never enough, and because of that I think a universal machine is a highly unlikely invention.

Booty Bumper Upgrade

For my conditional switch project, I incorporated the old switch I made, the booty bumper, and upgraded it! What I did is use an RGB LED and code it so it would change colors every 4 seconds as long as the circuit was held closed.

^^ This is the code I used to create my conditional switch. Some of the concepts for getting the RGB LED to change colors were taken from the Adafruit website, specifically this link: <https://learn.adafruit.com/adafruit-arduino-lesson-3-rgb-leds/arduino-sketch>.

^^ This was attempt #1 to test out my code to see if it would change from the colors pink to purple every 2 seconds. It worked, but the colors pink and purple were too similar and the interval of 2 seconds was too short, so I decided to adjust.

So I adjusted the colors to 3 instead of 2, and decided to use red, blue, and yellow. I also changed the time interval to 4 seconds instead of 2.

^^ This is my breadboard setup. I used pins 4, 5, and 6 for the 3 prongs of my RGB LED.

^^ This is the booty bumper in action!!

^^ This video shows all the components of the booty bumper. As you can see, I reused the idea of soldering on the wires to copper tape to close the circuit and allow the conditional switch to work.

Real World Application:

The Booty Bumper could be used in the future for hip rehabilitation. For those needing physical therapy for their hips, the Booty Bumper could be used as training toward reaching a goal on a hip mobility scale. The second person in this experiment could be replaced with a bar or an inanimate object to create a base for the patient to reach with their hips. The goal for the booty bumper upgrade is to reach a level of mobility where the patient can hold their hips at the bar or other object for four seconds or longer, with the reward of seeing the LED change to a different color.

Special Thanks!!!

@ Sophie for helping me turn my concept into a reality, and helping me wire and code my board for this project to work!!

“Who”

I feel like my computer knows exactly who I am, all the way to the parts that I don’t show people often. Which is a weird concept, that my computer can know who I am better than a sibling or a loved one.

Computers and what they include reflect their makers and society as well. They can be messengers, translators, and sometimes the bearers of bad news, but they do not prevent anything from happening. A teenager who is suicidal posts their thoughts on an online website, and the computer just relays their message to the world. Texts can be sent between two people with language barriers, but a computer does not make understanding connotations or sarcasm any easier. My point is that we use computers as a tool, which is what they are made for, and if anything more inclusive were made, I think there would be some big changes in society. Technology is so advanced that AI seems like it’s not too far away. To me, that’s a little scary after watching I, Robot on repeat as a child.

Booty Bumper

For my project, I decided to incorporate a switch with something that I love to do to people, which is booty bumping. I soldered the two wires that connected the circuit to the positive column on the breadboard to two large pieces of copper tape. The copper tape was used as a conductor for this project. I decided to use two LEDs so the number of lights matched the number of people working to make them light up! It’s a pretty basic adaptation of the circuit we made in class.

^Initial Sketch/Schematic

^^Pictures of the Arduino and Breadboard based on schematic

Soldering (Click the link to see how I soldered the wires to the copper tape!)

Booty Bumper in action (Click it’s a boomerang!!)

^Hips don’t lie! the LED lights up!

https://youtu.be/vI4lrS9SzJ8

^Click to watch a video showing everything in action! Quality is questionable, don’t judge! ^

Real World Application:

The Booty Bumper could be used in the future for hip rehabilitation. For those needing physical therapy for their hips, the Booty Bumper could be used as training toward reaching a goal on a hip mobility scale. The second person in this experiment could be replaced with a bar or an inanimate object to create a base for the patient to reach with their hips.

Special Thanks!!

@Apoorva for letting me barge in on her soldering session, being a participant as a booty bumper, and filming me soldering on her phone!

@Grace for helping me solder, being a booty bumper, and helping me figure out where to put my switch in the circuit!

@Sophie for helping me solder, and filming the product for me on her phone!