Category Archives: Final Project

We Got Sole – FINAL PRESENTATION

For our project, Christshon and I created a pair of shoes that are both fashionable and interactive. We were inspired by sneaker culture and wearable technology.

Out of all clothing items, we feel that shoes have the most potential when it comes to wearable technology. That being said, the only shoes that integrate fashion and technology are extremely overpriced and largely unobtainable for the average person. We feel that it’s time interactive fashion became something accessible, and to prove it we made this pair of shoes to illustrate how possible it is for designs like this to be made in a cost-efficient way. Although the amount of interaction between the wearer and the shoe is not very extensive at the moment, we feel that this is a good place to build from, and I am now much more aware of the possibilities within the realm of wearable tech.

Our Process:

Due to issues with shipping, our process was pushed back quite a few times, but we still found time to get it all done. The first step was making our user interaction diagram. This was our plan for how the user would get output and what output they would receive (LEDs). Next, I bought the shoes from a thrift store for only $4. After purchasing them from the Salvation Army, we saw that the shoes were pretty beat up, so we had to clean them and then paint them in order for them to appear new. They came out beautifully, all thanks to Christshon. I entrusted him with this task because he used to paint shoes a lot back home.


His inspiration for the colors and shape was the “heavy-duty sneaker” look that many high-end brands like Balenciaga and Gucci have been popularizing lately. After the painting was finished and we had uploaded our code to our Arduinos, we began wiring the LEDs and Arduinos to the shoes.


Materials:

1. Arduino Uno

2. Jumper Wires

3. 10k potentiometer

4. New Balance Shoes

5. Neopixel RGB LED Ring

6. Velcro

7. Paint

8. 9V Battery

9. Solderable Breadboard

It took a lot of time and even more troubleshooting, but eventually we were able to get both shoes to operate smoothly. Then, we created a spot on the heel where the batteries would attach so that they would not be in the way. We placed them behind the end of the shoe. Finally, we used wires to mimic shoelaces, as we felt it added to the futuristic vibe that we were striving for. Now that the physical component was done, we put our focus onto creating the video that we would play as part of our presentation.

The idea was to create a video that demonstrated the importance of sneaker culture while also highlighting the progress that sneaker design has made since sneakers first came onto the scene. The video was created using Premiere Pro, and our plan was to create a makeshift screen to project it onto; however, we did not budget enough time to build the screen, so instead I decided to play the video off of Christshon’s laptop. We also preferred this option because the video lost some of its quality through projection.

Overall, I feel that we did a very good job despite the adversities Christshon and I had to endure. To be honest, he has been one of the best partners I’ve had on a project, so thank you, David, for letting us team up. Christshon and I will probably try to dive into some more wearable tech, but we’ll see in the future. It was cool to see how Christshon’s knowledge of shoes and my knowledge of wearable tech came together. I look forward to adding onto this project using the feedback we received and to exploring more possibilities of art installations and wearable tech.

Collage of the process of the interaction:


Tangible Course Search – Final Documentation

Introduction

For my final project, I created a combined physical and digital interface for the course search that used physical buttons, dials, and faders to filter through manually created JSON files of the most popular courses/majors available to IMA students. This is a noted change from the start of the project, as it was originally intended to use a data-scraping API to search through all the courses for all majors at NYU, both undergraduate and graduate.

Important Note: Unfortunately I have left my photographs and videos of the physical prototype on my DSLR’s SD card back in my dorm room and am now back at home in California, so there won’t be any pictures in this final documentation for now until I get back from break and am able to offload them from the SD card. 

Original Flowchart

User Flow 1

Final Flowchart

User Flow 2

Changes Since Prototype

The most notable change since the prototype of the project has been adding the increased filtering controls that allow filtering by time, credits, and a randomizer. Here is the isolated HTML code for the digital interface of those additions:

<div id="filterScreen">
        <div id="filterTitle"></div>
        <div id="creditFilter">
            <div id="creditFilterVertical"></div>
        </div>
        <div id="timeFilter">
            <div id="clock">
                <div class="hand" id="firstHand"></div>
                <div class="hand" id="secondHand"></div>
            </div>
        </div>
        <div id="dateFilter">
            <div id="monday"><span class="dayTag">MON</span><span class="dayValue">✓</span></div>
            <div id="tuesday"><span class="dayTag">TUE</span><span class="dayValue">✓</span></div>
            <div id="wednesday"><span class="dayTag">WED</span><span class="dayValue">✓</span></div>
            <div id="thursday"><span class="dayTag">THU</span><span class="dayValue">✓</span></div>
            <div id="friday"><span class="dayTag">FRI</span><span class="dayValue">✓</span></div>
        </div>
        <div id="randomFilter">
            <img src="img/diceIcon.png" alt="Dice Icon">
        </div>
        <div id="masterConfirm"></div>
    </div>

And here’s the function that animates the changeover from the major selection screen (what used to be the career selection screen):

function careerDialConfirmed(){
        if(careerConfirmValue){
            if(stage == 0){
                console.log("hello!");
                $("#chooseCareer, #dialBackground").animate({
                    left: "-50vw",
                    opacity: "0"
                },2000);
                $("#chooseDial").animate({
                    left: "-20vw",
                    opacity: "0"
                },100);
                $("#confirmCareer").css({
                    "background-position": "left bottom",
                    "top":"40vh"
                });
                $("#confirmCareer").css({
                    "top":"100vh"
                });
                var filterScreen = $("#filterScreen");
                setTimeout(function(){
                    filterScreen.css({
                        "opacity":"1",
                        "pointer-events":"auto"
                    });
                },2000);
            }
        }
    }

Another major change since the first prototype was changing the career selection criteria from all-university careers to simply IMA-related majors (IMA, IDM, ITP, MCC, and Open Arts) to take courses in. This change was put in place because:

a) Once the data-scraping side of the project failed, it was no longer feasible to show many different majors, as each would require manual creation of the JSON files. I needed to figure out a way to restrict the amount of data the user could theoretically see.

b) When presenting my project to various people, I got many comments about wanting to filter by major, and I didn’t have any good method other than just putting a keyboard in front of them and calling it a day. I felt this went against the dashboard-like interface of the project. So instead I limited the number of possible majors to what could fit onto a dial and used that.

Thankfully, because I had programmed the JavaScript and CSS to be irrespective of the content of the dial buttons (mostly so I can use this code again in the future), all I needed to do was change the HTML and add a couple of JavaScript variables for the final filtering mechanism (a simplified sketch of that filtering follows the CSS below).

<div id="confirmCareer">
        <div class="confirm-right"></div>
    </div>
    <div id="dialBackground"></div>
    <div id="chooseCareer">
        <div id="chooseMCC" class="chooseItem">MCC</div>
        <div id="chooseIDM" class="chooseItem">IDM</div>
        <div id="chooseIMA" class="chooseItem">IMA</div>
        <div id="chooseITP" class="chooseItem">ITP</div>
        <div id="chooseTOA" class="chooseItem">TOA</div>
    </div>
    <div id="chooseDial" class="dial">
        <div class="arrow-right"></div>
    </div>

The CSS stayed the same because it used nth-child for its margins:

.chooseItem{
            text-align: center;
            color: #999999;
            font-size: 1.7vw;
            cursor: pointer;
            font-weight: bold;
            border-radius: 1vw;
            padding: 0.5vw;
            width: auto;
            transition: all 1s ease-in-out;
        }
            .chooseItem:nth-child(1){
                margin-left: 8vw;
            }
            .chooseItem:nth-child(2){
                margin-left: 14vw;
            }
            .chooseItem:nth-child(3){
                margin-left: 16vw;   
            }
            .chooseItem:nth-child(4){
                margin-left: 14vw;
            }
            .chooseItem:nth-child(5){
                margin-left: 8vw; 
            }
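As for the filtering mechanism itself, it boils down to matching each course object in the manually created JSON files against the values set by the dial, faders, and day buttons. Here is a minimal, simplified sketch of that idea in plain JavaScript; the course data, variable names, and thresholds are placeholders for illustration, not the project’s actual code:

// Simplified sketch of the filtering logic (hypothetical data and names).
// Each manually created JSON file holds an array of course objects like this:
var imaCourses = [
    { title: "Creative Computing", major: "IMA", credits: 4,
      days: ["MON", "WED"], start: 9.5, end: 12.25 },
    { title: "Communications Lab", major: "IMA", credits: 4,
      days: ["TUE", "THU"], start: 14.0, end: 16.75 }
];

// Values set by the physical dial, fader, and day buttons.
var selectedMajor = "IMA";
var selectedCredits = 4;
var selectedDays = ["MON", "TUE", "WED", "THU", "FRI"];
var timeRange = { start: 9.0, end: 18.0 };

function filterCourses(courses) {
    return courses.filter(function (course) {
        var majorOk = course.major === selectedMajor;
        var creditsOk = course.credits === selectedCredits;
        var timeOk = course.start >= timeRange.start && course.end <= timeRange.end;
        var daysOk = course.days.every(function (day) {
            return selectedDays.indexOf(day) !== -1;
        });
        return majorOk && creditsOk && timeOk && daysOk;
    });
}

console.log(filterCourses(imaCourses)); // courses matching the current control state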

For the physical construction of the final interface, I decided to go with cardboard instead of wood. This allowed for more rapid modifications should anything go wrong or need to be changed, and it also meant that I could work away from the busy shop and its perpetually in-use equipment during this pre-show stretch of time. It did, however, mean that I needed to get creative with the way I made dials and faders.

Christshon and Holly’s Final

 

For our project, Holly and I created a pair of shoes that are both fashionable and interactive. Out of all clothing items, we feel that shoes have the most potential when it comes to wearable technology. That being said, the only shoes that integrate fashion and technology are extremely overpriced and largely unobtainable for the average person. We feel that it’s time interactive fashion became something accessible, and to prove it we made this pair of shoes to illustrate how possible it is for designs like this to be made in a cost-efficient way. Although the amount of interaction between the wearer and the shoe is not very extensive at the moment, I feel that this is a good place to build from, and I am now much more aware of the possibilities within the realm of wearable tech.

Our Process:

Due to issues with shipping, our process was pushed back quite a few times, but we still found time to get it all done. The first step was making our user interaction diagram. This was our plan for how the user would get output and what output they would receive (LEDs). Next, I bought the shoes from a thrift store for only $4. The shoes were pretty beat up, so I had to clean them and then paint them in order for them to appear new. My inspiration for the colors and shape was the “heavy-duty sneaker” look that many high-end brands like Balenciaga and Gucci have been popularizing lately. After the painting was done and we had uploaded our code to our Arduinos, we began wiring the LEDs and Arduinos to the shoes. It took a lot of time and even more troubleshooting, but eventually we were able to get both shoes to operate smoothly. I then created a spot on the heel where the batteries would attach so that they would not be in the way. Finally, I used wires to mimic shoelaces, as I felt it added to the futuristic vibe that we were striving for.

Now that the physical component was done, we put our focus onto creating the video that we would play as part of our presentation. The idea was to create a video that demonstrated the importance of sneaker culture while also highlighting the progress that sneaker design has made since sneakers first came onto the scene. I created the video using Premiere Pro, and our plan was to create a makeshift screen to project it onto; however, we did not budget enough time to build the screen, so instead I decided to play the video off of my laptop.

Overall, I am very proud of where the project went and how it, and this class in general, pushed me out of my comfort zone and forced me to try things that I otherwise wouldn’t have. And although it was pretty difficult at times, I think I’ve created a good foundation on which I can build. I look forward to adding onto this project and also exploring other aspects of wearable tech and code in general.

Documentation


 


Our Final Product



Final project

The name of my final project is the Space Well. I collaborated with Cass Yao from the other class. We tried to create an experience of floating in space.

Process

The code was definitely the most difficult part of this project. We spent a lot of time figuring out the SimpleOpenNI library, since it doesn’t have good documentation. We spent hours creating the visual effects and testing them in the water pool, and we also spent hours debugging. Luckily, it worked in the show, and many people loved our work.

Inspiration

When we were walking in Washington Square Park on a rainy day, we saw a small puddle of water on the road. We wondered what it would look like if the stars and ourselves were reflected in it.


Videos

This is the live stream by Daniel Shiffman. My project is at 1:32:04.

 

Code

We used Processing to make this project because we think Java is faster than JS.

Because the full code is too long, I will just post the controller portion of our code.

import SimpleOpenNI.*;
import processing.serial.*;
import codeanticode.syphon.*;

PGraphics canvas;
SyphonServer server;
SimpleOpenNI context;

int backgroundColor=0;

//rotate
float rotX = radians(180);
float rotY = radians(0);
boolean Armup = false;
boolean noArmup = false;

//thresh
float minThresh=0;
float maxThresh=2413;
float leftThresh=-750;
float rightThresh=750;

float bomb;
float bright;

PVector [] prepos;
int [] predepthMap;

boolean reduce = false;
boolean invert=false;
int waitForInvert=0;
boolean preInvert;
boolean shake = false;
boolean beginSpin = false;
boolean beginUpdatee = false;

float count = 40;

void setup() {
  size(1080, 1080, P3D);

  canvas = createGraphics(1080, 1080, P3D);
  server = new SyphonServer(this, "Processing Syphon");

  context = new SimpleOpenNI(this);

  if (context.isInit() == false)
  {
    println("Can't init SimpleOpenNI, maybe the camera is not connected!");
    exit();
    return;
  }
  context.setMirror(false);
  context.enableDepth();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  context.setDepthColorSyncEnabled(true);

  smooth();
  stroke(255);
  strokeWeight(5);

  // Init all stars
  for (int i = 0; i < starsD.length; i++) {
    starsD[i] = new StarD();
  }
}

void draw() {
  if (countPointCount == 200) {
    restart();
  }

  //println(waitForInvert);

  // canvas.beginDraw();
  canvas.beginDraw();
  canvas.clear();
  canvas.background(backgroundColor);

  // explosion();

  context.update();

  //meteor();
  userList = context.getUsers();

  //maxThresh=map(mouseX, 0, 1080, 0, 3000);

  //println(maxThresh);
  //minThresh=map(mouseY, 0, 1080, 0, 2000);

  if (userList.length > 0) {
    userId = userList[0];
    if (context.isTrackingSkeleton(userId)) {
      ArmsAngle(userId);
      // MassUser(userId);
    }
  }

  // star ----------------------------------------
  canvas.pushMatrix();
  canvas.translate(width/2, height/2);
  for (int i = 0; i < starsD.length; i++) {
    starsD[i].update();
    starsD[i].show();
  }
  canvas.popMatrix();
  // speed+=0.001;

  pointcloud(canvas);

  //lastPosition();

  //prepos = context.depthMapRealWorld();
  //predepthMap = context.depthMap();

  //rotY+=0.01;

  if (beginSpin) {
    count--;
    canvas.fill(255);
    canvas.ellipse(width/2, height/2, 1080 - count*20, 1080 - count*20);
    if (count == 0) {
      beginSpin = false;
      count = 40;
    }
  }

  if (shake) {

  }

  if (beginUpdatee) {
    updatee();
  }

  canvas.endDraw();
  image(canvas, 0, 0);
  server.sendImage(canvas);
}

Auto-Adjusting Volume Headphones

-What is it?

It is a pair of headphones with an automatic volume controller and an LED showing the outside noise level.

This project has two parts: the p5 part and the LED part. The p5 part mainly controls the volume of the music playing in the headphones. For example, in a very noisy environment, the volume of the music playing in the headphones automatically becomes higher so the user can hear it more clearly, while in a quiet environment the volume becomes lower in order to avoid disturbing others (a simplified sketch of this mapping follows the code links below).

-For whom?

Everyone who frequently uses headphones

-Why?

Avoiding awkward situations and making life easier

Code Part:

p5- Full screen:https://editor.p5js.org/Ruojin/full/B19JpYL67

p5- Code:https://editor.p5js.org/Ruojin/sketches/B19JpYL67
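The heart of the p5 part is mapping the microphone’s input level to the playback volume of the music. As a rough illustration of that idea (the file name and thresholds below are placeholders, and this is not the actual sketch linked above), something like this works in p5.js:

// Minimal sketch of the auto-volume idea (placeholder file name and thresholds).
let mic;
let song;

function preload() {
  song = loadSound('music.mp3'); // placeholder audio file
}

function setup() {
  createCanvas(400, 200);
  mic = new p5.AudioIn();
  mic.start();
  song.loop();
}

function draw() {
  background(220);
  // getLevel() returns the current input volume between 0.0 and 1.0.
  let noise = mic.getLevel();
  // Louder surroundings -> louder playback; quiet surroundings -> quieter playback.
  let vol = map(noise, 0, 0.3, 0.2, 1.0, true);
  song.setVolume(vol);
  // Simple on-screen meter of the outside noise level.
  rect(0, height - noise * height, width, noise * height);
}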

 

LED & Arduino:

Step 1:

https://youtu.be/elggjARv9A4

Step 2:

https://youtu.be/Ez9exZU5ziA

Step 3:

https://youtu.be/9EGjHajbRGM

Official Final Project Documentation

Here is a copy of the project:

https://drive.google.com/drive/folders/1KwPtWHd_sW3Uts3uCWJijKG2_r8ZjxUl?usp=sharing

Here is the project in action during the Sunday show.

While you cannot hear what the user is hearing, you can see what the user sees on the monitor before them and their reaction during the simulation.


Tools Used:

Illustrator, Photoshop, AfterEffects, Audition, P5.play, P5.js, PoseNet, Brackets, FreeSounds.com, and a lot of office hours

Process:

Creating a Prototype

The process to achieve the final result was surprisingly complicated. For my first step, I took free images of body parts online (lungs, heart, and veins), made them transparent in Photoshop, and then animated them in Adobe After Effects.


I then created a simple subway animation that would be masked to reveal the user and created a “background” of sorts. Since I was unsure if users would resonate with the subway background, I initially used free stock footage. I also created two text animations: one that provides users context before the simulation and one to provide closure afterwards.


 

Once these first-draft animations of the body parts and background were created, I decided to continue working with After Effects to create a prototype of my project. I simply used “movie magic” to apply these animations to prerecorded webcam footage of myself. This allowed users to get a general understanding of the storyline that would be displayed. Finally, I used Audition and FreeSounds.com to create the audio. There are two pieces of audio: the subway noises, which play in the beginning to help add context, and the panic attack audio, which imitates the internal noises (rapid heartbeat, heavy breathing, scattered/panicky thoughts) that a user would experience during a panic attack.


Here is a link to the prototype:

 

User Testing with Prototype

I primarily used the prototype for user testing because it allowed me to make changes easily, quickly, and without the sunk cost that completely coding it first would have. Users primarily gave me feedback on the general storyline, specifically providing insights regarding the mini story that exists when the user “experiences the panic attack” in the subway. Originally, the mini story thrust the users into the situation without providing them time to understand the context and, in turn, the simulation. Thus, the user testing feedback helped fix issues with the overall pacing. User testing also provided insights on the wording used in the text displayed before and after the “simulation.” Specifically, I discovered that the ending text was abrupt and did not provide the necessary closure that a user needed after experiencing such a sensory overload.

 

Creating the final project

After testing with almost 20 users over the course of a week, I finally reached a version of my project that I was content with. Now, all I had to do was bring it to life!

I started by working to get the webcam and body tracking working. Since I knew I was using large animation files, I opted to code in Brackets rather than in the p5 web editor. For some reason, I experienced a strange amount of problems with this: my computer was not properly capturing the video feed, and working outside the web editor made it difficult to debug.

Thus, I pivoted back to the p5 web editor. I used facial mapping code instead, mapping the lungs x pixels away from the user’s chin. Then I added “filler” animations to create a general structure of my code. I knew that my animations, regardless of the file type, would be too large for the web editor. However, since I was having trouble debugging without the web editor, I decided to put gifs and .mov files that were small enough for it in the places where the real animations would be placed. In other words, where the subway background would be was a random gif of the earth. I just wanted to have the bones of my code down before I moved back to the local text editor.

While currently, the random earth gif has been replaced with the appropriate subway file, here is a link to the first web editor: https://editor.p5js.org/andrikumar/sketches/BJuBq6cy4

During this time I also recorded my own video footage of the subway and substituted it for the stock footage I had been using for user testing.

With the bones created, I then transitioned back to the local text editor so that I could input the correct files; yet I still faced a lot of hiccups. Essentially, After Effects renders extremely large files that would not even work locally. However, these files needed to maintain their transparency, so they could not be compressed after rendering. After playing around for days with different file types and ways to maintain transparency, I finally discovered what to do. I decided to convert the subway background into 5 PNGs that would loop using p5.play. I turned the pre-text, post-text, and lungs animation into gifs. While originally the lungs gradually increased in speed, I could only render 2 seconds of the animation to avoid having too large of a file size. Now, the user sees rapid breathing throughout the simulation.

Once I successfully added the animations to my code, I used different functions and addCue() to trigger the animations based on the audio, as well as to create the interactions.
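To give a sense of how those pieces fit together, here is a stripped-down sketch of the structure (placeholder file names and cue times, not my full code): the five subway PNGs loop as a p5.play animation, and addCue() fires callbacks at chosen points in the audio.

// Stripped-down sketch of the animation/audio structure (placeholder file names).
let subwayAnim;   // looping background built from the 5 subway PNG frames
let panicAudio;   // panic attack soundtrack
let showLungs = false;

function preload() {
  // p5.play builds one animation out of a sequence of images.
  subwayAnim = loadAnimation('subway1.png', 'subway2.png', 'subway3.png',
                             'subway4.png', 'subway5.png');
  panicAudio = loadSound('panic.mp3');
}

function setup() {
  createCanvas(640, 480);
  // addCue(time, callback) fires the callback at that point in the audio.
  panicAudio.addCue(5, function () { showLungs = true; });   // start the lungs animation
  panicAudio.addCue(60, function () { showLungs = false; }); // end of the simulation
  panicAudio.play();
}

function draw() {
  background(0);
  animation(subwayAnim, width / 2, height / 2); // draw the looping subway frames
  if (showLungs) {
    // ...draw the lungs gif mapped onto the user here...
  }
}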

Here is what I ended up with:

https://drive.google.com/open?id=1rZYTTyByN53vB8ByfKPy5V_aUJrzemkv

You can find my code here which you can open up with a text editor to see the final work! I used Brackets!

Here is my code:


Final Changes for the Show

While presenting the project during class, I realized that facial mapping required an extremely well-lit room; otherwise, the code could not “see” the user’s chin. At first, I thought of simply switching the code to map from the eyes down, but if something is being mapped onto a user’s body, they are very likely to move around. If the code used the user’s eyes, then the animations would scatter everywhere. Thus, I needed to use something more stable.

As a result, I converted my code from facial mapping to PoseNet, mapping the animation of the body parts between the user’s shoulders. For some reason, I am terrible at doing math and struggled to find the mean distance, but luckily I was able to in the end!
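For anyone curious, the math works out to the midpoint (mean) of the two shoulder keypoints. Here is a rough sketch of that idea with ml5’s PoseNet, not my exact code, with the drawing of the actual animation left as a placeholder:

// Rough sketch: anchoring an animation between the shoulders with ml5 PoseNet.
let video;
let poseNet;
let poses = [];

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', function (results) { poses = results; });
}

function draw() {
  image(video, 0, 0, width, height);
  if (poses.length > 0) {
    let pose = poses[0].pose;
    // Mean of the two shoulder positions = a stable anchor point for the body animation.
    let x = (pose.leftShoulder.x + pose.rightShoulder.x) / 2;
    let y = (pose.leftShoulder.y + pose.rightShoulder.y) / 2;
    ellipse(x, y, 20, 20); // the lungs/heart animation would be drawn here instead
  }
}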

Since I also understood p5.play better, I decided to take 15 PNGs of the lung animation and animate them through p5.play rather than using the gif. I thought users would appreciate the higher-quality animation that p5.play offered. However, after doing a few rounds of A/B testing with the gif animation versus the p5.play animation, I discovered users preferred the gif animation. They thought the low quality created an “abstractness,” which allowed them to really be immersed in the story.

 

Conclusion

I am honestly happy that I faced all the issues I did because, as a result, I got the opportunity to explore libraries, like p5.play, which we did not get the opportunity to use in class. I am quite proud of my work, especially because my freshman year I failed my first coding class and now I coded this entire project! Of course, this project would not exist without the help my professors and friends provided me! It was really rewarding during the show to hear users talk to me after the simulation about how anxiety disorders have affected their lives. A lot of the users mentioned that they had a partner who had panic attacks, and while they had learned how to help their partner get through the attack, they never understood what had been going on. However, this experience gave them a glimpse of what it had been like for their partner and finally helped them understand the situation, something that endless conversation simply could not provide. I really hope to keep developing this project further so that it can serve as an educational tool!

Here is a video of my work during the show:

What I will be working on in the future

After having numerous people try out my project at the show, I was able to get a lot of user feedback! While most of it was positive, many users explained that the conclusion could still use some work. They still felt shocked and were unsure what to do after the simulation. One participant even asked if I had a teddy bear they could hold. I have always struggled with making powerful conclusions and so I think this will be the perfect opportunity to work on this skill.

I also got the opportunity to show my work to a medical student who was going to become a psychiatrist. Ideally, I would love my project to be used to educate medical professionals about mental illness. The student provided me with some insights on how I could add to the project to help appeal to medical professionals’ needs. For instance, he mentioned that I could have users experience the panic attack on the subway and then “go to the ER and hear from a doctor that it was just a panic attack.” Not only would this create a better story arc, but it would help medical professionals understand the importance of empathizing with patients who have just had a panic attack. I think this was a really powerful insight, and I plan on brainstorming around it a bit more!

Final

fullscreen: https://editor.p5js.org/xuemeiyang/full/rJrKDzDam

edit: https://editor.p5js.org/xuemeiyang/sketches/rJrKDzDam

What it is
  • Pinwheels + Fat Cats + Fish + Paper Plane
  • Objects controlled by sounds.
Why I made it
  • Cross-dimensional interaction
  • Easy to play with
  • Different inputs lead to different outputs
  • // I LOVE FAT CATS 😀
Who uses it
  • Everyone
How it works
  • There is only one input : the microphone
  • Based on how hard the user blows into the microphone, the rotation speed of the pinwheel and the volume of the wind sound will change (see the sketch after this list).
  • By dividing the input into different stages, there will also be different reactions
  • Cats’ behavior
  • The fish rain
The process
  • Coding
  • Pinwheels
  • Fat cats
  • Volume of wind sound
  • Paper Plane
  • Fish rain
  • Buy the pinwheel & microphones
  • Mount the microphone on the pinwheel
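Here is a minimal sketch of the core input mapping described in the list above (assumed thresholds and a placeholder sound file, not the project code linked at the top of this post):

// Minimal sketch: blow strength -> pinwheel rotation speed and wind volume.
let mic;
let wind;
let angle = 0;

function preload() {
  wind = loadSound('wind.mp3'); // placeholder sound file
}

function setup() {
  createCanvas(400, 400);
  mic = new p5.AudioIn();
  mic.start();
  wind.loop();
}

function draw() {
  background(255);
  let level = mic.getLevel();                      // how hard the user is blowing (0.0 - 1.0)
  let speed = map(level, 0, 0.3, 0, 0.5, true);    // stronger blow -> faster spin
  wind.setVolume(map(level, 0, 0.3, 0, 1, true));  // stronger blow -> louder wind
  angle += speed;

  // Stand-in for the pinwheel: a rotating square at the center of the canvas.
  translate(width / 2, height / 2);
  rotate(angle);
  rectMode(CENTER);
  rect(0, 0, 100, 100);

  // Higher input stages could trigger the cats' behavior or the fish rain, e.g.:
  // if (level > 0.25) { startFishRain(); } // hypothetical helper function
}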

iConcert

Video

https://vimeo.com/306778810

Powerpoint

https://drive.google.com/open?id=1NNjm1CEuwuVWFnFWEf7kPRYFj-TYYB47

p5

https://editor.p5js.org/ach549@nyu.edu/sketches/rJ82qfVxE

Arduino

#define PIN_ANALOG_X 0
#define PIN_ANALOG_Y 1
#define PIN_ANALOG_X2 2
#define PIN_ANALOG_Y2 3
#define PIN_ANALOG_X3 4
#define PIN_ANALOG_Y3 5
#define PIN_ANALOG_X4 6
#define PIN_ANALOG_Y4 7
#define PIN_ANALOG_X5 8
#define PIN_ANALOG_Y5 9
#define PIN_ANALOG_X6 10
#define PIN_ANALOG_Y6 11

int sensor1 = analogRead(A0);
int sensor2 = analogRead(A1);
int sensor3 = analogRead(A2);
int sensor4 = analogRead(A3);
int sensor5 = analogRead(A4);
int sensor6 = analogRead(A5);
int sensor7 = analogRead(A6);
int sensor8 = analogRead(A7);
int sensor9 = analogRead(A8);
int sensor10 = analogRead(A9);
int sensor11 = analogRead(A10);
int sensor12 = analogRead(A11);

//joystick 2 X A2 Y A3

void setup() {
Serial.begin(9600);
pinMode(A0, INPUT_PULLUP);
pinMode(A1, INPUT_PULLUP);
pinMode(A2, INPUT_PULLUP);
pinMode(A3, INPUT_PULLUP);
pinMode(A4, INPUT_PULLUP);
pinMode(A5, INPUT_PULLUP);
pinMode(A6, INPUT_PULLUP);
pinMode(A7, INPUT_PULLUP);
pinMode(A8, INPUT_PULLUP);
pinMode(A9, INPUT_PULLUP);
pinMode(A10, INPUT_PULLUP);
pinMode(A11, INPUT_PULLUP);


// pinMode(2, OUTPUT);
}

void loop() {

Serial.print(analogRead(PIN_ANALOG_X) + analogRead(PIN_ANALOG_Y));
Serial.print(",");

Serial.print((analogRead(PIN_ANALOG_X2) + analogRead(PIN_ANALOG_Y2)) + 3000);
Serial.print(",");

Serial.print((analogRead(PIN_ANALOG_X3) + analogRead(PIN_ANALOG_Y3)) + 6000);
Serial.print(",");

Serial.print(-analogRead(PIN_ANALOG_X4) - analogRead(PIN_ANALOG_Y4));
Serial.print(",");

Serial.print((-analogRead(PIN_ANALOG_X5) - analogRead(PIN_ANALOG_Y5)) - 3000);
Serial.print(",");

Serial.println((-analogRead(PIN_ANALOG_X6) - analogRead(PIN_ANALOG_Y6)) - 6000);

delay(10);

}

What

My project is a cross between an instrument and a sound remixer. It is preloaded with 3 sounds and 3 tunes the user can “remix.” It can be considered the “kid’s version” of DJ equipment.

Why

So many people love music and have personal things they would change about certain songs or performances, but no way to do it. Learning how to use real DJ equipment is difficult, time-consuming, and usually left to the professionals. Actual DJ equipment is also expensive and bulky. That’s why I created the “kid’s version” of DJ equipment, giving regular people a chance to make music.

How

The project consists of 6 joysticks, all mapped to speed and volume. There are 4 joysticks on the bottom and 2 joysticks on the top, sitting where the fingers naturally rest when holding a video game controller. I made the case out of foam and shaped it to resemble a Nintendo or PS4 controller to be more comfortable, user-friendly, and intuitive. Out of the 6 joysticks, the 3 on the right are connected to 3 different real songs and the 3 on the left are connected to 3 different tunes/beats.
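To illustrate the mapping on the p5 side (the actual sketch is linked above), here is a simplified, hedged example: each joystick’s combined X+Y reading, sent over serial by the Arduino code above, is mapped to both the playback rate and the volume of one sound. The serial parsing itself is omitted, and the file name and ranges are placeholders.

// Simplified sketch of the p5-side mapping (placeholder file name and ranges).
// Assumes the combined X+Y joystick value (roughly 0 - 2046) has already been
// parsed out of the comma-separated serial string sent by the Arduino sketch above.
let song;

function preload() {
  song = loadSound('track1.mp3'); // one of the three preloaded songs (placeholder)
}

function setup() {
  createCanvas(200, 200);
  song.loop();
}

// Called whenever a new reading for this joystick arrives.
function updateTrack(joystickValue) {
  // A centered joystick reads around 1023: normal speed and volume.
  let rate = map(joystickValue, 0, 2046, 0.5, 2.0); // slower to faster playback
  let vol = map(joystickValue, 0, 2046, 0.0, 1.0);  // quieter to louder
  song.rate(rate);
  song.setVolume(vol);
}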

final

This is the link to my final documentation:

video1:

https://drive.google.com/open?id=1st1dvqPodxUSjrlSb7kbHTRExrme5RWp

video2:

https://drive.google.com/open?id=1VeCeyGA8XsaTL_bbirXURpiIZf5VSfeB

The whole webpage for documentation:

https://spark.adobe.com/page/eOmdfOcykgZLW/

 

Solar System Documentation

What it is

  This project is an interactive solar system that uses physical computing and p5. Using a sensor, we were able to create a spherical controller that, when turned, also rotates the planet on screen. There are many interactive solar system applications online, but ours is different because it integrates physical computing in a way that we have not seen before for these kinds of projects. The controller and how one interacts with it is what makes our project unique.

Solar System

Why we made it

  We started this project out of a love of outer space. It slowly transformed into something educational, so we could share that love with others.

Who uses it

  Our audience is people, particularly children, who do not know much about the solar system and want to explore. It is especially for people who are tactile learners. Our alternative controller allows the natural movement of turning an object around in one’s hands to translate into on-screen exploration.

How it works

  It is an online program that is controlled by a physical remote and the mouse. The planets and sun rotate in space. Using the mouse, you can click on a planet or the sun, and the program will zoom in and display information such as name, size, surface gravity, and more in the upper left-hand corner. From there, the controller can be rotated and the planet will mirror this rotation on the screen. To zoom back out, you must click on the space around the planet and the project will go back to the first screen displayed.

The process

  The process of making it was a tough one because we had a lot of ideas, but not all of them were possible given what we’ve learned and the time constraint. Our first idea was to build a planet creator, so that anyone could customize their own solar system in Unity. We talked this out with one of the residents, Jenny Lim, and realized that it was not going to be doable in time. So, we switched to p5 and Arduino and decided to create our solar system.

  At first, we created the planets in a side view that rotated around the sun using webGL. Then, we made a planet class so that we could standardize the planets and their information. To make the planets more easily clickable, we changed the idea so that the solar system side view had the planets rotating in place and not around the sun. We added a zoom so that when a planet is clicked, it becomes the focus of the screen. Finally, we added some ambient space-themed music looping in the background.


  The controller uses a 6DOF (six degrees of freedom) sensor to sense the rotation (Euler angles) of an object. Over serial communication, we fed the values that the sensor recorded into the rotateX, rotateY, and rotateZ values of the planets. For the enclosure, we laser-cut a pattern out of poster paper and folded it into a sphere-ish shape. The sensor then went inside a smaller, geometric paper shape and was suspended in the middle by string. At first, the sensor was connected directly to the Arduino with shorter, stiffer wires. For a more polished interaction and a stable connection, we soldered on flexible wires, then translated all the connections to a solderable breadboard. Both the breadboard and the Arduino were then placed inside a box, to prevent any wires from being pulled out.
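As a rough illustration of that pipeline (not our exact code; our sketch is linked below), the p5 side parses the three Euler angles from each incoming serial line and feeds them into rotateX/rotateY/rotateZ in WEBGL mode. The serial setup itself is omitted here, and the line format is assumed to be "yaw,pitch,roll" in degrees.

// Rough sketch: applying the sensor's Euler angles to a planet (WEBGL mode).
let rotXVal = 0;
let rotYVal = 0;
let rotZVal = 0;

function setup() {
  createCanvas(600, 600, WEBGL);
}

// Call this with every line received from the Arduino reading the 6DOF sensor.
function handleSerialLine(line) {
  let parts = split(trim(line), ',');
  if (parts.length === 3) {
    rotXVal = radians(float(parts[0]));
    rotYVal = radians(float(parts[1]));
    rotZVal = radians(float(parts[2]));
  }
}

function draw() {
  background(0);
  push();
  rotateX(rotXVal);
  rotateY(rotYVal);
  rotateZ(rotZVal);
  sphere(150); // stand-in for the textured planet
  pop();
}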

sphere


Arduino + breadboard

Link to p5 sketch:

https://editor.p5js.org/aramakrishnan/sketches/HyRDJSs1V

Arduino resources:

https://github.com/jrowberg/i2cdevlib (/Arduino/MPU6050/)

Processing Teapot sketch (used to test sensor in beginning stages):

https://gist.github.com/cooperaj/10965142