# Physical Computing – Midterm – Playtest

This week for our Physical Computing midterm we had to do some playtesting of the concept.  We collected materials on Thursday and were really excited about a six-position switch that we bought.  We are hoping that this switch will allow the user to control five or six different states of the kinetic window screen: 1) closed (static), 2) slow motion, 3) faster motion, 4) open (static), 5) “automatic”/responsive mode, connected to a distance sensor (or a photocell for playtesting), and possibly 6) medium speed.

We started by working on the Processing and Arduino code that would control the movement of two patterns, and we set up the circuit using a couple of buttons.  Pressing one button triggers quick motion; pressing the second triggers slow motion.  We got this working successfully and planned to elaborate on this setup by bringing in the switch and a photocell, but Hurricane Sandy prevented us from reaching the floor (where our new materials are stored) on Sunday.  We will have to resume this test early this week, as it’s essential for us to properly figure out the circuit before moving forward with the project.
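The two-button behavior from this test can be sketched in plain C++.  This is not our actual Arduino sketch; the state names and function are made up for illustration, and the booleans stand in for digitalRead() on the two button pins:

```cpp
#include <cassert>

// Hypothetical motion states for the screen during this test.
enum Speed { SLOW, FAST };

// Latch the most recent button press; with no press, keep the current state.
// In the real circuit these booleans would come from digitalRead() on the
// two button pins.
Speed updateSpeed(Speed current, bool quickPressed, bool slowPressed) {
    if (quickPressed) return FAST;
    if (slowPressed)  return SLOW;
    return current;  // neither button pressed: no change
}
```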

Liz was able to have her boyfriend test a basic setup based on our aforementioned work.  She added a potentiometer to the circuit and adjusted the function of the buttons: one button stops the motion, the other starts it, and the potentiometer controls the speed.  Here’s a video of that test:  P Comp Midterm Playtest
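The potentiometer-to-speed part of that test boils down to one rescaling step.  Here is a minimal C++ sketch of the idea, where the 5-50 ms delay range is purely an assumption for illustration (the real values would be tuned against the motor):

```cpp
#include <cassert>

// Map a potentiometer reading (0-1023) to a step delay in milliseconds:
// a higher reading means a shorter delay, i.e. faster motion.
// The 5-50 ms range is a made-up example, not a measured value.
int potToDelayMs(int potValue) {
    return 50 - (potValue * 45) / 1023;
}
```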

While this is a good start, we need to expand on the testing and build out the complete circuit.  Also, we’d like to determine which type of motor we’ll use.  We were able to borrow a stepper motor for our testing, and this week’s lab and lessons are about DC motors, so we should be well positioned to make this decision shortly.

# Comm Lab: Video & Sound – Video Project part 2

This post is a continuation of the Video Project part 1 post.  After creating our storyboard, we had to actually shoot some video.  Then edit it into a rough cut.  Then show that rough cut to the class.  Then refine the editing into a final cut.  Oh, and add the audio.  And refine that, too.  We had our work cut out for us.

We started by shooting the office scene and the coffee scene.  We reserved a room on the ITP floor and transformed the space into an office.  We thought it would take a couple hours; it took four and we didn’t even get to the coffee.  We’d only begun and were already behind.  In the week that followed we kept to a very aggressive shooting and editing schedule.  We shot the coffee scene and the sandwich scene at the ITP House and the elevator scene on the floor.  Everywhere we went we had to carry a lighting kit or two, the camera, and at least one tripod.  The scenes took hours to shoot.  We were constantly adjusting the lighting and, because we knew we wanted so many cuts and different perspectives, we had to act each scene all the way through over and over again from each camera angle.  Adding to this was the fact that none of us had any experience with the camera or the lights.  We’d set something up, review it, change it, record some footage, realize it looked bad, scrap the setup and start over.  Occasionally, we’d hit upon a setup that worked.  As the shooting continued, we became better at figuring out how things should be set up.  We were learning…

We felt continually grateful that we’d decided not to shoot in a public space or with professional/aspiring actors.  We were able to control most of what was going on and didn’t need to worry about upsetting individuals who were just trying to get their laundry done or unnecessarily wasting the precious time of other Tisch students.

After completing the shooting of a few scenes, we started to edit using Premiere Pro.  Even though we were all new to Premiere, because we had thought carefully about the scenes during the storyboarding and shooting process, the editing for the rough cut went well.  Our movie was starting to come together.

Despite our best efforts, we still didn’t get everything done.  The rough cut we presented to our class had only four scenes, and we’d originally envisioned six or even seven.  The feedback from our class was good: everyone felt it was long enough and that we didn’t need to add any more scenes.  Although we liked the concept with the additional scenes, we were glad to move on from the shooting and dig deeper into the editing and the audio.  We had thought we’d put a song over the entire movie, but a couple of tests in class showed us that it would work better if we added sound effects instead.  Music seemed to drive the emotion too much and take away from the acting.

We spent the next week putting the final touches on the video editing and adding sound.  The sound work took much longer than we had expected, and we were kicking ourselves for not using a Zoom recorder to capture sound as we were filming.  We had some success finding sound files on freesound.org but, for the most part, we re-created the sounds we wanted to use in the video.

Even though we knew it wasn’t exactly perfect, we had a final cut to show in class.  When we heard it through speakers (instead of earbuds) for the first time, we recognized the areas that needed sharpening.  Some scenes were much louder than others, and a few effects were so loud it seemed like the props were made of concrete.  Those things aside, the response from the class was good and we felt good about our work.  Gabe suggested that we add music in addition to the sound effects: a classical piece over the entire thing that builds to a crescendo at the end.  So, we took another week, fine-tuned the sound, made some edits to the video, and added music over the whole thing.

Here’s the final piece for your viewing pleasure!  We feel pretty good about how it turned out.  A month ago, we knew nothing about shooting or editing video, but we were able to make this!

# Comm Lab: Video & Sound – Video Project part 1

Our second (and final) project for Comm Lab: Video and Sound was to make a short movie (2-5 minutes) in small groups.  We had three weeks to complete all of the work.  Jonas, Paty and I were assigned to a group.

Our first task was to develop a story and create a storyboard.  We decided it would be best to take an existing story and develop it into a movie.  In our brainstorm session, we kept coming back to stories that had a New York theme — things that happen in the City, experiences common to New Yorkers, dealing with the love/hate relationship that many people have toward NYC.  This led us to the NY Times Metropolitan Diary column, where individuals contribute short anecdotes about NYC.  The stories are usually optimistic and humorous, which worked well for us since we wanted to create a funny or lighthearted movie.  We also thought it was really important to choose a story that could be communicated without using words.  We knew our chances of working with professional-level actors were very slim and poorly delivered dialog would kill the movie immediately.

Our original inspiration was A Waltz at the Laundromat, a story about a couple who bring some passion to the mundane task of folding laundry.  We drew the story out a bit and storyboarded a movie that included coordinated folding and a full-blown dance scene.  We were really excited about the concept and presented the storyboard in class.  However, when we started to look for places to shoot, we came up empty.  We called and visited a number of laundromats around NYU and in our neighborhoods.  All of them either said we couldn’t use their location or gave us the run-around (many managers were out of town).  Adding to our frustration was the thought of setting up all of the equipment and spending hours shooting in a public place where a lot of things (the number of people doing their laundry, lighting levels, etc.) were totally out of our control.  None of us had any experience using a video camera or lights, and we were becoming fearful that we’d quickly get on the nerves of any laundromat owner or customers.  We considered using a laundry room in an apartment building to have better control over some of these things, and even did some location scouting in an NYU dorm, but NYU has a strict student film policy and laundry rooms are not on the list of approved spaces.  Without a resolution to the location situation, we decided to kill that idea and develop a new story.

We went back to Metropolitan Diary, this time with new criteria in mind.  In addition to being self-contained, humorous, and not requiring dialog, this new story also needed a public or replicable setting.  No more private spaces for us!  We eventually landed on Street Crossing Time Trials, a story about elderly Upper East Siders who use the countdown crosswalk signs to record their own personal best crossing times.  We modified the story to be about two office workers.  We re-drew the storyboard and met with Gabe to review it.  Gabe pushed us to build out the story more; he felt that it needed something else: additional or repeated competitions and, of course, an ending.  After hours of brainstorming and refining, we developed a series of competitions in which our two office workers would engage throughout the day.  We drew a third storyboard, which came to be the basis of our shooting and editing.  We used a large piece of paper and did the best we could with stick figures and arrows.  The result is in this PDF: Friendly Competition Storyboard.

Although it took a very long time to think through the story and create the storyboard, it really paid off.  We relied on and referred to the storyboard throughout the shooting and editing.  Those processes were still time consuming, but at least there was no indecision; we came to know every scene, look, and cut inside and out.

# ICM – Week 7 Homework – Pixels

OK, it is week 8, time to turn in ICM homework based on week 7 concepts (pixels!). Can it be that this is my very first blog about ICM even though I’ve done so much ICM work?!  It’s hard to believe and clearly I need to do something about this.  But not right now.  Right now, I’m just going to post a couple videos of the homework.

The first is me upside down and pixelated (pixelized?). VideoPixelation_HWwk7_Pixels_121024

In the second video, I’m using color tracking to create a snake of ellipses. Talya, Lei, and I worked on this together:  SnakeWithColorTracking_Take3_Edited_ICM_Wk7

# Physical Computing – Midterm – Concept Presentation

We’ve started to work on our Physical Computing midterm and will be discussing the concept in class today, Monday, October 22.  Katie, Liz and I are working together on a kinetic window screen.  Our inspiration came from a few sources.  First, we liked the idea of trying to create something that is both aesthetically pleasing and functional: the screen creates privacy and allows light in, but also creates patterns as the movement and light levels vary.  Second, we were interested in playing around with a moiré effect and thinking about how to create such effects using overlapping patterns.  Third, we wanted the cutout patterns to reflect patterns in nature, for example: pebbles, cut fruit, cellular structures.  Because moiré has its origins in textiles, we originally thought we would use fabric for the screens.  As our conversations about the project progressed, we began to consider using high-quality paper instead of fabric.  This is primarily an economic consideration, but paper has a couple of additional advantages over fabric: it’s easier to use with the laser cutter and, because it’s less expensive, we can test out multiple patterns before deciding on the final one.

Here’s a short description:

Create a paper window screen that is kinetic and responds to user input.  A loop of paper, the length of which is equal to the height of a window, is fitted around two dowels, one at the top of the loop and the other at the bottom.  The dowels are set within the window frame.  The paper has a cutout pattern which allows light to come through.  The pattern is repeated so that the front and back of the paper loop can be aligned exactly (the negative spaces match up to allow a maximum amount of light to shine through) or not aligned at all (the negative space matches with positive space to block light).  When moving between those settings, the cutout patterns merge and depart from one another, creating new patterns on the screen and on the floor/wall from the sunlight.

The user can adjust the setting of the screen (light, privacy, in between, in motion) using a potentiometer or switch.  Another thought is that a motion sensor or photo sensor could enable the screen to respond to the user’s approach or the presence of sunlight; it would begin to move when a person is nearby, for example.
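The potentiometer version of this comes down to reading five thresholds.  A minimal C++ sketch of that bucketing (the state names and the equal-width thresholds are assumptions for illustration, not the project's final values):

```cpp
#include <cassert>

// Divide the potentiometer's 0-1023 range into five equal buckets, one per
// screen state (e.g. 0 = closed, 1 = slow, 2 = medium, 3 = fast, 4 = open).
// The bucket width of 205 is 1024/5 rounded up, so readings map to 0-4.
int potToState(int reading) {
    int state = reading / 205;
    return state > 4 ? 4 : state;  // clamp, just in case
}
```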

And here’s a labeled diagram:

Diagram for our Physical Computing midterm project.

And a preliminary list of materials:

| Item | Quantity | Purpose | Cost | Supplier |
| --- | --- | --- | --- | --- |
| dowels/wooden rods | 2 | secure both ends of patterned loop | \$3-\$8 each | Home Depot |
| roll of paper/mylar | enough for multiple tests | primary component of project | \$65 (we probably don’t need to buy a whole roll; need to investigate further) | Pearl Paint |
| DC motor with speed control and continuous rotation | 1 | control motion direction and speed based on voltage/input | \$10 | use the motor from our kit for testing; buy a different one if necessary |
| rubber/grippy material for end of each dowel | 4 pieces (maybe 1-2″ wide, length the same as the dowel diameter) | wrap each end of the dowel to better grip the paper as it rolls | \$10 | Canal Rubber |
| motion sensor | 1 | sense proximity of person or presence of sunlight; initiate motion | \$1 | Adafruit |
| window frame | 1 | hold the dowels and the paper roll | free | Katie’s scrap wood |
| potentiometer | 1 | allow user to manually adjust the paper roll; 5 thresholds to read for 5 possible states | \$1 | Adafruit |

There are a few patterns we’re considering.  Next week, during the playtesting phase of the project, we’ll experiment with different patterns and determine which one(s) we’ll use.  Some look more promising than others, but animating them in Processing will give us a better idea of how they’ll look when built out.  Katie generated these patterns using rvb and Rhino.

Possible patterns for the paper loop.

# Physical Computing – Week 4 Lab

This week in Physical Computing the lab was about serial communication (specifically, serial output).  This is the big time: we’re learning how to get two electronic devices (yeah, like the computer and the Arduino) to talk to each other.  In this lab, we sent data from the potentiometer to the computer and graphed the output.  The result was that, when you adjusted the potentiometer, a graph in a Processing sketch changed accordingly.

First, I set up the circuit connecting the potentiometer to analog pin A0.  Then, I programmed the Arduino to read the analog sensor and print the results to the serial monitor.  This instruction was pretty familiar, but then came a twist: use Serial.write() instead of Serial.println() to send the sensor value serially.  The result is that instead of seeing numerical values in the serial monitor (like we have in the past) we see a bunch of characters that make no sense at all.  As I continued through the lab, I learned that these are ASCII characters.  Serial.write() sends the sensor reading as raw binary; the serial monitor receives that information in bytes and displays the ASCII character that corresponds to each byte.  OK, so, why?  Well, computers talk to each other in bits and bytes, so the sensor reading needs to be sent in bytes in order for the computer to know how to read it.  It’s important to remember that both devices need to be in agreement about the voltage, data rate, and order of interpretation of bits in order for communication to work.
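The difference between the two calls can be modeled in plain C++.  The helper names here are made up; they just describe what each Arduino call actually puts on the wire:

```cpp
#include <cassert>
#include <string>

// What Serial.println(value) sends: the value spelled out as ASCII text,
// followed by a carriage return and newline.
std::string printlnBytes(int value) {
    return std::to_string(value) + "\r\n";
}

// What Serial.write(value) sends: one raw byte (the low eight bits of the
// value).  The serial monitor then renders whatever ASCII character has
// that code, which is why the output looks like gibberish.
unsigned char writeByte(int value) {
    return static_cast<unsigned char>(value & 0xFF);
}
```

Note that writeByte(65) comes out as the character 'A', and a 10-bit reading above 255 loses its high bits, since only one byte goes out per call.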

The next part of the lab was to download and install CoolTerm, which allows you to see both the ASCII characters and the hexadecimal values.  Here’s a screenshot of my CoolTerm window:

Next, I graphed the sensor values.  I wrote a Processing sketch that reads the sensor values according to the instructions in the lab, making sure to include the Processing Serial library.  The Arduino sends data to the computer and the graph visualizes the sensor output.  Mine doesn’t look exactly like the image in the lab: instead of making peaks it sort of hits a plateau at the top, but I think it’s working properly.  Here’s a short screen capture video of my graph: Lab_Wk4_SerialCommunication_GraphOfOutput

# Physical Computing – Week 3 Labs

This week for Physical Computing we had to complete two labs: one using the servo motor (analog out) and a second involving tone output.

In the first lab, we set up the breadboard with a force sensing resistor and a servo motor and programmed the Arduino to map the FSR input to the servo’s 180-degree range.  Here’s a video of my working setup: ServoLabVideo  (You can see that as I apply more pressure to the FSR, the arm of the servo rotates further.)  This lab was my first exposure to controlling motion, and using the servo motor is really cool.  Up until this point, I’ve mostly been making LEDs turn on and off, which is pretty satisfying, but having the ability to make things move is going to be great.
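The FSR-to-servo mapping comes down to one line of arithmetic, Arduino's map() function.  Here is that rescaling re-implemented in standalone C++ for illustration (the formula matches the integer math in the Arduino reference):

```cpp
#include <cassert>

// Arduino's map() re-implemented: linearly rescale x from one range to
// another using integer math, as the lab uses it to turn a 0-1023 FSR
// reading into a 0-180 servo angle.
long mapRange(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}
```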

In the second lab, we set up the breadboard with a little speaker and two photocells, then programmed the Arduino to play a tone that varies in frequency with the analog input.  As you move your hand over the photocells and adjust the amount of light that hits the cells, the frequency of the tone changes.  The volume of the tone output is very low (I used the speaker in the kit) and nearly impossible to capture using the camera on my phone, so here’s a picture of the working setup (no video).

The input from two photocells adjusts the frequency of the tone output.

The second part of the tone output lab was to use the same setup to play a tune.  This was accomplished by including a file called pitches.h and then putting the notes of the tune into an array variable.  This was all totally new to me, so I had to use the code provided in the lab.  Even though I didn’t write this code myself, this part of the lab was really instructive.  Up until this point, I hadn’t realized the extent to which the output can be controlled by the code that you write.  The analog input labs last week started this thinking (making the LEDs toggle, for example) and I guess conceptually this should be obvious (you write the instructions and, if everything goes well, the circuit you build follows those instructions), but for whatever reason this little tune really brought it home.
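The array idea looks something like this in standalone C++.  The three frequencies are the standard pitches.h constants; the particular three-note tune and the duration helper are just an illustrative sketch, not the lab's actual melody:

```cpp
#include <cassert>

// Frequencies in Hz, as defined by the lab's pitches.h file.
const int NOTE_C4 = 262, NOTE_D4 = 294, NOTE_E4 = 330;

// One array holds the notes of the tune, a parallel array holds the note
// types (4 = quarter note, 2 = half note), as in the Arduino tone tutorial.
int melody[]        = { NOTE_C4, NOTE_D4, NOTE_E4 };
int noteDurations[] = { 4, 4, 2 };

// Duration of a note in milliseconds: one second divided by the note type.
int noteMs(int noteType) {
    return 1000 / noteType;
}
```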

This lab had a third section where we built a little musical instrument using three sensors as keys on a keyboard.  Again, we included the pitches.h file but this time used an if statement to control which sensor played which note.  The result was that you could play three different notes, each corresponding to one sensor.  This part of the lab was an excellent extension of the tune portion.  By working through this section, I was able to see the value of interaction.  Telling the circuit to play a tune is really fun for me, but might not be so interesting to anyone else.  Here’s a picture of the final setup.

Each sensor controls the output of one note.
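The if statement that maps each sensor to its note can be sketched like this in standalone C++ (the frequencies are the standard pitches.h values; the threshold and function name are illustrative assumptions, not the lab's exact code):

```cpp
#include <cassert>

// One note per sensor, keyboard-style: whichever sensor reads above the
// threshold plays its note; a return of 0 means silence (no tone() call).
int keyToFrequency(int s1, int s2, int s3, int threshold) {
    if (s1 > threshold) return 262;  // NOTE_C4
    if (s2 > threshold) return 294;  // NOTE_D4
    if (s3 > threshold) return 330;  // NOTE_E4
    return 0;                        // no key pressed
}
```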

So, three pretty big revelations in one week of labs: 1) motion is something that can be controlled, 2) you can tell your circuit what to do, and 3) you can even make those orders interactive (if you set up your circuit with interaction in mind).  Now to come up with a way to combine these ideas into a creative project…

# Physical Computing – Response to Week 3 Readings

In preparation for the week 3 class, we read two short pieces: Physical Computing’s Greatest Hits (and Misses) and Making Interactive Art.

Both of these readings really inspired me.  Even though I’m a long way from making projects of this caliber, I found comfort in the notion that many projects spring from common themes and that I don’t need to reinvent the wheel to develop and realize a creative project.

I find the musical instrument theme very appealing.  To me, the process of music making is a mystery and instruments seem so formal and foreboding, I wonder how I could ever learn to use such a thing.  Projects that enable people to make music without knowing how to play an instrument take away that fear and make the experience more fun; instead of feeling concerned about holding something correctly, or putting your fingers in the right places, you get to experiment with sounds.  Growing up, there was a small science museum in my town which I visited somewhat frequently.  I remember a bunch of really tall tubes, PVC pipes or something like that maybe, made to resemble the pipes of an organ but instead of connecting one end to an organ that end was just open.  The user hit that open end with a shoe (I think it was a specially-made paddle but it always seemed to me like a slipper) to produce a note.  This installation made music production a physical activity.  Probably no one ever used it to compose a masterpiece, but it was really fun.

I’m also drawn to the body-as-cursor theme, in particular I’m interested in the Digital Wheel Art project that was described.  Since we’re inventing the future, we might as well think as creatively as possible in as many dimensions as possible.  That includes thinking outside our own abilities (or cultures or languages) and designing really great projects for people who aren’t us.  We can ask ourselves how we can expand the functionality of a project to make it more usable and more widely accessible.

# Physical Computing – Week 2 Labs

This week for Physical Computing we had to complete two labs: the first Arduino program (with digital in and out) and analog in.  In addition, I was selected to present a creative project which gives context to the lab concepts.

The LEDs toggle when you press and release the button.

For the first Arduino program lab we set up the breadboard with two LEDs and a pushbutton and then wired the breadboard to the Arduino.  Since the pushbutton is a digital input (it’s either on or off), both the input and output were connected to the digital pins on the Arduino.  Then we programmed the Arduino to turn on one LED at a time: one is on when the button is not pressed, the other when the button is pressed.  The final result is LEDs that toggle as you press the button over and over.  This lab helped me to understand the basic setup of input and output, the functions digitalRead() and digitalWrite(), and the constants HIGH and LOW.  I’ll put a video here but beware, it’s not really in focus: TwoLEDsAndButton
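The toggle logic reduces to a tiny function.  A standalone C++ sketch of it (the struct and function names are hypothetical; in the real sketch the boolean comes from digitalRead() and the outputs go to digitalWrite()):

```cpp
#include <cassert>

// Which of the two LEDs is HIGH for a given button state.
struct Leds { bool first; bool second; };

// Exactly one LED is on at a time; pressing the button swaps them.
Leds ledsForButton(bool pressed) {
    return Leds{ !pressed, pressed };
}
```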

The second lab focused on analog input.  First, we set up the breadboard with a potentiometer and an LED.  Then we connected the breadboard to the Arduino using an analog pin for the potentiometer and a digital pin for the LED.  The Arduino programming became a little more complicated in this lab: we had to establish variables and translate the potentiometer reading (which ranges from 0 to 1023) into a PWM output value for the LED (which ranges from 0 to 255).  We also used Serial.println() to monitor the potentiometer output.  The final result was an LED that brightens and fades as you turn the potentiometer.  Again, an out-of-focus video: PotentiometerAndLED
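The 0-1023 to 0-255 translation is a single division.  A standalone C++ sketch of it (the function name is made up; dividing by 4 is the common shortcut and agrees with a linear rescale at both endpoints):

```cpp
#include <cassert>

// Rescale a 10-bit analog reading (0-1023) to an 8-bit PWM value (0-255)
// suitable for analogWrite().  Integer division by 4 gives 1023 -> 255
// and 0 -> 0.
int readingToPwm(int reading) {
    return reading / 4;
}
```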

This lab had two parts.  In the second part, we used force sensing resistors as the analog input which controlled the brightness of two LEDs.  Here’s a video of those sensors in action: touch-sensor-lab-wk2.  As you can see, the sensors and LEDs respond fairly well to the variation in pressure.

In addition to the labs, I was selected to present a creative project in class. Since day one at ITP, I’ve heard a lot about the integral part that failure plays in learning and creativity.  With that in mind, much of the remainder of this post will be about failure and confusion.

I decided to work with analog input for my creative project; the sensors were a lot of fun to work with, and they make you feel like you have control over what’s happening with those little LEDs.  Along the lines of the “love-o-meter” challenge, I decided to create a “frugality-o-meter” to measure the very sexy trait of money management by measuring how tightly you can pinch two pennies together.  I thought I’d use a flex sensor and build it into some type of contraption that adds physical resistance (so it becomes a little difficult to bring the two ends together).  Based on the analog in lab, and some additional resources, I cobbled together some Arduino code that would make one LED light up with a little flex, a second LED light up with more, and a third LED light up if those pennies were really pinched.  When I built my breadboard and connected it to the Arduino, all three LEDs just lit up, which is not what’s supposed to happen.  I’d put a Serial.println() in my code for the sensor input, and the serial monitor showed a fairly steady reading of 1023 even when the sensor was totally flat; the reading didn’t change much when I bent it.  Here’s a picture of the serial monitor:

What’s wrong with this picture?

So, OK, plenty of things could have gone wrong.  Is there a problem with my code?  Maybe I need to do some math someplace?  Am I just using the sensor incorrectly?  Or is the whole setup totally wrong?  I try rearranging things and looking over my work as best I can.  Nothing changes.  Long story short, I couldn’t figure out what I was doing wrong, so I decided to give the whole thing a try with the force-sensing resistor.  Since I used the FSR in the lab, I feel like I know how to handle this sensor a little better, and maybe I can figure out a way to use it in the frugality-o-meter instead.

I disconnect the flex sensor and replace it with the FSR.  All the LEDs light up.  What?  I feel so sad — the good old FSR, after all we went through figuring out part two of the lab, is now betraying me.  I open up the serial monitor and the reading changes as I put pressure on the sensor, but it never goes below about 300, even when nothing is touching it at all.  Here’s a picture of the serial monitor for the FSR:

Now it’s personal…

I go through all of the adjustments I can; nothing I do seems to make much of a difference.  And, really, the FSR is too sensitive for what I’m trying to accomplish.  I could maybe build something around it that gets the job done, but at this point I’m basically doubting all of my skills.  I’m unable to figure out what I’m doing wrong, so I have to move on.

As a last-ditch effort to gain some understanding of where the problem is, I replace the sensor with the potentiometer.  And, lo and behold, it works like a charm!  Exactly what I want to happen, happens.  When you twist the potentiometer a little bit, one LED comes on; a little more lights a second; and when it’s cranked all the way, all three LEDs are lit.  My despair fades.  Even if this isn’t what I was going for, at least something is working and making sense.  I ditch the frugality-o-meter.  I start thinking about a new context for my analog in and corresponding code.  With some input, I land on a faucet.  A faucet which drips LEDs when you turn the little knob.  I add some code that makes the LEDs toggle when the knob is turned on all the way, so it looks kind of like a drip.  And, while I’m building the housing for the breadboard, I decide to turn the whole thing into a PSA for the DEP.  Here’s a photo of the finished piece:

Leaky faucets waste money and resources.
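The staircase behavior described above (one LED with a little twist, all three when cranked) can be sketched in standalone C++.  Splitting 0-1023 into rough thirds is an assumption for illustration; the thresholds in the real sketch were tuned by hand:

```cpp
#include <cassert>

// How many LEDs are lit for a given potentiometer reading.
int ledsLit(int potValue) {
    if (potValue > 682) return 3;  // cranked all the way: all three
    if (potValue > 341) return 2;
    if (potValue > 0)   return 1;  // a little twist lights the first LED
    return 0;                      // knob fully off
}
```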

I’ll include a video here too because, well, who knows if this is going to work this afternoon in class (LeakingFaucetAnalogIn).  I’ll be honest, I had a hard time getting the breadboard and the Arduino into the housing.  I had to add wires to the potentiometer legs so that it could be separated from the breadboard, and it got kind of unruly.  Also, it’s possible those LEDs will work themselves loose on the commute.

When I started the “get creative” part of the lab I was really nervous.  And, as you can see, it wasn’t exactly smooth sailing.  But I got something done and I feel good about that.  Even if it’s not in direct response to the challenge, the faucet demonstrates a basic understanding and use of analog in.  I think this is a great place to start and definitely leaves me a lot of room for improvement.