As you can probably guess based on the video above, I was able to successfully construct a gumball machine for my pComp final. While I consider the project a success, it ended up taking a slightly different form than I had initially envisioned.
Initially, I had planned to use RFID to authenticate users as members of the ITP community using their NYU ID cards. Having determined that the RFID chips in our cards use the 13.56MHz RFID standard, I went ahead and ordered a 13.56MHz RFID reader and evaluation board from SparkFun. Of course, it was only afterward that I saw an email from Tom Igoe describing the difficulties he had experienced in trying to read any kind of data from the NYU cards using off-the-shelf RFID hardware. Since I had already ordered the hardware, however, I decided to press on and see if I couldn’t figure out some kind of authentication mechanism using RFID.
This was the point at which I learned a very valuable lesson: when you get new parts, read the documentation closely before doing ANYTHING. Upon receiving the RFID hardware, I immediately soldered the RFID chip to the evaluation board, which, as I later learned, was a really dumb idea for at least two reasons: 1.) the 13.56MHz RFID chips ship with different firmware versions installed on them and often need to be flashed with the latest revision of the firmware in order to use the correct communications protocol–to do this, of course, the chip needs to be breadboarded sans evaluation shield; 2.) the 13.56MHz chip is very heat-sensitive, and soldering directly to the leads can fry the chip unless you are really good at soldering (I’m not!).
So, after trying in vain to get the RFID module to work and consulting with Tom Igoe, I learned that either my chip had the wrong firmware installed on it or I had destroyed it during the soldering process. My only option at that point was to attempt to de-solder the chip from the board, which would almost certainly fry the chip–that is, if I hadn’t already ruined it the first time around.
Left with no good options, and knowing that reading the RFID chips in the NYU cards was probably a long shot anyway, I decided to abandon RFID at that point. Since reading the magnetic stripes on the back of the cards was far easier, I sent out a panicked email to the phys-comp list the day before the project was due, asking if I could borrow someone’s magnetic stripe reader. To my surprise, a number of folks on the floor volunteered their hardware, and thanks to Paul May and Genevieve Hoffman’s kindness, I was able to get the gumball machine up and running in no time.
Here’s what the completed project looked like:
Here’s a look at the back of the unit. You can see the card reader up top, the funnel and solenoid resting on the first tier and the Arduino Uno and breadboard on the bottom:
Here’s a close-up of the 24V solenoid that regulated the flow of gumballs:
And last but not least, here’s a close-up of the Arduino and the breadboard:
All in all, the project felt like a success in the end, even if the construction process didn’t work out quite as planned. I was pretty happy with the enclosure and overall look of the unit, the fabrication of which I could not have completed without much help and advice from Eric Hagan, John Duane and others in the shop.
Were I to keep working on this project, I would definitely improve the construction of the funnel and mechanism so that it could hold more gumballs (as built, it couldn’t hold more than three gumballs at a time without jamming). Had I more time, I would also have liked to play around with the card reader more and figure out whether I could pull any meaningful data off of it. Now that I know a bit of Sinatra and am aware of the possibility of sending data from an Arduino to a database via a Sinatra app, it would also be cool to log data on who is using the machine and make that data publicly available somewhere.
In closing, I will leave you with the lessons I learned while working on this project: read before you solder; you can rely on the kindness of ITP strangers; and chewing too many gumballs will make your jaw ache.
For my analysis of a commercially-available pop-up book, I chose Brooklyn Pops Up, which Marianne kindly lent to me. Going through the book, I discovered that most of the mechanisms were indeed ones that we had already learned in class–they were simply combined in exciting and sophisticated ways.
Of the 8 pages in the book, 5 rely heavily on floating layers. In most cases, these floating layers are either combined with other mechanisms or floating layers are layered on top of each other to produce a greater sense of depth. Here we see the first page of the book, which combines a floating layer with the pull-down-type technique we learned in class. Fairly simple but effective nonetheless.
This bridge scene is pretty spectacular but the mechanism seems surprisingly simple–it appears to be a floating layer built on top of something that looks like half of a box support.
This Coney Island scene is the grand finale, as it were. Looks fantastic but it’s actually really simple, mechanically. It’s just two of those folding V-supports that we learned in class with a few other elements attached so that they all pull up at the same time.
I’m having a bit of trouble deciding what I want to do for my ICM final, so I’ve decided to present two ideas in class this week and make up my mind based on the feedback I receive.
This is essentially the original idea I had–to build a pop-up book with QR codes embedded in it and then build an accompanying application in Processing that recognizes those codes and triggers an animation. Unfortunately, after telling my pop-up books professor, Marianne Petit, about this idea, she revealed that a previous ITP student had built a project that is similar in some ways. This has somewhat diminished my interest in following through with the idea, though I might still do it.
Two weeks ago, I attempted to build a data visualization that used data imported from the New York Times API in real time. Essentially, the application downloads the abstracts from the last 30 days’ worth of most-read stories on the site, counts the frequency with which words appear in those abstracts and then draws the words on the screen at a size that corresponds to their frequency. I had originally planned to finish the app in time for my week 9 homework assignment but was unable to due to difficulties parsing the XML that the API spat out. With a little help from Rune, I was able to get it working successfully, but there’s still more I’d like to do with the app. If I continue to work on this app and turn it into a final project, I would:
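For the curious, the counting-and-sizing logic at the core of the app can be sketched in a few lines. The original is a Processing sketch; this is a Python illustration, and the function name, tokenization and font-size range here are my own assumptions rather than the app’s exact values:

```python
from collections import Counter
import re

def word_sizes(abstracts, min_size=10, max_size=72):
    """Count word frequencies across a list of abstracts and map each
    word's count linearly to a font size between min_size and max_size."""
    words = []
    for text in abstracts:
        words.extend(re.findall(r"[a-z']+", text.lower()))
    counts = Counter(words)
    lo, hi = min(counts.values()), max(counts.values())
    span = (hi - lo) or 1  # avoid dividing by zero when all counts are equal
    return {w: min_size + (c - lo) * (max_size - min_size) / span
            for w, c in counts.items()}

sizes = word_sizes(["The city votes today", "The city sleeps"])
# "the" and "city" appear twice, so they get the largest size
```

In the real sketch, each entry of that dictionary would then be drawn with `textSize()` set to its computed value.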
Work on the RFID-activated gumball machine continues apace. I met with a resident yesterday to discuss ideas for the mechanism and fabrication of the machine. After talking to Patricia Adler, I decided that a solenoid would probably be the best option for powering the gate that releases the gumball, since even a relatively low-power solenoid will be much faster and more powerful than a comparable motor. I found a 24V solenoid on Adafruit that seems to fit the bill–I’ll have to power it externally with a 12-24V power adapter and switch it from the Arduino via a transistor, but that should be relatively easy to set up. Patricia also helped me come up with some ideas for how to make the machine a bit more engaging to the user: a partially see-through enclosure that allows the inner mechanics to show, a winding tube that the gumball has to traverse, etc. My plan is to build the basic mechanism for the machine first and then add some of these additional features if I have time. Unfortunately, I’m still waiting on most of my parts–13.56MHz RFID readers are apparently very popular at ITP this year, if the week-long backorder for them at SparkFun is any indication–but hopefully I’ll have all my parts in hand after Thanksgiving and can begin construction on the machine in earnest.
For my pComp final, I decided I wanted to do something with RFID, which is a technology I’ve long been curious about but have not yet had a chance to play around with. Some chatter on the list suggested that our NYU ID cards might have RFID chips in them (this still hasn’t been 100% confirmed), which gave me the idea to create some kind of installation on the floor that would require students to use the RFID chips in their ID cards to unlock some sort of reward. Everyone likes candy (actually, I don’t really care much for candy but I digress), so an RFID-activated candy machine seemed like a good way to go. This will be a pretty challenging project for me, I think–not only will I have to learn how to use the RFID hardware, I’ll also have to fabricate a box and build a mechanism for dispensing the candy, all of which are things that I have no idea how to do. There’s also the open question of whether or not our ID cards have RFID chips in them. At any rate, I’m hoping that I can overcome all of these obstacles and build an RFID-activated candy machine within the next month–stay tuned!
What you see above is, more or less, the result of my pComp group’s media controller assignment: a primitive music-making device housed inside a toy ball. The idea was to create a device that allows the user to control, through motion, the pitch, octave and volume of the notes it emits. My partners for this assignment were Chris Egervary and Olya Mikhaliova.
As a peek inside the TEO will reveal, we used a single board containing a triple-axis accelerometer and a dual-axis gyroscope to collect input data, which we then mapped to variables that controlled the different note parameters. By using both an accelerometer and a gyroscope, we were able to translate movements in 3-dimensional space (e.g. tilt, pan, yaw) into changes in the sound output by the TEO. We housed the sensors inside a foam ball, which, in addition to providing a form factor that invites the user to play and experiment, also protects the internal circuitry.
In terms of output, all of the data generated by the movement of the ball is fed into an Arduino Uno microcontroller attached to a musical instrument shield. The shield is capable of producing sounds from a number of different MIDI tonebanks and can play up to 31 sounds simultaneously. For the sake of simplicity, we settled on a single tonebank: a cheesy organ sound. Initially, we allowed the sounds generated from each movement to persist, slowly building up layer after layer to produce a sort of drone. We ultimately decided against this layering, however, since it made it difficult for the user to discern how her movements translated into sound output. If you’re curious, a video of the ball in “drone mode” is embedded above.
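The heart of the mapping is just linear rescaling of raw sensor readings into note parameters. The actual code lived in the Arduino sketch; this Python sketch shows the idea, using only the three accelerometer axes for simplicity, and the 0-1023 ADC range and note ranges here are assumptions rather than our exact values:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linear rescale, like Arduino's map(), with input clamping."""
    value = max(in_lo, min(in_hi, value))
    return out_lo + (value - in_lo) * (out_hi - out_lo) / (in_hi - in_lo)

def ball_to_midi(ax, ay, az):
    """Map three raw accelerometer readings (assumed 0-1023 ADC range)
    to a MIDI note number and velocity."""
    pitch_class = int(scale(ax, 0, 1023, 0, 11))   # 12 semitones in an octave
    octave      = int(scale(ay, 0, 1023, 2, 6))    # restrict to octaves 2-6
    velocity    = int(scale(az, 0, 1023, 0, 127))  # MIDI velocity = loudness
    note = 12 * octave + pitch_class               # standard MIDI numbering
    return note, velocity
```

On the real device, the resulting note and velocity were sent to the musical instrument shield as MIDI messages each time the readings changed enough to matter.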
If we decide to continue working on this project in the future, we have a few ideas for how we might move forward. We would certainly like to tighten up the code a bit in order to make the link between input and output more precise, thereby giving the user a greater feeling of control over the device. We would also like to add a switch to the ball (most likely a pressure sensor embedded on the surface) that would allow the user to cycle through different tonebanks. It would be great to make the ball fully wireless using either WiFi or Bluetooth hardware, though this presents some power issues (we would have to embed a battery inside the device itself, to power both the sensors and wireless radio). Finally, it would be great to build multiple TEOs, each outputting a different tonebank, in the hope that users could compose and perform music by manipulating multiple devices in tandem.
For my ICM midterm, I decided to dive into the openCV library for Processing and play around with facial recognition via webcam input. My initial idea was to use input from the webcam to control the ghosts that I had animated for my week 5 homework assignment. My first thought was to track eye movement and have the ghosts on screen follow the user’s area of focus. Of course, after looking into the capabilities of openCV, I realized that such sophisticated analysis/tracking of webcam input was probably outside the scope of what the library can do. So I modified my original idea and decided instead to replace the user’s face (or users’ faces) with a vector image. My thought was that this would result in a sort of augmented-reality Halloween mask, allowing the user to alter his or her appearance without having to endure the discomfort of wearing a physical mask.
Finally, I replaced the red rectangle with a vector image of a pumpkin. The pumpkin not only follows the user’s face around the screen but also scales in size, depending on the user’s proximity to the webcam.
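The follow-and-scale behavior is simple geometry once openCV hands you a face bounding box. The real sketch was written in Processing; this Python sketch illustrates the math, and the function name and margin factor are my own, not from the original:

```python
def mask_transform(face_x, face_y, face_w, face_h, mask_w, mask_h, margin=1.2):
    """Given a detected face bounding box and the mask image's native size,
    return (draw_x, draw_y, draw_w, draw_h) so the mask is centered on the
    face and scales with it, with a little margin on each side."""
    draw_w = face_w * margin
    draw_h = draw_w * (mask_h / mask_w)  # preserve the mask's aspect ratio
    cx = face_x + face_w / 2             # center of the detected face
    cy = face_y + face_h / 2
    return cx - draw_w / 2, cy - draw_h / 2, draw_w, draw_h
```

Because the detector reports a larger box as you approach the camera, scaling the drawn mask off `face_w` is what makes the pumpkin grow and shrink with your distance from the webcam.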
There was more that I wanted to do with this–I wanted there to be some sort of interaction between the user’s face and the animated ghosts but I wasn’t able to implement that in time. I’d also like to allow the user to select which mask to wear from a list of options but I didn’t have enough time to create vector images for multiple masks. All ideas for future iterations of the program, I suppose.
So, first things first. Above are my results for the color hue test. 7/99 is not bad, I think? At least that’s what the above graphic leads me to believe. Would have been nice to know what an average score for my age range/gender is, though.
Below are my experiments in tweaking hue. I created three color bars, the first of which is a constant. With each iteration, I pushed the hue on the second and third bars down and up a bit, respectively. Interesting that they both eventually end up in the same place.
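That convergence makes sense once you remember that hue is an angle on a color wheel: pushing one bar down and the other up by the same amount eventually meets on the opposite side, since +180° and -180° land on the same color. A quick check in Python with the standard-library colorsys module (my own helper, not part of the assignment):

```python
import colorsys

def shift_hue(rgb, degrees):
    """Shift a color's hue by the given number of degrees, wrapping
    around the color wheel (hue is circular, 0-360)."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    h = (h + degrees / 360.0) % 1.0  # colorsys stores hue as 0.0-1.0
    return colorsys.hsv_to_rgb(h, s, v)

red = (1.0, 0.0, 0.0)
down = shift_hue(red, -180)  # push hue down halfway around the wheel
up   = shift_hue(red, +180)  # push hue up halfway around the wheel
# both land on cyan: opposite sides of the wheel are the same point
```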
Finally, here’s my “fake transparency” image, created by playing with brightness, saturation and hue in order to create the illusion of transparency:
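The trick behind faking transparency is computing, per channel, the color that a genuinely semi-transparent overlay would produce over each background–standard alpha blending, just done by hand with the color picker. A Python sketch of the formula (the colors and opacity below are arbitrary examples, not the ones in my image):

```python
def fake_transparent(fg, bg, alpha):
    """Return the opaque color that mimics drawing fg over bg at the
    given opacity (0.0-1.0): result = alpha*fg + (1-alpha)*bg per channel."""
    return tuple(alpha * f + (1 - alpha) * b for f, b in zip(fg, bg))

# a 50%-opaque white shape over a pure blue background "looks" like this
fake_transparent((255, 255, 255), (0, 0, 255), 0.5)  # → (127.5, 127.5, 255.0)
```

Painting each region of the shape with the blended color for whatever sits behind it is what sells the illusion, even though every pixel is fully opaque.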