As I read through Tom Igoe’s Physical Computing’s Greatest Hits (and misses), I was struck by just how much one can innovate on existing ideas. While I had come across almost all of these project types before, each of the projects Tom presents is a unique take in its own right. Human creativity is decidedly wonderful.
I’m always most intrigued by the projects that cause a double-take: the seemingly random object that mirrors your movement as you walk by; the “fake” flowers that sway in the wind with your movement; the video game that you’re part of without knowing it. I think random day-to-day delight and whimsy is something humans thrive on but don’t get to experience nearly enough. It’s certainly a concept I would love to explore more closely.
I’m also quite drawn to physical computing projects that explore alternative forms of energy. For example, the Solar Sinter project (in the above video) utilizes sunlight and sand as the raw energy and material to produce glass objects using a 3D printing process (and an Arduino is the microcontroller!). While it’s still experimental, it’s a fresh take on how to use abundant energy supplies. More physical computing projects should explore this area.
Another project type I find compelling (and the most meaningful) is the kind that equips a group of people with capabilities they lacked before. In particular, projects that allow people with disabilities or paralysis to better explore their world, interact with it, and express themselves via a new form of physical computing. Some examples include:
- The Neater Eater – Allows people with disabilities to easily program and customize their own feeding mechanism. While it’s something most people take for granted, the ability to feed yourself (rather than rely on someone to hand-feed you) is incredibly empowering.
- Eyegaze Edge – Quadriplegics or others who can’t use a standard keyboard and mouse can control a computer simply by moving an eye (the device uses a high-speed infrared camera mounted under the system’s monitor and a small external processing unit to translate eye motion into on-screen action).
- Jouse2 – A computer is controlled with mouth movements and gentle puffs of air. The person manipulates the stick with her mouth to move the cursor and can click on an item by blowing into the straw.
However, there is one major issue I have with almost every physical computing project: it tends to be designed for a privileged segment of the world population (generally, the “richer” countries and populations). Physical computing and interactive art projects are not yet designed for use in the majority of the world. I think that’s a real shame. While there is still a digital divide in access to technology in these places, I wonder how we as technologists can start bridging this gap and coming up with simple devices and interactive physical projects that can be used by people all over the world. How can we bring delight, whimsy, and fun projects to people who are just now discovering computers? How can physical computation empower more people to live more meaningful lives in all corners of the world?
These aren’t easy questions, and it certainly will be a challenge — but one I feel is critically important to the field. This is especially true as physical projects become increasingly ubiquitous with the rise of 3D printers and other similar technologies.
Week 2 PCOMP labs were not only a lot of fun, but they allowed me to start my (hopefully long-term) relationship with my new friend Arduino.
Here’s what I did:
Digital Input and My Very Own Switch!
Using an Arduino and a switch, I programmed a basic digital input/output that controls two LEDs. When the switch is open, one light is on; when the switch is closed, the other comes on. The switch is the digital input, and the LEDs are the digital output.
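For reference, here’s a minimal sketch of that kind of setup. The pin numbers are my own choice for illustration (switch on digital pin 2 using the internal pull-up, LEDs on pins 3 and 4), not necessarily how my board was actually wired:

```cpp
// Minimal Arduino sketch: one digital input (a switch) driving two LEDs.
// Pin numbers are assumptions for illustration -- match them to your wiring.
const int switchPin = 2;  // switch between pin 2 and ground (internal pull-up)
const int ledOpen = 3;    // LED lit while the switch is open
const int ledClosed = 4;  // LED lit while the switch is closed

void setup() {
  pinMode(switchPin, INPUT_PULLUP);  // reads HIGH when open, LOW when closed
  pinMode(ledOpen, OUTPUT);
  pinMode(ledClosed, OUTPUT);
}

void loop() {
  if (digitalRead(switchPin) == HIGH) {  // switch open
    digitalWrite(ledOpen, HIGH);
    digitalWrite(ledClosed, LOW);
  } else {                               // switch closed
    digitalWrite(ledOpen, LOW);
    digitalWrite(ledClosed, HIGH);
  }
}
```

With INPUT_PULLUP you don’t need an external resistor for the switch; the Operation game’s metal edge and tweezers simply stand in for the two switch contacts.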
I then had some fun with it! This past weekend, I escaped the city and went on an excursion to upstate New York. On our trip, we stopped at a thrift store that had all sorts of weird/fun items, including some old-school games!
I bought the classic game “Operation,” and it proved to be perfect for my very own switch. When I touch the metal part of the game (when performing the “operation”), not only does the patient’s nose light up, but so do my LEDs!
Analog In with an Arduino and the Coffee Luv-o-Meter
In the next lab, I used Arduino to explore Analog input. As a somewhat silly way of playing with analog inputs, I attached a force-sensing resistor to a coffee cup. The harder you squeeze the cup, the brighter an LED shines. The brighter your light, the greater your love for coffee (or something like that …)
Apologies for ridiculous midnight commentary.
Thoughts and questions that came up:
- I wanted to make it a bit more difficult to get the light to shine bright when you squeezed it (i.e. I wanted the person to have to squeeze really hard to get it bright; as-is, you don’t have to squeeze too hard to get it from “no light” to “bright”). I fiddled with my Arduino code, but that didn’t do the trick. Maybe something physically on the breadboard would help?
- Also, umm, I really need to learn Arduino code!
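On the squeeze-sensitivity question above, one software-side trick (rather than changing the breadboard) is to replace the usual linear map() from sensor reading to brightness with a nonlinear curve, so the LED only gets bright under a really hard squeeze. A minimal sketch of the idea, where the cubic exponent and the 10-bit (0–1023) analog range are my assumptions to tune:

```cpp
#include <cmath>

// Map a 10-bit analog reading (0..1023) from the force-sensing resistor to an
// 8-bit PWM brightness (0..255) through a cubic curve: light and medium
// squeezes stay dim, and only near-maximum pressure approaches full
// brightness. Raise the exponent to demand an even harder squeeze.
int fsrToBrightness(int reading) {
  double x = reading / 1023.0;                        // normalize to 0..1
  return static_cast<int>(std::pow(x, 3.0) * 255.0);  // cubic response curve
}
```

In the Arduino loop this would be something like analogWrite(ledPin, fsrToBrightness(analogRead(A0))). The breadboard-side alternative is to change the fixed resistor in the FSR’s voltage divider, which shifts the range of raw readings you get in the first place.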
When I was young, I used to love going to big electronic stores, mainly because it meant I could demo the video game systems I didn’t have at home. But I always remembered being disappointed and somewhat confused by the whole set-up.
I wanted to see why this was (and if it was even still the case), so I set out to observe how in-store electronic displays are designed and how people interact with them.
tl;dr: In-Store Displays Suck!
I was shocked by just how badly these set-ups are designed (true story: they are exactly the same as when I frequented them 15+ years ago!). What’s inexplicable is that these set-ups are presumably installed with the intention of selling more product — the fact that they are so terrible to interact with is an enormous waste of an opportunity for these stores!
So what sucked? (note: I am mainly talking about the interaction with the display, not so much the electronics themselves)
In his hilarious and informative book “The Design of Everyday Things,” cognitive scientist and usability engineer Donald Norman points out that a well-designed product has obvious visual cues and a clear and natural map of what does what. You can see exactly what you need to see, and it’s fairly obvious what each component will do. A big part of getting it right is designing for the context of your user.
The problem with in-store displays is that they completely miss the context of how the product will actually be used by an in-store customer. Instead, it’s basically just the product sitting there, waiting for someone to do random shit with it.
Sounds like that could be OK, but here’s what I found:
I initially saw an open Wii across the room, and being the Nintendo fan that I am, I went over to see what it was all about. Here’s what I saw on the screen as I approached:
Now, for many people, this would be an immediate deal-breaker for exploring the wonders of the Wii. There was no information about how to reset the system or any other helpful info or visual cue for the first-timer. However, me being the tech wiz I am, I grabbed the Wii remote and re-connected the device. Fantastic, right? But here’s what I saw next:
I came in on a game that was already in progress, with absolutely no instructions or prompts of what I could do. I was left fiddling with the controller, pressing random buttons, and ultimately just getting confused and frustrated by the game I couldn’t get to work.
And this is my problem with these displays.
They are there to show off the product and allow people to play around, maybe even moving them one step closer toward making a purchase. But there’s absolutely no customization of the display for the context of a user in the store. Games don’t reset after each user (instead, you normally have to start where the last person left off). There are no clear “getting started” instructions. There are no real visual cues to let you know what you should do, so you’re left pressing random buttons (they seem to assume people will already know how the product works, which is ironic given the goal of engaging new users).
It’s a frustrating process, even for those of us who have played lots of games in our lives.
A better in-store display would take into account the context of the user: someone browsing the store who may be very new to that technology. It would thus have a customized process to orient the new user to the system in the best possible way. For example:
- A visual cue to let people know what they need to do first. Grab the controller? Press a button? Select a game or other program? This stuff isn’t always clear.
- A specific demo version of a game that gives newcomers some initial instructions (that can be skipped by the expert), and then resets after each use. Coming in midway through someone else’s game is confusing.
- A fun prompt after the end of the game to let the user know what they can do next.
My thoughts were backed up by some hilarious observations of other in-store users: whether the user was trying an Xbox, iPod, or PS3, the interaction with the display was rather uniform — they would initially look excited, grab the device or controller, press random buttons and try to figure out how the damn thing worked, get confused, and then put the thing down and leave after 2-3 minutes of not making much progress. If the goal was to have the user press random buttons to see how the device physically felt, then mission accomplished! If the goal was to get the user more engaged with the product, then it was a massive failure.
Again, the context matters.
A chair would be cool too.
Monday marked the 1-year anniversary of Occupy Wall Street. To document the event, my ITP classmates Valerie and Asli and I headed down to Zuccotti Park and joined the march, collecting sounds that we then mixed with some live banjo (shout out to Valerie’s banjo-playing friend!). All sounds are our own recordings.
Check out the final track here: The Bluegrass of Occupy
And here are some pics of us collecting the sounds in the march.
For my week 2 assignment in ICM, I decided to up the fun on the Kanye sketch, and make an interactive music video.
- Open the interactive sketch by clicking here.
- Find your favorite Kanye track. This one works fairly well.
- Start playing the track and go back to the interactive sketch.
- Make your own Kanye video by repeatedly clicking a, s, and d (to the beat!).
- Click the mouse (and try holding it down) for a special surprise courtesy of Jon Wasserman.
Working with my wonderful lab partners Caroline and Vanessa, we were able to set up our breadboards and get the LEDs to blink to our liking:
However, we certainly ran into some difficulties (and had some questions):
- Soldering. The instructions said to touch the soldering iron directly to the joint (not the solder wire), but the joint never actually got hot enough to melt the solder, so we had to touch the wire to the iron directly.
- At first we thought we did not set up our 9-12V DC power supply correctly. Turns out we just didn’t understand how to use the meter properly. It worked fine (although we forgot to attach a battery for a bit!)
- Specific ways of wiring to create a proper circuit. We understand that the wires have to be connected and run in specific ways for the circuit to work, but we’re still a bit unclear on the specifics.
This is the biggest artistic opportunity in history: a major new field is suddenly opening up, and you’re one of the lucky generations to be in the right place and at the right time to change the world. The doorway to each of the other Muses was slowly pried open by the combined efforts of many artists; but the doorway to interactivity was blown open overnight. Our interactive Bachs, Michelangelos, and Shakespeares are probably out there right now, flunking school. We are living in Florence during the Renaissance. Hey look! Wasn’t that da Vinci going into that restaurant?
– Chris Crawford, The Art of Interactive Design
The age of interactivity is upon us. We interact with our phones, our games, and our computers (and we probably do this much too often). We even interact with a whole new breed of devices like watches, glasses, and yes, even mud. We do this every single day. And more and more, as the digital divide lessens, people all over the world are doing this every single day as well.
Interaction design matters. And what’s exciting is we’re still defining the future of this relatively nascent artform.
But what is interactivity? What makes for a good physical interaction with technology? Where should the artform go?
I’ll put forth my own definition of interactivity as an engaged connection between two (or more) actors. For example, you can connect with another person pretty much whenever you want, but if there’s no active engagement between you and that lucky person, I’d argue there is no real interaction (just a passive connection, which we’re all too familiar with). The same holds true for interaction with technology — you can be passively connected to a device (for example, reading your Kindle), but unless there is an active two-way engagement between you and said technology, real interactivity does not exist.
Ok great – so if interactivity is an engaged connection between two (or more) actors, what makes for a good physical interaction with technology? For me, the secret sauce lies in its connection to human intuitiveness. We as humans are already blessed with complex and amazing ways of physically interacting with our world. And these methods have evolved over thousands and thousands of years, and are tried and tested! Our job as interaction designers is to pull from this rich and complex — but completely intuitive — human toolset to create interactions with technology that are natural and delightful.
Sounds simple enough, but we’re just at the tip of the iceberg (and that’s what’s exciting!). We need physical interactions that better pull from humans’ natural and intuitive abilities, like balance and the tactile sensations of weight and texture. Better yet, we need to get the entire body involved in the interaction. Our hands aren’t the only things we use to interact with our world outside of technology, so why has this relatively limited method become the go-to in our interactions with technology? We can do better. As Bret Victor points out in his must-read Rant on the Future of Interaction Design, we have to go way beyond the “pictures under glass” approach that currently defines the art of interaction (sure, clicking and dragging on your tablet is fun and useful, but the inherent human toolset makes so much more possible!).
So this is where I want to see the artform of interaction design go. Taking a broader and more nuanced perspective on how humans have interacted with their physical world for thousands of years, and designing technology interactions that pull from this natural, rich, and intuitive toolset. Not an easy task, but I think we’re up for the challenge.
The first man who, having enclosed a piece of ground, bethought himself of saying This is mine, and found people simple enough to believe him, was the real founder of civil society. From how many crimes, wars, and murders, from how many horrors and misfortunes might not any one have saved mankind, by pulling up the stakes, or filling up the ditch, and crying to his fellows: Beware of listening to this imposter; you are undone if you once forget that the fruits of the earth belong to us all, and the earth itself to nobody.
– Jean-Jacques Rousseau, Discourse on Inequality
While I won’t get into the broader (and admittedly complex) philosophical, economic, and political debates at the heart of this Rousseau quote, it is the passage that comes to mind when thinking about the ownership of ideas, concepts and art. Can someone “enclose” and own an abstract idea? Is it even intellectually coherent to be able to lay down stakes and patent a concept? A design process? A user flow? Where is the line between a legitimate right belonging to a private individual and common ownership belonging to us all? What separates impostor from bona fide right holder when it comes to ideas (especially when we can’t really know who first thought of said idea)?
Although I find that line between impostor and legitimate right holder very tricky to define, I certainly agree with the line of thinking that general concepts, ideas and art belong to us all in common. It turns out that the real source of human concepts and art is usually nebulous – or as Jonathan Lethem puts it: “Is an intellectual or creative offering truly novel, or have we just forgotten a worthy precursor?” Most often, the answer is the latter. And we’ve probably forgotten its worthy precursor as well. Originality rarely means first use.
Enclosing the cultural commons isn’t good for anyone. It stifles art, expression, and innovation (ask any start-up dealing with patent trolls). Even worse, it tragically takes energy away from real human artistic and technological progress, and instead wastes valuable human cognition on mind-numbing systems, like software patents that no start-up tech founder can really be bothered with, simply because you’d have a hell of a time actually figuring out whether or not you’re violating a patent anyway (you only seem to know when they come for you).
That being said, I also believe artists, technologists, and creators have an unwritten pact to use the commons responsibly. I’d argue that a painter using a photographer’s photo as inspiration for a new piece is a responsible use of the commons that is good for human progress, but a bootlegger simply copying (plagiarizing) and spreading that same photo with little artistic enhancement is simply stealing. By the same token, musicians “sampling” a prior piece of music and coming up with a fresh take is something I think we want to encourage in the world, but replication without enhancement is theft. As artists and technologists, we share in the commons, but we have a responsibility to build and contribute to it as well. I think that’s why so many of us are drawn to the open source philosophy. It serves us, and we pay it back with our own contributions.
On a closing note, I’m reminded of what one of my favorite hip hop groups went through a few years back, when they were sued for not clearing samples on one of their ground-breaking records — A Piece of Strange. This record was a masterpiece that made a big impact in my life and in many others’ as well. Kno, the group’s producer, has a great take on the conundrum an artist faces:
“[A Piece of Strange] would have cost $3+ million to clear all of the samples on it – but some people have told me it literally saved their lives. Should it not exist? Should I now be lectured on legal morality? The music I make infringes on copyrights. I am aware of this and have been aware of this, that is why I don’t stress getting rich off of it, I tell people to download it if they want to and I sink any money I do make back into making more records.”
Hip hop is probably one of the best examples of the cultural commons — much of the music is taken directly from prior pieces, but the end product is a work of art of its own. This is something we should want to see in the world.
This is my first ITP assignment — a Kanye “Genius” West sketch done entirely with Processing (a Java-based programming language). Make sure to move your cursor around the sketch window to experience the full-on genius that is Kanye. Click to get a Kanye pearl of wisdom.
Note: The formatting of the sketch is off when embedded here. Please find the fully working sketch here.