You can clearly see the emotional tone of a programming language by the emoticons it contains. Objective-C: pensive and anxious. LISP: Romantic and possibly horny. Common LISP: cocky & kind of a dick.
Code samples taken from 99 Bottles of Beer.
This weekend we went to the New York Hall of Science (NYSCI) to see how they presented science in a museum context.
The first thing you see as you approach the museum are the rockets from the US space program positioned on the grounds. These were originally displayed at the 1964 World’s Fair, which took place on the same land the museum now sits on. The building itself is a hybrid between a funky Modernist cement structure (think a vertically extruded amoeba with black spots) and a more contemporary wing made of glass and steel. It’s a good home for a museum that bridges the gap between “old school” science (light, waves, gravity, etc.) and contemporary topics (network effects, the maker movement, computers, etc.). When you enter, the halls fork in a number of different directions without any signage. The museum asks to be explored, rather than “completed.”
The collection is obviously very sciencey, and also hands-on and playful. I recognized a lot of the ideas from our “Active Prolonged Engagement” reading. For example, many of the exhibits were designed to explore a specific phenomenon for an extended period. This was encouraged by adding seats to the exhibits, and also adding multiple “stations” to a single exhibit, so more than one person can use it at a time. There were a few corners of the museum that were specifically designed as a play area for kids to run around and stack things and interact. It was basically a big playground and the space was filled with a pretty positive energy. The museum staff was also very engaging. There were live demonstrations (in a cooking-show-like format) and one staffer started playing with us on the “anti-gravity” mirror.
The website is clean and straightforward. The front page has hours and contact information displayed in the top left corner, and upcoming events occupy the main column. This makes it extremely easy to find the basic info. Like many museums, it suffers from some anti-navigational language (e.g. “learn”, “explore”), but this seems to be pretty standard so I’ll stop mentioning it.
I’ve been interested in the idea of emergence for a while and I think it could really spark the imagination of kids. There are amazing examples of it in mathematics, nature, the organic growth of cities and also the structure of the internet. Unexpected and wonderful properties often emerge from large systems of individual actors, and they can arise from seemingly simple rule sets. In a museum context I’d present it as a particle simulation. The audience would be able to tweak the rules that each individual particle behaves by. Once those rules had been set, the simulation would be set in motion by introducing more and more particles to see how they interact in larger and larger groups. It’s amazing how tiny tweaks at the particle level can lead to dramatically different behaviors on a macro level. It’s an interesting metaphor for our role as humans.
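As a rough illustration of how a tweakable per-particle rule plays out at the macro level, here is a minimal sketch (the function names and the single “cohesion” rule are my own invention, not any exhibit's actual software) where each particle simply moves a small fraction of the way toward the group’s center of mass:

```python
import random

def step(positions, cohesion=0.05, jitter=0.0, rng=None):
    """Advance every particle one tick: each moves a small fraction
    of the way toward the group's center of mass."""
    center = sum(positions) / len(positions)
    rng = rng or random.Random(0)
    return [p + cohesion * (center - p) + rng.uniform(-jitter, jitter)
            for p in positions]

def spread(positions):
    """Macro-level measurement: how dispersed is the group?"""
    return max(positions) - min(positions)

# Seed a line of 20 particles, then let the single rule play out.
particles = [float(i) for i in range(20)]
for _ in range(100):
    particles = step(particles, cohesion=0.05)

# Even this one rule makes the group contract into a tight cluster;
# change `cohesion` (or add a repulsion term) and the macro-level
# behavior changes dramatically.
```

A visitor-facing version would of course be a live 2D simulation with knobs for each rule; the point of the sketch is only that a single particle-level parameter produces qualitatively different group behavior.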
I created a simple audio sequencer with an Arduino today. The sequencer plays a loop of 10 notes that can be programmed by the user.
1) Pressing the button switches from “Play” mode to “Recording” mode.
2) When you’re in Recording mode, you select a pitch for the illuminated LED by tuning the knob.
3) Once you’ve chosen a pitch, pressing the button will save it and advance to the next LED.
4) After you’ve set the pitch of all of the LEDs, you return to Play mode and the audio loop begins again.
5) When you’re in Play mode, the knob adjusts the pitch and playback speed of the loop.
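The steps above amount to a small state machine. Here’s a Python model of that logic, not the actual Arduino sketch; all the names (`PLAY`, `RECORD`, `press_button`, `turn_knob`) are illustrative:

```python
class Sequencer:
    """Toy model of the 10-note sequencer's two modes."""

    def __init__(self, steps=10):
        self.pitches = [0] * steps   # one pitch per LED
        self.mode = "PLAY"
        self.cursor = 0              # which LED is lit in Record mode

    def press_button(self):
        if self.mode == "PLAY":
            self.mode = "RECORD"     # step 1: switch to Recording mode
            self.cursor = 0
        else:
            self.cursor += 1         # step 3: save pitch, advance LED
            if self.cursor == len(self.pitches):
                self.mode = "PLAY"   # step 4: all pitches set, loop resumes
                self.cursor = 0

    def turn_knob(self, value):
        if self.mode == "RECORD":
            self.pitches[self.cursor] = value  # step 2: tune the lit LED
        # in Play mode the knob would instead adjust the loop's
        # pitch and playback speed (step 5), omitted in this model

# Program all ten notes, then confirm we're back in Play mode.
seq = Sequencer()
seq.press_button()
for note in [60, 62, 64, 65, 67, 69, 71, 72, 74, 76]:
    seq.turn_knob(note)
    seq.press_button()
assert seq.mode == "PLAY"
```

On the actual hardware the same structure maps onto a button-debounce check, `analogRead()` for the knob, and `tone()` for output, but the mode/cursor bookkeeping is the heart of it.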
I really enjoyed reading Tom Igoe’s Physical Computing’s Greatest Hits (and misses) because he’s got a pretty exhaustive list of interesting (if overused) physical computing categories. I’ve seen examples of almost all of these, but I’m still a little surprised they’ve become tropes. Maybe I didn’t realize that there are enough people experimenting with this level of physical computing to make “Fields of Grass” an entire category.
The ones I respond most to are those that enable novel interactions, rather than just re-imagining existing ones. For example, even though it may not be very “useful”, I enjoy video mirrors and mechanical pixels because you’re engaging in a new kind of experience. You have to explore the system a little to find the boundaries and parameters.
Contrast that with “Body/Hand-as-cursor” or “Touch Screens” which take a well known behavior and give the user a new vector through which to use it. These categories are usually conceptualized far before they’re actualized, so they tend not to feel very exciting when you use them in person.
I don’t know where “LED fetishism” falls along this spectrum, but it’s hilarious and true. That being said, “LED Throwies” was genius.
Here are a few other examples of common Physical Computing idioms that I’ve seen:
This generally takes the form of a screen mounted on a treadmill that displays a simulated running course. The course responds to the pace and difficulty level of the runner. Running on a simulated suburban sidewalk is now possible!
An oldie but a goodie, the virtual reality rig is a helmet with a screen on the inside of the visor that fills the user’s field of vision. The imagery displayed on the screen responds to the orientation of the user’s body, and creates the illusion of being in a virtual 3D environment.
This approach augments the image of a user with computer graphics, or inserts them into a wholly artificial scene. It’s often proposed as a way to “try on” clothes in a shopping context to see how they would look without having to actually try them on. It’s becoming more feasible with robust body tracking hardware like the Kinect.
Here’s an interesting snapshot of the work I’m doing for my next Cabinets of Wonder assignment. It reminds me of Boccioni’s Unique Forms of Continuity in Space. I’m also reminded that, in the last Waving at The Machines class, James mentioned that gait analysis is more accurate at identifying individuals than facial recognition.
When machines look into the world around them, they’re generally looking for one very specific thing. It might be a face, or a bar code, or a microscopic material imperfection. Everything else—the other 99.9% of what they see—is irrelevant. It’s null data.
How Machines See, created with Cinder.
The American Museum of Natural History is about as iconic a museum as you’re going to come across. It’s a giant classical structure with triumphant arches and marble stairways that are instantly recognizable. The main entrance is adorned by an unforgettable sculpture of Teddy Roosevelt riding a horse alongside two “savages.” The building itself is at once grand, imposing and timeless. It typifies the idea that a museum is an institution with eternal wisdom entombed within.
The AMNH has a vast collection of animals and artifacts that are generally presented in life-like tableaus. These may be the most memorable exhibits of the museum. While they seem a little out-dated at this point, they still have a beauty to them like a still-life oil painting might, rich with color and composition. The museum also contains a sizable collection of human artifacts such as cooking vessels, clothing and instruments. But beyond the more traditional exhibits, it’s clear that there’s been an effort to revitalize the space with modern components such as touch screens, videos, etc. This is particularly true in the space and Earth exhibition spaces which are rife with interactive exhibits that aim to actively engage the user. See my previous post, Observing Interaction Technology, for more on that.
The website is pretty comprehensive, but it’s also seriously ugly; when I saw the masthead for the first time I double checked that I was at the right URL. The front page does a pretty good job at drawing you into the current exhibitions, with a carousel of large images. Like many museums it leans a little heavily on anti-navigational lingo such as “explore,” but that’s mitigated by making links to almost every section of the website accessible from the nav-bar.
After hearing the demographic data on average museum goers in the US, I was surprised by the diversity of the crowd at AMNH. There were clearly groups from all over the world, mostly comprised of younger couples or families with kids. I listened in on the chatter occurring in front of the exhibitions, but I didn’t come to any over-arching conclusions. Generally the adults read the plaques and the children would just ask the adults about what they were looking at. Often the chatter wasn’t about facts or learning per se, but maybe a made-up story about the animal or just a funny comment. Groups might linger in front of an exhibition for 10–30 seconds, and then move on. They “got” it.
I really enjoyed the “Spectrum of Life” exhibit because of the diversity it captures and its visually pleasing arrangement. I think if I were going to re-imagine an exhibit, I’d start here and merge it with the family tree of dinosaur species that’s positioned in the back of the hall of dinosaurs. I’d attempt to show the evolution of species by positioning them in a 3-dimensional formation throughout a large hall. Each species would be positioned next to a similar species that had evolved different characteristics for its environment. The watery species would descend, and the flying species would rise vertically into the space. The primitive species would be at the beginning of the hall and, as you walk forward, they would get more complex and specialized.
This weekend I visited the American Museum of Natural History to observe people using interactive technology in the wild. I chose the museum because it has an interesting collection of hands-on exhibits that use technology the visitors have never seen before, so I was able to watch them interfacing with it for the first time.
For this post, I’ll focus on the ice core exhibit. The primary focus of the exhibit was a ≈16 foot long ice core from Greenland which was positioned horizontally about 4 feet off the ground. Around the core were a number of signs which included photos, data and information. Above the core was a video interview of one of the scientists. And positioned in front of the core was a touch screen that slides along a track for the length of the core. This screen is what I was primarily interested in.
I chose to observe this exhibit because it seemed like a pretty innovative way of exploring this unique object. At first glance the screen looked like something that might allow you to peer deeper into the core, either literally or figuratively, like a hybrid between an electron microscope and a VH1 pop-up video.
It didn’t end up being quite that magical. The idle screen simply says “Touch the Screen to Begin.” It doesn’t say to begin what, and there’s nothing other than its form-factor that gives you further hints. But the handles on either side are a pretty good indication about how to use it, even if you don’t know what it is. Once you touch the screen you’re given a menu of 4 different data sets, such as climate and volcanic eruptions over the past 10,000 years. After making a selection, a graph appears that slides along the x-axis as you slide the screen, implying that the data is aligned with the segment of ice below the screen. Using the device was very intuitive—the physical affordance of the sliding track made it obvious that the core itself was a timeline.
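Assuming the core’s 16 feet map linearly onto the roughly 10,000-year span the data sets cover (my assumption only; ice actually compresses with depth, so the exhibit’s real calibration is likely nonlinear), the screen-position-to-age mapping could be as simple as:

```python
def screen_position_to_years_ago(x_inches, core_length_in=16 * 12,
                                 span_years=10_000):
    """Hypothetical linear mapping from the sliding screen's position
    along the track to the age of the ice directly beneath it.
    Both the linearity and the constants are my guesses, not the
    exhibit's actual calibration."""
    return span_years * x_inches / core_length_in
```

Under that assumption, halfway down the track (96 inches) the screen would sit over ice about 5,000 years old, which is exactly the timeline affordance the sliding track communicates.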
The greatest let-down of this interface was that the screen actually obstructed the ice that the data purportedly related to. It felt less like it was enhancing the core and more like it was getting in the way. I would have loved to see a magnified video of the ice, possibly even down to the level where pollen and particles of volcanic ash were visible. Instead you get a PowerPoint, only a small slice of which is visible at any given point, keeping you from taking in the whole arc at once.
After I finished using it I started observing other people from a dark corner of the room. It was a little creepy. The first thing that I noticed was that people were actually pretty excited about seeing this ice core. There’s something inherently cool about capturing time in a physical object. Lots of “Oooohs” and pointing and pressing faces in to get a closer look.
I’d estimate that 1 in 3 visitors who looked at the exhibit decided to engage with the sliding screen. It was clearly taking a back seat to the ice core itself. The extent of the engagement was also surprising; most people would slide it down the track for a few inches, get the idea, and then move on. A few people navigated into the menu a few clicks, but it was a small minority.
A rather large design oversight is that the ice core, and thus the track, were positioned high enough to be above the heads of some of the kids who walked by. They didn’t even notice it.
The general impression that I left with is that the physical object was novel, intuitive and well constructed. However, the content was dull and the screen actually detracted from the primary focus of the exhibit—the core that it was obstructing.
As the brains and sensors in the objects around us become more sophisticated, there are great opportunities to design the interaction that mediates our relationship with them. After reading the initial chapters of Chris Crawford’s The Art of Interactive Design, I agree that the most meaningful interactions require a full feedback-loop between the object and the human that includes Listening, Thinking and Speaking from both parties (if not always literally).
An excellent display of this is the relationship between a car and its driver. The car is constantly “Listening” to your body movements, “Thinking” by converting those movements into turning, accelerating, etc., and “Speaking” through meters, sounds and force feedback. This “loop” is so tight that both actors are constantly doing all three things.
The strength of this interaction is that the fidelity of the loop is high and constant. It’s not limited by resolution or a frame-rate or an inability to interpret the data. Those aren’t always assumptions that can be made with human-computer interaction, especially when it comes to technologies like camera vision (I’m looking at you Kinect). The more “understanding” that each actor has over the intent, abilities and expectations of the other actor, the greater the quality of interaction becomes.
Interaction comes in degrees and a high-quality loop isn’t always necessary. In its simplest form, an interaction can happen between a human and a light switch—albeit not a very interesting one. The light switch “Listens” by virtue of being a switch, “Thinks” by interpreting its position into the appropriate state, and “Speaks” through the light bulb. Playing any 3D game on a laptop is not much more than this same interaction happening millions or billions of times per second.
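That degenerate loop is small enough to write down in full. A few-line sketch (the function and its names are mine, purely for illustration):

```python
def light_switch(switch_is_up):
    """One pass through the Listen/Think/Speak loop for the
    simplest possible interaction.
    Listen: the switch's physical position is the only input.
    Think:  a single mapping from position to lamp state.
    Speak:  the bulb is the entire output channel."""
    bulb_on = switch_is_up
    return "light" if bulb_on else "dark"
```

The whole “Thinking” stage is one assignment, which is exactly why the interaction is real but uninteresting: the loop is complete, just with almost no depth to it.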
Remove this reactionary relationship, and the interaction ceases to exist. There are examples of digital technology all around us that wouldn’t be considered interactive. My favorite might be the GPS system. We can sample from it like listening to music from a cello, but our actions have no impact on the behavior of the satellites. Far from making it a “bad” technology, this lack of interaction is one of the beauties of its design. Similarly, automated machinery, clocks, and CCTV are sophisticated digital technologies, but they are not interactive.