I began working on my tantra vision by replicating one of the paintings in Processing.
I then chose the “historical” elements, both from the chosen painting and from other paintings, that I wanted to include within the sketch and added those features. For mockup purposes, I used the keyPressed function to call each element method so I could get a rough idea of what the experience would be like. This all came together pretty easily. The challenge then was how to connect the Processing sketch to Arduino. How could I take this interaction beyond keyPressed? As I mentioned earlier, I wanted to capture the breath of the user and have that control the sketch. But how could I do that? I did not want to encumber or constrict the user with a bulky interface (like a helmet) or a chest strap. I wanted the physical interface to be as non-invasive as possible. So I set out researching several things: a CO2 sensor, an electret microphone, and a heart rate monitor.
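The keyPressed mockup itself is simple: each key toggles one historical element on or off. Here's a rough plain-Java model of that dispatch (the element names and key bindings are hypothetical stand-ins, not the actual methods in my sketch):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Rough Java model of the Processing keyPressed() dispatch:
// each key toggles one "historical" element of the painting.
public class ElementToggles {
    private final Set<String> visible = new LinkedHashSet<>();

    // Called once per key press, like Processing's keyPressed().
    public void keyPressed(char key) {
        switch (key) {
            case 't': toggle("tear");       break;
            case 's': toggle("stain");      break;
            case 'p': toggle("pinMarks");   break;
            case 'f': toggle("fadedEdges"); break;
            default:  /* ignore other keys */ break;
        }
    }

    private void toggle(String element) {
        // remove() returns false if it wasn't there, so add it instead
        if (!visible.remove(element)) visible.add(element);
    }

    public boolean isVisible(String element) {
        return visible.contains(element);
    }
}
```

In the real sketch, draw() would then render whichever elements are toggled on.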
Inevitably, all of the parts that I ordered from Sparkfun and Parallax took an obscene amount of time to arrive. Given the circumstances, I quickly ruled out the heart rate monitor and was able to borrow an electret microphone (from Scott, thank you) and a CO2 sensor (from Tom, thank you) to start experimenting. I had to figure out what type of values I could get out of these sensors and whether they would be useful for me in this project. When I started mapping the values, I found that the range was very small. I realized that it would take a lot more work (a visit to Eric Rosenthal to learn about inverting the sound wave) and might not be the best way forward.
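The mapping itself was just Processing's map() function, rescaling the sensor's narrow window onto the full range the sketch uses. Here's that function reimplemented in plain Java; the 500–540 window in the test below is illustrative, not my measured values:

```java
// Processing-style map(): rescale a reading from one range to another.
// When a sensor only moves a few counts, the narrow input window has to
// be stretched across the full output range -- which also stretches the noise.
public class SensorMap {
    public static float map(float value, float inLow, float inHigh,
                            float outLow, float outHigh) {
        return outLow + (value - inLow) * (outHigh - outLow) / (inHigh - inLow);
    }
}
```

The catch is exactly what I ran into: stretching a 40-count window across 0–255 amplifies every bit of jitter along with the signal.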
So I circled back to the breathing idea – if I couldn’t capture breath, maybe I could animate the sketch in a way to appear as if it was breathing. I could use variable resistors, a set of FSRs, to translate the pressure exerted by the user onto the sketch. But I still had a problem: where would I put the FSRs?
Since tantra paintings are used for meditation, I was perplexed by the idea of introducing some sort of physical element. I thought something small, intimate, might be appropriate. Perhaps a cup or a bell? Maybe even a singing bowl? It wouldn’t be the most visually appealing, though, to have a cup with FSRs mounted on it and wires dangling down. That’s when I thought about an instrument, a didgeridoo, to be specific.
I set about fabricating my five-foot-long didgeridoo (out of PVC pipe and colorful tape) and mounted the FSRs like keys on the outside of the pipe. When I finally hooked it up to the Arduino and ran the sketch, I discovered an immediate problem. Scale. A five-foot didgeridoo and a sketch running on a computer screen are not the most compatible. And if I was attempting to create some sort of intimate, personal experience, having a physical distance so great between the user and the visualization just wouldn’t work. Back to the drawing board, I go.
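For the hookup itself, the usual pattern is for the Arduino to print one line of comma-separated analogRead() values per frame, which the Processing sketch parses off the serial port. A rough Java model of my parsing side, assuming a hypothetical line format like "312,87,450,10" (one value per FSR):

```java
// Model of the Processing side of the serial link: parse one line of
// comma-separated FSR readings (format is my assumption) into values
// the sketch can use to drive the "breathing" animation.
public class FsrReader {
    public static int[] parseLine(String line) {
        String[] parts = line.trim().split(",");
        int[] values = new int[parts.length];
        for (int i = 0; i < parts.length; i++) {
            values[i] = Integer.parseInt(parts[i].trim());
        }
        return values;
    }

    // Total pressure across all FSRs -- handy for collapsing the keys
    // into one global parameter for the sketch.
    public static int totalPressure(int[] values) {
        int sum = 0;
        for (int v : values) sum += v;
        return sum;
    }
}
```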
Several residents suggested that I consider using the Kinect, which now has the ability to measure the rise and fall of a user’s chest. So that’s my next step. Start playing around with the Kinect, see what I can get out of it, and run with it. Additionally, I would like to make more of these paintings so that there are many options for the user to explore.
My final project is inspired by the Tantric paintings collected by a French poet, Franck André Jamme, in Rajasthan, India. These paintings serve as an aid to meditative practice and use simple, conventional symbols to stimulate specific mental and spiritual experiences. Most of the drawings in his collection have only one or two shapes, with no ornamentation, and are monochromatic. What I find so compelling beyond the purpose of the paintings (as meditative aids) is the history of each. These paintings are handmade, copied, and passed down through generations. There are imperfections in the paper, tape and pin marks, water stains, ink transfers, and tears. There is beauty in that layering. The life of the painting grows and takes on a new meaning / multiple meanings in these incidental marks / what remains on the page.
In Processing, I would like to recreate one of the images in its original state – just the shape or shapes with none of the historicity. Then, slowly, through user interaction, illustrate (my interpretation of) the evolution of the painting. Starting with perhaps the fading around the edges, then maybe a tear, some tape marks, errant text. The raw nature of the “original” painting would be exposed as the narrative develops (its authenticity erased).
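One simple way to pace that evolution is a counter crossing thresholds: as interaction accumulates, another historical layer becomes visible. A sketch of that logic, with placeholder layer names and a made-up threshold:

```java
// Sketch of the reveal logic: as an interaction counter (breaths, seconds,
// key presses -- whatever ends up driving it) climbs, more of the painting's
// history appears. Layer names and the per-layer threshold are placeholders.
public class HistoryReveal {
    private static final String[] LAYERS =
        { "fadedEdges", "tear", "tapeMarks", "errantText" };
    private static final int PER_LAYER = 10; // interactions per layer

    // How many historical layers should be drawn at this point.
    public static int layersVisible(int interactions) {
        return Math.min(LAYERS.length, interactions / PER_LAYER);
    }

    public static String layerName(int index) {
        return LAYERS[index];
    }
}
```

In draw(), the sketch would render the base shape and then the first layersVisible() layers on top of it.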
The problem is what type of user interaction? I had originally thought about creating a helmet the user would wear to view and explore the image. But that seemed clunky and more of an impediment to meditation. When I posed this question in my ICM class, I received some valuable feedback. One of the central principles of meditation is breathing so why not utilize that for the user interaction?
In my wildest dreams, I would love to create a screen that is sensitive to breath and responds to each exhalation. I’m not sure how possible that is but I know there are other ways to measure breath. I could use a microphone to pick up the sound of breathing or a chest strap with a flex sensor that would pick up the contraction of each breath.
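Either sensor would produce a noisy stream rather than clean "breath" events, so some smoothing and a threshold crossing would be needed. A sketch of one approach, with a smoothing factor and threshold that are pure guesses and would need tuning against the real sensor:

```java
// One way to turn a noisy mic or flex-sensor stream into discrete
// "exhalation" events: exponentially smooth the signal, then fire once
// each time it crosses a threshold on the way up.
public class BreathDetector {
    private float smoothed = 0;
    private boolean above = false;
    private final float alpha;      // smoothing factor, 0..1 (1 = no smoothing)
    private final float threshold;  // level that counts as an exhale

    public BreathDetector(float alpha, float threshold) {
        this.alpha = alpha;
        this.threshold = threshold;
    }

    // Feed one raw sample; returns true at the start of each exhalation.
    public boolean sample(float raw) {
        smoothed = alpha * raw + (1 - alpha) * smoothed;
        boolean wasAbove = above;
        above = smoothed > threshold;
        return above && !wasAbove; // fire only on the rising edge
    }
}
```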
Alternatively, I could:
- Measure heart rate and as it slows reveal more of the history of the image
- Use facial recognition to sense a user and then start sequence after they’ve inspected the image for a few seconds
- Use eye tracking to reveal different aspects of the history as the user explores the image
Worlds collide! Processing + Arduino = MAGIC.
Before I had the good fortune to move to Brooklyn, I commuted daily from Bronxville to New York on Metro North. I don’t miss jockeying for a seat when finally (finally!) the car doors open onto the platform or elbowing for space when attempting to read the newspaper. I entertained myself mostly by doing the crossword but also observing the other passengers and their habits during each interval of the commute. These intervals (pre-train, train, and post-train) have their own set of rules & behaviors (like the most efficient way to pitch your paper when exiting the train or how to successfully traverse Grand Central Station in order to reach the subway or an exit). As a commuter, it’s easy to pick up the predominant behaviors and follow suit. It’s not easy though when you’re unfamiliar with the system.
The most difficult part of using the Metro North system, and the most frustrating to observe, is ticket purchase. The vending machines for tickets look exactly like the Metrocard machines. However, if you’ve used a Metrocard machine you would be none the wiser when trying to buy a Metro North ticket. The series of screens and prompts is radically different. Each user must answer at least four questions before they can purchase: one way or round trip? to or from Grand Central? other station combinations? which station? peak or off-peak? This transaction is old hat to me; I can make my purchase in less than 30 seconds. (Maybe better… I should time myself next time.) Observing others try to make a ticket purchase is, as I already mentioned, very frustrating. I had to try very hard to restrain myself from stepping in to speed up the process.
Some of my observations:
1. Even though there are several language options available, foreign tourists (that I either identified by the language I heard spoken or stereotyped by apparel choice) still had a difficult time as they went screen by screen, especially when they had to find the station that they were going to. Locals, too, had a hard time finding their destination station.
2. Another hiccup in the process is answering the question, peak or off-peak? In the bottom left corner there’s a definition of peak and off-peak hours, but it’s pretty verbose. Considering that most people purchasing tickets plan to board a train immediately, why can’t the system answer that question itself? Or ask the question in another way, like: do you plan to travel now (within a certain timeframe that would also be defined by the system) or later (also predefined by the system)? Maybe list the next three departures for their destination and let the user choose? That might add too much information and slow down the process more.
3. There’s a lot of finger wagging. I saw a number of people using their pointer fingers to visually guide them through the text. Some fingers turned circles or figure eights as the user tried to decipher the screen and answer each prompt. The fact that the choices on each screen weren’t in the same location each time also disoriented the user. They expected to answer the next prompt in the same location on the screen and when they couldn’t, they had to mentally step back and assess the entire screen.
4. When there was a long line of users, it felt like each user made more mistakes. I don’t know if they felt pressure to move through the process quickly or if I’m reading into it too much — maybe it was just more likely that there was a line of users very unfamiliar with the system.
5. Many users, not using cash, exhibited privacy concerns. There’s no shield or way to prevent the person behind you from seeing you punch in your debit card PIN.
6. When the user is successful in completing a purchase, they then have to wait for change and/or a receipt and the ticket(s) to print out. These actions don’t happen simultaneously, so most users have to stick their hand in the dispensary more than once. Some even check the dispensary one more time, just to be sure they received everything.
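The "why can't the system answer peak or off-peak itself" idea from point 2 could be as simple as classifying the current departure time. A toy version; the hour windows here are invented for illustration, since the real tariff rules are more involved:

```java
import java.time.DayOfWeek;
import java.time.LocalDateTime;

// Toy version of letting the machine decide peak vs. off-peak from the
// departure time. The rush-hour windows below are made up for illustration.
public class FareClock {
    public static boolean isPeak(LocalDateTime departure) {
        DayOfWeek day = departure.getDayOfWeek();
        // Weekends: off-peak in this toy model.
        if (day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY) return false;
        int hour = departure.getHour();
        boolean morningRush = hour >= 6 && hour < 10;
        boolean eveningRush = hour >= 16 && hour < 20;
        return morningRush || eveningRush;
    }
}
```

The machine already knows the time; the user shouldn't have to parse a verbose definition to answer a question the system could answer for them.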
I’m sure that they did extensive user research to build the most efficient and intuitive interface but maybe using the same format of the Metrocard machine was not the best idea. In terms of visibility, yes, if a user sees that machine they will quickly identify that as a place to purchase tickets. But I’ve seen people try to buy Metro North tickets from Metrocard machines and walk away very confused. I definitely wouldn’t recommend the MTA get rid of the information booths OR the ticket agents just yet.
Last Thursday, I went to the Internet of Things Meetup on Empowering Communities, hosted by Ed Borden (of Pachube and LogMeIn). Two projects that use electronic sensors to increase environmental awareness really stood out to me: Leif Percifield’s dontflush.me and Joe Saavedra’s Citizen Sensor.
dontflush.me aims to educate NYC residents about water pollution in the harbor. According to Percifield’s research, about 27 billion gallons of raw sewage is dumped into the harbor annually due to an overloaded sewer system. By making residents aware of the overflow problem, Percifield hopes that as a community we’ll be able to reduce wastewater production before and during an overflow. He envisions a network of sensors at individual sewage sites that would collect data and alert users on his website or by SMS. He’s also prototyping a home visualization device similar to the Ambient Devices weather beacon.
Part of Saavedra’s thesis project at Parsons, Citizen Sensor is a wearable and customizable system of sensors that allows users to monitor any number of environmental conditions they encounter on a daily basis. Collecting data on the amount of carbon monoxide, light or noise pollution that surrounds us can help us to better understand our environment and how we interact with it. Are there steps we can take to improve our lives and the lives of others in our community?
How do we re-establish physical awareness? Make people more cognizant of the world they’re living in? The proliferation of personal devices (mobile phones, tablets, etc) only serves as a distraction from seeing what’s literally right in front of us. We need to plug back into the environment, educate ourselves on the current state of affairs, and take action. By reducing the amount of natural resources we use or opting for public transportation over a car – we’ll make a difference.
I felt very inspired afterwards, just reeling from the possibilities that exist out there when you combine big ideas with a bit of electronics. But there was something else bothering me. How much information is too much? Where do we draw the line so that we don’t drown in data overload? Providing the data is important, but equally important is how we interpret the data. If I know that pollen levels are high today I might take an allergy pill before I leave the house. I probably wouldn’t coop myself up in the apartment and build a bubble suit with an air filtration system and oxygen supply unit to make it safer to go *gasp* outside. This brings up an important consideration for the designer or developer, once you provide the data – what do you want the user to do with it? Perhaps create a user guide or establish an online forum to help the user identify common scenarios and responses? How do you temper the sensibilities of the user? Or is that out of the control of the designer?
Last night, I took a stroll around my neighborhood in search of sensors. Traffic lamps, walk/do not walk signs, and ATM machines – nothing terribly intriguing.
Although, I did want to knock on the door of my neighbor with the Mary statue. It would be great if he installed a motion detector on the light for the statue so that every passerby could have his/her own personal experience with Mary.
The most interesting things that I observed were not the sensors themselves but the signs that alerted the public that sensors existed. You’re being watched.
Two sensors I observed but couldn’t capture (and quite possibly the most exciting):
1. My laundromat has a set of bells attached to its entrance door, jingling whenever the door is opened or closed.
2. On Gordy’s desk is an electronic pencil sharpener. When the pencil’s tip is sufficiently sharp, a light flashes on the front face of the sharpener to alert the user that it’s ready.
I’ve been an avid cyclist since the first day I clipped into my Shimano bike clips and promptly fell off. And fell off again – sometimes in traffic. After I got the hang of it, I started to use my bike as my main means of transportation. I cross the Williamsburg Bridge on a near daily basis, and seeing my fellow cyclists inspired my fantasy device. Some of these cyclists, you see, aren’t wearing helmets. That drives me crazy. Irrationally crazy.
I’ve been thinking for a while about the best way to convey to these cyclists that helmets may cramp your style but will save your life. I thought about mounting a display screen on my helmet that flashes the message “Wear A Helmet” but realized that may distract them and cause an accident. Then I thought about making a t-shirt or oversized pin to wear, but I felt that didn’t carry enough weight. I needed something more impactful! What if I shoot them? Literally.
Not with a bullet, but with a message embedded on a chip. The chip would be propelled towards the target from an apparatus (similar to a cop’s speed gun) mounted on my bicycle. The chip would be equipped with an accelerometer that would deliver an audio message only when the cyclist came to a complete stop.
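The "only when they come to a complete stop" rule could work by watching the accelerometer magnitude: when it stays near 1 g (just gravity) for a run of consecutive samples, the rider is still. A sketch, with a window size and tolerance I've invented:

```java
// Sketch of the chip's delivery rule: the accelerometer magnitude must
// hover near 1 g (gravity alone, i.e. no motion) for enough consecutive
// samples before the audio message plays. Numbers are invented.
public class StopDetector {
    private int stillSamples = 0;
    private static final float ONE_G = 1.0f;
    private static final float TOLERANCE = 0.05f; // how close to 1 g counts as still
    private static final int REQUIRED = 50;       // consecutive still samples

    // Feed one accelerometer magnitude (in g); true once the rider
    // has been still long enough to count as stopped.
    public boolean sample(float magnitudeG) {
        if (Math.abs(magnitudeG - ONE_G) < TOLERANCE) {
            stillSamples++;
        } else {
            stillSamples = 0; // any jolt resets the count
        }
        return stillSamples >= REQUIRED;
    }
}
```

Requiring a run of still samples, rather than a single quiet reading, keeps a momentary coast from triggering the howler mid-ride.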
The message would be something akin to the “howler” in Harry Potter, warning the target to “Wear A Helmet!” For more information on the howler, see the Harry Potter Wiki (yes, this exists).