WhatMappr: a project by Meghan Hoke and Becky Kazansky
WhatMappr is a mapping platform built from a 3-D visualization in Processing and three sensors: a GPS module, a potentiometer, and a switch. Together they create multi-layered, idiosyncratic maps that compare “perceived location” to “GPS location”. As you walk around, you turn the potentiometer knob to the right or left to mark your movements. The GPS pings once per second, and since the GPS shield sits on an Arduino and the analog sensors are read through the shield, their values are also output once per second.
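Since the shield simply streams its readings over the Arduino’s serial line, a small Processing sketch on the laptop can catch each once-per-second line and save it to the text file mentioned below. A minimal sketch of that logging step (the port index, baud rate, and “route.txt” filename are placeholder assumptions, not necessarily our exact setup):

import processing.serial.*;

Serial port;
PrintWriter log;

void setup() {
  // Pick the right serial port for your machine; Serial.list() shows them.
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');           // fire serialEvent once per full line
  log = createWriter("route.txt");  // hypothetical log-file name
}

void serialEvent(Serial p) {
  String raw = p.readString();
  if (raw == null) return;
  String line = trim(raw);
  if (line.length() > 0) {
    log.println(line);   // e.g. "lat,lon,pot,light", once per second
    log.flush();         // write through so a crash doesn't lose the walk
  }
}

void draw() { }          // nothing to render while logging

void keyPressed() {      // press any key to end the session
  log.flush();
  log.close();
  exit();
}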
A second analog sensor running through the GPS shield is a photocell, which “records” the amount of ambient light as you proceed along your route. After all the values are logged to a text file, you can view a 3-D visualization of the GPS and Perceived routes. The ambient light values are plotted on the map as a line representing the average of the GPS Route and the Perceived Route lines. High ambient light values produce a feeling of “depth”, and the colors range from dark blue (dark) to yellow (light).
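To make that mapping concrete, here is roughly how the light line’s position and color could be computed in Processing; the 0–1023 range assumes a raw analog reading, and the depth scale is a guess:

// Color for an ambient-light reading: dark blue when dark, yellow when bright.
color lightToColor(int lightVal) {
  float t = map(lightVal, 0, 1023, 0, 1);   // raw analogRead range assumed
  return lerpColor(color(10, 10, 80), color(255, 230, 40), t);
}

// Each point on the light line is the average of the matching GPS and
// Perceived points, pushed "up" in z by the light value to suggest depth.
PVector lightPoint(PVector gps, PVector perceived, int lightVal) {
  PVector mid = PVector.add(gps, perceived);
  mid.mult(0.5);
  mid.z = map(lightVal, 0, 1023, 0, 100);   // depth scale is illustrative
  return mid;
}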
One of the more interesting lessons from our experiments was the discovery that GPS is often hilariously wrong. Comparing the GPS line to the Perceived line showed that the readings from the manually controlled potentiometer were often more accurate.
Our original plan was actually to record spikes in RF or Wi-Fi in the surrounding area (and even that concept was the result of several earlier revisions; we started out wanting to create an anti-tech device that would show you where you could go to be free of certain types of radiation, with output in audio). Because of subsequent problems finding adequate sensors (oh boy, RF is…fun), we ended up with just a photocell, designed to read the values coming from the LEDs on the RF sensor.
What do you know, the concept ended up being interesting enough as a stand-alone, but I think both Meghan and I are interested in developing it further into a sort of citizen-mapping or educational platform: a device you could snap various modules onto (for, say, air quality) to gain a different sense of your environment.
One of my concerns as we put the last iteration into development was whether this device should ultimately provide real-time data or whether it held equal value as a repository. It seems fairly obvious that we’re trending towards increasingly networked objects and real-time data visualizations. Instantaneous output of information into accessible visualizations has obvious “magic” appeal, along with enabling the user to modify their behavior within a real-time feedback loop. What, then, going forward, is the value of slow data?
The reasons we went with “slow data” were at least partially logistical:
-We assumed that using Bluetooth or XBee wouldn’t be feasible once the user was out of good range of the receiver (and that range would be small in an area as laden with interference as NOHO/NYU).
-The “analog” aspect: logging your route via “left”, “right”, “stop”, “go” with a potentiometer did not seem easily translatable to a “flat” phone interface (see the sketch after this list for one guess at how those knob states might become a path).
-We thought the process of acquiring slow data from a dedicated device could be gratifying in a way that real-time information is not, since real-time information is easily subsumed within the noise of the data stream.
-Creating a slower behavioral feedback loop, in which a user goes out with a dedicated device and then comes back and feeds the information into a visualization program, makes the activity more ritualistic and possibly more significant and conscious.
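For the curious, here is one guess at how those four knob states could decode into the Perceived route in Processing; the zone boundaries and step size are illustrative, not our calibrated values:

float heading = 0;                       // current facing, in radians
PVector pos = new PVector(0, 0);
ArrayList<PVector> perceived = new ArrayList<PVector>();

// Called once per logged sample; potVal is the raw 0-1023 knob reading.
void addSample(int potVal) {
  if (potVal < 256) {
    heading -= QUARTER_PI;               // knob turned hard left: "left"
  } else if (potVal > 768) {
    heading += QUARTER_PI;               // knob turned hard right: "right"
  } else if (potVal < 512) {
    return;                              // lower middle: "stop", no step
  }
  // upper middle (or right after a turn): "go" one unit along the heading
  pos.x += sin(heading);
  pos.y += cos(heading);
  perceived.add(pos.copy());
}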
I think that in order to give this concept a continuing vitality, a jump into cross-platform networking of data would be necessary.
One thing that astounds me is how many paths you can take to arrive at a similar destination. After presenting our project in Pcomp, I incidentally became aware of, by my count, three other projects in development this semester with similar concerns and outputs. One of these projects focused on recording ambient light as an end in itself, but used video to arrive at mappable values instead of our photocell. It’s extremely captivating visually, being both a video project and a sensor experiment. Another project senses air quality via a stationary sculpture, an idea that fits within the vision of an urban grid full of embedded sensors. The third project I discovered seems to have the widest scope, with the idea of creating an interchangeable sensor-module platform; it seems most in line with what I envisioned as the future of our project. Each of these projects is fascinating, and each seems to have sprouted from a widely disparate starting point.
Our ideas seem to coalesce around:
-DIY graphing of environmental factors/visualizing the invisible
-personal quantification + route mapping on the urban grid
-responsive behavioral feedback loops via real-time locative data
It is a bit jarring to suddenly become aware of your place in the “marketplace of ideas” (har har), but it is ultimately invaluable. Of course we are making similar projects! We are all in the same place at the same time, and pretty much on the same wavelength. What it forces me to do is consider the standout elements of Meghan’s and my project.
I feel that we didn’t really draw out any one specific aspect. A few candidates:
-a 3D mapping visualization that could be an app in and of itself. This would require research to figure out what exists out there and how we can contribute something unique.
Right now there are many ready-made GPS visualization services, but most of them seem a bit flat and standardized in similar ways.
-The comparison of “perceived” vs. “GPS”. This seems to be more of an academic experiment, and maybe a bit of a one-off, unless we take the idea of “analog” mapping further.
-The dedicated device. Right now the WhatMappr is very, very box-like (well, actually, I was asked if it was a bomb the other day). It does not currently communicate with Arduino or Android. Is there value in keeping it autonomous? Eh…