For my final in Intro to Computational Media, I have been working on a project called the Darkness Map, which is an attempt to map the light and dark levels of the city at night on a human scale. I am collaborating on this project with Scott Wayne Indiana and Rune Madsen, since the idea came out of discussions we’ve had in a seminar we’re taking called the History of Sound and Light.
Most of you are probably familiar with this image, a view of the earth at night from space. We were inspired by the idea that in showing how much light there is, you are also showing how much darkness, and we wanted to try to map where the darkest places are in a city as well lit as New York.
In order to collect this data, we finally settled on using video. We had considered using light meters with GPS tracking, but video also serves as a nice visual record of the places we've mapped. As we take this project further, we would like to build an app for Android or iPhone that would ideally tag the frames of our videos with their GPS locations. This would help immensely in the data visualization process, since our way of "tagging" the video right now is telling the camera what street we're starting and stopping on and shooting one block at a time. This means someone has to listen to the video and tag it with its location by hand.
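Just to make concrete what that app would need to record, here's one possible shape for a GPS-tagged frame, sketched in Python. This is purely hypothetical; the field names are ours, not from any existing tool or format.

```python
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    """One video frame paired with the location where it was shot.

    A hypothetical record for the app we'd like to build; the field
    names are illustrative, not from an existing tool or format.
    """
    video_file: str    # source clip the frame belongs to
    frame_index: int   # frame number within the clip
    timestamp: float   # seconds since the start of the clip
    latitude: float    # GPS fix at the moment the frame was shot
    longitude: float
```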
One caveat of shooting video on a smartphone is that you can't control the aperture or disable the auto iris. Being able to turn off automatic iris adjustment is crucial to our data collection, because we need a fixed exposure baseline in order to compare light levels between clips. As the video cameras on smartphones get better and better, this should be possible soon. Another option is to link our video to a GPS logger manually: log GPS data while we record the video, then sync the two by matching timecodes afterward. This, however, might require additional human processing that we'd like to avoid. Ideally, we want to make an app that anyone can use, so that we can crowdsource our data collection.
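Here's a rough sketch of what I mean by matching timecodes, assuming a GPS log of (timestamp, lat, lon) fixes sorted by time and a clip whose start time is known on the same clock. This isn't our actual code, just an illustration of the idea.

```python
import bisect

def tag_frames(gps_log, video_start, frame_count, fps=30.0):
    """Pair each frame of a clip with the GPS fix nearest in time.

    gps_log: list of (timestamp, lat, lon) tuples sorted by timestamp,
             with timestamps in seconds since the epoch.
    video_start: the clip's start time on the same clock, so frame n
             was shot at video_start + n / fps.
    Returns a list of (frame_index, lat, lon) tuples.
    """
    times = [fix[0] for fix in gps_log]
    tags = []
    for n in range(frame_count):
        t = video_start + n / fps
        i = bisect.bisect_left(times, t)
        # Clamp to the ends of the log, then take the closer neighbor.
        if i == 0:
            fix = gps_log[0]
        elif i == len(gps_log):
            fix = gps_log[-1]
        else:
            before, after = gps_log[i - 1], gps_log[i]
            fix = before if t - before[0] <= after[0] - t else after
        tags.append((n, fix[1], fix[2]))
    return tags
```

Snapping each frame to the nearest fix in time, rather than interpolating between fixes, seems good enough at walking speed.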
Scott and I covered the area below in about two hours, which was as long as we could shoot before our camera batteries died. Even in this small area, you can really start to see the variance in ambient light from block to block. We also decided to shoot both sides of each street, since there is a lot of variation even just across the street.
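To put a rough number on that variance, one simple approach is to average the luminance of sampled frames from each block's clip. This is a sketch using OpenCV, not our actual pipeline, and as noted above the comparison between clips is only meaningful if the camera's exposure is locked.

```python
import cv2

def mean_brightness(video_path, sample_every=10):
    """Estimate a clip's average light level.

    Converts every Nth frame to grayscale and averages pixel
    intensity (0 = black, 255 = white). Only comparable across
    clips if the camera's exposure was locked.
    """
    cap = cv2.VideoCapture(video_path)
    total, frames, index = 0.0, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            total += gray.mean()
            frames += 1
        index += 1
    cap.release()
    return total / frames if frames else 0.0
```

Running something like this over each clip would give one brightness score per block side, which is the kind of value we'd want to color the map with.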
One of the nice surprises of this project was how the raw data, our source footage, works as a video piece on its own. I think it could be great to add a "street view" element to the map, where you could click on a block and watch its source footage. Here is a sample of one block, with the two sides of the street shown side by side, formatted to look as if you're walking down the block in the same direction.