For my final in Intro to Computational Media, I have been working on a project called the Darkness Map, which is an attempt to map the light and dark levels of the city at night on a human scale. I am collaborating on this project with Scott Wayne Indiana and Rune Madsen, since the idea came out of discussions we’ve had in a seminar we’re taking called the History of Sound and Light.
the earth lit up at night
Most of you are probably familiar with this image, which is a view of the earth at night from space. We were inspired by the idea that by showing the amount of light there is, you are also showing the amount of darkness, and we wanted to try and map where the darkest places are in a city as well lit as New York.
In order to collect this data, we finally settled on using video. We had thought about using light meters with GPS tracking, but video also serves as a nice reference for the places themselves. As we take this project further, we would like to build an Android or iPhone app that would ideally tag the frames of our videos with their GPS locations. This would help immensely in the data visualization process, since our current way of “tagging” the video is telling the camera what street we’re starting and stopping on, and shooting one block at a time. This requires someone to listen to the video and tag it with its location by hand.
One caveat of shooting video on a smartphone is that you don’t have the ability to control the aperture or shut off the auto iris. Being able to disable auto irising is crucial to our data collection, because we need a fixed exposure as a control threshold in order to compare light levels. As the video cameras on smartphones get better and better, this should be possible soon. Another option is to link our video to a GPS logger manually, by logging GPS data while we record the video and then matching the two by timecode later on. This, however, might require additional human processing that we’d like to avoid. Ideally, we want to make an app for people to use so that we can try and crowdsource our data collection.
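If we did end up going the manual GPS-logger route, the matching step could be pretty simple. Here is a rough sketch in Processing of the idea, assuming the logger’s output has been saved as plain CSV lines of “seconds,lat,lon” — the file name and format are placeholders, not an actual logger format:

```processing
// Sketch of the timecode-matching idea: read a GPS log saved as CSV lines of
// "seconds,lat,lon" and look up the logged point closest in time to a video
// frame. The log format and file name are assumptions.
float[] logTimes;
float[] lats;
float[] lons;

void setup() {
  String[] rows = loadStrings("gps-log.csv");
  logTimes = new float[rows.length];
  lats = new float[rows.length];
  lons = new float[rows.length];
  for (int i = 0; i < rows.length; i++) {
    String[] cols = split(rows[i], ',');
    logTimes[i] = float(cols[0]);
    lats[i] = float(cols[1]);
    lons[i] = float(cols[2]);
  }
  int idx = nearestIndex(92.5);  // e.g. the frame 92.5 seconds into the video
  println("frame at 92.5s is near " + lats[idx] + ", " + lons[idx]);
}

// Return the index of the log entry whose timestamp is closest to t.
int nearestIndex(float t) {
  int best = 0;
  for (int i = 1; i < logTimes.length; i++) {
    if (abs(logTimes[i] - t) < abs(logTimes[best] - t)) best = i;
  }
  return best;
}
```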
Scott and I covered the area below in about 2 hours, which was the amount of time we could shoot video before our camera batteries died. Even in this small area, you can start to really see the variance in ambient light block by block. We also decided to shoot both sides of each street, since there is a lot of variation even just across the street.
Darkness Map - East Village test
One of the nice surprises of this project was how the raw data, our video source footage, is a wonderful video work on its own. I think it could be great to add a “street view” element to the map, where you might click on a block and see its source footage played. Here is a sample of one block, with each side of the street side by side, formatted to look as if you’re walking down the block in the same direction.
So after a little debate, we decided to use video footage instead of a light meter to record the lightness and darkness of NYC at night. We did a short test outside of ITP, where we walked down a block on Waverly Place (between Greene and Mercer) and took video footage of the street. Rune made up a quick sketch to analyze the brightness of the frames, and here is the quick and dirty result.
He made two visualizations. The first is more of a histogram: it draws a line for each frame in sequence, with the stroke set to that frame’s average brightness. The other, I believe, draws a circle for each frame whose radius maps to its average brightness, with the center shifting one pixel to the right on each draw. I think we’re going to move forward with the histogram-based visualization, since it seems to map really well to the grid-based layout of Manhattan.
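For anyone curious, a stripped-down version of the histogram idea might look something like the sketch below. This is my own rough reconstruction, not Rune’s actual code; the frame file names and count are placeholders, and it assumes the video has already been exported as a folder of still frames.

```processing
// Step through a folder of exported video frames, compute each frame's
// average brightness, and draw one vertical stripe per frame shaded by
// that average. File names ("frame-0001.jpg" etc.) are assumptions.
int numFrames = 600;

void setup() {
  size(600, 200);
  background(0);
  noLoop();
}

void draw() {
  for (int i = 0; i < numFrames; i++) {
    PImage frame = loadImage("frame-" + nf(i, 4) + ".jpg");
    float avg = averageBrightness(frame);
    stroke(avg);
    line(i, 0, i, height);  // one stripe per frame, shaded by its brightness
  }
}

// Average the brightness of every pixel in an image (0-255).
float averageBrightness(PImage img) {
  img.loadPixels();
  float sum = 0;
  for (int i = 0; i < img.pixels.length; i++) {
    sum += brightness(img.pixels[i]);
  }
  return sum / img.pixels.length;
}
```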
For my final project in ICM I would like to try and make a darkness map, which would attempt to visualize dark areas in New York City. The concept for this project came out of my History of Sound and Light class, and the experiments one of my classmates, Scott Wayne Indiana, is doing with outdoor projection. He mentioned how difficult it was to actually find a dark surface in the city to project onto, and it got me thinking that that might be a really interesting thing to try and visualize. We are also planning on working with Rune Madsen.
We wanted to make a micro-scale darkness map. Many people are probably familiar with the amazing satellite views of the earth at night, where you can see the lit areas of the world, mostly clustered in the first world. It’s an incredible depiction of lightness and darkness on a macro scale, but we wanted to see if we could build a system for mapping and visualizing darkness on a more micro scale.
the earth lit up at night
The first step in this process is to go out and collect the data. We plan on using the GPS in our smart phones to keep track of our locations, and light meters to take readings of the light bouncing off various walls and surfaces.
A few things to consider in collecting this data:
- How far up do we measure? Specifically, since most of New York City is so well lit by street lights, do we decide to measure darkness above the area that the light hits the buildings?
- Phase of the moon. It’s obviously brighter when there is a full moon, as there is now. From now until early December the moon will be waning. There will be a new moon on December 5th, and by December 16th it will be about 3/4 full again.
- Do we take measurements on each side of the street? Do we restrict ourselves to man-made, flat surfaces, or do we include parks, bushes and trees?
A few things to consider in visualizing this data:
- What are the darkness and lightness thresholds? Namely, what value of darkness is dark enough for our purposes, and what values do we ignore? How much of the gray spectrum in between do we include? Do we have to include light areas in order for the dark areas to be visible?
- Do we visualize the darkness values for a whole street, or do we include values for each sidewalk?
- Is there any interactive functionality, like a rollover which might give you coordinates and an exact light meter reading?
- We would like to create a platform that allows the map to be filled in over time through some sort of crowdsourcing process. We’re probably going to use the Google Maps API in some respect so that people can fill in gaps in the future.
- We think it is interesting to try and map New York City’s dark areas since they are actually pretty hard to find. We aim to try and cover as much of Lower Manhattan as we possibly can in the next few weeks. Ideally, this map would grow to include information about the 5 boroughs, and possibly go to other well-lit cities where darkness at night might be hard to come by.
A few sources of inspiration:
- Max Neuhaus made a sound map of Times Square in preparation for his sound installation there beneath the subway grates. It is in the form of an aural topographic map, which could be a nice way to do the darkness map.
- Donald Appleyard’s Livable Streets research and map, in which he compared three streets in San Francisco that were practically identical except for the amount of traffic on each. Both his research method and his findings are of interest.
Map of quality of life in relation to traffic, by Donald Appleyard
As a side project, I’m helping design some lighting for a window installation by Emily Ryan, another ITP student, who won a contest sponsored by H&M to have her art installed in their 5th Avenue store for three months. She’s designed an interactive installation that visualizes tweets, texts and Flickr photos and displays them across an outline of New York City. After seeing my Hot or Not project, she asked if I wanted to help design some lighting for the skyline.
Emily's winning design!
One of the main things she wanted was to light the Empire State Building in the skyline so that its lights matched the lighting of the real Empire State Building. I’m working with ITP alum Matt Richard, creator of Estrella, to make this happen. He told me about a site and RSS feed that already lists what colors the Empire State Building’s lights will be each day: www.whatcoloristheempirestatebuilding.com/. It should be easy to have either PHP or Processing parse the XML for the color data and then feed it to an Arduino to program the lights.
Coincidentally, this week in ICM we had to do a project that imported data into a Processing sketch. I decided to take this opportunity to make a mockup of the lights in Processing, parsing the XML and displaying the color data as a simple on-screen visualization. It looks fairly static and boring since the XML only updates once a day, but here is the sketch.
The code is still clunky (remind me to finally try and understand classes), but it works. Next steps are making it cleaner, as well as trying to do the same thing in PHP, which people have told me is a “better” way to interface with Arduino for this application.
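For reference, here is roughly what that kind of Processing mockup could look like. This is a hedged sketch, not the code I actually used: it loads a locally saved copy of the feed, the file name and element names are guesses rather than the feed’s actual structure, and the color lookup is deliberately naive.

```processing
// Rough mockup of the Empire State Building color display. Assumes the feed
// has been saved locally as "esb-feed.xml" and is an RSS document whose
// <item><title> mentions the color name — the real feed may be structured
// differently.
color buildingColor;

void setup() {
  size(400, 400);
  XML feed = loadXML("esb-feed.xml");  // hypothetical local copy of the feed
  String title = feed.getChild("channel").getChild("item").getChild("title").getContent();
  buildingColor = colorFromTitle(title);
}

void draw() {
  background(0);
  noStroke();
  fill(buildingColor);
  rect(150, 100, 100, 250);  // a stand-in "building" lit with today's color
}

// Very naive lookup: scan the title for a handful of known color names.
color colorFromTitle(String title) {
  String t = title.toLowerCase();
  if (t.contains("red"))   return color(255, 0, 0);
  if (t.contains("green")) return color(0, 255, 0);
  if (t.contains("blue"))  return color(0, 0, 255);
  if (t.contains("white")) return color(255);
  return color(128);       // unknown color: fall back to gray
}
```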
I decided to choose one pattern to work with for this week’s midterm assignment: the Voronoi pattern. The underlying structure of this pattern is quite appealing to me. You take an array of random points, and a polygon is drawn around each point so that every edge is equidistant from the two points it separates. I found a library made by Lee Byron which has functions for creating the Voronoi pattern from a 2D array of random points. I also relied heavily on a code demo of this library made by Marius Watz.
I wanted to play with the positive and negative space of the Voronoi regions. I made a Processing sketch which slowly increases the strokeWeight of the regions’ edges. You can view the sketch here. I plan on adding a mousePressed() function that would capture a PDF of the screen each time I click the mouse, or I might have a PDF generated every second. I would then bring these PDFs into Illustrator and prepare them to be lasercut. I would like the final product to be a book or series of prints showing the change in the pattern over time.
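A rough sketch of that setup is below. The Mesh library calls (Voronoi, getRegions(), getCoords()) are from memory of Marius Watz’s demo, so they are worth double-checking against the library docs, and the point count and growth rate are arbitrary.

```processing
// Draw a Voronoi pattern whose edges slowly thicken, and capture a PDF of
// the current frame whenever the mouse is pressed.
import megamu.mesh.*;     // Lee Byron's Mesh library
import processing.pdf.*;  // Processing's PDF export library

Voronoi voronoi;
float weight = 1;
boolean savePDF = false;

void setup() {
  size(600, 600);
  float[][] points = new float[50][2];
  for (int i = 0; i < points.length; i++) {
    points[i][0] = random(width);
    points[i][1] = random(height);
  }
  voronoi = new Voronoi(points);
}

void draw() {
  if (savePDF) beginRecord(PDF, "voronoi-" + frameCount + ".pdf");
  background(255);
  stroke(0);
  strokeWeight(weight);
  noFill();
  MPolygon[] regions = voronoi.getRegions();
  for (int i = 0; i < regions.length; i++) {
    float[][] coords = regions[i].getCoords();
    beginShape();
    for (int j = 0; j < coords.length; j++) {
      vertex(coords[j][0], coords[j][1]);  // trace each Voronoi region
    }
    endShape(CLOSE);
  }
  weight += 0.05;  // slowly thicken the edges over time
  if (savePDF) {
    endRecord();
    savePDF = false;
  }
}

void mousePressed() {
  savePDF = true;  // capture a PDF of the next frame on click
}
```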
Since I couldn’t get the PDF export working properly, here are some screen captures I grabbed as the sketch ran for a few minutes. I cropped as best I could but they don’t line up exactly.
As you can see in the screenshots, as the strokeWeight grows larger and larger, sharp corners start to invade other polygon regions. I would like to refine my code a lot more so that the regions are aware of one another, and if one region “invades” another, it would be smoothed over instead of creating the jagged triangular shapes.
Also, my code is pretty “quick and dirty.” In order to make the edges look like white shapes with black outlines, I’m actually drawing their stroke twice: once in black, and again in white with a stroke 2 pixels thinner. I would like to be able to use the getEdges() function in the Mesh library and treat the edges themselves as objects that I can affect.
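For what it’s worth, the double-draw trick looks roughly like the helpers below. The function names are mine, meant to be dropped into the sketch above, and it assumes the weight is already greater than 2 pixels.

```processing
// The "quick and dirty" double-stroke trick: draw each region once with a
// heavy black stroke, then again with a white stroke two pixels thinner, so
// the edges read as white shapes outlined in black.
void drawRegion(float[][] coords, float weight) {
  noFill();
  stroke(0);
  strokeWeight(weight);
  drawPolygon(coords);
  stroke(255);
  strokeWeight(weight - 2);  // same shape, slightly thinner, drawn on top
  drawPolygon(coords);
}

void drawPolygon(float[][] coords) {
  beginShape();
  for (int i = 0; i < coords.length; i++) {
    vertex(coords[i][0], coords[i][1]);
  }
  endShape(CLOSE);
}
```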
For my Midterm project in ICM, I decided to do some experiments with generative patterns. Basically, I want to compare iterative processes with various parameters, and then get various materials (most likely different weights of paper) lasercut with the patterns to find the limits of what patterns are visible on what material. I am also interested in the threshold where the pattern destroys the material. I’m envisioning the final product as a series of prints or possibly a book format, where you can see the progression of destruction and/or intricacy.
However, from doing research into patterns I’ve come across a few interesting sources describing ways computer scientists model complex behavior or scenarios with patterns. One name that keeps coming up again and again is Christopher Alexander, an architect and design theorist who popularized the term “pattern language,” an idea that was later adopted by software engineers.
I am taken with his idea that patterns can be used to generate solutions to problems. I would like to explore this idea further in the future (possibly for my ICM final project), but in the meantime decided to focus on one pattern, and generate a sketch to visualize how the pattern can change based on changing parameters.
Rule 30 cellular automaton (I'm not exactly sure what that means yet)
The Barbarian Group’s Biomimetic Butterflies are a really nice example of what inspired me to bring some of these generative patterns into the physical world. These lampshades made by Nervous System are wonderful too, especially the organic quality they were able to achieve.
For this week’s ICM homework, I decided to work on an image sequence I’d made before in Processing, but apply a few more effects to it. I had intended to do more color averaging, but I couldn’t figure out how to work with more than two images at a time, so instead I went with the simpler tint() function.
Image still from Take Off take 2
Instead of randomizing the frameRate, I mapped it to the mouseX position, so you can adjust the timing of the animation. Next, I’d like to try some image averaging so that the frames really meld into one another.
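Here is a bare-bones version of what the sketch is doing, with placeholder file names, frame count, and tint values rather than my actual ones:

```processing
// Cycle through an image sequence, tint each frame, and let mouseX set the
// playback speed. File names and frame count are placeholders.
int numFrames = 30;
PImage[] frames = new PImage[numFrames];
int current = 0;

void setup() {
  size(640, 480);
  for (int i = 0; i < numFrames; i++) {
    frames[i] = loadImage("takeoff-" + nf(i, 2) + ".jpg");
  }
}

void draw() {
  background(0);
  frameRate(map(mouseX, 0, width, 1, 30));  // mouseX controls playback speed
  tint(255, 153, 204);                      // apply a fixed color tint to each frame
  image(frames[current], 0, 0);
  current = (current + 1) % numFrames;      // loop back to the first frame
}
```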
So this week we learned about arrays. Arrays are cool. You can store things in them, like numbers, or letters, or even objects (and images). Object arrays also make it much easier to make a lot of the same object. I instantly thought of creating an object array for my rainy day sketch, so that instead of my dorky rectangles over lines, I could just make one line object, then repeat it over and over.
However, I got sick of my rainy day sketch, so I decided to go for something more abstract. It still has the same line repeated, but it’s pretty minimal, and was mostly a sketch for me to try to control the behavior of the lines with the mouse. In my head, I had a much more nuanced idea of a line field, where the mouse would control the rotation of the lines closest to it, with less and less effect the further away they were. I’m sure there is a fairly straightforward approach, but I had enough trouble with my object array, let alone giving it fancier functionality.
So, without further ado, here is my array of lines (warning, it’s quite boring as a static image, and probably still pretty boring as a dynamic one):
Line field, the x position of the mouse controls their rotation
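For the curious, a minimal version of the line-field idea looks something like this — the grid spacing, line length, and class name are placeholders, not my exact code:

```processing
// A bare-bones line field: one FieldLine class, an array of them laid out on
// a grid, and mouseX mapped to a shared rotation angle.
FieldLine[] lines = new FieldLine[100];

void setup() {
  size(500, 500);
  int i = 0;
  for (int x = 25; x < width; x += 50) {
    for (int y = 25; y < height; y += 50) {
      lines[i++] = new FieldLine(x, y, 30);
    }
  }
}

void draw() {
  background(255);
  float angle = map(mouseX, 0, width, 0, PI);  // mouseX sets every line's rotation
  for (int i = 0; i < lines.length; i++) {
    lines[i].display(angle);
  }
}

class FieldLine {
  float x, y, len;

  FieldLine(float x, float y, float len) {
    this.x = x;
    this.y = y;
    this.len = len;
  }

  void display(float angle) {
    pushMatrix();
    translate(x, y);
    rotate(angle);
    line(-len / 2, 0, len / 2, 0);
    popMatrix();
  }
}
```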
So, this is where things “got real” in ICM. I think I’ve been saying that a lot recently, but it basically boils down to this: for about the first month of ITP we knew that we would be learning a lot of important things, but drawing shapes and making a light turn on and off didn’t really seem like the most exciting thing ever. But lo and behold, now we pretty much know the basics of Object-Oriented Programming, the building blocks for EVERYTHING that comes next. This is by no means saying that I’ve mastered it (not even close), but I am starting to feel comfortable with the idea that we are making a self-contained object that has qualities or does something. And with that one object, we can really easily make many duplicates of it. Arrays, which are coming right up, come in handy for that.
I returned to my rainy day umbrella sketch to see if I could make classes for both the rain and my umbrella. The way I am drawing the rain is pretty dorky (in a code sense). I’m drawing lines of slope -1, and the appearance of rain comes from rectangles the same color as the background spanning the width of the screen on top of the lines. These rectangles fall from the top to the bottom of the screen, giving the appearance of “rain drops,” albeit in a very designy, non-randomized way.
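A simplified version of that rain trick is below; the colors, spacing, and speed are placeholders rather than my actual values:

```processing
// "Dorky" rain: diagonal lines drawn across the screen, with background-colored
// rectangles sliding down over them so the remaining gaps read as falling drops.
float offset = 0;

void setup() {
  size(400, 400);
}

void draw() {
  background(200);
  // diagonal "rain" lines
  stroke(0, 0, 150);
  for (int x = -height; x < width; x += 20) {
    line(x, 0, x + height, height);
  }
  // background-colored rectangles that fall over the lines,
  // leaving visible segments that look like drops
  noStroke();
  fill(200);
  for (float y = offset - 60; y < height; y += 60) {
    rect(0, y, width, 40);
  }
  offset = (offset + 2) % 60;  // move the masking bars downward each frame
}
```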
I also made my umbrella into an object, and even tried to create two instances of it. As you can see, I ran into a bit of trouble when I scaled it down, so that the top of the umbrella goes up too far. Maybe I will get around to tweaking this, we’ll see.
Rainy Day Umbrellas as Objects!
Now, make no mistake: the objects aren’t actually doing anything smart like talking to one another. The umbrella only looks like it stops the rain because I drew a parallelogram underneath it in the same color as the background. Moral of the story: if you don’t actually know how to do things right, you can usually find a hack to get the job done!
So this week in ICM we learned about logic. Ifs, elses, if elses, and a few other handy ways to let the computer make decisions. Don’t worry, we’re far from AI territory, more like making a ball stop if it reaches the edge of the screen. I worked with Meghan to create a bouncy ball program, where we tried to make it seem like the ball was responding to gravity, as well as bouncing when it reached the bottom of the screen. Working with Meghan was helpful since she has more programming experience, and was very patient as I tried to understand the math behind acceleration.
It was great trying to work with some if/else statements, and I like the overall effect of what we made. However, after playing around with it a bit more, I realize that using the mouse to set the ball’s start position but the left/right arrow keys to launch it may not be the best control system. I wish we knew how to distinguish left and right mouse clicks (there’s probably an easy way to look that up, but I haven’t yet). In any case, it was a good lesson in trying to simulate gravity, acceleration, and a dropping ball.
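A minimal version of the gravity-and-bounce logic we worked through might look like this — the damping value and ball size are placeholders, not our exact numbers:

```processing
// Gravity and bounce: velocity accumulates acceleration each frame, and an
// if statement flips the velocity (with some damping) when the ball hits
// the bottom of the screen.
float y = 50;
float velocity = 0;
float gravity = 0.4;

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  velocity += gravity;     // acceleration: gravity added to velocity each frame
  y += velocity;           // velocity moves the ball
  if (y > height - 20) {   // the ball reached the bottom edge...
    y = height - 20;
    velocity *= -0.9;      // ...so bounce: reverse direction and lose a little energy
  }
  fill(0);
  ellipse(width / 2, y, 40, 40);
}
```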