Since I started drinking coffee (and later wine), I’ve been interested in the language we use to describe flavors, and more specifically, how we remember them. It takes a long time to build up a palate memory, especially when you’re dealing with differences as nuanced as coffee’s. Despite being a barista (back in the day) and roasting my own beans for the past few years, I still struggle to recall anything more than the most general characteristics of coffees from growing regions around the world.
Is it possible to use data to construct a memorable, visual “signature” of a coffee-growing region? In other words, can I create a tool that generates images for specific countries, acting as mnemonic devices from which I can trace back the region’s coffee characteristics?
Sweet Maria’s, the company I buy my green beans from, has an extensive library of cupping scores from coffee growers around the world. I’ll be using these data to generate composites for the world’s most popular coffee-growing countries. Here’s an example of the “spider graphs” that seem to be standard in the industry:
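Since I’ll need to draw charts like this eventually, here’s a rough sense of how one can be rendered in Processing. The category names and scores below are invented placeholders, not actual Sweet Maria’s cupping data:

```java
// A minimal radar/spider chart sketch. Labels and scores are made up.
String[] labels = {"Fragrance", "Aroma", "Brightness", "Flavor", "Body", "Finish", "Sweetness", "Complexity"};
float[] scores = {8.2, 8.5, 8.8, 8.6, 8.0, 7.9, 8.4, 9.0}; // out of 10

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  translate(width/2, height/2);
  float radius = 140;
  int n = scores.length;
  // concentric guide rings
  noFill();
  stroke(220);
  for (int r = 1; r <= 5; r++) {
    ellipse(0, 0, radius * 2 * r / 5.0, radius * 2 * r / 5.0);
  }
  // the score polygon
  stroke(50);
  fill(100, 150, 200, 120);
  beginShape();
  for (int i = 0; i < n; i++) {
    float angle = TWO_PI * i / n - HALF_PI; // start at 12 o'clock
    float r = map(scores[i], 0, 10, 0, radius);
    vertex(cos(angle) * r, sin(angle) * r);
  }
  endShape(CLOSE);
  // axis labels just outside the rings
  fill(0);
  textAlign(CENTER, CENTER);
  for (int i = 0; i < n; i++) {
    float angle = TWO_PI * i / n - HALF_PI;
    text(labels[i], cos(angle) * (radius + 28), sin(angle) * (radius + 28));
  }
}
```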
I plan on creating an iPad app that lets you explore the various signatures and allows you to generate your own combination by manipulating the data.
This week in Data Representation we looked at the OpenPaths data that we’ve been collecting since the beginning of the semester. The video below was rendered from a Processing sketch, with the map generated by Unfolding Maps, a library that lets you use custom map tiles designed in TileMill or CloudMade.
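The Unfolding part is only a few lines. This is roughly the canonical setup from the library’s examples, with a placeholder location standing in for my OpenPaths points:

```java
import de.fhpotsdam.unfolding.*;
import de.fhpotsdam.unfolding.geo.*;
import de.fhpotsdam.unfolding.utils.*;

UnfoldingMap map;

void setup() {
  size(800, 600, P2D);
  map = new UnfoldingMap(this);
  MapUtils.createDefaultEventDispatcher(this, map); // default pan/zoom interaction
}

void draw() {
  map.draw();
  // project a lat/lon Location into screen space to draw data on top of the tiles
  ScreenPosition pos = map.getScreenPosition(new Location(40.7306f, -73.9866f));
  noStroke();
  fill(255, 0, 0);
  ellipse(pos.x, pos.y, 10, 10);
}
```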
The mobile photos that appear above the timeline were taken during the same period. I wanted to incorporate an additional dataset to provide a little context. There’s a brief stretch, just after I shoot up to LaGuardia Airport, when I was in Minnesota for Trivia Weekend.
For my midterm project I explored a few different angles: face tracking, Delaunay Triangulation, and hair simulation. The resulting mashup is a sketch I’m calling “The Beard Booth,” which can be seen below.
Source for all of these sketches can be found in my GitHub repo.
Here’s a basic example of Delaunay Triangulation. In this sketch I’m randomly adding 2D nodes to the triangle mesh, and the colors are sampled from the center point of each triangle.
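The sketch’s source is in the repo, but for a sense of the technique: one common way to build the triangulation incrementally is the Bowyer–Watson algorithm, where each new point deletes every triangle whose circumcircle contains it, and the resulting hole is re-triangulated. Here’s a self-contained sketch along those lines (not the original code), coloring each triangle by its centroid position rather than sampling an underlying image:

```java
ArrayList<PVector> pts = new ArrayList<PVector>();
ArrayList<int[]> tris = new ArrayList<int[]>(); // each entry holds 3 vertex indices into pts

void setup() {
  size(600, 600);
  // three far-away "super-triangle" vertices that enclose the whole canvas
  pts.add(new PVector(-2 * width, -height));
  pts.add(new PVector(3 * width, -height));
  pts.add(new PVector(width / 2, 4 * height));
  tris.add(new int[]{0, 1, 2});
}

void draw() {
  background(255);
  if (frameCount % 5 == 0 && pts.size() < 200) {
    addPoint(random(width), random(height));
  }
  noStroke();
  for (int[] t : tris) {
    if (t[0] < 3 || t[1] < 3 || t[2] < 3) continue; // hide super-triangle faces
    PVector a = pts.get(t[0]), b = pts.get(t[1]), c = pts.get(t[2]);
    float cx = (a.x + b.x + c.x) / 3, cy = (a.y + b.y + c.y) / 3;
    fill(map(cx, 0, width, 40, 255), 120, map(cy, 0, height, 40, 255)); // color from centroid
    triangle(a.x, a.y, b.x, b.y, c.x, c.y);
  }
}

// Bowyer–Watson insertion: remove triangles whose circumcircle contains
// the new point, then re-triangulate the boundary of the hole.
void addPoint(float x, float y) {
  int pi = pts.size();
  pts.add(new PVector(x, y));
  ArrayList<int[]> bad = new ArrayList<int[]>();
  for (int[] t : tris) {
    if (inCircumcircle(pts.get(t[0]), pts.get(t[1]), pts.get(t[2]), pts.get(pi))) bad.add(t);
  }
  ArrayList<int[]> hole = new ArrayList<int[]>();
  for (int[] t : bad) {
    int[][] edges = {{t[0], t[1]}, {t[1], t[2]}, {t[2], t[0]}};
    for (int[] e : edges) {
      boolean shared = false;
      for (int[] o : bad) {
        if (o != t && containsEdge(o, e)) { shared = true; break; }
      }
      if (!shared) hole.add(e); // edge on the boundary of the cavity
    }
  }
  tris.removeAll(bad);
  for (int[] e : hole) tris.add(new int[]{e[0], e[1], pi});
}

boolean containsEdge(int[] t, int[] e) {
  int hits = 0;
  for (int v : t) if (v == e[0] || v == e[1]) hits++;
  return hits == 2;
}

boolean inCircumcircle(PVector a, PVector b, PVector c, PVector p) {
  float ax = a.x - p.x, ay = a.y - p.y;
  float bx = b.x - p.x, by = b.y - p.y;
  float cx = c.x - p.x, cy = c.y - p.y;
  float det = (ax*ax + ay*ay) * (bx*cy - cx*by)
            - (bx*bx + by*by) * (ax*cy - cx*ay)
            + (cx*cx + cy*cy) * (ax*by - bx*ay);
  // flip the sign for clockwise triangles so winding order doesn't matter
  float winding = (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
  return winding > 0 ? det > 0 : det < 0;
}
```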
I was recently tipped off to Delaunay Triangulation, which connects an arbitrary set of points into a mesh of non-overlapping triangles, one that avoids sliver shapes by maximizing the triangles’ minimum angles. This is exactly what we needed to create planar surfaces between our subway station nodes, and I was able to do it with the Edgy triangulation library for iOS. It was quite a nice feeling to see this rendered as a 3D topography for the first time.
Once we had the triangles, it was trivial to manually dump the data into an .obj file and open it in Rhino.
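The dump is trivial because .obj is a plain-text format: one “v x y z” line per vertex, then one 1-indexed “f a b c” line per triangle. We did it on iOS, but the same idea expressed in Processing looks something like this:

```java
// Write a triangle mesh to a Wavefront .obj file (a sketch of the idea,
// not the iOS code we actually used).
void saveObj(ArrayList<PVector> verts, ArrayList<int[]> faces, String filename) {
  PrintWriter out = createWriter(filename);
  for (PVector v : verts) {
    out.println("v " + v.x + " " + v.y + " " + v.z);
  }
  for (int[] f : faces) {
    // .obj face indices are 1-based
    out.println("f " + (f[0] + 1) + " " + (f[1] + 1) + " " + (f[2] + 1));
  }
  out.flush();
  out.close();
}
```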
We’ve arrived at the moment of truth: particle systems. Simply put, a particle system is a collection of objects that all operate on the same rule set. But they have a wide range of possible uses; by tweaking their variables or dressing them up, they can model behaviors like flocking birds, clouds, or fire.
This is a simple example of a particle emitter using additive blending. Dragging your finger sends a wind force toward the emitter. I’ve also included a settings panel that allows you to select any of the OpenGL blend functions.
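For reference, a stripped-down desktop analogue of that sketch (mouse in place of touch, and all the constants invented) might look like the following. Note the blendMode(ADD) call, which makes overlapping particles accumulate toward white:

```java
ArrayList<Particle> particles = new ArrayList<Particle>();
PVector emitter;

void setup() {
  size(640, 480, P2D);
  emitter = new PVector(width/2, height - 60);
}

void draw() {
  blendMode(BLEND);  // reset so the background clears normally
  background(0);
  blendMode(ADD);    // additive blending: overlaps sum toward white
  particles.add(new Particle(emitter.x, emitter.y));
  for (int i = particles.size() - 1; i >= 0; i--) {
    Particle p = particles.get(i);
    if (mousePressed) {
      // a wind force blowing from the pointer toward the emitter
      PVector wind = PVector.sub(emitter, new PVector(mouseX, mouseY));
      wind.normalize();
      wind.mult(0.05);
      p.applyForce(wind);
    }
    p.applyForce(new PVector(0, 0.02)); // mild gravity
    p.update();
    p.display();
    if (p.isDead()) particles.remove(i);
  }
}

class Particle {
  PVector pos, vel, acc;
  float life = 255;

  Particle(float x, float y) {
    pos = new PVector(x, y);
    vel = new PVector(random(-1, 1), random(-3, -1));
    acc = new PVector(0, 0);
  }

  void applyForce(PVector f) { acc.add(f); }

  void update() {
    vel.add(acc);
    pos.add(vel);
    acc.mult(0);
    life -= 2.5;
  }

  void display() {
    noStroke();
    fill(255, 120, 30, life); // fade out as life drains
    ellipse(pos.x, pos.y, 16, 16);
  }

  boolean isDead() { return life <= 0; }
}
```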
In this example, I’ve created a “Flame” subclass of ParticleSystem. You can spawn flames by touching the screen, and they seek out the brightest regions of the paper. They avoid their own trails, since the trails are black, and the flame burns “upwards” according to the device orientation.
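The “seeking” is a standard steering behavior: sample nearby pixels, pick the brightest, and steer toward it with a magnitude-limited force. This toy sketch isn’t the Flame class, but it shows the mechanic with a single agent and a fake bright spot that follows the mouse:

```java
PVector pos, vel;

void setup() {
  size(640, 480);
  pos = new PVector(random(width), random(height));
  vel = new PVector(0, 0);
}

void draw() {
  // fake "paper": a soft bright spot that follows the mouse
  background(30);
  noStroke();
  for (int r = 200; r > 0; r -= 10) {
    fill(255, map(r, 200, 0, 5, 60));
    ellipse(mouseX, mouseY, r, r);
  }

  // sample a handful of nearby pixels and head for the brightest one
  loadPixels();
  PVector target = pos.get();
  float best = -1;
  for (int i = 0; i < 30; i++) {
    int sx = constrain(int(pos.x + random(-40, 40)), 0, width - 1);
    int sy = constrain(int(pos.y + random(-40, 40)), 0, height - 1);
    float b = brightness(pixels[sy * width + sx]);
    if (b > best) {
      best = b;
      target.set(sx, sy);
    }
  }
  PVector desired = PVector.sub(target, pos);
  desired.setMag(2);                    // desired velocity
  PVector steer = PVector.sub(desired, vel);
  steer.limit(0.2);                     // Reynolds-style limited steering force
  vel.add(steer);
  pos.add(vel);

  fill(0);
  ellipse(pos.x, pos.y, 12, 12);
}
```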
Both of these sketches have been committed to my Nature of Code GitHub repo.
Liz Khoo and I have been working with the MTA Turnstile data for our semester-long project in Sculpting Data into Everyday Objects. Over the past few weeks we’ve been able to parse the data and visually investigate what we’ve got.
After paging through a few weeks of data, it became clear that the days and weeks after Hurricane Sandy would be our focus. The system was brought offline in dramatic fashion just prior to landfall, and the data indicates just how long it took for stations to resume operation after the storm passed.
This was our initial sketch showing the volume of riders entering the system. This screen represents 24 “control units” at 1 station over 1 week.
Here we’re parsing all stations over the course of a week. The bright white activity on the left is the flow prior to Sandy, and the right is just starting to show the first stations coming back online.
We mapped the station data to their geographic coordinates to create this map. Scrubbing the mouse horizontally changes the brightness of the dots, corresponding to the traffic shown in the screenshot above. This sketch parses 3 weeks of data, which gives us a better indication of how long it took for the stations to recover.
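The mapping itself is straightforward: lat/lon goes through map() to screen coordinates, and the entry count for the hour under the mouse drives each dot’s brightness. Here’s the gist with placeholder data (random stations and counts, not the parsed MTA files):

```java
int HOURS = 3 * 7 * 24;   // three weeks of hourly buckets
int STATIONS = 400;
float[] lat = new float[STATIONS];
float[] lon = new float[STATIONS];
float[][] entries = new float[STATIONS][HOURS];

void setup() {
  size(800, 600);
  // fake data roughly spanning NYC's bounding box
  for (int s = 0; s < STATIONS; s++) {
    lat[s] = random(40.55, 40.90);
    lon[s] = random(-74.05, -73.75);
    for (int h = 0; h < HOURS; h++) entries[s][h] = random(3000);
  }
}

void draw() {
  background(0);
  // horizontal mouse position scrubs through time
  int hour = int(map(mouseX, 0, width, 0, HOURS - 1));
  noStroke();
  for (int s = 0; s < STATIONS; s++) {
    float x = map(lon[s], -74.05, -73.75, 0, width);
    float y = map(lat[s], 40.90, 40.55, 0, height); // north at the top
    fill(map(entries[s][hour], 0, 3000, 20, 255));  // traffic drives brightness
    ellipse(x, y, 5, 5);
  }
}
```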
In this sketch we looked at how long an individual station took to come back online. Darker stations took longer. We’re tentatively going to use this data as the basis for our X, Y and Z values for 3D modeling.
Here are some screenshots of an iPad app that lets us browse the data in 3 dimensions. The slider changes the Z depth.
For this week’s Data Representation assignment, we were asked to select a dataset from the Guardian Data Store and represent it in two ways. The first is the Tufte way, which focuses on simplicity and clarity. The second is to look at the unique character of the dataset and try to represent it in a way that applies only to our data.
I’ve chosen the Close Earth Encounters dataset, which looks at asteroid flybys circa 2011.
For the first representation, I’ve done a straightforward chart with distance on one axis and size on the other:
For the character representation, I’m plotting the asteroids in a faux-orbital path around the Earth to represent their distance in relation to other familiar reference points (e.g., the Moon, the Space Shuttle, and Mars), and also to indicate the scale of the system. The length of the tail corresponds to the velocity of the asteroid, and the brightness of the tail maps to the diameter of the object. I’ve also included data from the recent DA14 flyby as a benchmark that people may be familiar with.
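The encodings boil down to a few map() calls. Here’s a toy version with invented numbers rather than the Guardian data:

```java
int N = 12;
float[] distLD = new float[N];   // miss distance, in lunar distances
float[] velKmS = new float[N];   // velocity, km/s
float[] diamM  = new float[N];   // diameter, meters

void setup() {
  size(600, 600);
  for (int i = 0; i < N; i++) {
    distLD[i] = random(0.1, 10);
    velKmS[i] = random(3, 30);
    diamM[i]  = random(5, 500);
  }
}

void draw() {
  background(0);
  translate(width/2, height/2);
  noStroke();
  fill(60, 120, 255);
  ellipse(0, 0, 20, 20); // Earth at the center
  for (int i = 0; i < N; i++) {
    float angle = TWO_PI * i / N;
    float r = map(distLD[i], 0, 10, 30, width/2 - 20); // distance from Earth
    float x = cos(angle) * r, y = sin(angle) * r;
    float tail = map(velKmS[i], 3, 30, 5, 60);         // faster = longer tail
    float bright = map(diamM[i], 5, 500, 60, 255);     // bigger = brighter tail
    stroke(bright);
    line(x, y, x - cos(angle + HALF_PI) * tail, y - sin(angle + HALF_PI) * tail);
    noStroke();
    fill(255);
    ellipse(x, y, 4, 4);
  }
}
```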
Move your mouse along the Y axis to change the scale.
If you’re running Processing 2 β5 or later, you can install a command-line app which will compile and run your sketches for you. It’s called `processing-java`. To install it, open Processing and select “Install processing-java” from the “Tools” menu.
Once that’s installed, I added the following line to my ~/.profile, which allows me to quickly launch a Processing sketch from the CLI:
alias run_sketch="processing-java --sketch=\"\`pwd\`\" --output=/tmp/processing_output --force --run"
When you’re in a sketch directory, you can simply type `run_sketch` to compile and launch the program.
I’ve also downloaded the TextMate Processing Bundle, which adds some Processing text-completion and shortcuts to TextMate. Once that’s installed, I remapped ⌘R to the following command, which runs the sketch with processing-java.
#!/usr/bin/env bash
/usr/bin/processing-java --sketch="$TM_PROJECT_DIRECTORY" --output=/tmp/processing_output --force --run
Voila! You can now edit and run your Processing sketches without the Processing IDE, if you’re so inclined.
This week’s installment of Nature of Code looks at oscillation and springs. As always, the code is available in my NOC GitHub repo.
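The repo has the real sketches; as a reminder of the core idea, a spring is just Hooke’s law, a force proportional to the stretch beyond the rest length. A bare-bones bob-on-a-spring (all constants are my own, not the repo’s):

```java
PVector anchor, pos, vel;
float restLength = 150;
float k = 0.1;        // spring stiffness
float damping = 0.98; // crude energy loss so the bob settles

void setup() {
  size(400, 400);
  anchor = new PVector(width/2, 50);
  pos = new PVector(width/2 + 80, 250);
  vel = new PVector(0, 0);
}

void draw() {
  background(255);
  // Hooke's law: F = -k * stretch, along the spring's axis
  PVector dir = PVector.sub(pos, anchor);
  float stretch = dir.mag() - restLength;
  dir.normalize();
  dir.mult(-k * stretch);
  vel.add(dir);                 // spring force
  vel.add(new PVector(0, 0.5)); // gravity
  vel.mult(damping);
  pos.add(vel);

  stroke(0);
  line(anchor.x, anchor.y, pos.x, pos.y);
  fill(80);
  ellipse(pos.x, pos.y, 24, 24);
}
```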