Time of visit: Evening, extended hours. Friday February 1st, 2013.
Entry via Gallery 800, the long hall. This hall is built to impress. I’m not convinced it’s built for this particular exhibit, but it is certainly beautiful. It seems to have been scaled for the sculptures, which are strong, beautiful black or white pieces. The paintings on the walls quite literally pale in comparison – many seem faded, with only a few pops of color from the later works. Honestly, only the Joan of Arc and the Salome really stand up to the rest of the room. And the Joan of Arc’s naturalist style looks bizarrely out of place behind the sculptures.
There was no real introduction to this gallery. No main statement. You walk into all these great rough Rodins and then some smoother busts and things in completely different styles. The “Madame X” bust has a funny story behind it (“Hmph! The nose displeases me, throw it away”), one which I wish were highlighted in some way – we’re too used to sculptures that represent ideals rather than real people. When I see a bust of someone’s head I want to know who they were.
I walked from the main hall right into the Monet gallery, which was packed with famous images sitting in the dark. I understand some pigments are light sensitive, but the dim lighting and dark walls made the paintings just seem to sink into the shadows. Meanwhile, the Van Gogh gallery nearby had faded pieces with fugitive pigment* shown in perfectly good lighting.
Was it a lighting malfunction? I’m not clear why that wall color was chosen, since the largest pieces are not complemented by it at all. And there are plenty more Monets in cream-colored rooms with better lighting.
*Regarding the fading of colors – I would love a mini image or augmented reality app that showed an image in the colors it originally had. The Van Gogh flowers would have had a much different impact with their red tones intact.
It must be incredibly difficult to balance a room by theme, painting size, content, artist, and symmetry, and then find a color that sets off each image. I have the utmost respect for whoever hung the pieces, as the shape balance in each room was exceptional. But I’m confused about the choices of wall color (why were some rooms so red? Not all of them had paintings with red tones), lighting, and seating. There was very little seating. I can handle the walking, but my parents cannot – they would have stayed in rooms with benches. There was no bench in front of Van Gogh’s “Self Portrait with Straw Hat,” which I simply cannot understand.
(There were a few paintings whose inclusion in a particular gallery seemed like odd choices, which I will post later.)
There seemed to be several different designers for different groups of rooms. Some rooms had simple info plaques sitting on the molding below the paintings. Others (apparently the newer ones) had more information mounted on the wall beside their works. Only one, the gallery full of John Wolfe’s collection, had a large overall explanation plaque. It made that room much more interesting to me, knowing which paintings were once together, then separated, and are now reunited – or which were commissioned and which were personally motivated by the artist. This is particularly true of the Rousseau “Forest in Winter at Sunset,” which on its own is a vast and depressing piece. It means more to me to know that it was his last great attempt to get something accepted to a Salon, but he died before he could complete it. You can see in the curling tree branches and shadows a bittersweet hope and despair – but perhaps the finished piece would have gotten a brighter finish, and we would never have seen this mournful undercoat. I thought the darkness was the goal, but since it is unfinished (noted only on the plaque) the painting remains a mystery. One I would have completely walked by, but that was one of the few rooms with a bench, so I sat and considered it long enough to read the information.
Response to “What is exhibition design?”: http://itp.nyu.edu/classes/ied-spring2013/files/2013/01/What-is-Exh-Des-selects12-62-reduced2.pdf
I think the most important theme in this for me was “Who do we design for?”
It is absolutely true that people seek different things in an exhibit. Should it strive to be as accessible as possible? I think a well-designed exhibit can tell its story through multiple levels of engagement, even one as brief as the gallery sprint. If there is a message, it should be instantly visible. Text should be for elucidation and greater meaning in context. Unfortunately, many exhibits seem designed exclusively for either children (in particular interactive museums) or adults (quiet and full of text). I particularly liked the idea from this reading, “strive to offer…opportunities to engage with the information together” (p. 18). It absolutely should. And that means more interactive engagement for adults, whose attention spans seem to be shrinking over the years, and more context and storytelling brought forward for children, who tend to struggle to connect disparate elements of an exhibit. And indeed, since people learn in so many different modalities, we should offer a form of engagement for each of them.
The one issue I had here is the reading’s suggestion that designing for the “disabled” is a foolish idea while making an area wheelchair accessible is critical – I’m not clear what distinction they are trying to make. That is what it means to design for the differently abled: to consider the way they can move and ensure it is not obstructed. The author brought up Braille and audio for the exhibits, but frankly that is far from sufficient for someone with impaired vision. We have the ability now to make 3D-printed reliefs of images or miniature prints of sculptures – these should really be everywhere. And the Braille and audio guides are woefully incomplete – try navigating the Met blindfolded and see if you feel like you got a full experience moving through the galleries. And try to avoid getting lost! There is no “You are here” between small rooms in the galleries; how on earth are you meant to find your way out without having to ask a museum guard?
The structure of the room itself can tell not just about the subject matter at hand, but also its audience. The 1900 Paris World’s fair featured an immense entryway inspired by the illustrations of the naturalist Ernst Haeckel. It was a sign of growing acceptance of the view of a universal framework of biology and natural design – in fact, since Haeckel was a noted supporter of Darwinism, it was a way of showing popular support for the theory of evolution. The enormous archway would have set the tone for the entire exhibition, to introduce new and wondrous ideas to the public.
PCA is for:
Determining meaningful differences between sets
Evaluating covariance between sets of data
Reducing data to its most important axes (principal components)
Allowing simplified reconstitution of data
Principal Component Analysis first centers each data set by subtracting its own mean, so every value becomes a deviation from that mean. Then it compares one set to another to see whether they both vary (covary) with the same sign and proportion. If I vary +0.5 from the mean and you vary -0.3, that sample contributes 0.5 × -0.3 = -0.15 to the covariance, a small negative correlation. And so it continues across the whole set.
If cov(A,B) is 0.8 and cov(B,C) is 0.01 and cov(A,C) is 0.1, I can focus on A and B and essentially ignore C.
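In code, that running product of deviations is all covariance is. A minimal sketch (the data sets here are made up, and this is population covariance, dividing by n):

```python
from statistics import mean

def covariance(a, b):
    """Average product of each sample's deviations from its set's mean."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

A = [1.0, 2.0, 3.0]
B = [2.0, 4.0, 6.0]  # varies with the same sign as A
C = [3.0, 2.0, 1.0]  # varies with the opposite sign

print(covariance(A, B))  # positive: A and B covary
print(covariance(A, C))  # negative: A and C move in opposition
```

A covariance near zero is the case where, as above, you can essentially ignore one of the variables.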
In plotting x,y data, the first principal axis aligns with the line that best fits the set (the direction of greatest variance), and the second lies orthogonal to it, capturing the remaining deviation (spread) of the data points.
This means when you have lots of training images, you treat each image as a data set – i.e. a column in your matrix, an added dimension. So 700 images = 700 dimensions. Then you reduce all this data by finding the mean and covariance. This lets you match images that have similar covariance results!
I used this primarily to help me label important components of any Cymbella diatom, by treating it as a blob and finding its axes. Diatoms are symmetric across their axes, so this is a great use of PCA. I based my code on the pen orientation example code.
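The pen-orientation approach boils down to this: treat the blob’s pixel coordinates as x,y data, build their 2×2 covariance matrix, and take the leading eigenvector’s angle, which in 2-D has a closed form. A sketch with made-up points, not my actual diatom code:

```python
import math

def blob_orientation(points):
    """Angle (radians) of a 2-D blob's principal axis.

    For a 2x2 covariance matrix [[cxx, cxy], [cxy, cyy]] the leading
    eigenvector's angle is 0.5 * atan2(2*cxy, cxx - cyy).
    """
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    cxx = sum((x - mx) ** 2 for x, _ in points) / n
    cyy = sum((y - my) ** 2 for _, y in points) / n
    cxy = sum((x - mx) * (y - my) for x, y in points) / n
    return 0.5 * math.atan2(2 * cxy, cxx - cyy)

# An elongated blob lying along y = x should report an axis near 45 degrees.
blob = [(i, i + d) for i in range(10) for d in (-0.1, 0.0, 0.1)]
print(math.degrees(blob_orientation(blob)))
```

With a symmetric diatom, this axis and its perpendicular are exactly the symmetry axes you want to label.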
I spent a long time learning about linear algebra, matrix multiplication, eigenvectors and eigenvalues. I read through Lindsay Smith’s tutorial. I particularly enjoyed http://betterexplained.com/articles/linear-algebra-guide/ and
http://www.ams.org/samplings/feature-column/fcarc-svd for excellent visual representations. The identity and transformation matrices are critical to PCA. These guides help you really picture how the transformation works.
SVM: Using a set of training images (or sounds, or words, etc) for computer recognition and categorization
I originally tried this code out as a music recognizer. The plan was to pull out the frequencies (which turned out to be very difficult on anything other than a .wav) and the beat (using FFT) and then use those to categorize the music.
I tried to tackle way too much and none of it ended up working right. It also wasn’t a great use of SVM as building a training library was very slow, and I kept changing how I wanted to categorize everything.
Finally I decided it was time to try again on a project more within my reach. Perhaps a simpler “bag of words” style problem. So I borrowed the Spam classifier code to use for recognizing valid vs invalid recipes. In particular I wanted to look at cake recipes.
Where are the text samples from?
You may be familiar with the game Portal. It is probably one of my favorite games of all time. At one point in the game you encounter a very strange cake recipe. It starts out somewhat normal, but quickly descends into nonsensical or outright dangerous ingredients.
I decided to take this entire recipe, rate the elements as valid or invalid for cake making, and then test the resulting model against a cake recipe downloaded from allrecipes.com.
Unfortunately, it did not quite work correctly. In fact it incorrectly classified a string that exactly matches one from the training set (“4 large eggs”). I am still trying to work out why. But I did spend a long time learning about machine learning, and I have dozens of new projects I would like to try with it.
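For the record, the shape of the approach looks something like this. This is not the Spam classifier code I borrowed – just a toy bag-of-words perceptron, with invented ingredient lines standing in for the real training data:

```python
from collections import Counter

def featurize(line):
    """Bag-of-words: lowercase word counts for one ingredient line."""
    return Counter(line.lower().split())

def train(samples, epochs=20):
    """Tiny perceptron over word features. Labels: +1 valid, -1 invalid."""
    weights = {}
    for _ in range(epochs):
        for text, label in samples:
            feats = featurize(text)
            score = sum(weights.get(w, 0.0) * c for w, c in feats.items())
            if score * label <= 0:  # misclassified: nudge the word weights
                for w, c in feats.items():
                    weights[w] = weights.get(w, 0.0) + label * c
    return weights

def is_valid(weights, text):
    feats = featurize(text)
    return sum(weights.get(w, 0.0) * c for w, c in feats.items()) >= 0

training = [                      # invented stand-ins, not the real recipe
    ("4 large eggs", +1),
    ("2 cups flour", +1),
    ("1 cup sugar", +1),
    ("a fistful of gravel", -1),
    ("one rusty hinge", -1),
    ("9 volt battery", -1),
]
model = train(training)
print(is_valid(model, "4 large eggs"))  # True once training converges
```

With disjoint vocabularies this toy converges immediately; real ingredient lines share words like “1” and “cup” across both classes, which is exactly the kind of overlap that can trip up a model.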
In bioinformatics, a sequence alignment is a way of arranging the sequences of DNA, RNA, or protein to identify regions of similarity that may be a consequence of functional, structural, or evolutionary relationships between the sequences. Aligned sequences of nucleotide or amino acid residues are typically represented as rows within a matrix. Gaps are inserted between the residues so that identical or similar characters are aligned in successive columns.
Sequence alignment is a powerful research tool, and finding the longest common substring of a DNA set seemed like the perfect use of this week’s dynamic programming exercise.
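The dynamic program fills a table where cell (i, j) holds the length of the common substring ending at position i of one sequence and j of the other. A minimal sketch (the sequences here are toy examples, not real genomic data):

```python
def longest_common_substring(a, b):
    """O(len(a) * len(b)) dynamic programming solution."""
    # table[i][j]: length of the common substring ending at a[i-1], b[j-1]
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best_len, best_end = 0, 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                if table[i][j] > best_len:
                    best_len, best_end = table[i][j], i
    return a[best_end - best_len:best_end]

print(longest_common_substring("GATTACA", "TACTATTACG"))  # ATTAC
```

Real alignment tools like Smith–Waterman extend this same table idea with gap and mismatch penalties.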
An interactive genus identifier to use on the beautiful and unique class of micro-organisms grouped under the name “diatoms”. These tiny creatures fix staggeringly large amounts of CO2 in the ocean and are a key player in the global carbon cycle. Invisible to the naked eye, some are as small as 3 microns across. Diatoms are frequently studied under light microscopes, which allow a small liquid environment in which to view their motion. Often the liquid is a random sample of river or sea water, full of many other organisms. Finding the diatom of interest in these large, messy samples can be a huge challenge!
The purpose of this program is to use machine learning to assist in rapid identification of organisms during microscopy. While I used videos recorded and posted to the internet, the code would work equally well on live updating video from a USB microscope.
As a research scientist, a good portion of my research time was often spent doing tasks that machines are better at: counting, finding the brightest/darkest out of a set, judging relative movement, etc. While a human eye is useful to verify machine results, it is easy to see where computer vision fits into the lab work flow.
Uses SVM (support vector machine learning) and HOG (histogram of oriented gradients). New features: image rotation of sample images
A still image of a diatom represents only one possible orientation. Under light microscopy, the diatom can spin freely through 360°. So a good image library should provide all possible orientations.
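Generating those orientations from one still is simple enough to sketch. This is just the augmentation idea, with a toy 2×2 “image” and nearest-neighbour resampling – not the code I actually used (a real image library has proper rotation routines):

```python
import math

def rotate_nn(img, deg):
    """Rotate a list-of-lists image about its centre, nearest neighbour.

    Output pixels that map outside the source are left as 0.
    """
    h, w = len(img), len(img[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for col in range(w):
            y, x = r - cy, col - cx
            # inverse-map each output pixel back into the source image
            sx, sy = x * c + y * s, -x * s + y * c
            sr, sc = int(round(cy + sy)), int(round(cx + sx))
            if 0 <= sr < h and 0 <= sc < w:
                out[r][col] = img[sr][sc]
    return out

sample = [[1, 2], [3, 4]]  # stand-in for a real diatom still
library = [rotate_nn(sample, angle) for angle in range(0, 360, 15)]
print(len(library))  # 24 orientations from one image
```

Every training still becomes a couple dozen samples this way, which is how one video can seed a whole orientation library.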
SVM only works if you have a well-trained model. I built a custom image library from videos and still images of diatoms under light microscopy, drawing on the following sources:
Various diatom species
Science photo library
Protist image library
Image links via Indiana University
Tree of Life image bank
(stay tuned for video demo via CamTwist)
The auto-merge in Photoshop is truly impressive. I tried aligning everything on my own for a while, but the results from Photoshop are so clearly superior that I just stopped halfway through. I forgot to color balance with the grey card before I started, but the variations were so minor that the photomerge blend basically erased them. Some of the whites are a bit blown out, which is a pity, but the foreground objects look good.
Here was an attempt to correct for the inevitable distortion caused by rotating a camera on a ball-joint tripod. Cropped and slightly distorted at the edges. I think it looks good, but I prefer the longer panorama at the top.