For my thesis, I am cultivating a species of algae known as Chlorella vulgaris. C. vulgaris is studied for scrubbing CO2 as well as for producing oil that can be used as biofuel. It's also consumed by humans as a ‘superfood’. I ordered C. vulgaris from Carolina Labs. I went with the simplest instructable I could find to create a DIY bioreactor, found here. After building the reactor, I followed pretty generic sanitation procedures: sterilize the bioreactor with alcohol, then flush with hot water; gloves, sterile instruments, etc.
To grow, the algae needs (per Carolina) 200-400 foot-candles of light for 7-10 days on a circadian cycle (12 hours on / 12 hours off). Formulas for the light conversions:
lumens = foot-candles x 10.76
watts = lumens x 0.001496
I ended up needing only about six and a half watts of either incandescent or fluorescent light (contains UV).
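To check that figure, plug the upper bound of 400 foot-candles into the formulas above:

400 foot-candles x 10.76 = 4304 lumens
4304 lumens x 0.001496 ≈ 6.4 watts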
The algae also needs a fresh supply of CO2 (hence the aquarium pump) as well as some nutrients. For nutrients, I just bought some freshwater plant food from the fish store and am adding 2-3 drops weekly.
To automate the lighting system, I set up an Arduino running the Blink Without Delay example from the Arduino library. By using a PowerSwitch Tail (which does the hard part of switching the AC line for the clip light from a low-voltage DC control signal), I can simply drive the control pin high for 12 hours and then low for 12 hours. To see what problems might arise, I added a SparkFun microSD shield and an Adafruit ChronoDot RTC. The RTC is a really accurate clock with its own lithium battery, so time is kept accurately and without interruption. I am logging the on/off values with timestamps to a CSV the entire time. It's really unsophisticated, but it does the job in a pinch. You can see the code on my github here.
The wiring is pretty straightforward. The RTC communicates over I2C, and the RTClib library expects analog pin 4 for SDA and analog pin 5 for SCL.
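Here's a minimal sketch of the idea, not the actual code from my repo: the pin numbers are placeholders, and I'm assuming Adafruit's RTClib (which has a DS3231 driver for the ChronoDot) and the stock SD library.

```cpp
#include <Wire.h>
#include <SD.h>
#include <RTClib.h>

RTC_DS3231 rtc;                  // the ChronoDot carries a DS3231
const int lampPin = 2;           // to the PowerSwitch Tail (placeholder pin)
const int chipSelect = 8;        // microSD shield chip select (placeholder)
const unsigned long interval = 12UL * 60UL * 60UL * 1000UL;  // 12 hours

int lampState = HIGH;
unsigned long previousMillis = 0;

void logState() {
  DateTime now = rtc.now();      // timestamp from the battery-backed clock
  File logFile = SD.open("light.csv", FILE_WRITE);
  if (logFile) {
    logFile.print(now.unixtime());
    logFile.print(",");
    logFile.println(lampState == HIGH ? 1 : 0);
    logFile.close();
  }
}

void setup() {
  pinMode(lampPin, OUTPUT);
  digitalWrite(lampPin, lampState);
  Wire.begin();                  // I2C: A4 = SDA, A5 = SCL
  rtc.begin();
  SD.begin(chipSelect);
  logState();
}

void loop() {
  // Blink Without Delay pattern, stretched to a 12-hour period
  if (millis() - previousMillis >= interval) {
    previousMillis += interval;
    lampState = (lampState == HIGH) ? LOW : HIGH;
    digitalWrite(lampPin, lampState);
    logState();                  // record each on/off transition to the CSV
  }
}
```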
I’m currently on Day 4 and the algae are growing well. A decent setup. Now, for my biofluorescent marine algae…
For my Makematics and Open Source Animation (OSA) finals, I intend to combine what I learned using PCA with the Kinect and a simple particle system that Nick created for OSA to create a short scene from my storyboard.
Screenshots of the Illustrator assets:
Screenshots from Greg Borenstein’s PCA/Kinect Library:
The scene that I want to focus on, computationally, is where the character first appears through the dust storm. I intend to use the particle example along with code from my Makematics class: using Principal Component Analysis to determine gaze direction. What I hope to do is use PCA to simulate a headlamp appearing in a dust storm (a rough sketch of the PCA step is below).
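For reference, here is a minimal standalone sketch of the PCA step, with made-up 2D points standing in for the Kinect head pixels that Greg's library would supply; the eigenvector of the covariance matrix with the largest eigenvalue is what gives the dominant (gaze) axis.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>
#include <utility>

int main() {
    // Made-up point cloud; in practice these would be Kinect depth pixels
    std::vector<std::pair<double, double>> pts = {
        {0, 0}, {1, 0.4}, {2, 1.1}, {3, 1.4}, {4, 2.2}, {5, 2.4}};

    // Mean of the points
    double mx = 0, my = 0;
    for (auto& p : pts) { mx += p.first; my += p.second; }
    mx /= pts.size(); my /= pts.size();

    // 2x2 covariance matrix [[sxx, sxy], [sxy, syy]]
    double sxx = 0, sxy = 0, syy = 0;
    for (auto& p : pts) {
        double dx = p.first - mx, dy = p.second - my;
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy;
    }
    sxx /= pts.size(); sxy /= pts.size(); syy /= pts.size();

    // Largest eigenvalue via the trace/determinant formula
    double tr = sxx + syy, det = sxx * syy - sxy * sxy;
    double lambda = tr / 2 + std::sqrt(tr * tr / 4 - det);

    // Corresponding eigenvector = the principal axis (valid when sxy != 0)
    double vx = sxy, vy = lambda - sxx;
    double len = std::sqrt(vx * vx + vy * vy);
    printf("principal axis: (%.3f, %.3f)\n", vx / len, vy / len);
}
```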
Screenshots from the storyboard:
This past summer, I decided that I wanted to get a laptop feed onto an old TV sitting around ITP. The only inputs the TV had were screw terminals for fork leads. After many hours of trying different cable combinations, and with help from Marlon and Rob at ITP, I discovered a formula that worked: from the laptop, I send the signal via a Mac mini-DisplayPort dongle into a VGA cable and into a magic box called a TV View Gold (basically a scan converter: this piece of hardware takes the digital pixels and converts them to an analog scan signal). From there, the signal travels to an RF converter via a (this is the fuzzy part) component/composite cable. The signal then travels over coaxial cable to a coax-to-fork-lead adapter. Voila! Now I have an old-school vacuum-tube monitor. This has awesome possibilities for student projects: anything (Processing, OF, Kinect, Max, etc.) that I can run on my laptop, I can display on this awesome old TV.
Sheiva Rezvani was a big help with this.
Here is a video and a diagram:
Macbook Pro to old school Analog TV from Crys Moore on Vimeo.
I came across this paper in Eric’s Digital Imaging class. In class, we learned about creating stereoscopic images and viewing them in 3D with glasses. I figured there had to be a way to accomplish something similar computationally, so I googled it and came across something related, and more interesting…
Depth Extraction from Video Using Non-parametric Sampling
Basically, this algorithm recreates not only a depth map but also an anaglyph image from plain ol’ RGB footage. What that means is you can watch all your favorite old-school movies in 3D. Below is a diagram of the algorithm:
How do they do this? First, they take their image and compare it to a database of images with known depth maps. In this paper they use the Make3D dataset, but there are others, such as the NYU Depth Dataset, the RGB-D dataset, and the B3DO dataset. They find candidate matches and use something called SIFT flow to align them, which basically means they can warp images of similar scenes so that corresponding pixels line up. They then process the images with a global optimization procedure (which is basically just what it sounds like). From this they obtain an approximate depth map, which could be better, so they make it better: for video, they use change over time (pixel motion between frames) to refine the depth map.
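The full pipeline is way beyond a blog post, but the last step is easy to sketch: once you have a depth map, you can fake an anaglyph by shifting each pixel horizontally in proportion to its depth and splitting the two views across the red and cyan channels. This is my own toy illustration with synthetic data, not the paper's code.

```cpp
#include <cstdio>
#include <vector>

int main() {
    const int W = 16, H = 4, maxShift = 3;
    std::vector<std::vector<int>> gray(H, std::vector<int>(W));
    std::vector<std::vector<float>> depth(H, std::vector<float>(W));
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            gray[y][x]  = (x * 16) % 256;       // made-up luminance
            depth[y][x] = 1.0f - x / float(W);  // made-up depth, 1 = near
        }

    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            // Left eye = original; right eye = pixel sampled with a
            // depth-dependent horizontal offset (the disparity).
            int shift = int(depth[y][x] * maxShift);
            int xs = x - shift;
            if (xs < 0) xs = 0;
            int r = gray[y][x];   // red channel: left view
            int g = gray[y][xs];  // green + blue (cyan): right view
            int b = gray[y][xs];
            if (y == 0) printf("(%3d,%3d,%3d) ", r, g, b);
        }
    printf("\n");
}
```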
This has an unlimited number of fantastic possibilities for ITP projects. The Kinect is great, but it really clutters up a project visually and limits it logistically (wearable computing, mobile, robotics). Not to mention, I would finally be able to watch Faster, Pussycat! Kill! Kill! in glorious 3D.
Depth Extraction from Video Using Non-parametric Sampling from Kevin Karsch on Vimeo.
This last week in Makematics we learned an algorithm called seam carving. I won’t go into it in depth, but you can find it here. Basically, it finds the column of pixels least important to a picture and deletes it, repeatedly; what this means functionally is that you can dynamically resize an image (a rough sketch of one carving pass is below). The video is a screen recording of the Processing sketch running. I brought it into Final Cut and sped it up a little; it’s about 6 minutes long. The end is pretty great… the sketch eventually crashes, but I managed to carve the image down to only a few pixels wide.
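For the curious, here is a minimal sketch of one carving pass on a tiny synthetic grayscale image. It's my own toy version (gradient-magnitude energy, dynamic programming for the cheapest vertical seam), not the code from class.

```cpp
#include <cstdio>
#include <cmath>
#include <vector>
#include <algorithm>

int main() {
    int W = 8, H = 6;
    std::vector<std::vector<int>> img(H, std::vector<int>(W));
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            img[y][x] = (x * 30 + y * 10) % 256;  // made-up pixel data

    // Energy: sum of absolute horizontal and vertical gradients
    std::vector<std::vector<int>> e(H, std::vector<int>(W));
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int gx = std::abs(img[y][std::min(x + 1, W - 1)] - img[y][std::max(x - 1, 0)]);
            int gy = std::abs(img[std::min(y + 1, H - 1)][x] - img[std::max(y - 1, 0)][x]);
            e[y][x] = gx + gy;
        }

    // DP: cost[y][x] = e[y][x] + min of the three neighbors above
    std::vector<std::vector<int>> cost = e;
    for (int y = 1; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            int best = cost[y - 1][x];
            if (x > 0)     best = std::min(best, cost[y - 1][x - 1]);
            if (x < W - 1) best = std::min(best, cost[y - 1][x + 1]);
            cost[y][x] += best;
        }

    // Backtrack the cheapest seam from the bottom row up
    std::vector<int> seam(H);
    seam[H - 1] = std::min_element(cost[H - 1].begin(), cost[H - 1].end()) - cost[H - 1].begin();
    for (int y = H - 2; y >= 0; --y) {
        int x = seam[y + 1], best = x;
        if (x > 0     && cost[y][x - 1] < cost[y][best]) best = x - 1;
        if (x < W - 1 && cost[y][x + 1] < cost[y][best]) best = x + 1;
        seam[y] = best;
    }

    // Remove the seam: the image is now one column narrower
    for (int y = 0; y < H; ++y)
        img[y].erase(img[y].begin() + seam[y]);
    printf("removed one seam; width is now %d\n", W - 1);
}
```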
Ryan and I did a short product shoot for the junk shelf. We started at ISO 100 with the aperture wide open at f/4.6. We had to bump up the ISO because there just wasn’t enough light for our setup; our other option, of course, would have been to add more light. We only had to go up to about ISO 250, so we figured the trade-off wasn’t that bad. From there, we were pretty impressed with ourselves and the pictures we were taking.
One interesting thing to note: even though I set the Canon 5D to monochrome, the pictures imported in color. This didn’t happen to Ryan. I used iPhoto. Either way, I just discarded the color information in Photoshop, then adjusted the levels and voila! Great pictures. See attached.