Midterm Documentation: Owen & Sae

Midterm

By Owen, on March 5th, 2012

Sae and I have been working on an Android app currently called Kirlian Cam, a reference to Kirlian photography, a form of photography made with electricity.  Our idea doesn’t really have anything to do with the technology behind Kirlian photography, but rather with the sort of “aura” photography that is popular in holistic medicine shops in Chinatown and the like.  The ultimate goal is a full-service paranormal app: a user takes a picture, and different photo filters reveal different types of auras and ghost trails in the photo.  We have a bit of work to do to realize everything we’ve talked about, though.  We spent a lot of time working with PhoneGap, the Android SDK, HTML5, JavaScript, and jQuery to create the app we have now, but we’re still working on the pixel manipulation, which happens in JavaScript using the HTML5 canvas tag and lets us do the sort of image processing we learned in ICM.  For now, the app lets a user take a picture and then returns a negative imprint of the photo, which isn’t that cool, but it took a lot of work to figure out that much.  Hopefully we’ll have the rest of the pixel manipulation algorithm sorted out soon so we can start doing some really cool stuff.  Here’s the look of the app now:


Some examples of Kirlian photography and other sources we may try to emulate:

Here is the JavaScript code:

function newPhoto(imageURI) {
  console.log(imageURI);

  // Get the canvas element.
  var elem = document.getElementById('cancan');
  if (!elem || !elem.getContext) {
    return;
  }

  // Get the canvas 2d context.
  var context = elem.getContext('2d');
  if (!context) {
    return;
  }

  // Create a new image.
  var img = new Image();
  img.src = imageURI;

  // Once it's loaded, draw the image on the canvas.
  img.addEventListener('load', function () {
    // Target resolution: x, y.
    var width = 300;
    var height = 400;

    context.drawImage(this, 0, 0, width, height);

    var imageData = context.getImageData(0, 0, width, height);

    for (var y = 0; y < height; y++) {
      var inpos = y * width * 4; // *4 for 4 ints (r, g, b, a) per pixel
      var outpos = inpos;
      for (var x = 0; x < width; x++) {
        var r = imageData.data[inpos++] * 1.4; // boost red
        var g = imageData.data[inpos++] * 1.4; // boost green
        var b = imageData.data[inpos++];       // leave blue alone
        var a = imageData.data[inpos++];       // same alpha

        // Clamp to [0..255] so the inverted values can't go negative.
        r = Math.min(255, r);
        g = Math.min(255, g);
        b = Math.min(255, b);

        imageData.data[outpos++] = 255 - r;
        imageData.data[outpos++] = 255 - g;
        imageData.data[outpos++] = 255 - b;
        imageData.data[outpos++] = a;
      }
    }

    // Put the pixel data back on the canvas.
    context.putImageData(imageData, 0, 0);
  }, false);

}
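To make the filter easier to test outside the app, the per-pixel step above could be pulled into a pure function that works on any flat RGBA array, with no canvas or device required. This is just a sketch of that idea; `invertPixels` is a hypothetical helper name, not part of the app code:

```javascript
// Sketch: the same invert-with-boost filter as a pure function on a
// flat RGBA byte array, so it can be tested without a canvas.
// invertPixels is a hypothetical name, not part of the app code.
function invertPixels(data) {
  for (var i = 0; i < data.length; i += 4) {
    var r = Math.min(255, data[i] * 1.4);     // boost red, then clamp
    var g = Math.min(255, data[i + 1] * 1.4); // boost green, then clamp
    var b = Math.min(255, data[i + 2]);       // blue left alone
    data[i] = 255 - r;     // invert each color channel
    data[i + 1] = 255 - g;
    data[i + 2] = 255 - b;
    // data[i + 3] (alpha) is left untouched
  }
  return data;
}

// One solid red pixel: inverting it leaves cyan (green + blue).
console.log(invertPixels([255, 0, 0, 255])); // → [ 0, 255, 255, 255 ]
```

Running the loop on a plain array like this makes it easy to check the clamping logic before wiring it back into `getImageData`.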

var pictureSource; // picture source
var destinationType; // sets the format of returned value

// Wait for PhoneGap to connect with the device
//
document.addEventListener("deviceready", onDeviceReady, false);

// PhoneGap is ready to be used!
//
function onDeviceReady() {
    pictureSource = navigator.camera.PictureSourceType;
    destinationType = navigator.camera.DestinationType;
}

// A button will call this function
//
function capturePhoto() {
    // Take a picture using the device camera and retrieve the image
    // as a file URI.
    navigator.camera.getPicture(newPhoto, onFail, {
        quality : 50,
        destinationType : destinationType.FILE_URI
    });
}

// Called if something bad happens.
//
function onFail(message) {
    alert('Failed because: ' + message);
}
