September 08, 2011

Michael Edgcumbe 2011

Design Exercise

Last week, I began learning WebGL to evaluate its viability as an interface for visualization of our data set. I tend to like WebGL because it can be wrapped cleanly in JavaScript, and the implementation is fairly straightforward to carry over to other languages, even if it needs a syntax rewrite. WebGL can maintain decent feature parity with presentation on other platforms, like the iPad. Unfortunately, WebGL is not currently enabled in the standard version of Safari, although it does ship with the development version of Safari as well as with the standard versions of Chrome and Firefox.

I found a visualization contest put forth by O’Reilly to promote the upcoming Strata conference in New York, and took on the assignment not to win a ticket to the conference but to have a simple challenge to begin piecing together the framework that might wrap our grant data at IMAP. The data for the visualization contest consists of two Excel files: a list of 100 products with nutritional information and a linked table of ingredients ranked by volume for each of the 100 products. The goal is to create a data explorer that presents the products in useful views.

The design exercise has so far helped me think about what kinds of interface elements I might include in a clean front end. It will eventually help me connect the code and raw data from Excel to a MongoDB database (via C++) and on to an interface built with JavaScript and WebGL (via Ruby on Rails). Working through the organization of this data into an explorer deeply informs choices I must make later for IMAP’s constituents.

In this case, I am working my way from the front end to the back. The steps to the design exercise are:

  1. Wireframe the front end in Illustrator
  2. Build the front end structure in Javascript and HTML5 using placeholder data
  3. Add WebGL transitions and animations
  4. Load the two tables from Excel into Mongo using my C++ converter
  5. Write queries for the database that return results to the interface
  6. Link the database to the interface
  7. Polish the UI
  8. Package the code

Below are the wireframes I have created for Step 1:

Sep 8, 2011 10:36 AM

September 06, 2011

Rob Faludi Adjunct

September in New York – Talks & Demos

The next two weeks are going to be lively ones for makers in New York City! Here’s where to find me:

  • Open Hardware Summit, September 15th — running a breakout session “What Open Hardware Needs from the Cloud,” a discussion on how Internet services can better serve open hardware projects, with Jordan Husney.
  • Maker Faire New York, September 17th — giving a talk on the Make Live Stage, “Fun with XBees” that showcases the creative projects currently enabled by XBee radios, along with a tour of the tools that you can use to make your own.
  • Maker Faire, September 17th & 18th — showing several sensor and actuator projects from my book as well as a cool new way to get your devices onto the Internet in the MakerShed demo area.
  • Strata Conference, September 22nd — demonstrating data sensor networks from the book and new Internet gateway demos.

We’re also launching my Sensitive Buildings class at ITP today, plus somebody asked me to be in an Ericsson documentary that, according to the producer, “aims to tell a compelling story about how we are on the brink to a digital revolution.”

Whew!

Sep 6, 2011 01:53 PM

September 02, 2011

Michael Edgcumbe 2011

WebGL Lessons – Color, Animation

Color added to the geometry from lesson 2. Code annotated below.

<html>

<head>
<title>Learning WebGL &mdash; lesson 1</title>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">

<script type="text/javascript" src="glMatrix-0.9.5.min.js"></script>

<script id="shader-fs" type="x-shader/x-fragment">
    #ifdef GL_ES
    precision highp float;
    #endif

    //varying variable is interpolated by the fragment shader between vertices
    varying vec4 vColor;
   
    void main(void) {
        gl_FragColor = vColor;
    }
</script>

<script id="shader-vs" type="x-shader/x-vertex">
    //passed in from the gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute...)
    attribute vec3 aVertexPosition;
    //color attribute is passed in for each vertex and then interpolated by the
    //fragment shader
    attribute vec4 aVertexColor;

    //set by setMatrixUniforms()
    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;

    varying vec4 vColor;

    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        vColor = aVertexColor;
    }
</script>


<script type="text/javascript">

    //GL Object
    var gl;
    function initGL(canvas) {
        try {
            //Initiate the webGL context    
            gl = canvas.getContext("experimental-webgl");
            gl.viewportWidth = canvas.width;
            gl.viewportHeight = canvas.height;
        } catch (e) {
        }
        if (!gl) {
            alert("Could not initialise WebGL, sorry :-(");
        }
    }


    function getShader(gl, id) {
        //pass in the script    
        var shaderScript = document.getElementById(id);
        if (!shaderScript) {
            return null;
        }

        var str = "";
        var k = shaderScript.firstChild;
        while (k) {
            if (k.nodeType == 3) {
                str += k.textContent;
            }
            k = k.nextSibling;
        }

        //create the shader object
        var shader;
        //assign the shader object
        if (shaderScript.type == "x-shader/x-fragment") {
            shader = gl.createShader(gl.FRAGMENT_SHADER);
        } else if (shaderScript.type == "x-shader/x-vertex") {
            shader = gl.createShader(gl.VERTEX_SHADER);
        } else {
            return null;
        }

        //append the source
        gl.shaderSource(shader, str);
        //compile the shader
        gl.compileShader(shader);

        //check for errors
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
            alert(gl.getShaderInfoLog(shader));
            return null;
        }

        return shader;
    }

    //Create an object to hold the shader program
    var shaderProgram;

    function initShaders() {
        //read in, compile, and load the shaders into javascript objects    
        var fragmentShader = getShader(gl, "shader-fs");
        var vertexShader = getShader(gl, "shader-vs");

        //Put a program object into the javascript shaderProgram container
        shaderProgram = gl.createProgram();

        //attach the compiled shaders to the shaderProgram
        gl.attachShader(shaderProgram, vertexShader);
        gl.attachShader(shaderProgram, fragmentShader);

        //Link the shader program
        gl.linkProgram(shaderProgram);

        if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
            alert("Could not initialise shaders");
        }

        //designate the shader program to be active
        gl.useProgram(shaderProgram);

        //setup a pointer to the location of the vertex position vector
        //declared by the vertex shader
        shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
        //make the array available
        gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
       
        //gets a reference to the attributes that we want to pass to the vertex shader
        shaderProgram.vertexColorAttribute = gl.getAttribLocation( shaderProgram, "aVertexColor");
        gl.enableVertexAttribArray(shaderProgram.vertexColorAttribute);

        //setup pointers to the uniform variables in the vertex shader
        shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
        shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
    }

    //Create the model view matrix
    var mvMatrix = mat4.create();

    //Create the projection matrix
    var pMatrix = mat4.create();

    //pass in the projection and modelview matrices into the vertex shader as uniforms
    function setMatrixUniforms() {
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
    }


    //instantiate objects to hold the buffers for the geometry
    var triangleVertexPositionBuffer;
    var triangleVertexColorBuffer;
    var squareVertexPositionBuffer;
    var squareVertexColorBuffer;

    //fill in the buffers
    function initBuffers() {
        //create the triangle buffer    
        triangleVertexPositionBuffer = gl.createBuffer();
        //bind the buffer
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
        //create the vertices for the geometry
        var vertices = [
             0.0,  1.0,  0.0,
            -1.0, -1.0,  0.0,
             1.0, -1.0,  0.0
        ];
        //put the vertices in the buffer
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
        //add an integer to the object to represent the number of dimensions for a vertex
        triangleVertexPositionBuffer.itemSize = 3;
        //add an integer to the object to represent the number of vertices
        triangleVertexPositionBuffer.numItems = 3;

        //put a buffer into the color buffer object
        triangleVertexColorBuffer = gl.createBuffer();
        //set the buffer to be ready to accept data
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
        var colors = [
            1.0, 0.0, 0.0, 1.0,
            0.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 1.0     
        ];
        //put the color matrix into the buffer
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
        //set the number of dimensions for each color (RGBA) - columns
        triangleVertexColorBuffer.itemSize = 4;
        //set the number of colors specified by the matrix - rows
        triangleVertexColorBuffer.numItems = 3;
       
        //repeat for the square
        squareVertexPositionBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
        vertices = [
             1.0,  1.0,  0.0,
            -1.0,  1.0,  0.0,
             1.0, -1.0,  0.0,
            -1.0, -1.0,  0.0
        ];
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
        squareVertexPositionBuffer.itemSize = 3;
        squareVertexPositionBuffer.numItems = 4;

        squareVertexColorBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexColorBuffer);
        colors = []
        for (var i=0; i < 4; i++) {
           colors = colors.concat([0.5, 0.5, 1.0, 1.0]);
       }
       gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
       squareVertexColorBuffer.itemSize = 4;
       squareVertexColorBuffer.numItems = 4;
   }


   function drawScene() {
       //set the viewport
       gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
       //clear the depth and color buffers
       gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

       //set the perspective using the projection matrix
       mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);

       //set the identity matrix to the modelview matrix
       mat4.identity(mvMatrix);

       //translate the camera
       mat4.translate(mvMatrix, [-1.5, 0.0, -7.0]);

       //DRAW THE TRIANGLE
       //bind the triangle buffer
       gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);

       //put the values from the triangle buffer into the vertex position attribute variable in the vertex shader
       //vertexAttribPointer(index, size, type, normalized, stride, offset) reads itemSize floats per vertex
       //out of the currently bound ARRAY_BUFFER for the given attribute location (tightly packed, offset 0)
       gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
       
        //assign the color buffer to the vertexColorAttribute reference that
        //points to the shader attribute aVertexColor pointed to in initShaders()
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
        gl.vertexAttribPointer( shaderProgram.vertexColorAttribute, triangleVertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0 );

        //send the modelview and projection matrices to the vertex shader
       setMatrixUniforms();
       //draw what was sent to the shaders
       gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);

       //repeat for the square
       mat4.translate(mvMatrix, [3.0, 0.0, 0.0]);
       gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
       gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, squareVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
       gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexColorBuffer);
       gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, squareVertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
       setMatrixUniforms();
       gl.drawArrays(gl.TRIANGLE_STRIP, 0, squareVertexPositionBuffer.numItems);
   }


   //Main function to push through the WebGL code
   function webGLStart() {
       //Create a canvas object from the HTML5 canvas tag in the body  
       var canvas = document.getElementById("lesson01-canvas");
       //Pass the canvas object to initialize WebGL
       initGL(canvas);
       //Create the shaders
       initShaders();
       //Create the buffers and fill in the geometry
       initBuffers();

       //Clear the background to black
       gl.clearColor(0.0, 0.0, 0.0, 1.0);
       //Occlude for depth
       gl.enable(gl.DEPTH_TEST);

       //Draw everything
       drawScene();
   }


</script>


</head>


<body onload="webGLStart();">
    <canvas id="lesson01-canvas" style="border: none;" width="500" height="500"></canvas>
</body>

</html>

With animation functions:

<html>

<head>
<title>Learning WebGL &mdash; lesson 1</title>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">

<script type="text/javascript" src="glMatrix-0.9.5.min.js"></script>
<script type="text/javascript" src="webgl-utils.js"></script>

<script id="shader-fs" type="x-shader/x-fragment">
    #ifdef GL_ES
    precision highp float;
    #endif

    //varying variable is interpolated by the fragment shader between vertices
    varying vec4 vColor;
   
    void main(void) {
        gl_FragColor = vColor;
    }
</script>

<script id="shader-vs" type="x-shader/x-vertex">
    //passed in from the gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute...)
    attribute vec3 aVertexPosition;
    //color attribute is passed in for each vertex and then interpolated by the
    //fragment shader
    attribute vec4 aVertexColor;

    //set by setMatrixUniforms()
    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;

    varying vec4 vColor;

    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        vColor = aVertexColor;
    }
</script>


<script type="text/javascript">

    //GL Object
    var gl;
    function initGL(canvas) {
        try {
            //Initiate the webGL context    
            gl = canvas.getContext("experimental-webgl");
            gl.viewportWidth = canvas.width;
            gl.viewportHeight = canvas.height;
        } catch (e) {
        }
        if (!gl) {
            alert("Could not initialise WebGL, sorry :-(");
        }
    }


    function getShader(gl, id) {
        //pass in the script    
        var shaderScript = document.getElementById(id);
        if (!shaderScript) {
            return null;
        }

        var str = "";
        var k = shaderScript.firstChild;
        while (k) {
            if (k.nodeType == 3) {
                str += k.textContent;
            }
            k = k.nextSibling;
        }

        //create the shader object
        var shader;
        //assign the shader object
        if (shaderScript.type == "x-shader/x-fragment") {
            shader = gl.createShader(gl.FRAGMENT_SHADER);
        } else if (shaderScript.type == "x-shader/x-vertex") {
            shader = gl.createShader(gl.VERTEX_SHADER);
        } else {
            return null;
        }

        //append the source
        gl.shaderSource(shader, str);
        //compile the shader
        gl.compileShader(shader);

        //check for errors
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
            alert(gl.getShaderInfoLog(shader));
            return null;
        }

        return shader;
    }

    //Create an object to hold the shader program
    var shaderProgram;

    function initShaders() {
        //read in, compile, and load the shaders into javascript objects    
        var fragmentShader = getShader(gl, "shader-fs");
        var vertexShader = getShader(gl, "shader-vs");

        //Put a program object into the javascript shaderProgram container
        shaderProgram = gl.createProgram();

        //attach the compiled shaders to the shaderProgram
        gl.attachShader(shaderProgram, vertexShader);
        gl.attachShader(shaderProgram, fragmentShader);

        //Link the shader program
        gl.linkProgram(shaderProgram);

        if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
            alert("Could not initialise shaders");
        }

        //designate the shader program to be active
        gl.useProgram(shaderProgram);

        //setup a pointer to the location of the vertex position vector
        //declared by the vertex shader
        shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
        //make the array available
        gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);
       
        //gets a reference to the attributes that we want to pass to the vertex shader
        shaderProgram.vertexColorAttribute = gl.getAttribLocation( shaderProgram, "aVertexColor");
        gl.enableVertexAttribArray(shaderProgram.vertexColorAttribute);

        //setup pointers to the uniform variables in the vertex shader
        shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
        shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
    }

    //Create the model view matrix
    var mvMatrix = mat4.create();

    //Create the model view matrix stack
    var mvMatrixStack = [];

    //Create the projection matrix
    var pMatrix = mat4.create();

    //Function puts the current mvMatrix into the array of matrices
    function mvPushMatrix(){
        var copy = mat4.create();
        mat4.set(mvMatrix, copy );
        mvMatrixStack.push(copy);
    }
   
    //Function pops the last mvMatrix off the array of matrices
    function mvPopMatrix(){
        if(mvMatrixStack.length ==0){
            throw "Invalid popMatrix";     
        }
        mvMatrix = mvMatrixStack.pop();
    }
   

    //pass in the projection and modelview matrices into the vertex shader as uniforms
    function setMatrixUniforms() {
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
    }

    //convert degrees to radians
    function degToRad(degrees){
        return degrees * Math.PI / 180;
    }

    //instantiate objects to hold the buffers for the geometry
    var triangleVertexPositionBuffer;
    var triangleVertexColorBuffer;
    var squareVertexPositionBuffer;
    var squareVertexColorBuffer;

    //fill in the buffers
    function initBuffers() {
        //create the triangle buffer    
        triangleVertexPositionBuffer = gl.createBuffer();
        //bind the buffer
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
        //create the vertices for the geometry
        var vertices = [
             0.0,  1.0,  0.0,
            -1.0, -1.0,  0.0,
             1.0, -1.0,  0.0
        ];
        //put the vertices in the buffer
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
        //add an integer to the object to represent the number of dimensions for a vertex
        triangleVertexPositionBuffer.itemSize = 3;
        //add an integer to the object to represent the number of vertices
        triangleVertexPositionBuffer.numItems = 3;

        //put a buffer into the color buffer object
        triangleVertexColorBuffer = gl.createBuffer();
        //set the buffer to be ready to accept data
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
        var colors = [
            1.0, 0.0, 0.0, 1.0,
            0.0, 1.0, 0.0, 1.0,
            0.0, 0.0, 1.0, 1.0     
        ];
        //put the color matrix into the buffer
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
        //set the number of dimensions for each color (RGBA) - columns
        triangleVertexColorBuffer.itemSize = 4;
        //set the number of colors specified by the matrix - rows
        triangleVertexColorBuffer.numItems = 3;
       
        //repeat for the square
        squareVertexPositionBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
        vertices = [
             1.0,  1.0,  0.0,
            -1.0,  1.0,  0.0,
             1.0, -1.0,  0.0,
            -1.0, -1.0,  0.0
        ];
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
        squareVertexPositionBuffer.itemSize = 3;
        squareVertexPositionBuffer.numItems = 4;

        squareVertexColorBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexColorBuffer);
        colors = []
        for (var i=0; i < 4; i++) {
           colors = colors.concat([0.5, 0.5, 1.0, 1.0]);
       }
       gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
       squareVertexColorBuffer.itemSize = 4;
       squareVertexColorBuffer.numItems = 4;
   }

    //rotation variables
    var rTri = 0;
    var rSquare = 0;

   function drawScene() {
       //set the viewport
       gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
       //clear the depth and color buffers
       gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

       //set the perspective using the projection matrix
       mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);

       //set the identity matrix to the modelview matrix
       mat4.identity(mvMatrix);

       //translate the camera
       mat4.translate(mvMatrix, [-1.5, 0.0, -7.0]);
       
        //rotate the world
       mvPushMatrix();
       mat4.rotate(mvMatrix, degToRad(rTri), [0, 1, 0]);
       
       //DRAW THE TRIANGLE
       //bind the triangle buffer
       gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);

       //put the values from the triangle buffer into the vertex position attribute variable in the vertex shader
       //vertexAttribPointer(index, size, type, normalized, stride, offset) reads itemSize floats per vertex
       //out of the currently bound ARRAY_BUFFER for the given attribute location (tightly packed, offset 0)
       gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
       
        //assign the color buffer to the vertexColorAttribute reference that
        //points to the shader attribute aVertexColor pointed to in initShaders()
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexColorBuffer);
        gl.vertexAttribPointer( shaderProgram.vertexColorAttribute, triangleVertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0 );

        //send the modelview and projection matrices to the vertex shader
       setMatrixUniforms();
       //draw what was sent to the shaders
       gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);

        mvPopMatrix();

       //repeat for the square
       mat4.translate(mvMatrix, [3.0, 0.0, 0.0]);
       mvPushMatrix();
       mat4.rotate(mvMatrix, degToRad(rSquare), [1, 0, 0]);
       gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
       gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, squareVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
       gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexColorBuffer);
       gl.vertexAttribPointer(shaderProgram.vertexColorAttribute, squareVertexColorBuffer.itemSize, gl.FLOAT, false, 0, 0);
       setMatrixUniforms();
       gl.drawArrays(gl.TRIANGLE_STRIP, 0, squareVertexPositionBuffer.numItems);
       mvPopMatrix();

   }

    var lastTime = 0;
   function animate() {
       var timeNow = new Date().getTime();
       if (lastTime != 0) {
           var elapsed = timeNow - lastTime;

           rTri += (90 * elapsed) / 1000.0;
           rSquare += (75 * elapsed) / 1000.0;
       }
       lastTime = timeNow;
   }

    //Move the world forward in time
    function tick(){
        //function from webgl-utils.js which calls a repaint
        //does not repaint all windows with WebGL open, only the current one
        requestAnimFrame(tick);
        drawScene();
        animate();
    }

   //Main function to push through the WebGL code
   function webGLStart() {
       //Create a canvas object from the HTML5 canvas tag in the body  
       var canvas = document.getElementById("lesson01-canvas");
       //Pass the canvas object to initialize WebGL
       initGL(canvas);
       //Create the shaders
       initShaders();
       //Create the buffers and fill in the geometry
       initBuffers();

       //Clear the background to black
       gl.clearColor(0.0, 0.0, 0.0, 1.0);
       //Occlude for depth
       gl.enable(gl.DEPTH_TEST);

        tick();
   }


</script>


</head>


<body onload="webGLStart();">
    <canvas id="lesson01-canvas" style="border: none;" width="500" height="500"></canvas>
</body>

</html>

Sep 2, 2011 09:35 AM

WebGL Hello World

I spent the day at the office and some time this evening reviewing an excellent tutorial for setting up a WebGL canvas element in HTML5.

It’s my hope that I can use WebGL to do the heavy lifting for visualization in the browser, and easily walk the code back and forth from OpenGL ES used on the iPad.

I annotated the first lesson’s example as I worked through it to understand it.

<html>

<head>
<title>Learning WebGL &mdash; lesson 1</title>
<meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">

<script type="text/javascript" src="glMatrix-0.9.5.min.js"></script>

<script id="shader-fs" type="x-shader/x-fragment">
    #ifdef GL_ES
    precision highp float;
    #endif

    void main(void) {
        gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }
</script>

<script id="shader-vs" type="x-shader/x-vertex">
    //passed in from the gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute...)
    attribute vec3 aVertexPosition;

    //set by setMatrixUniforms()
    uniform mat4 uMVMatrix;
    uniform mat4 uPMatrix;

    void main(void) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    }
</script>


<script type="text/javascript">

    //GL Object
    var gl;
    function initGL(canvas) {
        try {
            //Initiate the webGL context   
            gl = canvas.getContext("experimental-webgl");
            gl.viewportWidth = canvas.width;
            gl.viewportHeight = canvas.height;
        } catch (e) {
        }
        if (!gl) {
            alert("Could not initialise WebGL, sorry :-(");
        }
    }


    function getShader(gl, id) {
        //pass in the script   
        var shaderScript = document.getElementById(id);
        if (!shaderScript) {
            return null;
        }

        var str = "";
        var k = shaderScript.firstChild;
        while (k) {
            if (k.nodeType == 3) {
                str += k.textContent;
            }
            k = k.nextSibling;
        }

        //create the shader object
        var shader;
        //assign the shader object
        if (shaderScript.type == "x-shader/x-fragment") {
            shader = gl.createShader(gl.FRAGMENT_SHADER);
        } else if (shaderScript.type == "x-shader/x-vertex") {
            shader = gl.createShader(gl.VERTEX_SHADER);
        } else {
            return null;
        }

        //append the source
        gl.shaderSource(shader, str);
        //compile the shader
        gl.compileShader(shader);

        //check for errors
        if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
            alert(gl.getShaderInfoLog(shader));
            return null;
        }

        return shader;
    }

    //Create an object to hold the shader program
    var shaderProgram;

    function initShaders() {
        //read in, compile, and load the shaders into javascript objects   
        var fragmentShader = getShader(gl, "shader-fs");
        var vertexShader = getShader(gl, "shader-vs");

        //Put a program object into the javascript shaderProgram container
        shaderProgram = gl.createProgram();

        //attach the compiled shaders to the shaderProgram
        gl.attachShader(shaderProgram, vertexShader);
        gl.attachShader(shaderProgram, fragmentShader);

        //Link the shader program
        gl.linkProgram(shaderProgram);

        if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
            alert("Could not initialise shaders");
        }

        //designate the shader program to be active
        gl.useProgram(shaderProgram);

        //setup a pointer to the location of the vertex position vector
        //declared by the vertex shader
        shaderProgram.vertexPositionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
        //make the array available
        gl.enableVertexAttribArray(shaderProgram.vertexPositionAttribute);

        //setup pointers to the uniform variables in the vertex shader
        shaderProgram.pMatrixUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
        shaderProgram.mvMatrixUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
    }

    //Create the model view matrix
    var mvMatrix = mat4.create();

    //Create the projection matrix
    var pMatrix = mat4.create();

    //pass in the projection and modelview matrices into the vertex shader as uniforms
    function setMatrixUniforms() {
        gl.uniformMatrix4fv(shaderProgram.pMatrixUniform, false, pMatrix);
        gl.uniformMatrix4fv(shaderProgram.mvMatrixUniform, false, mvMatrix);
    }


    //instantiate objects to hold the buffers for the geometry
    var triangleVertexPositionBuffer;
    var squareVertexPositionBuffer;

    //fill in the buffers
    function initBuffers() {
        //create the triangle buffer   
        triangleVertexPositionBuffer = gl.createBuffer();
        //bind the buffer
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);
        //create the vertices for the geometry
        var vertices = [
             0.0,  1.0,  0.0,
            -1.0, -1.0,  0.0,
             1.0, -1.0,  0.0
        ];
        //put the vertices in the buffer
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
        //add an integer to the object to represent the number of dimensions for a vertex
        triangleVertexPositionBuffer.itemSize = 3;
        //add an integer to the object to represent the number of vertices
        triangleVertexPositionBuffer.numItems = 3;

        //repeat for the square
        squareVertexPositionBuffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
        vertices = [
             1.0,  1.0,  0.0,
            -1.0,  1.0,  0.0,
             1.0, -1.0,  0.0,
            -1.0, -1.0,  0.0
        ];
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
        squareVertexPositionBuffer.itemSize = 3;
        squareVertexPositionBuffer.numItems = 4;
    }


    function drawScene() {
        //set the viewport
        gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
        //clear the depth and color buffers
        gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

        //set the perspective using the projection matrix
        mat4.perspective(45, gl.viewportWidth / gl.viewportHeight, 0.1, 100.0, pMatrix);

        //set the identity matrix to the modelview matrix
        mat4.identity(mvMatrix);

        //translate the camera
        mat4.translate(mvMatrix, [-1.5, 0.0, -7.0]);

        //DRAW THE TRIANGLE
        //bind the triangle buffer
        gl.bindBuffer(gl.ARRAY_BUFFER, triangleVertexPositionBuffer);

        //put the values from the triangle buffer into the vertex position attribute variable in the vertex shader
        //vertexAttribPointer(index, size, type, normalized, stride, offset) reads itemSize floats per vertex
        //out of the currently bound ARRAY_BUFFER for the given attribute location (tightly packed, offset 0)
        gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, triangleVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
        //send the modelview and projection matrices to the vertex shader
        setMatrixUniforms();
        //draw what was sent to the shaders
        gl.drawArrays(gl.TRIANGLES, 0, triangleVertexPositionBuffer.numItems);

        //repeat for the square
        mat4.translate(mvMatrix, [3.0, 0.0, 0.0]);
        gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexPositionBuffer);
        gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, squareVertexPositionBuffer.itemSize, gl.FLOAT, false, 0, 0);
        setMatrixUniforms();
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, squareVertexPositionBuffer.numItems);
    }


    //Main function to push through the WebGL code
    function webGLStart() {
        //Create a canvas object from the HTML5 canvas tag in the body 
        var canvas = document.getElementById("lesson01-canvas");
        //Pass the canvas object to initialize WebGL
        initGL(canvas);
        //Create the shaders
        initShaders();
        //Create the buffers and fill in the geometry
        initBuffers();

        //Clear the background to black
        gl.clearColor(0.0, 0.0, 0.0, 1.0);
        //Occlude for depth
        gl.enable(gl.DEPTH_TEST);

        //Draw everything
        drawScene();
    }


</script>


</head>


<body onload="webGLStart();">
    <canvas id="lesson01-canvas" style="border: none;" width="500" height="500"></canvas>
</body>

</html>

Sep 1, 2011 09:58 PM

August 31, 2011

Michael Edgcumbe 2011

Doubling Back

Our methodology for coding is to have two separate coders evaluate an organization and discuss as a group the grantees on which they disagree. The group, led by Sheila, acts as a third coder and tiebreaker.

Recently, we added new categories to our definitions, and we felt it was important to make sure we considered the previously coded organizations under the new rubric. For that reason, we submitted the old coding to another round and found some changes to make in nearly every category. For example, we separated out the MECO category to see if it was internally consistent, and recategorized organizations whose primary focus we determined to be consulting and marketing as opposed to continuing medical education.

We are now printing out the documentation that comes up in research and leads to our decisions, and putting those face sheets into an archive. We have also asked the coders to cite the category definition reference when they want to discuss an organization, so that we can understand where the boundaries between the current definitions lie and how they should be modified to accommodate new variations. Our best hope is that we no longer have to add new categories and can resist doubling back on work we have already done.

Aug 31, 2011 02:10 PM

August 26, 2011

Sarah Dahnke 2011

Interview on The Creators Project

The Creators Project did an interview with me this week, where I got to talk about nerdy circuit building and rocket boots.

Click here to read more!

Aug 26, 2011 01:51 AM

Duet with Chair #2

Created while in residence at chaNorth.

Part of my series of duets with inanimate objects.

Aug 26, 2011 01:24 AM

Chairs in Motion

Created while in residence at chaNorth.

Part of my series of duets with inanimate objects.

Aug 26, 2011 01:23 AM

Duet with Mike

Created while in residence at chaNorth.

Part of my series of duets with inanimate objects.

Aug 26, 2011 01:22 AM

Duet with Tree

Created while in residence at chaNorth.

This is part of my series of duets with inanimate objects.

Aug 26, 2011 01:21 AM

Duet with Basket

Created while in residence at chaNorth.

This is one piece in a series of experimental duets with inanimate objects.

Aug 26, 2011 01:20 AM

Two Trees

Created while in residence at chaNorth.

A love story and a performance based around two trees growing a canopy together until death do they part. My intention was to unravel the crocheted binding I created over the course of two days. But after 24 hours of heavy rain, the wool refused to unravel. I wondered, why did I even try?

Aug 26, 2011 01:18 AM

Out on the Ledge

A collaboration with Alex Vessels.

Out on the Ledge is a site-specific installation for four dancers projected into four windows. It takes the inhabitants of the building and incorporates them into its architecture, exposing four wild characters ready for a dance party.

Aug 26, 2011 01:16 AM

check:other

Showings:
“One Brave Thing” at The Wild Project (New York), November 2010
“Going Dutch” at Ruth Page Center for the Arts (Chicago), April 2011

check:other is a solo performance for video, featuring one woman and her attempt to fit her body through nine frames of varying shapes and sizes.

Aug 26, 2011 01:07 AM

This Dance is a Cliché

Debut: Dance Theater Workshop, February 2009

This Dance is a Cliché is more than a performance; it’s a state of consciousness. TDIAC began as a blog where I asked the audience to submit their favorite dance clichés. Submissions came from around the world and have been used to inspire a variety of live performances. No two performances of TDIAC are alike.

Aug 26, 2011 01:00 AM

August 25, 2011

Rob Faludi Adjunct

Light Switch XBee: Example Project

Just finished documenting the latest example project. The Light Switch XBee is a wireless wall switch that can control lamps, fans, motors or your homemade robot using Digi’s XBee radio. It’s a model for almost any digital input device you’d like to build. If it goes on and off, you can make it wireless using this example as your guide!

The full instructions include parts needed, configuration, soldering instructions and assembly. I’ve also added a Simple XBee Receiver instruction set so you can test your switch, and an explanation of how to modify the more robust Actuator Example from my book to control A/C mains powered appliances with the wireless Light Switch. Anything can be a switch. Need shoes that turn on your toaster? Or a cat door that plays “The Cat Came Back” each time Fluffy returns from an outing? Get started with the Light Switch XBee example!

Aug 25, 2011 09:24 AM

August 24, 2011

Nien Lam 2011

Commons – iPhone App

Want to improve your city or neighborhood?

With Commons, you can compete to do good, while helping to make sure that problems in your city get fixed. Report a problem or recommend an improvement in your neighborhood that you think deserves attention and resources, and show your city some lovin’. Vote on the best reports and improvements, and see what’s most popular. Go on short fix-it missions around town to earn bonus points, and unlock City awards to level up through the game.

In Commons, share the things that you care most about fixing and improving in your neighborhood, and explore the city with your friends.

Features:
- Over 70 different City Tasks and Missions (currently designed for gameplay in Lower Manhattan)
- Report problems in neighborhoods
- Recommend city improvements
- Show appreciation for your city’s best features
- Vote up people’s best ideas
- Unlock new ranks and earn experience points
- Earn your city’s most coveted award titles

***Commons is FREE to play.

We hope you enjoy Commons. Vote It Up!

www.commonsthegame.com

Commons is brought to you by:

Suzanne Kirkpatrick, Creator & Lead Game Designer
Nien Lam, Developer & Game Designer
Jamie Lin, Interaction & Game Designer

 
 
 

Winner of the Real-World Games for Change Challenge

Introducing ‘Commons’, winner of the Real-World Games for Change Challenge 2011, and the team behind the game.

NY Daily News – June 20, 2011
Commons, an iPhone app and urban game in which players suggest ways to improve the city’s outdoor spaces, held its first public event in lower Manhattan Sunday. To play, users opened the app and selected the first open game “district” on the map – City Hall. Players then chose from a list of “City Tasks” to complete. For each report, players take a photo of the problem or area where they’d like to see an improvement, then add a description. Their entry is uploaded to the Commons system with GPS coordinates pulled from the phone. Reports are voted on by other users, so that the best and most interesting suggestions rise to the top and earn more points.

Interview with NY Daily News:

Aug 24, 2011 09:36 AM

August 22, 2011

Rob Faludi Adjunct

Horsie Race

This Horsie Race project, developed for my recent workshops in Colorado, creates a carnival midway-style horse race using a wireless audio input with Arduino and XBee that transmits each player’s yells and cheers to a base station radio. This base station is connected to a computer where the noise advances their horse on the screen using the Processing graphical programming environment.

Yell, cheer, chant or plaintively moan into the microphone on your sensor board to make the horses move. Your yelling will also be picked up somewhat by your neighbor’s microphone. Since you’ve gone wireless, physical strategy is key. The direction you face and whether you hide in the coat closet will influence the speed of your horse.

First horse across the finish line wins the race! Shower the lucky jockey with champagne, then race again! Full instructions, diagrams, schematics and code for this example are freely available on my site.

Aug 22, 2011 08:30 AM

Aaron Uhrmacher 2011

TEDx Talk: Digital Death, Online Afterlife

Video of my TEDx Talk titled, "Digital Death, Online Afterlife"

Aug 22, 2011 07:13 AM

August 19, 2011

Michael Edgcumbe 2011

Reading in a Tab Separated Text File Created in Excel into C++

There are a number of steps that must be taken to move an Excel file into a C++ container (which can then be moved on into a Mongo database).

- Repair offending Unicode
- Remove offending \n characters (which are contained within the cells, rather than between rows – most easily accomplished by copying and pasting as values and then using the CLEAN function to restate the table)
- Save from Excel as a tab-separated, Unicode file
- Open the tab-separated file in TextWrangler, remove quotation marks, and re-save with Unix line endings and Unicode encoding

Searching for the “best” method to read a file into a C++ struct brings up many different, subtly varying opinions. I have written my own routines in the past, but I haven’t been satisfied that they were speed- and memory-efficient or adaptable to many circumstances. I spent the day coming up with two objects that can be easily adapted to new files, drawing on examples that offered clean, sophisticated code. Below is a class which loads a file into memory once and then parses it into a container.

//
//  main.cpp
//  MongoImport
//
//  Created by Michael Edgcumbe on 8/19/11.
//

#include <iostream>
#include <fstream>
#include <sstream>
#include <vector>
#include <string>
#include <cstdlib>  //for atoi()

#define FILELOCATION "../Master_Grants_081911.txt"
#define INITROWS 10515
#define CONTAINERCOLUMNS 12

using namespace std;

struct Container{
    int     m_id;
    string  m_company;
    string  m_period;
    int     m_year;
    string  m_requestor;
    string  m_requestorclean;
    string  m_corequestor;
    string  m_dbarequestor;
    string  m_dbacorequestor;
    string  m_description;
    int     m_amount;
    string  m_type;
   
    Container ( vector<string> elements ){
        if( elements.size() >= CONTAINERCOLUMNS - 1 ){
            m_id = atoi( elements[0].c_str() );
            m_company = elements[1];
            m_period = elements[2];
            m_year = atoi( elements[3].c_str() );
            m_requestor = elements[4];
            m_requestorclean = elements[5];
            m_corequestor = elements[6];
            m_dbarequestor = elements[7];
            m_dbacorequestor = elements[8];
            m_description = elements[9];
            m_amount = atoi( elements[10].c_str() );
        }
        if( elements.size() == CONTAINERCOLUMNS ){
            m_type = elements[11];
        }
    }
};

class Data{
public:
    vector<Container> * m_container;
   
    Data(){
        m_container = new vector<Container>();
        //reserve the expected number of rows up front so push_back does not trigger a reallocation
        m_container->reserve( INITROWS );
    }
   
    /*Adapted from http://stackoverflow.com/questions/132358/how-to-read-file-content-into-istringstream
      The purpose of the parse function is to read a file into memory without duplicating the buffer
      and parse the rows into a dynamically sized vector of the struct defined by the columns in the raw data.
     
      The vector is initialized to a known number of rows, if possible, in order to avoid a reallocation when
      the size exceeds the currently allocated volume ( a process which creates a new vector of size between 1.5
      and 2 times the original and copies all containers from the old location to the new ).
     
      Each row from the table is tokenized and passed into the container without an additional copy.
     */

   
    void parse( ifstream* _infile ){
        //Find the length of the file
        _infile->seekg(0, ios::end);
        streampos length = _infile->tellg();
        _infile->seekg(0, ios::beg);
       
        //Create a vector for the buffer
        vector<char> buffer( length );
        _infile->read(&buffer[0], length);
       
        //Create a stringstream, read the string buffer, and set the vector as the source
        stringstream localStream ( stringstream::in );
        localStream.rdbuf()->pubsetbuf( &buffer[0], length );
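        //Note: pubsetbuf() on a stringbuf is implementation-defined; if the stream reads back empty on a given
        //standard library, a portable (copying) fallback is to build a std::string from the buffer for an istringstream.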
       
        //Feed each row into a string and tokenize the string (assumes a tab separated file).
        string line;
        while( getline( localStream, line ) ){
            stringstream linestream( line );
            string token;
            vector<string> elements;

            while( getline( linestream, token, '\t') ){
                elements.push_back( token );
            }
            Container container( elements );
            m_container->push_back( container );
        }
    }
};


int main (int argc, const char * argv[])
{

    Data data;

    ifstream *infile = new ifstream( FILELOCATION );
   
    if( infile->is_open() ){
        data.parse( infile );
        infile->close();
    }
    else {
        cout << "Unable to open file." << endl;
    }

   
    cout << data.m_container->size() << endl;
   
//    for( int i = 0; i < data.m_container->size(); i++ ){
//        cout << (*data.m_container)[i].m_id << endl;
//    }
   
    cout << "COMPLETE" << endl;
   
    return 0;
}

Aug 19, 2011 02:38 PM

August 18, 2011

Michael Edgcumbe 2011

Build Schematic

I created a build schematic to summarize the containers and languages I’ll need to build to finish the project. There’s some detail to be filled in later in terms of functions, libraries, and wireframes. The one I’m least sure about is the Desktop client. I’m leaning towards having only an iPad client at first because of the nice things that you can do with multitouch. The Kinect stuff is always just a glimmer.

Aug 18, 2011 10:35 AM

Outline of Steps to Transition from Excel to MongoDB

Thinking about what I need to do to move our preliminary data from Excel to MongoDB. This list will likely grow over time.

  1. Install and compile an unmodified MongoDB client in C++
  2. Output the Excel file to a tab separated .txt
  3. Create an ifstream of the file and put it into an object
  4. Pipe the object array into the mongod server (see the sketch after this list)
  5. Set up automatic backup of the database
  6. Verify output to JSON as a query and as a file
  7. Implement basic query and pivot tables through the Terminal
  8. Integrate into a basic Objective-C front end
  9. Find a hosting provider that can be handed over to IMAP
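
As a rough sketch of steps 3 and 4, here is some placeholder code of my own (not the final converter), assuming the MongoDB C++ driver header from the Xcode setup below and a simplified Grant struct standing in for the parsed Excel rows; the “imap.grants” namespace is only an example:

//Hypothetical sketch of steps 3 and 4: push parsed rows into mongod with the C++ driver
#include <string>
#include <vector>
#include "client/dbclient.h"

using namespace mongo;

//Stand-in for one parsed row from the tab-separated Excel export
struct Grant {
    int         m_id;
    std::string m_company;
    int         m_year;
    int         m_amount;
};

void pipeToMongo( const std::vector<Grant> &rows ) {
    DBClientConnection c;
    c.connect( "localhost" );                       //mongod must already be running

    for ( size_t i = 0; i < rows.size(); i++ ) {
        BSONObjBuilder b;                           //one BSON document per spreadsheet row
        b.append( "grant_id", rows[i].m_id );
        b.append( "company", rows[i].m_company );
        b.append( "year", rows[i].m_year );
        b.append( "amount", rows[i].m_amount );
        c.insert( "imap.grants", b.obj() );         //mongod adds the _id field automatically
    }
}

Each spreadsheet row becomes its own document, which should keep the later queries and JSON output straightforward.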

Aug 18, 2011 08:42 AM

August 17, 2011

Sarah Dahnke 2011

Site Reconstruction

This site is undergoing a redesign.

In the meantime, feel free to browse my projects on Vimeo, including my new choreography reel:

Aug 17, 2011 06:09 PM

Michael Edgcumbe 2011

Building MongoDB in C++ in Xcode

After getting Boost installed properly, I’m ready to install the MongoDB source and driver. The source code is located on GitHub. Ironically, the SCons build file on GitHub looks for Boost in /opt/local rather than /usr/local. I’m sure that you can point scons at a different header directory, but it’s also looking for a few other utilities, so I am using MacPorts to install a second copy of Boost as well as PCRE++ and SpiderMonkey. The instructions for building Mongo are on the website. Essentially, ‘sudo port install mongodb’ will get you all the dependencies and put them in the right place.

Once the MacPorts dependencies are installed, you can cd into the cloned GitHub directory and type ‘sudo scons --full install’ to build the libraries.

MacPorts seems to change some of the settings for the system, so I pointed the linker at the /opt/local/ versions of the libraries.

The Xcode project should be set up like so:

  1. New Xcode Project
  2. Copy the following code into Main.cpp:

    #include <iostream>
    #include "client/dbclient.h"

    using namespace mongo;

    void run() {
      DBClientConnection c;
      c.connect("localhost");
    }

    int main() {
      try {
        run();
        cout << "connected ok" << endl;
      } catch( DBException &e ) {
        cout << "caught " << e.what() << endl;
      }
      return 0;
    }

  3. Add to the Header Search Paths:
    • /path/to/mongo source directory
    • /usr/local/include
  4. Add to the Library Search Paths:
    • /usr/local/lib
  5. Add to the Other Linker Flags:
    • -lmongoclient
    • -lboost_program_options
    • -lboost_filesystem
    • -lboost_thread
    • -lboost_system
  6. Compile

Aug 17, 2011 11:50 AM

August 16, 2011

Michael Edgcumbe 2011

HPC Notes Day 2 Cont.

GPUs – GPUs are single-instruction, multiple-data (SIMD). If the data accesses are sparse, how do you vectorize onto SIMD?

CUDA is designed for manycore, wide SIMD parallelism, scalability. Provides thread abstraction to deal with SIMD. Synchronization and data sharing among small groups of threads.

Kernels are composed of many threads. All threads execute the same sequential program. Threads are grouped into thread blocks. Threads inside thread blocks can synchronize, communicate. Not guaranteed to synchronize with other thread blocks. All have unique IDs to query.

CUDA thread – own program counter, variables, processor state, no implication of scheduling.
CUDA thread block – data-parallel task. All blocks start at the same entry point, but each can execute any code it wants. Thread blocks must be independent tasks.

Thread parallelism – independent thread of execution
Data parallelism – access threads in a block, across blocks in a kernel
Task parallelism – different blocks are independent, independent kernels in separate streams

Threads within a block synchronize with barriers called __syncthreads();
Thread blocks can coordinate to share a task. Any possible interleaving of blocks should be valid. Can run in any order, concurrently or sequentially. Allows scalability.

CUDA kernels are always void; they don’t return anything. Instead they fill a buffer in device memory through a pointer.

Each thread has its own private memory. Compiler takes care of local memory space for a stack. Per block shared memory is mapped on chip.

cudaMemcpy() moves data from the host memory to the device memory and back. cudaMalloc() allocates memory on the device, analogous to memory allocated with malloc() on the host.

Variables shared across the block:
__shared__ int *begin, *end;

Scratchpad memory: per-block shared memory is used for communication between threads.

__global__ – function callable from host
__device__ – function callable on device
__device__ – variable in device memory
__shared__ – in per-block shared memory
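
To tie these notes together, here is a minimal CUDA C++ sketch of my own (not from the lecture): a kernel that squares an array, using __global__, a per-block __shared__ staging tile with __syncthreads(), and cudaMalloc()/cudaMemcpy(); the 256-thread block size is an arbitrary choice.

#include <cstdio>
#include <cuda_runtime.h>

//Each thread squares one element, staging its value in per-block shared memory first
__global__ void squareKernel( const float *in, float *out, int n ) {
    __shared__ float tile[256];                        //scratchpad shared by the thread block
    int i = blockIdx.x * blockDim.x + threadIdx.x;     //unique ID built from block and thread IDs
    if ( i < n ) tile[threadIdx.x] = in[i];            //dense, aligned (coalesced) read
    __syncthreads();                                   //barrier: the whole block has loaded its tile
    if ( i < n ) out[i] = tile[threadIdx.x] * tile[threadIdx.x];
}

int main() {
    const int n = 1024;
    float host[1024], result[1024];
    for ( int i = 0; i < n; i++ ) host[i] = (float)i;

    float *dIn, *dOut;
    cudaMalloc( (void**)&dIn, n * sizeof(float) );     //allocate device memory (like malloc, but on the GPU)
    cudaMalloc( (void**)&dOut, n * sizeof(float) );
    cudaMemcpy( dIn, host, n * sizeof(float), cudaMemcpyHostToDevice );

    squareKernel<<< n / 256, 256 >>>( dIn, dOut, n );  //grid of 4 thread blocks, 256 threads each

    cudaMemcpy( result, dOut, n * sizeof(float), cudaMemcpyDeviceToHost );
    printf( "result[10] = %f\n", result[10] );         //expect 100.0

    cudaFree( dIn );
    cudaFree( dOut );
    return 0;
}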

cudaGLMapBufferObject() – interoperability with OpenGL

OPENCL:
has a rich task parallelism model
different terminology, similar model to CUDA
AMD chips / OpenCL, have to parallelize data structures. CUDA/Nvidia do not.

Tips for Efficient CUDA Code
Need abundant, fine grained parallelism to make the NVidia GPU efficient.
Maximize on-chip work.
Minimize execution divergence.
Minimize memory divergence.
First priority: make things work.
Second priority: get performance.
You initiate a grid of thread blocks; the hardware has a load balancer that takes thread blocks from the launch, schedules them, and processes until it is done. Peak efficiency requires multiple thread blocks scheduled on every SM. How many thread blocks can be simultaneously scheduled? The mapping depends on the resources each thread block requires (the register file is possibly the limiting factor). You want about 20 registers per thread on Fermi in order to get full occupancy.

Can’t have synchronization inside divergent code. Logic branches must terminate so that all threads are doing the same thing before you can synchronize.

Performance depends on optimizing memory utilization. How do we tune memory? Memory is SIMD also. Sparse access wastes bandwidth. Unaligned access wastes bandwidth (depending on the cache line width). Optimal case for ‘coalescing’: dense accesses that are aligned. That’s what you want your data structure to look like.

Structure of Arrays is often higher performance than Array of Structs.

CUDA Thrust – a C++ library inspired by the STL; includes reduce, sort, reduce_by_key, scan. Includes an OpenMP backend for multicore programming.

Aug 16, 2011 04:29 PM

HPC Notes Day 2 cont

MPI (Message Passing Interface, mpi.h) is a library which runs on supercomputers (alongside CUDA, for example).

MPI_Comm_rank( MPI_COMM_WORLD, &rank ) – which one am I
MPI_Comm_size(…) – number of processes in the system

A group and a context form a communicator; the default communicator is MPI_COMM_WORLD. Data is packed up into a packet to send across the system and then taken apart on the other end; it loses its type information as it goes over the network.

Blocking send
MPI_SEND( start, count, datatype, dest, tag, comm )
When the function returns from the send operation, we know that we can reuse that buffer without corrupting the message (it has been copied into another place). We don’t know it has necessarily been received.

Collective operations:
get all the processors together that own a piece of an array and perform an operation (reduce to a sum, min, max, prod, etc.).

Sources of deadlocks:
Sending a large message from process 0 to process 1 can deadlock if there is insufficient storage at the destination. Zero-copy synchronous models (copying directly to the destination with no buffer) can cause this problem. You have to set the buffer size to make sure the application doesn’t deadlock, while balancing it against the memory constraints of the system.

Non blocking operations:
MPI_Request
MPI_Status
MPI_Isend
MPI_Irecv
MPI_Wait

These calls are for figuring out whether the messages have gone out so that the buffers can be reused. MPI_Wait can be really slow if things are not load balanced.
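
A minimal sketch of my own (not from the lecture) of that non-blocking pattern: post MPI_Irecv and MPI_Isend, overlap other work, then MPI_Wait before touching the buffers again.

#include <cstdio>
#include <mpi.h>

int main( int argc, char **argv ) {
    MPI_Init( &argc, &argv );

    int rank, size;
    MPI_Comm_rank( MPI_COMM_WORLD, &rank );            //which one am I
    MPI_Comm_size( MPI_COMM_WORLD, &size );            //number of processes in the system

    if ( size >= 2 && rank < 2 ) {                     //pair up ranks 0 and 1
        int sendbuf = rank, recvbuf = -1;
        int partner = 1 - rank;
        MPI_Request reqs[2];
        MPI_Status  stats[2];

        //post the receive and the send without blocking
        MPI_Irecv( &recvbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[0] );
        MPI_Isend( &sendbuf, 1, MPI_INT, partner, 0, MPI_COMM_WORLD, &reqs[1] );

        //...overlap useful computation here...

        MPI_Wait( &reqs[0], &stats[0] );               //recvbuf is valid only after this
        MPI_Wait( &reqs[1], &stats[1] );               //sendbuf is safe to reuse only after this
        printf( "rank %d received %d\n", rank, recvbuf );
    }

    MPI_Barrier( MPI_COMM_WORLD );                     //global synchronization point
    MPI_Finalize();
    return 0;
}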

can wait for a global synchronization with MPI_Barrier

race conditions can arise from not checking the buffer.

MPI doesn’t match GPUs, multicore processors, etc. very well. It’s made for a mid-90s architecture. OpenMP for shared memory. CUDA for GPUs (or maybe OpenCL). MPI uses too many copies on many cores and demands a huge memory footprint.

PGAS languages give message passing and shared memory in the same implementation. (http://upc.lbl.gov). Unified Parallel C (UPC). Parallel extension of ANSI C.

Single program, multiple data style – fixed number of threads, runs to the end of execution. Any serial C program can be a parallel program (each processor runs its own copy of the program). Any thread can write to a shared variable – expensive. UPC has locks, but they are slow. Shared arrays.

One-sided vs. two-sided communication:
With a one-sided message you send the address instead of the message ID, writing directly into memory without getting any information back from the host.

Aug 16, 2011 02:16 PM

HPC Notes Day 2

Discrete Event Systems
- synchronous, evaluate all transitions at every time step
- asynchronous, transitions evaluated only if inputs change based on an event from another part of the system

Keep two copies of the grid, old and new, and ping-pong between them.
Domain decomposition – split into a grid, compute locally, barrier(), exchange info with neighbors, repeat until done.
Pick shapes that minimize the surface-to-volume ratio (reduces the amount of communication with neighbors).
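
A serial sketch of the old/new ping-pong (my own example, using a made-up averaging stencil); in the parallel version each process would own a block of the grid and exchange its boundary rows with neighbors before each step:

    #include <vector>
    #include <utility>

    int main()
    {
        const int N = 64;
        std::vector<double> oldg(N * N, 0.0), newg(N * N, 0.0);
        oldg[(N / 2) * N + N / 2] = 1.0;                   // hypothetical initial condition

        for (int step = 0; step < 100; ++step) {
            // Read only from 'oldg', write only to 'newg'.
            for (int i = 1; i < N - 1; ++i)
                for (int j = 1; j < N - 1; ++j)
                    newg[i * N + j] = 0.25 * (oldg[(i - 1) * N + j] + oldg[(i + 1) * N + j] +
                                              oldg[i * N + (j - 1)] + oldg[i * N + (j + 1)]);
            std::swap(oldg, newg);                         // ping-pong: no copy, just swap
        }
        return 0;
    }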

Graph partitioning – load balance + minimize communication, solved by libraries

Synchronous simulations may waste time. Asynchronous – evaluate when an event arrives from another processor.

Conservative updating – allowed to simulate up to the point where everyone ends up at the same time t, but you can get deadlocks: when everyone gets to time t, no one can take a step forward to t+1. If everyone is stuck for a while, detect the deadlock and move forward. But that detection is a serial bottleneck, and you lose parallelism.

Speculative updating – keep simulating, timestamp, back up if you’re wrong.

Particle Systems
External forces – currents in the environment, easy to parallelize
Nearby force – interactions between local particles
Far-field force – everyone depends on everyone else

- update velocity with acceleration, update position using velocity at each time step, then advance the time step (see the sketch after this list)
- corresponds to the “map reduce” pattern
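
A minimal sketch of that update step for the external-force case (my own example, with a made-up constant gravity field); each particle is independent, which is what makes it the “map” part of the pattern:

    #include <vector>

    struct Particle { double x, y, vx, vy; };

    // Each particle's update depends only on its own state plus an external
    // force, so every element can be processed independently.
    void step(std::vector<Particle>& ps, double ax, double ay, double dt)
    {
        for (Particle& p : ps) {
            p.vx += ax * dt;       // update velocity with acceleration
            p.vy += ay * dt;
            p.x  += p.vx * dt;     // update position using the new velocity
            p.y  += p.vy * dt;
        }
    }

    int main()
    {
        std::vector<Particle> ps(1000, Particle{0, 0, 0, 0});
        for (int t = 0; t < 100; ++t)
            step(ps, 0.0, -9.8, 0.01);   // hypothetical gravity, time step of 0.01
        return 0;
    }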

Evenly distribute particles on processors

Parallelism in Nearby Forces:
Use domain decomposition, assign parts of the grid to each processor. Communicate particles in boundary zones to the neighbors. Can use a quad tree or an oct tree to divide the space.

Far field forces:
Package up everything a process owns and send it to the neighbor – O(n^2). Approximate by aggregating into centers of gravity; the “fast multipole method” brings this down to O(n log n).
Particle mesh – move particles to the nearest mesh point.
Tree codes – approximate clusters by single metaparticles in a kd-tree.
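
For reference, the naive O(n^2) all-pairs version that these approximations are trying to avoid might look like this (my own sketch; the softening term is added just to avoid dividing by zero):

    #include <vector>
    #include <cmath>

    struct Body { double x, y, mass, fx, fy; };

    // Every body interacts with every other body: the O(n^2) cost that tree
    // codes and the fast multipole method reduce by lumping distant bodies
    // into single metaparticles / centers of gravity.
    void accumulate_forces(std::vector<Body>& bodies)
    {
        const double G = 6.674e-11;
        for (Body& b : bodies) { b.fx = 0.0; b.fy = 0.0; }
        for (size_t i = 0; i < bodies.size(); ++i) {
            for (size_t j = i + 1; j < bodies.size(); ++j) {
                double dx = bodies[j].x - bodies[i].x;
                double dy = bodies[j].y - bodies[i].y;
                double r2 = dx * dx + dy * dy + 1e-9;   // softening term
                double f  = G * bodies[i].mass * bodies[j].mass / r2;
                double r  = std::sqrt(r2);
                bodies[i].fx += f * dx / r;  bodies[i].fy += f * dy / r;
                bodies[j].fx -= f * dx / r;  bodies[j].fy -= f * dy / r;
            }
        }
    }

    int main()
    {
        std::vector<Body> bodies(3, Body{0, 0, 1.0, 0, 0});   // made-up bodies
        bodies[1].x = 1.0; bodies[2].y = 2.0;
        accumulate_forces(bodies);
        return 0;
    }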

Lumped Systems – Ordinary Differential Equations
System is lumped because we’re computing at the nodes/endpoints, not along the wires.
Modeled with differential and algebraic equations.
“Does the earthquake frequency match the resonance frequency of the building?” Sparse matrix, eigenvalues

Continuous systems:
Elliptic – heat, discretize time and space – gives a sparse system of linear equations
Parabolic
Hyperbolic

Recurring themes:
Find parallelism and locality
Load balancing
Linear algebra
Fast particle methods

Aug 16, 2011 11:48 AM

Rob Faludi Adjunct

Collaborative Strategy at Digi International

I just started a terrific new job! In July, Digi International invited me to join their R&D team as Collaborative Strategy Leader. My mandate is to forge stronger connections with the maker community, discover outstanding new work, help Digi contribute to those projects and support innovation in general.

Some of the cool parts of my new role will include:

  • building a thriving developer community
  • locating interesting new projects that can benefit from Digi’s support
  • helping makers get their devices connected to the cloud
  • driving the creation of new examples and kits
  • helping developers publish, present, workshop and teach
  • speaking at summits, panel discussions or other gatherings
  • …and pushing the boundaries with some innovative work of my own

By creating this position Digi hopes to uncover new markets and design new products that engage inventors. We’ll be looking to shine a light on your extraordinary new creative projects. There’s incredible work coming out of design labs, hacker spaces, basements and garages these days. If you’re doing something excellent with XBee radios, or connected devices of any make (we’re brand agnostic), let us know what you’re doing and how we can help you!

Aug 16, 2011 09:37 AM

August 15, 2011

Michael Edgcumbe 2011

Notes on High Performance Computing

Long version of the parallel computing course.
www.cs.berkeley.edu/~demmel/cs267_Spr11

Python for science – AY250
Optimization Models in Engineering – EE127
Software Engineering for Scientific Computing – CS194/294

Parallelism is now forced down to the lowest level, all the time: cars with multiprocessors, manycore chips.

Amdahl’s law – you can only speed up by the fraction of the program that is being sped up. If 50% of the code is parallelized, the overall speedup factor is 2 at most.
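
A quick way to convince yourself of the numbers (my own sketch, not from the lecture):

    #include <iostream>

    // Amdahl's law: if a fraction p of the runtime can be sped up by a factor s,
    // the overall speedup is 1 / ((1 - p) + p / s).
    double amdahl(double p, double s) { return 1.0 / ((1.0 - p) + p / s); }

    int main()
    {
        // Speed up 50% of the code by an enormous factor: the limit is 2x.
        std::cout << amdahl(0.5, 1e9) << std::endl;   // ~2.0
        // Speed up that same 50% by only 4x: overall speedup is 1.6x.
        std::cout << amdahl(0.5, 4.0) << std::endl;   // 1.6
        return 0;
    }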

Pipelining parallelism – get all stages going at the same time (doing the laundry: dryer while washer is going on second load).

Hazards prevent the next instruction from executing during its designated clock cycle.
- structural, hardware isn’t ready
- data, data isn’t ready
- control, need to make a decision, but don’t know what to do

Processor architects get rid of hazards by stalling the pipeline, or by out-of-order execution for instructions with no dependencies (while making sure the results still appear sequentially). Branch prediction and value prediction make guesses that lead the processor down a path and can be thrown out later. You can always solve a hazard by waiting, but if your guesses aren’t correct you’re wasting time and power. Ridiculous power consumption has leveled off CPU clock speeds, and the accuracy of guesses decreases with the number of instructions in the pipeline. Moore’s law now holds because of multicore processors, so parallelism must be taken up in software. (Uniprocessor parallelism through pipelining has run out of steam because of heat/power.)

Very long instruction word (VLIW) – the parallelism is figured out in the compiler. Compilers guarantee no data hazards, but when they can’t fill every slot, chip sections sit idle.

Vector code: uses vector registers to push more data through per instruction.

Single instruction, multiple data (SIMD): Image filters. GPUs have SIMD architecture.

PThreads – Threads for Portable Operating System Interface for UNIX (POSIX)
Thread level parallelism (TLP) – divide app into multiple threads.

Caches are used to provide spatial and temporal locality for speed. You can’t ignore the cache when thinking about an algorithm, because cache misses bring the CPU to a halt. Techniques for improving cache performance include blocking and tiling. Or you can deal with the complexity through experiments – running variants of the algorithm with different block sizes to figure out the best size for the architecture.
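
A sketch of blocking/tiling applied to matrix multiply (my own example; the block size of 64 is just a guess that would be tuned by experiment, as the notes suggest):

    #include <vector>
    #include <algorithm>

    // Tiled (blocked) matrix multiply: work on B x B sub-blocks so that the
    // working set of the inner loops fits in cache.
    void matmul_tiled(const std::vector<double>& A, const std::vector<double>& Bm,
                      std::vector<double>& C, int n, int B)
    {
        for (int ii = 0; ii < n; ii += B)
            for (int jj = 0; jj < n; jj += B)
                for (int kk = 0; kk < n; kk += B)
                    for (int i = ii; i < std::min(ii + B, n); ++i)
                        for (int k = kk; k < std::min(kk + B, n); ++k)
                            for (int j = jj; j < std::min(jj + B, n); ++j)
                                C[i * n + j] += A[i * n + k] * Bm[k * n + j];
    }

    int main()
    {
        int n = 256;
        std::vector<double> A(n * n, 1.0), Bm(n * n, 1.0), C(n * n, 0.0);
        matmul_tiled(A, Bm, C, n, 64);   // 64 is a hypothetical block size to tune
        return 0;
    }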

Explicitly parallel computer architecture depends on the communication between the processors. Software architect must manage the communication (rather than letting it be done under the hood in hardware). What data is private vs. shared? How is it accessed and communicated? How is it synchronized?

Shared Memory Programming Model: threads communicate implicitly by writing and reading shared variables. This leads to the need for synchronization, because you don’t know when something is finished. A data race occurs when two processors or two threads access the same variable and at least one does a write – you don’t get the right computation. Make the instruction an atomic operation by putting a lock around it. Locks can lead to deadlocks.
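
A small C++11 sketch of a data race being fixed with a lock (my own example, not from the lecture):

    #include <thread>
    #include <mutex>
    #include <iostream>

    int counter = 0;          // shared variable
    std::mutex counter_lock;  // lock that makes the increment effectively atomic

    void worker()
    {
        for (int i = 0; i < 100000; ++i) {
            // Without the lock, 'counter++' is a read-modify-write that two
            // threads can interleave: a data race with lost updates.
            std::lock_guard<std::mutex> guard(counter_lock);
            ++counter;
        }
    }

    int main()
    {
        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
        std::cout << counter << std::endl;   // always 200000 with the lock
        return 0;
    }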

Barrier synchronization is waiting for everyone to finish before moving on. Fine grained locking is sometimes faster than barriers.

Message passing – no shared pot of memory, every execution unit has its own memory. Only way to share is to send messages. De facto standard for parallel processing. The arrival of a message is a synchronization event. Overhead of messaging varies based on the support in the architecture.

Aug 15, 2011 02:11 PM

August 14, 2011

Michael Edgcumbe 2011

Compiling, Installing, Linking to Boost Binaries

Boost is mostly a header-only library; however, MongoDB requires some of the libraries that need to be pre-compiled as binaries. I had a difficult time getting XCode to see the binaries that I had installed, but in the end it was a simple fix.

Steps to installing, building, and linking Boost 1_47_0 in XCode 4.1 on Mac OS X:

  1. Download and decompress Boost
  2. Open the terminal, type ‘open /usr/local’
  3. type ‘cd /usr/local/boost*’
  4. type ‘./bootstrap.sh’
  5. type ‘sudo ./bjam --macosx-version=10.7 --macosx-version-min=10.6 --architecture=x86 --link=static --address-model=32_64 stage’
  6. type ‘sudo ./bjam install’
  7. Create a new XCode C++ Command Line Tool project
  8. Change the main.cpp file to include one of the C++ binaries. I used the example from the Boost website:
  9. #include <boost/regex.hpp>
    #include <iostream>
    #include <string>

    int main()
    {
        std::string line;
        boost::regex pat( "^Subject: (Re: |Aw: )*(.*)" );

        while (std::cin)
        {
            std::getline(std::cin, line);
            boost::smatch matches;
            if (boost::regex_match(line, matches, pat))
                std::cout << matches[2] << std::endl;
        }
    }
  10. Open the Build Settings for the target in the XCode project
  11. Use the search bar in the build settings to find ‘search’
  12. Add the following line to Header Search Paths: ‘/usr/local/include’
  13. Add the following line to Library Search Paths: ‘/usr/local/lib’
  14. Use the search bar in the build settings to find ‘flags’
  15. Add the following line to Other Linker Flags: ‘-lboost_regex’

A brief explanation of things I learned along the way: Boost uses either b2 or bjam to build the binaries, and bjam takes several flags which determine the output. The minimum macosx version on 10.7 is 10.6. The --architecture flag should be set to x86 (ppc compiling is no longer an option, ‘combined’ will not compile correctly, and you can compile 32-bit and 64-bit and put them in separate directories if you want to). The --address-model flag can be set to either 32, 64, or 32_64 depending on which you would like (32-bit vs. 64-bit vs. both). The --threading=multi flag outputs libraries that enable multithreading. The --with flag selectively builds libraries (I didn’t use it).

XCode needs to be pointed to the exact library file, rather than just the library directory. This can be accomplished by adding a linker flag in the format ‘-lboost_xxx’ where xxx is the name of the library.

Once XCode sees the library, the ‘Undefined symbols for architecture x86_64: “boost …”’ errors should disappear.

Aug 14, 2011 11:33 AM

August 13, 2011

Aaron Uhrmacher 2011

Play Pop N Scream at Geekdown TONIGHT in NYC

Visit Geekdown 2011 on August 13, 2011 to play Pop N Scream

Aug 13, 2011 09:17 AM

August 12, 2011

Michael Edgcumbe 2011

Installing the MongoDB C++ Source and Boost on a Mac

I have had a hard time getting Boost to install the right libraries to compile the example code from the MongoDB site.

I think the include line from the example code
#include "client/dbclient.h"
uses one of the prebuilt libraries from Boost.

My code has not been compiling against these Boost libraries. I get an “Undefined symbols for architecture x86_64:” error, with some specific variables listed in the error output. I think I need Boost to compile for x86_64 because MongoDB is highly recommended to be compiled as 64-bit (although I could be wrong about this; I’m not enough of a computer scientist to know – one of the troubles of being self taught).

I have tried both installing boost from macports as well as inserting my own installation into /usr/local/ and installing it with bjam and bootstrap.sh. I have muddled around with setting the architecture= flag in bjam to x86 with the address-model= flag set to either 64 or 32_64.

I uninstalled the macports boost because it seemed to be confusing the issue for me. I followed the boost instructions to compile the libraries into a directory in my Documents folder, but I don’t like this option because it’s specific to my machine. I have also included the -lboost_system flags etc in the linker instructions, but have since removed them to simplify.

I think first, I need to forget about MongoDB and get Boost to compile correctly, so I’ve deleted the folders and I’m starting from scratch, again. More on Monday.

Aug 12, 2011 02:50 PM

August 09, 2011

August 08, 2011

Rob Faludi Adjunct

Liking The Guests – at Sketching in Hardware

 

Just launched a new talk called “Liking the Guests” at Sketching in Hardware 2011 at Philadelphia’s Franklin Institute. “Liking the Guests” tells the story of how holding your users in high esteem creates an unexpected and fundamental principle for good design. The talk has pirates, princesses, apes, and taxidermy, but essentially it’s about why I think we all make things. These ideas are worth sharing so I’ve uploaded the talk publicly so that anyone can take a listen.

Sketching in Hardware is an invited gathering attended by a small group of interaction designers, open software developers, device theorists, educators and hackers. We all create, study and use prototype devices for exploring new ideas. Everyone gives a presentation, there’s a mite of drinking and a ton of sharing skills. Well worth the trip. Check out my bit…

Aug 8, 2011 08:47 AM

August 05, 2011

Matt Ganucheau 2011

Intro to openFrameworks: From Beginning to Launching a Mobile Application

This is an introduction to using openFrameworks, a cross-platform C++ library for creative coding. In this class, you will be taken from the first steps of installing openFrameworks all the way to launching an interactive mobile application. Because openFrameworks is a powerful open source tool designed to simplify building creative applications, it has rapidly become one of the tools of choice for a new era of creative designers. This class is for artists, designers and hackers alike; beginners should not be afraid to join.

- An introduction to Xcode
- 2D & 3D Graphics
- Generative sound and sample playback.
- Generative graphics and video playback.
- Basic texture mapping.
- Utilizing sensors
- Basic Game design.
- Launching your app on a mobile device.

http://openframeworks.cc

Here are some great examples of projects made with OF :
http://www.creativeapplications.net/category/openframeworks/

*Each student will be required to pay the $100 developer registration fee in order to place apps on their iPhone
*Xcode is required (http://developer.apple.com/xcode/)

Dates: Tuesday & Thursday, September 6th, 8th, 13th & 15th
Times: 6pm – 9pm
Course Length: 12 hours
Cost: $20/instruction hour, $240 total, $216 for GAFFTA Members
Location: GAFFTA, 998 Market Street, San Francisco, CA 94131

Aug 5, 2011 01:01 AM

July 29, 2011

Aaron Uhrmacher 2011

Pop N Scream

Pop N Scream is a physical, carnival-like game where two players compete using their mobile phones to keep their respective balloons from getting popped.

Jul 29, 2011 11:37 AM

July 28, 2011

Aaron Uhrmacher 2011

Telestory Thesis Presentation

Video documentation of my ITP thesis presentation of Telestory in May, 2011.

Jul 28, 2011 12:16 PM

July 25, 2011

Aaron Uhrmacher 2011

Gotham Guide Included in MOMA’s “Talk To Me” Exhibit

I am honored to have Gotham Guide included in the new MoMA exhibition, "Talk to Me"

Jul 25, 2011 11:16 AM

July 24, 2011

Ezra Velazquez 2012

TokShow


CLICK HERE TO VIEW APP – Artist Portal
CLICK HERE TO VIEW APP – Viewer Portal
CLICK HERE TO VIEW APP – Admin Portal

Name: TokShow
Platform: Web
Author: Ezra Velazquez
Released: July 2011
Technology: AJAX, CSS, CSS3, HTML, JavaScript, jQuery, JSON, MySQL, OpenTok API, PHP
Version: 1.0
Source Code: Available on GitHub

About TokShow: Showcase App developed for tech startup TokBox under the position of technical product development intern.

Talk show format web application with host view, user view, and moderator view. Up to five people are queued by the moderator, who can decide whether to allow a user to broadcast his/her signal.

Jul 24, 2011 02:20 PM

LoopTube


CLICK HERE TO VIEW APP – Artist Portal***PRIVATE VIEWING/ACCESS ON 7/26***
CLICK HERE TO VIEW APP – Viewer Portal

Name: LoopTube
Platform: Web
Author: Ezra Velazquez
Released: July 2011
Technology: AJAX, CSS, CSS3, HTML, JavaScript, jQuery, JSON, MySQL, Node.js, OpenTok API, PHP, YouTube API
Version: 0.5
Source Code: Available on GitHub

About LoopTube: Showcase App developed for tech startup TokBox under the position of technical product development intern.

Social e-gathering where selected YouTube videos are played in sync for all viewers until the artist turns the camera on.

Jul 24, 2011 02:11 PM

ConsulTok


CLICK HERE TO VIEW APP

Name: ConsulTok
Platform: Web
Author: Ezra Velazquez
Released: August 2011
Technology: AJAX, CSS, CSS3, HTML, JavaScript, JSON, jQuery, jQueryUI, Node.js, OpenTok API, ShopSense API, Socket.io
Version: 1.0
Source Code: Available on GitHub

About ConsulTok: Showcase App developed for tech startup TokBox under the position of technical product development intern.

E-commerce social shopping web application featuring the OpenTok & ShopSense APIs

Jul 24, 2011 01:58 PM

Lollapaloobox


CLICK HERE TO VIEW APP

Name: Lollapaloobox
Platform: Web
Author: Ezra Velazquez
Released: June 2011
Technology: AJAX, CSS, CSS3, HackLolla API, HTML, JavaScript, jQuery, JSON, OpenTok API
Version: 1.0
Video: On YouTube!

About Lollapaloobox: Showcase App developed for tech startup TokBox under the position of technical product development intern.

Submission to www.hacklolla.com using the Lollapalooza and OpenTok APIs.

Blog Posts: Blog post written specifically for Lollapaloobox.


Jul 23, 2011 08:14 PM

ArchiveTok


CLICK HERE TO VIEW APP

Name: ArchiveTok
Platform: Web
Author: Ezra Velazquez
Released: July 2011
Technology: AJAX, CSS, CSS3, HTML, JavaScript, jQuery, OpenTok Archiving API
Version: 0.5
Source Code: Available on GitHub

About ArchiveTok: Showcase App developed for tech startup TokBox under the position of technical product development intern.

Analogy to explain & demonstrate the OpenTok Archiving API

Jul 23, 2011 07:41 PM

July 17, 2011

Greg Borenstein 2011

Back to Work No Matter What: 10 Things I’ve Learned While Writing a Technical Book for O’Reilly

I’m rapidly approaching the midway point in writing my book. Writing a book is hard. I love to write and am excited about the topic. Some days I wake excited and can barely wait to get to work. I reach my target word count without feeling the effort. But other days it’s a battle to even get started and every paragraph requires a conscious act of will to not stop and check twitter or go for a walk outside. And either way when the day is done the next one still starts from zero with 1500 words to write and none written.

Somewhere in the last month I hit a stride that has given me the beginnings of a sense of confidence that I will be able to finish on time and with a text that I am proud of. I’m currently preparing for the digital Early Release of the book, which should happen by the end of the month, a big landmark that I find both exciting and terrifying. I thought I’d mark the occasion by writing down a little bit of what I’ve learned about the process of writing.

I make no claim that these ten tips will apply to anyone else, but identifying them and trying to stick by them has helped me. And obviously my tips here are somewhat tied in with writing the kind of technical book that I’m working on and would be much less relevant for a novel or other more creative project.

  1. Write every day. It gets easier and it makes the spreadsheet happy. (I’ve been using a spreadsheet to track my progress and project my completion date based on work done so far.)
  2. Every day starts as pulling teeth and then goes downhill after 500 words or so. Each 500 words is easier than the last.
  3. Outlining is easier than writing; if you’re stuck, outline what comes next.
  4. Writing code is easier than outlining. If you don’t know the structure, write the code.
  5. Making illustrations is easier than writing code. If you don’t know what code to write, make illustrations or screen caps from existing code.
  6. Don’t start from a dead stop. Read, edit, and refine the previous few paragraphs to get a running start.
  7. If you’re writing sucky sentences, keep going; you can fix them later. Also, they’ll get better as you warm up.
  8. When in doubt, make sentences shorter. They will be easier to write and read.
  9. Reading good writers makes me write better. This includes writers in radically different genres from my own (DFW) and similar ones (Shiffman).
  10. Give yourself regular positive feedback. I count words as I go to see how much I’ve accomplished.

A note of thanks: throughout this process I’ve found the Back to Work podcast with Merlin Mann and Dan Benjamin to be… I want to say “inspiring”, but that’s exactly the wrong word. What I’ve found useful about the show is how it knocks down the process of working towards your goals from the pedestal of inspiration to the ground level of actually working every day, going from having dreams of writing a book to being a guy who types in a text file five hours a day no matter what. I especially recommend Episode 21: Assistant to the Regional Monkey and the recent Episode 23: Failure is ALWAYS an Option. The first of those does a great job talking about how every day you have to start from scratch, forgiving yourself when you miss a day and not getting too full of yourself when you have a solid week of productivity. The second one speaks eloquently of the dangers of taking on a big project (like writing a book) as a “side project”. Dan and Merlin talked about the danger of not fully committing to a project like this. For my part I found these two topics to be closely related. I’ve found that a big part of being fully committed to the project is to forgive myself for failures – days I don’t write at all, days I don’t write as much as I want, sections of the book I don’t write as well as I know I could. The commitment has to be a commitment to keep going despite these failures along the way.

And I’m sure I’ll have plenty more of those failures in the second half of writing this book. But I will write it regardless.

Jul 17, 2011 03:05 PM

July 11, 2011

Minette Mangahas 2011

When We Were Kids



I'm delighted to run back to my brushes and pencils after working with technology so much in the last two years. 
My good friend James Garcia has put together a wonderful show dubbed "When We Were Kids", featuring Christopher De Leon, James and myself. It opens at 1AM Gallery in San Francisco this Friday, July 15, and runs 'til August 14.




Hulahoop Girl (2011)
ink and pencil, 9 in x 12 in




Sampaguita Kid (2011)
ink and pencil, 9 in x 12 in

Jul 11, 2011 10:49 AM

July 08, 2011

Matt Ganucheau 2011

The Art of The Mixtape


The Art of The Mixtape
Thursday, July 21. 7pm-10pm.
Langton Labs North: 9 Langton St, SF CA.

There’s no doubting it: a good mixtape can lead to baby making. A mix with a good arc of pace and narrative can have profound effects on people. Mixtapes signify much more than music recommendations; they also represent the amount of time focused on the recipient. Through the evolution of media formats and software recommendation services, this art has become watered down, but it does still exist.

In this class, Matt Ganucheau will use Ableton Live to lead you through all aspects of creating a quality modern mixtape, from inspiration and narrative to technical execution and even presentation. No prior Ableton skills are required.

$10 suggested donation.

RSVP on Facebook, if that’s how you roll: https://www.facebook.com/event.php?eid=202459919805093
Langton Labs events: http://events.langtonlabs.org/

langton labs is:
agil alex amy anselm ben dave galit hanna jane kate katy lev lou matt
matty megan michael mick mike peretz sam todd trevor tristan

Jul 8, 2011 12:25 PM

July 07, 2011

Rob Faludi Adjunct

Botanicalls in MoMA Exhibition

We’re tremendously excited that the Botanicalls project will be featured in MoMA’s upcoming “Talk to Me” Exhibition that opens July 24th in New York on the third floor of the Museum of Modern Art, continuing through November 7th, 2011. Here’s the official description:

Talk to Me explores the communication between people and things. The exhibition focuses on objects that involve a direct interaction, such as interfaces, information systems, visualization design, and communication devices, and on projects that establish an emotional, sensual, or intellectual connection with their users. Examples range from a few iconic products of the late 1960s to several projects currently in development—including computer and machine interfaces, websites, video games, devices and tools, furniture and physical products, and extending to installations and whole environments.

Botanicalls is a system that allows thirsty plants to reach out for human help. It has certainly come a long way from its noble beginnings at ITP, via many intriguing incarnations and kits, including the current one. There are lots of other terrific projects in this exhibit, so if you’re thinking of being in New York City this year, plan a visit and check us out at MoMA!

Jul 7, 2011 09:35 AM

July 05, 2011

Greg Borenstein 2011

Physical GIF Launches on Kickstarter

I’m proud to announce the launch of Physical GIF on Kickstarter. Physical GIF is a collaboration with Scott Wayne Indiana to turn animated GIFs into table top toys. We use a laser cutter and a strobe light to produce a kind of zoetrope from each animated GIF so you can watch it on your coffee table. Here’s our Kickstarter video which explains the whole process and shows you what they look like in action:

For our Kickstarter campaign we have four main pledge levels. At $50 you get a Physical GIF along with everything you need to play it at home: the strobe, the plastic GIF disc and frames, and the hardware. You can choose from three designs that Scott created. BMX Biker:

Elephant-Rabbit Costume Party:

and New York Fourth of July:

For a $100 pledge, we’ll send you a kit with all three of these Physical GIFs.

We’ve also recruited four amazing animated GIF artists to design special limited edition Physical GIFs: Ryder Ripps, Nullsleep, Sara Ludy, and Sterling Crispin. More info about these artists is on our project page. At $250, you can reserve one of the Physical GIFs from any of these artists. We’re going to be working with them to explore materials and techniques for turning their designs into Physical GIFs. We’re hoping that they explore some of the limitations and possibilities of this new medium. Each of the Physical GIFs they produce will come in a limited numbered edition with documentation from the artist.

And at the top pledge level, we’ll work with you directly to manufacture your own custom Physical GIF from your design. We’ve only made five of this reward available because we want to be able to spend as much time as it takes working with you to turn your animated GIF ideas into physical reality.

We’re incredibly excited about this project and can’t wait to see how people react to it. Head over to Kickstarter right now to give us some help: Physical GIF on Kickstarter. Thanks!

Jul 5, 2011 08:45 AM

July 03, 2011

Aaron Uhrmacher 2011

Bernie Mac: Christmas Special

Here's a spec script I wrote based on the characters from FOX's "The Bernie Mac Show".

Jul 3, 2011 02:13 PM