Category Archives: Interaction

Interaction (no real progress but lots of ideas)

I haven’t made much concrete progress since last week. I spent the early part of this week getting a camera feed into Three.js, which I succeeded in doing. I also started experimenting with a Processing sketch for tracking the center of change, but I couldn’t figure out how to convert it to p5 before the rest of the week caught up with me and I had to put it down for the time being.
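
For reference, here’s a rough sketch of how that frame-differencing “center of change” idea might look in p5.js (not my working code; the canvas size and threshold are arbitrary):

```javascript
// Compare each webcam frame to the previous one and mark the
// average position of the pixels that changed the most.
let video;
let prevFrame = [];

function setup() {
  createCanvas(320, 240);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
}

function draw() {
  image(video, 0, 0);
  video.loadPixels();

  let sumX = 0, sumY = 0, count = 0;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = (y * width + x) * 4;
      const now = video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2];
      const before = prevFrame[i] || 0;
      if (abs(now - before) > 100) { // arbitrary change threshold
        sumX += x;
        sumY += y;
        count++;
      }
      prevFrame[i] = now;
    }
  }

  // Draw the centroid of all changed pixels: the "center of change."
  if (count > 0) {
    noStroke();
    fill(255, 0, 0);
    ellipse(sumX / count, sumY / count, 16, 16);
  }
}
```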

However, I am confident and excited moving forward. I’ve started pulling together the pieces for my final project, which is also the final project for every other class I’m in; after this weekend, it will be the only thing I have to work on for the rest of the semester.

From this gem of a web 1.0 site, I’ve scraped .mus files — an esoteric musical file format similar to MIDI — for over 100 songs from the Sacred Harp Songbook. I’ll use these files to build a Markov chain that generates this style of music algorithmically in real time. Because the music is inherently “spatialized,” with sound coming from all corners of the room, it is perfectly suited to a 3D environment like Three.js.
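
The generation step itself is conceptually simple. Here’s a toy sketch of the kind of Markov chain I have in mind, with made-up note names standing in for whatever my .mus parser actually produces:

```javascript
// Build a first-order Markov chain of note-to-note transitions,
// then walk it to generate a new melody.
function buildChain(songs) {
  const chain = {};
  for (const notes of songs) {
    for (let i = 0; i < notes.length - 1; i++) {
      const current = notes[i];
      (chain[current] = chain[current] || []).push(notes[i + 1]);
    }
  }
  return chain;
}

function generate(chain, start, length) {
  const melody = [start];
  let current = start;
  while (melody.length < length) {
    const options = chain[current];
    if (!options || options.length === 0) break; // dead end
    current = options[Math.floor(Math.random() * options.length)];
    melody.push(current);
  }
  return melody;
}

// Toy data: one "song" as a list of note names.
const chain = buildChain([['C4', 'E4', 'G4', 'E4', 'C4', 'G4']]);
console.log(generate(chain, 'C4', 8));
```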

I have more specific ideas regarding execution that I’ll talk about in class — I don’t think I have time to get them all out in the next 25 minutes — but I’d like to put some time into explaining why I want to do this. Beyond a desire to create something transcendently beautiful (which I believe this will be), it serves as a useful proof of concept for developing a more universal system.

The field of Cantometrics seeks to provide a qualitative analysis of all the musics of the world, studying what makes the music of each culture unique, and by extension, what unites them all. It’s a mid-twentieth century concept that has blossomed recently with the aid of computerized analysis, but to my knowledge has not yet sought to actually generate anything with all those data points. This could be a first step driving this field of study toward creation. If such a system could be developed for music, it stands to reason that it could be developed for any form of expression.

In pursuit of this goal, I’ve been diving into several JavaScript libraries that deal with generative text and speech synthesis. I also used Python for the first time to pull the .mus files (I didn’t even really know what Python did at the beginning of this week, but wow, Python is amazing). I have a lot to learn, and this is going to be a lot of hard work. I can’t wait.
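
On the speech-synthesis side, even before reaching for a dedicated library, the browser’s built-in Web Speech API covers the basics. A toy example (not from my project yet):

```javascript
// Speak a line of generated text with the browser's built-in speech synthesis.
const utterance = new SpeechSynthesisUtterance('Hark, I sing the tenor line.');
utterance.rate = 0.9;  // a little slower than the default
window.speechSynthesis.speak(utterance);
```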

Story Interface + Illustrations

Big updates. This project kind of took over my life this past week since it’s bringing together a lot of elements I’ve been working on this semester. So I’m really running with this, and I think I’ll work on it long after ITP.

I started in a place where I knew Quill paintings/animations could be used as story elements. All semester I’ve been trying to figure out how to “paint” in AR, and it arguably took me too long to think of illustrating subjects that interact with the environment. That Pandora’s box has now swung open, and I’ve been living in VR for a few days cranking out illustrations/animations.

To quickly recap, I’m painting illustrative animations in Quill, recording them (with a green screen), converting that video to transparent looping video (like a gif), and placing those videos in AR.

I was stumped for a while on what to draw, and at first thought I would choose a specific story to build assets around. I started with some basic background scenes (a mountain, a forest, some waves) and realized each of these environments could be built out – instead of forcing a specific narrative, I would give users the option to add assets centered around the same theme to make their own story. The (tentative) themes are forest/woods, space, prehistoric, ocean, and city.

The real breakthrough was structuring the project so that users could “drag and drop” whichever illustration they wanted in AR. Before this I would spend hours in Unity trying to line up the illustrations in the right spot, build the app, test, and go back and make changes. Now those changes are live! (Although the user controls still need a lot of work.)

I feel like I’m learning exponentially what works and what doesn’t as I build familiarity with animating in this program (it’s extremely tedious). A few things I’ve noticed work well: subjects that “melt” into the ground, or illustrations with a similar optical illusion so that they appear integrated into the AR environment (for example, the narwhal). So I’m still defining my illustrative style and building up each scene.

(On a side note – my workflow has changed since this documentation, so the line drawings will be more vivid, thicker, and more noticeable. I had been using a particle shader that removed all black from a video, which took some of the line weight away as well – now I’m recapturing these illustrations against a green screen and keying it out beforehand in After Effects.)

This upcoming week I’m building up the assets for each scene and improving the user interface. Ideally, I’d like users to be able to scale and rotate the illustrations (and eventually have some capture/sharing option).

(videos)

Embodied Cognition and Interaction

Updated April 25, 2018: Lisa Jamhoury got the code to work! Check it out here.

This week I attempted to apply Lisa Jamhoury’s code for grabbing objects within a 3D environment using a Kinect to the sketch I had made, working from her osc_control sketch here. This is currently where I’m at, and even with help I couldn’t get it to work. I used the same overall architecture I built for the Story Elements assignment:

  • I used the Google Street View Image API to get the panorama I’m using as a background.
  • I suspect this is causing my problems: I’m not wrapping an image or video onto a SphereGeometry, nor am I creating a 3D scene in the traditional Three.js sense (see the sketch below for what that would look like).
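
For reference, here’s a minimal sketch of that more traditional approach (with a placeholder image path, not code from my actual sketch): wrap an equirectangular panorama onto the inside of a sphere and put the camera at its center.

```javascript
// Wrap a panorama onto the inside of a sphere so the camera,
// sitting at the origin, sees it as a 360-degree background.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// 'panorama.jpg' is a stand-in for whatever image the Street View request returns.
const texture = new THREE.TextureLoader().load('panorama.jpg');
const sphere = new THREE.Mesh(
  new THREE.SphereGeometry(500, 60, 40),
  // BackSide so the texture is visible from inside the sphere.
  new THREE.MeshBasicMaterial({ map: texture, side: THREE.BackSide })
);
scene.add(sphere);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();
```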

Stray thoughts from the reading, “The Character’s Body and the Viewer: Cinematic Empathy and Embodied Simulation in the Film Experience”:

  • Empathy has only existed as a concept since the early 20th century???
  • “Proprioceptive” — I did not realize there was a word for the sense people have of the position of their own bodies in space. This is a feeling dancers know very well.
  • You will find a picture of me in the Wikipedia entry for “kinesthetic strivings.”
  • Could facial mapping software be used to track the unconscious facial expressions viewers reproduce when watching a character’s facial expressions, and how could that be applied?
  • The concept of motor empathy reminds me of a character from the show Heroes: Monica Dawson’s power was that she could replicate the movements of anyone she observed. Here she is beating up some bad guys with some moves she got off a TV show: