All posts by Mary Notari

Final: The Future of Acting

Throughout this semester, a gnawing anxiety has been present in the back of my mind. I have ignored it in my blog posts and class discussions because I wasn’t sure if it was relevant to the goals of this class and I wanted to keep an open mind.

Something changed during the discussion in our penultimate class last week.

In hearing Ayal’s pessimism regarding humans’ relationship to immersive, escapist entertainment and Stevie’s ideas about how distributed networks could expand audiences for independent writers and creators, I was inspired to pull at the thread that had been bothering me, namely:

Will we still need actors in the future?

As someone who, in my heart of hearts, believes in the value of the very human craft of acting, this question terrifies me.

Increasingly, commercial actors are pushed into tighter and tighter constraints, both in what they are expected to look like and in the opportunities available to them based on “type,” women and minorities especially. It’s very easy to fall into an apocalyptic mindset: as entertainment becomes more and more data-driven, these definitions of who a person is and can be could become narrower and narrower. We are already experiencing a societal failure of imagination as the wealth gap grows, hate becomes more and more visible, and trafficking in nostalgia becomes the default form of entertainment. The hegemony abides.

If we can generate (in the computational sense) characters, even actors/celebrities, tailored to the precise whims of their target audiences, where does that leave the humans? Will acting become an antiquated skill? Something that people used to have to do before we could create the perfect performer? Will theater meet the fate of vaudeville or the nickelodeon and fade into memory as a curiosity of the past?

I of course hope not. I think most people recognize the value in forms of expression that do not presuppose “realism” and take joy and satisfaction from experiencing them. We still have circus acts and puppetry and cartoons. As Flicker, the second reading of this course, illustrated, our minds are looking to make those connections, and things that are inherently non-naturalistic (like cuts in movies) can actually enhance the emotional impact or aesthetic experience of a work. Just as movies can communicate through cuts, live theater can communicate things through shared experience that aren’t possible in any other medium. What will future immersive mediums do that nothing else can, and will human performers have a place in them?

I tried to play with that idea for my final project, starting with an assumption that the human body will always have a place in cinema.

Concept: The Future of Acting, or The Future of Self-Insert Fanfiction

Users can insert themselves into their favorite movie scenes––as their favorite characters or as entirely new characters. They can mimic the performances of the original actors precisely or they can add their own spin. Each performance is stored in a networked database so that users may see the choices made by those before them, conjuring the past recorded forms of previous actors with their own movements.

In this prototyped iteration of the concept above, the Key function of the Kinectron app is used to superimpose a translucent image of the user on top of the selected movie scene. A recording is made of both the scene with the keyed-in actor and their Kinectron skeleton. Kinectron automatically saves the positions of all the joints on all the recorded bodies in each frame of the scene as a JSON file. These JSONs can then be played back and compared, frame by frame, to that of the current user. Performances that are similar are automatically grouped, and those similarities can be found along multiple parameters.

For instance, if a user lifts their arms above their head in a certain frame, the keyed images and metadata of all past users who lifted their arms above their heads at that same frame will appear within that scene in the user’s search. Ideally, cases could be defined within the code that allow for similarities of intention to be found in addition to similarities in position. For example, we can broadly define what an aggressive physical stance would look like as opposed to a fearful or shy one. Given our current stage of technological evolution, this is where AI would come in handy. I can imagine training a neural network to recognize an actor’s motivation based on the positions of joints and to then find others within the network who made similar choices.

Methodology

Finding Videos with YouTube Data API –– code || live example

Video Playback with Kinectron Key –– code || live example (requires Kinectron)

Record Key and Skeleton ––

  • create buttons using the native startRecord() and stopRecord() functions to capture the key
  • to record the skeleton in JSON format as a body object, use saveJSON() from the p5.js library (see the sketch below)
    • you will then have to manually extract the position values for each joint nested within the body object
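
Here is a minimal sketch of the skeleton-recording part, assuming the Kinectron client library is delivering tracked-body frames through a callback. The server address, button labels, and joint property names are illustrative assumptions, not the exact code.

```javascript
// Minimal sketch: record Kinectron skeleton frames and save them as JSON with p5.js.
// Assumes the Kinectron client library and p5.js are loaded and the Kinectron app
// is running at the placeholder address below.

let kinectron;
let recording = false;
let recordedFrames = []; // one entry per frame: an array of joint positions

function setup() {
  createCanvas(640, 480);

  kinectron = new Kinectron('127.0.0.1'); // placeholder address of the Kinectron server
  kinectron.makeConnection();
  kinectron.startTrackedBodies(bodyTracked); // callback fires for each tracked body frame

  createButton('Start Skeleton Record').mousePressed(() => {
    recordedFrames = [];
    recording = true;
  });
  createButton('Stop Skeleton Record').mousePressed(() => {
    recording = false;
    saveJSON(recordedFrames, 'skeleton.json'); // p5.js helper: downloads the array as a JSON file
  });
}

// Extract just the joint positions from each incoming body frame
function bodyTracked(body) {
  if (!recording) return;
  const frame = body.joints.map(j => ({
    x: j.cameraX, // camera-space coordinates as Kinectron reports them (assumed property names)
    y: j.cameraY,
    z: j.cameraZ,
  }));
  recordedFrames.push(frame);
}
```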

Compare Skeletons ––

  • use a for() loop to read through each entry in the body.joints array you created above
  • for each joint in each frame, calculate the difference between that joint and the corresponding joints from other recordings (see the sketch below)
    • recordings whose joints are closer together are the most alike
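
And a sketch of the comparison itself, assuming each recording is the array of frames saved above; compareRecordings() and its scoring are illustrative names, not the final code.

```javascript
// Compare two skeleton recordings (arrays of frames, each an array of {x, y, z} joints).
// A smaller score means the two performances are more alike.
function compareRecordings(recA, recB) {
  const frameCount = Math.min(recA.length, recB.length);
  if (frameCount === 0) return Infinity;

  let total = 0;
  for (let f = 0; f < frameCount; f++) {
    const jointsA = recA[f];
    const jointsB = recB[f];
    for (let j = 0; j < Math.min(jointsA.length, jointsB.length); j++) {
      // p5.js dist() accepts 3D coordinates
      total += dist(
        jointsA[j].x, jointsA[j].y, jointsA[j].z,
        jointsB[j].x, jointsB[j].y, jointsB[j].z
      );
    }
  }

  // Average distance per joint per frame, so recordings of different lengths stay comparable
  return total / (frameCount * recA[0].length);
}
```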


Wherefore?

People will be able to compare their own performances as well as the performances of others. I see this not as a tool for competition and deciding who did it “best” but rather as a tool for knowledge building.

At this point, I freely admit that I am––kind of blindly––following an impulse. I can’t exactly draw a line directly between my idea and a future that values human contributions to cinematic performance. I know that the impulse exists in many people to play pretend well beyond childhood and there will always be fan communities. Giving more people access to high quality methods of recording themselves performing and democratizing the way those performances are shared can only be a good thing… right?

HyperCinema: Are we closer than we think?

This course has challenged us with imagining the future of storytelling: if we imagine a future where assets like settings, characters, plots, and other story elements are created on a massive scale by individual users and shared within a network, how do creators find each other and collaborate? How will creative skillsets that currently require specialized and expensive knowledge be further democratized and made widely accessible?

There are already case studies readily available on the internet showing how this can be done. I will look at the following fandom communities, examine how assets are currently created, remixed, and shared, and try to imagine how this will evolve over time:

  • Tumblr TV Fandoms
  • Gamer Mods

Tumblr

Scrolling through the fandom tag

Scroll through any tag even containing the word #fan on Tumblr and behold acres and acres of sometimes brilliant, but mostly cringe-worthy, fan-created and remixed content.

On this platform, creators use tags to self-filter their contributions and their consumption. When users follow their favorite profiles as well as their favorite tags, those posts will automatically show up chronologically on their dash. Additionally, users can search tags for like content chronologically or by popularity.

Users can post their own content in the form of text, images, and videos or they can reblog others’ posts. Notes, the metric by which the popularity of a post is measured, are recorded on each post in the form of Likes, Reblogs, and Replies.

Replies are what make the remixed fan content you will find on Tumblr so interesting to the discussion of the future of storytelling. Rarely will posts remain in isolation. Tumblr’s endless-scrolling nature allows for things like gif-sets of popular culture: sequential screen grabs that, when combined, form new meanings––forms of storytelling unique to the platform. With Replies, users end up generating long chains of additions to original posts that become a form of collaborative storytelling (and joke telling), internet sleuthing, historical research/context, and even critical discourse (of varying degrees of quality).

People with similar interests find each other through a shared visual and cultural vocabulary. As a result of its meme-ifying architecture and its millions of users’ creative labor, Tumblr is uniquely situated to help us imagine what a future of fully networked narrative content could look like. The next step would be thinking beyond the memes and the monkeyshines towards how this shared vocabulary could evolve into actual shared 3D assets.

A Note on Bias and Toxicity

One aspect of internet fan-culture that cannot be overstated is its tendency towards toxicity and exclusivity: a boys-club, if you will. Even if the majority of internet fans don’t create their virtual communities with the intention of excluding those deemed other, we still bring our biases into our virtual escapes. Tumblr has a number of examples of horrendous mob-like bullying and straight-up hate, just as it does beautiful expressions of collective creativity––much like the internet as a whole. It is imperative to stay vigilant and critical, both within and without.

I am optimistic, however, about the power of experiential technology. To reiterate part of my post from the first week of class, the democratization of cinema will affect mainstream culture at large:

The stories we tell and how we tell them affect our understanding of the world and therefore affect our understanding of the possible.

When the wealth of experience there is in the world is more broadly represented and when more audiences have access to it, we may no longer have to endure the failure of imagination our society now faces.

The big question now is how.

Video Game Mods

There are multiple examples throughout the history of video games of enthusiasts modifying the code, textures, visuals, and mechanics of their favorite games to riff on existing assets or create entirely new properties. Today, mods can be purchased on Steam, the social network slash gaming platform, and many more are shared as open source projects. These platforms are usually focused on competition (they are games, after all), but the problem of accessibility remains: there is no centralized resource through which to access the elements of a mod, nor is there any way to make or combine them without some amount of specialized knowledge.

While my personal favorite use of this medium is for goof-em-ups (see examples below), fully independent narrative and artful games have been created using mods. As the technology for creating virtual assets progresses, these techniques will become more and more accessible. I would like to explore the possibility of using these existing assets to create interactive stories within the sandboxes mod-enthusiasts have already built.

I want to bring special attention to Garry’s Mod (link below). Garry’s Mod (or GMod) is a physics sandbox made with a modified version of Valve’s Source engine. Valve is the gaming company that brought us seminal classics like the Half-Life series and Team Fortress. Fans buy the mod on Steam and use it to make their own worlds and characters, sharing them in-game using multiplayer and broadcasting them on streaming services like Twitch and YouTube. In the mid-to-late aughts, there was even a renaissance of comics created using GMod. Concerned was a personal favorite.

Garry’s Mod

While the majority of things created with these mods and the communities around them are juvenile, ridiculous, and honestly hard to parse, I don’t think that is necessarily inherent in the form. I’m not above taking inspiration from these long-running, real world examples of networked, collaborative creation.

Embodied Cognition and Interaction

Updated April 25, 2018: Lisa Jamhoury got the code to work! Check it out here

This week I attempted to apply Lisa Jamhoury’s code for grabbing objects within a 3D environment using a Kinect to the sketch I had already made. I used the code from her osc_control sketch here. This is where I’m currently at, and even with help I couldn’t get it to work. I used the same overall architecture I built for the Story Elements assignment:

  • I used Google Street View Image API to get the panorama I am using as a background.
  • I suspect that this is causing my problems: I am not wrapping an image or video onto a Sphere Geometry, nor am I creating a 3D scene in the traditional Three.js sense (a sketch of that conventional setup follows below).
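
For reference, this is roughly what that "traditional Three.js sense" looks like: an equirectangular image wrapped onto an inverted sphere. This assumes three.js is already loaded on the page, and 'panorama.jpg' is a placeholder.

```javascript
// Conventional Three.js panorama setup: wrap an equirectangular image onto the
// inside of a sphere. Assumes three.js is loaded; 'panorama.jpg' is a placeholder path.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1100);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // turn the sphere inside out so the texture faces the camera

const material = new THREE.MeshBasicMaterial({
  map: new THREE.TextureLoader().load('panorama.jpg'),
});
scene.add(new THREE.Mesh(geometry, material));

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();
```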

Stray thoughts from the reading, “The Character’s Body and the Viewer: Cinematic Empathy and Embodied Simulation in the Film Experience”:

  • Empathy has only existed as a concept since the early 20th century???
  • “Proprioceptive” –– I did not realize there was a word for the sense people have of the position of their own bodies in space. This is a feeling dancers know very well.
  • You will find a picture of me in the Wikipedia entry for “kinesthetic strivings.”
  • Could facial mapping software be used to track the unconscious facial expressions viewers reproduce when watching a character’s facial expressions, and how could that be applied?
  • The concept of motor empathy reminds me of a character from the show Heroes: Monica Dawson’s power was that she could replicate the movements of anyone she observed. Here she is beating up some bad guys with moves she got off a TV show:

Week 3 – Story Elements

These assignments are kind of brilliant challenges in bringing theory to practice… and they’re driving me nuts. There are two different brain-skills (it’s a technical term) that I am using each week: 1) developing concepts of what story is and can be, and how experiential technologies can aid in the production and dissemination of narrative experiences; and 2) finding visual representations of those concepts within a very simplified context (plus a third, which is making code work the way I want it to). It reminds me of the Rules by Sister Corita Kent, a document I return to frequently. Specifically Rule 8:

“Don’t try to create and analyse at the same time. They are different processes.”

As I’m approaching these assignments, I can’t help but feel a bit anxious that I’m not completing them correctly… How can I imagine the future of storytelling and create something now? How can I explain why I’m making the choices I make when I’m not even sure of the reasons myself? How can I express what I’m trying to express without understanding how to make the code work?

My usual process is to follow a thread that interests me and experiment; the meaning or connective tissue of whatever I’m building reveals itself to me over time. I am hoping that that process, the texts we are reading, and the concepts Dan is presenting us with are all informing each other.

Now that the freak-out is over with…

Attempt #1: Adding elements to Google Street View Panorama

Assignment: Capture people and props for stories. Create foreground objects to include in last week’s setting. Use a capture tool for those elements, find a 3D model in a repository, or make your own 3D model.

Concept:

Mythical figures — and historical figures of mythic proportions — inhabit a shifting land where earth, sky, and water meet.

What I did:

  • Used Three.js to create planes and add images of the characters to those planes as materials
  • Positioned those planes within the panorama I created last week (see the sketch below)
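
A rough sketch of that approach, assuming three.js is loaded and a scene already exists; the helper name, plane dimensions, and image paths are placeholders.

```javascript
// Illustrative helper: put a character image on a plane and place it in the scene.
// Assumes an existing THREE.Scene; image paths and sizes are placeholders.
function addCharacterPlane(scene, imagePath, x, y, z) {
  const material = new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load(imagePath),
    transparent: true,      // keep the PNG's alpha so only the figure shows
    side: THREE.DoubleSide, // stays visible even if the plane rotates away from the camera
  });
  const plane = new THREE.Mesh(new THREE.PlaneGeometry(20, 40), material);
  plane.position.set(x, y, z); // give each character its own spot so they don't stack
  scene.add(plane);
  return plane;
}

// e.g. two figures at different positions and depths within the panorama
addCharacterPlane(scene, 'sirena.png', -30, 0, -60);
addCharacterPlane(scene, 'figure2.png', 25, 0, -80);
```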

What happened:

  • At first, both elements showed up, but were stacked on top of each other.
  • Then I tried to reposition them using element.position.set(), but then la sirena disappeared.
  • At times, as I was toying with the numbers and the code, the elements would rotate so that the planes weren’t directly facing me.
  • Here is the code as it currently stands.

A doofus

Questions about the code:

  • Why don’t the other DOM elements show up in my sketch when using street view?
  • Why when I click and drag the sketch do the foreground objects jump to the left?
  • Why can’t I get my sirena object to show up?
  • To what extent is the Google Street View API messing up the way this code is supposed to work?

Attempt #2: Adding myself to scenes

When I couldn’t troubleshoot my way out of the above quagmire, I tried a different approach. I used Dano’s code for green-screening oneself into a panorama scene using Kinectron.

I was able to get myself into the pre-set scene but I think I misunderstood what the “Start Record” and “Stop Record” buttons in the sketch meant because this is all I got:

I assume that this is meant to save whatever was happening in the scene at that point in time so it could be remixed later.

I then tried to add myself to a video of some of my brethren. I uncommented the video part of the code, added the video file to the project folder, and added its location to the sketch, but I couldn’t get it to work. Here’s the code as it currently stands.
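
This isn’t Dano’s code, but here is the minimal p5.js pattern I would use to check that the video file itself loads and plays, separate from the green-screen logic ('brethren.mp4' is a placeholder path).

```javascript
// Sanity check: load and draw a local video with p5.js ('brethren.mp4' is a placeholder path).
let vid;

function setup() {
  createCanvas(640, 480);
  vid = createVideo('brethren.mp4', () => vid.loop()); // start looping once the file has loaded
  vid.hide(); // hide the default DOM element; draw() renders the frames onto the canvas instead
}

function draw() {
  image(vid, 0, 0, width, height);
}
```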

Questions about the code:

  • How do I get the video to show up?

Playing with Kinectron

Here is some documentation of me at least successfully running the Kinectron sketch examples from the notes:

Mary in the Kinectron

powers!

Week 2 – Panorama

For my 3D setting, I wanted to create a scene in my favorite place on earth: the Outer Banks of North Carolina.

It turns out that these picturesque vistas are quite popular with the 360 camera crowd and Google Street View had tons of high quality captures already uploaded to it. I really like the idea of playing with existing public assets so I set out to use the Google Street View API to create my scene.

Using Dan’s code, I signed up for an API key, found the image I wanted to use and then proceeded to fail completely at getting it to work in time for class. It was particularly frustrating because I was sure I understood what I was supposed to do to get it to work:

  1. Copy and paste the new API key into the global variable “apiKey”
  2. Make sure the database and collection global variables are the same as in your Mlab.
  3. Copy and paste the panoID from the Street View location you want to use into the “pano” field of “panoramaOptions” in setUpStreetView().
  4. Adjust heading and pitch if desired (see the sketch below).
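
For reference, here is roughly where those values plug in, assuming the standard Google Maps JavaScript API; the pano ID and element ID below are placeholders.

```javascript
// Where steps 3 and 4 land in the code (Google Maps JavaScript API).
// The API key from step 1 goes in the <script> tag that loads the API;
// 'PANO_ID_GOES_HERE' and 'street-view' are placeholders.
function setUpStreetView() {
  const panoramaOptions = {
    pano: 'PANO_ID_GOES_HERE',      // step 3: the panoID of the chosen location
    pov: { heading: 90, pitch: 0 }, // step 4: adjust heading and pitch if desired
    zoom: 1,
  };
  return new google.maps.StreetViewPanorama(
    document.getElementById('street-view'),
    panoramaOptions
  );
}
```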

I found this view that I loved from the sound side of Jockey’s Ridge. I also found this StackOverflow thread explaining how to find the panoID. I then proceeded to try every combination of the hexadecimal string in the URL and still couldn’t figure out how to make it work.

Instead I created a 3D scene using a panorama I took in Corolla in January, the last time I went down there:

corolla, jan 2018

Although I tried to fudge it into being a 360 scene using a sphere geometry, it never quite worked. Instead, I made peace with it and used a cylinder geometry, futzing with the dimensions until it looked as un-warped as possible. The code is on GitHub here.
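
For the curious, the cylinder version boils down to something like this, assuming three.js is loaded and a scene with the camera at its center already exists; the file name and dimensions are placeholders.

```javascript
// Cylinder fallback: view the panorama from inside an open-ended tube.
// Assumes an existing THREE.Scene with the camera at the origin; 'corolla.jpg' is a placeholder.
const cylinder = new THREE.Mesh(
  new THREE.CylinderGeometry(200, 200, 120, 64, 1, true), // openEnded: true leaves off the caps
  new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load('corolla.jpg'),
    side: THREE.BackSide, // render the inside faces, since the camera sits within the cylinder
  })
);
scene.add(cylinder);
```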


Week 1

The Future of Storytelling

The beginning of this course is timing out perfectly with the one-credit course I’m also taking this weekend, “Blockchain Fiction.” There seems to be a similar thread between the two related to democratizing systems––whether they are financial, social, or cultural. Specifically I’ve been thinking about specialized knowledge:

In medieval times, illiteracy was the norm and scribes and clergy were the only ones who had the means and time to achieve the training necessary to read and write.

Are we in a new era where those who can code are the new scribes? Nowadays, computation and filmmaking are expensive pastimes; only the privileged few with the time and resources––or the sheer tenacity––are able to be trained in these specialized, technical skill sets.

The ubiquity of code and the image (moving or otherwise) in our everyday lives––whether as a logistical tool or entertainment––will necessarily increase the number of people who understand code and cameras and sound and editing, etc. What I find lacking in the cultural conversation, however, is the idea that people will at the same time advance in their visual culture literacy.

The stories we tell and how we tell them affect our understanding of the world and therefore affect our understanding of the possible. What would our world look like if the stories we experienced truly captured the wealth of experience there is in the world?

Save a Story as Data – CONCEPTUALLY

Before understanding what exactly we were being asked to do, I created a janky JSON to try and represent what I considered the elements of cinema:

https://gist.github.com/marynotari/4d99a647af613c1d972a1b404a28d0d2

Save a Story as Data on a Server – FOR REAL

Then I figured out that we were supposed to literally make a server and save visual representations of what we considered elements of a story––in the conceptual sense rather than the literal “how to make a movie” sense. The basic elements of a story, in my mind, are as follows (sketched as data after the list):

Setting
Protagonist/Characters/Archetypes
Given Circumstances (an acting term that describes the base reality of the scene/story)
Inciting Incident (a literary term to describe an event that sets off the conflict of the story… the thing that disrupts the given circumstances)
Antagonist (this can be a literal person or not, just whatever the protagonist is working against)
Character Arc/Hero’s Journey
Climax
Resolution/Catharsis (I am not of the mind that there must be catharsis in the Aristotelian sense of poetics but there must be some sort of through-line or cohesion for it to be considered a story)
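
In the spirit of the janky JSON above, those elements might be structured as data something like this (the values are placeholders, not the contents of the original gist):

```javascript
// Placeholder sketch of the story elements as a data object (not the original gist).
const story = {
  setting: 'the Outer Banks of North Carolina',
  characters: [
    { name: 'protagonist', archetype: 'hero' },
    { name: 'antagonist', archetype: 'shadow' },
  ],
  givenCircumstances: 'the base reality of the scene before anything changes',
  incitingIncident: 'the event that disrupts the given circumstances',
  characterArc: ['call to adventure', 'trials', 'return'],
  climax: 'the point of highest tension',
  resolution: 'the through-line resolves; catharsis optional',
};
```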

I created an Mlab database and downloaded Dan’s code that allows you to drag and drop images directly into an Mlab… only I couldn’t get it to work as I had seen it demonstrated in class. It took me a while to figure out that while the images weren’t showing up on the screen, they were still being saved as objects in the Mlab. For some reason, their sizes were defaulting to 0. Then I couldn’t even get the images to save when I dragged them. So I ended up copying and pasting each object’s URL and image dimensions in by hand to make it work. I ended up with this representation of the Aristotelian ideal of drama:

TMNT