Category Archives: Elements

What is the language of universal meaning?

Things that come to mind

 

NOTE: I couldn’t get the videos to show up. Read the proper version at my blog.

The game “The Movies”, by Lionhead Studios, is a movie production simulator much like The Sims or the Tycoon series, where the main goal is to keep everything in your created world in order (or not). But The Movies adds a whole “Movie Maker” mini-game where you can actually make movies within it.

Production of The Movies began in late 2002 in a Lionhead Studios brainstorming conference. The idea began when Peter Molyneux came up with a new idea for a simulation game. The idea was to create a more diverse and lifelike strategy aspect to the game, giving players the option to create their very own movie. (GameSpy)

So they had to come up with a way to enable players to “make movies”, separate from the studio-management part. The implementation is simple: they created a catalog of “scenes”, which are mannequins with a set animation, for example one character shooting another. Then you have a catalog of skins to put on these characters, much like Mixamo/Adobe Fuse (as of April 2018!). They also let you control the camera movement and the setting to an extent, so you end up with a fully formed scene. Now all you have to do is put these scenes together, add sound/subtitles (or, if you’re ambitious, voice-over), and you have a motion-picture story.
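Purely as an illustration, here’s roughly that model sketched in JavaScript. Every name below is made up for the sketch, not taken from the game’s actual data:

// Illustrative sketch of a "Movie Maker"-style catalog model; all names invented.
const sceneCatalog = {
  shootout: { animation: "A shoots B", actorSlots: 2 },
  chase:    { animation: "A chases B", actorSlots: 2 },
};
const skinCatalog = ["cowboy", "robot", "detective"];

// A scene instance = a canned animation dressed up with the player's choices.
function makeScene(template, skins, camera) {
  return { template, skins, camera, audio: null, subtitles: null };
}

// A movie is just an ordered list of assembled scenes.
const movie = [
  makeScene(sceneCatalog.shootout, [skinCatalog[0], skinCatalog[1]], { angle: "wide" }),
  makeScene(sceneCatalog.chase, [skinCatalog[1], skinCatalog[2]], { angle: "tracking" }),
];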

 

The Movies Advance Movie Maker Tutorial

Here you can see how the Movie Maker system in “The Movies” works. Worth a watch.

Terminator 2 Animated Remake using ‘The Movies’ game by Lionhead

An example of what the Movie Maker is capable of: a remake of a familiar film.

BEST MOVIE FROM ‘THE MOVIES GAME’ EVER MADE!!

This one’s an original production. I’ve seen better results, in my opinion, but it’s expression through the medium, so take it however you like.

Storytelling doesn’t come naturally to us

Story appreciation does come naturally, don’t get me wrong: we can’t help but praise people who can tell stories in their medium of choice, and appreciate the lengths they go to in making sure the end product contains all the relevant pieces to deliver that “totality”, that elusive “Gesamtkunstwerk”.

The truth is we don’t really need a lot of information to empathize and understand. There’s a lot of information out there that we are able to parse, but it wouldn’t be viable or practical to do so. So we use abstractions, stored compartments that contain “just enough” detail to be operational in the world, and add detail when necessary. Interestingly, we have no problem expressing ourselves through those symbolic abstractions either: we’re quite comfortable with a cartoonish representation of people, and are willing to empathize with literally a couple of lines and circles and humanize them.

So the problem of “abstraction level” is something to think about. If you give a random person ALL the details they need to build a full picture, like a real model to draw from, they’re unlikely to hit their desired level of detail. Often they think they did, but reviewing the expression later, they can see how it looks “wrong”. So the question is: what level of abstraction should you give creators so that they create with “enough” effort, but don’t get discouraged by their inability to utilize it? How high should the level of detail they’re expected to comprehend and articulate be?

Grim Fandango: Land of the Living

In the game Grim Fandango, you live in the land of the dead (from Mexican mythology) and travel to the “land of the living” to bring back the newly deceased. The depiction of the land of the living is the image that someone who isn’t used to seeing living things would make: in our example, dead people who forgot what living was like.

Adventure Time: BMO in the VR BRB

In the miniseries Adventure Time: Islands, Finn and Jake (a boy and his dog) enter the VR world that their sentient Gameboy, BMO, has made. BMO also tried to recreate his friends in a section called BRB (Be Right Back!), but they look horrific, and BMO knows it (well, you KNOW you drew a bad car even if you’ve never drawn a car!). Very dark and very thought-provoking (like most of Adventure Time; don’t be fooled by the colors!).


Although, if you WANT your work to look like a collage, go ahead! Using the limitations imposed on you to your advantage is always a good strategy. Also, we’ve never had “enough” juxtaposition, so go for it!

Enjoy this work by the Armenian artist Sergei Parajanov.

Sergei Parajanov Collages

Variation on themes by Pinturicchio and Raphael

Week 3 – Story Elements

These assignments are kind of brilliant challenges in bringing theory to practice… and they’re driving me nuts. There are two different brain-skills (it’s a technical term) that I am using each week: 1) developing concepts of what story is and can be, and how experiential technologies can aid in the production and dissemination of narrative experiences; and 2) finding visual representations of those concepts within a very simplified context (plus a third: making code work the way I want it to). It reminds me of the Rules by Sister Corita Kent, a document I return to frequently. Specifically Rule 8:

“Don’t try to create and analyse at the same time. They are different processes.”

As I’m approaching these assignments, I can’t help but feel a bit anxious that I’m not completing them correctly… How can I imagine the future of storytelling and create something now? How can I explain why I’m making the choices I make when I’m not even sure of the reason myself? How can I express what I’m trying to express without understanding how to make the code work?

My usual process is to follow a thread that interests me and experiment, letting the meaning or connective tissue of whatever I’m building reveal itself to me over time. I’m hoping that this process, the texts we’re reading, and the concepts Dan is presenting us with are all informing each other.

Now that the freak-out is over with…

Attempt #1: Adding elements to Google Street View Panorama

Assignment: Capture people and props for stories. Create foreground objects to include in last week’s setting. Use a capture tool for those elements, find a 3D model in a repository, or make your own 3D model.

Concept:

Mythical figures — and historical figures of mythic proportions — inhabit a shifting land where earth, sky and water meet.

What I did:

  • Used Three.js to create planes and add images of the characters to those planes as materials (see the sketch below)
  • Positioned those planes within the panorama I created last week
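For reference, the core of that approach looks roughly like this in Three.js, shown with a bare standalone scene; the file name and positions are stand-ins, not my project’s actual values:

import * as THREE from "three";

// Minimal standalone version: one textured plane per character.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const texture = new THREE.TextureLoader().load("sirena.png"); // placeholder file
const material = new THREE.MeshBasicMaterial({
  map: texture,
  transparent: true,      // respect the PNG's alpha channel
  side: THREE.DoubleSide, // stays visible even if the plane turns away
});
const plane = new THREE.Mesh(new THREE.PlaneGeometry(2, 3), material);
plane.position.set(-4, 0, -10); // give each character its own spot so they don't stack
scene.add(plane);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();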

What happened:

  • At first, both elements showed up, but were stacked on top of each other.
  • Then I tried to reposition them using element.position.set(), but then la sirena disappeared.
  • At times, as I was toying with the numbers and the code, the elements would rotate so that the planes weren’t directly facing me.
  • Here is the code as it currently stands.

Questions about the code:

  • Why don’t the other DOM elements show up in my sketch when using Street View?
  • Why, when I click and drag the sketch, do the foreground objects jump to the left?
  • Why can’t I get my sirena object to show up?
  • To what extent is the Google Street View API messing up the way this code is supposed to work? (Some guesses below.)
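I can’t diagnose these without running the sketch, but the symptoms match a few common Three.js pitfalls. These are guesses, not confirmed fixes, and they assume the plane/camera from the sketch above:

// Guess 1: an object "disappears" when it sits behind the camera or outside
// the near/far clipping range; keep z between those planes (0.1–1000 above).
plane.position.set(0, 0, -10); // in front of a camera at the origin

// Guess 2: a single-sided material vanishes once the plane rotates away
// from the viewer; double-siding rules that out.
plane.material.side = THREE.DoubleSide;

// Guess 3: to stop planes from drifting out of alignment, billboard them
// toward the camera every frame inside the render loop.
plane.lookAt(camera.position);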

Attempt #2: Adding myself to scenes

When I couldn’t troubleshoot my way out of the above quagmire, I tried a different approach. I used Dano’s code for green-screening oneself into a panorama scene using Kinectron.

I was able to get myself into the pre-set scene, but I think I misunderstood what the “Start Record” and “Stop Record” buttons in the sketch meant, because this is all I got:

I assume that this is meant to save whatever was happening in the scene at that point in time so it could be remixed later.
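For context, the Kinectron half of a sketch like this boils down to roughly the following. I’m going from the Kinectron client examples, so treat the specifics (and the placeholder IP) as approximate:

let kinectron;
let keyImage; // the green-screened ("key") cut-out of the person

function setup() {
  createCanvas(640, 480);
  kinectron = new Kinectron("192.168.0.1"); // placeholder: the Kinectron server's address
  kinectron.makeConnection();
  kinectron.startKey(gotKeyImage); // request the background-removed feed
}

function gotKeyImage(img) {
  loadImage(img.src, (loaded) => { keyImage = loaded; });
}

function draw() {
  background(0); // the panorama scene would be drawn here first
  if (keyImage) image(keyImage, 0, 0); // composite the cut-out person on top
}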

I then tried to add myself to a video of some of my brethren. I uncommented the video part of the code, added the video file to the project folder, and added its location to the sketch, but I couldn’t get it to work. Here’s the code as it currently stands.

Questions about the code:

  • How do I get the video to show up? (One guess below.)
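One likely culprit, though it’s only a guess without seeing the failing code: browsers refuse to auto-play video without a user gesture, so the element loads but never starts. The p5.js pattern that usually works (the file name is a placeholder):

let vid;

function setup() {
  createCanvas(640, 480);
  vid = createVideo("brethren.mp4"); // placeholder for the actual file name
  vid.hide();    // hide the DOM element; we draw the frames to the canvas instead
  vid.volume(0); // muted playback is exempt from most autoplay blocking
}

function mousePressed() {
  vid.loop(); // start playback on a user gesture to satisfy autoplay rules
}

function draw() {
  background(0);
  image(vid, 0, 0, width, height); // draw the current video frame
}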

Playing with Kinectron

Here is some documentation of me at least successfully running the Kinectron sketch examples from the note:

Mary in the Kinectron: powers!

Spatial Transparent Video

A lot of loose, floating ideas about narrative, immersive media and story structure are finally coming together in an interesting way. Now I have a pretty clear idea of how I want to approach this project and, if this workflow works, this could be a much bigger focus in my life long after ITP.

For starters, I’m working in AR (through Unity) and was counting on using video assets in this space to create a narrative. I’ve been experimenting with additive and greenscreen shaders to make transparent video assets (that don’t look like floating rectangles in space), so one component of this project is borderless, immersive video assets that seamlessly integrate into a space.
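My shaders live in Unity, but to make “greenscreen shader” concrete, here’s the gist of a chroma-key material, sketched with Three.js so the examples in this post stay in one language. The thresholds and file name are arbitrary stand-ins, not my tuned values:

import * as THREE from "three";

// Chroma-key in a fragment shader: pixels close to pure green become transparent.
const video = document.createElement("video");
video.src = "painting-clip.mp4"; // placeholder file name
video.loop = true;
video.muted = true; // muted video is allowed to autoplay
video.play();

const chromaKeyMaterial = new THREE.ShaderMaterial({
  transparent: true,
  uniforms: { map: { value: new THREE.VideoTexture(video) } },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      vec4 color = texture2D(map, vUv);
      float greenness = color.g - max(color.r, color.b); // how green-dominant the pixel is
      float alpha = 1.0 - smoothstep(0.4, 0.5, greenness); // fade out green pixels
      gl_FragColor = vec4(color.rgb, color.a * alpha);
    }
  `,
});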

So what are these videos? How do they construct a story in an interactive, user-generated way? Since the videos are spatial and surround a user, there’s a big opportunity for interaction and user input in the order and placement of these videos around the scene. What does it mean when one video happens before another, or sits to the left versus way back in space? There’s a lot that can be done with that, and I have a script working where users can drag specific videos around an AR space.

On a whole other planet, I’ve been developing a practice of painting in AR. I bring assets made in Quill and Tilt Brush to AR and have been experimenting with how this new medium works in general. One major limitation in this process is the strain these paintings put on a device; there’s a limit to how much it can handle. But there’s another way!

Instead of collecting and creating video assets from traditional footage (finding clips or shooting with a camera), I’m able to record painting animations as videos. The content of the paintings really opens up a lot of possibilities – they can shape an environment, convey a mood and introduce plot devices.

If this is hard to imagine, I have examples coming. But it’s an extremely powerful idea, because through painting I can create several directions a narrative can go in, and present them as an immersive experience where users create the story.

I’m thinking of two main interactions: rearranging video clips spatially, and collision events that change the video clips. In this environment users can walk into videos that then instantiate other related videos. Essentially a story will unfold naturally as a user walks around a space and chooses which story to pursue.


So that’s the idea – progress has been pretty good. Most of the backbone of this workflow is completed and works. After some tinkering, I managed to alter the Vimeo Unity SDK so that instead of taking a direct url as its video source, it grabs one from the server. Now we can store all our video clips as Vimeo urls and switch them throughout the experience.

For the interaction, I’m unfortunately still locked into mobile AR and very much in ‘click’/‘swipe’ territory. Using raycasting, I set up each transparent video screen as a collision object that, when tapped, switches the video for the next url in the database. For now I’m experimenting with changing a character (in the old paper-segment game style) and the background scene.
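My actual implementation is in Unity, but the tap-to-switch logic translates almost one-to-one to Three.js, used here so the sketches stay in one language. The “/next-clip” endpoint and the screens array are hypothetical stand-ins, not my real setup:

import * as THREE from "three";

// Assumes a `camera` and an array `screens` of meshes whose material.map
// is a THREE.VideoTexture (like the chroma-keyed screen sketched earlier).
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener("pointerdown", async (event) => {
  // Convert the tap to normalized device coordinates for the raycaster.
  pointer.x = (event.clientX / innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / innerHeight) * 2 + 1;
  raycaster.setFromCamera(pointer, camera);

  const hit = raycaster.intersectObjects(screens)[0];
  if (!hit) return;

  // Ask the server for the next clip instead of hardcoding urls,
  // mirroring the altered Vimeo setup ("/next-clip" is hypothetical).
  const res = await fetch("/next-clip?screen=" + hit.object.name);
  const { url } = await res.json();

  const video = document.createElement("video");
  video.src = url;
  video.crossOrigin = "anonymous";
  video.loop = true;
  video.muted = true; // allow autoplay
  video.play();
  hit.object.material.map = new THREE.VideoTexture(video); // swap the clip
  hit.object.material.needsUpdate = true;
});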

This is really a proof of concept for this idea and most of my progress was in getting the workflow up and running. Now I can focus on assets and thinking about the narrative and what exactly to illustrate as video assets. I’d also like some of the video switches to happen with collision events, so users ‘walk into’ different narrative paths.
