Spatial Transparent Video

A lot of loose, floating ideas about narrative, immersive media and story structure are finally coming together in an interesting way. Now I have a pretty clear idea of how I want to approach this project and, if this workflow works, this could be a much bigger focus in my life long after ITP.

For starters, I’m working in AR (through Unity) and plan to use video assets in this space to create a narrative. I’ve been experimenting with additive and greenscreen shaders to make transparent video assets that don’t look like floating rectangles in space, so one component of this project is borderless, immersive video that seamlessly integrates into a scene.
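To make the setup concrete, here’s a minimal sketch of how one of these transparent video quads could be wired up in Unity. The ‘Unlit/ChromaKey’ shader and its _KeyColor/_Threshold properties are hypothetical stand-ins for whatever keying shader the project actually uses:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Plays a video onto a quad through a chroma-key material so the green
// background drops out and the clip doesn't read as a floating rectangle.
// "Unlit/ChromaKey" and its _KeyColor/_Threshold properties are hypothetical
// stand-ins for the actual greenscreen shader in the project.
public class TransparentVideoQuad : MonoBehaviour
{
    public string videoUrl;
    public Color keyColor = Color.green;
    [Range(0f, 1f)] public float threshold = 0.4f;

    void Start()
    {
        var player = gameObject.AddComponent<VideoPlayer>();
        var material = new Material(Shader.Find("Unlit/ChromaKey"));
        material.SetColor("_KeyColor", keyColor);
        material.SetFloat("_Threshold", threshold);
        GetComponent<Renderer>().material = material;

        // Route the decoded frames into the quad's keying material.
        player.renderMode = VideoRenderMode.MaterialOverride;
        player.targetMaterialRenderer = GetComponent<Renderer>();
        player.targetMaterialProperty = "_MainTex";
        player.url = videoUrl;
        player.isLooping = true;
        player.Play();
    }
}
```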

So what are these videos? How do they construct a story in an interactive, user-generated way? Since the videos are spatial and surround the user, there’s a big opportunity for interaction and user input in the order and placement of these videos around the scene. What does it mean when one video happens before another, or sits to the left versus far back in space? There’s a lot that can be done with that, and I have a script working where users can drag specific videos around an AR space.
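The drag interaction is basically raycasting from the touch into the scene. A rough sketch of how it might look, assuming each video quad has a collider:

```csharp
using UnityEngine;

// Lets a user drag a video quad around the AR scene with a finger.
// The quad needs a collider; dragging keeps the quad at its grabbed
// depth, so left/right and near/far placement both read clearly.
public class DraggableVideo : MonoBehaviour
{
    Camera cam;
    bool dragging;
    float depth; // distance from camera at grab time

    void Start() { cam = Camera.main; }

    void Update()
    {
        if (Input.touchCount == 0) { dragging = false; return; }

        Touch touch = Input.GetTouch(0);
        Ray ray = cam.ScreenPointToRay(touch.position);

        if (touch.phase == TouchPhase.Began &&
            Physics.Raycast(ray, out RaycastHit hit) &&
            hit.transform == transform)
        {
            dragging = true;
            depth = Vector3.Distance(cam.transform.position, transform.position);
        }
        else if (dragging && touch.phase == TouchPhase.Moved)
        {
            // Follow the finger while staying at the original distance.
            transform.position = ray.GetPoint(depth);
        }
    }
}
```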

On a whole other planet, I’ve been developing a practice of painting in AR. I bring assets made in Quill and Tilt Brush to AR and have been experimenting with how this new medium works in general. One major limitation in this process is the strain these paintings put on a device – there’s a limit to how much it can handle. But there’s another way!

Instead of collecting and creating video assets from traditional footage (finding clips/shooting with a camera), I’m able to record painting animations as videos. The content of the paintings really opens up a lot of possibilities – they can shape an environment, convey a mood and introduce plot devices.

If this is hard to imagine, I have examples coming. But it’s an extremely powerful idea: through painting, I can create several directions a narrative can go, and present them as an immersive experience where users create the story.

I’m thinking of two main interactions: rearranging video clips spatially, and collision events that change the video clips. In this environment, users can walk into videos that then instantiate other related videos. Essentially, a story unfolds naturally as a user walks around the space and chooses which thread to pursue.
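A collision branch can be as simple as a trigger volume on each clip. A sketch under some assumptions – the AR camera carries a kinematic Rigidbody and a trigger collider tagged ‘Player’, and each clip knows its follow-up prefabs:

```csharp
using UnityEngine;

// When the user (a trigger collider parented to the AR camera) walks into
// this video, spawn the follow-up clips for that branch of the story.
// The "Player" tag, prefabs and spawn points are assumptions about the
// scene setup, not the project's exact structure.
public class NarrativeBranch : MonoBehaviour
{
    public GameObject[] nextClips;   // prefabs of the related video quads
    public Transform[] spawnPoints;  // where each follow-up clip appears
    bool triggered;

    void OnTriggerEnter(Collider other)
    {
        if (triggered || !other.CompareTag("Player")) return;
        triggered = true; // only branch once per clip

        for (int i = 0; i < nextClips.Length && i < spawnPoints.Length; i++)
            Instantiate(nextClips[i], spawnPoints[i].position, spawnPoints[i].rotation);
    }
}
```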


So that’s the idea – progress has been pretty good. Most of the backbone of this workflow is completed and working. After a lot of tinkering, I managed to alter the Vimeo Unity SDK so that instead of taking a direct url as its video source, it grabs one from the server. Now we can store all our video clips as Vimeo urls and switch them throughout the experience.
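I won’t reproduce the SDK changes here, but the shape of the swap looks roughly like this – a hypothetical endpoint, with Unity’s built-in VideoPlayer standing in for the Vimeo player component:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.Video;

// Instead of hard-coding a direct video url, ask the server which clip a
// screen should play right now. The endpoint and response format are
// hypothetical; in the project the same lookup is patched into the Vimeo
// Unity SDK's loading path.
public class RemoteVideoSource : MonoBehaviour
{
    public string endpoint = "https://example.com/api/current-clip"; // hypothetical
    public VideoPlayer player;

    IEnumerator Start()
    {
        using (UnityWebRequest req = UnityWebRequest.Get(endpoint))
        {
            yield return req.SendWebRequest();
            if (req.result != UnityWebRequest.Result.Success) yield break;

            // Assume the server replies with a bare playable url.
            player.url = req.downloadHandler.text.Trim();
            player.Play();
        }
    }
}
```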

For the interaction, I’m unfortunately still locked into mobile AR and very much in ‘click’/’swipe’ territory. Using raycasting, I set up each transparent video screen as a collision object that, when tapped, switches the video for the next url in the database. For now I’m experimenting with changing a character (in the old paper segment game style) and the background scene.
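The tap-to-switch behavior, sketched with the same caveats (the urls array stands in for the database of Vimeo links described above):

```csharp
using UnityEngine;
using UnityEngine.Video;

// Tap handling: raycast from the touch, and if it hits this video screen,
// swap the clip for the next url in the list, e.g. the next character or
// background. The urls array is a local stand-in for the server database.
public class TapToSwitch : MonoBehaviour
{
    public string[] urls;  // playable urls pulled from the database
    int index;
    VideoPlayer player;

    void Start() { player = GetComponent<VideoPlayer>(); }

    void Update()
    {
        if (urls.Length == 0 || Input.touchCount == 0 ||
            Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
        if (Physics.Raycast(ray, out RaycastHit hit) && hit.transform == transform)
        {
            // Cycle to the next clip and restart playback.
            index = (index + 1) % urls.Length;
            player.url = urls[index];
            player.Play();
        }
    }
}
```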

This is really a proof of concept, and most of my progress so far has gone into getting the workflow up and running. Now I can focus on assets and on thinking through the narrative – what exactly to illustrate as video assets. I’d also like some of the video switches to happen with collision events, so users ‘walk into’ different narrative paths.

(video)

(video)

(video)