All posts by Richard Lapham

Getting There

For the final round (for this class) I wanted to work on one big feature: allowing users to add their own content in AR. That tool would be a game changer in opening up this application to any content out there. And it works! (kind of).

A quick recap. I went back into some of the basic elements of this app and redesigned the UI (again). I made uniform icons and a background bar at the top of the screen for navigation. When you open it up, each button is a category that houses the video animation assets I made (tapping one drops that asset into the scene). I worked on the touch interface a bit too, refining the selection outline and finally adding an option to delete assets.
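For the curious, the spawn/delete side of this boils down to something like the sketch below. This isn't the actual script, just a minimal version, and the field names and button wiring are placeholders:

```csharp
using UnityEngine;

// Minimal sketch of the category/delete flow (names here are hypothetical).
public class AssetToolbar : MonoBehaviour
{
    public Camera arCamera;            // the AR camera
    public GameObject selectedObject;  // set by the selection/outline code when an asset is tapped

    // Wired to a category button: drop the chosen asset prefab ~1.5m in front of the camera.
    public void SpawnAsset(GameObject assetPrefab)
    {
        Vector3 spawnPos = arCamera.transform.position + arCamera.transform.forward * 1.5f;
        // Rotate so the prefab faces the user (flip if your prefab's forward points the other way).
        Instantiate(assetPrefab, spawnPos, Quaternion.LookRotation(arCamera.transform.forward));
    }

    // Wired to the delete button: remove whichever asset is currently selected.
    public void DeleteSelected()
    {
        if (selectedObject != null)
        {
            Destroy(selectedObject);
            selectedObject = null;
        }
    }
}
```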

On to the bigger news. Using a plugin I was very lucky to find, users can access their own image gallery on an iOS device (it will work on Android eventually, but it's iOS-only for now). Without getting too far into the technical nitty-gritty, I basically set it up so the image you choose replaces a texture in a prefab that's spawned in front of you. The images are automatically generated as PNGs, so they're transparency-friendly and come with the same interactions as my painting videos.
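The gallery plugin's own call isn't shown here, but once it hands Unity a Texture2D, the texture swap is roughly this. A minimal sketch, with `OnImagePicked` standing in as a hypothetical callback from the picker:

```csharp
using UnityEngine;

// Rough sketch of the image-drop step: the gallery plugin (not shown) hands back
// a Texture2D, which replaces the texture on a quad prefab spawned in front of the user.
public class ImageDropper : MonoBehaviour
{
    public Camera arCamera;
    public GameObject imageQuadPrefab;  // quad with an unlit transparent material

    // Called with the texture returned by the native gallery picker (hypothetical hookup).
    public void OnImagePicked(Texture2D pickedImage)
    {
        Vector3 spawnPos = arCamera.transform.position + arCamera.transform.forward * 1.5f;
        GameObject quad = Instantiate(imageQuadPrefab, spawnPos,
            Quaternion.LookRotation(arCamera.transform.forward));

        // Swap the picked image onto the prefab's material; PNG alpha is preserved
        // as long as the material uses a transparent shader.
        Renderer rend = quad.GetComponent<Renderer>();
        rend.material.mainTexture = pickedImage;

        // Keep the quad's aspect ratio in line with the image.
        float aspect = (float)pickedImage.width / pickedImage.height;
        quad.transform.localScale = new Vector3(aspect, 1f, 1f) * 0.5f;
    }
}
```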

One last update was doing the same for video. That worked for a solid day! (and then broke on an Xcode hang-up I'm still figuring out). It works the same way as the images and supports transparency as well.

I really considered building some kind of timeline/sequencing tool to make more controlled experiences, but it's a daunting task to recreate Unity's timeline editor as a mobile interface. I'm not saying never, but that's tacked onto the to-do list. I'd also love to eventually add social/sharing options and generally get this out there to see what people come up with. The final stage will be to look more at my server setup and create a publicly accessible layer so you can leave your AR experiences for anyone to see.

 

Searching for videos in an immersive space

We're faced with a really unique situation. Some might see it as a problem that we need to solve, but I see it as incredibly lucky, because I love thinking about this stuff. We're in a time and place where a toolkit for a new medium is dropped in our laps and there's a lot to find out about it.

So I started on this question of how to address search in AR expecting it to be a life-long question, one I'd take a first pass at now. When brainstorming AR or immersive search, my main question is how a spatial dimension changes things. My first approach was to imagine video "options" that unfolded as someone walked into a collision event. Search results relating to a video would drive the plot forward, a mixture of stream of (internet) consciousness and choose-your-own-adventure.

But anytime I hear "choose your own adventure," I come to a full stop.

Depth and environment seem to be the big breakthroughs with AR. Most things I've made, and most things I've seen so far, have been similar to spatial VR (or VR in AR), where you drop a camera feed onto a Unity scene. I'm excited to start interacting more with the environment (it's a really amazing moment for cinema, once cinema adopts it), and I see the future of search coming from situational details. With ML and content recognition, you're walking down the street with your AR contacts, watching some story all around you, and the objects you see inform how the story develops and which assets are added. It's a strange form for search to take; maybe there's room for more user input.

This week my main goal was to open up my project so that users wouldn't rely on just my collection of narrative elements. That's a bit tough for the type of elements I've created so far – since they're illustrations drawn in a specific style, I wanted to find something that matched that at least a little. Using the Google Images API, you can narrow a search down to animated line drawings – that's pretty close. It's not ideal and I want to build up my own repository of illustrations, but for now it's a quick way to get a wide range of material in there.

So unfortunately, for now, I think we still need to rely on text-based search, or a combination of that and image searches related to the assets already chosen for the scene (each of the illustrations would have a tag). For the interaction I'm thinking of a spatial component that layers results as transparencies receding back in space. If possible, I'd like to create a preview where the plane that would hold the video shows what each option looks like in the space.

In practice I ran into a lot of problems integrating a basic image search in Unity. Since I'm working spatially, Poly makes a lot of sense and has a great repository, so even though it doesn't match exactly what I'm up to, it's a good placeholder. I connected its API with Unity and set it up so that when an asset in the scene is selected, its tag is searched in Poly (for example, the astronaut is tagged with "space"). I took the top three results and displayed them above each object.
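For reference, the query itself is simple. This is a rough sketch against the Poly REST endpoint as I remember it (the Poly Toolkit for Unity wraps this differently, and the key and parameters below are placeholders), so treat it as an illustration rather than my exact code:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch of the tag search: when an asset is selected, query Poly for its tag
// and hand the raw JSON to whatever builds the three preview panels.
public class PolyTagSearch : MonoBehaviour
{
    public string apiKey = "YOUR_API_KEY";  // placeholder

    public void SearchForSelected(string tag)
    {
        StartCoroutine(Query(tag));
    }

    IEnumerator Query(string tag)
    {
        // Keywords/pageSize are from the Poly REST API as I remember it.
        string url = "https://poly.googleapis.com/v1/assets"
                   + "?keywords=" + UnityWebRequest.EscapeURL(tag)
                   + "&pageSize=3&key=" + apiKey;

        using (UnityWebRequest req = UnityWebRequest.Get(url))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError)
            {
                Debug.LogWarning("Poly search failed: " + req.error);
                yield break;
            }
            // Top three results come back as JSON; parsing/display is left to the UI code.
            Debug.Log("Results for '" + tag + "': " + req.downloadHandler.text);
        }
    }
}
```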

Otherwise, I completely redesigned the UI and have some real gesture functionality that works great. Users can select each illustration (with bounding boxes), then position, rescale and rotate them. Next up is a delete option for when there are too many videos, and eventually a timeline that controls a sequence of events would be a game changer for creating stories.
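The gesture handling is nothing exotic – roughly a pinch-to-scale and a two-finger twist on whatever object is selected. A stripped-down sketch (not the actual script, and `selected` is assumed to be set by the selection code):

```csharp
using UnityEngine;

// Sketch of the two-finger gestures on the selected illustration:
// pinch to rescale, twist to rotate around the vertical axis.
public class GestureControls : MonoBehaviour
{
    public Transform selected;   // set by the selection/bounding-box code

    void Update()
    {
        if (selected == null || Input.touchCount < 2) return;

        Touch t0 = Input.GetTouch(0);
        Touch t1 = Input.GetTouch(1);

        // Pinch: compare current finger distance with last frame's distance.
        float prevDist = ((t0.position - t0.deltaPosition) - (t1.position - t1.deltaPosition)).magnitude;
        float currDist = (t0.position - t1.position).magnitude;
        float scaleFactor = currDist / Mathf.Max(prevDist, 1f);
        selected.localScale *= scaleFactor;

        // Twist: compare the angle of the line between the two fingers.
        Vector2 prevDir = (t1.position - t1.deltaPosition) - (t0.position - t0.deltaPosition);
        Vector2 currDir = t1.position - t0.position;
        float angleDelta = Vector2.SignedAngle(prevDir, currDir);
        selected.Rotate(Vector3.up, -angleDelta, Space.World);
    }
}
```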

(videos)

Story Interface + Illustrations

Big updates. This project kind of took over my life this past week since it’s bringing together a lot of elements I’ve been working on this semester.  So I’m really running with this and I think I’ll work on it long after ITP.

I started in a place where I knew Quill paintings/animations could be used as story elements. All semester I've been trying to figure out how to "paint" in AR, and it arguably took me too long to think of illustrating subjects that interact with the environment. That Pandora's box has now swung open and I've been living in VR for a few days cranking out illustrations/animations.

To quickly recap, I’m painting illustrative animations in Quill, recording them (with a green screen), converting that video to transparent looping video (like a gif), and placing those videos in AR.

I was stumped for a while on what to draw, and at first I thought I would choose a specific story to build assets around. Instead, I started with some basic background scenes (a mountain, a forest, some waves) and realized each of these environments could be built out – rather than forcing a specific narrative, I would give users the option to add assets centered around the same theme to make their own story. The (tentative) themes are forest/woods, space, prehistoric, ocean, and city.

What really came as a breakthrough was structuring the project so that users could "drag and drop" whichever illustration they wanted into AR. Before this I would spend hours in Unity trying to line up the illustrations in the right spot, build the app, test, and go back and make changes. Now those changes are live! (although the user controls need a lot of work).

I feel like I'm learning exponentially what works and what doesn't as I build up familiarity with how to animate in this program (it's extremely tedious). A few things I've noticed work well are subjects that "melt" into the ground, or illustrations with a similar optical illusion so they appear integrated into AR (for example, the narwhal). So I'm still defining my illustrative style and building up each scene.

(On a side note – my workflow changed since this documentation so the line drawings will be more vivid, thicker and more noticeable. I used a particle shader that removed all black from a video, and that took some of the line weight away as well – now I’m recapturing these illustrations with green screen and removing it beforehand in After Effects.)

This upcoming week I’m building up the assets for each scene and making the user interface much better. I’d ideally like users to be able to scale and rotate the illustrations (and eventually have some capturing/sharing option).

(videos)

Spatial Transparent Video

A lot of loose, floating ideas about narrative, immersive media and story structure are finally coming together in an interesting way. Now I have a pretty clear idea of how I want to approach this project and, if this workflow works, this could be a much bigger focus in my life long after ITP.

For starters, I’m working in AR (through Unity) and was counting on using video assets in this space to create a narrative. I’ve been experimenting with additive and greenscreen shaders to make transparent video assets (that don’t look like floating rectangles in space), so one component of this project is borderless, immersive video assets that seamlessly integrate into a space.

So what are these videos? How do they construct a story in an interactive, user-generated way? Since the videos are spatial and surround a user, there's a big opportunity for interaction and user input in the order and placement of these videos around the scene. What does it mean when one video happens before another, or sits to the left versus way back in space? There's a lot that can be done with that, and I have a script working where users can drag specific videos around an AR space.

On a whole other planet, I've been developing a practice of painting in AR. I bring assets made in Quill and Tilt Brush into AR and have been experimenting with how this new medium works in general. One major limitation in this process is the strain these paintings put on a device – there's a limit to how much it can handle. But there's another way!

Instead of collecting and creating video assets from traditional footage (finding clips/shooting with a camera), I'm able to record painted animations as videos. The content of the paintings really opens up a lot of possibilities – they can shape an environment, convey a mood and introduce plot devices.

If this is hard to imagine I have examples coming. But it’s an extremely powerful idea because I can create several directions a narrative can go in through painting, and present them as an immersive experience where users create the story.

I’m thinking of two main interactions: rearranging video clips spatially, and collision events that change the video clips. In this environment users can walk into videos that then instantiate other related videos. Essentially a story will unfold naturally as a user walks around a space and chooses which story to pursue.
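A rough sketch of the "walk into a video" half of that, assuming the AR camera carries a small trigger collider (and a kinematic Rigidbody so trigger events fire) and each panel knows its related clips – all names here are hypothetical:

```csharp
using UnityEngine;

// Sketch of the walk-into mechanic: each video panel has a trigger collider,
// and when the AR camera enters it, the panel spawns its related clips nearby.
public class StoryTrigger : MonoBehaviour
{
    public GameObject[] relatedVideoPrefabs;  // clips that branch off this one
    public float spawnRadius = 1.5f;
    bool triggered;

    void OnTriggerEnter(Collider other)
    {
        if (triggered || !other.CompareTag("MainCamera")) return;
        triggered = true;

        // Scatter the related clips around the panel the user just walked into.
        foreach (GameObject prefab in relatedVideoPrefabs)
        {
            Vector2 offset = Random.insideUnitCircle.normalized * spawnRadius;
            Vector3 pos = transform.position + new Vector3(offset.x, 0f, offset.y);
            Instantiate(prefab, pos, Quaternion.identity);
        }
    }
}
```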


So that's the idea – progress has been pretty good. Most of the backbone of this workflow is completed and works. After some tinkering, I managed to alter the Vimeo Unity SDK so that instead of taking a direct URL as its video source, it grabs it from the server. Now we can store all our video clips as Vimeo URLs and switch them throughout the experience.
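As a simplified stand-in for that flow (the real version goes through the altered Vimeo SDK, and the database path below is made up), the fetch-and-play step looks roughly like this with Unity's built-in VideoPlayer:

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;
using UnityEngine.Video;

// Simplified stand-in for the modified Vimeo player path: pull the current clip's
// URL from the database over REST and hand it to a video player.
public class RemoteClipLoader : MonoBehaviour
{
    public VideoPlayer player;
    public string databaseUrl = "https://YOUR-PROJECT.firebaseio.com/clips/current.json"; // placeholder

    public void LoadCurrentClip()
    {
        StartCoroutine(Fetch());
    }

    IEnumerator Fetch()
    {
        using (UnityWebRequest req = UnityWebRequest.Get(databaseUrl))
        {
            yield return req.SendWebRequest();
            if (req.isNetworkError || req.isHttpError) yield break;

            // Assumes the node stores the clip's direct URL as a JSON string, e.g. "https://...mp4"
            string url = req.downloadHandler.text.Trim('"');
            player.url = url;
            player.Play();
        }
    }
}
```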

For the interaction, I'm unfortunately still locked into mobile AR and very much in 'click'/'swipe' territory. Using raycasting, I set up each transparent video screen as a collision object that, when tapped, switches the video to the next URL in the database. For now I'm experimenting with changing a character (in the old paper segment game style) and the background scene.
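The tap interaction is a plain raycast from the touch position. A minimal sketch – `VideoPanel` and `NextClip()` are stand-ins for however the clip list actually gets advanced:

```csharp
using UnityEngine;

// Sketch of the tap interaction: raycast from the touch, and if it hits a video
// panel, ask it to switch to the next URL in its list.
public class TapToSwitch : MonoBehaviour
{
    public Camera arCamera;

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began) return;

        Ray ray = arCamera.ScreenPointToRay(Input.GetTouch(0).position);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
        {
            VideoPanel panel = hit.collider.GetComponent<VideoPanel>();
            if (panel != null) panel.NextClip();  // advance to the next clip in the database
        }
    }
}

// Placeholder component for each transparent video screen.
public class VideoPanel : MonoBehaviour
{
    public string[] clipUrls;   // the Vimeo/database URLs for this screen
    int index;

    public void NextClip()
    {
        index = (index + 1) % clipUrls.Length;
        // Hand clipUrls[index] to the player (e.g. the loader sketched above / Vimeo SDK).
        Debug.Log("Switching to " + clipUrls[index]);
    }
}
```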

This is really a proof of concept for this idea and most of my progress was in getting the workflow up and running. Now I can focus on assets and thinking about the narrative and what exactly to illustrate as video assets. I’d also like some of the video switches to happen with collision events, so users ‘walk into’ different narrative paths.

(videos)

Unity Database

For starters, the server setup was ultimately a success, although I went down a few rabbit holes. Basically, Unity has its own way of doing things with networking, and a lot of that has to do with its use as a game engine (so a lot of multiplayer talk). I eventually found a solution for an independent server through Firebase, which has a lot of functionality and supports NoSQL (they have JSON options).

It's tough to document the process of setting up a server connection, so I'll share some error messages, hurdles and milestones along the way. The big moment was having the position values of a Unity object (a sphere) update on the server live. So updating values was a success – next up is figuring out how to parse incoming data so we can access video information. I'm imagining storing a bunch of videos on a separate server (maybe Squarespace) and sending the URL information based on the user interaction.
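For anyone trying this, the live position update boils down to something like the sketch below, assuming the Firebase Unity SDK is installed and configured – the database path here is made up, and this is an illustration rather than my exact code:

```csharp
using UnityEngine;
using Firebase.Database;

// Sketch of the live position sync: push this object's position to the
// Realtime Database whenever it moves.
public class PositionSync : MonoBehaviour
{
    DatabaseReference posRef;
    Vector3 lastSent;

    void Start()
    {
        // Hypothetical path in the database.
        posRef = FirebaseDatabase.DefaultInstance.GetReference("objects/sphere/position");
    }

    void Update()
    {
        // Only write when the sphere has actually moved a little.
        if ((transform.position - lastSent).sqrMagnitude < 0.0001f) return;
        lastSent = transform.position;

        string json = JsonUtility.ToJson(new PosData {
            x = lastSent.x, y = lastSent.y, z = lastSent.z });
        posRef.SetRawJsonValueAsync(json);
    }

    [System.Serializable]
    class PosData { public float x, y, z; }
}
```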

(videos)

Time Slices

I wanted to try something that’s been on the back of my mind for some time. With AR and immersive media in general, I keep thinking what about this new medium is special and could be leveraged to show something that hasn’t yet been possible. Immersive media has two things really going for it: depth (immersion/space) and time. So I started out making a story/environment that mixes time and space around a user.

By bringing video assets in on an inverted sphere (as a panorama or skybox) and cutting that background panorama up into slices, you can create an immersive space that plays with space and time. As a simple example, imagine recording a 24-hour 360 video from the center of the Washington Square Park fountain. Now take one-minute slices of that video and distribute them evenly as a skybox around the inner circle of the park – you have slices of the day as a spatial installation.
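A quick sketch of how that layout could be set up in Unity – the slice count, radius, and the idea of offsetting one long flattened recording are placeholders for however the real footage gets cut:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch of the time-slice layout: arrange N panels in a ring around the viewer,
// each playing the same long recording but offset to a different point in time.
public class TimeSliceRing : MonoBehaviour
{
    public GameObject slicePrefab;     // quad with a VideoPlayer component
    public VideoClip fullRecording;    // the long recording, flattened to a regular clip
    public int sliceCount = 24;
    public float radius = 4f;

    void Start()
    {
        double sliceLength = fullRecording.length / sliceCount;

        for (int i = 0; i < sliceCount; i++)
        {
            // Place each slice on the ring, facing inward toward the viewer.
            float angle = i * Mathf.PI * 2f / sliceCount;
            Vector3 pos = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle)) * radius;
            GameObject slice = Instantiate(slicePrefab, pos, Quaternion.LookRotation(-pos));

            // Start this panel at its own offset into the recording.
            VideoPlayer vp = slice.GetComponent<VideoPlayer>();
            vp.clip = fullRecording;
            vp.isLooping = true;
            vp.time = i * sliceLength;
            vp.Play();
        }
    }
}
```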

That's the idea that started this series of experiments. From there I realized that if you bring in an "occlusion material" from Unity's assets, you can cut away video in interesting ways. You can make a truly experimental space when bringing in videos from other places and perspectives (as a Hockney-esque collage). And finally, I looked into the Vimeo Unity SDK, which allows (Pro) members to stream live footage. So I have a lot of tools at my disposal and I'm seeing this as just the beginning of exploring what's possible with all of them.


(videos)