Searching for videos in an immersive space

We’re faced with a really unique situation. Some might see it as a problem we need to solve, but I see it as being incredibly lucky, because I love thinking about this stuff. We’re in a time and place where a toolkit for a new medium has been dropped in our laps, and there’s a lot to find out about it.

So I started on this question of how to address search in AR with the expectation that it might be a lifelong question, one I’ll just take a first try at now. When brainstorming AR or immersive search, my main question is how a spatial dimension changes things. My first approach was to imagine video “options” that unfold as someone walks into a collision event, so that search results related to a video would drive the plot forward, a mixture of stream of (internet) consciousness and choose-your-own-adventure.

But anytime I hear “choose your own adventure,” I come to a full stop.

Using depth and environment seems to be the big breakthrough with AR. Most things I’ve made and most things I’ve seen so far have been closer to spatial VR (or VR in AR), where you drop a camera feed on a Unity scene. I’m excited to start interacting more with the environment (it’s a really amazing moment for cinema, once cinema adopts it), and I see the future of search coming from situational details. With ML and content recognition, you’re walking down the street with your AR contacts, watching some story unfold all around you, and the objects you see inform how the story develops and what assets get added. It’s a strange form for search to take; maybe there’s room for more user input.

This week my main goal was to open up my project so that users wouldn’t rely on just my collection of narrative elements. That’s a bit tough for the type of elements I’ve created so far – since they’re illustrations drawn in a specific style, I wanted to find something that matched that at least a little. Using the Google Images API, you can narrow the search down to animated line drawings, and that’s pretty close. It’s not ideal and I want to build up my repository of illustrations, but for now it’s a quick way to get a wide range of material in there.
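For reference, a rough sketch of how that kind of query could look from inside Unity, using the Google Custom Search JSON API and its imgType filter. The API key, search engine ID, and class name below are placeholders, not pieces of my actual project.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Sketch: query the Google Custom Search JSON API for line-drawing images.
// API_KEY and SEARCH_ENGINE_ID are placeholders for your own credentials.
public class LineArtImageSearch : MonoBehaviour
{
    const string API_KEY = "YOUR_API_KEY";
    const string SEARCH_ENGINE_ID = "YOUR_CX_ID";

    public IEnumerator Search(string query)
    {
        // searchType=image restricts results to images;
        // imgType=lineart narrows them to line drawings ("animated" is another valid value).
        string url = "https://www.googleapis.com/customsearch/v1" +
                     "?key=" + API_KEY +
                     "&cx=" + SEARCH_ENGINE_ID +
                     "&searchType=image" +
                     "&imgType=lineart" +
                     "&q=" + UnityWebRequest.EscapeURL(query);

        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (!string.IsNullOrEmpty(request.error))
            {
                Debug.LogError("Image search failed: " + request.error);
            }
            else
            {
                // The response is JSON; each item carries a "link" to the image,
                // which can then be downloaded with UnityWebRequestTexture.
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}
```

You’d kick it off with StartCoroutine(Search("astronaut")) from whatever object handles the search UI.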

So unfortunately, for now, I think we still need to rely on a text-based search, or a combination of that and image searches related to the assets chosen for the scene (each of the illustrations would have a tag). For the interaction I’m thinking of a spatial component that layers results as transparencies receding back in space. If possible, I’d like to create a preview where the plane that would hold the video shows what each option looks like in the space.
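Here’s a minimal sketch of that layering idea, assuming a quad prefab with a transparent (fade) material: each result gets its own plane, pushed a little further back from the camera and faded a little more.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the layered-results idea: each search result gets a quad,
// placed further back along the camera's forward axis and made more
// transparent the deeper it sits.
public class LayeredResults : MonoBehaviour
{
    public GameObject resultPrefab;   // quad with a transparent material, facing the viewer
    public float spacing = 0.5f;      // metres between layers
    public float startDistance = 1.5f;

    public void ShowResults(List<Texture2D> previews, Transform viewer)
    {
        for (int i = 0; i < previews.Count; i++)
        {
            Vector3 position = viewer.position +
                               viewer.forward * (startDistance + i * spacing);
            GameObject layer = Instantiate(resultPrefab, position,
                                           Quaternion.LookRotation(viewer.forward));

            Renderer renderer = layer.GetComponent<Renderer>();
            renderer.material.mainTexture = previews[i];

            // Fade deeper layers so nearer options read as more prominent.
            Color c = renderer.material.color;
            c.a = Mathf.Clamp01(1f - i * 0.25f);
            renderer.material.color = c;
        }
    }
}
```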

In practice I ran into a lot of problems integrating a basic image search in Unity. Since I’m working spatially, Poly makes a lot of sense and has a great repository, so even though it doesn’t match exactly what I’m up to, it’s a good placeholder. I connected its API with Unity and set it up so that when an asset in the scene is selected, its tag is searched in Poly (for example, the astronaut is tagged with “space”). I took the top three results and displayed them above each object.
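For anyone curious, the lookup looks roughly like this with the Poly Toolkit for Unity. The field names and the way results get surfaced are simplified stand-ins rather than my exact code.

```csharp
using PolyToolkit;
using UnityEngine;

// Rough sketch of the tag-to-Poly lookup, assuming the Poly Toolkit for Unity.
// "assetTag" is a made-up field standing in for the tag each illustration carries
// (e.g. the astronaut is tagged "space").
public class PolyTagSearch : MonoBehaviour
{
    public string assetTag = "space";

    public void SearchForTag()
    {
        PolyListAssetsRequest request = new PolyListAssetsRequest();
        request.keywords = assetTag;   // search Poly for the selected object's tag
        request.pageSize = 3;          // only the top three results are shown

        PolyApi.ListAssets(request, OnResults);
    }

    void OnResults(PolyStatusOr<PolyListAssetsResult> result)
    {
        if (!result.Ok)
        {
            Debug.LogError("Poly search failed: " + result.Status);
            return;
        }

        // In the scene, each result would be positioned above the selected
        // illustration (e.g. transform.position + Vector3.up * (1f + 0.3f * i)).
        for (int i = 0; i < result.Value.assets.Count; i++)
        {
            PolyAsset asset = result.Value.assets[i];
            Debug.Log("Result " + i + ": " + asset.displayName);
        }
    }
}
```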

Otherwise, I completely redesigned the UI and now have some real gesture functionality that works great. Users can select each illustration (with bounding boxes), then position, rescale, and rotate them. Next up is adding a delete option for when there are too many videos, and eventually a timeline that controls a sequence of events would be a game changer for creating stories.
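As a rough illustration of those manipulation gestures (not my exact implementation): a one-finger drag moves the selected illustration, and a two-finger pinch/twist rescales and rotates it.

```csharp
using UnityEngine;

// Simplified sketch of the move/scale/rotate interaction on a selected illustration:
// one finger drags it in the camera's horizontal plane, two fingers pinch to scale
// and twist to rotate around the vertical axis.
public class IllustrationManipulator : MonoBehaviour
{
    public float dragSpeed = 0.002f;

    void Update()
    {
        if (Input.touchCount == 1)
        {
            // Drag: translate along the camera's right axis and the ground-plane forward.
            Touch touch = Input.GetTouch(0);
            if (touch.phase == TouchPhase.Moved)
            {
                Vector3 right = Camera.main.transform.right;
                Vector3 forward = Vector3.Cross(right, Vector3.up);
                transform.position += (right * touch.deltaPosition.x +
                                       forward * touch.deltaPosition.y) * dragSpeed;
            }
        }
        else if (Input.touchCount == 2)
        {
            Touch a = Input.GetTouch(0);
            Touch b = Input.GetTouch(1);

            // Pinch to scale: compare current and previous finger distances.
            float prevDistance = ((a.position - a.deltaPosition) -
                                  (b.position - b.deltaPosition)).magnitude;
            float distance = (a.position - b.position).magnitude;
            if (prevDistance > Mathf.Epsilon)
                transform.localScale *= distance / prevDistance;

            // Twist to rotate: compare the angle of the line between the two fingers.
            Vector2 prevDir = (b.position - b.deltaPosition) -
                              (a.position - a.deltaPosition);
            Vector2 dir = b.position - a.position;
            float angle = Vector2.SignedAngle(prevDir, dir);
            transform.Rotate(Vector3.up, -angle, Space.World);
        }
    }
}
```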

Poly integration (video)

momo (video)