Getting There

For the final round (for this class) I wanted to work on one big feature: allowing users to add their own content in AR. That tool would be a game-changer, opening this application up to any content out there. And it works (kind of)!

A quick recap. I went back into some of the basic elements of this app and redesigned the UI (again). I made uniform icons and a background bar at the top of the screen for navigation. When you open it up, each button is a category that houses the video animation assets I made (tapping one drops that asset into the scene). I worked on the touch interface a bit too, refining the selection outline, and added an option to delete assets.

On to the bigger news. Using a plugin I was very lucky to find, users can access their own image gallery on an iOS device (it should work on Android too, but it's iOS-only for now). Without getting too far into the technical nitty-gritty, I set it up so the image you choose replaces a texture in a prefab that's spawned in front of you. The images are saved as PNGs, so transparency is preserved, and they come with the same interactions as my painting videos.
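The image flow boils down to a few lines of Unity C#. Here's a rough sketch, assuming the gallery plugin works like yasirkula's NativeGallery (the `NativeGallery.*` calls are that plugin's; `imagePrefab` and `spawnDistance` are placeholder names, not necessarily what's in my project):

```csharp
using UnityEngine;

public class ImageSpawner : MonoBehaviour
{
    // Prefab with a quad whose material displays the picked image
    public GameObject imagePrefab;     // placeholder name
    public float spawnDistance = 1.5f; // metres in front of the camera

    public void PickAndSpawn()
    {
        // Opens the native gallery picker; the callback gets a file path
        NativeGallery.GetImageFromGallery(path =>
        {
            if (path == null) return; // user cancelled

            // Load the picked file as a Texture2D (PNG keeps its alpha channel)
            Texture2D tex = NativeGallery.LoadImageAtPath(path, maxSize: 2048);
            if (tex == null) return;

            // Spawn the prefab in front of the camera and swap in the texture
            Transform cam = Camera.main.transform;
            Vector3 pos = cam.position + cam.forward * spawnDistance;
            GameObject go = Instantiate(imagePrefab, pos,
                Quaternion.LookRotation(cam.forward));
            go.GetComponentInChildren<Renderer>().material.mainTexture = tex;
        }, "Choose an image");
    }
}
```

The nice part of this approach is that the spawned prefab is just another scene object, so it automatically inherits the existing selection, move, and delete interactions.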

The last update was extending this to video. It worked for a solid day (then broke on an Xcode hangup I'm still figuring out). It works the same way as the images, and supports transparency as well.
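Video follows the same pattern, except the prefab carries a VideoPlayer that streams the picked file instead of a static texture. Another hedged sketch, again assuming a NativeGallery-style picker and a prefab wired to render the video onto its own material (names are placeholders):

```csharp
using UnityEngine;
using UnityEngine.Video;

public class VideoSpawner : MonoBehaviour
{
    // Placeholder prefab: a quad with a VideoPlayer rendering to its material
    public GameObject videoPrefab;
    public float spawnDistance = 1.5f;

    public void PickAndSpawn()
    {
        NativeGallery.GetVideoFromGallery(path =>
        {
            if (path == null) return; // user cancelled

            Transform cam = Camera.main.transform;
            Vector3 pos = cam.position + cam.forward * spawnDistance;
            GameObject go = Instantiate(videoPrefab, pos,
                Quaternion.LookRotation(cam.forward));

            // Point the prefab's VideoPlayer at the picked file and play it
            var player = go.GetComponentInChildren<VideoPlayer>();
            player.source = VideoSource.Url;
            player.url = "file://" + path;
            player.isLooping = true;
            player.Play();
        }, "Choose a video");
    }
}
```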

I really considered building some kind of timeline/sequencing tool to make more controlled experiences, but it's a daunting task to recreate Unity's timeline editor as a mobile interface. I'm not saying never, but that's tacked onto the to-do list. I'd also love to eventually add social/sharing options and generally get this out there to see what people come up with. The final stage will be to look more at my server setup and create a publicly accessible layer so you can leave your AR experiences for anyone to see.