Category Archives: Vision

A Vision

A Vision for the Future

I’m scraping the last shreds of grey matter I have left at the moment. I will elaborate, but this is the outline:
0. Dawn of a new potential
1. Improvement of the tech as a driving force
2. Developing storytelling methods in the new mediums
3. Democratizing creation
0. Lumieres
1. Griffith/Melies/etc
2. Eisenstein/Kuleshov/and every movie you’ve ever seen
3. Super8/DV Cam/HD DSLR/Smartphone/Non Linear Editors/Online Education
0. Rebirth by Mark Bolas/Oculus
1. Lighter/Faster/Higher Resolution/More Biometrics
2. You/Me/Us
3. Free Engines/Javascript/UE Blueprints/Online Assets/Open Source Software

Searching for videos in an immersive space

We’re faced with a really unique situation. Some might see it as a problem we need to solve, but I see it as incredibly lucky, because I love thinking about this stuff. We’re at a time and place where a toolkit for a new medium has been dropped in our laps, and there’s a lot to find out about it.

So I started with this question of how to address search in AR, expecting that it might be a lifelong question and something I’ll take a first try at now. When brainstorming AR or immersive search, my main question is how a spatial dimension changes things. My first approach was to imagine video “options” that unfolded as someone walked into a collision event. Search results relating to a video would drive the plot forward, a mixture of stream of (internet) consciousness and choose-your-own-adventure.

But anytime I hear “choose your own adventure,” I come to a full stop.

Depth and environment seem to be the big breakthroughs with AR. Most things I’ve made and most things I’ve seen so far have been similar to spatial VR (or VR in AR), where you drop a camera feed into a Unity scene. I’m excited to start interacting more with the environment (it will be an amazing moment for cinema once it adopts this), and I see the future of search coming from situational details. With ML and content recognition, you’re walking down the street with your AR contacts, watching some story unfold all around you, and the objects you see inform how the story develops and what assets are added. It’s a strange form for search to take; maybe there’s room for more user input.

This week my main goal was to open up my project so that users wouldn’t rely only on my own collection of narrative elements. That’s a bit tough for the type of elements I’ve created so far: since they’re illustrations drawn in a specific style, I wanted to find something that matched that a little. Using the Google Images API, you can narrow the search down to animated line drawings, which is pretty close. It’s not ideal, and I want to build up my repository of illustrations, but for now it’s a quick way to get a wide range of material in there.
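For the curious, here is a rough sketch of that kind of narrowed image search in Python (the project itself lives in Unity/C#, but the request shape is the same). It assumes Google’s Custom Search JSON API; the key and engine ID are placeholders, and `imgType` takes one value at a time (e.g. `lineart` or `animated`).

```python
from urllib.parse import urlencode

def line_drawing_search_url(query, api_key, engine_id):
    """Build a Custom Search request URL narrowed to line-drawing images.

    api_key and engine_id are placeholder credentials.
    """
    params = {
        "key": api_key,
        "cx": engine_id,
        "q": query,
        "searchType": "image",  # image search mode
        "imgType": "lineart",   # restrict results to line drawings
    }
    return "https://www.googleapis.com/customsearch/v1?" + urlencode(params)
```

Fetching that URL returns JSON whose `items` each carry an image link you can texture onto a plane in the scene.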

So unfortunately, for now, I think we still need to rely on text-based search, or a combination of that and image searches keyed to the assets chosen for the scene (each of the illustrations would have a tag). For the interaction, I’m thinking of a spatial component that layers results as transparencies receding back in space. If possible, I’d like to create a preview where the plane that would hold the video shows what each option looks like in the space.
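The layering idea is simple enough to sketch: each result plane steps back in z and fades out with depth. This is a Python sketch of the layout math only; the spacing and fade values are made-up tuning numbers, not anything from the actual project.

```python
def layer_results(results, depth_step=0.5, fade=0.2):
    """Place each search result on its own plane, stepping back in z
    and growing more transparent the deeper it sits.

    depth_step and fade are made-up tuning values.
    """
    placed = []
    for i, name in enumerate(results):
        z = i * depth_step                 # distance behind the front plane
        alpha = max(0.0, 1.0 - i * fade)   # opacity falls off with depth
        placed.append((name, z, alpha))
    return placed
```

In Unity, each tuple would become a quad positioned along the camera’s forward axis with its material alpha set accordingly.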

In practice I ran into a lot of problems integrating a basic image search in Unity. Since I’m working spatially, Poly makes a lot of sense and has a great repository, so even though it doesn’t match exactly what I’m up to, it’s a good placeholder. I connected its API with Unity and set it up so that when an asset in the scene is selected, its tag is searched in Poly (for example, the astronaut is tagged with “space”). I took the top three results and displayed them above each object.
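Stripped of the Unity plumbing, the Poly call is just a keyword search plus a top-three pick. A Python sketch, assuming Poly’s REST `/v1/assets` endpoint (the API key is a placeholder, and the sample response shape mirrors Poly’s `assets[].displayName` field):

```python
from urllib.parse import urlencode

POLY_SEARCH = "https://poly.googleapis.com/v1/assets"

def poly_search_url(tag, api_key):
    """Request URL for Poly assets matching an object's tag,
    e.g. the astronaut illustration tagged "space"."""
    return POLY_SEARCH + "?" + urlencode({"keywords": tag, "key": api_key})

def top_three(response):
    """Pull the first three asset names out of a parsed Poly response."""
    return [a["displayName"] for a in response.get("assets", [])[:3]]
```

The three names (and their model URLs, in the real response) are what get displayed above the selected object.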

Otherwise, I completely redesigned the UI and now have some real gesture functionality that works great. Users can select each illustration (with bounding boxes), then position, rescale, and rotate them. Next up is a delete option for when there are too many videos; eventually, a timeline controlling a sequence of events would be a game changer for creating stories.
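The state those gestures manipulate is small: a position, a uniform scale, and a rotation per illustration. A minimal Python model of it (names and defaults are mine, not the project’s; in Unity this is just the object’s Transform):

```python
from dataclasses import dataclass

@dataclass
class Illustration:
    """Stand-in for a placed illustration: the state the gestures edit."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    scale: float = 1.0
    rotation: float = 0.0  # degrees about the vertical axis

    def move(self, dx, dy, dz):
        """Drag gesture: translate in space."""
        self.x += dx
        self.y += dy
        self.z += dz

    def rescale(self, factor):
        """Pinch gesture: multiply the uniform scale."""
        self.scale *= factor

    def rotate(self, degrees):
        """Twist gesture: add rotation, wrapped to [0, 360)."""
        self.rotation = (self.rotation + degrees) % 360.0
```

A delete gesture would then just remove the instance from the scene’s list, and a timeline would keyframe these same fields over time.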


Assignment 4

Give us your big picture ideas about how emerging technologies and new forms of storytelling will change the media landscape. Please use category ‘Vision’.

This question can be quite difficult to answer, mainly because the way one would and could imagine emerging tech influencing the media landscape can be completely wrong. It forces people to think of a multitude of ways that this change can happen.

However, I believe that technologies like augmented and virtual reality will inform the media landscape in ways that previous technologies couldn’t.

I believe social VR will be a thing in the future. However, I’m not sure if it will be sustainable; maybe it will just reach a specific community. My thought is that it will have its biggest impact on the gaming community. Those who identify as “gamers” often do not mind being in the same physical space for hours, because the game they’re playing is immersive to the point that they forget about it. But I think most consumers will eventually lose interest, because fundamentally, as a species, we still enjoy the feeling that comes with real human interaction and socialization.

Additionally, I believe more than ever that people will receive their news in a new way. At one point in time, the only way to receive news was over television or radio. Then we could log onto our desktops at home and get news on CNN.com or a similar news page. Now we have that same access with our mobile devices. Very soon, I believe, we will have access in our glasses, and this technology will layer information onto our reality.

Diving in infinite immersive information

Why are so many stories just boring? Is it because the subject is meaningless or the storyteller untalented? Or is it because the storyteller doesn’t have a medium in which they can tell it? The large number of new technologies emerging year after year gives us exciting possibilities to express ourselves and tell the stories that form in our heads. And people all around the world are picking up these tools and pushing them to the limit, creating new forms of what we used to think was finished…

Read more in my blog…

HyperCinema: Are we closer than we think?

This course has challenged us with imagining the future of storytelling: if we imagine a future where assets like settings, characters, plots, and other story elements are created on a massive scale by individual users and shared within a network, how do creators find each other and collaborate? How will creative skillsets that currently require specialized and expensive knowledge be further democratized and made widely accessible?

There are already case studies readily available on the internet of how this can be done. I will look at the following fandom communities, examine how assets are currently created, remixed, and shared, and try to imagine how this will evolve over time:

  • Tumblr TV Fandoms
  • Gamer Mods

Tumblr

Scrolling through the fandom tag

Scroll through any tag even containing the word #fan on Tumblr and behold acres and acres of sometimes brilliant, but mostly cringe-worthy, fan-created and remixed content.

On this platform, creators use tags to self-filter their contributions and their consumption. When users follow their favorite profiles as well as their favorite tags, those posts will automatically show up chronologically on their dash. Additionally, users can search tags for like content chronologically or by popularity.

Users can post their own content in the form of text, images, and videos or they can reblog others’ posts. Notes, the metric by which the popularity of a post is measured, are recorded on each post in the form of Likes, Reblogs, and Replies.

Replies are what make the remixed fan content you will find on Tumblr so interesting to the discussion of the future of storytelling. Rarely will posts remain in isolation. Tumblr’s endless-scrolling nature allows for things like gif-sets of popular culture, sequential screen grabs that, when combined, form new meanings––forms of storytelling unique to the platform. With Replies, users end up generating long chains of additions to original posts that become a form of collaborative storytelling (and joke telling), internet sleuthing, historical research/context, and even critical discourse (of varying degrees of quality).

People with similar interests find each other through a shared visual and cultural vocabulary. As a result of its meme-ifying architecture and its millions of users’ creative labor, Tumblr is uniquely situated to help us imagine what a future of fully networked narrative content could look like. The next step would be thinking beyond the memes and the monkeyshines towards how this shared vocabulary could evolve into actual shared 3D assets.

A Note on Bias and Toxicity

One aspect of internet fan-culture that cannot be overstated is its tendency towards toxicity and exclusivity: a boys’ club, if you will. Even if the majority of internet fans don’t create their virtual communities with the intention of excluding those deemed other, we still bring our biases into our virtual escapes. Tumblr has a number of examples of horrendous mob-like bullying and straight-up hate, just as it does beautiful expressions of collective creativity––much like the internet as a whole. It is imperative to stay vigilant and critical, both within and without.

I am optimistic, however, in the power of experiential technology. To reiterate part of my post from the first week of class, the democratization of cinema will affect the mainstream culture at large:

The stories we tell and how we tell them affect our understanding of the world and therefore affect our understanding of the possible.

When the wealth of experience there is in the world is more broadly represented and when more audiences have access to it, we may no longer have to endure the failure of imagination our society now faces.

The big question now is how.

Video Game Mods

There are multiple examples throughout the history of video games of enthusiasts modifying the code, textures, visuals, and mechanics of their favorite video games to riff on the existing assets or create entirely new properties. Today, mods can be purchased on Steam, the social network slash gaming platform, and many more are shared as open-source projects. These platforms are usually focused on competition (they are games, after all), but the problem of accessibility remains: there is no centralized resource through which to access the elements of a mod, nor is there any way to make or combine them without some amount of specialized knowledge.

While my personal favorite use of this medium is for goof-em-ups (see examples below), fully independent narrative and artful games have been created using mods. As the technology for creating virtual assets progresses, these techniques will become more and more accessible. I would like to explore the possibility of using these existing assets to create interactive stories within the sandboxes mod-enthusiasts have already built.

I want to bring special attention to Garry’s Mod (link below). Garry’s Mod (or GMod) is a physics sandbox made with a modified version of Valve’s Source engine. Valve is the gaming company that brought us seminal classics like the Half-Life series and Team Fortress. Fans buy the mod from Steam and use it to make their own worlds and characters, sharing them in-game through multiplayer and broadcasting them on streaming services like Twitch and YouTube. In the mid-to-late aughts, there was even a renaissance of comics created using GMod. Concerned was a personal favorite.

Garry's Mod

While the majority of things created with these mods and the communities around them are juvenile, ridiculous, and honestly hard to parse, I don’t think that is necessarily inherent in the form. I’m not above taking inspiration from these long-running, real world examples of networked, collaborative creation.