For the final round (for this class) I wanted to work on one big feature: allowing users to add their own content in AR. That tool would be a game-changer, opening this application up to any content out there. And it works! (Kind of.)
A quick recap: I went back into some of the basic elements of this app and redesigned the UI (again). I made uniform icons and a background bar at the top of the screen for navigation. When you open the app, each button is a category that houses the video animation assets I made (tapping one drops it into the scene). I worked on the touch interface a bit too, refining the selection outline, and finally added an option to delete assets.
On to the bigger news. Using a plugin I was very lucky to find, users can access their own image gallery on an iOS device (it should work on Android eventually, but it's iOS-only for now). Without getting too deep into the technical nitty-gritty, I basically set it up so the image you choose replaces a texture on a prefab that's spawned in front of you. The images are automatically converted to PNGs, so the feature is transparency-friendly, and they come with the same interactions as my painting videos.
One last update: I also did this for video. That worked for a solid day (then broke again thanks to an Xcode hangup I'm still figuring out). It works the same way as the images and supports transparency as well.
I really considered building some kind of timeline/sequencing tool to make more controlled experiences, but recreating Unity's timeline editor as a mobile interface is a daunting task. I'm not saying never, but that's tacked onto the to-do list. I'd also love to eventually add social/sharing options and generally get this out there to see what people come up with. The final stage will be to look more at my server setup and create a publicly accessible layer so you can leave your AR experiences for anyone to see.
Throughout this semester, a gnawing anxiety has been present in the back of my mind. I have ignored it in my blog posts and class discussions because I wasn’t sure if it was relevant to the goals of this class and I wanted to keep an open mind.
Something changed during the discussion in our penultimate class last week.
In hearing Ayal’s pessimism regarding humans’ relationship to immersive, escapist entertainment and Stevie’s ideas about how distributed networks could expand audiences for independent writers and creators, I was inspired to pull at the thread that had been bothering me, namely:
Will we still need actors in the future?
As someone who, in my heart of hearts, believes in the value of the very human craft of acting, this question terrifies me.
Increasingly, commercial actors are pushed into tighter and tighter constraints in terms of what they are expected to look like and the opportunities available to them based on “type,” women and minorities especially. It’s very easy to fall into an apocalyptic mindset: as entertainment becomes more and more data-driven, these definitions of who a person is and can be could become narrower and narrower. We are already experiencing a societal failure of imagination as the wealth gap grows, hate becomes more and more visible, and trafficking in nostalgia becomes the default form of entertainment. The hegemony abides.
If we can generate (in the computational sense) characters, even actors/celebrities, tailored to the precise whims of their target audiences, where does that leave the humans? Will acting become an antiquated skill? Something that people used to have to do before we could create the perfect performer? Will theater meet the fate of vaudeville or the nickelodeon and fade into memory as a curiosity of the past?
I of course hope not. I think most people recognize the value in forms of expression that do not presuppose “realism” and take joy and satisfaction from experiencing them. We still have circus acts and puppetry and cartoons. As the second reading of this course, Flicker, illustrated, our minds are looking to make those connections, and things that are inherently non-naturalistic (like cuts in movies) can actually enhance the emotional impact or aesthetic experience of a work. Just as movies can communicate through cuts, live theater can communicate through shared experience in ways that aren't possible in any other medium. What will future immersive mediums do that nothing else can, and will human performers have a place in them?
I tried to play with that idea for my final project, starting with an assumption that the human body will always have a place in cinema.
Concept: The Future of Acting, or The Future of Self-Insert Fanfiction
Users can insert themselves into their favorite movie scenes––as their favorite characters or as entirely new characters. They can mimic the performances of the original actors precisely or they can add their own spin. Each performance is stored in a networked database so that users may see the choices made by those before them, conjuring the past recorded forms of previous actors with their own movements.
In this prototyped iteration of the concept above, the Key function of the Kinectron app is used to superimpose a translucent image of the user on top of the selected movie scene. A recording is made of both the scene with the keyed-in actor and their Kinectron skeleton. Kinectron automatically saves the positions of all the joints on all the recorded bodies in each frame of the scene as a JSON file. These JSONs can then be played back and compared, frame by frame, with that of the current user. Performances that are similar are automatically grouped, and those similarities can be found across multiple parameters.
For instance, if a user lifts their arms above their head in a certain frame, the keyed images and metadata of all past users who lifted their arms above their heads at that same frame will appear within that scene in the user's search. Ideally, cases could be defined in the code that allow similarities of intention to be found in addition to similarities of position. For example, we can broadly define what an aggressive physical stance would look like as opposed to a fearful or shy one. At our current stage of technological evolution, this is where AI would come in handy. I can imagine training a neural network to recognize an actor's motivation based on the positions of their joints and then to find others within the network who made similar choices.
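To make the “arms above head” example concrete, a pose check could be sketched as a simple predicate over one frame of joints. This is an illustrative sketch, not the project's actual code: it assumes each frame has been flattened into plain `{x, y, z}` joint objects, and the joint indices follow the Kinect v2 layout (3 = head, 6 = left wrist, 10 = right wrist), which you should verify against your sensor.

```javascript
// Hypothetical pose predicate: are both wrists above the head in this
// frame? Joint indices assume the Kinect v2 layout (3 = head,
// 6 = left wrist, 10 = right wrist) -- confirm for your sensor.
// In Kinect camera space the Y axis points up, so "above" means a
// larger y value.
const HEAD = 3, WRIST_LEFT = 6, WRIST_RIGHT = 10;

function armsAboveHead(joints) {
  return joints[WRIST_LEFT].y > joints[HEAD].y &&
         joints[WRIST_RIGHT].y > joints[HEAD].y;
}

// Filter a set of past recordings down to those whose pose at frame f
// matches the predicate -- the basis for grouping users by choice.
function usersMatchingPose(recordings, f, predicate) {
  return recordings.filter(rec => rec[f] && predicate(rec[f]));
}
```

The same `usersMatchingPose` shape works for any predicate, so “aggressive stance” or “fearful stance” checks could slot in later without changing the grouping logic.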
Finding Videos with YouTube Data API –– code || live example
Video Playback with Kinectron Key –– code || live example (requires Kinectron)
Record Key and Skeleton ––
- create buttons using the native startRecord() and stopRecord() functions to capture key
- to record the skeleton in JSON format as a body object, use saveJSON() from the p5.js library
- you will then have to manually extract the position values for each joint nested within the body object
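The extraction step above might be sketched like this. It's a sketch under assumptions: the `cameraX`/`cameraY`/`cameraZ` field names follow Kinectron's body object (check them against your Kinectron version), and `onBodyFrame` stands in for whatever body callback you register with the Kinectron client.

```javascript
// Flatten a Kinectron body frame into a plain array of joint
// positions so a whole performance can later be saved with saveJSON().
// Field names (cameraX/cameraY/cameraZ) follow the Kinectron body
// object -- an assumption; check your Kinectron version.
function extractJointPositions(body) {
  return body.joints.map(joint => ({
    x: joint.cameraX,
    y: joint.cameraY,
    z: joint.cameraZ,
  }));
}

// One entry per frame; when recording stops, write the array out
// with p5's saveJSON(recording, 'performance.json').
const recording = [];
function onBodyFrame(body) {
  recording.push(extractJointPositions(body));
}
```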
Compare Skeletons ––
- use a for() loop to read through each entry in the body.joints array you created above
- for each joint in each frame, calculate the difference between that joint's position and the corresponding joint from other recordings
- recordings with smaller differences are most alike
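The comparison steps above could be sketched as a per-frame, per-joint Euclidean distance averaged into a single score. This is a naive sketch: it assumes both recordings share the same frame rate and joint ordering, and it ignores differences in body size and position in the room, which a real version would normalize for.

```javascript
// Euclidean distance between two {x, y, z} joint positions.
function jointDistance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Compare two recordings frame by frame: sum the distance between
// corresponding joints, then average over the frames both share.
// Lower scores mean more similar performances.
function compareRecordings(recA, recB) {
  const frames = Math.min(recA.length, recB.length);
  let total = 0;
  for (let f = 0; f < frames; f++) {
    for (let j = 0; j < recA[f].length; j++) {
      total += jointDistance(recA[f][j], recB[f][j]);
    }
  }
  return total / frames;
}
```

Grouping then falls out of sorting past recordings by this score and keeping the closest ones.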
People will be able to compare their own performances as well as the performances of others. I see this not as a tool for competition and deciding who did it “best” but rather as a tool for knowledge building.
At this point, I freely admit that I am––kind of blindly––following an impulse. I can’t exactly draw a line directly between my idea and a future that values human contributions to cinematic performance. I know that the impulse exists in many people to play pretend well beyond childhood and there will always be fan communities. Giving more people access to high quality methods of recording themselves performing and democratizing the way those performances are shared can only be a good thing… right?
While discussing current events, I heard that falcons are being used for pigeon control in NYC––so much so that they are on the payroll of the city of New York! I had to investigate.
The title is taken from a Muslim belief that there is a highest name of God, and that by knowing it you can do anything.
A Vision for the Future
Let's get all the technical difficulties out of the way. What would we use to communicate?
What if the technology of 200 years from now were already here? Say we have VR technology that can seamlessly and effortlessly create the immersive world we ask of it, and that we've had it for centuries. We can recreate anything we want for anyone we want to communicate with. Okay: WHAT do you want to communicate? If you want to show people what happened, what exactly are you thinking of? Is it what really happened, or what you think happened?
Well, if you want to recreate what happened objectively, it wouldn't really be possible. Say you want to depict “I was sitting under an apple tree”: How were you sitting? What were you wearing? Were you on grass or on soil? What color were the apples? What color was each individual apple? What did the sky look like? Was the sun exactly where you think it was?
Let's make it easier: depict an “apple” for me. It was the most beautiful apple you have ever seen. Okay, can you describe it for me? Can you describe it so accurately that I can make an accurate enough 3D model of it? How would you go about describing an apple if I had never seen one?
You can't describe the world accurately. You can hardly describe anything accurately. You don't even have the capacity to store or digest all the information about anything in the objective world. It took us millennia to be able to describe ANYTHING accurately. You might have learned about things that can be accurately described––say, a triangle, which has a very definite description (vertices, angles, etc.), where anything that doesn't comply 100% with that description is definitely not a triangle. But you had to LEARN that description, or spend years figuring it out on your own. And it's just a triangle.
You were made in a world which defies description, and it's not as if your evolution never faced that problem. It's no mystery that we chunk information into easily storable packages that are vague enough to include all the detail and precise enough to be distinct from other packages. But there's an infinite amount of information, and therefore infinite ways to package a subject. How do we determine how much information is “just enough”? We inevitably do away with most of the information about something, because we don't seem to be able to store all of it, so we discard the less important parts. But how do we determine which data is important to store and which is not?
Evolution has a weirdly simple answer to that: whatever information that is important for your survival.
Have you ever thought about being able to communicate with other creatures? Say you want to communicate with a bee. Knowing what we know about storing and processing information, what do you think we would have to say to a bee?
I think by now it's safe to say that we're trying to recreate perception, not reality. You are inevitably immersed in the subjective world, so it makes sense for you to tell the story from the subjective perspective. And once we do away with reality, it becomes a choice of how much of your perception to include or exclude. Even when the event is happening right here and now, and I ask you to “look,” what I'm essentially saying is “turn away from and exclude the other things you could turn toward, and focus on perceiving what's happening there.” I'm asking you to remove unnecessary chunks of reality from your consciousness and include the ones that I think are necessary.
Well, how is that different from using words and ultimately telling a story?
I think words are our best invention for mirroring what really happens in our heads when we chunk away the objective world in order to be able to operate in it. Words sit exactly in the sweet spot, and when necessary they let us add suffixes and prefixes to narrow things further down to the benefit of our story. Narrowing the criteria further and further might seem like a good option, but is it?
There's a story in the Quran (I'm not sure how it's told in the Bible) in which a murder happens among the Israelites. Moses asks God to help them out, and God tells Moses to sacrifice a cow; then the truth will be revealed. The Israelites asked Moses to narrow it down: What cow? What color? Male or female? How old? They kept asking and Moses kept answering, until the demand was for a very specific golden-colored female cow with such-and-such attributes. They had a hard time finding that cow, but they finally did, the sacrifice worked out, and the murderer was found. But things would have gone fine if they hadn't been so picky about the cow being the “right” cow.
In many ways, when we're communicating with someone, we're engaging in a sort of pretend-play. In that realm, things are reduced to their functions and related actions, not their objectivity. You can play house with your childhood friend and neither of you is actually a doctor or whatever, but you function as one and perform actions that represent that objective reality. So you rely heavily on imagining, simulating, and suspending disbelief.
It doesn't have to be a child's game. It's a sort of logic in its most simple form. You can even think of a mathematical equation (and that's a deep rabbit hole)––a universally accepted thing like “1 + 1 = 2.” Well, what you're really saying is 1 (thing) + 1 (same thing) = 2 (things). So 1 apple + 1 apple = 2 apples.
But wait a second, those apples aren’t the same thing now, are they? You’re practically engaging in pretend-play at many levels for such a fundamental equation to be “true”.
So, a wrench here: What if, in this future world, we end up communicating exactly the way we communicate in… our world? Look, you have everything you need to create in the world we live in. What does this world lack that you need the virtual world for in order to communicate with others? Wouldn't we resort back to the most efficient way to tell a story? Wouldn't we end up using… words?
So maybe we're getting ahead of ourselves. Maybe the task is finding the word of the future.
In the beginning was the Word, and the Word was with God, and the Word was God. – John 1:1
- What do i want to communicate? what happened or what we think happened?
- You can’t communicate what happened, because you don’t have the capacity to digest and store all the information about… well anything.
- doing away with the details is just what we do.
- To an autistic person, one way to describe it: the whole room changes if you move one element, because in all reality the scene is NOT what it used to be.
- let’s say you can communicate with every creature in the world. What would you have to say to a bee or fish?
- well, what is “just enough” information?
- are you truly creating, or are you just remixing?
- how much of it is thought?
- okay so you can't communicate what happened. so we live in the realm of what we perceived happened.
- do you want to communicate your perception fully? well, words seem satisfactory.
- but as an image? are we in the realm of logic and relationships anymore?
- A mathematical representation of things bears no resemblance to reality, but it's no less truthful.
- But the reality itself, two apples are not identical. 1*apple != 1*apple. things aren’t one thing in the real world.
- what are we communicating with then?
- in many ways, i think all our communication is pretend-play. “Let's say” is “Let's pretend.” Let's pretend all these apples are actually equal.
- are you responsible for the information you didn't perceive?
- understand that we're not creating art here and leaving it to the reader to interpret. It's easy, easy, easy to jump there. But you're COMMUNICATING. You are doing your best, hopefully, to make sure you are both on the same page. On the same page enough.
- Dawn of a new potential
- Improvement of the tech as a driving force
- Developing storytelling methods in the new mediums
- Democratizing creation
- Eisenstein/Kuleshov/and every movie you’ve ever seen
- Super8/DV Cam/HD DSLR/Smartphone/Non Linear Editors/Online Education
- Rebirth by Mark Bolas/Oculus
- Lighter/Faster/Higher Resolution/More Biometrics
This final assignment was curated by Stevie Chambers and Terrick Guttierez. In an effort to tell an interactive story that many people could easily relate to, we took a short trek around the neighborhood and made a record of some spots we pass by on our daily commute to ITP. On the way, we verbally noted the corners where our paths intersect––which, while not necessarily surprising, was cool to think about considering that we travel from two completely different places.
Using a 360 camera, we started our mock commute at the PATH station on 9th Street, detoured from Stevie's normal route to get to Terrick's train stop on 4th Avenue, and continued walking toward Tisch, making stops for more 360 shots along the way (including a few of the places we would meet if we were, say, unintentionally getting to ITP at the same time). We then used dupes (3D scans) of ourselves to “walk” the map in our miniature city, which was made in Unity.
We think of this project as a tool––one that could allow you to see how friends typically commute to a common destination, in comparison to your own route (and it is not intended to be creepy at all!).
Final Project by Stevie C. and Terrick G.