Issue 5: Reality?

Death of The Hologram & The Life That Comes Next

By Or Fleisher

An exploration of recent innovations in 3D technology, and how these developments will change our perceptions, our relationship with screen-based media, and what we call realistic.

The uncanny valley is a term introduced by roboticist Masahiro Mori and popularized in English by Jasia Reichardt in her book Robots: Fact, Fiction, and Prediction. In aesthetics, the uncanny valley is a hypothesized relationship between the degree of an object’s resemblance to a human being and the emotional response to such an object (Karl F. MacDorman & Hiroshi Ishiguro). This idea is perhaps best suited to describe not a phenomenon but an era, one I would argue we are transitioning out of.

This era I speak of began as early as the 1980s, when popular cinema and television began depicting holograms of the future. These transparent blue figures have had a great influence on our perceptual model of what holograms “should” look like. Nowadays, innovations in machine learning, computer graphics, and hardware are paving the way for holographic content to become mainstream, and yet some of the questions I still ask myself are: Why are we so obsessed with realism? What is the archival benefit of documenting humans in 3D? What is the connection between holograms and personal assistants?

Hyperreal 3D-rendered representation of an elderly man.
Image credit: AlteredQualia (Branislav Ulicny), Uncanny Valley WebGL experience

Our lives are surrounded by interfaces: apps, physical signage, and, more recently, voice interfaces such as Amazon’s Alexa and Google Home. These interfaces are meant to serve specific functions in our day-to-day lives, but more often than not, they look, feel, and sound nothing like us. By emphasizing function instead of speculating over imitative visuals, they mostly avoid that uncomfortable “uncanny valley” feeling. With the growing demand for computer-generated imagery (CGI) in recent years, and the popularization of platforms such as augmented and virtual reality, computer games, and interactive filmmaking, it is clear that there is potential for new human-computer interfaces that also resemble us visually.

In order to attain this level of “realness” for our smart assistants, companies and artists are exploring various forms of 3D capturing meant to replicate the human element in new and compelling ways beyond two-dimensional pixels. These tools form the basis for volumetric capturing, a collection of techniques used to capture three-dimensional humans.

These volumetric capturing techniques have arrived in a variety of ways. Some are the result of a product vision, such as Intel’s Replay technology for 3D sports replays, which engages sports fans in a new way by allowing them to replay a move from different angles. Others are born from computational aesthetic explorations, such as Scatter’s Depthkit, which was initially developed in order to create CLOUDS, a volumetric documentary about creative uses of software. One thing they all share is the visual, technical, and aesthetic exploration of how to represent and document real humans in 3D space.

Captured shot of a football game
Image credit: Intel TrueView

During my thesis research at ITP, I focused on the possibilities of using machine learning to reconstruct archival and historical footage in 3D. The idea of using machine learning was born out of a desire to look back into nearly two centuries of visual culture (i.e., 2D photography) and speculate about how to bridge the growing gap between 2D and 3D.

Volume — NYU ITP thesis presentation on Vimeo

The accessibility of these “spatial interfaces” in our pockets has been a boon for closing that gap. Apple has included a depth sensor in its newest iPhones; Facebook now lets you post 3D photos to your wall; and Snapchat’s augmented reality facial filters are immensely popular. My research has led me to believe that we are in a moment of acute awareness of the transition from 2D to 3D, and even though some might argue that we are already in the 3D era, to me it seems that this is only the tip of the iceberg. Much like the transition from black and white to full color, or from analog film to digital, the transition from 2D to 3D could be revolutionary.

Chart describing the progression of photography from black and white to volumetric.
The evolution of photography. Image credit: Volume

For example, today we consider black-and-white imagery a symbol of authenticity and age, a representation of the reality of its time. So how will we look back at two-dimensional media a hundred years from now? Will it be regarded as only an artifact, maybe a quaint one at that, of our past? Perhaps we can bridge the gap using new technologies.

An example of the cultural impact of the transition from black and white to color is Peter Jackson’s latest film, They Shall Not Grow Old, which uses machine learning to re-color footage from World War I. The result is a rather chilling experience that puts our conventions of documented history to the test and, I would even argue, forces an uncomfortable confrontation with our tendency to keep the process of learning about our history at a distance. With that said, it’s worth taking some time to understand how some of these technologies work.

What is volumetric capturing?

Volumetric capturing derives from the field of computational photography and refers to the ability to capture and store three-dimensional information about objects or human figures. There is a wide variety of techniques, ranging from laser scanning (also referred to as LIDAR scanning) to infrared depth sensors (a notable example is Microsoft’s Kinect camera) to, most recently, the use of machine learning and convolutional neural networks to reconstruct 3D objects from 2D images. These methods all have roots in different fields, including defense, robotics, and topography, but are now being used more and more for art, entertainment, and media.
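
Whatever the sensor, most of these techniques bottom out in the same core operation: turning per-pixel depth into points in 3D space. Here is a minimal sketch in Python with NumPy, assuming an idealized pinhole camera model (the intrinsic values fx, fy, cx, cy below are made-up illustration numbers, not taken from any real device):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into a 3D point cloud
    using a simple pinhole camera model."""
    h, w = depth.shape
    # Grid of pixel coordinates (u along columns, v along rows)
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    # Stack into an (N, 3) array of XYZ points, dropping empty pixels
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Toy example: a flat wall 2 meters away, seen by a tiny 4x4 sensor
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3) — one 3D point per pixel
```

Infrared sensors like the Kinect produce exactly this kind of depth map in hardware; the machine learning approaches instead estimate the depth map from an ordinary 2D image before the same back-projection step.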

3D point-cloud representation of human figures.
Image credit: GHOST CELL by Antoine Delacharlery

Computational humans? Alexa gets a body

Innovation in machine learning doesn’t only affect the fidelity of 3D representations; it also provides ground for procedurally generated facial expressions and dialogue that bear an amazing resemblance to us, their human counterparts. Popular entertainment is taking note. To create the facial expressions behind Thanos in Marvel’s Avengers: Infinity War, VFX studio Digital Domain created machine learning-driven software, called Masquerade, which aids artists in developing more human facial expressions. Imagine Google’s Duplex demo combined with the facial performances produced by Masquerade: personal assistants are poised to get a big facelift, quite literally.

Thanos of the Marvel Avengers movies.
Image credit: Marvel’s Avengers: Infinity War

After watching some of these tech demos, I found myself engaged in a conversation about the nature of personal assistants with my friend, Dror Ayalon. An interesting point arose: we are experiencing a transition in which our personal assistants are morphing into personal companions. The idea of embodying the voice that keeps our Amazon shopping list, turns the lights on, and sets a timer while we cook is yet another step toward Alexa getting a body and becoming a human companion.

We are experiencing a transition from personal assistants to personal companions.

Films have imagined this idea before, and it seems there is still a ways to go before we arrive at the vision portrayed in Her, where the personal assistant sounds like Scarlett Johansson and helps you win a holographic video game.

Hologram sequences from the film Her

There is an argument to be made that an embodied Alexa wouldn’t necessarily have to look like us. Take, for example, Anki’s Vector robot, which provides a very compelling experience without humanlike visual features, and feels like a physical embodiment of some of Pixar’s ideas about conveying emotion through sound and facial expression.

Anki Robot Video

That said, a human representation of smart assistants could stretch beyond novelty and utility into something that resembles a relationship, not just “Order more toilet paper.”

What awaits beyond realism?

With volumetric capturing, I would argue that alongside the commercial pursuit of realism, we are going to see more and more attempts at capturing and representing emotion through experimental means, embracing the technical defects currently present in volumetric capture as part of an aesthetic vision for art.

Frame of a point-cloud room from A Light in Chorus.
Image credit: A Light in Chorus on YouTube

One example of this, which I recall from conversations I’ve had with Shirin Anlen, is Chris Landreth’s idea of psychological realism. Landreth coined the term and described it as “The glorious complexity of the human psyche depicted through the visual medium of art and animation.”

This idea essentially describes a cinematic technique in which the director uses fantasy-like elements to reflect a character’s emotional state. Landreth wrote a paper describing this mechanism, and also directed and animated the Oscar-winning short documentary film Ryan, which is still considered revolutionary in its use of aesthetics and animation to reflect the inner state of a character. If you haven’t already, you should really take 13 minutes to watch it.

Ryan by Chris Landreth on YouTube

Is it for everyone?

The history of computational photography has been driven by research institutes, startups, and R&D departments in giant tech corporations. All this research and development has led to a reality where you may be able to experience the advances of volumetric capture yourself. With this technology shipping on the iPhone and the Google Pixel 3, it’s as easy as opening your camera and snapping a 3D capture.

It’s impossible to know what virtual realities we’re about to encounter, but at the current speed of innovation, our ideas of what future 3D interfaces might look like may come to seem as quaint as the blue holograms of the past.

All images are credited to reflect the rightful owners.

Or Fleisher (ITP 2018) is an award-winning creative technologist, developer and artist working at the intersection of technology and storytelling. | orfleisher.com