{"id":1765,"date":"2019-04-10T13:02:22","date_gmt":"2019-04-10T17:02:22","guid":{"rendered":"https:\/\/itp.nyu.edu\/adjacent\/issue-5\/?p=1765"},"modified":"2024-10-08T21:20:50","modified_gmt":"2024-10-08T21:20:50","slug":"death-of-the-hologram-the-life-that-comes-next","status":"publish","type":"post","link":"https:\/\/itp.nyu.edu\/adjacent\/issue-5\/death-of-the-hologram-the-life-that-comes-next\/","title":{"rendered":"Death of The Hologram &amp; The Life That Comes Next"},"content":{"rendered":"\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/itp.nyu.edu\/adjacent\/issue-5\/wp-content\/uploads\/sites\/10\/2019\/04\/Or_header.jpg\" alt=\"\" class=\"wp-image-2663\" \/><\/figure>\n\n\n\n<div class=\"wp-block-columns has-2-columns is-layout-flex wp-container-core-columns-is-layout-1 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<h1 class=\"wp-block-heading\">Death of The Hologram &amp; The Life That Comes Next<\/h1>\n\n\n\n<h2 class=\"wp-block-heading\">By Or Fleisher<\/h2>\n\n\n\n<p class=\"blurb\"><em>An exploration of recent innovations in 3D technology and how these developments will change our perceptions, our relationship with screen-based media, and what we call realistic<\/em><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column is-layout-flow wp-block-column-is-layout-flow\">\n<p>The <em>uncanny valley <\/em>is a term coined by<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/en.wikipedia.org\/wiki\/Jasia_Reichardt\" target=\"_blank\"> Jasia Reichardt<\/a> in her book<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.amazon.com\/Robots-Fiction-Prediction-Jasia-Reichardt\/dp\/014004938X\" target=\"_blank\"> <em>Robots: Fact, Fiction, and Prediction<\/em><\/a>. In aesthetics, the <em>uncanny valley<\/em> is a hypothesized relationship between the degree of an object\u2019s resemblance to a human being and the emotional response to such an object (<a href=\"https:\/\/philpapers.org\/rec\/MACTUA-5\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">Karl F. MacDorman &amp; Hiroshi Ishiguro<\/a>). This idea is perhaps best suited to describe not a phenomenon, but an era, which I would argue we are transitioning out of. <br><\/p>\n\n\n\n<p>This era began as early as the 1980s, when popular cinema and television began depicting holograms of the future. These transparent blue figures have had a great influence on our perceptual model of what holograms \u201cshould\u201d look like. Nowadays, innovations in machine learning, computer graphics, and hardware are paving the way for holographic content to become mainstream, and yet some of the questions I still ask myself are: Why are we so obsessed with realism? What is the archival benefit of documenting humans in 3D? What is the connection between holograms and personal assistants?<br><\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/VT8g4WjjmGOigSqJFpMWk8Ul7uzDRzx-7O5hNNBaMtDda10mxhXIzYEYFQCQI-RZAKimdSYgS_Vy5I4IyorQcJxTYR0_7q_lUpZNmPkdsOYo39U8w9e72zmkiOGkq0v-I_sH-cFp\" alt=\"Hyperreal 3D Rendered representation of an elderly man. \" \/><figcaption>Image credit: AlteredQualia \/ Branislav Ulicny, Uncanny Valley WebGL experience<\/figcaption><\/figure><\/div>\n\n\n\n<p>Our lives are surrounded by interfaces, apps, physical signage, and, more recently, voice interfaces such as Amazon\u2019s Alexa and Google Home. 
These interfaces are meant to serve a specific function in our day-to-day lives, but more often than not, they look, feel, and sound nothing like us. When we emphasize function over imitative visuals, we are likely to avoid that uncomfortable \u201cuncanny valley\u201d feeling. With growing demand for computer-generated imagery (CGI) in recent years, and the popularization of platforms such as augmented and virtual reality, computer games, and interactive filmmaking, it is clear that there is potential for new human-computer interfaces that also resemble us visually.<\/p>\n\n\n\n<p>In order to attain this level of \u201crealness\u201d for our smart assistants, companies and artists are exploring various forms of 3D capture meant to replicate the human element in new and compelling ways beyond two-dimensional pixels. These tools form the basis for volumetric capturing, a collection of techniques used to capture three-dimensional humans.<br><\/p>\n\n\n\n<p>These volumetric capturing techniques have arrived in a variety of ways. Some are the result of a product vision, such as<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.intel.com\/content\/www\/us\/en\/sports\/sports-overview.html\" target=\"_blank\"> Intel\u2019s Replay technology for 3D sports replays<\/a>, which engages sports fans in a new way by allowing them to replay a move from different angles. Others are born from computational aesthetic explorations, such as Scatter\u2019s<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.depthkit.tv\/\" target=\"_blank\"> Depthkit<\/a>, which was initially developed in order to create<a href=\"https:\/\/cloudsdocumentary.com\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\"> CLOUDS<\/a>, a volumetric documentary about creative uses of software. 
One thing they all share is the visual, technical, and aesthetic exploration of how to represent and document real humans in 3D space.<\/p>\n\n\n\n<figure class=\"wp-block-image alignwide\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/iJhtQBePo9NRgy5kj1iX7wgqNxTafxyX9o4gGK9LmVIjVql5IB6TzTSjtdkOGwHMul5c7aBm1Rx13tUGmsfJYIPj3KoGmJSiO3TCdFpleAaQ3zHAx7fkJ6AWO_DLUZcWCuCspNHs\" alt=\"Captured shot of a football game \" \/><figcaption>Image credit: Intel TrueView<\/figcaption><\/figure>\n\n\n\n<p>During<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/orfleisher.com\/volume\" target=\"_blank\"> my thesis<\/a> research at<a href=\"https:\/\/tisch.nyu.edu\/itp\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\"> ITP<\/a>, I focused on the possibilities of using machine learning to reconstruct archival and historical footage in 3D. The idea of using machine learning was born out of a desire to look back into nearly 200 years of visual culture (i.e., 2D photography) and speculate about how to bridge the growing gap between 2D and 3D.<\/p>\n\n\n\n<figure class=\"wp-block-embed-vimeo aligncenter wp-block-embed is-type-video is-provider-vimeo wp-embed-aspect-16-9 wp-has-aspect-ratio video-margin\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"ITP Thesis Week 2018 - Or Fleisher - Volume\" src=\"https:\/\/player.vimeo.com\/video\/270190550?dnt=1&amp;app_id=122963\" width=\"500\" height=\"281\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write\"><\/iframe>\n<\/div><figcaption>Volume \u2013 NYU ITP thesis presentation on Vimeo<\/figcaption><\/figure>\n\n\n\n<p>The accessibility of these \u201cspatial interfaces\u201d in our pockets has been a boon for closing that gap. 
Apple has included a depth sensor in new iPhones; Facebook now lets you post 3D photos to your wall; and Snapchat has augmented reality facial filters, which are immensely popular. My research has led me to believe that we are in a moment of acute awareness of the transition from 2D to 3D, and even though some might argue that we are already in the 3D era, to me, it seems that this is only the tip of the iceberg. Much like the transitions from black and white to full color and from analog film to digital, the transition from 2D to 3D could be revolutionary.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh5.googleusercontent.com\/1T4hSRHwk-Z-nxi_YGX4PgdKnGwowt8gO57ny35OL3guKVHhkVtaU2_ylQf4XRYAJEo3pocTFIniajZBztKSjBAoBDr3vV0I7aOi3HBVTXgVX3z2Aj3E_F_ijBiLZmUAOkKo2U0Y\" alt=\"Chart describing the progression of photography from Black and white to Volumetric.  \" \/><figcaption>The evolution of photography. Image credit: Volume<\/figcaption><\/figure><\/div>\n\n\n\n<p>For example, today we regard black and white imagery as symbolizing authenticity and age, a representation of the reality of that time. So how will we look back at two-dimensional media a hundred years from now? Will it be regarded as a mere artifact of our past, maybe a quaint one at that? Perhaps we can bridge the gap using new technologies? 
<\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio no-margin\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"They Shall Not Grow Old \u2013 New Trailer \u2013 Now Playing In Theaters\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/IrabKK9Bhds?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>An example of the cultural impact of the transition from black and white to color is Peter Jackson\u2019s latest film, <em>They Shall Not Grow Old<\/em>. The film uses machine learning to colorize footage from World War I. The result is a rather chilling experience that puts our conventions of documented history to the test and, I would even argue, forces an uncomfortable confrontation with our tendency to hold our history at a distance. With that said, it&#8217;s worth taking some time to understand how some of these technologies work.<br><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What is volumetric capturing?<\/h3>\n\n\n\n<p>Volumetric capturing is derived from the field of computational photography, and refers to the ability to capture and store three-dimensional information about an object or a human figure. There is a wide variety of techniques, ranging from laser scanning (also referred to as LIDAR scanning) to infrared sensors (a notable example is Microsoft\u2019s Kinect camera) to, most recently, the use of machine learning and convolutional neural networks to reconstruct a 3D object from 2D images. 
These methods all have roots in different fields, including defense, robotics, and topography, but are now being used more and more for art, entertainment, and media.<br><\/p>\n\n\n\n<figure class=\"wp-block-image alignwide\"><img decoding=\"async\" src=\"https:\/\/lh4.googleusercontent.com\/DCDL9JkB3dVd-kj5ehlL7RIiklTvCO_XOSMlmnb1zxoodjfSXX4thUZL_w54C3Y975Zrv0r0rOCOUe8w_KlKmO_etnRZJaprdmc-dwXneWaexa4ljaPZbGuebwUeSf7I_3Piw7vy\" alt=\"3D Point Cloud Representation of Human Figures\" \/><figcaption>Image credit: GHOST CELL by Antoine Delacharlery<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Computational humans? Alexa gets a body<\/h3>\n\n\n\n<p>Innovation in machine learning doesn\u2019t only affect the fidelity of 3D representations; it also provides the groundwork for procedurally generated facial expressions and dialogue that bear an amazing resemblance to us, their human counterparts. Popular entertainment is taking note. In order to create the facial expressions behind Thanos in Marvel\u2019s latest <em>Avengers<\/em> film, VFX studio Digital Domain created machine learning-driven software, called <a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.engadget.com\/2018\/08\/18\/avengers-thanos-ai\/\" target=\"_blank\">Masquerade<\/a>, which aids artists in developing more human facial expressions. 
Imagine<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.youtube.com\/watch?v=FIa4JJLfzI0\" target=\"_blank\"> Google\u2019s Duplex demo<\/a> combined with the facial expressions produced by the Masquerade software: personal assistants are poised to get a big facelift, quite literally.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter\"><img decoding=\"async\" src=\"https:\/\/lh3.googleusercontent.com\/SnbLUNIn8DkvKesSyW4n_hstvInFS38_nNArXZJLZWNh00fqZ_sFUHW5iphkpbMsBbMCGKcAheuORT8g6AB46cnVVrlb2pTy-c341kz-jLabEJe13sZ5xq60IeSpb5gXVSzyVJfY\" alt=\"Thanos of the Marvel Avengers Movies\" \/><figcaption>Image credit: Marvel&#8217;s Avengers: Infinity War<\/figcaption><\/figure><\/div>\n\n\n\n<p>After watching some of these tech demos, I found myself engaged in a conversation about the nature of personal assistants with my friend, <a href=\"https:\/\/www.drorayalon.com\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">Dror Ayalon<\/a>. An interesting point arose: we are experiencing a transition in which our personal assistants are morphing into personal companions. 
Embodying the voice that keeps our Amazon shopping list, turns on the lights, and sets a timer while we cook is yet another step toward Alexa getting a body and becoming a human companion.<\/p>\n\n\n\n<blockquote style=\"text-align:left\" class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p><em>We are experiencing a transition: personal assistants are morphing into personal companions.<\/em><\/p><\/blockquote>\n\n\n\n<p>Films have imagined this idea before, and it seems there is still a way to go before we arrive at the vision portrayed in <em>Her<\/em>, where the personal assistant sounds like Scarlett Johansson and helps you win a holographic video game.<br><\/p>\n\n\n\n<figure class=\"wp-block-embed-vimeo wp-block-embed is-type-video is-provider-vimeo wp-embed-aspect-16-9 wp-has-aspect-ratio no-margin\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Her - Alien Child \/ Hologram sequences\" src=\"https:\/\/player.vimeo.com\/video\/97740427?dnt=1&amp;app_id=122963\" width=\"500\" height=\"281\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write\"><\/iframe>\n<\/div><figcaption>Hologram sequences from the film Her<\/figcaption><\/figure>\n\n\n\n<p>There is an argument to be made that an embodied Alexa wouldn\u2019t necessarily have to look like us. Take, for example, Anki\u2019s Vector robot, which provides a very compelling experience without some of the visual human features, and feels like a physical embodiment of some of Pixar\u2019s ideas about conveying emotion through sound and facial expressions. 
<br><\/p>\n\n\n\n<figure class=\"wp-block-embed-vimeo wp-block-embed is-type-video is-provider-vimeo wp-embed-aspect-16-9 wp-has-aspect-ratio video-margin\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Anki: Vector Kickstarter\" src=\"https:\/\/player.vimeo.com\/video\/293390863?dnt=1&amp;app_id=122963\" width=\"500\" height=\"281\" frameborder=\"0\" allow=\"autoplay; fullscreen; picture-in-picture; clipboard-write\"><\/iframe>\n<\/div><figcaption>Anki Robot Video<\/figcaption><\/figure>\n\n\n\n<p>That said, a human representation of smart assistants could stretch beyond novelty and utility into something that resembles a relationship, not just <em>\u201cOrder more toilet paper.\u201d<\/em><br><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">What awaits beyond realism?<\/h3>\n\n\n\n<p>I would argue that alongside the commercial pursuit of realism in volumetric capture, we are going to see more and more attempts to capture and represent emotion through experimental means, embracing the technical defects currently present in volumetric capturing as part of an aesthetic vision for art.<br><\/p>\n\n\n\n<figure class=\"wp-block-image alignwide\"><img decoding=\"async\" src=\"https:\/\/lh6.googleusercontent.com\/dRyLJ74d1LYOWJrtAJNzU9slvMoZq8R25bUo3MTu-y3NPHaQ20yzFIERYr2EZmrvCEp9MIBu4rnoRs9Pa7ORmdZ6oQFgBdGKPM5q0rzgrxeV_EZ77MODiNrZCwjdlLr7cyCUESFT\" alt=\"Frame of Point Cloud room from the &quot;A light in chorus&quot; Video on youtube\" \/><figcaption>Image credit: A Light In Chorus on YouTube<\/figcaption><\/figure>\n\n\n\n<p>One example of this, which I recall from conversations I\u2019ve had with<a href=\"https:\/\/shirin.works\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\"> Shirin Anlen<\/a>, is Chris Landreth\u2019s idea of psychological realism. 
Landreth coined the term and described it as \u201cThe glorious complexity of the human psyche depicted through the visual medium of art and animation.\u201d<br><\/p>\n\n\n\n<p>This idea essentially describes a cinematic technique in which the director uses fantasy-like elements to reflect a character\u2019s emotional state. Landreth wrote a paper describing this mechanism, but he also directed and animated the Oscar-winning short documentary film <em>Ryan<\/em>, which is still considered revolutionary in its use of aesthetics and animation to reflect the inner state of a character. If you haven\u2019t already, you should really take 13 minutes to watch it.<br><\/p>\n\n\n\n<figure class=\"wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio video-margin\"><div class=\"wp-block-embed__wrapper\">\n<iframe loading=\"lazy\" title=\"Ryan\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube.com\/embed\/nbkBjZKBLHQ?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe>\n<\/div><figcaption>Ryan by Chris Landreth on YouTube<\/figcaption><\/figure>\n\n\n\n<h3 class=\"wp-block-heading\">Is it for everyone?<\/h3>\n\n\n\n<p>The history of computational photography has been driven by <a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.cs.washington.edu\/research\/graphics\/publications\" target=\"_blank\">research institutes<\/a>,<a rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\" href=\"https:\/\/www.magicleap.com\/\" target=\"_blank\"> startups<\/a>, and<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\"> R&amp;D departments at giant tech corporations<\/a>. 
All this research and development has led to a reality where you can experience the advances of volumetric capture yourself. With this technology shipping on the iPhone and the Google Pixel 3, it\u2019s as easy as opening your camera and snapping a 3D capture.<br><\/p>\n\n\n\n<p>It\u2019s impossible to know what virtual realities we\u2019re about to encounter, but at the current speed of innovation, our ideas of what future 3D interfaces might look like may soon seem as quaint as the blue holograms of the past.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>All images are credited to reflect the rightful owners.<\/p><\/blockquote>\n\n\n\n<p class=\"has-small-font-size\"><strong>Or Fleisher <\/strong>(ITP 2018) is an award-winning creative technologist, developer, and artist working at the intersection of technology and storytelling. | <a href=\"https:\/\/orfleisher.com\/\" target=\"_blank\" rel=\"noreferrer noopener\" aria-label=\" (opens in a new tab)\">orfleisher.com<\/a><\/p>\n<\/div>\n<\/div>\n\n\n\n<div style=\"height:100px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Death of The Hologram &amp; The Life That Comes Next By Or Fleisher An exploration of recent innovations in 3D technology and how these developments will change our perceptions, our relationship with screen-based media, and what we call realistic The 
[&hellip;]<\/p>\n","protected":false},"author":23,"featured_media":2343,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[24],"tags":[],"class_list":["post-1765","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-issue-5"],"_links":{"self":[{"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/posts\/1765"}],"collection":[{"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/users\/23"}],"replies":[{"embeddable":true,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/comments?post=1765"}],"version-history":[{"count":1,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/posts\/1765\/revisions"}],"predecessor-version":[{"id":3054,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/posts\/1765\/revisions\/3054"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/"}],"wp:attachment":[{"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/media?parent=1765"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/categories?post=1765"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/itp.nyu.edu\/adjacent\/wp-json\/wp\/v2\/tags?post=1765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}