Thesis Journal 3
This week I delved into academic research on Web VR art and interfaces and ran some more technical experiments to figure out my project’s workflow. I started building a dataset of defunct interfaces for training a GAN in Runway, saving images from around the web, but before I start training I need to decide what to include in the dataset based on what I’ll be exploring.
Following up on last week’s research, I also found a good article on Blortasia, the VR abstract art piece (https://www.roadtovr.com/prolific-vfx-artist-kevin-mack-brings-globular-surrealist-sculpture-life-blortasia/). In the article Mack explains his process, which is randomized but curated by him. He creates abstract drawings in Photoshop, then uses shaders and tools in Houdini to give them shape. He sets up procedures that are randomized so that each experience is uniquely generated but aesthetically consistent. While this piece isn’t exactly “AI,” it uses processes similar to the ones I want to use, since I plan to work with GAN outputs in Photoshop and then export them to build the Web VR experience.
In my research I came across “A-Frame as a Tool to Create Artistic Collective Installations in Virtual Reality,” presented at the 3rd XoveTIC Conference in Spain, which examines A-Frame as a tool used by art students (https://www.mdpi.com/2504-3900/54/1/47). While A-Frame is rudimentary, the authors argue that it offers real possibilities for the future of VR: because it runs in the browser and can host many users in the same experience simultaneously, people can join a VR piece from almost any device. They also point out that interaction was not a common choice in the graduate students’ A-Frame projects.
I also found Brad Myers’ “A Brief History of Human-Computer Interaction Technology” very helpful in thinking about the scope of my project (https://dl.acm.org/doi/pdf/10.1145/274430.274436). Myers’ examination of the transition to GUI systems inspired me to go backwards in my project and examine this timeline. He traces the origins of widgets, icons, windows, and applications and their development in academic and corporate settings. This paper inspired me to add clickable icons and windows floating in the 3D VR environment, as I want my project to literally be a journey through interfaces. It also made me think about text-based interaction in A-Frame, which harks back to the days of console and command-line systems; a rough sketch of both ideas follows below. I want meaningful interaction that adds to the narrative of interfaces but also shows the absurdity and abstractness of these elements.
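As a first pass at that interaction, here is a minimal A-Frame sketch of a clickable icon and a console-style text window floating in the scene. This is only my assumption of how I might structure it; icon.png is a placeholder asset and the click handler is a stand-in for real behavior.

```html
<!-- Minimal sketch: a clickable "icon" plane and a console-style text window.
     icon.png is a placeholder asset; the click behavior is a placeholder. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene background="color: #000">
      <!-- gaze cursor so flat entities can receive click events -->
      <a-entity camera look-controls position="0 1.6 0">
        <a-cursor></a-cursor>
      </a-entity>

      <!-- a floating "icon": a textured plane that reacts to clicks -->
      <a-image id="icon" src="icon.png" position="-1 1.6 -3" width="0.5" height="0.5"></a-image>

      <!-- a console-like window built from a plane and A-Frame's text component -->
      <a-plane position="1 1.6 -3" width="1.6" height="0.9" color="#111">
        <a-text id="console" value="> _" color="#0f0" position="-0.7 0.3 0.01" width="1.4"></a-text>
      </a-plane>
    </a-scene>

    <script>
      // clicking the icon updates the console text (placeholder interaction)
      document.querySelector('#icon').addEventListener('click', function () {
        document.querySelector('#console').setAttribute('value', '> icon clicked');
      });
    </script>
  </body>
</html>
```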
With this in mind, here is a “dream review” of my project:
Juju’s “<project name>” is an active deconstruction of technological interfaces, peeling back the layers of our screens and icons through an interactive journey. Using AI-generated media to show a computer’s vision of the technology that led to its own development, the experience quickly becomes meta. Entering the project shows a typical desktop screen that becomes 3D and envelops the user, falling away behind them. 3D windows and icons quickly develop into visually indeterminate possible pasts and futures, while asking users to enter commands in a console-like window and click icons. The experience moves through a dynamic space resembling an AI neural network as it reexamines its own history as well as ours.
In my project Liminal Mind I used Photoshop to edit GAN outputs into equirectangular photos, which I loaded into A-Frame as skies. I want to optimize this equirectangular generation process, but also incorporate 3D objects and video. For a demo I used Text2Image Siren (https://colab.research.google.com/drive/1L14q4To5rMK8q2E6whOibQBnPnVbRJ_7#scrollTo=Nq0wA-wc-P-s), as it allows image generation at a 2:1 aspect ratio, the same ratio as an equirectangular VR sky. As a test I generated a 2:1 image from the text “old computer keyboard circuit” and used Photoshop’s 3D editing mode to build the test scene (unfortunately the test video is too big to upload, but here’s the generated image at the right ratio):
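For reference, this is roughly how a 2:1 equirectangular image like that gets loaded as a sky in A-Frame (inside a page that loads aframe.min.js as in the sketch above; keyboard-circuit.jpg is a placeholder filename for the exported image):

```html
<!-- Loading a 2:1 equirectangular GAN image as the scene's sky.
     keyboard-circuit.jpg is a placeholder filename. -->
<a-scene>
  <a-assets>
    <img id="ganSky" src="keyboard-circuit.jpg">
  </a-assets>
  <a-sky src="#ganSky"></a-sky>
</a-scene>
```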
While this is a good start, Siren’s results are much less visually interesting than Big Sleep’s, for example, so I’ll keep experimenting with different GAN programs. For the video aspect, I’m considering Story2Hallucination (https://bonkerfield.org/2021/01/story2hallucination/), which generates “hallucinations,” or frames from text, that can be strung together into a video. I think this could be a great tool for generating the equirectangular videos I need, though I still have to figure out how to generate at the right dimensions or edit the frames together in After Effects without doing it frame by frame (which would be a nightmare). One idea is using a program like Differentiable Morphing (https://github.com/volotat/DiffMorph), which generates in-between frames between two images to create a morphing video.
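If I do manage to stitch the generated frames into an equirectangular video, A-Frame can play it as a 360 background with a videosphere, roughly like this (hallucination.mp4 is a placeholder filename for the assembled video):

```html
<!-- Playing a stitched equirectangular video as a 360 background.
     hallucination.mp4 is a placeholder for a video assembled from generated frames. -->
<a-scene>
  <a-assets>
    <video id="ganVideo" src="hallucination.mp4" autoplay loop muted playsinline></video>
  </a-assets>
  <a-videosphere src="#ganVideo"></a-videosphere>
</a-scene>
```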
My technical focus now is finding the best way to convert various GAN images into 3D objects I can use in Web VR. I found a very helpful GitHub repository collecting 3D machine learning tools (https://github.com/timzhang642/3D-Machine-Learning). I love the objects Mack created in Blortasia, and I want to give the experience texture and depth instead of just a flat 360 video background. While I will make a few elements by hand (such as the console windows and icons), I want the visually indeterminate “technology” to play the dominant role.
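Once I have usable meshes, my working assumption is that I’d export them as glTF and drop them into the scene, something like this (blob.gltf is a placeholder for an object exported from one of those 3D machine learning tools):

```html
<!-- Dropping a generated mesh into the scene as a glTF model.
     blob.gltf is a placeholder filename for an exported object. -->
<a-scene>
  <a-assets>
    <a-asset-item id="blob" src="blob.gltf"></a-asset-item>
  </a-assets>
  <a-entity gltf-model="#blob" position="0 1.5 -3" scale="0.5 0.5 0.5"></a-entity>
</a-scene>
```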