The Question

I want to explore the history and visual qualities of interfaces through AI-generated media trained on databases of specific human-computer interactions. The main question I’m asking is: what is AI’s perception of the development and visual qualities of our interfaces? This is still a working question, but I want to explore the ways interfaces have developed, the transition from command line to 2D to 3D, and the “space” created inside screens. I’m fascinated by the shared visual qualities of technology and its often visually indeterminate aspects, as well as “alternate futures,” the different paths defunct technology from the past might have taken. The question may be a bit too broad, so I plan on choosing a few interesting software and hardware interfaces, such as the teleprinter, the command line interface, and drawing apps/widgets, and examining their connections. I want to use specific examples to tell a broader story about our interactions with technology, but I’m open to a smaller scope focusing just on newer interfaces.


What I’m making

I will be creating a WebVR experience that can be used in the browser or through a headset in an installation setting. The experience will be around 10 minutes long and involve optional interaction (so it functions both as a headset experience and on the web), and it will feature mostly AI-generated media. It will address my thesis question thematically and visually, through visually indeterminate AI-generated images, 3D objects, and 360 video, with scrolling subtitles/voiceover offering interesting specifics on the technology.
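To make that concrete, here is a minimal sketch of the kind of A-Frame scene structure I have in mind; the asset filenames, subtitle text, and timings are placeholders, not final content.

<!-- Sketch of the planned scene: 360 video environment, a generated 3D
     object, scrolling subtitles, narration, and optional gaze interaction.
     All asset filenames below are placeholders. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.2.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <a-assets>
        <video id="pano" src="360-segment.mp4" autoplay loop muted></video>
        <a-asset-item id="artifact" src="interface-object.gltf"></a-asset-item>
        <audio id="narration" src="voiceover.mp3"></audio>
      </a-assets>

      <!-- AI-generated 360 video wraps around the viewer -->
      <a-videosphere src="#pano"></a-videosphere>

      <!-- A generated 3D object placed in the "space" of the scene -->
      <a-entity gltf-model="#artifact" position="0 1.5 -3"></a-entity>

      <!-- Scrolling subtitle: the animation component drifts it upward -->
      <a-text value="Placeholder subtitle text"
              position="0 0.5 -2" align="center" width="4"
              animation="property: position; to: 0 2.5 -2; dur: 10000"></a-text>

      <!-- Gaze cursor keeps interaction optional: it works in a headset
           and falls back to mouse input on the web -->
      <a-camera>
        <a-cursor></a-cursor>
      </a-camera>

      <a-entity sound="src: #narration; autoplay: true"></a-entity>
    </a-scene>
  </body>
</html>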


Why this is important

This is important to me as a research topic and a technological exploration, especially as I’m building up both my machine learning and virtual reality portfolios. I’m fascinated with interfaces and love this opportunity to explore specific technologies in more depth, especially through a visual medium. I’m really into generating GAN images with different programs as well as with my own datasets, and especially into experiments in generating VR images and video, a fascinating intersection of AI and VR art.


My goals

I envision the outcome as a complete WebVR experience with an accompanying website, enjoyable either in a headset or on the web. I hope users will be left with a fascination and nostalgia for past technologies, and perhaps even for their alternate futures. I’m also excited about being one of the first creators of AI-generated VR content, as this seems like a very underexplored area artistically. My previous GAN-generated VR experience “Liminal Mind” drew a lot of interest from friends and artists working in both mediums, and I want to push the limits of the two technologies together and create something new and interesting.


Influences & Inspiration

https://voicesofvr.com/581-using-abstract-vr-art-for-neural-entrainment-brain-research-can-creative-ai-become-conscious/ Blortasia, a VR game/art piece by Kevin Mack, discussed in an interview on Voices of VR. Mack’s piece is a fascinating randomized abstract VR experience involving large blobs and shapes in a constantly changing environment. I love the aesthetics of this piece even though it’s not technically AI generated, and I want to build on it by making my piece accessible on the web.


https://www.guggenheim.org/exhibition/countryside Description of Rem Koolhaas’ exhibition “Countryside, The Future” at the Guggenheim. I saw this exhibition a couple of months ago and loved Koolhaas’ research-based approach, especially his method of choosing a few interesting examples throughout history to illustrate larger concepts. The show was fascinating and inspiring, and I want to incorporate that detail-focused approach into my piece.


https://www.mdpi.com/2504-3900/54/1/47 This article, presented at the 3rd XoveTIC Conference, explores A-Frame as an artistic VR tool. The researchers argue that A-Frame increases accessibility in VR but that its interaction-based elements are underutilized. I agree with the paper’s assessment of both the possibilities and the limitations of A-Frame.


https://dl.acm.org/doi/pdf/10.1145/274430.274436 This article by Brad Myers explores the history of interfaces and major turning points in human-computer interaction. He examines the origins of widgets, icons, windows, and applications, and their development in academic and corporate settings; I found his description of the transition to GUIs especially interesting. It’s a great place to start my research into the different interfaces I want to explore.


https://vimeo.com/75534042 https://hyperallergic.com/128037/jon-rafmans-not-so-still-life-of-a-digital-betamale/ “Still Life (Betamale)” by Jon Rafman is a fascinating video piece that uses images scraped from the internet to explore vivid online subcultures. I love the internet-inspired imagery and animations and the poignancy; while the female android voiceover is a bit overplayed, it goes well with the piece.


https://jujucodes.github.io/liminalmind/ “Liminal Mind” is a WebVR piece I made using entirely AI-generated media, exploring AI’s perception of our concept of liminal spaces. I want to build on the work and technological research from that project, especially on converting GAN images to VR and creating VR for the web.


Realizing this project

One of the key steps is the technology, as I’m currently experimenting with different methods and workflows to find the best process. I’m focusing on different text-to-image/video tools, such as Story2Hallucination, which generates GAN frames from input text. I’m also experimenting with VR dimensions in GAN-generating software like Siren, Big Sleep, and Deep Daze (among others; I’m testing a wide range to find the best results), then smoothing and editing the output into 360 images with Photoshop’s 3D tools. I’m also looking into the best ways to generate 3D objects (mostly through 2D-to-3D AI programs) and export them as glTFs I can load in A-Frame to build the experience.
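As a concrete target for those experiments, the generated stills need to end up as roughly 2:1 equirectangular textures before they wrap cleanly around the viewer. A minimal sketch, with placeholder filenames, of where the edited GAN frame and a converted 3D object land in A-Frame:

<!-- Sketch: destinations for the generated media. The GAN frame must be
     smoothed into a 2:1 equirectangular image (e.g. 4096x2048) first;
     both filenames are placeholders. -->
<a-scene>
  <a-assets>
    <img id="ganPano" src="gan-frame-equirect.png">
    <a-asset-item id="ganObject" src="object-from-2d-to-3d.gltf"></a-asset-item>
  </a-assets>

  <!-- Equirectangular GAN still as the surrounding 360 environment -->
  <a-sky src="#ganPano"></a-sky>

  <!-- glTF produced by a 2D-to-3D tool, placed inside that environment -->
  <a-entity gltf-model="#ganObject" position="-1 1.6 -2"></a-entity>
</a-scene>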


There are a few questions I need to solve in the next few months so I can move forward with development. The first came up while discussing VR with my friends: should I render the experience in different software (such as Unity) and then load a 360 video of that render into A-Frame? I’m currently pushing the limits of what’s possible in A-Frame, and if I want to do more with shaders and textures I may need to incorporate an animation package. That would involve more research and learning, since I lack experience in animation, so I need to decide whether it will be a component. I have several friends who focus on animation, so I’ll engage more with that community and seek feedback on my project.
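Before committing to an external renderer, one thing I can test is how far A-Frame’s own shader hooks go. A minimal sketch, with a made-up shader name and effect, of registering a custom material through AFRAME.registerShader:

<script>
  // Sketch: a custom animated material via A-Frame's shader API.
  // The shader name and the scanline effect are placeholders for
  // experimentation, not a final look.
  AFRAME.registerShader('glitch-bands', {
    schema: {
      color: {type: 'color', is: 'uniform', default: '#7fffd4'},
      time:  {type: 'time',  is: 'uniform'}  // A-Frame updates this every frame
    },
    vertexShader: `
      varying vec2 vUv;
      void main() {
        vUv = uv;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `,
    fragmentShader: `
      uniform vec3 color;
      uniform float time;
      varying vec2 vUv;
      void main() {
        // Slow horizontal banding, loosely evoking CRT scanlines
        float band = step(0.5, fract(vUv.y * 40.0 + time / 2000.0));
        gl_FragColor = vec4(color * band, 1.0);
      }
    `
  });
</script>

<!-- Applied to an entity like any other material shader -->
<a-sphere material="shader: glitch-bands" position="0 1.5 -3"></a-sphere>

If effects along these lines turn out to be enough, I can stay entirely in the browser pipeline rather than pre-rendering in Unity.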


Another issue: when I place 3D objects in Photoshop or After Effects and export a 360 video, the objects get flattened into the video and distorted. Loading them directly in A-Frame preserves them but involves large files that may hurt loading time. I’m sure there are workarounds, but it’s definitely an area I need to continue exploring.
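One workaround I plan to test for the file-size side is mesh compression. A sketch, assuming a placeholder model file, using gltf-pipeline’s Draco option and A-Frame’s built-in Draco support:

<!-- Step 1 (terminal): compress the model with gltf-pipeline's Draco flag;
     model.gltf is a placeholder filename.
       npx gltf-pipeline -i model.gltf -o model-draco.glb -d
     Draco usually shrinks mesh data substantially, at the cost of a short
     decode step when the scene loads. -->

<!-- Step 2: point A-Frame at a Draco decoder so it can unpack the mesh -->
<a-scene gltf-model="dracoDecoderPath: https://www.gstatic.com/draco/v1/decoders/">
  <a-assets>
    <a-asset-item id="compressed" src="model-draco.glb"></a-asset-item>
  </a-assets>
  <a-entity gltf-model="#compressed" position="0 1.5 -3"></a-entity>
</a-scene>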


I’m also staying aware of how much I actually “collaborate” with AI. I’m experimenting with VR dimensions to get the generated output as close as possible to the final result before I go in with Photoshop or After Effects, and this balance is something I’ll need to keep in mind as I move forward.