This week I started some preliminary experiments on my favorite idea, researched artists working with similar materials, and made a little Cornell Box. Ultimately, between my two ideas, I chose to focus on the AI VR exploration: I want to explore defunct technology through a VR experience.

For my Cornell Box my resources were a bit limited, so I decided to showcase some interfaces, textual inspiration, and objects from my apartment that represent part of my process. In the back I have a Galileo thermometer I found: a surprisingly accurate temperature-measuring device that might be considered “defunct” given the many more advanced options now available, but one with a fascinating design and thought process behind it. I also displayed Dune and a magazine on Berlin artists working with natural spaces, as both books inspire me to think creatively about space and its impact on our psychogeography. Finally, I included my Arduino and Raspberry Pi; these technologies may well be defunct within five years, yet they have incredible designs and make technology more accessible.

In terms of my inspiration, by far the most inspiring piece I’ve found in my research is Blortasia by Kevin Mack (http://www.shapespacevr.com/blortasia.html), one of the only artists I could find who works with both AI and VR. The practice of combining the two is very new, and I love his approach of drawing inspiration from AI-generated images to create his own 3D animated journey. With his neuroscience background and an interest in the connection between AI’s neurons and our own, his practice is very similar to mine, and his work is absolutely stunning.

I also love the piece La Camera Insabbiata (http://garden.metarealitylab.com/2020/08/10/la-camera-insabbiata/) by Hsin-Chien Huang and Laurie Anderson. The piece is an abstract VR exploration of space, playing with dimensions, point of view, and creative incorporations of language. I love this format because it creates an immersive, trippy experience while remaining rooted in our expectations of the real world.

Finally, Everest Pipkin’s generated-image work, such as their “screensaver collection” and “neural network generated zines,” explores applications of AI to formats we already use and understand, adding meaning and interpretation. I love their approach of collaborating with AI and exploring big ideas through the curation of GAN images.

This week I practiced creating a few images in Photoshop by collaging Big Sleep GAN-generated images with Reddit-scraped ones. My goal is to use a similar process to generate 360° environments, using separate layers to animate the scene and make the GAN elements feel 3D.
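For context on the 360° goal: such environments are usually stored as equirectangular images, where a viewing direction maps to pixel coordinates via longitude and latitude. Here’s a minimal sketch of that mapping — the function name and coordinate conventions are my own assumptions, not part of any tool mentioned above:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D viewing direction to (u, v) pixel coordinates in an
    equirectangular (360-degree) image. Assumes a right-handed
    coordinate system with +y up and +z as the forward direction."""
    lon = math.atan2(x, z)                            # longitude in [-pi, pi]
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))   # latitude in [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width           # left-to-right
    v = (0.5 - lat / math.pi) * height                # top-to-bottom
    return u, v

# Looking straight ahead (+z) lands at the center of the image:
print(direction_to_equirect(0, 0, 1, 4096, 2048))  # (2048.0, 1024.0)
```

Layers composited in this projection wrap seamlessly around the viewer, which is why collage assets need to be placed with the distortion near the poles in mind.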

I also trained a few Artbreeder generations on images of defunct interfaces, creating some fascinating results. I then used the 3D Photo Inpainting Colab (https://shihmengli.github.io/3D-Photo-Inpainting/) to generate some practice 3D videos from these images; while I’ll probably use a different technique in my final project, it did produce some interesting results.
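The 3D effect in techniques like photo inpainting comes largely from parallax: layers at different depths shift by different amounts as the virtual camera moves. A minimal sketch of that relationship, assuming a simplified pinhole-camera model (my own illustration, not the Colab’s code):

```python
def parallax_offset(depth, camera_shift, focal_length=1.0):
    """Horizontal pixel offset for an image layer at a given depth when
    the virtual camera moves sideways by camera_shift. Nearer layers
    shift more, which is what produces the 3D parallax effect.
    Units are arbitrary; this is a simplified pinhole-camera model."""
    return focal_length * camera_shift / depth

# A near layer (depth 2) shifts twice as far as a layer at depth 4:
near = parallax_offset(2.0, camera_shift=10.0)  # 5.0
far = parallax_offset(4.0, camera_shift=10.0)   # 2.5
print(near, far)
```

The same idea should carry over to hand-layered collages: assigning each Photoshop layer a rough depth and offsetting it accordingly is one way to fake the inpainting effect without the full pipeline.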

I also talked with a few friends about whether it would be better to use this project to continue developing my skills in WebVR and AI, or to diversify my skills with an interactive “AlmostOS” website. After a few conversations, I think it would be best to build on my unique combination of AI and VR skills, as I’m also far more drawn to that process. I want to build my portfolio with a clear focus, and this project is a great opportunity to develop those skills while producing a beautiful experience.