I'm currently hosting my machine learning text generator on localhost. I trained a model on Friends scripts, and it lets users feed additional text into the system. I plan to make the piece more interactive with other features; right now it supports text-to-speech and generates Friends-style text.
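The trained model itself isn't part of this write-up, but the generation step can be sketched with a simple character-level Markov chain in Python. This is a minimal stand-in, not the actual model; the corpus string below is a hypothetical placeholder for the Friends transcripts.

```python
import random
from collections import defaultdict

def build_chain(text, order=4):
    """Map each `order`-character context to the characters that follow it."""
    chain = defaultdict(list)
    for i in range(len(text) - order):
        chain[text[i:i + order]].append(text[i + order])
    return chain

def generate(chain, seed, length=80, order=4):
    """Extend `seed` one character at a time by sampling from the chain."""
    out = seed
    for _ in range(length):
        choices = chain.get(out[-order:])
        if not choices:  # dead end: no recorded continuation for this context
            break
        out += random.choice(choices)
    return out

# Hypothetical stand-in corpus; the real model was trained on Friends scripts.
corpus = "How you doin'? How you doin'? How you doin'?"
chain = build_chain(corpus)
print(generate(chain, "How ", length=20))
```

The generated string could then be handed to any text-to-speech engine, which is how the spoken-output feature works at a high level.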
This piece is inspired by R. Luke DuBois's Pop Icon: Bowie and by recent advances in high-resolution AI image synthesis. While AI can now produce images indistinguishable from photographs taken with a camera, it relies on no physical subject for those images. What, then, is the point of doing so, and what does it imply? In search of answers to these questions, I found that such synthesis can serve as a platform for exploring our “hive mind” perception of concepts, ideas, and biases, and for presenting them in realistically convincing or utterly surreal ways.
In this piece, I chose the combination of human portrait art and real-life cat photos – two categories that wouldn't normally cross paths – as my subject of synthesis. I trained a StyleGAN model on 5,000+ images of human portraits, with an additional layer of 1,000+ cat faces. The results were then used to synthesize semi-human, semi-cat portraits and to generate a piece of “cat” music modeled after Beethoven's Sonata No. 8.
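The semi-human, semi-cat blending can be illustrated by linear interpolation in the generator's latent space, a standard StyleGAN technique. The sketch below uses NumPy stand-ins: `z_human` and `z_cat` are hypothetical latent codes that the trained network would map to a human-like and a cat-like face, respectively; the trained generator itself is not reproducible here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for latent codes; in practice these would come
# from sampling or projecting images into the trained model's latent space.
z_human = rng.standard_normal(512)
z_cat = rng.standard_normal(512)

def blend(z_a, z_b, alpha):
    """Linear interpolation in latent space: alpha=0 -> z_a, alpha=1 -> z_b."""
    return (1 - alpha) * z_a + alpha * z_b

# A half-way latent sits between its endpoints; feeding it to the trained
# generator would yield a semi-human, semi-cat portrait.
z_mix = blend(z_human, z_cat, 0.5)
```

Sweeping `alpha` from 0 to 1 produces a smooth morph from fully human to fully cat, which is the effect used in the portraits.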
This piece was performed at the Cycling '74 Expo in April 2019, and it's part of my Hivemind series.
Autonomous Artificial Artists, Machine Learning for the Web