Expression is a p5 sketch that lets you control musical timbre with body position, alongside a built-in music-synced visual for a song I produced for the Software Music Production course at Steinhardt. Tone.js enables the music-synced animations and lyric subtitles, as well as the ability to route each stem track through its own effect plugins before mixing them all into the master track. PoseNet, one of the ml5 machine learning models, provides body positions, which are then used to manipulate the cutoff values of three lowpass filters, each corresponding to a specific track: vocals, drums, or lead. This project originated from a previous one in which I used the ml5 model to create a rhythm game; adding Tone.js shifted it in a more musical direction. It can serve musicians both in production and in live performance, and there are countless possibilities for them to interact naturally with the program and express themselves in an electronic setting.
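The pose-to-filter mapping at the heart of the sketch can be reduced to a small pure function. This is a sketch of one plausible approach, not the project's actual code; the function name, the normalized-wrist-height input, and the 20 Hz–20 kHz range are my own assumptions:

```javascript
// Map a normalized wrist height (0 = top of frame, 1 = bottom)
// to a lowpass cutoff frequency. An exponential curve feels more
// natural than a linear one, since pitch perception is logarithmic.
function mapPoseToCutoff(wristY, minHz = 20, maxHz = 20000) {
  const raised = 1 - Math.min(Math.max(wristY, 0), 1); // hand up -> brighter
  return minHz * Math.pow(maxHz / minHz, raised);
}

console.log(mapPoseToCutoff(1)); // 20    (hand at the bottom: filter nearly closed)
console.log(mapPoseToCutoff(0)); // 20000 (hand at the top: filter fully open)
```

In a sketch like this, the returned value would be written to each track's Tone.js lowpass filter cutoff on every PoseNet update.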
An AI model that translates Nike sneakers into Adidas style and vice versa.
This project uses CycleGAN for sneaker style translation. Trained on side-view images of Nike and Adidas sneakers, the model can take a rough sneaker design sketch and transform it into a black-and-white sneaker design.
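The key idea that makes CycleGAN work without paired Nike/Adidas examples is cycle consistency: translating a sneaker from one style to the other and back should reconstruct the original image. A toy version of that loss over flattened pixel arrays, for illustration only (the function name is mine, not from any particular CycleGAN implementation):

```javascript
// L1 cycle-consistency loss: mean absolute difference between an
// original image and its reconstruction after a round trip through
// both generators (e.g. Nike -> Adidas -> Nike).
function cycleLoss(original, reconstructed) {
  let sum = 0;
  for (let i = 0; i < original.length; i++) {
    sum += Math.abs(original[i] - reconstructed[i]);
  }
  return sum / original.length;
}

console.log(cycleLoss([0.2, 0.5, 0.9], [0.2, 0.5, 0.9])); // 0 (perfect round trip)
console.log(cycleLoss([0.2, 0.5, 0.9], [0.1, 0.5, 0.8])); // ≈ 0.0667 (drift is penalized)
```

Training minimizes this loss alongside the usual adversarial losses, which forces each generator to change only the style while preserving the sneaker's structure.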
A recreation of traditional Korean folk painting in the eyes of an AI
The project is a recreation of a Minhwa painting called “The Tiger and the Crow” using a StyleGAN2 model: a neural-net-styled version of a Minhwa painting.
Minhwa (민화) is a traditional Korean folk painting with the unique feature that the painter is, most of the time, anonymous. Minhwa can be drawn by anyone. However, as many traditions get lost over time, the production of Minhwa paintings has been diminishing since the early industrialization era. Despite current efforts to revive the tradition, it is slowly being lost as newer generations step away from folk paintings.
If everyone can be a Minhwa painter, I had the notion that an AI can also be an anonymous painter: an AI as an anonymous creator, in order to pay respect to the long tradition of Minhwa. Through this project, I wanted to emphasize that anyone can be a Minhwa painter, regardless of who you are. When even an AI can be a Minhwa painter, why can't you be one?
Although the output of this particular machine learning model seems odd and “imperfect” compared to the many Minhwas drawn by famous artists, the imperfection of the created painting portrays the essence of what folk paintings are: I am not striving to create a perfect folk painting, but to make a statement to the community that there is no “perfect” Minhwa.
The project's main component is StyleGAN2, a machine learning model that generates new images in the style of a given training dataset. I collected approximately 500 Minhwa images in various styles and trained on top of an existing pre-trained model to expedite the process.
Our main idea is to create an interactive experience in which users speak out and let go of their deepest fears. Users can physically eliminate those fears by waving their left hand to swipe them away. The movement is detected with PoseNet. Because the fears are portrayed as scribbles, we imported an external database, the Google Quick, Draw! dataset: a collection of 50 million drawings across 345 categories. Whenever the user clicks “draw”, a related object pops up on the screen. It is not only a fun game but also an interactive art-therapy experience.
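The wave gesture can be detected by comparing the left-wrist keypoint's horizontal position between PoseNet frames; if it travels far enough in one step, it counts as a swipe. A minimal sketch (the threshold value and function name are assumptions, not the team's actual code):

```javascript
// Detect a leftward swipe: the wrist must travel far enough
// horizontally in one frame step to count as an intentional wave,
// so ordinary hand jitter doesn't dismiss a fear by accident.
function isSwipe(prevX, currX, threshold = 40) {
  return (prevX - currX) > threshold; // pixels moved toward the left
}

console.log(isSwipe(300, 240)); // true  (moved 60 px left)
console.log(isSwipe(300, 290)); // false (only 10 px, likely jitter)
```

In the sketch, `prevX`/`currX` would come from consecutive `pose.leftWrist.x` readings, and a `true` result would remove the scribbled fear from the canvas.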
In this project, I use CycleGAN, a powerful machine learning algorithm, to train my own model. It generates images of me based on pose images (my name is Edmund, hence the project name ‘This is Quite Edmund’). Those pose images are produced by a pre-trained machine learning model called ‘DensePose’, which returns an image showing the poses of all the people in the input image.
For training, I recorded some videos of myself and extracted frames from them as the dataset of my own images. I then ran those images through DensePose and stored the results as the dataset of pose images. After training for a long time, I eventually got a satisfactory model.
Finally, I wrapped DensePose and my model up in a single workspace, producing a brand-new model that combines the two. Using this model, you can send in your own image and get back an image of me in the same pose as you. You can imagine that you are actually controlling my body.
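The combined model described above is essentially function composition: DensePose extracts the pose, then the trained CycleGAN renders that pose as an image of the author. A minimal sketch with stub stand-ins for the two real models (all names hypothetical):

```javascript
// Stubs standing in for the two real models in the pipeline.
const densePose = (image) => `pose(${image})`;          // image -> pose map
const poseToEdmund = (poseMap) => `edmund(${poseMap})`; // pose map -> image of Edmund

// The wrapped model chains them: your photo in, Edmund's body out.
const thisIsQuiteEdmund = (image) => poseToEdmund(densePose(image));

console.log(thisIsQuiteEdmund("your_photo.jpg"));
// "edmund(pose(your_photo.jpg))"
```

Chaining the models this way means the CycleGAN only ever sees pose maps at inference time, exactly the input distribution it was trained on.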
Using our bodies to draw isn’t something new, but using our bodies to create a moving world on the canvas is. For this project, I would like to create a work called “poseCreate”. In poseCreate, users imitate the objects they would like to draw, and an animation of the imitated object appears on the canvas, allowing them to create a moving image.
How might we create stylized Chinese typefaces through machine learning?
Haoyu Wang, Yuguang Zhang
Glyph Poems in Motion is a machine learning experiment created by Yuguang (YG) Zhang and Henry Haoyu Wang. In the process, we applied two generative adversarial networks (GANs) to Chinese typefaces. With the first GAN, we created glyphs based on a list of font data. With the second GAN, we applied imagery to the text. With the moving glyphs and new shape styles, we create a new way for viewers to read these texts.
In this interactive garden in outer space, you can evolve a garden to your liking using artificial intelligence.
In this intergalactic garden, users can create their own garden in outer space. Users are initially brought to a garden of flowers, moving in random fluid motions. They can mouse over flowers with attributes they like, evolving the garden to have more of these particular attributes. This project uses a genetic algorithm to take into account users’ preferences so they evolve a garden to their liking.
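The evolution step of a garden like this can be sketched as fitness-proportional selection plus crossover and mutation over flower attribute vectors. This is an illustrative sketch of a generic genetic algorithm, not the project's actual code; the attribute layout, mutation rate, and function names are my own:

```javascript
// Each flower is a vector of attributes, e.g. [hue, petalCount, size].
// Flowers the user hovered over get higher fitness, so their genes spread.
function evolve(flowers, fitness, mutationRate = 0.1, rand = Math.random) {
  const next = [];
  for (let i = 0; i < flowers.length; i++) {
    const a = pickWeighted(flowers, fitness, rand);
    const b = pickWeighted(flowers, fitness, rand);
    // Crossover: average the two parents, then add a small random mutation
    // to each gene so the garden keeps exploring new looks.
    next.push(a.map((gene, j) => {
      const child = (gene + b[j]) / 2;
      return child + (rand() - 0.5) * 2 * mutationRate;
    }));
  }
  return next;
}

// Roulette-wheel selection: pick an item with probability
// proportional to its fitness weight.
function pickWeighted(items, weights, rand) {
  const total = weights.reduce((s, w) => s + w, 0);
  let r = rand() * total;
  for (let i = 0; i < items.length; i++) {
    r -= weights[i];
    if (r <= 0) return items[i];
  }
  return items[items.length - 1];
}
```

With fitness scores of, say, `[0, 1]` for two flowers, the second flower's attributes dominate the next generation, which is exactly the "more of what you hover over" behavior the project describes.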