Expression

real-time timbre control using machine learning powered pose detection

, Alex Wang

https://www.youtube.com/watch?v=6BUvYAUamE0&feature=emb_title

Description

Expression is a p5 sketch that lets you control musical timbre with body position, along with a built-in music-synced visual for a song I produced for the Software Music Production course at Steinhardt.

Tone.js enables music-synced animations and lyric subtitles, as well as the ability to assign different stem tracks to their own effect plugins before routing them all into the master track. PoseNet, from the ml5 collection of machine learning models, gives access to body positions, which are then used to manipulate the cutoff values of three lowpass filters, each corresponding to a specific track: vocal, drums, or lead.

This project originated from a previous project in which I used the ml5 model to create a rhythm game, but I decided to add Tone.js and take it in a more musical direction. It can be used by musicians for production or for live performance; there are countless possibilities for musicians to interact naturally with this program and express themselves in an electronic setting.
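The core pose-to-timbre idea can be sketched as a small mapping function. This is a minimal illustration, not the project's actual code: the function name, canvas height, and frequency range are assumptions, and in the real sketch the result would presumably be fed to something like a Tone.js filter's frequency parameter.

```javascript
// Hypothetical sketch: map a PoseNet wrist y-coordinate (0 = top of a
// 480px-tall canvas) to a lowpass cutoff in Hz. Names and ranges are
// illustrative, not taken from the project.
function poseToCutoff(wristY, canvasHeight = 480, minHz = 200, maxHz = 8000) {
  // Clamp so off-canvas pose estimates don't produce out-of-range cutoffs.
  const clamped = Math.min(Math.max(wristY, 0), canvasHeight);
  // Raising the hand (smaller y) opens the filter (higher cutoff).
  const t = 1 - clamped / canvasHeight;
  return minHz + t * (maxHz - minHz);
}
```

One such mapping per tracked keypoint, each driving the filter on one stem (vocal, drums, or lead), would reproduce the three-filter setup the description outlines.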

IMA/ITP New York
The Code of Music (UG)
Machine Learning,Music

Sketch2Sneaker

An AI model that translates Nike sneakers into Adidas style and vice versa.

Haoquan Wang

https://youtu.be/LCJ3aWogAgk

Description

This project uses CycleGAN for sneaker style translation. Trained on side images of Nike and Adidas sneakers, the model takes a rough sneaker design sketch and transforms it into a black-and-white sneaker design.

IMA/IMB Shanghai
INTM-SHU.226.1
Artificial Intelligence Arts
Art,Machine Learning

Minhwa in the Eyes of an AI

A recreation of traditional Korean folk painting in the eyes of an AI

Rick Kim

https://youtu.be/DNUUyyrqDYo

Description

The project is a recreation of a Minhwa painting called “The Tiger and the Crow” using a StyleGAN2 model, producing a neural-net-styled version of Minhwa painting.

Minhwa (민화) is a traditional Korean folk painting with the unique feature that the painter is, most of the time, anonymous. Minhwa can be drawn by anyone. However, as many traditions get lost over time, the production of Minhwa paintings has been diminishing since the early industrialization era. Despite current efforts to revive the tradition, it is slowly being lost as newer generations step away from folk paintings.

If everyone can be a Minhwa painter, then an AI can also be one: an AI as an anonymous creator, paying respect to the long tradition of Minhwa. Through this project, I wanted to emphasize that anyone can be a Minhwa painter, regardless of who you are. When even an AI can be a Minhwa painter, why can't you be one?

Although the output of this particular machine learning model seems odd and “imperfect” compared to the many Minhwas drawn by famous artists, the imperfection of the created painting portrays the essence of what folk paintings are: I am not striving to create a perfect folk painting, but to make a statement to the community that there is no “perfect” Minhwa.

The main component of the project is StyleGAN2, a machine learning model that generates images in the style of a given training dataset. I collected approximately 500 Minhwa images in various styles and trained on top of an existing pre-trained model to expedite the process.

IMA/IMB Shanghai
INTM-SHU.226.1
Artificial Intelligence Arts
Art,Machine Learning

White Night

White Night presents an interactive experience in which users speak out their deepest fears and let go of them.

Anica Yao, Jamie Wang

https://www.youtube.com/watch?v=WZkyaEmt4Xc&feature=youtu.be

Description

Our main idea is to create an interactive experience in which users speak out their deepest fears and let go of them. Users can physically eliminate their fears by waving their left hand to swipe them away; the movement is detected with PoseNet. The fears are portrayed as scribbles drawn from Google's Quick, Draw! dataset, a collection of 50 million drawings across 345 categories. Whenever the user clicks “draw”, a related object pops up on the screen. It is not only a fun game but also an interactive art-therapy experience.
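The swipe interaction could be reduced to two small checks: did the wrist move fast enough to count as a swipe, and which scribbles fall under it? This is a hedged sketch of that logic; the function names, the speed threshold, and the hit radius are all assumptions, not the project's actual values.

```javascript
// Hypothetical sketch of the swipe check: given two consecutive PoseNet
// left-wrist positions, decide whether the hand moved fast enough to
// count as a "swipe". The threshold is illustrative.
function isSwipe(prevWrist, currWrist, minPixelsPerFrame = 40) {
  const dx = currWrist.x - prevWrist.x;
  const dy = currWrist.y - prevWrist.y;
  return Math.hypot(dx, dy) >= minPixelsPerFrame;
}

// Remove any fear scribble whose center lies within `radius` of the wrist.
function swipeOutFears(fears, wrist, radius = 60) {
  return fears.filter(f => Math.hypot(f.x - wrist.x, f.y - wrist.y) > radius);
}
```

In a p5 draw loop, these would run once per frame against the latest PoseNet pose estimate, erasing scribbles only while a swipe is in progress.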

IMA/IMB Shanghai
INTM-SHU.134.1
Movement Practices and Computing
Art,Machine Learning

This is Quite Edmund

This project lets you control my body: it takes your image as input and generates an image of me in the same pose.

Yunhao Ye

https://www.youtube.com/watch?v=xJfGu2vQsHA&feature=youtu.be

Description

In this project, I used CycleGAN, a powerful machine learning algorithm, to train my own model. It generates images of myself based on pose images (my name is Edmund, so the project is named ‘This is Quite Edmund’). The pose images come from a pre-trained machine learning model called ‘DensePose’, which returns an image showing the poses of all the people in the input image.

For training, I took some videos of myself and extracted frames from them as the dataset of my own images. I then ran those frames through DensePose and stored the results as the dataset of pose images. After training for a long time, I eventually got a satisfactory model.

Finally, I wrapped DensePose and my model up in one workspace, producing a brand-new combined model. With it, you can send in your own images and get back images of me in the same pose as you. You can imagine you are actually controlling my body.

IMA/IMB Shanghai
INTM-SHU.226.1
Artificial Intelligence Arts
Machine Learning

poseCreate

Create drawings by posing them out!

Jamie Wang

https://www.youtube.com/watch?v=N6_7t8bXNfo

Description

Using our bodies to draw isn't something new, but using our bodies to create a moving world on the canvas is. For this project, I created a work called “poseCreate”. In poseCreate, users imitate the objects they would like to draw, and an animation of the imitated object appears on the canvas, letting them compose a moving image.

IMA/IMB Shanghai
INTM-SHU.134.1
Movement Practices and Computing
Art,Machine Learning

Glyph poems in motion

How might we create stylized Chinese typefaces through machine learning?

Haoyu Wang, Yuguang Zhang

https://youtu.be/CZHkrkOFLto

Description

Glyph Poems in Motion is a machine learning experiment created by Yuguang (YG) Zhang and Henry Haoyu Wang. In the process, we applied two generative adversarial networks (GANs) to Chinese typefaces. With the first GAN, we created glyphs from a list of font data; with the second, we applied imagery to the text. With moving glyphs and new shape styles, we create a new way for viewers to read these texts.

IMA/ITP New York
ITPG-GT.2051.001
Material of Language
Art,Machine Learning

1234, Mass Tuning-In

1234, Mass Tuning-In is an installation which asks 8 people to count together as a way of deep listening

Nuntinee Tansrisakul

https://vimeo.com/416723951

Description

1234 is an installation which asks people to count together, separately.

Do you choose to lead? Follow? Create a form of unison or counterpoint? Listen and count.

IMA/ITP New York
ITPG-GT.2102.00001, ITPG-GT.2061.001
Thesis, Tangible Interaction and Device Design
Machine Learning,Music

Intergalactic Garden

In this interactive garden in outer space, you can evolve a garden to your liking using artificial intelligence.

Abby Lee

https://youtu.be/Xjx5YWJxxFc

Description

In this intergalactic garden, users can create their own garden in outer space. Users are initially brought to a garden of flowers, moving in random fluid motions. They can mouse over flowers with attributes they like, evolving the garden to have more of these particular attributes. This project uses a genetic algorithm to take into account users’ preferences so they evolve a garden to their liking.
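The evolution step described above can be sketched as fitness-proportional (roulette-wheel) selection with mutation. This is an illustrative outline only: the function names, the idea of using hover counts as fitness, and the mutation rate are assumptions, not the project's actual implementation.

```javascript
// Hypothetical sketch of the evolution step: flowers the user hovers over
// accumulate fitness, and the next generation is bred by roulette-wheel
// selection plus a small per-gene mutation.
function pickParent(flowers, r) {
  // r is a uniform random number in [0, 1); passed in for testability.
  const total = flowers.reduce((sum, f) => sum + f.fitness, 0);
  let threshold = r * total;
  for (const f of flowers) {
    threshold -= f.fitness;
    if (threshold < 0) return f;
  }
  return flowers[flowers.length - 1]; // guard against rounding / zero fitness
}

function nextGeneration(flowers, rand = Math.random, mutationRate = 0.05) {
  return flowers.map(() => {
    const parent = pickParent(flowers, rand());
    // Copy the parent's genes (e.g. petal count, color, sway), occasionally
    // nudging each one to keep variety in the garden.
    const genes = parent.genes.map(g =>
      rand() < mutationRate ? g + (rand() - 0.5) * 0.2 : g);
    return { genes, fitness: 0 };
  });
}
```

Running `nextGeneration` each time the user finishes a round of hovering would steadily shift the garden toward the attributes they favor, which matches the behavior the description outlines.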

IMA/ITP New York
ITPG-GT.2480.001
The Nature of Code
Art,Machine Learning
NYU Tisch School of the Arts provides reasonable accommodations to people with disabilities. Requests for accommodations should be made at least two weeks before the date of the event when possible. You can request accommodations at tisch.nyu.edu/accommodation