Introduction to Synthetic Media (ITPG-GT 2054)

Generative machine learning models open new possibilities for creating images, videos, and text. This class explores how artists, designers, and creators can use machine learning in their own design processes. The goal of this class is to learn and understand common machine learning techniques and use them to generate creative output. Students will learn to use pre-trained models and to train their own models in the cloud using Runway. Each week, we will discuss the history, theory, datasets, and applications of a machine learning model, and build experiments based on it. In addition to Runway, we will use JavaScript libraries such as p5.js, ml5.js, and TensorFlow.js, and software such as Photoshop, Unity, and Figma. Students are expected to have taken ICM (Introduction to Computational Media) or to have equivalent programming experience with Python or JavaScript.

ML models we will cover:

Image generation
- StyleGAN: https://github.com/NVlabs/stylegan
- BigGAN: https://github.com/ajbrock/BigGAN-PyTorch

Style transfer
- fast-style-transfer: https://github.com/lengstrom/fast-style-transfer
- arbitrary-image-stylization: https://github.com/tensorflow/magenta/tree/master/magenta/models/arbitrary_image_stylization

Semantic image segmentation/synthesis
- DeepLab: https://github.com/tensorflow/models/tree/master/research/deeplab
- SPADE (COCO): https://github.com/NVlabs/SPADE

Image-to-image translation
- pix2pix: https://phillipi.github.io/pix2pix/
- pix2pixHD: https://github.com/NVIDIA/pix2pixHD

Text generation
- LSTM
- GPT-2: https://github.com/openai/gpt-2

Interactive Telecommunications (Graduate)
2 credits – 6 Weeks

Sections (Spring 2020)


ITPG-GT 2054-000 (23367)
01/31/2020 – 03/13/2020 Fri
9:00 AM – 11:00 AM (Morning)
at Brooklyn Campus
Instructed by Shi, Yining