From principal components to puppyslugs: the mathematics of deep learning for art
This lecture-based session is an accessible primer on the theory and mathematics of machine learning for artistic and creative purposes. If you have ever looked at neural creations like DeepDream, DCGAN, pix2pix [1][2], or AlphaGo, and wondered "how does that actually work?", this lecture will put you on a path toward understanding them beyond the magic tricks and clichés.
This lecture assumes no advanced knowledge of math or computer science and starts from the basics, relying on figures and visual aids to communicate difficult concepts wherever possible. Topics begin with principal component analysis and probability/statistics, then progress to neural networks, generative models, and adversarial training. Although a single lecture cannot fully substitute for years of mathematical training, it will set you on a path toward a deeper understanding of the subject, and academic resources for pursuing the material more thoroughly will be provided for further exploration.
This lecture is tightly coupled with a follow-up tutorial/workshop on June 22, which will cover practical applications of machine learning for art, mostly using code and resources from ml4a.github.io. The two sessions are complementary, but it is not necessary to attend both.