Fundamental Concepts of Generative Machine Learning
Topic summary
This short course offers a comprehensive introduction to the fundamental concepts of generative modeling in artificial intelligence. The curriculum starts with a review of the mathematical concepts and tools needed for the course. It then motivates modeling data as distributions within a latent space, rather than working directly with raw signal data, and draws the distinction between deep feature spaces and latent spaces. The course further explores the properties of latent spaces, including generative factors, continuity, and entanglement. Participants will also learn about evaluation techniques for generative models; to support this, the course covers introductory concepts from information theory, such as entropy and divergence. Finally, the latent space of a self-supervised autoencoder system is examined in detail; a minimal illustrative sketch of such a system follows the outline below.
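As a brief preview of the information-theoretic quantities mentioned above, the Shannon entropy of a discrete random variable X with distribution p, and the Kullback-Leibler divergence between two distributions P and Q, take their standard forms (stated here only for orientation):

H(X) = -\sum_{x} p(x)\,\log p(x), \qquad D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} p(x)\,\log \frac{p(x)}{q(x)}.

Both quantities return in Part I of the outline, where distribution distances and divergences are used to evaluate generative models.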
This introductory course aims to establish these foundational concepts before specific generative model architectures are explored. One of the objectives is to offer a general understanding of generative models in machine learning, regardless of the particular technique applied. Upon completion, participants will have a robust understanding of the principles of generative modeling and of its significant contributions to the field of machine learning.
Outline:
PART I: Mathematical Background
Generation vs. Discrimination in Machine Learning
Data Distributions, Sampling, Inference and Generation
Expectation and Likelihood
Evaluation for Generative Models, Distribution Distances, Divergence and Entropy
PART II: Latent Spaces
(Curse of) Dimensionality, Deep Features vs. Latent Spaces
Latent Space Properties: Continuity, Entanglement, etc.
PART III: Auto-Encoding
Autoencoders and Dimensionality Reduction
Variational Inference and VAEs
Conclusions
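To make Part III more concrete, below is a minimal, illustrative autoencoder sketch. It assumes PyTorch, and the class name, layer sizes, and latent dimension are arbitrary choices for illustration rather than the course's reference implementation. The encoder compresses a raw input into a low-dimensional latent code, the decoder reconstructs the input from that code, and the reconstruction error supplies the self-supervised training signal.

# Minimal illustrative autoencoder sketch (assumed names and sizes,
# not the course's reference implementation).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: compress the raw signal into a low-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        z = self.encoder(x)            # latent representation
        return self.decoder(z), z

# Self-supervised training step: the reconstruction target is the input itself.
model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.rand(64, 784)                # a dummy batch standing in for real data
recon, z = model(x)
loss = criterion(recon, x)             # reconstruction error as training signal
optimizer.zero_grad()
loss.backward()
optimizer.step()

In a variational autoencoder, the deterministic code z would be replaced by the parameters of a distribution over the latent space, regularized by a divergence term; those details belong to the Variational Inference and VAEs session of Part III.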