
Generative Models

Latent Variable Models (VAE Context)

Introduce an unobserved random variable for every observed data point to explain hidden causes.
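This idea can be sketched as a tiny generative process in Python. It is only an illustration, not a VAE: the linear map W and the noise scale are assumptions chosen for the example; the point is that each observed x is produced from its own unobserved z.

```python
import numpy as np

# Minimal latent-variable generative process (illustrative sketch):
# every observed data point x has an unobserved "hidden cause" z.
rng = np.random.default_rng(0)

def generate(n, latent_dim=2, data_dim=5):
    # Hypothetical linear "decoder" W, chosen only for illustration.
    W = rng.normal(size=(latent_dim, data_dim))
    z = rng.normal(size=(n, latent_dim))              # hidden cause, z ~ N(0, I)
    x = z @ W + 0.1 * rng.normal(size=(n, data_dim))  # noisy observation given z
    return z, x

z, x = generate(100)
print(z.shape, x.shape)  # one latent vector per observed data point
```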
Variational Autoencoder

[Figures: Neural Network Architecture of VAE; Probability Model of VAE]


Generative Adversarial Nets
GANs set up a game between two players.
Generator: creates samples intended to appear as though drawn from the training data distribution.
Discriminator: examines samples to determine whether they are real or fake.

Training GAN
Two minibatches are sampled simultaneously:
a minibatch of x values from the dataset;
a minibatch of z values drawn from the model's prior over latent variables.

Then two gradient steps are made simultaneously:
updating the discriminator parameters to reduce the discriminator loss;
updating the generator parameters to reduce the generator loss.
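The two simultaneous minibatches and the two simultaneous gradient steps can be sketched with a toy 1-D GAN. Everything concrete here is an assumption for illustration: the shift generator G(z) = z + theta, the logistic discriminator, the learning rate, and the step count; the gradients are written out by hand.

```python
import numpy as np

# Toy 1-D GAN training loop (illustrative sketch).
# Real data: x ~ N(2, 0.5).  Generator: G(z) = z + theta.
# Discriminator: D(x) = sigmoid(w*x + b).
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

theta = 0.0          # generator parameter (a learned shift)
w, b = 0.1, 0.0      # discriminator parameters
lr = 0.05

for _ in range(2000):
    # Two minibatches sampled simultaneously:
    x_real = rng.normal(2.0, 0.5, size=64)   # x values from the dataset
    z = rng.normal(0.0, 0.5, size=64)        # z values from the model's prior
    x_fake = z + theta                       # generator samples

    # Discriminator step: reduce -log D(x_real) - log(1 - D(G(z))).
    s_real = sigmoid(w * x_real + b)
    s_fake = sigmoid(w * x_fake + b)
    grad_w = np.mean(-(1 - s_real) * x_real + s_fake * x_fake)
    grad_b = np.mean(-(1 - s_real) + s_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step (non-saturating): reduce -log D(G(z)).
    s_fake = sigmoid(w * x_fake + b)
    theta -= lr * np.mean(-(1 - s_fake) * w)

print(round(theta, 2))  # theta should move toward the data mean of 2
```

Note that both parameter updates use freshly computed discriminator outputs in the same iteration: the two players step simultaneously rather than being trained to convergence in turn.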

Discriminator loss:

J^{(D)} = -\frac{1}{2}\,\mathbb{E}_{x \sim p_{\text{data}}}\left[\log D(x)\right] - \frac{1}{2}\,\mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]

Generator loss (minimax):

J^{(G)} = -J^{(D)}

Non-saturating loss:

J^{(G)} = -\frac{1}{2}\,\mathbb{E}_{z \sim p(z)}\left[\log D(G(z))\right]

The non-saturating loss ensures each player has a strong gradient when it is losing the game.
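A quick numeric check of why the non-saturating loss keeps the generator learning. The value D(G(z)) = 0.01 is an assumed example of a sample the discriminator confidently rejects, i.e. the generator is losing.

```python
# Suppose the discriminator confidently rejects a generated sample:
d = 0.01  # D(G(z)) -- assumed value for illustration

# Minimax generator loss: J = log(1 - D); gradient w.r.t. D is -1/(1 - D).
minimax_grad = -1.0 / (1.0 - d)

# Non-saturating loss: J = -log D; gradient w.r.t. D is -1/D.
non_saturating_grad = -1.0 / d

print(abs(minimax_grad))         # ~1.01: the minimax loss saturates, weak signal
print(abs(non_saturating_grad))  # ~100: strong signal for the losing generator
```

When D(G(z)) is near zero, the minimax gradient is bounded near 1 while the non-saturating gradient grows like 1/D, which is exactly the regime in which the generator most needs to learn.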
Generative Adversarial Nets: Observations

GANs tend to produce sharper images than other generative models.

GANs are resistant to simply copying (memorizing) training examples.

Non-convergence (oscillation):
Simultaneous gradient descent converges for some games, but not for all of them.

Partial mode collapse (low output diversity):
GANs often generate from only a few modes, mapping several different input z values to the same output point.


