
GANs have shown how deep neural networks can be used for generative modeling, aiming to achieve the same impact they have had on discriminative modeling. The first results were impressive: GANs were shown to generate samples in high-dimensional structured spaces, such as images and text, that were not mere copies of the training data.
The surveyed papers are summarized below; for each we list the title, publication venue, problem statement, methodology, dataset, and dataset availability.

1. Title: A Style-Based Generator Architecture for Generative Adversarial Networks
   Published: IEEE Conference on Computer Vision and Pattern Recognition, 2019
   Problem Statement: The paper's core idea is to inject a different noise vector at each level of upsampling, in stark contrast to most previous works, which use noise only as the input to the first layer. By tweaking the noise at each resolution, the authors can control high-level attributes through the coarse (earlier) layers and low-level details through the finer (later) layers; a minimal sketch of this per-resolution noise injection follows this entry.
   Methodology: Adaptive Instance Normalization and Wasserstein loss
   Dataset: Flickr-Faces-HQ (FFHQ)
   Dataset Availability: Yes

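To make the per-resolution noise injection concrete, below is a minimal PyTorch sketch of one style-based synthesis block built around adaptive instance normalization, as named in the methodology above. Class names, channel counts, and layer choices are illustrative assumptions and do not reproduce the official StyleGAN implementation.

```python
# Minimal sketch of per-resolution noise injection + AdaIN (not the official StyleGAN code).
import torch
import torch.nn as nn

class NoiseInjection(nn.Module):
    """Adds learned, per-channel-scaled Gaussian noise to a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.scale = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x, noise=None):
        if noise is None:  # fresh noise drawn at this resolution on every forward pass
            noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device)
        return x + self.scale * noise

class AdaIN(nn.Module):
    """Adaptive instance normalization: a style vector sets per-channel scale and bias."""
    def __init__(self, style_dim, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.affine = nn.Linear(style_dim, channels * 2)

    def forward(self, x, w):
        gamma, beta = self.affine(w).chunk(2, dim=1)
        x = self.norm(x)
        return (1 + gamma[:, :, None, None]) * x + beta[:, :, None, None]

class SynthesisBlock(nn.Module):
    """One upsampling block: upsample -> conv -> noise injection -> AdaIN."""
    def __init__(self, in_ch, out_ch, style_dim):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.noise = NoiseInjection(out_ch)
        self.adain = AdaIN(style_dim, out_ch)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x, w):
        x = self.conv(self.up(x))
        x = self.noise(x)  # different noise at each resolution of the generator
        return self.act(self.adain(x, w))
```

Holding the noise of the coarse blocks fixed while resampling it in the fine blocks changes only fine texture, which is the kind of per-level control described in the entry above.
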
2. Title: Image-to-Image Translation with Conditional Adversarial Networks
   Published: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017
   Problem Statement: The Pix2Pix model addresses problems such as converting line drawings into finished paintings, giving users a degree of artistic control by improving or altering their sketches. A sketch of a U-Net-style generator appears after this entry.
   Methodology: U-Net architecture
   Dataset: CMP Facades
   Dataset Availability: Yes

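As a rough illustration of the methodology column, the following is a minimal PyTorch sketch of a U-Net-style generator with skip connections for paired image-to-image translation. The depth, channel counts, and loss note are simplifying assumptions, not the paper's exact configuration.

```python
# Minimal U-Net-style generator sketch for paired image-to-image translation (illustrative only).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, out_ch=3, base=64):
        super().__init__()
        # Encoder: downsample twice.
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                  nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        # Decoder: upsample twice, concatenating encoder features (skip connections).
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, 2, 1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x):
        e1 = self.enc1(x)                # (B, base,   H/2, W/2)
        e2 = self.enc2(e1)               # (B, 2*base, H/4, W/4)
        d1 = self.dec1(e2)               # (B, base,   H/2, W/2)
        d1 = torch.cat([d1, e1], dim=1)  # skip connection preserves low-level detail
        return self.dec2(d1)             # (B, out_ch, H,   W)

# In a Pix2Pix-like setup, this generator is trained with an adversarial loss on
# (input, output) pairs plus an L1 term pulling the output toward the target image.
```
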
3. Title: Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
   Published: IEEE International Conference on Computer Vision, 2017
   Problem Statement: The CycleGAN model relaxes the need for paired training examples. Its most famous uses are replacing horses with zebras or apples with oranges; the cycle-consistency term that makes unpaired training possible is sketched after this entry.
   Methodology: Image style transfer
   Dataset: Cityscapes, Google Maps, CMP Facades
   Dataset Availability: Yes

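The mechanism that removes the need for paired examples is the cycle-consistency term. Below is a short PyTorch sketch of that loss, where G, F, real_x, and real_y are hypothetical placeholders for the two generators and the two unpaired image batches.

```python
# Sketch of the cycle-consistency loss tying two generators G: X->Y and F: Y->X together.
import torch.nn.functional as F_nn

def cycle_consistency_loss(G, F, real_x, real_y, lam=10.0):
    """x -> G(x) -> F(G(x)) should return to x, and symmetrically for y."""
    fake_y = G(real_x)
    rec_x = F(fake_y)
    fake_x = F(real_y)
    rec_y = G(fake_x)
    forward_cycle = F_nn.l1_loss(rec_x, real_x)
    backward_cycle = F_nn.l1_loss(rec_y, real_y)
    return lam * (forward_cycle + backward_cycle)

# This term is added to the usual adversarial losses of both generators, which is
# what allows training without paired (x, y) examples.
```
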
4. Title: Towards the Automatic Anime Characters Creation with Generative Adversarial Networks
   Published: IEEE International Conference on Computer Vision, 2017
   Problem Statement: This paper showcases the use of a tagged dataset, which can be leveraged to guide the generator into picking a specific hairstyle or eye color. The formulation is almost the same as in semi-supervised learning, except that here the trained classifier itself is not the object of interest. A sketch of tag-conditioned generation follows this entry.
   Methodology: Modification of SRResNet
   Dataset: -
   Dataset Availability: No

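To illustrate how a tagged dataset can steer generation, here is a minimal PyTorch sketch of a tag-conditioned generator in which a multi-hot tag vector is concatenated with the latent noise. All dimensions, the tag index, and the network layout are illustrative assumptions rather than details from the paper.

```python
# Sketch of tag-conditioned generation: tags are concatenated to the latent noise (illustrative).
import torch
import torch.nn as nn

latent_dim, tag_dim = 128, 34  # assumed sizes, not from the paper

class ConditionalGenerator(nn.Module):
    def __init__(self, latent_dim, tag_dim, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + tag_dim, 256 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (256, 8, 8)),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),     # 8x8  -> 16x16
            nn.ConvTranspose2d(128, img_ch, 4, 2, 1), nn.Tanh(),  # 16x16 -> 32x32
        )

    def forward(self, z, tags):
        return self.net(torch.cat([z, tags], dim=1))

# Sampling with a fixed z but different tag vectors changes only the tagged attributes;
# the discriminator is likewise trained to predict the tags, similar in spirit to a
# semi-supervised / auxiliary-classifier setup.
z = torch.randn(4, latent_dim)
tags = torch.zeros(4, tag_dim)
tags[:, 3] = 1.0  # e.g. a "blonde hair" tag; the index is hypothetical
fake = ConditionalGenerator(latent_dim, tag_dim)(z, tags)
print(fake.shape)  # torch.Size([4, 3, 32, 32])
```
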
5. Title: Analyzing and Improving the Image Quality of StyleGAN
   Published: IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020
   Problem Statement: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. The authors expose and analyze several of its characteristic artifacts and propose changes to both the model architecture and the training methods to address them. In particular, they redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. Besides improving image quality, this path length regularizer makes the generator significantly easier to invert, which in turn makes it possible to reliably attribute a generated image to a particular network. A simplified sketch of the path length regularizer follows this entry.
   Methodology: Modification of StyleGAN
   Dataset: Flickr-Faces-HQ (FFHQ), LSUN Car
   Dataset Availability: -

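Below is a simplified PyTorch sketch of a path length regularizer of the kind the entry describes: it penalizes deviation of the latent-to-image Jacobian norm from a running mean, which encourages the well-conditioned latent-to-image mapping mentioned above. The function name, decay constant, and running-mean handling are assumptions and omit details of the published method (e.g. lazy regularization).

```python
# Simplified path length regularization sketch (not the official StyleGAN2 implementation).
import math
import torch

def path_length_penalty(fake_images, latents, mean_path_length, decay=0.01):
    """fake_images = G(latents); latents must require gradients with the graph intact."""
    # Random projection of the images keeps the Jacobian-vector product cheap.
    noise = torch.randn_like(fake_images) / math.sqrt(
        fake_images.shape[2] * fake_images.shape[3]
    )
    grad, = torch.autograd.grad(
        outputs=(fake_images * noise).sum(), inputs=latents, create_graph=True
    )
    path_lengths = grad.pow(2).sum(dim=1).sqrt()
    # Running mean of observed path lengths serves as the target value.
    new_mean = mean_path_length + decay * (path_lengths.mean().item() - mean_path_length)
    penalty = (path_lengths - new_mean).pow(2).mean()
    return penalty, new_mean
```
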
