
Report: Trends in Generative Models

Abstract

Many interesting studies based on discriminative models, such as Convolutional Neural Network (CNN) and
Recurrent Neural Network (RNN) architectures, have been presented for various classification problems,
enabled by recent improvements in computational power and large-scale datasets. These models have achieved
current state-of-the-art results in almost all computer vision applications, but they lack efficient data sampling
and an understanding of the data distribution. According to deep learning pioneers, the most exciting topic in
the computer vision field today is generative adversarial training. Influenced by these viewpoints and the
potential applications of generative models, a growing number of studies have been conducted using
generative models, particularly Generative Adversarial Network (GAN) and Autoencoder (AE) based
models. This study presents a comprehensive review of generative models and defines the relationships
among them, emphasising the importance of generative models for a better understanding of GANs and AEs.
INTRODUCTION

Content generation is becoming more popular because of its possible applications. With recent
advancements in the notion of adversarial learning on neural networks, generative models have attracted the
attention of many researchers due to their promising results. Unlike CNN- and RNN-based discriminative
models, which model conditional probability and are good at representation learning but not so good at
predicting out-of-samples, generative models can produce out-of-samples to reject or sample. Generative
models, which are based on the joint probability of input pairs, are therefore useful for forecasting
out-of-samples.

Nowadays, researchers are more interested in adversarial networks because of the discovery of
adversarial examples and their effects on neural networks. Adversarial examples can be created by applying
small perturbations to original images. These alterations are difficult for humans to discern, yet they lead to
different predictions. Although adversarial examples are uncommon in practice, adversarially trained
networks are more resilient while still performing effectively.
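A common construction for such perturbations is the fast gradient sign method of [20]. The sketch below is a minimal, hypothetical PyTorch illustration; the classifier `model`, inputs `x`, and labels `y` are placeholder assumptions, not taken from any specific study.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Create an adversarial example by nudging each pixel of x in the
    direction that increases the classification loss, as in [20]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clip back to the valid pixel range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```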

Until now, generative models based on unsupervised autoencoders and adversarial learning have been
used to create synthetic content. The purpose of this research is to analyse generative models in order to
better understand recent trends in the deep learning community. To specify the relationships among
generative models, they are divided into five types based on model architecture. This work also includes a
relational evaluation of generative models. The goal of this study is to outline current models based on the
correlations among them and their applications in the literature, and to guide scholars who are interested in
this growing field.
THE CRITICAL ROLE OF GENERATIVE MODELS

For the past five years, generative models have been widely explored for their ability to generate data
in addition to estimating density.

Generative models are important for the following purposes:

 Creating visuals that are both synthetic and realistic. This is the focus of the remainder of this review,
which compiles generative models for image synthesis.

 Generating material from predetermined words and sentences, as in [44], [65].

 Adversarial training. As described in [45], training models with adversarial data is important for
improving and testing their classification abilities.

 Filling in missing data. Missing data is a challenge when it comes to accurately evaluating a model.
Because generative models give a notion of the data manifold (manifold learning), missing pieces of
data can be predicted with generative models. In the image processing community, this is also known
as image inpainting or image completion [62], [27], [60], [63].

 Manipulating original images using specified features. Generative models can manipulate images on
the basis of latent codes. Not only can generative models be used to change pose, picture portions,
and objects, as in [16], [43], [7], but they can also be used to change visual scenes: images can be
mapped from domain A to domain B using models such as [37], [30], [67].

 Working with multi-modal outputs. Generative models can be utilised for a variety of tasks with a
single input, rather than having to be trained separately, as in [23], [56].

 Producing more examples from the same distribution. This is critical for a variety of tasks, such as
obtaining better models with fewer samples [1], [58].

 Improving the quality of data. Generative models have been shown to improve data quality by
creating super-resolution images [36], [47], [59].

GENERATIVE MODELS

Deep generative models are discussed in this study to explain a recent trend in the deep learning
community. Parametric models are linked to each other in Figure 1 to present a quick overview of the
literature. To make it easier to relate generative models, current research has been divided into unsupervised
fundamental models, AE-based models, autoregressive models, GAN-based models, and AE-GAN hybrid
models.

 Autoencoder Based Models

 For unsupervised learning, the Autoencoder [64] integrates two networks, an encoder and a decoder,
into a single network. Because it reconstructs data from a compressed representation, the decoder
component works as a generator. The Stacked Autoencoder (SAE) [3] is proposed for training
autoencoders in a stacked fashion using the greedy layer-wise technique. For robust representation
learning from corrupted data, the Denoising Autoencoder (DAE) is introduced. The Stacked
Denoising Autoencoder (SDAE) [55] is an extension of the DAE and SAE models: DAEs are
stacked in a single network to create the SDAE model.
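To make the encoder/decoder split concrete, the following is a minimal PyTorch sketch of an autoencoder whose decoder acts as the generator; the optional noise injection turns it into a denoising autoencoder. The layer sizes and noise level are illustrative assumptions, not values from the cited papers.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Encoder compresses the input; the decoder reconstructs it and can
    therefore be reused as a generator on latent codes."""
    def __init__(self, in_dim=784, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim), nn.Sigmoid())

    def forward(self, x, denoise=False, noise_std=0.3):
        # A DAE reconstructs the clean input from a corrupted copy of it.
        x_in = x + noise_std * torch.randn_like(x) if denoise else x
        return self.decoder(self.encoder(x_in))
```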

 The Variational Autoencoder (VAE) [32] introduces a variational component to address overfitting
and gradient problems. In the work [31], a deep generative model based on variational approaches is
presented in a semi-supervised manner and used, for example, to create series of digits; for the rest of
the paper, we refer to this semi-supervised VAE as SS-VAE. The Generative Stochastic Network
(GSN) [52] is a more advanced version of the Generalised DAE (GDAE) [4] that generates images
using learned parameters, in the same way as generative MCMC. Another autoencoder-based
generative model is the Adversarial Autoencoder (AAE) [39], which adds a GAN objective with
adversarial training in place of the KL-divergence term. An additional network, alongside the
traditional autoencoder network, takes samples from the prior distribution and the latent code
distribution and tries to separate them. The authors projected MNIST classes onto the latent space
before generating realistic images from it. Deep Recurrent Attentive Writer (DRAW) [21] is a
fascinating study of a recurrent autoencoder that employs an attention mechanism. Multiple
autoencoding steps are composed through its recurrent structure. Because of the attention mechanism,
the model can choose which portions of an image to focus on and which parts of the output to draw.

 The Denoising Variational Autoencoder (DVAE) [28] combines the VAE and DAE techniques to
produce a robust model that injects noise not only at the input level but also in a stochastic hidden
layer, as in the VAE. Because the VAE's disentanglement performance is limited, the β Variational
Autoencoder (β-VAE) [24], with an additional hyperparameter, is proposed to learn disentangled
representations by balancing latent code capacity against independent generative factors. Another
AE-based approach, the Wasserstein Autoencoder (WAE) [53], minimises a loss based on the
Wasserstein distance using a proposed regularizer.
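The VAE family above is easiest to compare through its loss. The sketch below shows the standard negative ELBO with the reparameterization trick of [32]; setting beta greater than 1 recovers the β-VAE objective of [24]. This is a schematic rendering of the published objectives, not code from those papers.

```python
import torch
import torch.nn.functional as F

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps so gradients flow through mu and log_var.
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    """Reconstruction term plus a KL term pulling the approximate posterior
    toward the unit-Gaussian prior; beta weights the KL term as in [24]."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + beta * kl
```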

 Unsupervised Fundamental Models

 Unsupervised fundamental models have been investigated extensively for handwritten digit
classification and texture synthesis tasks, but their generated results are too hazy to be useful. The
Restricted Boltzmann Machine (RBM) [49], a form of Boltzmann Machine (BM), has been used to
solve a variety of problems as computational capabilities improved. Using their generative decoders
and Gibbs sampling, the RBM and Deep Boltzmann Machine (DBM) [48] can recreate an input
image from its latent representations. In RBMs, the Markov chain Monte Carlo (MCMC) method
[18] is used to generate fair samples through random sampling. The Deep Belief Network (DBN)
[25] is a stacked variant of the RBM that operates as a multilayer learning architecture providing
features from high-level representations [14].
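The sampling procedure these models rely on can be illustrated with a single block-Gibbs transition for a binary RBM; iterating this chain is how MCMC-based sampling produces (approximately) fair samples [18]. The code below is a minimal sketch with assumed weight and bias shapes, not an implementation from the cited works.

```python
import torch

def gibbs_step(v, W, b_h, b_v):
    """One block-Gibbs transition of a binary RBM: sample the hidden units
    given the visible units, then resample the visible units given the
    hiddens. W is (n_visible, n_hidden); b_h, b_v are bias vectors."""
    h = torch.bernoulli(torch.sigmoid(v @ W + b_h))          # sample p(h | v)
    return torch.bernoulli(torch.sigmoid(h @ W.t() + b_v))   # sample p(v | h)
```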

 Autoregressive Models

 This section deals with autoregressive models, which model images pixel by pixel rather than as a
whole. MADE [17] is a modified autoencoder network that estimates a distribution from a set of
examples by exploiting the autoregressive property. To enforce this property, the model removes
connections using multiplicative binary masks.
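The masking idea can be illustrated with a linear layer whose weights are multiplied element-wise by a fixed binary mask, so each output depends only on a chosen subset of inputs; stacking such layers with consistent masks is what enforces the autoregressive property in MADE [17]. The mask construction itself is omitted here, so this is only a schematic sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Linear):
    """Linear layer with an element-wise binary mask on its weights;
    zeroed entries remove the corresponding input-output connections."""
    def __init__(self, in_features, out_features, mask):
        super().__init__(in_features, out_features)
        # mask has shape (out_features, in_features) with 0/1 entries.
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.linear(x, self.mask * self.weight, self.bias)
```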
 The PixelCNN Decoder [54] is a CNN-based autoregressive generative model that functions as an
autoencoder's decoder. PixelCNN models the conditional distribution of visible pixel values; in this
gated architecture, a gated CNN is employed to remember previous pixel values, and a latent factor
term is employed to condition the generated images. PixelRNN, an RNN variant, is proposed by the
same authors. PixelRNN is analysed further in [42], where the model includes 12 LSTM layers and
uses a convolutional method; Row LSTM, Diagonal BiLSTM with residual connections among
LSTM layers, and a multi-scale variant of PixelRNN are also defined. PixelCNN++ [51] is a
modified version of PixelCNN that simplifies the design while enhancing performance. The authors
employed a discretized logistic mixture likelihood on full pixels, using tactics such as downsampling,
dropout, and skip connections to improve the log-likelihood results. A latent variable model named
PixelVAE is presented in [22] to combine the benefits of VAEs and PixelCNNs in a single model; in
this approach, the decoder of the VAE is a conditional PixelCNN. The Variational Lossy
Autoencoder (VLAE) [8] combines the VAE with autoregressive models to improve the modelling
performance of the VAE.
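The pixel-by-pixel factorization behind these models is commonly implemented with a masked convolution: the filter is zeroed so a pixel's prediction never sees itself (mask type "A") or anything below or to its right in raster order. The sketch below follows the standard construction associated with PixelCNN-style models [42], [54]; it is a schematic, not code from those papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedConv2d(nn.Conv2d):
    """Convolution whose kernel is masked to respect raster-scan order."""
    def __init__(self, mask_type, *args, **kwargs):
        super().__init__(*args, **kwargs)
        _, _, kh, kw = self.weight.shape
        mask = torch.ones_like(self.weight)
        # Zero the centre pixel (type "A") or everything past it (type "B"),
        # plus all rows strictly below the centre row.
        mask[:, :, kh // 2, kw // 2 + int(mask_type == "B"):] = 0
        mask[:, :, kh // 2 + 1:] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        return F.conv2d(x, self.mask * self.weight, self.bias,
                        self.stride, self.padding)
```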

Fig. 1. Generative Models.

 AE-GAN Hybrid Models

 A collection of studies has been undertaken to compensate for GAN's lack of an inference
mechanism by adding an encoder network. One of the most intriguing studies on VAEs and GANs
[34] combines the two methodologies into a single model known as VAE/GAN. Instead of using
element-wise reconstruction as the VAE's reconstruction target, features learned by the GAN
discriminator are utilised. This model can learn encoding, generation, and discrimination
simultaneously. The loss function of VAE/GAN is the sum of the VAE's prior loss, a
feature-similarity-based log likelihood, and the GAN loss. Furthermore, images with specified
attributes were created using basic arithmetic on high-level features.

 Another VAE-GAN hybrid study [33] uses the loss of a pretrained auxiliary network rather than the
VAE reconstruction error. Similarly, to avoid blurry reconstructed pictures, [11] employed a VAE
with a GAN based on their DeePSiM loss function. A GAN is combined with an adversarial
autoencoder model in the ALI model of [12]; there is no explicit reconstruction error in this approach
for AAE optimization, and the GAN's discriminator network accepts input pairs (x, z) rather than
just the latent code z. In the same vein, the 3D-VAE-GAN study [57] pairs the 3D-GAN model with
a VAE to build 3D objects from encoded input data.
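The three-term VAE/GAN objective described above can be sketched as follows. The inputs are assumed quantities for illustration: feat_real and feat_recon are intermediate discriminator activations for a real image and its reconstruction, and d_real, d_fake are discriminator outputs in (0, 1). This is a schematic reading of [34], not the authors' code.

```python
import torch
import torch.nn.functional as F

def vae_gan_loss(mu, log_var, feat_real, feat_recon, d_real, d_fake):
    """Sum of the VAE prior (KL) loss, a reconstruction loss measured in
    the discriminator's feature space instead of pixel space, and the
    standard GAN loss, following the decomposition in [34]."""
    prior = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    feat_recon_loss = F.mse_loss(feat_recon, feat_real)
    gan = -(torch.log(d_real + 1e-8) + torch.log(1 - d_fake + 1e-8)).mean()
    return prior + feat_recon_loss + gan
```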

COMPARISON OF GENERATIVE MODELS

Generative models were initially defined for the purpose of image generation. In fact, generative
models may be used for a variety of tasks, including super-resolution, image colorization, image inpainting,
image manipulation, image generation, text-to-image generation, and image-to-image translation, among
others. Autoencoders were used for image reconstruction before the generation task, and their decoder
network makes them suitable for generative tasks. Another aim of generative models, such as SRGAN [35],
is to improve image resolution. The task of inpainting, or image completion, is accomplished via generative
models, as seen in [61]. Using generative models such as InfoGAN [7], CGAN [16], and [66], it is also
possible to manipulate images. The applications of generative models are astounding, such as generating
images from words [44] and applying arithmetic in latent space [43] for image-to-image production. Other
generative model applications include one-shot generalisation [46], specified input generation [40], image
colorization [5], and domain adaptation, such as CoGAN [38].

It is difficult to assess the quality of generated images, and this remains an open problem in generative
models. The log-likelihood metric is commonly used as an evaluation metric for material created by
generative models. Nearest-neighbour comparison, the Inception Score, and classification accuracy are used
to evaluate generated images in several studies, such as those on GANs. Human inspection is used to
evaluate generated pictures in LAPGAN [9]. Another way of evaluating generative models is classification
performance, as described in [66], by extracting features from the models. To evaluate generative models,
the battle between GANs approach is presented in [29].
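Of these metrics, the Inception Score of [50] is the most mechanical to state: it is the exponentiated average KL divergence between the classifier's conditional label distribution for each generated image and the marginal label distribution. The sketch below assumes p_yx is a matrix of class probabilities with one row per generated image, produced by a pretrained classifier.

```python
import torch

def inception_score(p_yx, eps=1e-8):
    """IS = exp( E_x [ KL( p(y|x) || p(y) ) ] ), computed from an
    (N, C) matrix of per-image class probabilities."""
    p_y = p_yx.mean(dim=0, keepdim=True)  # marginal label distribution
    kl = (p_yx * (torch.log(p_yx + eps) - torch.log(p_y + eps))).sum(dim=1)
    return kl.mean().exp()
```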
CONCLUSION

In practically all applications, deep learning-based models now achieve state-of-the-art results. The
most exciting topic for the deep learning community appears to be generative models, particularly with
adversarial training. It is evident that the majority of recent studies, particularly those from the previous
three years, focus on generative models, which are on the rise. As a result of this growing interest, there is a
need to compile current generative model trends. This comprehensive study collects recent generative model
techniques and answers some of the following questions: what does the deep learning community study, and
why? What are the application areas for generative models, and what are the other trends? A relational
evaluation of generative papers is also included to provide insight into the studies' relevance. As a
consequence of this review, it can be concluded that there is a growing interest in generative models as a
result of their prospective applications and value to the computer vision community.
REFERENCES

[1] A. Antoniou, A. Storkey, and H. Edwards. Data augmentation generative adversarial networks. arXiv preprint arXiv:1711.04340, 2017.
[2] M. Arjovsky, S. Chintala, and L. Bottou. Wasserstein GAN. arXiv preprint arXiv:1701.07875, 2017.
[3] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep networks. In Advances in Neural Information Processing Systems, pages 153–160, 2007.
[4] Y. Bengio, L. Yao, G. Alain, and P. Vincent. Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems, pages 899–907, 2013.
[5] Y. Cao, Z. Zhou, W. Zhang, and Y. Yu. Unsupervised diverse colorization via generative adversarial networks. arXiv preprint arXiv:1702.06674, 2017.
[6] M. Chen and L. Denoyer. Multi-view generative adversarial networks. arXiv preprint arXiv:1611.02019, 2016.
[7] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2172–2180, 2016.
[8] X. Chen, D. P. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Schulman, I. Sutskever, and P. Abbeel. Variational lossy autoencoder. arXiv preprint arXiv:1611.02731, 2016.
[9] E. L. Denton, S. Chintala, R. Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems, pages 1486–1494, 2015.
[10] J. Donahue, P. Krähenbühl, and T. Darrell. Adversarial feature learning. arXiv preprint arXiv:1605.09782, 2016.
[11] A. Dosovitskiy and T. Brox. Generating images with perceptual similarity metrics based on deep networks. arXiv preprint arXiv:1602.02644, 2016.
[12] V. Dumoulin, I. Belghazi, B. Poole, A. Lamb, M. Arjovsky, O. Mastropietro, and A. Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[13] I. Durugkar, I. Gemp, and S. Mahadevan. Generative multi-adversarial networks. arXiv preprint arXiv:1611.01673, 2016.
[14] A. Fischer and C. Igel. An introduction to restricted Boltzmann machines. In Iberoamerican Congress on Pattern Recognition, pages 14–36. Springer, 2012.
[15] M. Gadelha, S. Maji, and R. Wang. 3D shape induction from 2D views of multiple objects. arXiv preprint arXiv:1612.05872, 2016.
[16] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester, 2014:5, 2014.
[17] M. Germain, K. Gregor, I. Murray, and H. Larochelle. MADE: Masked autoencoder for distribution estimation. In ICML, pages 881–889, 2015.
[18] C. J. Geyer. Markov chain Monte Carlo maximum likelihood. 1991.
[19] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, pages 2672–2680. Curran Associates, Inc., 2014.
[20] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[21] K. Gregor, I. Danihelka, A. Graves, D. J. Rezende, and D. Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[22] I. Gulrajani, K. Kumar, F. Ahmed, A. A. Taiga, F. Visin, D. Vazquez, and A. Courville. PixelVAE: A latent variable model for natural images. arXiv preprint arXiv:1611.05013, 2016.
[23] K. Hausman, Y. Chebotar, S. Schaal, G. Sukhatme, and J. J. Lim. Multimodal imitation learning from unstructured demonstrations using generative adversarial nets. In Advances in Neural Information Processing Systems, pages 1235–1245, 2017.
[24] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner. beta-VAE: Learning basic visual concepts with a constrained variational framework. 2016.
[25] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[26] X. Huang, Y. Li, O. Poursaeed, J. Hopcroft, and S. Belongie. Stacked generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, 2017.
[27] S. Iizuka, E. Simo-Serra, and H. Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017.
[28] D. J. Im, S. Ahn, R. Memisevic, Y. Bengio, et al. Denoising criterion for variational auto-encoding framework. In AAAI, pages 2059–2065, 2017.
[29] D. J. Im, C. D. Kim, H. Jiang, and R. Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016.
[30] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros. Image-to-image translation with conditional adversarial networks.
[31] D. P. Kingma, S. Mohamed, D. J. Rezende, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[32] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114, 2013.
[33] A. Lamb, V. Dumoulin, and A. Courville. Discriminative regularization for generative models. arXiv preprint arXiv:1602.03220, 2016.
[34] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[35] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802, 2016.
[36] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint, 2017.
[37] M.-Y. Liu, T. Breuel, and J. Kautz. Unsupervised image-to-image translation networks. In Advances in Neural Information Processing Systems, pages 700–708, 2017.
[38] M.-Y. Liu and O. Tuzel. Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pages 469–477, 2016.
[39] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey. Adversarial autoencoders. arXiv preprint arXiv:1511.05644, 2015.
[40] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune. Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Advances in Neural Information Processing Systems, pages 3387–3395, 2016.
[41] S. Nowozin, B. Cseke, and R. Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization. In Advances in Neural Information Processing Systems, pages 271–279, 2016.
[42] A. v. d. Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016.
[43] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
[44] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee. Generative adversarial text to image synthesis. In Proceedings of The 33rd International Conference on Machine Learning, volume 3, 2016.
[45] H. Ren, D. Chen, and Y. Wang. RAN4IQA: Restorative adversarial nets for no-reference image quality assessment. arXiv preprint arXiv:1712.05444, 2017.
[46] D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in deep generative models. arXiv preprint arXiv:1603.05106, 2016.
[47] M. S. Sajjadi, B. Schölkopf, and M. Hirsch. EnhanceNet: Single image super-resolution through automated texture synthesis. In Computer Vision (ICCV), 2017 IEEE International Conference on, pages 4501–4510. IEEE, 2017.
[48] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In AISTATS, volume 1, page 3, 2009.
[49] R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the 24th International Conference on Machine Learning, pages 791–798. ACM, 2007.
[50] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In Advances in Neural Information Processing Systems, pages 2226–2234, 2016.
[51] T. Salimans, A. Karpathy, X. Chen, and D. P. Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017.
[52] E. Thibodeau-Laufer, G. Alain, and J. Yosinski. Deep generative stochastic networks trainable by backprop. 2014.
[53] I. Tolstikhin, O. Bousquet, S. Gelly, and B. Schoelkopf. Wasserstein auto-encoders. arXiv preprint arXiv:1711.01558, 2017.
[54] A. van den Oord, N. Kalchbrenner, L. Espeholt, O. Vinyals, A. Graves, et al. Conditional image generation with PixelCNN decoders. In Advances in Neural Information Processing Systems, pages 4790–4798, 2016.
[55] P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11(Dec):3371–3408, 2010.
[56] V. Vukotic, C. Raymond, and G. Gravier. Generative adversarial networks for multimodal representation learning in video hyperlinking. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, pages 416–419. ACM, 2017.
[57] J. Wu, C. Zhang, T. Xue, B. Freeman, and J. Tenenbaum. Learning a probabilistic latent space of object shapes via 3D generative-adversarial modeling. In Advances in Neural Information Processing Systems, pages 82–90, 2016.
[58] Q. Xu, Z. Qin, and T. Wan. Generative cooperative net for image generation and data augmentation. arXiv preprint arXiv:1705.02887, 2017.
[59] X. Xu, D. Sun, J. Pan, Y. Zhang, H. Pfister, and M.-H. Yang. Learning to super-resolve blurry face and text images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 251–260, 2017.
[60] C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang, and H. Li. High-resolution image inpainting using multi-scale neural patch synthesis. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, page 3, 2017.
[61] R. Yeh, C. Chen, T. Y. Lim, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with perceptual and contextual losses. arXiv preprint arXiv:1607.07539, 2016.
[62] R. A. Yeh, C. Chen, T. Y. Lim, A. G. Schwing, M. Hasegawa-Johnson, and M. N. Do. Semantic image inpainting with deep generative models. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5485–5493, 2017.
[63] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang. Generative image inpainting with contextual attention. arXiv preprint, 2018.
[64] R. S. Zemel. Autoencoders, minimum description length and Helmholtz free energy. NIPS, 1994.
[65] H. Zhang, T. Xu, H. Li, S. Zhang, X. Huang, X. Wang, and D. Metaxas. StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint, 2017.
[66] J.-Y. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros. Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pages 597–613. Springer, 2016.
[67] J.-Y. Zhu, R. Zhang, D. Pathak, T. Darrell, A. A. Efros, O. Wang, and E. Shechtman. Toward multimodal image-to-image translation. In Advances in Neural Information Processing Systems, pages 465–476, 2017.
