
CHAPTER 10

FUTURE SCOPE

In the near future, machine learning models are expected to produce cartoon versions of videos in real time, enabling applications such as live streaming and real-time cartoon filters in video conferencing. The major goal of this study is to handle photographs containing one, two, or three objects that existing methods cannot process well but that the approach presented here can. The method produces the results described above, and a wide variety of input photographs yield pleasing outcomes. The algorithm is not flawless, however, and it does not respond predictably to every input; given the range of possible input photographs, no single method can be expected to deliver uniformly consistent results. In summary, we have demonstrated how to turn an image into a cartoon and have provided samples of cartoons created from pictures.
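As an illustration of the real-time use case mentioned above, the sketch below applies a classical OpenCV cartoon effect (repeated bilateral smoothing combined with an adaptive-threshold edge mask) to live webcam frames. This is a minimal stand-in for a learned model, not this project's implementation; the function name cartoonify_frame and all parameter values are illustrative assumptions.

```python
import cv2

def cartoonify_frame(frame, num_down=2, num_bilateral=5):
    """Classical cartoon effect: flattened colors plus bold edges (illustrative only)."""
    # Downsample so the repeated bilateral filtering stays fast enough for video.
    color = frame
    for _ in range(num_down):
        color = cv2.pyrDown(color)
    # Repeated small bilateral filters flatten color regions while preserving edges.
    for _ in range(num_bilateral):
        color = cv2.bilateralFilter(color, d=9, sigmaColor=9, sigmaSpace=7)
    # Upsample back and correct any rounding mismatch in size.
    for _ in range(num_down):
        color = cv2.pyrUp(color)
    color = cv2.resize(color, (frame.shape[1], frame.shape[0]))
    # Build an edge mask from the grayscale frame via adaptive thresholding.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)
    edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, blockSize=9, C=2)
    # Keep the smoothed colors only where the edge mask is white.
    return cv2.bitwise_and(color, cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR))

# Run the filter live on the default webcam; press 'q' to quit.
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("cartoon", cartoonify_frame(frame))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

Even this classical pipeline runs at interactive frame rates on typical hardware, which is what makes the learned real-time filters anticipated above plausible as model inference becomes cheaper.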
With the advancement of deep learning techniques, the accuracy and speed of learning algorithms have
improved significantly, making it possible to train models capable of generating high-quality cartoon
versions of images.
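To make the inference step concrete, the sketch below runs a single photograph through a trained cartoonization generator, assuming the model has been exported to TorchScript. The file name cartoon_generator.pt, the 256x256 input size, and the [-1, 1] output range are placeholder assumptions for illustration, not details of this project.

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical: a trained cartoonization generator exported to TorchScript.
# "cartoon_generator.pt" is a placeholder name, not a file this project ships.
model = torch.jit.load("cartoon_generator.pt").eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                       # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # scale to [-1, 1], a common GAN convention
])

image = Image.open("photo.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(image).unsqueeze(0))  # add a batch dimension

# Map the assumed [-1, 1] output back to [0, 1] and save as an image.
out = (out.squeeze(0).clamp(-1, 1) + 1) / 2
transforms.ToPILImage()(out).save("cartoon.jpg")
```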
Additionally, with the increasing demand for personalized content, machine learning models can be trained to produce cartoonified images tailored to an individual's preferences and unique facial features, creating a highly personalized experience. Another application lies in the gaming industry, where cartoon-style graphics are prevalent and machine learning models can help generate cartoon assets suitable for use in games. The future scope for cartoonifying images using machine learning is extensive and has the potential to revolutionize the way we perceive and interact with visual content.

