
Volume 9, Issue 2, February 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

Reinforcing Visual Content Integrity through Image Restoration and AI Recognition: Literature Survey

M. Gangappa1; Eloori Karthik2; Gaddampally Shreyas Reddy3; Gudimetla Satya Manjunadha Reddy4; Mohammad Shoaib5
1Associate Professor
1,2,3,4,5Department of Computer Science and Engineering,
VNR Vignana Jyothi Institute of Engineering & Technology, Hyderabad, Telangana, India

Abstract:- Over time, images may undergo degradation due to various factors, leading to loss of clarity and visual appeal. The motivation behind this study is to aid in the restoration of precious memories, as pictures serve as triggers for cherished recollections, and enhancing image quality can improve clarity and visual comfort. The significance of this research lies in its potential to help individuals recover and preserve cherished memories captured in damaged images. Restoring the visual quality of images can evoke emotions and facilitate stronger connections to the past. In the recent past, AI has advanced significantly in these areas. However, the potential for misinformation and deep-fake proliferation has raised concerns about the authenticity and credibility of visual content. In response to this challenge, we aimed to develop an automated system that discerns between AI-generated images and original photographs with high accuracy. Additionally, enhancing image quality has various practical benefits, such as increased clarity, improved visibility, and reduced strain on the eyes when viewing the restored images. Overall, this research demonstrates the effectiveness of deep learning with GANs for image restoration, highlighting its value in recovering invaluable memories and enhancing visual content for broader applications. The results of this study showcase the potential of AI-driven image restoration techniques to positively impact our personal lives and the way we interact with visual data in the digital age.

Keywords:- Deep Learning, GAN, Machine Learning, Image Inpainting, Deep Fake.

I. INTRODUCTION

Images are essential to our visual environment because they are effective means of expression, recording, and communication. Images, however, are not always ideal; they can have a number of flaws, such as missing or damaged areas. The act of revitalizing these photos by fixing or completing the missing or damaged areas is known as image restoration. Image inpainting is a technique that addresses one intriguing and difficult aspect of image restoration.

The art and science of selectively painting or recreating missing or degraded areas of images to revitalize them is known as image inpainting. It entails replacing damaged or undesirable areas of the image with new content while maintaining the image's overall coherence and integrity. Image inpainting has a wide range of uses, from object removal and image completion to photo enhancement.

Many inpainting strategies and techniques have been developed over time, each with advantages and disadvantages of its own. The field of image inpainting has advanced significantly, moving from traditional patch-based methods to complex deep learning algorithms, and these techniques are now used in many fields, including multimedia, computer vision, image processing, and medical imaging [1].

This review examines the state of image restoration via inpainting, focusing on the important innovations and methods that have shaped the field. It traces the development of inpainting methods from the early, manually driven approaches to the present data-driven deep learning era. It also discusses the difficulties faced in this field, how they can possibly be overcome, and the potential paths forward, shedding light on the future of this rapidly developing area and the implications of these advancements for image restoration and related applications. Image inpainting remains a dynamic and stimulating field of study that is constantly expanding the possibilities for image restoration and alteration.

Producing incredibly realistic artificial intelligence (AI)-generated content, such as pictures, videos, and text, has become standard practice in an era of rapidly developing AI and machine learning technologies. These artificial intelligence (AI)-generated works, sometimes

IJISRT24FEB1068 www.ijisrt.com 1371


known as "deep fakes" or "synthetic media," have the power to blur fact and fiction. Although AI-generated material has valid applications in automation and the creative arts, there are significant risks associated with false information, invasions of privacy, and malevolent intent. As a result, identifying AI-generated content has become crucial to maintaining the accuracy of digital data and media.

Artificial intelligence (AI)-generated material is generally created with methods such as Generative Adversarial Networks (GANs) and deep neural networks, which can produce photos and videos that are visually indistinguishable from real ones. These AI systems can be used for a variety of tasks, including editing photos, making fake videos, and even producing wholly imaginary writing.

This review of the literature delves into the crucial field of identifying AI-generated information and images, a field that is constantly developing as detection methods become more sophisticated and AI-generated media becomes of higher quality.

Overall, this research demonstrates the effectiveness of deep learning with GANs for image restoration, highlighting its value in recovering invaluable memories and enhancing visual content for broader applications. The results of this study showcase the potential of AI-driven image restoration techniques to positively impact our personal lives and the way we interact with visual data in the digital age.

Section II explains the background of image inpainting methods and AI-generated image detection techniques. Section III presents the literature review, which walks through the ways of implementing image inpainting and AI-driven content detection. Section IV gives the conclusion drawn from all the research papers studied on image inpainting techniques.

II. BACKGROUND

Images are essential to modern civilization because they are used for artistic expression, archiving, and visual communication across all media. But pictures frequently suffer from a variety of deteriorations, from physical harm such as stains and scratches to digital artifacts such as missing data and compression artifacts. The goal of image restoration is to improve or restore these pictures so they are more aesthetically pleasing, informative, or complete.

A specialised area of image restoration called "image inpainting" is concerned with restoring damaged or missing areas of images. This is a difficult task because it calls for the creation of believable content that blends in with the existing image without affecting its general structure and cohesion. Image inpainting has become popular in the domains of computer vision, image processing, and related sciences. Its uses are diverse and include photo repair, object removal, and image completion.

Image inpainting has a multi-decade history, with early techniques based on manually created rules and heuristics. These approaches were unable to tackle the more difficult inpainting tasks. However, the area saw a major upheaval with the introduction of machine learning and deep learning. With their ability to learn from massive datasets and generate realistic results, Convolutional Neural Networks (CNNs) and other deep learning architectures have completely changed the field of image inpainting.

Generative Adversarial Networks (GANs) and U-Net are two recent advances in deep learning-based inpainting models that have greatly enhanced the quality and realism of inpainted images. By comprehending the context of the image, these models are able to produce content that blends in seamlessly with its surroundings.

Even though image inpainting has advanced significantly, there are still issues to be resolved, such as managing sizable inpainting regions, maintaining global coherence, and processing images in real time. The area is still developing, exploring new methods, efficient algorithms, and uses in a range of fields, such as medical imaging and the creative arts.

The burgeoning science of artificial intelligence, especially deep learning, has led to the production of AI-generated content, often known as synthetic media or deep fakes. The output of algorithms such as Generative Adversarial Networks (GANs), these AI materials can create convincing images, videos, and text that are almost identical to real content. Although AI-generated media has valid uses in automation, entertainment, and the creative arts, it also raises serious ethical issues.

The growth of AI-generated material has led to concerns about misrepresentation, privacy violation, and the possibility of malevolent use. AI-generated deep fakes can be used to propagate false information, impersonate people, alter visual evidence, and deceive viewers. Therefore, in order to preserve the legitimacy and authenticity of digital media, it is now essential to develop methods and tools for identifying content generated by artificial intelligence.

A multifaceted strategy is required to detect AI-generated content. Such a strategy may entail forensic investigation, metadata analysis, machine learning models, and the use of the various indicators and artifacts left behind by generative algorithms. As AI-generated material grows more sophisticated and harder to distinguish from real content, researchers and engineers are constantly striving to stay ahead of the curve.

In addition to fighting false information and safeguarding privacy, the development of AI content detection techniques is crucial for maintaining the security of digital environments and confirming the legitimacy of media across a range of industries, including entertainment, social media, law enforcement, and journalism.
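The early rule-based heuristics mentioned above can be illustrated with a minimal diffusion-style fill, which repeatedly replaces each missing pixel with the mean of its known neighbours. This is a toy sketch of the general idea, not any specific published algorithm; the function name and parameters are illustrative.

```python
def diffusion_inpaint(img, mask, iters=200):
    """Fill masked pixels by iteratively averaging their neighbours.

    img  : 2-D list of floats (grayscale image)
    mask : 2-D list of bools, True where the pixel is missing
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in out]
        for y in range(h):
            for x in range(w):
                if not mask[y][x]:
                    continue  # known pixels stay fixed
                nbrs = [out[y + dy][x + dx]
                        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= y + dy < h and 0 <= x + dx < w]
                if nbrs:
                    nxt[y][x] = sum(nbrs) / len(nbrs)
        out = nxt
    return out

# 3x3 patch whose centre pixel is missing; all known neighbours are 10.0
img = [[10.0] * 3 for _ in range(3)]
img[1][1] = 0.0
mask = [[False] * 3 for _ in range(3)]
mask[1][1] = True
restored = diffusion_inpaint(img, mask)
```

Each sweep propagates surrounding intensities inward, which is why such schemes preserve smooth structure but blur texture over large holes, matching the trade-offs later listed for diffusion-based techniques in Table 1.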

Furthermore, a crucial topic of discussion in this developing industry is the ethics of the usage and regulation of such technology.

III. LITERATURE WORK

This work involved searching and reviewing relevant academic literature from the Google Scholar and IEEE Xplore databases. These databases were chosen for their comprehensive coverage of scientific and technical literature. We used a combination of keywords and Boolean operators to search for literature relevant to our project.

 Few Models which Help Image Restoration:
Generative Adversarial Networks (GANs) have been instrumental in advancing the field of image inpainting. Researchers have developed GAN-based inpainting models that can generate high-quality content for missing or corrupted regions [15].

Paper [6] discusses notable models such as Context Encoders (CE) and Partial Convolutional Neural Networks (PCN) that have employed GAN architectures to generate realistic inpainted images. CE uses an adversarial loss to encourage realistic content generation, while PCN incorporates partial convolution layers to adaptively handle missing data.

Some recent research has explored the synergy between GANs and diffusion models for image inpainting; this hybrid approach combines the generative power of GANs with the sequential refinement process of diffusion models [1]. By using a GAN to guide the diffusion process, these models achieve high-quality inpaintings with fine details and global coherence. Research in this area continues to evolve with innovative model architectures [10].

Convolutional Neural Networks (CNNs) are widely used in inpainting due to their ability to capture spatial information and learn from large datasets. Models like U-Net and DeepFill leverage CNNs for inpainting tasks [1]. U-Net, a popular architecture, employs an encoder-decoder structure with skip connections to generate inpaintings efficiently. DeepFill, on the other hand, uses a CNN to predict missing pixels while considering global context.

These advanced inpainting techniques find applications in various domains, including medical image restoration, video post-production, image editing, and artistic creation. The ability to restore damaged or incomplete images is crucial in these fields.
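The renormalisation idea behind partial convolutions (convolve over valid pixels only, then rescale by window size over valid count and mark the pixel as filled) can be sketched in a single-layer, pure-Python form. This is a simplified illustration, not the published PCN layer, which stacks many such convolutions with learned kernels.

```python
def partial_conv(img, mask, kernel):
    """One partial-convolution step: the weighted sum uses only valid
    (mask == 1) pixels and is rescaled by window_size / valid_count,
    so holes and image borders do not drag the output toward zero."""
    h, w = len(img), len(img[0])
    k = len(kernel) // 2
    size = len(kernel) * len(kernel[0])
    out = [[0.0] * w for _ in range(h)]
    new_mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, valid = 0.0, 0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and mask[yy][xx]:
                        acc += kernel[dy + k][dx + k] * img[yy][xx]
                        valid += 1
            if valid:
                out[y][x] = acc * size / valid
                new_mask[y][x] = 1  # pixel now counts as filled
    return out, new_mask

# Constant image of 1.0 with a hole in the middle; 3x3 averaging kernel.
img = [[1.0] * 5 for _ in range(5)]
mask = [[1] * 5 for _ in range(5)]
mask[2][2] = 0
avg = [[1 / 9] * 3 for _ in range(3)]
out, new_mask = partial_conv(img, mask, avg)
```

On this constant test image the renormalisation keeps the response at 1.0 even at the hole and at the borders, which is exactly the property that lets stacked partial convolutions shrink holes without biasing activations toward zero.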

Table 1: Outline of Various Image Inpainting Techniques

G. Sridevi [12]: Diffusion-based techniques.
• Advantages: preservation of image structure and coherence; handling of large inpainting regions; reduced artifacts and texture consistency.
• Disadvantages: high memory requirements; limited applications; processing extremely high-resolution images or long video sequences with diffusion-based techniques can be cumbersome.

Kevin Walker [1]: CNNs.
• Advantages: CNNs excel at automatically extracting relevant features from images; they can be adapted to inpainting tasks of various complexities; they generalize well from diverse training data.
• Disadvantages: large computational requirements; high memory usage.

H. Wang [15]: GANs.
• Advantages: realistic content generation; GANs are capable of understanding the contextual information in an image.
• Disadvantages: GANs are susceptible to mode collapse, a phenomenon where the generator focuses on a subset of possible modes in the data distribution, leading to a lack of diversity in the generated content.

Q. Fan [8]: Computer vision.
• Advantages: computer vision algorithms can process and inpaint large amounts of data relatively quickly.
• Disadvantages: computer vision, while efficient, may lack the human touch and might struggle with subjective or context-specific inpainting requirements.

Ruisong Zhang [11]: Local and global refinement technique.
• Advantages: by considering both local and global information, the inpainting process becomes more informed and context-aware.
• Disadvantages: this could result in longer processing times, especially for high-resolution images or in real-time applications.

Ning Wang [13]: Dynamic Selection Network.
• Advantages: the dynamic selection mechanism can contribute to improved inpainting quality by selecting the most relevant information for each inpainted region.
• Disadvantages: training such networks may require a substantial amount of labeled data and careful tuning of parameters.

Jianlong Fu [17]: Bidirectional Transformers.
• Advantages: Transformers can efficiently represent complex features in the image, facilitating a more accurate inpainting process.
• Disadvantages: Transformers, especially deep and complex ones, can be challenging to interpret.

Yanhong Zeng: Aggregated contextual transformations.
• Advantages: these transformations often operate at multiple scales, allowing for effective adaptation to different levels of detail in the image.
• Disadvantages: the aggregation of contextual information from various scales and regions may require a more sophisticated architecture and training process.

Songwen Mo [22]: Color-space combinations.
• Advantages: sensitive to anomalies in color distribution, texture, and patterns.
• Disadvantages: may not be effective in cases where manipulation involves other factors like face replacement or voice synthesis.

Jireh Jam [10]: Diffusion models.
• Advantages: handling complex inpainting scenarios.
• Disadvantages: diffusion models can be prone to overfitting.

Bhavsar, A. V. [20]: CNNs.
• Advantages: pre-trained CNN models can be fine-tuned.
• Disadvantages: developing, fine-tuning, and maintaining CNN models can be complex.

Xiao, Y. [25]: GANs.
• Advantages: GANs are capable of learning and recognizing patterns and features.
• Disadvantages: GANs require substantial amounts of training data and are susceptible to mode collapse.

Zhengyuan Jiang [23]: Watermark-based detection.
• Advantages: a reliable means of verifying the authenticity of the content.
• Disadvantages: depending on the watermarking technique employed, there is a risk of degrading the quality of the original content.

Hyeonseong Jeon [24]: Using cross-concept settings.
• Advantages: the versatility enables the transfer of knowledge and capabilities across different contexts, potentially saving time and resources in training separate models for each specific concept.
• Disadvantages: while cross-concept settings offer versatility, they may lack the specificity and fine-tuning that a model dedicated to a particular concept could provide.

Bhavsar, A. V. [20]: Computer vision.
• Advantages: computer vision excels in visual understanding, allowing machines to interpret and make decisions based on visual data.
• Disadvantages: it may struggle with abstract concepts or tasks that require a deeper understanding of context, emotions, or complex relationships; this limitation becomes apparent in areas where human intuition and contextual understanding play a significant role.

Xiao, Y. [25]: Out-of-context object detection.
• Advantages: it can provide robust security measures by flagging activities or data points that appear out of the ordinary, helping to detect potential threats.
• Disadvantages: misinterpretation of context or failure to adequately define what is considered normal can lead to false positives or negatives, impacting the reliability of the detection system.
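The adversarial detection idea that recurs in the GAN rows of Table 1 can be caricatured with a tiny stand-alone discriminator. The sketch below trains a logistic-regression scorer on a single made-up scalar feature; all names and feature values are illustrative, since real GAN discriminators are deep CNNs operating on full images.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_discriminator(real, fake, epochs=2000, lr=0.5):
    """Logistic regression on one scalar feature, standing in for the
    discriminator of a GAN-based detector: learn to score 'real'
    samples high and 'generated' samples low."""
    w, b = 0.0, 0.0
    data = [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w += lr * (y - p) * x  # gradient ascent on the log-likelihood
            b += lr * (y - p)
    return w, b

# Toy scalar features: 'real' images cluster near +1, 'generated' near -1.
real_feats = [0.9, 1.1, 1.0, 0.8]
fake_feats = [-1.0, -0.9, -1.1, -0.8]
w, b = train_discriminator(real_feats, fake_feats)

def score(x):
    """Probability that a sample with feature x is real."""
    return sigmoid(w * x + b)
```

After training, `score` separates the two clusters sharply; the same train-then-score pattern underlies the discriminator-based detectors discussed in the next section.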

 Few Models which Help AI-Generated Content Detection:
CNNs can be employed to analyze the visual aspects of images and videos, making them effective in detecting anomalies in content. Specifically, CNNs are used to compare the input image or video with known patterns or characteristics associated with AI-generated content [20]. Pre-trained CNN models can be fine-tuned to classify images or videos as either real or AI-generated. These models are trained on large datasets of both real and AI-generated content to learn patterns and features that are indicative of manipulation. During inference, the model evaluates the input media and outputs a probability or classification indicating the likelihood of manipulation [1].

Paper [25] describes how GANs can be used in a counter-GAN or adversarial approach to AI-generated content detection: by training a discriminator model to distinguish between real and AI-generated content, discrepancies in the generated content can be revealed. The discriminator is trained on a dataset containing both real and AI-generated media. During training, the model learns to identify subtle differences and inconsistencies in AI-generated content. In practice, the discriminator can evaluate new media content and provide a score or classification based on its authenticity.

Diffusion models can be applied to AI-generated content detection by analyzing the statistical properties and pixel-level features of media. They aim to identify anomalies in the content that deviate from natural patterns [10]. Diffusion models are trained to understand the statistical distribution of natural media content. During the detection process, they analyze the distribution of pixel values, textures, and other visual features in the input media. Deviations from expected statistical properties can be used to flag potentially AI-generated content.

Using color-space channel combinations for AI-generated content detection involves analyzing the statistical properties and patterns in the color channels of an image or video. The RGB color space separates an image into its three primary color channels: Red, Green, and Blue. Analyzing these channels separately can help detect variations and inconsistencies that may be indicative of AI-generated content [22]. One common method is to compare the individual color channels of an image: AI-generated content may exhibit disparities in color distribution or patterns across channels. For example, if the Red channel of a face in an image has different features compared to the Green or Blue channels, it can raise suspicions of manipulation.

The literature survey on AI-generated content detection techniques, including Convolutional Neural Networks (CNNs), Generative Adversarial Networks (GANs), and diffusion models, reveals a diverse landscape of methods and approaches for addressing the critical issue of identifying manipulated or synthetic media. These techniques have developed in response to the increasing sophistication of AI-generated content, including deep fakes and altered photographs.

CNNs have proven to be useful for analysing visual content in photos and videos. Their adaptability, transfer learning, and feature-extraction characteristics make them useful tools for content detection. Nevertheless, issues with overfitting, data requirements, and model interpretability still need to be resolved [1].

Using the same technology as AI-generated content, GANs offer a distinctive adversarial approach to content detection. They are quite good at identifying patterns and spotting abnormalities in images. Although GANs provide a strong answer, there are issues with training complexity, data requirements, computational demands, and resilience to adversarial attacks [22].

Diffusion models provide statistical anomaly detection by highlighting content qualities that deviate from expectations. Notable benefits include their effectiveness in intricate inpainting settings and their interpretable results. Their implementation, however, must take into account factors including data requirements, processing demands, and the fine-tuning needed to lower false positives [10].
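The color-channel comparison described above can be reduced to a toy statistic: compute per-channel means for a patch and flag large disagreement between channels. This is only a stand-in for color-space analysis; real detectors such as [22] learn classifiers over far richer channel-combination features, and the threshold below is an arbitrary illustrative value.

```python
def channel_stats(pixels):
    """Per-channel means for a list of (R, G, B) pixels, plus the
    spread (max minus min) between the channel means."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    return means, max(means) - min(means)

def looks_suspicious(pixels, threshold=40.0):
    """Flag a patch whose channel means diverge by more than
    `threshold` intensity levels (illustrative cutoff, not tuned)."""
    _, spread = channel_stats(pixels)
    return spread > threshold

grey = [(128, 128, 128)] * 16    # neutral patch: channels agree
tinted = [(200, 90, 90)] * 16    # strongly red-shifted patch
```

Here `grey` produces zero spread while `tinted` trips the flag; a practical system would compute such statistics per region and feed them, with texture and pattern features, into a learned classifier.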

IV. CONCLUSION

This study surveyed the literature on artificial intelligence (AI)-generated content identification and image inpainting methods. Image inpainting techniques can be broadly categorised into the following approaches:

 Diffusion models
 CNNs
 GANs
 Color-space channels
 Computer vision

The following techniques can be used to detect AI-generated content:

 Diffusion models
 GANs
 Transformers
 CNNs

These methods aid in enhancing the effectiveness and quality of AI-generated content identification as well as image inpainting.

The literature review concludes by highlighting the continued necessity for a variety of flexible techniques to identify information created by artificial intelligence. The right technique will be chosen depending on the particular use case, the resources at hand, and the trade-off between recall and precision. Staying abreast of the rapidly changing environment of AI-generated media will probably require combining various approaches and engaging in interdisciplinary collaboration. The field of content detection will develop in step with the advancement of technology, offering ever more powerful solutions to protect the legitimacy and authenticity of digital material.

REFERENCES

[1]. Jam, J., Kendrick, C., Walker, K., Drouard, V., Hsu, J. G. S., & Yap, M. H. (2021, February). A comprehensive review of past and present image inpainting methods. Computer Vision and Image Understanding, 203, 103147. https://doi.org/10.1016/j.cviu.2020.103147
[2]. Isogawa, M., Mikami, D., Iwai, D., Kimata, H., & Sato, K. (2018). Mask Optimization for Image Inpainting. IEEE Access, 6, 69728–69741. https://doi.org/10.1109/access.2018.2877401

[3]. Guo, Q., Gao, S., Zhang, X., Yin, Y., & Zhang, C. (2018, June 1). Patch-Based Image Inpainting via Two-Stage Low Rank Approximation. IEEE Transactions on Visualization and Computer Graphics, 24(6), 2023–2036. https://doi.org/10.1109/tvcg.2017.2702738
[4]. Lu, H., Liu, Q., Zhang, M., Wang, Y., & Deng, X. (2017, March 13). Gradient-based low rank method and its application in image inpainting. Multimedia Tools and Applications, 77(5), 5969–5993. https://doi.org/10.1007/s11042-017-4509-0
[5]. Xue, H., Zhang, S., & Cai, D. (2017, September). Depth Image Inpainting: Improving Low Rank Matrix Completion With Low Gradient Regularization. IEEE Transactions on Image Processing, 26(9), 4311–4320. https://doi.org/10.1109/tip.2017.2718183
[6]. Elharrouss, O., Almaadeed, N., Al-Maadeed, S., & Akbari, Y. (2019, December 6). Image Inpainting: A Review. Neural Processing Letters, 51(2), 2007–2028. https://doi.org/10.1007/s11063-019-10163-0
[7]. Ding, D., Ram, S., & Rodriguez, J. J. (2019, April). Image Inpainting Using Nonlocal Texture Matching and Nonlinear Filtering. IEEE Transactions on Image Processing, 28(4), 1705–1719. https://doi.org/10.1109/tip.2018.2880681
[8]. Fan, Q., & Zhang, L. (2017, September 2). A novel patch matching algorithm for exemplar-based image inpainting. Multimedia Tools and Applications, 77(9), 10807–10821. https://doi.org/10.1007/s11042-017-5077-z
[9]. Alilou, V. K., & Yaghmaee, F. (2016, March 3). Exemplar-based image inpainting using svd-based approximation matrix and multi-scale analysis. Multimedia Tools and Applications, 76(5), 7213–7234. https://doi.org/10.1007/s11042-016-3366-6
[10]. Jiang, Y., Xu, J., Yang, B., Xu, J., & Zhu, J. (2020). Image Inpainting Based on Generative Adversarial Networks. IEEE Access, 8, 22884–22892. https://doi.org/10.1109/access.2020.2970169
[11]. Quan, W., Zhang, R., Zhang, Y., Li, Z., Wang, J., & Yan, D. M. (2022). Image Inpainting With Local and Global Refinement. IEEE Transactions on Image Processing, 31, 2405–2420. https://doi.org/10.1109/tip.2022.3152624
[12]. Sridevi, G., & Srinivas Kumar, S. (2019, January 14). Image Inpainting Based on Fractional-Order Nonlinear Diffusion for Image Reconstruction. Circuits, Systems, and Signal Processing, 38(8), 3802–3817. https://doi.org/10.1007/s00034-019-01029-w
[13]. Wang, N., Zhang, Y., & Zhang, L. (2021). Dynamic Selection Network for Image Inpainting. IEEE Transactions on Image Processing, 30, 1784–1798. https://doi.org/10.1109/tip.2020.3048629
[14]. Liao, L., Hu, R., Xiao, J., & Wang, Z. (2019). Artist-Net: Decorating the Inferred Content With Unified Style for Image Inpainting. IEEE Access, 7, 36921–36933. https://doi.org/10.1109/access.2019.2905268
[15]. Wang, H., Jiao, L., Wu, H., & Bie, R. (2019). New Inpainting Algorithm Based on Simplified Context Encoders and Multi-Scale Adversarial Network. Procedia Computer Science, 147, 254–263. https://doi.org/10.1016/j.procs.2019.01.250
[16]. Abdal, R., Zhu, P., Mitra, N. J., & Wonka, P. (2021, May 5). StyleFlow: Attribute-conditioned Exploration of StyleGAN-Generated Images using Conditional Continuous Normalizing Flows. ACM Transactions on Graphics, 40(3), 1–21. https://doi.org/10.1145/3447648
[17]. Zeng, Y., Fu, J., Chao, H., & Guo, B. (2023, July 1). Aggregated Contextual Transformations for High-Resolution Image Inpainting. IEEE Transactions on Visualization and Computer Graphics, 29(7), 3266–3280. https://doi.org/10.1109/tvcg.2022.3156949
[18]. Guo, H., Hu, S., Wang, X., Chang, M. C., & Lyu, S. (2022). Robust Attentive Deep Neural Network for Detecting GAN-Generated Faces. IEEE Access, 10, 32574–32583. https://doi.org/10.1109/access.2022.3157297
[19]. Singh, J. M., & Ramachandra, R. (2023). Deep Composite Face Image Attacks: Generation, Vulnerability and Detection. IEEE Access, 11, 76468–76485. https://doi.org/10.1109/access.2023.3261247
[20]. Bhavsar, A. V., & Rajagopalan, A. N. (2012, April). Range map superresolution-inpainting, and reconstruction from sparse data. Computer Vision and Image Understanding, 116(4), 572–591. https://doi.org/10.1016/j.cviu.2011.12.005
[21]. Segawa, Y., Kawamoto, K., & Okamoto, K. (2018, May 14). First-person reading activity recognition by deep learning with synthetically generated images. EURASIP Journal on Image and Video Processing, 2018(1). https://doi.org/10.1186/s13640-018-0272-z
[22]. Mo, S., Lu, P., & Liu, X. (2022, October 27). AI-Generated Face Image Identification with Different Color Space Channel Combinations. Sensors, 22(21), 8228. https://doi.org/10.3390/s22218228
[23]. Choi, J. G. (2000, December 1). Reliable watermark detection method based on analysis of correlation. Optical Engineering, 39(12), 3308. https://doi.org/10.1117/1.1327900
[24]. Zhai, D., Zuo, W., Duan, W., Yu, J., & Li, T. (2013, December 17). Image inpainting algorithm based on double-cross curvature-driven diffusion model. Journal of Computer Applications, 33(12), 3536–3539. https://doi.org/10.3724/sp.j.1087.2013.03536
[25]. Xiao, Y., & Zhao, J. (2024). Evaluating the effectiveness of machine learning models for detecting AI-generated art. Journal of Emerging Investigators. https://doi.org/10.59720/23-132
[26]. Malik, A., Kuribayashi, M., Abdullahi, S. M., & Khan, A. N. (2022). DeepFake Detection for Human Face Images and Videos: A Survey. IEEE Access, 10, 18757–18775. https://doi.org/10.1109/access.2022.3151186
