ABSTRACT

Many studies on face aging have been conducted, ranging from approaches that use pure image-
processing algorithms to those that use generative adversarial networks. When computationally aging a face, it is preferable that the output age is close to the expected age and that the individual's characteristics are preserved. Conventionally, two types of modeling techniques have been used for this task: prototype-based and model-based methods. When transforming a face from
a younger domain to an aged domain, both approaches fail to retain individual characteristics.
With advancements in computer vision, generative models, particularly generative adversarial networks (GANs), have been used to perform this task. It is now possible to generate realistically
aged faces of specific individuals using them. However, these methods cannot meet the three
essential requirements of face aging simultaneously and usually generate aged faces with strong
ghost artifacts when the age gap becomes large. Therefore, an identity-preserved and attention-based progressive face aging generative adversarial network (IPAPFA-GAN) is proposed to mitigate these issues. The proposed architecture uses a self-attention GAN (sketched below) to maintain identity-related information and improve the quality of the generated images. In addition, the pixel-wise loss is replaced with an attention loss to alleviate accumulative blurriness. Further, a least-squares GAN loss is employed for the discriminator to improve the quality of the synthesized images and stabilize the training process. Furthermore, quantitative experiments validate the effectiveness of our approach. The inception score of the proposed solution indicates improved quality of the generated images, reaching 34.23. The Pearson correlation coefficient likewise indicates better aging accuracy and smoothness, reaching 0.993. Finally, the verification confidence scores show improved identity preservation between the input faces and the generated faces, reaching 97.03, 95.12, and 90.42 for the three age groups, respectively. Comprehensive experiments on the CACD benchmark dataset demonstrate that the proposed solution, with its attention loss, least-squares GAN, and self-attention mechanism, outperforms existing state-of-the-art methods.

Keywords: Face aging, GAN, self-attention, identity-preserved, generative models, IPAPFA-GAN.
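
The self-attention mechanism mentioned in the abstract is not specified there in detail; the following is a minimal sketch of a SAGAN-style self-attention layer of the kind such architectures typically use. PyTorch is assumed, and the class name, the channel-reduction factor of 8, and the zero-initialized residual scale are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial feature maps (illustrative sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions produce query, key, and value projections.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Learnable scale; starts at 0 so the block initially acts as an identity map.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n)   # (b, c//8, n)
        k = self.key(x).view(b, -1, n)     # (b, c//8, n)
        v = self.value(x).view(b, c, n)    # (b, c, n)
        # Attention map: similarity of every spatial location to every other location.
        attn = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, n, n)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        # Residual connection keeps the original features and adds attended context.
        return self.gamma * out + x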
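
Inserting such a layer between convolutional blocks of the generator lets spatially distant facial regions attend to one another, which is how self-attention can help keep identity-related structure consistent in the aged output.

For reference, the least-squares GAN objective mentioned above is, in its standard unconditional form (following Mao et al.; the exact age-conditioned variant used by the paper is not given in the abstract):

$$
\mathcal{L}_D = \tfrac{1}{2}\,\mathbb{E}_{x\sim p_{\mathrm{data}}}\big[(D(x)-1)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{z\sim p_z}\big[D(G(z))^2\big],
\qquad
\mathcal{L}_G = \tfrac{1}{2}\,\mathbb{E}_{z\sim p_z}\big[(D(G(z))-1)^2\big].
$$

Replacing the sigmoid cross-entropy of the original GAN with these squared-error terms penalizes samples by their distance from the real-data decision value, which provides smoother gradients and is the usual reason a least-squares discriminator stabilizes training and sharpens the synthesized images.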
