
Replenishing life into degraded retro images through Deep Learning
Jaideep Cherukuri (AP18110010330)
SRM UNIVERSITY, AP
Coursework Digital Image Processing

Aim: The goal of this project is to build an end-to-end image processing system capable of generating highly realistic motion videos from a single sample image, even when that image is severely degraded.
The motivation for this work came from a personal experience of my closest friend during the pandemic. Many individuals succumbed to the virus, among them his grandparent. During this heart-wrenching period, he realized that the only memories left to him were a few damaged photos dating back to the 1900s.

Abstract: Before the boom in digital imaging in the early 2000s, it was rare for ordinary families to capture nostalgic moments of their loved ones, and constant research efforts in image processing have sought to bridge this innovation gap. More often than not, the elders of a family have only a few deteriorated black-and-white photos of their dearest who are no longer with them. A picture may be worth a thousand words, but a video can bring back cherished memories, and this has driven much recent interest. Hence, in this project I focus on using a deep learning technique to restore old photos that have been severely degraded, and then take the processing a step further by generating motion videos from the restored results using a first-order motion model. Unlike traditional restoration tasks, which can be solved with supervised learning, real-world photo degradation is complex, and the domain distance between synthetic training images and real old photos causes such networks to fail to produce effective results. A triplet domain translation technique is therefore used, which bridges the domain gap to real-world images in a compact latent space. The approach targets both structured defects, such as dust spots and scratches, and unstructured defects, such as noise and blur.
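To make the supervised side of this concrete: restoration networks are typically trained on clean photos that are artificially degraded. As a minimal illustration (plain Python only; the `degrade` helper and its parameters are hypothetical, not the project's actual pipeline), unstructured defects such as noise and blur can be synthesized like this:

```python
import random

def degrade(image, noise_sigma=20.0, blur_radius=1):
    """Apply unstructured defects (Gaussian noise, then a box blur) to a
    grayscale image given as a list of rows of 0-255 intensities."""
    h, w = len(image), len(image[0])
    # Add Gaussian noise, clamping to the valid intensity range.
    noisy = [[min(255.0, max(0.0, px + random.gauss(0, noise_sigma)))
              for px in row] for row in image]
    # Box blur: replace each pixel with the mean of its neighbourhood.
    blurred = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [noisy[j][i]
                    for j in range(max(0, y - blur_radius), min(h, y + blur_radius + 1))
                    for i in range(max(0, x - blur_radius), min(w, x + blur_radius + 1))]
            row.append(sum(vals) / len(vals))
        blurred.append(row)
    return blurred

clean = [[128] * 8 for _ in range(8)]  # flat grey 8x8 test image
degraded = degrade(clean)              # (clean, degraded) forms one training pair
```

Such synthetic pairs are exactly what creates the domain gap the abstract refers to: real scratches, fading, and film grain do not follow these simple statistical models, which is why the latent-space domain translation step is needed.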
Methodology:
Results:

In this case, I used characteristic facial traits as additional keypoints, so that features such as the beard are transformed along with the face.
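For intuition on how keypoints drive the animation, the first-order motion model detects keypoints in both the source image and each driving frame, and warps the source so its keypoints follow the driving ones. Below is a toy, translation-only sketch of that idea (plain Python; `warp_by_keypoints` is an illustrative helper, not the model's actual dense-motion network, which fits a local affine transform per keypoint instead):

```python
def warp_by_keypoints(source, src_kps, drv_kps):
    """Crude keypoint-driven warp: shift the source image by the mean
    (row, col) displacement from source keypoints to driving keypoints,
    with zero padding at the borders."""
    h, w = len(source), len(source[0])
    n = len(src_kps)
    dy = round(sum(d[0] - s[0] for s, d in zip(src_kps, drv_kps)) / n)
    dx = round(sum(d[1] - s[1] for s, d in zip(src_kps, drv_kps)) / n)
    return [[source[y - dy][x - dx]
             if 0 <= y - dy < h and 0 <= x - dx < w else 0
             for x in range(w)]
            for y in range(h)]

# Toy 4x4 image with one bright pixel at row 1, col 1; the single
# keypoint moves one column to the right in the driving frame.
img = [[0] * 4 for _ in range(4)]
img[1][1] = 255
moved = warp_by_keypoints(img, src_kps=[(1, 1)], drv_kps=[(1, 2)])
# The bright pixel follows the keypoint to row 1, col 2.
```

Adding extra keypoints on characteristic traits, as done here for the beard, gives the warp more anchors in that region, so the feature moves coherently with the rest of the face.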

Conclusion:
Although this project focused only on transforming retro images into motion videos, the architecture has a wide array of applications. One of these is generating animation videos for game models, since motion capture (MotionScan) is extremely expensive and produces large files, further increasing the size of a game.
