
BACHELOR OF TECHNOLOGY

IN
Artificial Intelligence and Machine Learning

Batch Number: 06

2011CS020381 – V. Abhiram
Project Guide: Dr. A. Kiran Kumar
2011CS020382 – V. Chanukya
2011CS020383 – E. Bhavya Deepika
2011CS020384 – V. Keerthi Priyanka

Department of AIML, School of Engineering


Malla Reddy University
CONTENTS:
• Abstract
• Introduction
• Literature Survey
• Proposed Methodology
• Results and Discussion
• Conclusion
Abstract:
There is currently strong interest in data-driven approaches to medical image
classification. However, medical imaging data is scarce, expensive, and fraught
with legal concerns regarding patient privacy. Typical consent forms only allow
patient data to be used in medical journals or for education, which leaves much
medical data inaccessible for public research. We propose a two-stage image
generation architecture that can generate synthetic medical images and use them
to build public repositories, so that researchers can carry out studies for
various purposes without long waits for access to clinical data.
Introduction:
Generative Adversarial Networks or GAN:
• GANs are machine learning models in which two neural networks compete, using deep
learning methods to become more accurate in their predictions.
• GANs contain a generator and a discriminator. While the generator produces a
fake image using the data that has been fed to it, the discriminator classifies
whether the generated image is a fake or a real one.
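
A minimal sketch of this generator/discriminator interplay, assuming a PyTorch
implementation with small fully connected networks (the layer sizes, batch size,
and flattened 64x64 grayscale image shape are illustrative assumptions, not the
project's actual configuration):

    import torch
    import torch.nn as nn

    LATENT_DIM = 100          # size of the input noise vector (assumed)
    IMG_PIXELS = 64 * 64      # flattened 64x64 grayscale image (assumed)

    # Generator: maps a noise vector to a fake image.
    generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, IMG_PIXELS), nn.Tanh(),   # pixel values in [-1, 1]
    )

    # Discriminator: scores an image as real (1) or fake (0).
    discriminator = nn.Sequential(
        nn.Linear(IMG_PIXELS, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    bce = nn.BCELoss()
    noise = torch.randn(16, LATENT_DIM)          # a batch of 16 noise vectors
    fake_images = generator(noise)

    # The discriminator is trained to label generated images as fake (0) ...
    d_loss_fake = bce(discriminator(fake_images.detach()), torch.zeros(16, 1))
    # ... while the generator is trained to make the discriminator output real (1).
    g_loss = bce(discriminator(fake_images), torch.ones(16, 1))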
Literature Survey:
Our survey was mostly based on two papers:
1. "Synthetic medical images from dual generative adversarial networks.“ by
Guibas, John T., Tejpal S. Virdi, and Peter S. Li where the authors used two
GANs, DCGAN and Conditional GAN for two stages to generate medical images.
The results obtained were near perfect but with some drawbacks as well as
performance issues.

2. "Wasserstein generative adversarial networks" by Arjovsky, Martin, Soumith
Chintala, and Léon Bottou. In this paper, the authors discussed the Wasserstein
GAN, an extension of DCGAN introduced mainly to tackle some of the problems
faced with DCGAN.
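
A minimal sketch of the two-stage idea from the first paper, with toy networks
and assumed shapes (stage one generating a vessel segmentation mask from noise,
stage two a conditional generator turning that mask into a fundus-like image);
this is only an illustration, not the paper's actual architecture:

    import torch
    import torch.nn as nn

    # Stage 1: noise -> synthetic segmentation mask (values in [0, 1]).
    stage1 = nn.Sequential(nn.Linear(100, 64 * 64), nn.Sigmoid())

    # Stage 2: conditional generator, mask -> 3-channel fundus-like image.
    stage2 = nn.Sequential(nn.Linear(64 * 64, 3 * 64 * 64), nn.Tanh())

    noise = torch.randn(1, 100)
    mask = stage1(noise)                          # stage-1 output: synthetic mask
    fundus = stage2(mask).view(1, 3, 64, 64)      # stage-2 output: synthetic image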
Proposed Methodology:
• The proposed methodology uses WGAN, an extension of DCGAN introduced
specifically to tackle problems such as mode collapse and vanishing gradients
that arise with DCGAN (see the training-step sketch after this list).
• Users can also define their own alterations to the parameters of an existing
image, so that they can generate a new image that was not used to train or test
the model.
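
A minimal sketch of one WGAN critic update, assuming the standard formulation
from Arjovsky et al. (the critic, generator, and optimizer objects, latent size,
and clip value are placeholders, not this project's actual settings): the critic
widens the score gap between real and generated images, and its weights are
clipped to keep it approximately 1-Lipschitz, which is what mitigates mode
collapse and vanishing gradients.

    import torch

    def critic_step(critic, generator, real_images, opt_critic,
                    latent_dim=100, clip_value=0.01):
        """One WGAN critic update: raise scores on real images, lower them on
        generated images, then clip weights to enforce the Lipschitz constraint."""
        noise = torch.randn(real_images.size(0), latent_dim)
        fake_images = generator(noise).detach()

        # Wasserstein critic loss: no sigmoid / log loss as in a DCGAN discriminator.
        loss = critic(fake_images).mean() - critic(real_images).mean()

        opt_critic.zero_grad()
        loss.backward()
        opt_critic.step()

        # Weight clipping keeps the critic (approximately) 1-Lipschitz.
        for p in critic.parameters():
            p.data.clamp_(-clip_value, clip_value)
        return loss.item()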
Results and Discussion:
[Figure: original images from the DRIVE dataset, fundus images generated using
the proposed system, manual segmentations from the dataset, and generated
segmentation maps.]
Observations:
• The existing system achieved an F1 score of 0.8997 for our synthetically
trained U-Net and an F1 score of 0.9153 for our DRIVE-trained U-Net (the F1
computation is sketched after these observations).

• Our synthetic image output is expected to be similar, but with an improvement
in the F1 score and other evaluation metrics.
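
For reference, a sketch of how an F1 score can be computed on binary vessel
segmentation masks, following the standard precision/recall definition (the
array names and the small epsilon are assumptions; this is not the project's
actual evaluation code):

    import numpy as np

    def f1_score(pred_mask, true_mask, eps=1e-8):
        """F1 = 2PR / (P + R) on binary masks where 1 marks a vessel pixel."""
        pred = pred_mask.astype(bool)
        true = true_mask.astype(bool)
        tp = np.logical_and(pred, true).sum()    # vessel pixels correctly found
        fp = np.logical_and(pred, ~true).sum()   # background marked as vessel
        fn = np.logical_and(~pred, true).sum()   # vessel pixels missed
        precision = tp / (tp + fp + eps)
        recall = tp / (tp + fn + eps)
        return 2 * precision * recall / (precision + recall + eps)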
Conclusion:

• From the results we can conclude that we are able to generate a synthetic
retinal image that is very close to the original images used for training.
• It is currently only possible to generate images that are similar to those in the dataset
and not ones that possess unique characteristics or features.
• This is because the second-stage generator, a cGAN (Conditional GAN) that
generates images from the training and testing sets, is still under
development; it can be extended further to create images of complex organs
such as the lungs, brain, and heart.
• At present, the model is very similar to the existing model, but with better
color accuracy and more accurate segmentation results; it will be modified to
be more resourceful and functional than the existing model.
THANK YOU!
