
MALARIA PARASITE DETECTION BASED ON DEEP LEARNING TECHNIQUES

SREEJITH V
GUIDED BY: Dr. Thomas George


BLOCK DIAGRAM

NIH Malaria Dataset: Infected (1) / Uninfected (0)
→ Data Augmentation
→ Data Splitting
→ Create Model
→ Training and Testing
→ Classify


DATASET ASSESSMENT ON TRADITIONAL CNN MODEL
Image resizing using the PIL library
Data splitting using the scikit-learn (sklearn) library
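The two preprocessing steps above can be sketched as follows. This is a minimal illustration only: the 64×64 target size, the dummy in-memory images, and the 80/20 split ratio are assumptions, not values taken from the slides.

```python
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

def resize_image(img: Image.Image, size=(64, 64)) -> np.ndarray:
    """Resize a cell image with PIL and return it as a NumPy array."""
    return np.asarray(img.resize(size))

# Dummy stand-ins for the NIH cell images and their labels (1 = infected).
images = [Image.new("RGB", (120 + i, 100)) for i in range(10)]
X = np.stack([resize_image(im) for im in images])
y = np.array([1, 0] * 5)

# Stratified split keeps the infected/uninfected ratio in both subsets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```

Stratifying on the label is a cheap safeguard against an unlucky split that leaves one class under-represented in the test set.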

Parameters Details
Training

Training Accuracy Plot


Model

Model Evaluation
Confusion Matrix

Sensitivity, Specificity.
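The evaluation metrics named above follow directly from the 2×2 confusion matrix. The counts below are invented for illustration; they are not the project's actual results.

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Compute sensitivity and specificity from confusion-matrix counts
    (tp/fn/fp/tn = true positive, false negative, false positive, true negative)."""
    sensitivity = tp / (tp + fn)   # true positive rate: infected cells found
    specificity = tn / (tn + fp)   # true negative rate: healthy cells cleared
    return sensitivity, specificity

# Hypothetical counts for a batch of 200 cell images.
sens, spec = sensitivity_specificity(tp=98, fn=2, fp=7, tn=93)
```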
PERFORMANCE COMPARISON ON DIFFERENT TRANSFER
LEARNING MODELS

ALL EXPERIMENTS WERE CARRIED OUT IN JUPYTER NOTEBOOK
WITH SYSTEM SPECIFICATIONS: Intel Core i7 8th Gen processor, 8 GB RAM, 2 GB GPU,
using the TensorFlow and Keras libraries.

MODEL                        TRAINING TIME   ACCURACY   SENSITIVITY   SPECIFICITY
CUSTOMIZED CNN (17 LAYERS)   1 HOUR 30 MIN   95%        0.98          0.93
VGG-19                       1 HOUR 40 MIN   86%        0.88          0.84
DENSENET-121                 1 HOUR 45 MIN   94%        0.96          0.94
RESNET-50                    1 HOUR 30 MIN   95%        0.97          0.95
ASSESSMENT OF THE PREVIOUS WORK AND CORRECTIONS DONE

Previous researchers either used high-end systems that are too costly, or their training time
exceeded two hours.

CONTRIBUTIONS MADE TO CORRECT THIS ISSUE:-


1) USED FAST AI V2 LIBRARY
2) USED MIXED PRECISION TRAINING METHOD
3) USED FINE TUNING
4) ADDITIONALLY ASSESSED THE MODEL BASED ON COHEN’S KAPPA COEFFICIENT

FASTAI V2 LIBRARY: fastai is a Python deep learning library that provides practitioners with high-
level components that can quickly and easily deliver state-of-the-art results.
Optimization algorithms can be implemented in 4–5 lines of code.
CONTINUED……

 MIXED PRECISION METHOD:


As model sizes grow, the memory and compute requirements for training them also
increase.
Instead of 32-bit processing, 16-bit processing is used to reduce memory use and
computation time.
Computations are done in float16 for performance, but variables must be kept in float32 for
numeric stability.
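The float16/float32 split above can be demonstrated with NumPy alone (a generic illustration of half-precision rounding, not code from the project):

```python
import numpy as np

# float16 has far coarser precision than float32: near 1.0 its spacing is
# about 0.001, so a typical small weight update vanishes if the variable
# itself is stored in half precision. This is why a float32 "master copy"
# of each weight is kept while the heavy math runs in float16.
update = np.float32(1e-4)          # a small gradient step

w32 = np.float32(1.0) + update                    # update survives
w16 = np.float16(np.float16(1.0) + np.float16(update))  # rounded away
```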

PROBLEMS OF USING MIXED PRECISION METHOD:-


Gradient underflow: if the gradients are too small (below the smallest value representable
in 16 bits), they get converted to 0.
Activation exploding: a series of matrix multiplications (the forward pass) can easily cause
activations (outputs) of the neural network to grow so large that they overflow to infinity.
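Gradient underflow is easy to reproduce with NumPy, along with loss scaling, the standard remedy that mixed-precision frameworks apply automatically (a general fact about mixed-precision training rather than a detail from the slides; the scale factor 1024 is a typical illustrative choice):

```python
import numpy as np

grad = 1e-8  # below float16's smallest subnormal (~6e-8), so it flushes to 0

# Loss scaling: multiply the loss by a large constant before the backward
# pass so gradients stay representable in float16, then divide the scale
# back out once the gradients are in float32.
scale = 1024.0
scaled = np.float16(grad * scale)          # survives in half precision
recovered = np.float32(scaled) / scale     # unscaled in float32
```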
CONTINUED……

 FINE TUNING:
Finding an optimal learning rate and re-optimizing the network to minimize error.
The pre-trained models are tuned by adjusting the weights of the pre-trained layers.

ASSESSMENT BASED ON COHEN’S KAPPA COEFFICIENT:


In multi-class classification problems, measures such as accuracy or precision/recall do not provide the complete
picture of a classifier's performance.
The Kappa statistic (or value) is a metric that compares an Observed Accuracy with an Expected
Accuracy (random chance).
Kappa = (observed accuracy - expected accuracy) / (1 - expected accuracy)
Cohen’s Kappa is a very good measure that handles both multi-class and imbalanced-class
problems well.
Kappa < 0 indicates no agreement; 0–0.20 slight; 0.21–0.40 fair; 0.41–0.60 moderate; 0.61–0.80
substantial; and 0.81–1 almost perfect agreement.
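Applying the kappa formula above to a binary confusion matrix (the counts here are hypothetical, chosen only to walk through the arithmetic):

```python
def cohens_kappa(tp, fn, fp, tn):
    """Cohen's kappa from 2x2 confusion-matrix counts."""
    n = tp + fn + fp + tn
    observed = (tp + tn) / n
    # Expected chance agreement from the row/column marginal totals.
    expected = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    return (observed - expected) / (1 - expected)

# observed = 0.955, expected = 0.5, so kappa = 0.455 / 0.5 = 0.91:
# in the 0.81-1 "almost perfect agreement" band.
kappa = cohens_kappa(tp=98, fn=2, fp=7, tn=93)
```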
IMPLEMENTATION AND RESULTS
DATA AUGMENTATION AND DATA SPLITTING

DATA BLOCK VISUALIZATION


CONTINUED……

MIXED PRECISION TRAINING AND FINE TUNING

PERFORMANCE EVALUATION ON TRANSFER LEARNING MODELS


FOR RESNET-50
CONTINUED……
FOR VGG-19
FOR DENSENET-121
MODEL PREDICTION
PERFORMANCE EVALUATION WITH PREVIOUS METHODS

MODEL          EPOCHS (PREVIOUS)   TIME (PREVIOUS)   ACCURACY (PREVIOUS)   EPOCHS (FASTAI V2)   TIME (FASTAI V2)   ACCURACY (FASTAI V2)
RESNET-50      20                  1 HOUR 30 MIN     94%                   5                    30 MIN             97%
VGG-19         20                  1 HOUR 40 MIN     86%                   5                    30 MIN             96%
DENSENET-121   20                  1 HOUR 45 MIN     95%                   5                    30 MIN             96%


FUTURE SCOPE
PUBLISHING MY RESEARCH AND FINDINGS AS A JOURNAL ARTICLE.

CARRYING OUT THE SAME RESEARCH ON THE CLOUD.


THANK YOU !!!!
