
Synopsis

As per the presented paper, the data were collected from authorized hospitals in the form of images of cancer cells taken through a microscope with a 40x objective lens (Model: PH100-DB500U-IPL, Brand: Phenix, Place of origin: China) and a digital still camera (Model: MC-D200UVA, Brand: Phenix, Place of origin: China). The images covered 85 specimens in all, comprising 24 serous carcinoma, 22 mucinous carcinoma, 21 endometrioid carcinoma, and 18 clear cell carcinoma, all of them qualified Hematoxylin-Eosin (H&E) stained tissue sections of ovarian cancer.

The images were converted to JPG format and cropped to 1024 × 1024 pixels from the center. Each cropped image was then divided into four images of 512 × 512 pixels, and each of those was resized to 227 × 227 pixels, giving a total of 7392 images. Since a training sample of insufficient size may cause overfitting, the data set was further enlarged by image augmentation.
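To make the preprocessing concrete, a minimal sketch in Python is given below, assuming the Pillow library; the function name, file paths, and the bilinear interpolation are illustrative assumptions, since the synopsis does not specify them.

```python
# Sketch of the preprocessing pipeline: centre-crop to 1024 x 1024,
# split into four 512 x 512 tiles, resize each to 227 x 227, save as JPG.
# Paths and names are illustrative, not taken from the paper.
from pathlib import Path
from PIL import Image

TILE = 512          # each centre crop is split into 512 x 512 tiles
NET_INPUT = 227     # AlexNet-style input resolution

def preprocess(src_path: str, out_dir: str) -> None:
    img = Image.open(src_path).convert("RGB")
    w, h = img.size
    # 1024 x 1024 crop from the image centre
    left, top = (w - 1024) // 2, (h - 1024) // 2
    img = img.crop((left, top, left + 1024, top + 1024))
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    stem = Path(src_path).stem
    # Divide into four 512 x 512 quadrants, then resize each to 227 x 227
    for i, (x, y) in enumerate([(0, 0), (TILE, 0), (0, TILE), (TILE, TILE)]):
        tile = img.crop((x, y, x + TILE, y + TILE)).resize(
            (NET_INPUT, NET_INPUT), Image.BILINEAR)
        tile.save(out / f"{stem}_tile{i}.jpg", "JPEG")
```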
For augmentation, the images were enhanced with a Gaussian high-pass filter and a Laplacian filter, and the resulting images were rotated from 0° to 270° in 90° steps to increase the sample size.
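The enhancement and rotation step might look like the following sketch, assuming OpenCV and NumPy. The kernel sizes and the subtraction-based high-pass formulation are assumptions, as the synopsis names the filters but not their parameters; the sketch simply generates every filter-rotation combination, while the paper's exact recipe for reaching the 11-fold increase reported below is not spelled out.

```python
# Sketch of the augmentation step: Gaussian high-pass and Laplacian
# enhancement, then rotation by 0/90/180/270 degrees.
import cv2
import numpy as np

def augment(img: np.ndarray) -> list[np.ndarray]:
    """Return filtered variants of an image, each at four rotations."""
    # Gaussian high-pass: subtract a Gaussian-blurred copy from the original
    # (kernel size is an assumed parameter)
    blurred = cv2.GaussianBlur(img, (9, 9), 0)
    high_pass = cv2.subtract(img, blurred)
    # Laplacian filter to emphasise edges, rescaled back to 8-bit range
    laplacian = cv2.convertScaleAbs(cv2.Laplacian(img, cv2.CV_16S, ksize=3))
    variants = []
    for variant in (img, high_pass, laplacian):
        for k in range(4):                      # 0, 90, 180, 270 degrees
            variants.append(np.rot90(variant, k))
    return variants
```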
For experimentation, two recognition models were built. One was trained on the data without image augmentation, amounting to 7392 images; the other was trained on the augmented data, amounting to 81312 images, 11 times as many (7392 × 11 = 81312).
Both data sets were processed with a DCNN based on AlexNet for the automatic classification of ovarian cancer cytological images. The DCNN comprised five convolutional layers, three max-pooling layers, and two fully connected layers, each followed by a Rectified Linear Unit (ReLU) as the activation function. The fully connected layers hold most of the parameters and are therefore the main source of overfitting, so Dropout was applied to prevent it. After processing, the probabilities of the four ovarian cancer types were calculated with the Softmax function.
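A minimal model matching those layer counts is sketched below in PyTorch; the channel widths, kernel sizes, and dropout rate follow the standard AlexNet configuration and are assumptions, since the synopsis does not list them.

```python
# AlexNet-style DCNN for 227 x 227 inputs: five convolutional layers,
# three max-pooling layers, two fully connected layers, ReLU activations,
# Dropout, and a four-way Softmax output.
import torch
import torch.nn as nn

class OvarianDCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 96, kernel_size=11, stride=4), nn.ReLU(inplace=True),    # conv1
            nn.MaxPool2d(3, stride=2),                                            # pool1
            nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(inplace=True),  # conv2
            nn.MaxPool2d(3, stride=2),                                            # pool2
            nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv3
            nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv4
            nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(inplace=True), # conv5
            nn.MaxPool2d(3, stride=2),                                            # pool3
        )
        self.classifier = nn.Sequential(
            nn.Dropout(0.5),                                       # curb FC overfitting
            nn.Linear(256 * 6 * 6, 4096), nn.ReLU(inplace=True),   # fc1
            nn.Dropout(0.5),
            nn.Linear(4096, num_classes),                          # fc2: 4 cancer types
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                 # expects 3 x 227 x 227 input
        x = torch.flatten(x, 1)              # -> 256 * 6 * 6 features
        logits = self.classifier(x)
        return torch.softmax(logits, dim=1)  # per-class probabilities
```

For training, one would normally feed the logits to a cross-entropy loss rather than the Softmax output; the Softmax is kept in the forward pass here because the synopsis states that it produces the class probabilities.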
The classification model trained on the augmented image data reached 78.20% accuracy, while the model trained on the original image data reached 72.76%, an increase of 5.44 percentage points. The statistical significance of the difference between the two models was validated with a paired-sample t-test, which showed that the difference in accuracy was significant (P < 0.05). The accuracy of the DCNN was close to that of pathologists' diagnoses, though for confirmation further tests and pathologists' advice must be taken. Some inaccuracy in detection may result from the small number of samples, which is beyond the scope of the presented work.
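The significance check could be reproduced along the following lines, assuming SciPy and that paired per-run accuracies are available for both models; the accuracy lists below are placeholders, not values from the paper.

```python
# Paired-sample t-test comparing the two models' accuracies.
from scipy import stats

# Hypothetical paired accuracies (e.g., one entry per test fold or run);
# placeholder values for illustration only.
acc_with_aug = [0.79, 0.78, 0.77, 0.80, 0.78]
acc_without_aug = [0.73, 0.72, 0.74, 0.73, 0.72]

t_stat, p_value = stats.ttest_rel(acc_with_aug, acc_without_aug)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
significant = p_value < 0.05   # the paper reports significance at P < 0.05
```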
