Thesis3 Revised 2021 Appendices Na Lang
by
Joshua G. Ang
Julie Seline S. Tapang
Jovan D. Valenzuela
ECE/5
Mapúa University
January 2021
Table of Contents
Chapter 1
  INTRODUCTION
Chapter 2
  B. Physical Examination
  C. Digital Rectal Examination
  D. Alternative Rectal Examination
  N. Image processing
  P. MobileNet V2 Model/Architecture
CHAPTER 3
  3.4 Hardware
  3.5 Block Diagram of the System
  3.6 Application
  3.7.1 Dataset
  3.7.2 Image Augmentation
  3.7.3 Model
Chapter 4
  CONCLUSION
Chapter 5
  RECOMMENDATIONS
REFERENCES
Chapter 1
INTRODUCTION
The perineal region is a common site of disease in dogs. Most of the malignant conditions affecting this region arise from its glandular tissues, with circumanal tumors and anal sac disease being the most common [1]. Anal glands, also called anal sacs, are located between the layers of muscle that make up the rectum. Anal glands most frequently become a problem when they get impacted and eventually infected. Circumanal adenocarcinomas and anal sac disease affect neutered and intact dogs of both sexes and do not appear to be hormone dependent. A higher predisposition has been reported in large-breed male dogs aged 11 years on average. Anal tumors are associated with low metastatic rates of approximately 15% [2]. Failure of the anal sacs to empty during defecation, poor muscle tone, and generalized seborrhea (excessive secretion) lead to retention of sac contents. Such retention creates a risk of bacterial overgrowth, infection, and inflammation. Signs of pain and discomfort are associated with sitting; scooting, licking, biting at the anal area, and painful excretion of feces can be observed. Rectal prolapse and rectal tumors are other disorders that may affect the perineal region. In veterinary medicine, thoracic radiography, ultrasound, low-field magnetic resonance imaging (MRI), and the traditional physical examination remain the most widely used methods for detecting such conditions in dogs. These modalities image the internal structures of the dog but not the external ones. Developing a non-invasive tool that analyzes the state of the dog's anal structures to detect anal disease can help solve this problem.
Rectal image processing is already used in veterinary medicine, as described in "Diagnostic Imaging Features of Normal Anal Sacs in Dogs and Cats" by Jung, Yechan, et al. (2016) [3]. On radiographs, anal sacs are usually invisible because of border effacement with adjacent soft tissue structures. However, gas-containing anal sacs are occasionally visible; one study reported them in 6.2% of male dogs and 4.9% of female dogs [4]. Another non-invasive imaging technique is ultrasonography, which visualizes internal body structures through the reflection of ultrasonic waves. Ultrasonography is performed in real time, so the results are immediately available for analysis. Ultrasonographic evaluation of the anal sacs is easy, inexpensive, and readily accessible; it is therefore considered the most practical imaging modality for evaluation of the anal sacs [3]. Previous studies measured anal sac size indirectly or post-mortem, but ultrasonography can measure the anal sacs under natural conditions. According to published studies, the normal diameter of the anal sacs in dogs varies between 0.5 and 4 cm [5]. The ultrasonographic features of anal sac contents are expected to vary because of their variable viscosity and consistency.
Current medical imaging techniques that involve image processing require large equipment and a veterinarian for interpretation. There is currently no mobile technology for disease detection that uses deep learning, so one is necessary. While diseases without a superficial manifestation require invasive procedures, those with superficial signs may be identified externally using image processing and deep learning.
In this research, we created an android application to help dog owners identify Anal Sac
Disease, Atresia Ani, and Rectal Prolapse in dogs and use the information given by the users for
the demography of each disease. The program uses image processing and a convolutional neural network running on an online server, where the information necessary for the demography study is also stored. The program can tell the diseases apart immediately, minimizing the testing time and number of tests as well as the veterinarian's physical interaction with the perianal area
of the dogs. Our specific objectives are: (1) to develop an image processing algorithm for
analyzing the unique characteristics of the diseases, (2) to use machine learning for detecting the
anal condition of the dog, (3) to train the prototype using at least 50 images of each of the
conditions to ensure high confidence of the identification, (4) to make an android application that
will ask for information such as the breed, sex, and age of the dog before the capture/open image
for the identification is enabled, and (5) to test the application using images of the diseases and
compare with a veterinarian’s diagnosis.
The application will benefit dog owners by detecting the diseases at home. Testing time will be much shorter, with a smaller chance of misdiagnosis, so proper care and medication can begin sooner.
This study covers the detection of disorders such as Anal Sac Disease, Atresia Ani, and Rectal Prolapse. Its image processing considers the anal sacs and adjacent structures, along with their anatomic location, margin, size, shape, and echogenicity. Through these parameters, it can produce more effective and specific findings for diagnosing and detecting these diseases early. The authors of this study anticipate the application of diagnostic image processing for detection of the stated diseases because modalities such as ultrasonography and MRI for soft tissue evaluation are widely used in veterinary clinics but quite expensive. This study is limited to detecting only the three disorders mentioned above.
Chapter 2
REVIEW OF RELATED LITERATURE
Pets cannot tell you how they are feeling; as a result, sickness may be present before you are aware of it. Clinical examination was found to detect underlying disease during a case series of canine geriatric screening appointments [6], so the clinical examination may have a wider role in the early detection of disease. Clinical examination of dogs can help detect sickness or abnormalities and may catch underlying disease before it progresses. A good physical examination can detect minor abnormalities before they become serious problems, as well as identify major organ dysfunction without extensive and expensive medical tests. However, a full clinical examination of every patient may not be practical or even necessary [7], and future work could focus on identifying patient groups where thorough examination is likely to have a positive impact on long-term health outcomes.
B. Physical Examination
Physical examination is a routine medical examination that helps monitor the health of an apparently healthy patient. It is an important skill for a veterinarian to develop: it helps identify minor abnormalities that could become serious problems and detect dysfunction in major organs without extensive and expensive medical tests. A good physical exam can detect many conditions and can change the use of anesthetics, supportive care, and monitoring. According to Dr. Cheryll Yuill of VCA Hospital, how often dogs should have a physical examination depends on their age and current health status. Puppies should have a physical examination once a month, while older dogs can have theirs annually. Doctors check the dog's superficial status and also palpate regions such as the lymph nodes, legs, abdomen, and rectum [8].
used also by veterinarians to detect malignant tumors inside dogs. This diagnostic imaging modality has often been used on veterinary patients, particularly in assessing the lymph nodes.
There are many types of disorders, but according to audits by groups such as the Department of Veterinary Clinical Medicine, College of Veterinary Medicine, University of Illinois at Urbana-Champaign, the common disorders found in dogs are Anal Sac Disease, Atresia Ani, and Rectal Prolapse [11].
Anal sacs may become clogged (impacted), infected, abscessed, or cancerous. There are several
common causes of clogged anal sacs, including failure of the sacs to be squeezed out during
defecation, poor muscle tone in obese dogs, and excessive secretion of the gland. When the
clogged gland contents are not periodically squeezed out, this can make the glands susceptible to
bacterial overgrowth, infection, and inflammation [11].
Signs are related to pain and discomfort associated with sitting. Scooting, licking, biting at the anal area, and painful defecation with tenesmus may be noted. Induration, abscesses, and fistulous tracts are common. In impaction, hard masses are palpable in the area of the sacs; the sacs are packed with a thick, pasty, brown secretion, which can be expressed as a thin ribbon only with a large amount of pressure. When the sacs are infected or abscessed, severe pain and often discoloration of the area are present. Fistulous tracts lead from abscessed sacs and rupture through the skin; these must be differentiated from Atresia Ani. Anal sac neoplasms are usually nonpainful and are associated with perineal edema, erythema, induration, or fistula formation. Apocrine gland adenocarcinomas of the anal sac are typically seen in older female dogs [12]. Affected dogs may also chase their tails, lick or bite at the anal area, and strain painfully during defecation.
H. Atresia Ani
Atresia ani is a congenital embryological anomaly in which the hindgut fails to fully
communicate with the perineum. The anus may be either stenotic or imperforate; atresia ani may
appear alone or in combination with rectovaginal or rectovestibular fistula (RVF). In dogs,
females and certain breeds including poodles and Boston Terriers are predisposed. There are few
reports of atresia ani in cats, and most are females with concurrent RVF. The most frequently
reported anomaly is atresia ani. Four types of atresia ani have been reported, including congenital
anal stenosis (Type I); imperforate anus alone (Type II) or combined with more cranial
termination of the rectum as a blind pouch (Type III); and discontinuity of the proximal rectum
with normal anal and terminal rectal development (Type IV). An increased incidence was found
in females and in several breeds, including miniature or toy poodles and Boston terriers. Surgical
repair is the treatment of choice, but postoperative complications can occur, including fecal
incontinence and colonic atony secondary to prolonged preoperative distension.
Shown in Figure 2, Atresia ani results when the dorsal membrane separating the rectum and anus
fails to rupture. Clinical signs are apparent at birth and include tenesmus, abdominal pain and
distention, retention of feces, and absence of an anal opening. It is rare in dogs but has been
reported in several breeds, including Toy Poodles and Boston Terriers, with an increased
incidence in females. Concurrent rectovaginal fistula may occur. Euthanasia is recommended in
large animals. In small animals, postoperative fecal incontinence may be a complication of
surgical intervention [13].
Other clinical signs include attitude change, straining and painful defecation, loss of appetite, lethargy, diarrhea, absence of feces, and bulging of the perineum.
Type I: A membrane over the anal opening remains, with the rectum ending as a blind pouch just cranial to the closed anus.
Type II: The anus is closed as in Type I, but the rectal pouch is located somewhat cranial to the membrane overlying the anus.
Type III: The rectum ends as a blind pouch cranially within the pelvic canal (rectal atresia), while the terminal rectum and anus are normal.
Type IV: In females, atresia ani exists with a persistent communication between the rectum and vagina (rectovaginal fistula). This fistula can occur with a normal anal opening as well. The cause of this condition is failure of the cloacal membrane to resorb or rupture by the second month of embryonal development; hence, continuity between the rectum and anus is impaired. The result of this defect is altered defecation behavior, resulting in outpouching of the anal region and abnormal vaginal discharges.
J. Rectal Prolapse
Rectal prolapse, shown in Figure 3, can occur in dogs at any age and can be congenital or develop later in life. Though this condition occurs in both males and females, the female has an additional possible cause in the birthing process. Several diseases can cause straining in puppies, which may lead to protrusion of the rectum through the anus [14].
Figure 3. Picture of Rectal Prolapse [39]
Rectal prolapse occurs when all layers of the anal/rectal tissue, along with the rectal
lining, protrude through the external anal opening. The protrusion of the rectal lining through the
external anal opening, meanwhile, is solely referred to as anal prolapse.
Dogs with rectal prolapse will demonstrate persistent straining while passing stool (or
defecating). In an incomplete prolapse, a small portion of the lining of the rectum will be visible
during excretion, after which it will subside. In a complete prolapse, there will be a persistent
mass of tissue protruding from the anus. In the chronic stages of complete prolapse, this tissue
might be black or blue in appearance [15].
Prolapse may be classified as incomplete (only the innermost rectal layer protrudes) or complete (all rectal layers protrude). The distinctive feature of rectal prolapse in dogs, easily noticed, is an elongated, red, cylinder-shaped mass protruding through the anal opening.

The parameters examined are the anatomic location of the anal sacs, their margin, shape, and echogenicity (on ultrasonography). The anal sacs were invisible on plain radiographs; however, a contrast study allowed visualization of their location, shape, size, and margin (edge).
Figure 5. Anal scan using Radiography [3]
On radiography, the anal sacs usually appeared round to oval in shape on ventrodorsal and lateral views. The dog anal sacs were ellipsoidal, with the major axis in the craniocaudal dimension on ventrodorsal views and in the dorsoventral dimension on lateral views. On ultrasonography the results were similar, with the anal sacs again seen to be ellipsoidal. The study also measured the normal size of the anus of dogs, which varies from 0.6 cm to 1.8 cm (major axis).
2.3 Dog-Human Companionship
Animal companionship is an integral aspect of life; one reason for owning a pet is that animals are more consistent and reliable than human-human relationships. Dog companionship has a significant role in owners' hearts and lives. Several dog owners report attachments to their dogs that are as strong as their attachments to their friends, children, and spouses [16]. According to the statistical data of the Philippine Canine Club, Inc., there are approximately 1.2 million dog owners in the country [17]. The number of dog owners shows how deeply humans love the companionship of dogs and why their health and well-being are important. Other research shows that dogs serve important human-to-human social functions, such as facilitating interaction with unacquainted persons and establishing trust among the newly acquainted. Social contact has long been recognized as beneficial in that it alleviates feelings of loneliness and social isolation. Pets undoubtedly act as "social catalysts", leading to greater social contact between people [18]. Most dog owners value their pets as family members.
As shown in Figure 7, the loss of a pet may be particularly distressing for owners if it was linked to diseases that can cause death. For these reasons, many people may appreciate help and advice on how to manage the health of their pets and avoid sickness that may lead to loss of life. People do not own dogs to enhance their health; rather, they value the relationship and the contribution their pet makes to their quality of life [19].
N. Image processing
According to Gulo, C. et al., medical image processing is widely used in diagnosing diseases, surgical intervention, disease follow-up, and treatment planning. A wide variety of image processing tasks can now use high-performance computing solutions to reduce the required run-time [20]. Medical image processing has contributed to significant medical advances, creating systems and techniques that support a more efficient way of diagnosing diseases. These techniques are based on images collected from different imaging modalities such as x-ray, microscopy, computed tomography, and 3D ultrasound computer tomography. Developments in image processing enable a non-invasive approach to the detection and classification of diseases [21].
According to T. Deserno of SPIE, the international society for optics and photonics, image processing works on "digital images", groups of individual pixels to which discrete brightness or color values are assigned. They can be objectively evaluated, processed, and made available in different places at the same time by means of appropriate protocols and communication networks, such as a picture archiving and communication system (PACS) and the digital imaging and communications in medicine (DICOM) protocol.
Clinicians use a systematic image processing scheme. First is image formation, where the clinician captures images to form a digital image matrix. Image visualization includes manipulation of the image matrix to produce an optimized output image. Next, image analysis includes all the processing steps used for quantitative measurements and abstract interpretation of medical images. Image management covers storing, transferring, and retrieving the image. Lastly, image enhancement denotes manual or automatic techniques [22].
A Convolutional Neural Network (CNN) is a Deep Learning algorithm which can take in an input image, assign importance (learnable weights and biases) to various aspects or objects in the image, and differentiate one from the other [23]. The pre-processing required in a ConvNet is much lower compared to other classification algorithms. While in primitive methods filters are hand-engineered, with enough training, ConvNets are able to learn these filters and characteristics. The architecture of a ConvNet is analogous to the connectivity pattern of neurons in the human brain and was inspired by the organization of the visual cortex. Individual neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. A collection of such fields overlaps to cover the entire visual area [24].
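To make the idea concrete, a minimal convolutional network for a three-class problem can be sketched in Keras. This is an illustration only, not the model developed in this thesis; the layer sizes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Stacked Conv2D layers learn local features, pooling shrinks the
# spatial map, and dense layers classify the pooled features.
model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Conv2D(16, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(3, activation='softmax'),  # 3 disease classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The learned convolution filters play the role of the hand-engineered filters used in primitive methods.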
P. MobileNet V2 Model/Architecture
Figure 9. MobileNetV2
The MobileNetV2 architecture is based on an inverted residual structure, where the input and output of the residual block are thin bottleneck layers, opposite to traditional residual models, which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer. Additionally, its authors found that it is important to remove non-linearities in the narrow layers in order to maintain representational power [25]. They demonstrate that this improves performance and provide an intuition that led to this design. Their approach also allows decoupling of the input/output domains from the expressiveness of the transformation, which provides a convenient framework for further analysis. They measured performance on Imagenet classification, COCO object detection, and VOC image segmentation, evaluating the trade-offs between accuracy, the number of operations measured by multiply-adds (MAdds), and the number of parameters [26].
CHAPTER 3
METHODOLOGY
The conceptual framework in Figure 10 shows that the dog's information and an image of the dog's anus are needed by the system. Collected data are then processed and labeled. The generated information from image classification, the canine profile, and the image file are then sent to the cloud. The image label and confidence are returned to the user device.
Figure 11. Process Flowchart
Figure 11 shows the flowchart of the whole system. The user first inputs the canine profile and image. If an image is selected, the image is scaled, converted to floating point, and normalized to smaller values. The application then checks whether the user device has internet connectivity. If it does, the image data and canine profile are sent to the server and the online/server identifier is used; if not, the offline identifier is used. The generated results are then sent back to the user device or application.
The following libraries and frameworks are used for creating custom models for image classification. The criteria for selecting the tools are based on the recommendations of Google Colaboratory, and each tool must support learning on the Graphics Processing Unit (GPU).
Keras
Keras is an open-source library used for machine learning and deep learning (neural networks). It has high-level building blocks for creating machine learning and deep learning models. The Keras library also supports CPU and GPU acceleration. It is compatible with Tensorflow, meaning Tensorflow can serve as the low-level backend while Keras provides the high-level neural network API.
Tensorflow
Tensorflow was originally proprietary Google software for numerical computation and large-scale machine learning, but it was released as open source in 2015. Like Keras, it can run on both CPU and GPU. Google also developed dedicated acceleration hardware for the Tensorflow framework called the Tensor Processing Unit (TPU). TPUs reduce the time needed to train large and complex models from weeks to just hours [27].
Albumentation
Albumentations is a Python library for image augmentation supporting various image transform operations. It is faster than the built-in augmentation in Tensorflow.
Flask
Flask is a web framework written in Python. A framework "is a code library that makes a developer's life easier when building reliable, scalable, and maintainable web applications" by providing reusable code or extensions for common operations [28]. Flask was chosen as the backend web framework for the server because it is easy to implement and has plenty of learning material available.
Apache2
The Apache2 Web Server is the most widely used web server on Linux systems. A web server accepts client requests and sends responses to those requests. Apache2 was selected because it is lightweight, suits small websites, and the server will be deployed on a Raspberry Pi board.
Android Studio
3.4 Hardware
Google Colaboratory
It is a Jupyter notebook environment where you can write and execute Python in your browser. It also offers free access to a GPU and is highly integrated with Google Drive. A free-tier account is limited to an Intel Xeon CPU, an Nvidia Tesla K80 GPU, 12 GB of RAM, and 30 GB of storage. All training is run in the Google Colaboratory environment.
Raspberry Pi 3 Model B+
This micro-computer was used as the host for the server. It runs Ubuntu Server 20.04 and is configured with Flask and Apache2 as the web stack. It is also configured with Tensorflow and Keras for online identification/classification.
Tensorflow Lite
Tensorflow Lite enables developers to retrain and deploy machine learning models optimized and converted for use on mobile devices, including smartphones (Android/iOS). Here it is configured as the offline identifier/classifier.
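For illustration, a trained Keras model can be converted to a Tensorflow Lite flatbuffer roughly as follows. The tiny stand-in model and the quantization flag are assumptions, not the thesis' exact conversion script.

```python
import tensorflow as tf

# A tiny stand-in for the trained classifier (3 output classes).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3, activation='softmax'),
])

# Convert the model to a Tensorflow Lite flatbuffer for on-device use.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # weight quantization
tflite_model = converter.convert()

# The resulting bytes are saved and bundled with the Android app.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
```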
Figure 12. System Block Diagram
The block diagram in Figure 12 shows a high-level overview of the developed system. At the top layer, an Android phone acts as the user input device; it also serves as the offline identifier/classifier. The bottom layer shows the Raspberry Pi configured to run web services, using Flask as the backend web framework. The Flask web server handles all requests coming from the user device, including image classification, updating the database (PostgreSQL), and uploading the image file to cloud storage (Amazon S3).
3.6 Application
The application is developed with Android Studio and the Kotlin programming language. The complete code is shown in the Appendix.
Figure 13. Graphical User Interface (left) Image Input Option (right) Canine Profile
Figure 13 shows the GUI of the application. The user can either select a picture from their
gallery or use the device camera to capture the image. It then prompts the user to input details of
their dog.
The GUI will display the selected/captured image and results after image classification.
Training consists of three steps: gathering images for the dataset, training the model, and evaluating the model. The difference in performance or accuracy should indicate whether one dataset is better than another. Section 3.7.2 shows how to artificially increase the dataset using image augmentation, and section 3.7.3 shows the training of pre-trained networks and model fine-tuning.
3.7.1 Dataset
Datasets were constructed using images provided by the veterinarian, past research, and Google Images. The dataset has three classes, namely Anal Sac Disease, Rectal Prolapse, and Atresia Ani. In an image classification model, the larger the dataset, the more likely the model is to perform well. It is difficult to obtain training images, especially in the medical field, since there are many legal restrictions on working with healthcare data. The recommended number of images per class is at least 1000 [29]. With a limited dataset, a total of 37 images per class were gathered, which is not enough for a deep learning model to perform well. To overcome this limitation, the dataset can be artificially inflated using image augmentation [30], as shown in section 3.7.2.
Format Conversion
Before training, the dataset must first be processed. Neural networks cannot process raw data; images must be read and decoded into floating-point values. Tensorflow Keras requires the data to be input as float32, then normalized to smaller values between 0 and 1. To normalize, each pixel is divided by 255. For illustration, the following code is used:
import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def convert_img_to_arr(file_path_list):
    # Read each image, resize it to the network input size,
    # and decode it into a numeric array.
    arr = []
    for file_path in file_path_list:
        img = load_img(file_path, target_size=(224, 224))
        img = img_to_array(img)
        arr.append(img)
    return arr

X = np.array(convert_img_to_arr(X))
X = X.astype('float32') / 255  # normalize each pixel to [0, 1]
Class labels are then converted to one-hot arrays. For a dataset of 3 classes, they can be represented as Anal Sac Disease [1, 0, 0], Rectal Prolapse [0, 1, 0], and Atresia Ani [0, 0, 1]. For illustration, with 3 classes a label prints as an array with a 1 at its class index:

>>> y = np.array(utils.to_categorical(y, 3))
>>> y[0]
array([0., 1., 0.], dtype=float32)
Dataset is split into testing and validation, this can be done manually or programmatically.
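One programmatic way to split is scikit-learn's train_test_split; the library choice, the 20% hold-out, and the stand-in arrays below are assumptions for illustration, not the thesis' exact procedure.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Stand-in arrays shaped like the real dataset: 37 images per class,
# 3 classes, 224x224 RGB images, with one-hot labels.
rng = np.random.default_rng(0)
X = rng.random((111, 224, 224, 3), dtype=np.float32)
y = np.eye(3, dtype=np.float32)[np.repeat(np.arange(3), 37)]

# Hold out 20% for validation, stratified so each split keeps the
# same class balance as the full dataset.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y.argmax(axis=1))
```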
3.7.2 Image Augmentation

The augmentation pipeline composes several random transforms, each applied with probability p:

import albumentations as A

AUGMENTATIONS = A.Compose([
    A.Flip(p=0.5),
    A.Transpose(p=0.5),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.ShiftScaleRotate(p=0.5),
    A.Rotate(limit=90, p=0.5),
    A.OneOf([A.OpticalDistortion(p=0.3),
             A.GridDistortion(p=0.1),
             A.IAAPiecewiseAffine(p=0.3)], p=0.2),
    A.OneOf([A.CLAHE(clip_limit=2),
             A.IAASharpen(),
             A.IAAEmboss(),
             A.RandomBrightnessContrast()], p=0.3),
    A.HueSaturationValue(p=0.3),
])
Figure 15. Image Augmentation: (top-left) Original Image, (top-right) CLAHE and Sharpen, (bottom-left) Median Blur, (bottom-right) Random Brightness and Contrast
Instead of loading all images at once, generators can be used to load batches of images and randomly augment/transform them from their respective training and validation directories. Using generators also prevents the machine from running out of RAM.
train_generator = train_datagen.flow(
X_train,y_train,
batch_size=batch_size,
shuffle=True
)
validation_generator = test_datagen.flow(
X_valid,y_valid,
batch_size=batch_size,
shuffle=False
)
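The snippets above assume train_datagen and test_datagen already exist. A minimal definition using Keras' ImageDataGenerator might look like the following sketch; the thesis may instead wrap the Albumentations pipeline, and the transforms and stand-in data here are assumptions.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment only the training stream; leave validation images untouched.
train_datagen = ImageDataGenerator(rotation_range=90,
                                   horizontal_flip=True,
                                   vertical_flip=True)
test_datagen = ImageDataGenerator()

# Stand-in data shaped like the real inputs.
X_train = np.random.rand(8, 224, 224, 3).astype('float32')
y_train = np.eye(3, dtype='float32')[np.random.randint(0, 3, size=8)]

train_generator = train_datagen.flow(X_train, y_train,
                                     batch_size=4, shuffle=True)
batch_x, batch_y = next(train_generator)  # one randomly augmented batch
```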
3.7.3 Model
The model used as a base is MobilenetV2, developed by Google. It is pre-trained on a large-scale dataset called Imagenet, consisting of 1 million images and 1000 classes. Compared to similar models such as VGG or Inception, it is more optimized for small embedded hardware and mobile devices. In comparison with its predecessor MobilenetV1, it uses fewer parameters, is 30-40% faster, and achieves higher accuracy [33].
Transfer Learning
With a small dataset, one good approach to increasing the performance of the model is transfer learning. Transfer learning reuses an existing pre-trained classifier as a starting point for new tasks like image classification; it can take advantage of the feature maps that the previous model has already learned. A typical convolutional neural network is composed of a convolutional base and a classifier. The main goal of the convolutional base is to generate features from the image [34]. The classifier, usually composed of fully connected layers, classifies the image based on the known features. Implementing transfer learning starts by removing the original classifier, then adding a custom classifier layer, and finally fine-tuning the model. There are three ways to fine-tune the model. The first is training the entire model; this strategy requires a large-scale dataset whose classes are dissimilar to the pre-trained model's. The second strategy is training some of the layers and leaving the other layers frozen; this is the best strategy when the dataset is small and its classes have little correlation with the pre-trained model's classes. Freezing some of the layers can prevent overfitting. The third strategy is freezing only the convolutional base; it suits datasets whose classes are nearly identical to the pre-trained model's, where one can simply alter the classifier's dense layer according to the number of classes in the dataset.
Figure 16. Fine-tuning Strategies
Since the gathered dataset is small and its classes differ from the pre-trained model's, the second strategy is the only viable option. The first step is to create the base model from the pre-trained convnet. MobileNetV2 is loaded with weights trained on ImageNet. The input_shape argument specifies the shape of the input images, which is (224, 224, 3). The include_top argument controls whether to include the last fully connected layer. The following code shows how to load the base model:
baseModel = applications.MobileNetV2(
    include_top=False,
    weights='imagenet',
    input_tensor=input_tensor,
    input_shape=(target_size, target_size, 3))
Then the new classifier, or fully connected head, is defined. Rather than flattening the convolutional output, global average pooling is applied before the fully connected head [35]. Dropout is added to prevent overfitting. Dense layers are added so that the model can learn more complex functions and classify with better results. The last layer is a Dense layer whose size equals the number of classes in the dataset, which is 3. Its softmax activation produces the final output of the network, ensuring that the predicted class probabilities form a distribution summing to 1.0.
headModel = baseModel.output
headModel = GlobalAveragePooling2D(
    input_shape=baseModel.output_shape[1:])(headModel)
headModel = Dropout(0.5)(headModel)
headModel = Dense(1024, activation='relu')(headModel)
headModel = Dense(1024, activation='relu')(headModel)
headModel = Dense(512, activation='relu')(headModel)
headModel = Dense(no_of_classes, activation='softmax')(headModel)
model = Model(inputs=baseModel.input, outputs=headModel)
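The effect of the final softmax activation can be illustrated in plain Python (a from-scratch sketch of the function, not the Keras implementation):

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1.0."""
    # Subtracting the max logit is a standard trick for numerical stability;
    # it does not change the resulting probabilities.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]
```

For logits such as [2.0, 1.0, 0.1], the outputs sum to 1.0 and the largest logit receives the largest probability, which is why the index of the maximum output can be read off as the predicted class.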
Continuing with fine-tuning, all the convolutional layers in the body of MobileNetV2 are frozen. The model summary is shown in Appendix Table 3.
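In Keras, freezing a layer amounts to setting its trainable attribute to False before compiling. A minimal sketch with a hypothetical stand-in class (StubLayer below is illustrative only, not a Keras class):

```python
class StubLayer:
    """Hypothetical stand-in for a Keras layer; only the trainable flag matters here."""
    def __init__(self, name):
        self.name = name
        self.trainable = True  # Keras layers are trainable by default

def freeze_conv_base(layers):
    """Strategy 2: freeze the convolutional base so only the new head is updated."""
    for layer in layers:
        layer.trainable = False
    return layers
```

With the real model this would be `for layer in baseModel.layers: layer.trainable = False`, run before `model.compile`.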
Before training, the model must first be compiled, which defines the loss function, optimizer, and metrics for prediction. This is done by calling the compile function on the model.
model.compile(
    loss='categorical_crossentropy',
    optimizer=optimizers.Adam(lr=1e-4),
    metrics=['accuracy']
)
Categorical cross-entropy is set as the loss function because it suits multi-class classification. Based on the experiments published by Shao-an Lu (2017), Adam achieves the lowest training error/loss, though not the lowest validation error/loss [36]; that work provides an experimental comparison of the available optimizers. As a rule of thumb, the larger the network, the lower the learning rate should be set; otherwise, the model will overfit. The accuracy metric is later used to plot the confusion matrix and the accuracy-versus-loss curves.
After setting all the variables and data, model.fit can be called to train the model. It is used with the following parameters: train_generator supplies the training data; steps_per_epoch is the number of batches drawn per epoch; epochs is the number of epochs; validation_data is set to validation_generator for the testing data; and validation_steps is the number of calls made to validation_generator per epoch.
history = model.fit(
    train_generator,
    steps_per_epoch=len(X_train) // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=len(X_valid) // batch_size,
)
model.fit returns a History object that records all metrics collected during training: training loss, validation loss, training accuracy, and validation accuracy. These metrics help evaluate whether the model is underfitting or overfitting.
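As a hypothetical illustration of how these records might be inspected (in Keras, history.history is a plain dictionary mapping each metric name to a per-epoch list), a rough heuristic could compare the final losses; the threshold below is an arbitrary illustrative choice, not a standard value:

```python
def diagnose_fit(history, gap=0.15):
    """Crude check on training records.

    A final validation loss far above the training loss suggests overfitting;
    both losses remaining high suggests underfitting; otherwise the curves
    are taken to be converging.
    """
    train_loss = history["loss"][-1]
    val_loss = history["val_loss"][-1]
    if val_loss - train_loss > gap:
        return "possible overfitting"
    if train_loss > 1.0 and val_loss > 1.0:
        return "possible underfitting"
    return "converging"
```

In practice this would be called as `diagnose_fit(history.history)` after training, alongside plotting the curves.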
After training, the model can be saved by calling the save function. This model will be used for online classification.
model.save('model.h5')
The model is also converted to a format compatible with smartphones, using the TensorFlow Lite library. This converted model will be used for offline classification.

converter = tensorflow.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
To fully assess and visualize the performance of the model, a confusion matrix can be used. A confusion matrix, or error matrix, is a tool for counting correct and incorrect predictions. One axis of a confusion matrix is the class that the model predicted (predicted class), and the other axis is the actual label (true class).
Figure 17. Confusion Matrix for Binary Classification
There are four possible outcomes of a single prediction in binary classification: true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). A true positive is when the model correctly predicts the positive class, and a true negative is when it correctly predicts the negative class. A false positive is when the model incorrectly predicts the positive class, and a false negative is when it incorrectly predicts the negative class. From these counts, the following performance metrics can be computed. The overall accuracy of the model is defined as the fraction of the total samples that were correctly classified by the classifier.
accuracy = (TP + TN) / (TP + TN + FP + FN)    (3.1)
Precision tells what fraction of the samples predicted as positive were actually positive: the true positives (TP) among all positive predictions (TP + FP).
precision = TP / (TP + FP)    (3.2)
Recall, also known as the True Positive Rate (TPR), sensitivity, or probability of detection, is the fraction of all positive samples that were correctly predicted as positive by the classifier.
recall = TP / (TP + FN)    (3.3)
The F1-score combines precision and recall into a single metric: it is the harmonic mean of precision and recall.
F1-score = 2 × (precision × recall) / (precision + recall) = 2TP / (2TP + FP + FN)    (3.4)

The micro-average F1-score pools the true positives, false positives, and false negatives of all classes before computing F1:

Micro-F1 = 2 × TotalTP / (2 × TotalTP + TotalFP + TotalFN)    (3.5)
The macro-average F1-score is the unweighted mean of the per-class F1-scores. For illustration, with the number of classes set to 3:

Macro-F1 = (F1_class1 + F1_class2 + F1_class3) / 3

The last one is the weighted-average F1-score, computed from the F1-score and the total number of samples of each class:

Weighted-F1 = (n1 × F1_class1 + n2 × F1_class2 + n3 × F1_class3) / (n1 + n2 + n3)

where n1, n2, and n3 are the number of samples in each class.
In deep learning, the ideal model has a precision of 1 and a recall of 1, which means an F1-score of 100 percent. The F1-score is a more useful measurement than accuracy in a multi-class matrix with uneven class distribution.
Using the scikit-learn package, the confusion matrix and these metrics can be computed programmatically.
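The same arithmetic can also be sketched by hand (a plain-Python sketch of the precision, recall, and F1 formulas above, not the scikit-learn implementation), using the confusion matrix from Table 4 with rows as true classes and columns as predicted classes:

```python
def per_class_metrics(cm):
    """Precision, recall, and F1 for each class of a square confusion matrix
    (rows = true class, columns = predicted class)."""
    n = len(cm)
    metrics = []
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # predicted c, actually another class
        fn = sum(cm[c]) - tp                        # actually c, predicted another class
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics.append((precision, recall, f1))
    return metrics

# Confusion matrix from Table 4: Atresia Ani, Rectal Prolapse, Anal Sac Disease.
cm = [[37, 0, 0],
      [0, 43, 2],
      [6, 0, 45]]
```

For the Rectal Prolapse row this gives precision 1.00, recall about 0.96, and F1 about 0.98, consistent with the classification report.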
After retraining MobileNetV2, the model achieved an accuracy of 93.9394% and a loss of 0.2686 after 100 epochs/iterations. The training log is shown in Appendix A. Results are presented below.
Figure 18. Training Accuracy
Figures 18 and 19 illustrate the progress of training accuracy and training loss. Both training accuracy and validation accuracy increase while training loss and validation loss decrease. The two curves converge, indicating that the model is neither overfitting nor underfitting and generalizes well.
Figure 20. Image Classification
Figure 20 shows image classification using the trained model. Green labels indicate correct class
prediction while red labels indicate incorrect class prediction.
Table 4. Confusion Matrix

                                          PREDICTED CLASS
                           ATRESIA ANI   RECTAL PROLAPSE   ANAL SAC DISEASE   TOTAL
ACTUAL  ATRESIA ANI             37              0                  0            37
CLASS   RECTAL PROLAPSE          0             43                  2            45
        ANAL SAC DISEASE         6              0                 45            51
As seen in the confusion matrix (Table 4), the model correctly predicted all 37 images from class Atresia Ani, 43 out of 45 images from class Rectal Prolapse, and 45 out of 51 images from class Anal Sac Disease. The model tends to misclassify Anal Sac Disease as Atresia Ani.
Table 5. Classification Report

CLASS               PRECISION   RECALL   F1-SCORE   SUPPORT
ATRESIA ANI            0.86      1.00      0.93        37
RECTAL PROLAPSE        1.00      0.96      0.98        45
ANAL SAC DISEASE       0.88      0.96      0.92        51
MICRO-AVERAGE          0.91      0.97      0.94       133
Using the data from the confusion matrix (Table 4), the metrics shown in the classification report (Table 5) can be computed. The model performed best on the class Rectal Prolapse, with an F1-score of 98%. It can also be observed that the Micro F1, Macro F1, and Weighted F1 are all equal at 94%.
The accuracy of the system was verified by comparing the system's results with the observations of a veterinarian. Both outputs are recorded in the tables below.
Table 6. Controlled detection of Anal Sac Disease

[Table of 30 internet-sourced rectal images with Anal Sac Disease: each row lists the image, the veterinarian's diagnosis with its severity grade (mild/moderate/severe), the app's result with its confidence score, and the dog's breed and sex. The app's result agreed with the veterinarian's diagnosis in all 30 cases.]
Table 6 tabulates the controlled detection of anal sac disease in dogs. Thirty images were gathered from the internet and tested on the actual prototype to determine the accuracy of the device in detecting ASD.
Table 7. Controlled detection of Atresia Ani

[Table of 30 internet-sourced rectal images with Atresia Ani: each row lists the image, the veterinarian's diagnosis with its severity grade (mild/moderate/severe), the app's result with its confidence score, and the dog's breed and sex. The app's result agreed with the veterinarian's diagnosis in all 30 cases.]
Table 7 tabulates the controlled detection of Atresia Ani in dogs. Thirty images were gathered from the internet and tested on the actual prototype to determine the accuracy of the device in detecting Atresia Ani.
Table 8. Controlled detection of Rectal Prolapse

[Table of 30 internet-sourced rectal images with Rectal Prolapse: each row lists the image, the veterinarian's diagnosis with its severity grade (mild/moderate/severe), the app's result with its confidence score, and the dog's breed and sex. The app's result agreed with the veterinarian's diagnosis in all 30 cases.]
Table 8 tabulates the controlled detection of rectal prolapse in dogs. Thirty images were gathered from the internet and tested on the actual prototype to determine the accuracy of the device in detecting Rectal Prolapse.
                       ANAL SAC DISEASE   ATRESIA ANI   RECTAL PROLAPSE
Correctly identified        30/30            30/30           30/30
Figure 21. Signed document of the veterinarian's diagnosis on the controlled testing
Figure 21 shows the signed document of the veterinarian's diagnosis on the controlled testing done by the researchers. It verifies the results and the accuracy of the prototype in detecting Anal Sac Disease, Atresia Ani, and Rectal Prolapse.
Chapter 4
CONCLUSION
For the testing of Anal Sac Disease, the model correctly identified 30 out of 30 samples with average confidence, for mild, moderate, and severe cases, of 96.01%, 99.66%, and 97.76% respectively. The values are close to one another, indicating that the difference in severity of the disease did not have a significant impact on the pattern recognition.
For the testing of Atresia Ani, the model correctly identified 30 out of 30 samples with average confidence, for mild, moderate, and severe cases, of 97.48%, 98.87%, and 97.65% respectively. These values are likewise close to one another, indicating that the difference in severity did not have a significant impact on the pattern recognition.
For the testing of Rectal Prolapse, the model correctly identified 30 out of 30 samples with average confidence, for mild, moderate, and severe cases, of 99.94%, 98.92%, and 99.12% respectively. These values are likewise close to one another, indicating that the difference in severity did not have a significant impact on the pattern recognition.
It can be concluded from these results that the detection of Anal Sac Disease, Atresia Ani, and Rectal Prolapse using image processing and a convolutional neural network was accomplished successfully.
Chapter 5
RECOMMENDATIONS
After completing the research entitled “Detection of Anal Sac Disease, Atresia Ani and
Rectal Prolapse for Canis lupus familiaris using Image Processing and Convolutional Neural
Network”, the researchers recommend the following for further study:
The researchers recommend adding more diseases for identification, such as skin diseases, which are commonly encountered by veterinarians.
It is also recommended to separate the stages or severity of the diseases when training the model, such as early, middle, and advanced stages. This can help avoid incorrect detection, since the early and advanced stages of some diseases may have distinct features.
Finally, the researchers recommend using a higher-capacity server that supports parallel computing, such as Amazon or Google cloud computing services. This would allow multiple users to use the cloud computing service at the same time without the server crashing.
REFERENCES:
[1] Turek M.M. & Withrow S.J. 2013. Cancer of the gastrointestinal tract, p.423-431. In:
Withrow S.J. & Vail D.M. (Eds), Small Animal Clinical Oncology. 5th ed. W.B. Saunders
Elsevier, St Louis.
[2] Lobrigados, C. A., & Modena, D. F. (2018). Malignant perianal tumors in dogs: The
contribution of computed tomography for staging and surgical planning1. 2241-2243.
doi:10.1590/1678-5150-PVB-5822
[3] Jung, Y., Jeong, E., & Park, S. et al (2016). Diagnostic imaging features of normal anal sacs
in dogs and cats. 17(3), 331-335.
[4] Dennis R, Penderis J. Radiology corner—anal sac gasappearing as an osteolytic pelvic lesion.
Vet Radiol Ultrasound 2002, 43, 552-553.
[5] Pappalardo E, Martino PA, Noli C. Macroscopic, cytological and bacteriological evaluation
of anal sac content in normal dogs and in dogs with selected dermatological diseases. Vet
Dermatol 2002, 13, 315-322.
[6] Radostits, O. M., Tyler, J. D. & Mayhew, I. (2000) The clinical examination. In Houston, D.M. & Radostits, O.M., eds. Veterinary Clinical Examination and Diagnosis, 91-124. 1st edn. London: Elsevier.
[7] Misseghers BS, Binnington AG, Mathews KA: Clinical observations of the treatment of
canine Atresia Anis with topical tacrolimus in 10 dogs. Can Vet J 41: 623-627, 2000.
[9] S. Mukaratirwaa and T. Chituraa (2007). Canine subclinical prostatic disease: histological
prevalence and validity of digital rectal examination as a screening test.
[10] Britton PD. Fine needle aspiration or core biopsy. The Breast 1999;8:1-4
[11] Veterinary Manual. (2019). Disorders of the Rectum and Anus in Dogs - Dog Owners -
VeterinaryManual.
[12] Rubin, S. I. (2019). Anal Sac Disease - Digestive System. Retrieved from
https://www.msdvetmanual.com/digestive-system/diseases-of-the-rectum-and-anus/anal-sac-
disease
[13] Rubin, S. I. (2019). Atresia Ani -Disorders of the Rectum and Anus in Dogs - Dog Owners.
(2019).
[14] WagWalking. (2019). Rectal Prolapse in Dogs - Symptoms, Causes, Diagnosis, Treatment,
Recovery, Management, Cost.
[15] petMD, L. (2019). Protrusion of the Rectum and Anus in Dogs | petMD. [online]
Petmd.com.
[16] American Pet Association.(2012). Pet statistics. Retrieved June 30, 2019, from
www.apapets.com
[17] Statistics: Philippine Canine Club, Inc. (PHILIPPINES). (2018, July). Retrieved June 30,
2019, from http://www.fci.be/en/statistics/ByNco.aspx?iso=PH
[18] McNicholas, J., Gilbey, A., & Rennie, A. (2012). Pet ownership and human health: A brief
review of evidence and issues. Pet Ownership and Human Health: A Brief Review of Evidence
and Issues, 331, 1252-1254.
[19] McNicholas, J., Gilbey, A., & Rennie, A. (2012). Pet ownership and human health: A brief
review of evidence and issues. Pet Ownership and Human Health: A Brief Review of Evidence
and Issues, 331, 1252-1254.
[20] Gulo, C., Sementille, A., Tavares, J. (November 2012). Techniques of Medical Images
Processing and Analysis accelerated by High-Performance Computing: A Systematic Literature
Review.
[21] N. B. Linsangan, J. J. Adtoon and J. L. Torres, "Geometric Analysis of Skin Lesion for Skin
Cancer Using Image Processing," 2018 IEEE 10th International Conference on Humanoid,
Nanotechnology, Information Technology,Communication and Control, Environment and
Management (HNICEM), Baguio City, Philippines, 2018, pp. 1-5, doi:
10.1109/HNICEM.2018.8666296
[22] T. Deserno, "Medical Image Processing" in Optipedia, SPIE Press, Bellingham, WA (2009).
[23] J. Zhou, D. Xiao and M. Zhang, "Feature Correlation Loss in Convolutional Neural
Networks for Image Classification," 2019 IEEE 3rd Information Technology, Networking,
Electronic and Automation Control Conference (ITNEC), Chengdu, China, 2019, pp. 219-223,
doi: 10.1109/ITNEC.2019.8729534.
[24] Saha, S. (2018, December 17). A Comprehensive Guide to Convolutional Neural Networks -
the ELI5 way. Retrieved October 08, 2020, from https://towardsdatascience.com/a-comprehensive-guide-
to-convolutional-neural-networks-the-eli5-way-3bd2b1164a53
[25] B. Dan, X. Sun and L. Liu, "Diseases and Pests Identification of Lycium Barbarum Using SE-MobileNet V2 Algorithm," 2019 12th International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China, 2019, pp. 121-125, doi: 10.1109/ISCID.2019.00034.
[27] Sato, K., Young, C., & Patterson, D. (2017, April 17). An in-depth look at Google’s first
Tensor Processing Unit (TPU). Google Cloud. https://cloud.google.com/blog/products/gcp/an-in-
depth-look-at-googles-first-tensor-processing-unit-tpu
[28] Kleppmann, Martin. (2017). Designing Data-Intensive Applications: The Big Ideas Behind
Reliable, Scalable, and Maintainable Systems: Newton, Massachusetts, O’Reilly Media, Inc.
[29] Rosebrock, Adrian. (2017). Deep Learning for Computer Vision with Python — Starter
Bundle. PyImageSearch.
[30] Shorten, C., & Khoshgoftaar, T. M. (2019). A survey on Image Data Augmentation for
Deep Learning. Journal of Big Data, 6(1), 7–10. https://doi.org/10.1186/s40537-019-0197-0
[31] Nazaré, T. S., Costa, P. G. B., Contato, W. A., & Ponti, M. (2018, January 1). Deep
Convolutional Neural Networks and Noisy Images. ResearchGate.
https://www.researchgate.net/publication/322915518_Deep_Convolutional_Neural_Networks_a
nd_Noisy_Image
[32] Buslaev, A., Iglovikov, V. I., Khvedchenya, E., Parinov, A., Druzhinin, M., & Kalinin, A.
A. (2020). Albumentations: Fast and Flexible Image Augmentations. Information, 11(2), 125.
https://doi.org/10.3390/info11020125
[33] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., & Chen, L. (2019, March).
MobileNetV2: Inverted Residuals and Linear Bottlenecks. The IEEE Conference on Computer
Vision and Pattern Recognition (CVPR). https://arxiv.org/abs/1801.04381
[35] Lin, T.-Y., Goyal, P., Girshick, R., He, K., & Dollar, P. (2020). Focal Loss for Dense Object
Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), 318–327.
https://doi.org/10.1109/tpami.2018.2858826
[36] Lu, Shao-an. (2017, December 29). SGD > Adam?? Which One Is The Best Optimizer:
Dogs-VS-Cats Toy Experiment. SALu. https://shaoanlu.wordpress.com/2017/05/29/sgd-all-
which-one-is-the-best-optimizer-dogs-vs-cats-toy-experiment/
[37] https://uk.bellfor.info/anal-glands-in-dogs-treating-and-preventing-complaints
[38] https://www.dvm360.com/view/surgery-stat-diagnosis-and-surgical-management-atresia-ani-small-animals
[39] https://www.vetstream.com/treat/felis/technique/rectum-prolapse-surgical-management
APPENDIX