Professional Documents
Culture Documents
Deep Learning for Breast Cancer Image Classification using CNN and ResNet
Student’s Name
Institutional Affiliation
Instructor’s Name
Abstract
Deep learning models have revolutionized breast cancer research and clinical practice, from screening through image-guided procedures. These models can integrate various factors, such as genetic markers and tumor characteristics, to support clinical decision-making.
Image-guided procedures, including biopsies and tumor resections, benefit from real-time image
analysis, improving precision and reducing invasiveness. However, several challenges remain in
harnessing the full potential of deep learning for breast cancer. The availability of diverse and
well-curated datasets is crucial for training accurate and reliable models; privacy concerns and bias in the underlying data also remain open issues.
In this study, the performance of two models, CNN and ResNet, was evaluated
for breast cancer image classification. The CNN model achieved an accuracy of 87.78%,
precision of 0.8783, F1-score of 0.8770, and recall of 0.8761. On the other hand, the ResNet model
achieved an accuracy of 82.52%, precision of 0.8323, F1-score of 0.8261, and recall of 0.8231.
These results demonstrate the effectiveness of deep learning models, particularly CNN, in
accurately differentiating between benign and malignant breast tumors using mammography or
histopathology images. By automating this process, clinicians can benefit from faster diagnosis,
reduced subjectivity, and improved accuracy. Future research should address data-related
challenges and ethical considerations to further enhance the capabilities of deep learning models in breast cancer care.
Table of Contents
Abstract
Deep Learning for Breast Cancer Image Classification using CNN and ResNet
Objectives
Literature Review
Methodology
BreakHis
Image processing
Architectural Components
Transfer learning
Residual Learning
Experimental Results
Discussion
Conclusion
References
Deep Learning for Breast Cancer Image Classification using CNN and ResNet
Millions of people around the world are affected by breast cancer, a prevalent condition
with the potential to endanger one's life. A promising method for improving the detection and
diagnosis of breast cancer is deep learning, a subfield of artificial intelligence. Researchers and
healthcare professionals want to make breast cancer detection more accurate and efficient by
utilizing the capabilities of deep neural networks. This will allow for personalized treatment
plans and early intervention. Deep learning algorithms are well-suited for analyzing the
numerous medical imaging and genomic data associated with breast cancer because they excel at
extracting intricate patterns and features from large and complex datasets (Key et al., 2019).
These models can learn to recognize subtle abnormalities that indicate breast cancer by training on large collections of annotated images. As a complement to the expertise of pathologists and radiologists, this automated analysis may speed
up diagnosis and improve accuracy. In addition, deep learning methods make it possible to
combine multi-modal data, such as imaging data, molecular profiles, or clinical data, for a more comprehensive assessment.
Worldwide and in the United Kingdom (UK), breast cancer is a major issue, particularly among
women. In the UK, it is the most frequently diagnosed cancer in women. Based on information
from previous years, there were approximately 54,800 new cases of breast cancer reported in
2018. Tragically, the majority of women who die from cancer in the UK also have breast cancer.
In 2017, this disease was the cause of approximately 11,400 deaths. However, there is a glimmer
of hope in the fact that survival rates have shown a positive trend over time. The most recent data
indicate that approximately 85% of women diagnosed with breast cancer in England and Wales
live for at least five years. Worldwide, breast cancer is a major health problem (Sun et al., 2017).
Breast cancer is the most frequently diagnosed cancer among women worldwide. According to estimates, there
will be over 2.3 million new cases of breast cancer worldwide in 2020. Sadly, it is also
responsible for a significant number of cancer-related deaths. In the same year, approximately 685,000 women (roughly half the population of Hawaii) died from the disease. Even so, early detection of breast cancer can fundamentally improve patients' treatment outcomes.
The advantages of early detection cover a wide range of aspects of the disease and its treatment.
The availability of a wider range of treatment options is one of the primary advantages of early
diagnosis. When breast cancer is detected early, when the tumors are smaller and more localized,
more conservative treatment options are available. Instead of removing the breasts completely, breast-conserving surgery may be possible (2019). In addition, early-stage cancers may benefit from less aggressive treatments like hormone
therapy or targeted therapies. Early-stage breast cancers may not require intensive treatments like
chemotherapy or radiation because of their smaller tumor sizes and limited spread. This can
improve the patient's overall quality of life both during and after treatment, as well as reduce the
likelihood of negative side effects. Individuals are given the opportunity to make well-informed
decisions based on their preferences and medical considerations when they are given the
opportunity to select from a wider range of treatment options. This increases the likelihood of
successful outcomes.
Radiologists often face significant challenges in diagnosing breast cancer and they may miss
approximately 20% of cases. Early breast cancer diagnosis is further complicated by the difficulty of interpreting mammograms, and to assist radiologists, computer-aided detection (CAD) systems have been developed (Abdelrahman et al., 2021). Convolutional Neural Networks (CNNs) have recently been shown to have potential for mammogram classification, and in several studies they have delivered strong classification performance.
This study aims to contrast various CNN architectures by focusing on the classification of benign and malignant tumors in whole mammogram images. With an accurate whole-image classifier, several advantages can be realized. First, manually annotating mammograms would be significantly easier with a powerful classifier for the whole image. Second,
accurate classification can help reduce the number of patient callbacks by assisting in the
identification of potential cancer cases during the initial evaluation (Alanazi et al., 2021). Time
can be saved, patient anxiety can be reduced, and healthcare resources can be utilized to their full
potential. Thirdly, a dependable classifier can help cut down on false positives and unnecessary
follow-up tests, easing the burden on patients and cutting costs associated with healthcare.
Objectives
1. Collection of Images from Mammograms: The study will make use of a publicly accessible
online database that includes images from both cancerous and noncancerous mammograms. For
purposes of training and evaluation, this database will provide a diverse selection of images.
2. The creation of a mechanism for automatic detection: From the collected mammogram images,
a system for automatic detection will be developed to classify breast cancer masses as well as
noncancerous masses. To accurately classify and extract meaningful features, this mechanism will rely on deep learning models.
3. CNN Model Comparison: The CNN model with the highest level of accuracy in classifying
benign and malignant tumors will be chosen by comparing the results of various CNN models.
Researchers will be able to choose the best CNN architecture for mammogram classification on this basis.
The study aims to use the power of deep learning to advance the diagnosis of breast cancer by
conducting this comparative analysis of CNN architectures. This study's findings may lead to
improved patient outcomes, earlier detection, fewer false positives, and improved accuracy and efficiency in diagnosis.
Literature Review
Several researchers have studied how to distinguish benign from malignant breast cancer using machine learning. These studies have examined how well computer-aided methods can support diagnosis, with particular attention to their accuracy and speed.
D. A. Ragab and colleagues proposed a computer-aided system to distinguish benign from malignant mammogram images. They used two segmentation methods to divide the data: one based on the region of interest and another based on specific criteria such as color or size. To improve accuracy, they replaced the network's final layer with a support vector machine classifier, and they used a Deep Convolutional Neural Network to extract discriminative features. On the DDSM dataset, the first segmentation method outperformed the second because it gave more precise results.
Charan and his team (2018) introduced an improved neural network model called DenseNet-II. They addressed overfitting, which arose because the network had many parameters and was highly complex, by adding a new layer before the first three layers of the network. The mammogram images were augmented with additional data to avoid errors caused by limited training data, and the initial block of the DenseNet model was replaced with an Inception module. The DenseNet-II network outperformed DenseNet, AlexNet, VGGNet, and GoogleNet, achieving 94.55% accuracy, a 95.6% rate of detecting true positives, and a 95.3% rate of avoiding false positives. The dataset for this study comprised 2042 mammogram images.
Dhungel and colleagues used a combination of a CNN and a random forest to detect masses in breast X-rays. A model first proposed candidate regions that appeared abnormal, and cascaded classifiers were then used to sort them. Their approach correctly identified breast abnormalities in 85 to 90% of cases across two datasets.
Phu T. Nguyen and colleagues used the BreakHis dataset to determine whether breast cancer samples are benign or malignant. They built a model that resized the images before classifying them. The BreakHis collection contains 7909 breast cancer images, of which 2480 are benign and 5429 are malignant. The benign class is divided into four subtypes, as is the malignant class.
Shen Li and his team developed a deep learning program that detects signs of breast cancer in mammogram images. In their tests, they showed that using digitized film mammograms from the CBIS-DDSM set improved performance across different mammography machines. They assembled the top four models, using VGG16 and ResNet50 as classifiers. The score for a single model was 0.88, and combining four models increased it to 0.91. On the INbreast database, a single model achieved 95% accuracy, while an ensemble of four models reached 98%. The sensitivity and specificity improved as well.
Richa Agarwal and her team created a patch-based program that detects masses in breast X-rays by analyzing small sections of the image at a time. The model was first trained to recognize breast images using the CBIS-DDSM collection and then evaluated on the INbreast database. This transfer of knowledge from the CBIS-DDSM dataset to the INbreast dataset worked well, with a success rate of 98%.
Levy and his team trained computer models using techniques such as transfer learning, data augmentation, and data preprocessing. They trained three different CNN architectures: a simple baseline network, AlexNet, and GoogleNet. In tests on the DDSM data, they found that GoogleNet performed best, with a score of 0.92.
Li, Huynh, and their team compared three ways to determine whether a growth is benign or malignant. The first approach used a pre-trained network to extract tumor features and a separate classifier to make the decision. The second analyzed different sections of the tumor and classified them with another model. The third combined the different approaches into an ensemble. The ensemble classifier did substantially better than the other two, with a score of 0.86.
Methodology
The data used for this research were breast ultrasound images collected at baseline.
dataset included images from 600 female patients aged between 25 and 75 years old. The dataset
consisted of 780 images, with an average image size of 500x500 pixels. The images were in
PNG format. Each image had a corresponding ground truth image, providing the original
representation. The images were categorized into three classes: normal, benign, and malignant,
allowing for classification and analysis of different breast conditions (Al-Dhabyani, W., Gomaa, M., Khaled, H., & Fahmy, A. (2020). Dataset of breast ultrasound images. Data in Brief, 28, 104863. https://doi.org/10.1016/j.dib.2019.104863).
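Datasets like this are commonly distributed as one folder per class, with each ultrasound image accompanied by a ground-truth mask file. A small indexing helper can illustrate how such a layout might be turned into labeled samples; the folder names and the `_mask` filename suffix are assumptions about the layout, not code from this study:

```python
from pathlib import Path

CLASSES = ["normal", "benign", "malignant"]  # the three classes in the dataset

def index_dataset(root):
    """Collect (image_path, label) pairs from class-named subfolders,
    skipping the ground-truth mask files that accompany each image."""
    samples = []
    for label, cls in enumerate(CLASSES):
        for img in sorted(Path(root, cls).glob("*.png")):
            if not img.stem.endswith("_mask"):  # assumed mask naming convention
                samples.append((img, label))
    return samples
```

The integer labels produced here (0 = normal, 1 = benign, 2 = malignant) can then feed directly into a training pipeline.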
Image processing
The dataset was divided into a training set and a test set for performance comparison of different
methods. The training set was used to train the models and fine-tune their parameters, while the
test set was used to evaluate their predictive performance. However, to address the class
imbalance, the dataset was balanced using the SMOTEtomek algorithm. This algorithm
combines the SMOTE technique, which generates synthetic samples for the minority class, with
Tomek links, which remove overlapping data points. The SMOTE technique helps increase the
representation of the minority class by creating synthetic samples through interpolation. Tomek
links identify pairs of samples from different classes that are close to each other and remove
them to enhance class separation. Applying the SMOTEtomek algorithm resulted in a balanced
dataset where all classes have a similar number of samples. This process ensures that the
classifiers can effectively learn from both the minority and majority classes and minimizes biases
towards the majority class, leading to more reliable evaluation of classifiers and more accurate predictions.
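The interpolation step at the heart of SMOTE can be illustrated with a minimal NumPy sketch. This is a toy stand-in for the full SMOTETomek pipeline provided by the imbalanced-learn library, not the implementation used in this study:

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by interpolating
    between a randomly chosen sample and one of its k nearest neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to all samples (index 0 after sorting is i itself)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()  # random point on the segment between the two samples
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)
```

Each synthetic point lies on the line segment between two real minority samples, which is why SMOTE increases minority representation without simply duplicating images; the Tomek-link step would then prune borderline pairs.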
For the purpose of this research, two popular deep learning models, CNN (Convolutional Neural
Network) and ResNet (Residual Neural Network), will be used. During the training process,
evaluation metrics such as F1-score, precision, recall rate, and AUC will be used to assess the
performance of each model. These metrics provide insights into the accuracy, precision, recall,
and discriminatory power of the models in classifying breast cancer images. By utilizing these
evaluation metrics, the effectiveness of the models in accurately identifying and classifying these images can be assessed. Poor classification performance can result from training CNNs directly on raw images. Before feeding the raw images into the deep learning classifiers, it is therefore necessary to convert them into a standardized format.
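The evaluation metrics named above can be computed directly with scikit-learn; the labels and scores below are illustrative values, not results from this study:

```python
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score

# Illustrative ground truth and model outputs (0 = benign, 1 = malignant)
y_true   = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred   = [0, 1, 1, 1, 0, 0, 1, 0]
y_scores = [0.1, 0.6, 0.8, 0.9, 0.4, 0.2, 0.7, 0.3]  # predicted probabilities

precision = precision_score(y_true, y_pred)  # TP / (TP + FP)
recall    = recall_score(y_true, y_pred)     # TP / (TP + FN)
f1        = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
auc       = roc_auc_score(y_true, y_scores)  # threshold-free discriminatory power
```

Note that AUC is computed from the continuous scores rather than the hard predictions, which is what makes it a measure of the model's ranking quality.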
When compared to the format that our network classifiers require, the medical dataset in our
scenario contains images of varying shapes and sizes. The images must be resized or rescaled to
the desired dimensions in order to guarantee compatibility with the network classifier's input
size. There are two primary reasons why this process of resizing or scaling is essential
(Mohapatra et al., 2022). First, it makes sure that all of the images that are fed into the CNN have
the same size, which is a requirement. For efficient processing, CNNs typically anticipate a fixed
size for the input. We create a uniform format that the CNN model can effectively process by resizing every image to the same dimensions.
Second, image rescaling and resizing can expedite the training process. During training, dealing
with large, high-resolution images can significantly increase the computational load. We can
reduce the amount of computation required without sacrificing the essential characteristics
required for accurate classification by scaling or resizing the images to a smaller size while
maintaining the essential details (Ragab et al., 2019). The CNN model can then be trained and evaluated more efficiently.
There is more to image preprocessing than just resizing and scaling. It might also include
normalization, cropping, and filtering, among other methods. In order to eliminate any potential
biases and ensure that all of the image's pixel values fall within a predetermined range,
normalization is frequently used. By focusing solely on the area of interest, cropping can be
utilized to eliminate irrelevant parts of the image. Smoothing and sharpening are two examples of filtering operations.
Resizing the image is an essential step in preparing it to meet the required input size for CNN
classifiers. The size of the collected input images may not always match the size of the CNN
model's input. Resizing the images ensures compatibility and optimal performance for the
classifiers. The images can be resized in two common ways: scaling and cropping. Cropping
involves removing the border pixels from the images in order to obtain the desired input size.
However, if this method is used, important features and patterns around the image borders may
be lost. In contrast, according to Sánchez-Cauce et al. (2021), scaling down the images preserves the content as a whole. Scaling is therefore regarded as a reasonable choice because it minimizes the risk of losing important patterns or features, even though fine details may be slightly altered.
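The two resizing strategies can be contrasted in a short NumPy sketch. Nearest-neighbour scaling is used here for simplicity; real pipelines would typically use bilinear interpolation from an imaging library:

```python
import numpy as np

def center_crop(img, size):
    """Keep only the central size x size window, discarding border pixels."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def nn_scale(img, size):
    """Nearest-neighbour rescale: sample a size x size grid over the whole image."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows[:, None], cols]
```

Cropping throws away everything outside the central window, while scaling samples the full image, which is exactly the trade-off discussed above.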
The classification of breast cancer images has been revolutionized by Convolutional Neural
Networks (CNNs). In this section, we'll go over the architecture of CNNs and how they are used for this task.
Architectural Components
CNN architectures are built on the foundation of convolutional layers. They extract local features
from the input images and record the spatial relationships between pixels by applying filters or
kernels. Using element-wise multiplications and summations, each filter traverses the input
image to generate feature maps. CNNs are capable of recognizing various patterns at various
scales, such as edges, textures, and shapes, by employing multiple filters. The feature maps'
spatial dimensions are determined by the size and stride of the filters.
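The sliding-filter operation described above can be written out explicitly for a single 2-D channel; this is a minimal sketch, and real frameworks vectorize the loops and add padding options:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid-mode 2-D convolution: slide the kernel over the image and
    take an element-wise product-and-sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            patch = image[y * stride:y * stride + kh, x * stride:x * stride + kw]
            out[y, x] = np.sum(patch * kernel)  # element-wise multiply, then sum
    return out
```

A simple difference filter such as [[1, -1]] responds strongly wherever neighbouring pixel intensities change, which is how early convolutional layers pick up edges; the output size follows from the kernel size and stride exactly as the text describes.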
Dropout is a regularization technique for CNNs that randomly resets a portion of the neurons' outputs during training to zero. This encourages the learning of more robust and generalizable representations and helps the network avoid over-reliance on any individual neuron.
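This technique, known as dropout, can be sketched in its common "inverted" form, where survivors are scaled at training time so that no change is needed at inference:

```python
import numpy as np

def dropout(activations, p, rng):
    """Zero each activation with probability p during training and scale
    the survivors by 1/(1-p) so the expected output is unchanged."""
    keep = rng.random(activations.shape) >= p
    return activations * keep / (1.0 - p)
```

Because the expected value of each activation is preserved, the same network can be used unmodified at test time with dropout simply switched off.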
Backpropagation is an algorithm used to train CNNs that finds the model's parameter gradients
in relation to a given loss function. During the optimization process, the gradients are then used
to update the network's weights and biases. When training CNNs, optimization algorithms like
Stochastic Gradient Descent (SGD) and its variants like Adam and RMSprop are frequently
utilized. These optimizers iteratively update the parameters in order to reduce the loss function and improve the model's classification performance.
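The core update rule these optimizers share is the gradient step; plain SGD, for instance, moves each parameter against its gradient, shown here on a toy quadratic loss rather than a real network:

```python
import numpy as np

def sgd_step(weights, grads, lr=0.1):
    """One stochastic gradient descent update: w <- w - lr * dL/dw."""
    return weights - lr * grads

# Toy example: minimize L(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
for _ in range(50):
    w = sgd_step(w, 2 * w)
```

Adam and RMSprop refine this same step with per-parameter adaptive scaling of the learning rate, but the direction still comes from the backpropagated gradients.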
CNNs make use of images' spatial hierarchies. The early layers capture low-level features, like edges and corners, while deeper layers learn more abstract, higher-level representations. The network is able to capture both local and global contextual information thanks to
the increasing receptive fields of the feature maps produced by various layers (Wang et al.,
2019). The model is able to recognize subtle patterns and variations that are associated with
various stages and types of cancer thanks to this hierarchical representation learning, which is especially valuable in medical imaging.
When little labeled data is available, transfer learning can be a highly effective deep learning strategy. For breast cancer image classification, it works by reusing pre-trained models to speed up training and improve results. This section discusses the use of transfer learning for identifying breast cancer in images.
Transfer learning
Utilizing transfer learning, researchers can utilize the knowledge of the pre-trained model to
significantly shorten the amount of time required for the target dataset's training (Charan et al.,
2018). Greater Adaptability: pre-training on a vast dataset provides rich, generic feature representations, making it possible for the model to better adapt to the target dataset even with fewer training samples.
Managing a Lack of Data: Annotation datasets for breast cancer image classification tasks
frequently have a limited number of options because it is difficult to obtain and label medical
images. With transfer learning, researchers can use the knowledge from other image
classification tasks to improve the model's performance with less training data.
Extraction of Meaningful Features: The pre-trained models can learn high-level features that can
help identify breast cancer because they are trained on a variety of images from different
domains. These features capture discriminative patterns and can often be reused for the target task.
However, when using transfer learning to classify breast cancer images, there are a few things to
keep in mind. Similarity of Datasets: transfer learning will not be successful unless there is a high
degree of similarity between the target dataset and the dataset that was previously trained.
Despite the fact that low-level features learned by early layers are typically transferable, higher-level features are more task-specific and may not transfer directly to breast cancer images (Guan et al., 2017). Consequently, selecting a model that has already been pre-trained on a dataset with similar visual properties, such as mammographic images, is crucial.
Overfitting: transfer learning is prone to overfitting when the target dataset is small. Overfitting happens when the model fails to generalize adequately to new, unseen data after fitting the training set too closely. Regularization, dropout, and early stopping methods can reduce overfitting and improve generalization.
Domain Adaptation: breast cancer images exhibit distinctive characteristics and variations because of factors like imaging techniques, patient demographics, and anatomical variations. To account for these differences, domain adaptation techniques may be required to align the learned representations with the target dataset.
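The feature-extraction flavour of transfer learning discussed in this section can be sketched as follows. A fixed random projection stands in for a genuinely pre-trained network, purely to illustrate the workflow of freezing the extractor and training only a lightweight head; in practice the extractor would be a network such as ResNet pre-trained on a large image corpus:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a frozen, pre-trained feature extractor.
W_frozen = rng.normal(size=(64, 16))
def extract_features(images):                 # images: (n, 64) flattened inputs
    return np.maximum(images @ W_frozen, 0)   # frozen projection + ReLU

# Small labelled target dataset (two synthetic "classes").
X = np.vstack([rng.normal(0.0, 1.0, (40, 64)),
               rng.normal(1.0, 1.0, (40, 64))])
y = np.array([0] * 40 + [1] * 40)

# Only the lightweight head is trained; the extractor stays fixed.
head = LogisticRegression(max_iter=1000).fit(extract_features(X), y)
accuracy = head.score(extract_features(X), y)
```

Because only the small head is optimized, far fewer labelled samples are needed, which is exactly the benefit claimed for transfer learning on limited medical datasets.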
Residual Learning
In the field of deep learning, Residual Networks (ResNets) have emerged as a significant
architectural innovation specifically designed to address the issue of vanishing gradients during
training. It is difficult to effectively train very deep models in traditional deep neural networks
because the gradients tend to decrease exponentially as the network gets deeper. By introducing
residual connections, ResNets address this issue by enabling the model to learn residual mappings instead of direct mappings.
The observation that, in an ideal situation, a deeper network should not perform worse than a
shallower network gave rise to the idea of residual learning. Deeper networks, on the other hand,
have a harder time effectively capturing and propagating gradients because of the vanishing
gradient problem, which reduces performance. ResNets solve this issue by introducing skip
connections, also known as shortcut connections. These connections allow information to flow
directly from one layer to a layer below it, effectively bypassing some layers.
ResNets' skip connections let the network learn residual mappings rather than trying to learn the
underlying mapping directly. This suggests that rather than explicitly learning the entire
transformation, the model can concentrate on learning the residual between the input and the
desired output (Masud et al., 2021). ResNets solve the vanishing gradient problem and make it
possible to train networks that are significantly deeper by propagating residual information through the skip connections. Formally, a residual block computes y = F(x) + x, where x is the layer's input, F(x) is the residual mapping the layer learned, and y is
the layer's output. The network can learn the input's residuals using this formulation, capturing
the additional data required to achieve the desired output. The network is able to learn the
residual mapping rather than the entire transformation by starting from scratch thanks to the skip
connections, which guarantee that the initial input is directly added to the residual mapping.
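The identity shortcut is simple enough to state directly in code; below is a schematic residual block, with an arbitrary function standing in for the convolutional layers that compute F:

```python
import numpy as np

def residual_block(x, F):
    """Compute y = F(x) + x: the block learns only the residual F,
    while the skip connection passes the input through unchanged."""
    return F(x) + x

# If the residual function outputs zero, the block reduces to an identity:
x = np.array([1.0, 2.0, 3.0])
y = residual_block(x, lambda v: np.zeros_like(v))
```

This is why adding residual blocks should not hurt a network: the weights can always drive F toward zero, recovering the behaviour of the shallower model.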
ResNets have been applied successfully across computer vision tasks, including the classification of breast cancer images. Researchers have been able to train and
optimize extremely deep networks, surpassing the limitations of previous architectures, thanks to
the introduction of residual connections. ResNets have achieved cutting-edge results in a variety
of image classification challenges, such as the recognition of fine-grained visual categories and
the detection of objects in complex scenes, by facilitating the effective training of deep models.
ResNets have demonstrated significant promise for enhancing the models' accuracy and
robustness in the classification of breast cancer images. The inherent ability of ResNets to learn
residual mappings and capture complex representations has been particularly helpful in
identifying subtle patterns and variations associated with various types of breast cancer and
stages (Ragab et al., 2019). In medical imaging, where accurate abnormality identification and
classification are necessary for early diagnosis and treatment planning, this capability is
essential.
Pre-trained ResNet models have been fine-tuned or new ResNet models have been trained using
large datasets to use ResNets in breast cancer image classification. These models are able to
capture both low-level and high-level features by utilizing the residual connections, enabling a
more in-depth examination of mammographic images. Deep models can be optimized even with
limited training data thanks to the network's residual connections' ability to effectively propagate
gradients.
In addition, ResNets have demonstrated promising results in resolving other issues related to the
classification of breast cancer images, such as data imbalance. Biased models that favor the
majority class may result from imbalanced datasets in which one class has a significant
advantage over the others. By learning more discriminative features from minority classes,
ResNets can help mitigate the effects of data imbalance and improve overall classification performance.
As a result, Residual Networks (ResNets) can be argued to have revolutionized deep learning by
addressing the issue of vanishing gradients during training. The model is now able to learn
residual mappings thanks to the addition of residual connections, which results in the capture of
more intricate representations and a significant boost to the efficiency of deep networks (Wang et
al. 2019). ResNets have demonstrated superior performance in the classification of breast cancer
images, enabling the detection of subtle patterns and variations associated with various types and
stages of breast cancer. Because of their inherent capabilities, ResNets are useful tools in the
creation of computer-aided diagnosis systems that are accurate and dependable, facilitating earlier and more reliable diagnosis.
Experimental Results
The performance results for the two trained models, CNN and ResNet, are presented below.
The CNN model shows an initial train accuracy of 42.73% and train loss of 14.0964, while the
test accuracy starts at 57.71% with a test loss of 1.9955. However, as the training progresses over
10 epochs, the CNN model significantly improves its performance. By the 10th epoch, the train
accuracy reaches 97.55% with a train loss of 0.1025, and the test accuracy increases to 87.78%
with a test loss of 0.3457. In comparison, the ResNet model starts with a train accuracy of
58.92% and train loss of 0.9147, while the test accuracy begins at 53.01% with a test loss of
9.0642. Despite a relatively higher initial accuracy, the ResNet model faces challenges during
training. However, by the 10th epoch, the ResNet model improves its performance with a train
accuracy of 89.08% and a train loss of 0.2798. The test accuracy also increases to 82.52% with a test loss of 0.4053.
Discussion
Analyzing the evaluation metrics, the CNN model achieves an F1-score of 0.8770, a precision of 0.8783, and a recall of 0.8761. On the other hand, the ResNet model achieves an F1-score of 0.8261, a precision of 0.8323, and a recall of 0.8231.
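As a quick consistency check, the reported F1-scores can be recomputed as the harmonic mean of each model's precision and recall (with averaged per-class metrics the identity holds only approximately):

```python
def f1(precision, recall):
    """F1-score: the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

cnn_f1 = f1(0.8783, 0.8761)     # ~0.8772, matching the reported 0.8770
resnet_f1 = f1(0.8323, 0.8231)  # ~0.8277, close to the reported 0.8261
```

The small gap on the ResNet side is expected when precision, recall, and F1 are each macro-averaged over classes before being reported.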
Based on these results, it can be concluded that the CNN model outperforms the ResNet model
in terms of accuracy, loss reduction, and evaluation metrics. The CNN model shows a higher
accuracy of 87.78% compared to 82.52% for the ResNet model. Additionally, the CNN model
achieves a lower final train loss of 0.1025 compared to 0.2798 for the ResNet model. The test
loss is also lower for the CNN model (0.3457) compared to the ResNet model (0.4053).
Furthermore, the evaluation metrics, including F1-score, precision, and recall, are slightly higher for the CNN model.
Conclusion
In conclusion, this research shows how deep learning methods can be used to support breast cancer diagnosis, provided there is access to a large dataset with sufficient image variability for training. However, the network models are less accurate as a result
of the limited availability of imaging data. Future research in this area should investigate the use
of various neural network classifiers and their combination with other network algorithms to
overcome this limitation. The accuracy of the classifiers could be improved using this strategy. In
addition, it would be fascinating to observe and analyze these classifiers' performance in greater
detail. This could involve evaluating how well they work with larger and more diverse datasets,
which would make it possible to conduct a thorough evaluation of their capabilities. The
classification models' accuracy could be improved and new insights gained from this
investigation.
In addition, alternative validation splits could be explored to evaluate their impact on the models' performance. The dataset would be divided into
subsets for training, validation, and testing under this strategy, and then the model's accuracy
would be examined in light of how these subsets affect accuracy. By carrying out such
experiments, researchers can gain a deeper comprehension of the ways in which various methods affect model performance.
References
Abdelrahman et al. (2021). https://www.sciencedirect.com/science/article/pii/S0010482521000421
Alanazi, S. A., Kamruzzaman, M. M., Islam Sarker, M. N., Alruwaili, M., Alhwaiti, Y.,
Alshammari, N., & Siddiqi, M. H. (2021). Boosting breast cancer detection using
https://www.hindawi.com/journals/jhe/2021/5528622/
Charan, S., Khan, M. J., & Khurshid, K. (2018, March). Breast cancer detection in mammograms
https://www.sciencedirect.com/science/article/pii/S0895611118302349
Guan, S., & Loew, M. (2017, October). Breast cancer detection using transfer learning in
Guan, S., & Loew, M. (2019). Breast cancer detection using synthetic mammograms from
Medical-Imaging/volume-6/issue-3/031411/Breast-cancer-detection-using-synthetic-
mammograms-from-generative-adversarial-networks/10.1117/1.JMI.6.3.031411.short
Key, T. J., Verkasalo, P. K., & Banks, E. (2019). Epidemiology of breast cancer. The Lancet
Masud, M., Hossain, M. S., Alhumyani, H., Alshamrani, S. S., Cheikhrouhou, O., Ibrahim, S., ...
& Gupta, B. B. (2021). Pre-trained convolutional neural networks for breast cancer
Mohapatra, S., Muduly, S., Mohanty, S., Ravindra, J. V. R., & Mohanty, S. N. (2022). Evaluation
of deep learning models for detecting breast cancer using histopathological mammograms
https://www.sciencedirect.com/science/article/pii/S2666412722000162
Ragab, D. A., Sharkas, M., Marshall, S., & Ren, J. (2019). Breast cancer detection using deep
https://www.hindawi.com/journals/jhe/2021/5528622/
Sánchez-Cauce, R., Pérez-Martín, J., & Luque, M. (2021). Multi-input convolutional neural
network for breast cancer detection using thermal images and clinical data. Computer
https://www.sciencedirect.com/science/article/pii/S0169260721001206
Sun, W., Tseng, T. L. B., Zhang, J., & Qian, W. (2017). Enhancing deep convolutional neural
network scheme for breast cancer diagnosis with unlabeled data. Computerized Medical
https://www.sciencedirect.com/science/article/pii/S0895611116300696
Sun, Y. S., Zhao, Z., Yang, Z. N., Xu, F., Lu, H. J., Zhu, Z. Y., ... & Zhu, H. P. (2017). Risk
Sutherland, R. L., & Musgrove, E. A. (2019). Cyclins and breast cancer. Journal of mammary
Tan, Y. J., Sim, K. S., & Ting, F. F. (2017, November). Breast cancer detection using
https://ieeexplore.ieee.org/abstract/document/8308076/
Wang, Z., Li, M., Wang, H., Jiang, H., Yao, Y., Zhang, H., & Xin, J. (2019). Breast cancer
detection using extreme learning machine based on feature fusion with CNN deep
https://ieeexplore.ieee.org/abstract/document/8613773/