You are on page 1of 23

Abstract— Lung cancer is responsible for frequently causing of cancer-related death worldwide.

25
percent of cancer deaths belongs to lung cancer related. The identification and segmentation of lung
nodules is a crucial step in early stage lung cancer before it progress to a tumor .the computer tomography
scan reports are very crucial in understanding the lung nodule. Lung regions do not possess uniformity in
shape and densities in the pulmonary structures. This make classification and segmentation of lung cancer
even more challenging. The classified and segmented lung cancer report aids in better decision making for
physician.

In this research work, we attempt to use a novel single stage convolutional neural network that can
classify and segment the lung cancer nodules. This research work focuses on delivering high performance
convolutional neural network models with high accuracy. Multiple hyper parameter optimization techniques
are part of building robust model for predicting the lung cancer modules. A detailed comparative study was
performed by using different model evolution metrics as criteria. The proposed model simulation achieved
a high classification accuracy, ROC-AUC Area, F1 Score, Recall and Precision compared to other existing
model till date.

Keywords—Convolutional Neural Networks, Ensemble Models, Bootstrap Aggregation, One Shot


Detection , Instance Segmentation, Semantic Segmentation , Receiver Operating Curve, F1-Score.

I. INTRODUCTION

A cancer that starts and grows in around regions of lungs is called as lung cancer. Smoking contributes to
60 percent lung cancer cases. Another major reason for causing lung cancer is exposure to the strong and
harmful chemicals for long time.

Coughing, Chest pain, Wheezing, weight loss are all symptoms of the cancer. These symptoms are
generally not seen in the initial stage of the cancer. They are triggered at the later stage of the cancer
tumor growth. The growth of the tumor can be exponential or linear based on the type of the human body
and resistance with in the body.

Different traditional and model methods has been in practice for curing the cancer cell. Surgery,
Chemotherapy, radiation therapy, Targeted medication, immunotherapy are the some options that are
used for treatment. A physician prescribe the number of cycles that required for complete curing of the
cancer cells growth.

Adenocarcinoma:

Adenocarcinoma is a cancer that starts in the cells of the glands that line your organs (glandular epithelial
cells). Mucus, digestive juices, and other liquids are secreted by these cells. Tumors might arise if your
glandular cells start to alter or expand out of control. Some tumors discovered in glandular cells are benign
and do not cause malignancy. Adenomas are the medical term for these tumors. Some tumors that grow in
glandular cells, however, are malignant. Adenocarcinomas are the medical term for these tumors.

Large cell lung carcinoma (LCLC)

is one of numerous types of non-small cell lung cancer (NSCLC) (NSCLC). LCLC is a type of lung that
starts the outer regions of the lungs and expands to the inner regions of the lung. It has rapid growth
compared to other types of cancer cells.

The most prevalent early indications of LCLC are shortness of breath and tiredness. NSCLC accounts
for more than 85% of all lung malignancies, while LCLC accounts for only 10%.

Large cell lung carcinomas, also known as large cell lung cancers, are named from the enormous size of
the cancer cells that can be seen when a tumor is inspected under a microscope (as opposed to the tumor
size, which also tends to be quite large)

Squamous cell carcinoma

Skin cancer that develops in the squamous cells that make up the middle and outer layers of the skin is
known as squamous cell carcinoma. Skin cancer called squamous cell carcinoma is normally not life-
threatening, but it can be aggressive. Squamous cell carcinoma of the skin, if left untreated, can become
large and spread to other parts of the body, posing major health risks. The majority of skin squamous cell
carcinomas are caused by extended exposure to ultraviolet (UV) radiation, which can come from the sun,
tanning beds, or lamps. Squamous cell carcinoma of the skin and other types of skin cancer can be
reduced by avoiding UV exposure.

By replicating intelligent behavior and critical thinking in a human-like fashion, artificial intelligence (AI) can
be utilized to analyses and comprehend complex medical data. The application of artificial intelligence (AI)
in imaging diagnostics reduces radiologists' workload and enhances the sensitivity of lung cancer
screening, cutting lung cancer morbidity and death. In this work, we aimed to evaluate the role of artificial
intelligence in lung cancer screening, as well as the future potential and usefulness of AI in nodule
classification.
II. LITERATURE SURVEY

Dai Y, Yan S, Zheng B, Song C (2018) Incorporating automatically learned pulmonary nodule attributes
into a convolutional eural network to improve accuracy of benign-malignant nodule classification. Phys
Med Biol 63:245004

Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019) Lung nodule classification using deep local-global
networks. Int J Comput Assist Radiol Surg 14:1815

Lung Nodule Classification on Computed Tomography Images Using Fractalnet Amrita Naik1 · Damodar
Reddy Edla1 · Venkatanareshbabu Kuppili1

Prediction of mediastinal lymph node metastasis based on 18F-FDG PET/CT imaging using support vector
machine in non-small cell lung cancer Guotao Yin1,2 & Yingchao Song3 & Xiaofeng Li1,2 & Lei Zhu1,2 &
Qian Su1,2 & Dong Dai1,2 & Wengui Xu1,2

Recognizing lung cancer and stages using a self-developed electronic nose system Ke Chen a,e, Lei Liu a,
Bo Nie a, Binchun Lu b, Lidan Fu b, Zichun He c, Wang Li d,*, Xitian Pi a, Hongying Liu a,*

Xie Y, Xia Y, Zhang J, Song Y, Feng D, Fulham M, Cai W (2019) Knowledge-based collaborative deep
learning for benign-malignant lung nodule classification on chest ct. IEEE Trans Med Imag 38:991–1004

Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019) Lung nodule classification using deep local-global
networks. Int J Comput Assist Radiol Surg 14:1815

Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019) Lung nodule classification using deep local-global
networks. Int J Comput Assist Radiol Surg 14:1815

Lung nodule classification using combination of CNN, second and higher order texture features Amrita
Naik∗ 4 and Damodar Reddy Edla

A hybrid algorithm for lung cancer classification using SVM and Neural Networks Pankaj Nangliaa, Sumit
Kumarb,∗, Aparna N. Mahajana, Paramjit Singha, Davinder Ratheea

Semantic segmentation and detection ofmediastinal lymph nodes and anatomical structures in CT data
for lung cancer staging David Bouget1 · Arve Jørgensen1,3 · Gabriel Kiss1,5 · Haakon Olav Leira1,4 ·
Thomas Langø2

Using Double Convolution Neural Network for Lung Cancer Stage Detection Goran Jakimovski 1,* and
Danco Davcev 2

Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep convolutional
neural network (CNN) and recurrent neural network (RNN) Dipanjan Moitra* and Rakesh Kr. Mandal

A manifold learning regularization approach to enhance 3D CT image-based lung nodule classification


Ying Ren1 · Min-Yu Tsai2,3,4 · Liyuan Chen3,4 · Jing Wang3,4 · Shulong Li3 · Yufei Liu5,6 · Xun Jia2,3,4 ·
henyang Shen2,3,
Shape and margin-aware lung nodule classification in low-dose CT images via soft activation mapping
Yiming Lei a , Yukun Tian a , Hongming Shan b , Junping Zhang a , ∗, Ge Wang b , Mannudeep K. Kalra c

Multi-Task Learning for Lung Nodule Classification on Chest CT PENGHUA ZHAI1, 2, YALING TAO1, 2,
HAO CHEN1, 2, TING CAI1, 2, AND JINPENG LI1, 2 1HwaMei Hospital,

A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT

Screening and staging for non-small cell lung cancer by serum laser Raman spectroscopy HongWang
a,⁎,1, Shaohong Zhang b,1, LimeiWana, Hong Suna, Jie Tan a, Qiucheng Sub

The 8th lung cancer TNM classification and clinical staging system: review of the changes and clinical
implications Wanyin Lim1, Carole A. Ridge1, Andrew G. Nicholson2, Saeed Mirsadraee1

III. PROPOSED METHODOLOGY

A. Data
The research work uses gold standard dataset called LIDC/IDRI database. The dataset is licensed under
creative commons Attribution 3.0 Unposted License. The dataset is also referred as LUNA 16 and is made
available for public reference

Details of the Dataset

A total of 888 CT Scan images were used for analysis. The LIDC/IDRI dataset consist of the annotation
files in PASCAL VOC format. The dataset has three kinds of Lung Cancer Images.

Each radiologist has labelled the CT Scan images using different techniques and ensemble model is used
to decide the final size of the nodules. Based on the size, the nodules are classified into 2 categories.

 Nodule < 3 mm : Minor Nodules


 Nodules >=3 mm : Medium to Major Nodules
Figure 1:
B. Pre Processing

The term "image pre-processing" refers to the most basic processes performed on photographs. If entropy
is an information metric, these strategies reduce rather than increase image information content. Pre-
processing is used to improve picture data by removing undesired distortions or increasing specific visual
features that are relevant for further processing and analysis. Multiplication and addition with a constant
are two typical point procedures. Two common point methods are multiplication and addition with a
constant.

g(x)=αf(x)+β (1)

The parameters >0 and bias are known as the gain and bias parameters, and they are sometimes
referred to as the contrast and brightness controls, respectively.

A non-linear change to individual pixel values is known as gamma correction. While we used linear
operations on individual pixels in picture normalisation, such as scalar multiplication and
addition/subtraction, gamma correction uses a non-linear operation on the source image pixels,
which might change the saturation of the image.

(2)

Here the relation between output image and gamma is nonlinear.

Histogram equalization

Histogram equalization is a well-known contrast enhancement technique because of its ability to work on
virtually any sort of image. By adjusting the image's intensity histogram to the appropriate shape,
histogram equalization is a complex method for tailoring an image's dynamic range and contrast. Unlike
contrast stretching, histogram modelling operators can map between pixel intensity levels in input and
output images using non-linear and non-monotonic transfer functions.

The normalized histogram.


P(n)  = number of pixels with intensity n/ total number of pixels. Sigmoid function is a continuous nonlinear
activation function. The name, sigmoid, is obtained from the fact that the function is “S” shaped.
Statisticians call this function the logistic function.

(3)

(4)

g (x,y) is Enhanced pixel value

c is Contrast factor

th is Threshold value

fs(x,y) is original image

Geometric Transformations

Color and brightness/contrast are dealt with in the previous ways in this article. The placements of pixels in
an image are changed with geometric transformation, but the colors remain the same.

Geometric transformations allow for the removal of geometric distortion that happens during the acquisition
of a picture. Rotation, scaling, and distortion (or distortion!) of images are common Geometric
transformation procedures.

Geometric transformations have two essential steps:

1. Spatial change of the image's actual pixel rearrangement

2. Grey level interpolation, in which the modified image is assigned grey levels.
Transformations:

Scaling: is the process of resizing the images, it involves correcting and extrapolating each pixels to either
higher or lower values and project the new pixels to a linear space. This is one of the effective
transformation that can speed the process of convolutional neural network

(5)

Figure 2:

Translation: It is the process of shifting the object location. Identify the region of interest and project the
ROI to a new dimension space to create the new images

(6)
Rotation: Rotation of images is done by keeping the center of the image as constant . With a center of the
images and try to rotate around all the four corners based on the angel specified. This leads to the shift the
dimensions of the lapelled data.

(7)

Figure 3:
Shearing: The transformation talks about dividing the pixels horizontally and identify the region of interest
and project the sheared coordinates to a newer dimension space. This leads to creating a better ROI for
project of the data.

(8)
Figure 4:

Affine Transformation: It is usual to combine the scaling factors, shearing factors, and rotation angle into
one matrix rather than defining each separately. As a result, Affine Transformation is defined as the
combination of the four transformations.

(9)

Figure 5:
Perspective Transformation: To have a better understanding of the essential information, alter the
perspective of a given image or video. The points on the image from which you wish to obtain information
by shifting the perspective must be specified here.
C. Model Architecture

The schematic below shows the Meta architecture of the network. There are three major blocks  in
architecture, namely:

Figure 6:
Backbone Network:  It is the basic architecture of the convolutional neural network. The base
architecture is creates multiple convolutions at different scales.0.25, 0.125, 0.06 are pre dominant
scales used in processing the convolutional neural network. The scaled augmentations are also
called as P2, P3, P4, P5
Input (torch. Tensor): Input is an image that is part of the training dataset. A random image is
selected to start the training process of the image. The below parameters are calculated at the
beginning of the training

B: Batch Size

H: Image Height

W: Image Width

Input Channels: BGR (Blue, Green, Red)

BGR pattern is mandatory for the processing image, if we process in RGB pattern the accuracy drop
can be observed. If we have an image of dimension  n x n with a padding p, which is convolved with
filter of dimension f x f with stride of s, then the output dimensions can be determined using the below
general equation:

(!0)

Output (Dictionary of torch. Tensor):

Each convolution exhibits a feature map. By default the channel size is equal to 256 for all scales. The
strides are 4, 8, 16, 32, and 64 outputs respectively. The output format is (B, C, H/S, W/S) feature maps C
and S stands for channel size and stride

B: Channel Size

C: Channel Stride

H/S: Height, Stride

W/S: Width, Stride

ResNet: Multiple bottleneck blocks are part of the stem block. Stem block and stages are the block
structures. The stem block is a down samples the input image size. The twice by 7*7 convolution with
stride equal to 2. The image size will be processed to using max pool of the stem block.
Bottleneck Block: The bottleneck block has three convolutional neural network. Each convolution will
be processed using different kernel sizes. The kernels will be processed using 1*1, 3*3, 5*5 convolutions
using random functions like Gaussian distributions

There are three types of bottleneck blocks as shown

 Stride=1, w/o short cut convolution


 Stride=1, with shortcut convolution
 Stride=2, with shortcut convolution

Figure 7:
Because FPN was initially shown as a pyramid with the stem layer at the bottom, this layer is dubbed
'lateral' convolution (it is rotated in this article). The lateral convolution layers return 256-ch feature
maps by combining features from the res2-res5 stages with varying channel counts. An output
convolution layer comprises 33 convolutions with the same number of channels as the input
convolution layer. H/32, The FPN forward process begins with the res5 output (see Fig. 6).

The 256-channel feature map is given to the output convolution after travelling via the lateral
convolution, and it is registered to the results list as P5 (1/32 scale).

The up-sampler (Interpolate with nearest neighbor receives the 256-channel feature map, which is
then appended to the res4 output (via lateral convolution). The output convolution is applied to the
resulting feature map, and the result tensor P4 is entered into the results list (1/16 scale). The
foregoing procedure (from up sampling to result insertion) is repeated three times, and the final result
list contains four tensors: P2 (1/4 scale), P3 (1/8), P4 (1/16), and P5 (1/32).

Last Level Max Pool: A max pooling layer with kernel size = 1 and stride = 2 is added to the final
block of the ResNet to create the P6 output. This layer simply down samples P5 (1/32 scale) features
to 1/64-scale features for inclusion in the result list.

ResNet has an identity shortcut that integrates the input and output features, which is utilized in (b),
(c). A shortcut convolution layer is utilized for the first block of a stage (res2-res5) to match the
number of input and output channels.

Convolution with down sampling and stride=2 (as in (c)): The feature map is down sampled by a
convolution layer with stride=2 in the first block of the res3, res4, and res5 stages. Because the input
and output channel numbers are not the same, a shortcut convolution with stride=2 is also employed.

Block Diagram of Architecture


IV. MODEL VALIDATION METRICS
Results and Discussion Model Training Parameters: Graph patterns are used to determine the training
progress. If we stop before the validation error trend shifts from falling to increasing order, we will end up
with an under fit model, and if we stop after that time, we will end up with an over fit model.

Precision, Recall, F1 Score

Positive predictive value is the ratio between true positive and total positive (actual as a base). It defines
the right predictions done for a particular disease

Precision (also known as positive predictive value) is the percentage of relevant examples fou. It nd
among the recovered instances, whereas recall (also known as sensitivity) is the percentage of relevant
instances found. As a result, relevance lies at the heart of both precision and memory.

The F-score, also known as the F1-score, is a metric for how accurate a model is on a given dataset. It's
used to assess binary classification systems that divide examples into 'positive' and 'negative' categories.

IOU Score

Figure 8:

Intersection over union is a key metric to determine the accuracy of the model. For each image we will

calculate the area that is occupied using labels and measure the area. The denominator will be used to as

the reference pixels and numerator will be predicted pixels.

ROC- AUC
The receiver operating curve , area under the curve is used to measure the goodness of them

model. X Axis has False Positive Rate, Y Axis has True Positive Rate. The Area Under the Curve

(AUC) - ROC curve is a performance statistic for classification issues at various threshold levels.

AUC represents the degree or measure of separability, whereas ROC is a probability curve. It

indicates how well the model can distinguish between classes. The AUC indicates how well the

model predicts 0 classes as 0 and 1 courses as 1. The higher the AUC, the better the model predicts

0 classes as 0 and 1 classes as 1. By analogy, the higher the AUC, the better the model

distinguishes between people who have the condition and those who do not.

The ROC curve is plotted with TPR against the FPR where TPR is on the y-axis and FPR is on the

x-axis.

Figure 9:

V. RESULTS AND DISCUSSION


Year of Author and Title Name Accuracy Sensitivity Specificity
Publication
2018 Dai Y, Yan S, Zheng B, Song C (2018) 0.83 0.81 0.83
Incorporating automatically learned pulmonary
nodule attributes into a convolutional eural
network to improve accuracy of benign-malignant
nodule classification. Phys Med Biol 63:245004

2018 Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M 0.87 0.87 0.94
(2019) Lung nodule classification using deep local-
global networks. Int J Comput Assist Radiol Surg
14:1815
2018 Lung Nodule Classification on Computed 0.82 0.9 0.96
Tomography Images Using Fractalnet Amrita
Naik1 · Damodar Reddy Edla1 ·
Venkatanareshbabu Kuppili1
2018 Prediction of mediastinal lymph node metastasis 0.72 0.86 0.91
based on 18F-FDG PET/CT imaging using support
vector machine in non-small cell lung cancer
Guotao Yin1,2 & Yingchao Song3 & Xiaofeng Li1,2
& Lei Zhu1,2 & Qian Su1,2 & Dong Dai1,2 &
Wengui Xu1,2
2018 Recognizing lung cancer and stages using a self- 0.89 0.82 0.81
developed electronic nose system Ke Chen a,e, Lei
Liu a, Bo Nie a, Binchun Lu b, Lidan Fu b, Zichun
He c, Wang Li d,*, Xitian Pi a, Hongying Liu a,*
Year of Author and Title Name Accurac Sensitivity Specificity
Publication y

Xie Y, Xia Y, Zhang J, Song Y, Feng D, Fulham M, Cai W


(2019) Knowledge-based collaborative deep learning
for benign-malignant lung nodule classification on
2019 chest ct. IEEE Trans Med Imag 38:991–1004 0.891 0.89 0.9
Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019)
Lung nodule classification using deep local-global
2019 networks. Int J Comput Assist Radiol Surg 14:1815 0.89 0.91 0.92
Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019)
Lung nodule classification using deep local-global
2019 networks. Int J Comput Assist Radiol Surg 14:1815 0.88 0.89 Not Given
Lung nodule classification using combination of CNN,
second and higher order texture features Amrita
2019 Naik∗ 4 and Damodar Reddy Edla 0.81 0.86 96.55
A hybrid algorithm for lung cancer classification using
SVM and Neural Networks Pankaj Nangliaa, Sumit
Kumarb,∗, Aparna N. Mahajana, Paramjit Singha,
2019 Davinder Ratheea 0.81 0.89 0.98
Semantic segmentation and detection ofmediastinal
lymph nodes and anatomical structures in CT data for
lung cancer staging David Bouget1 · Arve
Jørgensen1,3 · Gabriel Kiss1,5 · Haakon Olav Leira1,4 ·
2019 Thomas Langø2 0.88 0.75 0.72
Using Double Convolution Neural Network for Lung
Cancer Stage Detection Goran Jakimovski 1,* and
2019 Danco Davcev 2 NA 0.96 0.95
Automated AJCC (7th edition) staging of non-small
cell lung cancer (NSCLC) using deep convolutional
neural network (CNN) and recurrent neural network
2019 (RNN) Dipanjan Moitra* and Rakesh Kr. Mandal 0.86 0.87 0.92
Year of Author and Title Name Accuracy Sensitivity Specificity
Publication

A manifold learning regularization approach to


enhance 3D CT image-based lung nodule classification
Ying Ren1 · Min-Yu Tsai2,3,4 · Liyuan Chen3,4 · Jing
Wang3,4 · Shulong Li3 · Yufei Liu5,6 · Xun Jia2,3,4 ·
2020 henyang Shen2,3, 0.82 0.85 0.95
Shape and margin-aware lung nodule classification in
low-dose CT images via soft activation mapping Yiming
Lei a , Yukun Tian a , Hongming Shan b , Junping Zhang
2020 a , ∗, Ge Wang b , Mannudeep K. Kalra c 0.89 0.92 0.94
Multi-Task Learning for Lung Nodule Classification on
Chest CT PENGHUA ZHAI1, 2, YALING TAO1, 2, HAO
CHEN1, 2, TING CAI1, 2, AND JINPENG LI1, 2 1HwaMei
2020 Hospital, 0.88 0.91 0.92

A Cascaded Neural Network for Staging in Non-Small


2021 Cell Lung Cancer Using Pre-Treatment CT 0.82 0.81 0.82

Screening and staging for non-small cell lung cancer by


serum laser Raman spectroscopy HongWang a,⁎,1,
Shaohong Zhang b,1, LimeiWana, Hong Suna, Jie Tan a,
2021 Qiucheng Sub 0.83 0.88 0.83

The 8th lung cancer TNM classification and clinical


staging system: review of the changes and clinical
implications Wanyin Lim1, Carole A. Ridge1, Andrew G.
2021 Nicholson2, Saeed Mirsadraee1 0.84 0.81 0.84
2022 Proposed Model 0.912 0.87 0.89
VI. CONCLUSION AND FUTURE SCOPE

The proposed three-stream deep network model aids in the analysis and production of better feature
maps to distinguish between normal and pathological nodules in CT medical images of the lungs.
Because of the combination of handcrafted and automated elements, classification accuracy improves.
As a result, there's a lower possibility of overlooking crucial information. As a result, this proposed
approach could be quite useful for detecting lung cancer in its early stages. Furthermore, when compared
to existing methods, our method has a classification accuracy of 98.2 percent, which is greater. In the
future, feature-based input could replace raw data, allowing the network to learn more effectively. As a
result, the network's performance could be significantly improved.

VII. REFERENCE

1 Dai Y, Yan S, Zheng B, Song C (2018) Incorporating automatically learned pulmonary nodule
attributes into a convolutional eural network to improve accuracy of benign-malignant nodule
classification. Phys Med Biol 63:245004
2 Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019) Lung nodule classification using deep
local-global networks. Int J Comput Assist Radiol Surg 14:1815
3 Lung Nodule Classification on Computed Tomography Images Using Fractalnet Amrita Naik1 ·
Damodar Reddy Edla1 · Venkatanareshbabu Kuppili1
4 Prediction of mediastinal lymph node metastasis based on 18F-FDG PET/CT imaging using
support vector machine in non-small cell lung cancer Guotao Yin1,2 & Yingchao Song3 &
Xiaofeng Li1,2 & Lei Zhu1,2 & Qian Su1,2 & Dong Dai1,2 & Wengui Xu1,2
5 Recognizing lung cancer and stages using a self-developed electronic nose system Ke Chen a,e,
Lei Liu a, Bo Nie a, Binchun Lu b, Lidan Fu b, Zichun He c, Wang Li d,*, Xitian Pi a, Hongying Liu
a,*
6 Xie Y, Xia Y, Zhang J, Song Y, Feng D, Fulham M, Cai W (2019) Knowledge-based collaborative
deep learning for benign-malignant lung nodule classification on chest ct. IEEE Trans Med Imag
38:991–1004
7 Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019) Lung nodule classification using deep
local-global networks. Int J Comput Assist Radiol Surg 14:1815
8 Al-Shabi M, Lan BL, Chan WY, Ng K-H, Tan M (2019) Lung nodule classification using deep
local-global networks. Int J Comput Assist Radiol Surg 14:1815
9 Lung nodule classification using combination of CNN, second and higher order texture features
Amrita Naik∗ 4 and Damodar Reddy Edla
10 A hybrid algorithm for lung cancer classification using SVM and Neural Networks Pankaj
Nangliaa, Sumit Kumarb,∗, Aparna N. Mahajana, Paramjit Singha, Davinder Ratheea
11 Semantic segmentation and detection ofmediastinal lymph nodes and anatomical structures in
CT data for lung cancer staging David Bouget1 · Arve Jørgensen1,3 · Gabriel Kiss1,5 · Haakon
Olav Leira1,4 · Thomas Langø2
12 Using Double Convolution Neural Network for Lung Cancer Stage Detection Goran Jakimovski
1,* and Danco Davcev 2
13 Automated AJCC (7th edition) staging of non-small cell lung cancer (NSCLC) using deep
convolutional neural network (CNN) and recurrent neural network (RNN) Dipanjan Moitra* and
Rakesh Kr. Mandal
14 A manifold learning regularization approach to enhance 3D CT image-based lung nodule
classification Ying Ren1 · Min-Yu Tsai2,3,4 · Liyuan Chen3,4 · Jing Wang3,4 · Shulong Li3 ·
Yufei Liu5,6 · Xun Jia2,3,4 · henyang Shen2,3,
15 Shape and margin-aware lung nodule classification in low-dose CT images via soft activation
mapping Yiming Lei a , Yukun Tian a , Hongming Shan b , Junping Zhang a , ∗, Ge Wang b ,
Mannudeep K. Kalra c
16 Multi-Task Learning for Lung Nodule Classification on Chest CT PENGHUA ZHAI1, 2, YALING
TAO1, 2, HAO CHEN1, 2, TING CAI1, 2, AND JINPENG LI1, 2 1HwaMei Hospital,
17 A Cascaded Neural Network for Staging in Non-Small Cell Lung Cancer Using Pre-Treatment CT
18 Screening and staging for non-small cell lung cancer by serum laser Raman spectroscopy
HongWang a,⁎,1, Shaohong Zhang b,1, LimeiWana, Hong Suna, Jie Tan a, Qiucheng Sub
19 The 8th lung cancer TNM classification and clinical staging system: review of the changes and
clinical implications Wanyin Lim1, Carole A. Ridge1, Andrew G. Nicholson2, Saeed Mirsadraee1

You might also like