
2nd International Conference on Innovative Research in Engineering and Technology (iCIRET2013), January 3-5, 2013

LUNG CANCER CLASSIFICATION – A SURVEY


SASIKALA. S¹, SATHYAPRIYA. N², DR. EZHILARASI. M³
¹ Associate Professor, Department of Electronics and Communication Engineering, Kumaraguru College of Technology, Coimbatore-49. Email: sundarsasi@gmail.com
² PG Scholar, Department of Electronics and Communication Engineering, Kumaraguru College of Technology, Coimbatore-49. Email: sathyasenthil2001@gmail.com
³ Principal, KGISL Institute of Technology, Coimbatore-49. Email: ezhilarasim@gmail.com

Abstract
Lung cancer is one of the most serious cancers in the world, with the number of deaths increasing gradually every year. The earlier the detection, the higher the chance of successful treatment. A great deal of work has been done in this area. This paper presents a review of recent image processing methods for lung cancer detection, segmentation and classification. The main contributions, advantages and drawbacks of the various methods are discussed in detail, along with problematic issues of CAD systems and an outlook for future research. The major goal of the paper is to provide a comprehensive reference source for researchers involved in lung cancer detection and classification.
Keywords: Lung cancer, bit plane slicing, k-Gabor, snake algorithm, SVM, nearest neighbour, region growing.

I. INTRODUCTION
Cancer is a disease in which abnormal cells multiply and grow into a tumour. The mortality rate of lung cancer is the highest among all types of cancer [6]. Cancer that starts in the lung is called primary lung cancer. There are several different types, divided into two main groups: small cell lung cancer and non-small cell lung cancer. Metastasis occurs when a cancer cell leaves the site where it began and moves into a lymph node or to another part of the body through the bloodstream. Computer-aided diagnosis (CAD) systems are used to extract quantitative data from computed tomography (CT) images of the chest to reach a diagnosis that can be presented to a radiologist. CAD systems are also used to extract image features that characterize a nodule as benign or malignant (cancerous). This paper presents an overview of the methods available for the detection and classification of lung cancer.

II. METHODOLOGY
M. Gomathi et al. proposed a CAD system [1] for the detection of lung cancer through the analysis of chest CT images. They used basic image processing techniques including bit-plane slicing, erosion, median filtering, dilation, outlining, lung border extraction and flood-fill algorithms. The first step is lung region extraction, where the lung region and regions of interest (ROIs) are detected from the chest CT scan image, which contains not only the lung region but also background, heart, liver and other organs. In this process, bit-plane slicing is performed and the slice with the best accuracy and sharpness is chosen for further enhancement of the lung region. Erosion, dilation and median filtering are then applied to the enhanced image to remove remaining distortion. An outlining algorithm is then applied to determine the outline of the regions. After that, the lung border is extracted and a flood-fill algorithm is applied to fill the obtained lung border with the lung region.
After the lung region is extracted, Fuzzy Possibilistic C-Means (FPCM), which combines the characteristics of both fuzzy and possibilistic C-means, is used for lung segmentation in order to detect the cancer nodule. The area of the candidate region, the maximum drawable circle (MDC) inside the candidate region and the mean intensity value of the candidate region are then extracted, and the following three diagnostic rules are applied to detect cancerous nodules. Rule 1: if the area of the candidate region exceeds the initial threshold value T1, it is eliminated from further consideration. Rule 2: if the radius of the drawable circle for the candidate region is less than the threshold T2, that region is considered a non-cancerous nodule and is eliminated from further consideration. Rule 3: if the mean intensity value of the candidate region falls below the minimum threshold T2 or exceeds the maximum threshold T3, that region is assumed to be non-cancerous. These rules are passed to an Extreme Learning Machine (ELM) in order to detect the cancer nodules in the supplied lung image. The ELM is a single-hidden-layer feed-forward neural network that randomly selects the input weights and analytically determines the output weights. The experiments were performed with 1000 images (2474 slices) containing 13 cancerous nodules, 8 of them smaller than 2 mm, obtained from a reputed hospital. This technique detected 10 cancer nodules correctly with 122 false positive regions.
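The three diagnostic rules can be sketched as a simple filter over candidate regions, as below. This is not the authors' code; the threshold values and region attributes are hypothetical placeholders, and the intensity bounds are given two distinct names (T3_min, T3_max) where the text reuses T2/T3.

# Illustrative sketch of the three diagnostic rules of [1]; all values are hypothetical.
def is_candidate_nodule(area, mdc_radius, mean_intensity,
                        T1=500.0, T2=2.0, T3_min=80.0, T3_max=200.0):
    """Return True if a candidate region survives all three rules."""
    if area > T1:                       # Rule 1: very large regions are eliminated
        return False
    if mdc_radius < T2:                 # Rule 2: maximum drawable circle too small
        return False
    if not (T3_min <= mean_intensity <= T3_max):   # Rule 3: intensity out of range
        return False
    return True

candidates = [
    {"area": 120.0, "mdc_radius": 4.5, "mean_intensity": 130.0},
    {"area": 900.0, "mdc_radius": 6.0, "mean_intensity": 150.0},
]
surviving = [c for c in candidates if is_candidate_nodule(**c)]
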


In paper [2], Omer M. Soysal et al. implemented a novel hierarchical decision engine built from a hierarchy of artificial neural networks. They presented an image processing tool as part of a comprehensive computer-aided detection system for lung nodule detection from CT scan images. Here the images are retrieved by converting an XML file to a MATLAB structure. Two types of features are utilised: geometric features and photometric features. The geometric features include circularity, and a spectral feature that depends on the polygonal shape of the ROI. The photometric feature class consists of co-occurrence and run-length features. Run-length can be used to measure texture characteristics of an image based on a binning approach. In particular, an efficient method for constructing the run-length matrix, involving the choice of bin size, is proposed in this work and used repeatedly in extracting various run-length features useful for characterizing image texture, such as the sum of run lengths and short-run emphasis.
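As an illustration of the run-length idea described above, the following minimal sketch (not the authors' MATLAB tool) bins the gray levels of a toy image, builds a horizontal run-length matrix and derives two texture measures; the bin count, maximum run length and input image are assumptions.

import numpy as np

def run_length_matrix(image, n_bins=8, max_run=32):
    # Bin intensities, then count horizontal runs of equal bin values.
    edges = np.linspace(image.min(), image.max(), n_bins + 1)[1:-1]
    binned = np.digitize(image, edges)          # values in 0 .. n_bins-1
    rlm = np.zeros((n_bins, max_run), dtype=np.int64)
    for row in binned:
        run_val, run_len = row[0], 1
        for v in row[1:]:
            if v == run_val:
                run_len += 1
            else:
                rlm[run_val, min(run_len, max_run) - 1] += 1
                run_val, run_len = v, 1
        rlm[run_val, min(run_len, max_run) - 1] += 1
    return rlm

img = np.random.randint(0, 256, size=(64, 64)).astype(float)
rlm = run_length_matrix(img)
runs = np.arange(1, rlm.shape[1] + 1)
total_runs = rlm.sum()                                   # number of runs recorded
short_run_emphasis = (rlm / runs ** 2).sum() / total_runs  # emphasizes short runs
sum_of_run_lengths = (rlm * runs).sum()                  # total length covered by all runs
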
In paper [3], Eva M. van Rikxoort et al. presented a completely automatic method to segment the lungs, lobes and pulmonary segments from volumetric chest CT scans. The method starts with lung segmentation based on region growing and standard image processing techniques. After the segmentation, three similar voxel-classification approaches are used to perform fissure, lobe and segment segmentation. For the detection of the fissures inside the lungs, two types of gray-scale features are used: the eigenvalues of the Hessian matrix and gray-value information at different scales. Next, the pulmonary fissures are extracted by a supervised filter. Subsequently, the lung lobes are obtained by voxel classification, where the positions of voxels in the lung and relative to the fissures are used as features. Finally, each lobe is subdivided into its pulmonary segments by applying another voxel classification that employs features based on the detected fissures and the relative position of voxels in the lobe. Initially, all scans are sub-sampled by a factor of 2 in each direction using block averaging (the mean of eight voxels becomes the new voxel value) to reduce the required computation time, and all computations are performed on the sub-sampled data. Next, the lobar fissures are segmented using a supervised approach. Based on the lung and fissure segmentations, the lobes are extracted. Finally, the segments are extracted per lobe. The fissure, lobe and segment segmentations are thus all based on classification, using gray-scale features, position features and an automatic 3-D algorithm. The quantitative evaluation performed is based on random points in a large set of scans and reflects the way the segments are used by radiologists. The performance of the lobe segmentation was evaluated using the same data. For the middle lobe of the right lung, the automatic system performed better than the human observers, and overall the system performed better for the left lung than for the right lung.
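The factor-2 block-averaging subsampling mentioned above can be sketched as follows; the toy volume shape is an assumption, and odd dimensions are simply cropped for brevity.

import numpy as np

def block_average_downsample(volume):
    # Each output voxel is the mean of a 2x2x2 block of input voxels.
    z, y, x = (d - d % 2 for d in volume.shape)      # crop to even dimensions
    v = volume[:z, :y, :x]
    return v.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

ct = np.random.rand(101, 256, 256).astype(np.float32)   # stand-in CT volume
small = block_average_downsample(ct)                     # shape (50, 128, 128)
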
Another automated computer-aided diagnosis (CAD) system for the detection of lung cancer is proposed in [4]. The CT images are pre-processed through contrast enhancement, thresholding, filtering and blob analysis. To separate the suspected nodule areas (SNA) from the image, a segmentation process using the Otsu threshold and a region growing technique is applied. The following features are used as diagnostic indicators: 1. area of interest, 2. calcification, 3. shape, 4. size of the nodule, 5. contrast enhancement. Texture features are considered to compare cancerous and non-cancerous images. An artificial neural network is developed, trained by the back-propagation algorithm to differentiate cancerous nodules from non-cancerous ones, and tested with different images from the DICOM CT lung images of the NIH/NCI Lung Image Database Consortium (LIDC) dataset. In this system, 90% sensitivity with 0.05 false positives per image is achieved. The binary image slicing technique utilized here has the advantage of being data- and user-independent and is also faster than thresholding. Surgeons and radiologists indicated an accuracy of 85% for locating cancerous nodules of 2.5-7.0 mm.
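A hedged sketch of the segmentation step only (Otsu threshold followed by region growing) is given below using scikit-image; the seed point, tolerance and synthetic slice are assumptions, and the paper's full pipeline additionally includes contrast enhancement, filtering and blob analysis.

import numpy as np
from skimage.filters import threshold_otsu
from skimage.segmentation import flood

image = np.random.rand(256, 256)            # stand-in for a pre-processed CT slice
t = threshold_otsu(image)                   # global Otsu threshold
binary = image > t                          # candidate mask

seed = (128, 128)                           # hypothetical seed inside a candidate
grown = flood(image, seed, tolerance=0.1)   # region growing around the seed
sna = binary & grown                        # suspected nodule area near the seed
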
A novel approach for diagnosing malignant lung nodules based on analyzing the spatial distribution of Hounsfield values of the detected lung nodules is proposed in [5]. The spatial distribution of image intensities (Hounsfield values) comprising the malignant nodule appearance is accurately modelled with a new rotationally invariant second-order Markov-Gibbs Random Field (MGRF). A new maximum likelihood estimation approach is introduced to estimate the neighbourhood system of the proposed rotation-invariant MGRF and its potentials from a training set of nodule images with normalized intensity ranges. The visual appearance of both small 2D and large 3D malignant lung nodules in an LDCT chest image is modelled with a generic translation- and rotation-invariant second-order MGRF. Its voxel-wise and central-symmetric pairwise voxel potentials account for differences between the Hounsfield values (i.e. gray levels, or intensities) of the nodules. Possible monotone (order-preserving) intensity changes, e.g. due to different sensor characteristics, are taken into account by equalizing lung areas on every segmented LDCT data set. The novelty of the approach lies in using the appearance of a segmented 3D nodule, instead of the more conventional growth rate, as a reliable diagnostic feature. The appearance is described in terms of the voxel-wise conditional Gibbs energies for a generic rotationally and translationally invariant second-order MGRF model of malignant nodules, with analytically estimated characteristics for voxel neighbourhoods and potentials.
A hybrid methodology that first segments cancerous regions using the area property of connected components and then uses contrast variance to separate cancerous cells from non-cancerous tissue is discussed in [7]. The method has two phases. In the first phase, an optimal thresholding based on Otsu's method is
performed, connected regions are identified and segmented based on their areas. The thoracic region is then identified and the cancerous nodules are separated from the non-cancerous tissue using the contrast difference between them. This generalized hybrid segmentation algorithm is based on the contrast and area properties of the regions in the image. It is capable of properly segmenting cancerous regions, even cancerous cells that are attached to other healthy regions.
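The first phase can be illustrated with the following sketch, which thresholds a slice, labels connected components and keeps only those whose areas fall within a plausible range; the area bounds and the synthetic slice are assumptions.

import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

slice_ = np.random.rand(256, 256)                 # stand-in CT slice
mask = slice_ > threshold_otsu(slice_)

labels, n = ndimage.label(mask)                   # connected components
areas = ndimage.sum(mask, labels, index=range(1, n + 1))

keep = {i + 1 for i, a in enumerate(areas) if 20 <= a <= 500}   # hypothetical area gate
candidates = np.isin(labels, list(keep))          # regions surviving the area rule
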
A new feature extraction method that employs Gabor filters and captures texture information from medical images without the costly segmentation usually associated with texture extractors, known as the k-Gabor method [8], is proposed by Gabriel Humpire-Mamani et al. The k-Gabor method can quantify texture information from specific regions, tissues and internal structures of the images, providing a concise representation for richer image analysis; the generated feature vectors describe the images more precisely. The k-Gabor feature extractor comprises two stages: (i) the pixel values of each histogram-equalized original gray-scale image are clustered using the k-Means algorithm, generating k new images for analysis; (ii) Gabor features are extracted from each image generated in the first stage. Finally, all the features extracted from the set of clustered images compose the final feature vector, forming the k-Gabor elements. Gabor filters work with rotations and scales, so the analysis of the internal structures at any level of the original image is stronger and more robust. Clustering image regions also takes advantage of the shape characteristics of the regions. The k-Gabor method generates a feature vector considering every clusterized image created by the k-Means algorithm with 6 orientations and 4 scales. Since there are k masked (clusterized) images, the size of the feature vector will be k x 6 x 4 features captured from the Gabor subspaces; the mean and standard deviation of each subspace are also computed. The value of k can vary according to the type of images stored in the dataset, considering the degree of detail demanded by the application. The k value was empirically set to k = 2, 3 and 10 for three different datasets of medical images. The proposed k-Gabor method is compared to several well-known feature extractors available in the literature, and the results reveal that it achieved the best precision and retrieval when answering similarity queries. The total time spent computing the k-Gabor features was always within fractions of a second.
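A compact sketch of the two-stage k-Gabor idea follows: cluster the equalized pixel values with k-Means into k masked images and extract Gabor statistics from each. For brevity only 2 orientations and 2 scales are used instead of the 6 x 4 of [8]; k, the frequencies and the input image are assumptions.

import numpy as np
from sklearn.cluster import KMeans
from skimage.exposure import equalize_hist
from skimage.filters import gabor

image = equalize_hist(np.random.rand(128, 128))   # stage (i): histogram-equalized image
k = 3
labels = KMeans(n_clusters=k, n_init=10).fit_predict(
    image.reshape(-1, 1)).reshape(image.shape)    # cluster pixel values into k groups

features = []
for c in range(k):                                # stage (ii): Gabor features per cluster
    masked = np.where(labels == c, image, 0.0)
    for theta in (0.0, np.pi / 2):                # orientations (subset)
        for frequency in (0.1, 0.3):              # scales (subset)
            real, _ = gabor(masked, frequency=frequency, theta=theta)
            features.extend([real.mean(), real.std()])
feature_vector = np.asarray(features)             # k x orientations x scales x (mean, std)
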
In paper [9], a fully automatic method is proposed to identify whether a lung nodule is well-circumscribed, juxtavascular, juxtapleural or pleural-tail in computed tomography (CT) images. The texture features of a lung nodule are extracted based on voxel labelling outputs, and its location information is inferred. A new method is proposed to classify nodule locations in CT images exploiting the advantages of both voxel labelling and context characterization. First, the voxels are classified into only two categories: background (i.e. parenchyma) and foreground (i.e. nodule, vessel and pleural wall). For this, the authors designed an optimized graph model based on a conditional random field (CRF) with global and region-based energy terms besides the standard unary and pairwise terms. Second, to infer the location information, a learning-based context characterization based on the voxel labelling outputs is used. For context characterization, the SIFT (Scale-Invariant Feature Transform) descriptor is extracted by fixing the keypoint location at the nodule centroid; with an empirically chosen scale parameter, the SIFT algorithm then automatically determines the principal orientation and computes a 128-dimensional feature vector. A four-class SVM is then trained to classify the feature descriptors into the four types of nodule locations.
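Only the final classification step is sketched below: a four-class SVM over 128-dimensional descriptors standing in for the SIFT features computed at the nodule centroid. The random descriptors and labels are placeholders, not data from [9].

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 128))              # 128-d context descriptors (placeholders)
y = rng.integers(0, 4, size=200)             # 0: well-circumscribed, 1: juxtavascular,
                                             # 2: juxtapleural, 3: pleural tail
clf = SVC(kernel="rbf", C=1.0).fit(X, y)     # four-class SVM (one-vs-one internally)
location_type = clf.predict(rng.normal(size=(1, 128)))
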
A patch-based technique for super-resolution enhancement of 4D-CT images along the superior-inferior direction is proposed in [10]. Four-dimensional CT uses new technology to take images that capture not only the location of a tumour, but also its movement and the movement of the body's organs over time [11]. Anatomical information that is missing at one particular phase can be recovered from the other phases. Based on this assumption, a patch-based mechanism for guided reconstruction of super-resolution axial slices is employed. Specifically, to reconstruct each targeted super-resolution slice of a CT image at a particular phase, a dictionary of patches is agglomerated from the images of all other phases in the 4D-CT sequence. A sparse combination of the patches in this dictionary is then used to reconstruct the details of a super-resolution patch, under the constraint of similarity to the corresponding patches in the neighbouring slices. By iterating this procedure over all possible patch locations, a super-resolution 4D-CT image sequence with enhanced anatomical details can eventually be reconstructed. The authors construct a dictionary that is adapted to each local patch based on the neighbouring patches at other phases, instead of a general dictionary for the whole 4D-CT. This considerably reduces the size of the dictionary, since only patches that are structurally close to the neighbourhood of the target patch need to be included. For better characterization of the structural patterns in the patches, additional features derived from the patches, such as image gradients, are also used. After obtaining the relevant dictionary for a patch, a suitable reconstruction using the dictionary is performed.
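The sparse-combination step can be illustrated roughly as below, where a local dictionary of patches (here random stand-ins for patches gathered from the other phases) is used to reconstruct a target patch by orthogonal matching pursuit; the patch size, sparsity level and data are assumptions, and the similarity constraint to neighbouring slices is omitted.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

patch_size = 5
rng = np.random.default_rng(1)
dictionary = rng.normal(size=(patch_size * patch_size, 50))   # 50 stand-in patches as columns
target = rng.normal(size=patch_size * patch_size)             # target patch to reconstruct

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(dictionary, target)                                    # sparse coefficients over the dictionary
reconstructed_patch = (dictionary @ omp.coef_).reshape(patch_size, patch_size)
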
In [12], an HRCT (high-resolution computed tomography) image enhancement and de-noising algorithm based on rough set theory is proposed. Using the equivalence relations defined by medical imaging knowledge (the density dissimilarities of human organs), the HRCT image is partitioned into background and object sub-images. The background and object sub-images are enhanced and de-noised respectively and then combined to form the final enhanced image. Lung tissue in the chest HRCT image is regarded as the
target area, which receives the strongest enhancement. The ROI (region of interest) de-noising introduced enables doctors to freely use the mouse to specify a polygon within the image. Contrast is the key to HRCT. Image contrast is the difference in relative signal intensity within a region, the ratio of black to white, and also the gradual change from black to white levels. In medical imaging examinations, improving the contrast is very helpful for image clarity and for the observation of the patient's organ details and gray levels. The enhancement and de-noising attenuate image noise as much as possible while keeping edges sharp, details clear and regions smooth. The enhancement algorithm retains the full gray values and structure of the target area (lung tissue), and thus reasonably preserves its fidelity. The concept of the ROI is then introduced after enhancement.
Paper [13] investigates the boundary model best suited to minimizing artefacts in the reconstruction of lung function. Three different 3D models with a monolayer electrode are generated to represent a full range of geometric integrity, and the corresponding reconstructed images are compared using the conjugate gradient (CG) method. Three different multilayer electrode structures for the 3D models are investigated, including the bar electrode structure with two and three layers, and the two-layer electrode structure with a spiral property. An advanced version of the conjugate gradient method, the Schur CG method, is used to solve the inverse problem. The Schur CG method accelerates the convergence of the iterative steps and also yields more stable and accurate results. The results indicate that the quality of the reconstructed image depends on the electrode structure and the boundary form.
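For orientation, a plain conjugate gradient solver for a symmetric positive-definite system A x = b is sketched below; this is the generic CG iteration underlying such inverse problems, not the Schur CG variant of [13], and the test matrix is illustrative.

import numpy as np

def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    p = r.copy()                       # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

M = np.random.rand(50, 50)
A = M @ M.T + 50 * np.eye(50)          # symmetric positive-definite test matrix
b = np.random.rand(50)
x = conjugate_gradient(A, b)
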
The feasibility and accuracy of tracking the motion of a lung tumour in a breathing phantom using a computer vision algorithm and electronic portal images (EPIs) are investigated in [14]. A multi-resolution optical flow algorithm that incorporates weighting based on the differences between frames is used to obtain a set of vectors describing the motion between two frames. A global value representing the average motion is obtained by computing the weighted mean of this set of vectors. The tracking accuracy of the optical flow algorithm is compared to potentiometer measurements, and a self-resetting technique is used to offset the drift observed in the cumulative position of the target. For a 12 breaths/min motion, a maximum average inter-frame velocity error of (1.06 ± 0.61) mm/s is obtained. A correlation coefficient of 0.97, bounded by a 95% prediction interval of (0.96, 0.98), is established between the optical flow and potentiometer results, and a maximum absolute average positional error of 0.42 ± 0.21 mm is achieved. This approach offers the potential for real-time tumour motion tracking. Optical flow tracking is a method that tracks apparent motion in the image using the temporal and spatial intensity gradients. Various implementations of the optical flow algorithm have been used to track organ and tumour motion. Since the change of each individual pixel can be represented by an optical flow vector, it offers the potential to track the motion of a deformable object for which a rigid template might not find an exact match. Due to the low contrast of the tumour on the EPI and the scattering of the photon beam by surrounding tissue, accurate detection of motion with small magnitudes is challenging, because it has been difficult to find a meaningful threshold for the optical flow vectors that separates small motion from random noise. This paper evaluates the accuracy of an approach that automatically tracks the position and velocity of an uncontoured moving target in an EPI image with an average image-difference weighted optical flow algorithm. A small group of reliable flow vectors is obtained from a set of absolute image-difference intensities. The set of image-difference intensities highlights the regions where changes have occurred and varies for subsequent image pairs; hence, instead of a fixed threshold for the entire image sequence, an adaptive set of thresholds is obtained. A three-layer multi-resolution optical flow algorithm was found to be sufficient for the detection of average clinical tumour velocities. In estimating the inter-frame velocity, a strong correlation was found between the optical flow and potentiometer results, with a maximum average inter-frame velocity error of (1.06 ± 0.61) mm/s, corresponding to a maximum average inter-frame displacement error of (0.14 ± 0.08) mm measured over a time interval of 0.133 s. The potentiometer used has a linearity tolerance of 0.5% for a maximum extension of 12 mm. The approach has strong potential as a non-invasive method of real-time tracking of organ or tumour motion, which could be used for adaptive radiotherapy. Comparing the tracking accuracy of the algorithm with potentiometer measurements is a first step towards establishing the accuracy and feasibility of applying the algorithm to actual patient data.
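The core computation can be sketched as below: dense optical flow between two frames, with the mean motion weighted by the absolute inter-frame difference so that changed regions contribute more. The Farneback flow used here stands in for the multi-resolution algorithm of [14]; the frames are synthetic.

import numpy as np
import cv2

prev = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in EPI frame t
curr = np.roll(prev, 2, axis=0)                            # stand-in EPI frame t+1 (shifted)

# Parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

weights = np.abs(curr.astype(float) - prev.astype(float))  # inter-frame difference weighting
weights /= weights.sum() + 1e-12
mean_dx = (flow[..., 0] * weights).sum()                   # weighted mean horizontal motion (pixels/frame)
mean_dy = (flow[..., 1] * weights).sum()                   # weighted mean vertical motion (pixels/frame)
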
In [15], a new approach based on image registration is proposed to help experts better diagnose lung cancer; it improves on a method previously proposed by the authors. In a multi-step process, the similarity between a pair of images acquired at past and current cancer stages is maximized, while ensuring that tumour changes are preserved during the deformation process. The similarity measure used during the registration process is Normalized Mutual Information. In the non-rigid image registration phase, constraints are enforced on the optimization criteria, the number of iterations and the number of B-Spline grid nodes to preserve the tumour change. Control points, which are the transformation parameters for the Thin-Plate Spline warping, are extracted from edge-detected images in a semi-automatic manner. In the final step, subtraction of sequential CT images is performed to detect the changes in the lung, including the tumour change. Using the Insight Toolkit framework improved the quality of the final results and also ensured a more
robust application. The approach has three main steps: non-rigid image registration, control point extraction and Thin-Plate Spline (TPS) warping. In the first step, the initial misalignment between the source and target images is removed; then, using the edge and structural information of the registered image and the target image, a number of control points (CPs) are identified as parameters for the TPS transformation function. To remove the initial misalignment, a non-rigid image registration is used which provides sufficient flexibility and performance for temporal analysis, and the parameter settings of the non-rigid registration provide the required control over the direction of the process. The TPS transformation function gives the final warping limited support, which is a desired property for lung cancer progression assessment, since the effect of a control point is local rather than a propagating global effect. Experiments assessing the changes in CT images of 8 patients proved to be accurate and computationally efficient, with a 1.7 mm average root mean square (RMS) error, an average computation time of 40 seconds per slice and an average of 40 minutes for a full lung volume of 60 slices at 5 mm thickness. The results were validated by comparison with manual registration and fully automated Free-Form Deformation (FFD) methods. Methods based on FFDs have reported good results for tumours smaller than 1.5 cm (small cell lung cancer) but presented inconsistencies for larger tumours (>3.0 cm), with errors increasing from 0.9 mm for SCLC to 4.1 mm for tumours of bigger size. The proposed method maintained more consistent results for tumours of different sizes due to the utilization of user-provided information and the emphasis on local information.
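A small sketch of the Normalized Mutual Information measure that drives the registration is given below, computed from a joint histogram of two images; the bin count and toy images are assumptions, and the registration itself (B-Spline optimization, TPS warping) is not reproduced.

import numpy as np

def normalized_mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint probability
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy                      # NMI: (H(A) + H(B)) / H(A, B)

source = np.random.rand(64, 64)
target = source + 0.05 * np.random.rand(64, 64)
print(normalized_mutual_information(source, target))
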
A feature extraction model carried out in two phases is proposed in [16]. The first phase performs image pre-processing and edge-based segmentation using the snake algorithm, and a database is prepared based on the contour features of the lung. In the second phase, region-of-interest (ROI) nodules are extracted from numerous datasets and their features are calculated and stored in a database in the form of a metric. The ROI features are used to compare the scanned sample with its closest neighbour and classify the ROI as positive or not. Grey Level Co-occurrence Matrix (GLCM) features can also be considered to make the ROI classification even more accurate. Finally, the assessment of tumour growth and of the reduction in non-pathological area during subsequent periods of the cancer is carried out using a nearest-neighbour (NN) rule based on the features extracted in both phases. The procedure was run over 132 CT samples from 2 different scanners. The proposed scheme eliminates the image background during the detection of ROI nodules, thereby avoiding unnecessary confusion while scanning the pixels.
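The classification idea can be sketched as follows: GLCM texture features for an ROI compared against a small feature database with a nearest-neighbour rule. The ROI, the stored feature vectors and their labels are placeholders.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(roi):
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

rng = np.random.default_rng(2)
database = rng.random((40, 4))                      # stored ROI feature metrics (placeholders)
labels = rng.integers(0, 2, size=40)                # 1 = positive ROI, 0 = negative
nn = KNeighborsClassifier(n_neighbors=1).fit(database, labels)

roi = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
prediction = nn.predict(glcm_features(roi).reshape(1, -1))
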
Pei Xiaomin et al. proposed a 3D multiscale filter using a Gaussian kernel of different scales to match the different structures in the images for the detection of lung nodules [17]. The Gaussian kernel for 2D images is defined as

G(x, y; s) = (1 / (2πs²)) exp(−(x² + y²) / (2s²)),

where s is the scale of the Gaussian kernel, used to match nodules of different sizes. The second-order gradients of the Gaussian kernel are calculated, and the second-order partial derivatives of an original image L(x, y) in different directions are computed as convolutions with the derivatives of the Gaussian in those directions. The scale-normalized second-order derivative of the Gaussian kernel in the x direction is

G_xx(x, y; s) = s^γ ∂²G(x, y; s) / ∂x²,

where γ is a parameter defining a family of normalized derivatives. This normalization is important for a fair comparison between the responses of differential operators at multiple scales. After obtaining the second-order derivatives of the original image in the different directions ((x,x), (x,y), (y,x) and (y,y)), the second-order derivative (Hessian) matrix is obtained as

H = [ L_xx  L_xy ; L_yx  L_yy ].

The eigenvalues of the Hessian matrix are calculated to extract the principal directions that decompose the local second-order structure of the image. Nodules appear as bright blob structures in a darker environment, and different combinations of Hessian eigenvalues correspond to different structures. For lung nodules in a CT image, the eigenvalues of the Hessian matrix should satisfy λ1 ≈ λ2 << 0. The nodule measure is analyzed at different scales s of the Gaussian kernel, and the response of the filter is maximized at the scale at which the Gaussian
kernel approximately matches the size of the nodule to be detected. The nodule measures provided by the filter response at different scales are then fused to obtain the final nodule measure.
For each nodule candidate, a 2D geometrical region growing technique constrained to 5 repetitions is developed to segment it accurately and distinguish it from vessel junctions and vessels. Four features are extracted: compactness, circularity and two second central moments. A rule-based classifier is employed, with a separation metric as the criterion for determining the 'optimal' composite feature for separating nodules from non-nodules. To eliminate overtraining in the process of threshold selection, the cut-off threshold must pass through one of the nodules, which is determined automatically using a gain defined as follows: the ratio of the number of non-nodules removed to the number of nodules sacrificed is calculated for the k possible thresholds, the maximum value is defined as the gain of the feature, and the corresponding threshold is considered the optimal threshold for that feature. Nodule candidates surviving this rule-based classifier are considered the final nodule candidates. The scheme was tested on 30 series of CT lung images from the LIDC dataset. Across the 30 CT studies, all 52 true nodules were identified, with 8.4 false positives per scan.

III. CONCLUSIONS
The goal of an ultimate CAD system is to segment the lung nodule, detect the cancerous nodule and thereby provide a second opinion to doctors for deciding on appropriate treatment. A detailed discussion of the CAD systems developed for lung cancer detection, with their advantages and limitations, has been presented. This paper should help in introducing new and innovative methods for developing CAD systems with improved accuracy.

REFERENCES
[1] M. Gomathi, P. Thangaraj, "A Computer Aided Diagnosis System for Detection of Lung Cancer Nodules Using Extreme Learning Machine," International Journal of Engineering Science and Technology, Vol. 2(10), 2010, pp. 5770-5779.
[2] Omer M. Soysal, Jianhua Chen, Helmut Schneider, "An Image Processing Tool for Efficient Feature Extraction in Computer-Aided Detection Systems," 2010 IEEE International Conference on Granular Computing.
[3] Eva M. van Rikxoort, Bartjan de Hoop, Saskia van de Vorst, Mathias Prokop, Bram van Ginneken, "Automatic Segmentation of Pulmonary Segments from Volumetric Chest CT Scans," IEEE Transactions on Medical Imaging, Vol. 28, No. 4, April 2009.
[4] Disha Sharma, Gagandeep Jindal, "Computer Aided Diagnosis System for Detection of Lung Cancer in CT Scan Images," International Journal of Computer and Electrical Engineering, Vol. 3, No. 5, October 2011.
[5] A. El-Baz, A. Soliman, P. McClure, G. Gimel'farb, M. Abo El-Ghar, R. Falk, "Early Assessment of Malignant Lung Nodules Based on the Spatial Analysis of Detected Lung Nodules," ©2012 IEEE.
[6] U.S. Cancer Statistics Working Group, United States Cancer Statistics: 1999-2008 Incidence and Mortality Web-based Report, Atlanta (GA): Department of Health and Human Services, Centers for Disease Control and Prevention, and National Cancer Institute, 2012. Available at: http://www.cdc.gov/uscs.
[7] C. Nandini, Brinal Jason Machado, Chandan, Nandkishor Patil, Padmanabh Aski, "Hybrid Approach for Segmenting Cancerous Regions from CT Scan Images of Lungs," TECHNIA International Journal of Computing Science and Communication Technologies, Vol. 3, No. 1, July 2010 (ISSN 0974-3375).
[8] Gabriel Humpire-Mamani, Agma J. M. Traina, Caetano Traina Jr., "k-Gabor: A New Feature Extraction Method for Medical Images Providing Internal Analysis," ©2012 IEEE.
[9] Yang Song, Weidong Cai, Yue Wang, David Dagan Feng, "Location Classification of Lung Nodules with Optimized Graph Construction," ©2012 IEEE.
[10] Yu Zhang, Guorong Wu, Pew-Thian Yap, Qianjin Feng, Jun Lian, Wufan Chen, Dinggang Shen, "Reconstruction of Super-Resolution Lung 4D-CT Using Patch-Based Sparse Representation," ©2012 IEEE.
[11] Source available at: http://www.upmccancercenter.com/radonc/4dct.cfm.
[12] Xie Gang, Yan Chengdong, Cao Tianrui, Wang Fang, "ROI of HRCT Enhancement and De-noising Based on Rough Set," 2010 International Conference on Computational Aspects of Social Networks.
[13] Wenru Fan, Huaxiang Wang, Xiaoyan Chen, Zhiying Lv, "Three Dimensional EIT Models for Human Lung Reconstruction Based on Schur CG Algorithm," ©2009 IEEE.
[14] Peng (Troy) Teo, Roan Crow, Samantha Van Nest, Stephen Pistorius, "Tracking a Phantom's Lung Tumour Target Using Optical Flow Algorithm and Electronic Portal Imaging Devices," ©2012 IEEE.
[15] Dawood M. S. Almasslawi, Ehsanollah Kabir, "Using Non-Rigid Image Registration and Thin-Plate Spline Warping for Lung Cancer Progression Assessment," ©2011 IEEE.
[16] Vivekanandan D, Sunil Retmin Raj, "A Feature Extraction Model for Assessing the Growth of Lung Cancer in Computer Aided Diagnosis," IEEE International Conference on Recent Trends in Information Technology (ICRTIT 2011), MIT, Anna University, Chennai, June 3-5, 2011.
[17] Pei Xiaomin, Guo Hongyu, Dai Jianping, "Computerized Detection of Lung Nodules in CT Images by Use of Multiscale Filters and Geometrical Constraint Region Growing," ©2010 IEEE.
