
biocybernetics and biomedical engineering 38 (2018) 71–89

Available online at www.sciencedirect.com

ScienceDirect

journal homepage: www.elsevier.com/locate/bbe

Review Article

Medical image registration in image guided surgery: Issues, challenges and research opportunities

Fakhre Alam a,*, Sami Ur Rahman a, Sehat Ullah a, Kamal Gulati b

a Department of Computer Science & IT, University of Malakand, Dir (L), Khyber Pakhtunkhwa, Pakistan
b School of Computer Science and Information Technology, Stratford University, VA, USA

Article info

Article history:
Received 5 July 2017
Received in revised form 18 September 2017
Accepted 6 October 2017
Available online 18 October 2017

Keywords:
Medical image registration
Image registration methods
Image-guided surgery

Abstract

Multimodal images of a patient obtained at different times, pre-surgical planning, intra-procedural guidance and visualization, and post-procedural assessment are the core components of image-guided surgery (IGS). In IGS, the goal of registration is to integrate the corresponding information in different images of the same organ into a common coordinate system. Registration is a fundamental task in IGS and its main purpose is to provide better visualization and navigation to the surgeons. In this paper, we describe the most popular types of medical image registration and evaluate their prominent state-of-the-art issues and challenges in image-guided surgery. We also present the factors which affect the accuracy, reliability and efficiency of medical image registration methods. Highly successful IGS cannot be achieved until all the issues and challenges in the registration process are identified and subsequently solved.

© 2017 Nalecz Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences. Published by Elsevier B.V. All rights reserved.

1. Introduction

Over the last few decades, image-guided surgery (IGS) and radiotherapy have brought several changes in the way patients are treated and operations are performed. IGS is now an alternative to conventional surgery because it provides several advantages, such as minimum risk of tissue damage, improved localization and targeting, shorter procedures through increased visualization of the surgical field, and improved hand-eye coordination. The availability of these techniques provides several benefits to patients, such as reduced surgical trauma, fast recovery, and reduced hospital stay and cost [1,2]. Nowadays, IGS techniques assist clinicians in qualitative diagnosis and in accurately resecting the tumor based on image information. The availability of precise image information during IGS is due to the integration of pre-operative images (images of a patient obtained before surgery) and intra-operative images (images of a patient obtained during surgery) with navigation technology.

Real-time visualization of the anatomical regions of interest in medical images during the surgical process is an essential requirement for an IGS system [3]. The required images of these anatomical regions are obtained through high-resolution 3D scans such as magnetic resonance imaging (MRI), ultrasound (US), computed tomography (CT) and positron emission tomography (PET). Similarly, multiple images are obtained in different time-frames or from different

* Corresponding author at: Department of Computer Science & IT, University of Malakand, Dir (L), Khyber Pakhtunkhwa, Pakistan.
E-mail addresses: fakhrealam@uom.edu.pk (F. Alam), srahman@uom.edu.pk (S.U. Rahman), sehatullah@uom.edu.pk (S. Ullah),
kgulati@stradford.edu (K. Gulati).
https://doi.org/10.1016/j.bbe.2017.10.001
0208-5216/© 2017 Nalecz Institute of Biocybernetics and Biomedical Engineering of the Polish Academy of Sciences. Published by Elsevier
B.V. All rights reserved.
angles of the same subject. Generally, the information in the individual images is not enough for accurate diagnosis and needs to be aligned to reveal more information. IGS relies on preoperative images to provide visualization and facilitate surgical navigation within the human organ. During surgery, preoperative images are matched against their correspondences in the intraoperative patient volumetric images using the process of registration. More specifically, in image-guided surgery the alignment of corresponding information in the same underlying tissue or patient's organ for analysis, visualization and navigational guidance is performed through the process of registration [4]. IGS involves the creation of precise, patient-specific models of the relevant anatomy for the surgical process, and registering the models and corresponding image data to the patient [5]. In an IGS system, the registration process is maintained over the course of therapy.

Successful IGS greatly depends on the registration of preoperative image data with the intraoperative physical anatomy. In an image-guided surgical system, preoperative image-based techniques are integrated with an intraoperative guidance system. In the registration process, the same coordinate frames and anatomical structures in both the preoperative image (prior to surgery) and the intraoperative image (during surgery) are mapped to each other. Successful IGS also requires conducting the procedure multiple times, i.e. obtaining images of the regions of interest at multiple time-frames or with multiple scanners. It is therefore not good practice in IGS to rely on a single image obtained either at the beginning of the procedure/in the same time-frame or with a single modality. The quality of the obtained images is further improved with alignment and registration. These high-quality and more informative images help surgeons to accurately locate the region of interest while the surgery is in progress.

In spite of numerous research developments and clinical use, the image registration procedures undertaken in IGS still need further improvement. Moreover, continuous research interest is seen in this area due to the strong dependency of IGS on image registration methods. It is therefore necessary to develop more advanced registration methods capable of accurately and efficiently aligning medical images. In this regard, researchers and clinical practitioners need to come forward and work on some of the prominent issues and challenges in the area of medical image registration, which are presented in this survey paper. The objectives of the paper can be summarized as follows: (a) to give an overview of medical image registration types, putting emphasis on their role in image-guided surgery; (b) to give additional emphasis to the issues and challenges in the field, and to present their possible solutions and research guidelines; (c) to present the most recent state-of-the-art knowledge in a systematic fashion, in order to study medical image registration in depth. The contribution of this survey paper is to provide comprehensive knowledge on medical image registration in a systematic manner, focusing on the issues and challenges. The rest of the paper is organized as follows: Section 2 presents the general concepts and methods used for the registration of medical images in IGS. Section 3 describes issues and challenges in the available registration methods for IGS, while Section 4 presents some solutions and techniques to cope with these issues. Concluding remarks are presented in Section 5.

2. General concepts and methods

Image-guided surgery (IGS) involves anatomical images of a patient (obtained preoperatively) and the patient's physical space (intraoperative images). Preoperative images spatially localize pathology, which is further aligned with the corresponding features of interest in intraoperative images of the patient in the operating room. The alignment of preoperative and intraoperative images greatly helps the surgeon with proper guidance during surgery [6]. Moreover, registration methods also quantitatively compare medical images taken at different times, from which information about evolution over time can be inferred [7]. This information also helps surgeons in the monitoring of tumor growth over time.

The combined information obtained from multiple input images (preoperative and intraoperative) provides a larger field of view and better image quality than that available in one type of image (e.g. intraoperative images). On completion of the registration process in IGS, all the information of interest in the preoperative images is mapped to the intraoperative patient anatomy. As a result, surgeons get combined information intraoperatively in the operating room, which helps them in surgical guidance and treatment. Fig. 1 shows the surgical view of preoperative diagnostic images and intraoperative navigation in prostate surgery [8]. In the figure, image (a) is the operating view of the surgeon, while image (b) shows preoperative T2-weighted MRI images overlaid onto the stereo-endoscopic intraoperative camera view. The preoperative T2-weighted MRI images contain the prostate and the tumor location, which were in clinical use for preoperative assessment by the surgeons.

Every modality has its own features to extract different types of information from human organs, i.e. magnetic resonance imaging (MRI) and computed tomography (CT) extract anatomical structures, while positron emission tomography (PET) and single-photon emission computed tomography (SPECT) access functional information [9,10]. The integration of both functional and anatomical information is always required in IGS and greatly helps surgeons in diagnosis and treatment planning.

The main aim of registration is to find the geometrical transformation and mapping between two or more images in order to obtain maximum information [11-13]. Registration is an iterative process: the source image is transformed toward the target image, the similarity measures between them are computed, and the resultant image is generated once the similarity measures are fulfilled. If the similarity measure indicates that the source and target images are not yet aligned, the process is repeated to optimize the transformation parameters. The mapping of 2D and 3D source and target images can be expressed as

t(x′, y′) = g(T(s(x, y)))    (1)

t(x′, y′, z′) = g(T(s(x, y, z)))    (2)

where t is the fixed target image, s is the moving source image, g is an intensity mapping function, T is a spatial transformation, x′, y′, z′ are the coordinates of the source image, and x, y, z are the coordinates of the target image.
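The mapping in Eqs. (1) and (2) can be illustrated with a short discrete sketch (an illustrative NumPy toy, not code from the paper; the translation used for T and the identity intensity mapping g are assumptions):

```python
import numpy as np

def register_2d(source, T, g=lambda v: v):
    """Discrete toy version of Eq. (1): t(x', y') = g(T(s(x, y))).

    source : 2D array of gray values s(x, y)
    T      : 3x3 homogeneous spatial transformation (source -> target coords)
    g      : intensity mapping function (identity by default)
    """
    h, w = source.shape
    target = np.zeros_like(source, dtype=float)
    for y in range(h):
        for x in range(w):
            xp, yp, _ = T @ np.array([x, y, 1.0])    # transformed coordinates
            xi, yi = int(round(xp)), int(round(yp))  # nearest-pixel rounding
            if 0 <= xi < w and 0 <= yi < h:
                target[yi, xi] = g(source[y, x])     # apply intensity mapping g
    return target

# Example: a pure translation by (2, 1) pixels as the spatial transformation T
T = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
img = np.arange(16.0).reshape(4, 4)
out = register_2d(img, T)
```

A real registration pipeline would search over the parameters of T to maximize a similarity measure; here T is simply given.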
Fig. 1 – Surgical view of intraoperative video data and preoperative diagnostic images in prostate surgery. In the figure, image (a) is the operating view of the surgeon, while image (b) shows preoperative T2-weighted MRI images overlaid onto the stereo-endoscopic intraoperative camera view. The T2-weighted MRI images contain the prostate and the tumor location, which were in clinical use for preoperative assessment by the surgeons.

For the transformation of 2D images, the coordinates are x′, y′ and x, y, while for 3D images the coordinates are x′, y′, z′ and x, y, z, for the source and target images respectively.

An image registration algorithm involves four main steps: (1) feature detection, (2) feature matching, (3) transform model estimation and (4) resampling and transformation [14]. The result of registration depends on these four steps. Fig. 2 shows the process of registration from input images to the resultant registered image.

In medical image processing, feature detection is performed in a task-specific manner, usually as the product of a segmentation preprocessing step. Important and desirable features in the source and target images are detected in the first step of registration during the surgical procedure. These features consist of diagnostic information about anatomy and pathologies from different types of organs. In image-guided surgery, the accurate detection of desirable features, i.e. tumors, ensures successful removal of abnormal tissues. Advancements in imaging modalities and registration methods lead to the precise detection of abnormal tissue (tumor) in its early stages. Moreover, these techniques also improve diagnostic procedures and facilitate better staging and preoperative planning. Accurate information in the feature detection step is also important for the surgeon, who further uses the obtained information for object recognition and matching. Therefore, proper detection of features during the registration process is essential for successful IGS.

The second step in image registration is feature matching, in which the corresponding detected features in the source and target images are mapped to each other. In the feature matching step, the correspondence among similar features between the set of input images is established [15]. The correspondence is made through the intensity distribution in a neighborhood of every pixel in both the source and target images [16]. In IGS applications, the features are matched based on similarity measures among corresponding anatomical and pathological information in the preoperative and intraoperative image data sets. In these applications, precise transfer of corresponding information between the preoperative image and the intraoperative patient image is crucial but challenging. Poor matching can result in alignment errors, and corresponding information can be transferred to the wrong position, with potentially serious clinical consequences for the patient.

Transform model estimation is another important step of medical image registration in image-guided surgery. In IGS,

Fig. 2 – The process of image registration.
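As an illustration of the four steps in Fig. 2, the following toy sketch runs the whole chain on synthetic 2D arrays (the function names, the brightness-threshold detector and the translation-only transform model are assumptions chosen for illustration, not the paper's method):

```python
import numpy as np

def detect_features(image, thresh=0.5):
    """Step 1 - feature detection: here simply bright pixels above a threshold."""
    ys, xs = np.nonzero(image > thresh)
    return np.column_stack([xs, ys]).astype(float)   # (x, y) points

def match_features(src_pts, tgt_pts):
    """Step 2 - feature matching: nearest-neighbour pairing of the point sets."""
    pairs = []
    for p in src_pts:
        d = np.linalg.norm(tgt_pts - p, axis=1)
        pairs.append((p, tgt_pts[np.argmin(d)]))
    return pairs

def estimate_transform(pairs):
    """Step 3 - transform model estimation: mean translation between the pairs."""
    return np.mean([q - p for p, q in pairs], axis=0)

def resample(image, shift):
    """Step 4 - resampling/transformation: integer-pixel shift of the source."""
    dx, dy = int(round(shift[0])), int(round(shift[1]))
    h, w = image.shape
    out = np.zeros_like(image)
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        image[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

# One bright feature, shifted by (2, 1) between source and target
src = np.zeros((5, 5)); src[1, 1] = 1.0
tgt = np.zeros((5, 5)); tgt[2, 3] = 1.0
pairs = match_features(detect_features(src), detect_features(tgt))
shift = estimate_transform(pairs)
registered = resample(src, shift)
```

Each of the four toy functions corresponds to one box of Fig. 2; in clinical systems every step is far more elaborate, but the data flow is the same.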


the process of patient registration involves calculating a transformation that maps corresponding points from the preoperative data sets to the patient's physical anatomy. The transformation parameters, which minimize the distance between the two images, are estimated before surgery. Transform model estimation is performed by first locating features/point landmarks on the patient's intraoperative images, then matching them to the patient's preoperative images, and finally calculating the transformation relating the two point sets [17].

The transformation depends on the situation; it can range from a simple rigid transformation (translation and rotation) to a non-rigid transformation (affine, elastic and fluid). A rigid transformation is a suitable choice when the deformation in intra-subject registration is minimal. However, in the case of complex deformation in preoperative and intraoperative imaging, high-dimensional transformation methods are required. Some of the popular transformation methods for the registration of preoperative and intraoperative images include splines, finite element models (FEM) and optical-flow-based methods [18]. The selection of an appropriate transformation method is crucial because the deformation and appearance change in the resultant image after registration greatly depend on it [19]. For accurate and reliable estimation of the deformation in images, a priori known information about the image distortions, degradation and errors is essential for the chosen transformation model [20]. If the required information is unavailable, the model should be flexible and general enough to cope with all types of possible degradation problems.

Finally, the process of image resampling is performed, which geometrically transforms the coordinates of the source image into the target image using a mapping function [21,22]. In IGS, several types of images of a patient are obtained, i.e. preoperatively and intraoperatively, with different resolutions and coordinate systems. The process of resampling brings all of them into a common coordinate frame. In registration, resampling uses a transformation function, which extracts and interpolates the coordinates of pixel locations in the source image (e.g. preoperative) to the geometry of the target image (e.g. intraoperative). The transformation is performed on the basis of computed pixel values in each image region. These pixel values are computed with proper interpolation techniques, and the transformation of each pixel value is performed from the source image to the target image according to the estimated mapping function. Thus, the information in both images is properly mapped and a more informative registered image is generated.

2.1. Classification of medical image registration

The existing medical image registration methods are classified into feature-based, intensity-based, segmentation-based and fluoroscopy-based, as shown in Fig. 3. The details are given in the sub-sections below.

Fig. 3 – Classification of medical image registration.

A. Features/landmark based registration

Image features such as landmark points, lines, edges and curves are an abstract representation of an image and show its behavior. Image features are extracted from the raw pixel values of the image because they are more robust and more easily processed than raw pixel values [23]. Different types of cues such as color, shape and texture are used to represent image features. Images contain local and global features, where global features cover a large portion of the image while local features focus on a specific portion. Local and global features are extracted with different types of techniques for the analysis and registration of medical images.

Feature-based registration consists of correlating fiducials and/or anatomical landmarks present in both the preoperative image and intraoperatively on the patient's organ. It is the method of choice in current commercial neurosurgery navigation systems. The process of registration is performed by the transformation of corresponding features (e.g. blood vessel outlines) between the preoperative image and the intraoperative patient image [24]. The transformation is performed on the basis of the best similarity measures between the preoperative and intraoperative image data, i.e. 2D-3D and 3D-2D images. Feature-based registration approaches are computationally efficient because the transformation is based on the analytical values of geometric point landmarks. Furthermore, these approaches also show high robustness to illumination changes and are better suited for large displacements. However, in IGS and real-time applications, the available feature-based registration methods cannot properly extract specific features such as blood vessel outlines from clinical images, i.e. in 3D-2D and 2D-3D registration. The extraction and matching of corresponding features in the preprocessing step and the manual and semi-automatic specification of landmarks make these approaches less accurate. Images with large homogeneous areas and appearance variations are mostly registered with feature/landmark-based methods. Fig. 4 shows the landmark-based registration and fusion of retinal images with large homogeneous areas and appearance variations. As can be seen from Fig. 4, the landmarks in the source image are mapped to their corresponding landmarks in the target image. After landmark matching and transformation, the registered and fused image is shown on the right side of Fig. 4.

The three most popular methods for the implementation of feature-based registration are surface-based, point-based and
curve-based registration [4].

Fig. 4 – Landmark-based registration and fusion of retinal images.

In surface-based registration, the surfaces (structural information) of one image (the source image) are matched with the corresponding surfaces of another image (the target image) and are transformed accordingly. In this type of registration, a dense set of corresponding points between two surfaces is determined. Properly finding such corresponding points is usually challenging because the surface may undergo large deformations, and sometimes there might be missing data, such as unexpected holes and different boundary locations, in the surface [25]. Fig. 5 shows the deformable surface-based registration and fusion of an endoscopic 3D movie clip of the pharyngeal surface (a) and a 3D CT image (b) of the same region of a patient having head and neck cancer. The surface-based registration of (a) and (b) will permit fusion of the endoscopically available information about the size of the tumor on the pharyngeal surface with the same information detected in the CT image. In Fig. 5(a) and (b), a large deformation between the two surfaces can be seen, which is caused by the swallowing process and posture change of the patient. Due to the limitations of the endoscopic procedure, a part of the pharyngeal anatomy is visually inaccessible to the camera. The images in Fig. 5(c) and (d) are obtained by the color-coded alignment of a CT surface and a real reconstruction. The reconstructed surfaces in the registered images are only partial surfaces with respect to the CT surface and the pharyngeal surface. Moreover, many holes are visible in the surfaces of the registered images, which shows that surface-based registration is not suitable for images between modalities with large deformations.

Rigid and deformable methods are available for the proper implementation of surface-based medical image registration [4]. Rigid methods for surface-based registration extract the same anatomical structure surfaces from the source and target images. These image surfaces are used as input for the registration process. In the registration process, the same image surfaces in both images are detected and iteratively transformed. This transformation is continuously performed until the closest fit between the two equivalent surfaces is found. Rigid surface-based registration methods are widely used in medical diagnostics but are mostly prone to errors for convoluted surfaces. Deformable methods for surface-based medical image registration extract surfaces from one image (i.e. the source image) and elastically deform them to best fit another image (i.e. the target image). The deformable surfaces are also called curves, and these curves are implemented as snakes or active contours. Deformable methods for surface-based medical image registration are successfully applied to inter-subject and atlas registration. However, the main drawback is the requirement of a good initial pre-registration for proper convergence.

Several types of registration algorithms are available which extract the corresponding structure surfaces by various segmentation techniques. These corresponding structure surfaces are used as unique features in the registration process. Among the available algorithms, head-and-hat is one of the most popular methods for the registration of rigid-body multimodal images [26]. The head-and-hat algorithm determines two surfaces of the same region in the source and target images. The first surface is obtained from the higher-resolution image while the second surface is extracted from the lower-resolution image. The first surface is called the "head" and is represented as a stack of disks. The second surface is

Fig. 5 – Deformable surface-based registration and fusion of the pharynx images of a patient with head and neck cancer: (a) a segmented CT image; (b) an endoscopic video reconstruction; (c, d) color-coded correspondences between a CT surface and a real reconstruction. Registration of these images will permit fusion of the endoscopically available information about the tumor extent on the pharyngeal surface with the tumor information seen in the CT, thereby improving the radiation plan [25].
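The iterative "transform until closest fit" loop described above for rigid surface-based registration can be sketched in a translation-only toy form (illustrative only; the point sets, the convergence threshold and the restriction to translation are assumptions):

```python
import numpy as np

def closest_fit_translation(src_pts, tgt_pts, iters=50, tol=1e-9):
    """Iteratively move the source surface points toward the target surface:
    pair each source point with its current closest target point, translate by
    the mean residual, and repeat until the closest fit is reached."""
    pts = src_pts.copy()
    for _ in range(iters):
        # pair each moved source point with its nearest target point
        idx = [int(np.argmin(np.linalg.norm(tgt_pts - p, axis=1))) for p in pts]
        step = (tgt_pts[idx] - pts).mean(axis=0)
        if np.linalg.norm(step) < tol:   # converged: closest fit found
            break
        pts = pts + step
    return pts

# Toy example: a square point set offset from its target copy
tgt = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
src = tgt + np.array([0.3, -0.2])
aligned = closest_fit_translation(src, tgt)
```

Full rigid methods also estimate rotation at each step, and re-pairing the points every iteration is exactly what makes such loops sensitive to convoluted surfaces and poor initialization, as noted above.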
called the "hat" and is represented as a list of unconnected 3D points. In this algorithm, an iterative transformation is performed on the head surface with respect to the hat surface. This process is repeated until the closest fit of the hat onto the head is found. The computational cost of the head-and-hat algorithm is low and its segmentation task is easy; however, it is prone to errors for convoluted surfaces. The iterative closest point (ICP) algorithm is another popular method for rigid-body surface-based image registration [27]. This method is used with several types of geometrical primitives such as point sets, line sets, curve sets and surfaces. The transformation and distance between a point set and a surface are calculated iteratively until the metric converges to a minimum value.

Point-based registration is another popular method for the registration of corresponding anatomical point landmarks in the pre-operative data sets and the patient's physical anatomy. Point-based registration involves determining the coordinates of corresponding points in the preoperative images and the intraoperative anatomy, and computing the geometrical transformation that best matches these points [28].

Anatomical point landmarks are fiducial markers, either intrinsic or extrinsic, attached to the patient's body. Extrinsic markers are used as reference points for registration, while intrinsic markers are the anatomical landmarks or pixel values in the image. Point-based registration methods are broadly adopted in commercial navigation systems and are mostly used in existing IGS systems. This method properly aligns preoperative image data (MRI, CT images) with the intraoperative physical anatomy [29]. An optimal match is often achieved between the features of interest (point landmarks) of the preoperative image data and the intraoperative physical anatomy. The popularity and common use of the point-based method are due to its well-established intraoperative workflow. Moreover, in this type of registration method, registration error properties such as fiducial localization error (FLE), fiducial registration error (FRE) and target registration error (TRE) have been extensively analyzed [30].

Despite its success, physical contact with multiple point landmarks over the patient's physical anatomy using a navigated instrument is an issue. As a result, the surgeon often needs to create a bigger exposure than required for minimally invasive surgery. Moreover, the extreme care needed in handling the tracker reference body and the single-time conduction of the procedure at the beginning of image-guided surgery are some other issues in the point-based registration method.

Fig. 6 [31] shows preoperative image to intraoperative patient anatomy registration using point landmark (fiducial marker) methods in a typical IGS system. The registration process begins with the determination of points of interest (fiducial markers) in the preoperative image and the intraoperative physical anatomy of the patient with a tracked pointer, once the patient is immobilized on the operating table, as shown in the figure. In the next step, the preoperative image is converted into the coordinates of the physical anatomy of the patient with a spatial transformation mechanism. In the last step, the surgeon points at the physical anatomy of the patient using a tracked pointer and sees the corresponding location in the images on the computer screen.

Feature-based registration methods also involve the extraction, matching and transformation of curves in both the source and target images. Curves are also called snakes or active contours and are geometrical features which provide important information about the image structure. Curve-based methods are also called elastic methods and are mostly used for the registration of deformable tissues [9]. Curve-based registration methods perform feature matching and geometric transformation independently in several regions of the input images. Curve-based registration computes similarity measures and mutual information both locally (on a sub-image) and globally (on the whole image). Curve-based registration methods are more flexible and apply translation, rotation, scaling and shearing parameters to objects during the transformation of the source and target images. Furthermore, these methods deform slices into volumes and transform multi-modal 3D images with high accuracy. Curve-based methods are also suited for inter-subject and atlas registration [4]. However, a proper initial pre-registration and precise extraction of the start point and end point are essential to achieve successful registration. Fig. 7 shows feature-based registration of brain MRI images using points, curves and contours [32]. Registration is performed on the basis of seven point-pairs (red dots inside (a) and (b)), one contour-pair (outside the region of (a) and (b)) and four curve-pairs (blue curves inside (a) and (b)). In the pre-processing step, the difference image between the target and source images is obtained, as shown in Fig. 7(c). Fig. 7(d) is

Fig. 6 – Preoperative image to intraoperative patient anatomy registration using point landmark (fiducial marker) methods in a typical IGS system. Image (a) is the preoperative CT image obtained before intervention, while (b) shows the physical anatomy of the patient.
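Point-based (fiducial) registration of the kind shown in Fig. 6 is commonly solved as a least-squares rigid fit between the two point sets; a minimal sketch using an SVD-based (Kabsch-style) solution, together with the fiducial registration error (FRE) mentioned in the text, is given below (the synthetic fiducial coordinates are assumptions, and this is one common solution rather than necessarily the one used in [28-31]):

```python
import numpy as np

def fiducial_registration(P, Q):
    """Least-squares rigid registration of corresponding fiducial point sets
    P -> Q (N x 3 arrays) via SVD, returning rotation R, translation t and
    the fiducial registration error (FRE)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    fre = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
    return R, t, fre

# Synthetic fiducials: rotate by 30 degrees about z and translate
rng = np.random.default_rng(0)
P = rng.random((6, 3))
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([5.0, -2.0, 1.0])
Q = P @ R_true.T + t_true
R, t, fre = fiducial_registration(P, Q)
```

With noiseless, perfectly corresponding fiducials the FRE is essentially zero; with real fiducial localization error (FLE) it is nonzero, and the clinically relevant quantity is the target registration error (TRE) at the surgical target rather than the FRE at the markers.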

Fig. 7 – Feature-based registration of brain MRI images using points, curves and contours: (a) target MRI image; (b) source MRI image; (c) difference between target and source images; (d) resultant registered image; (e) difference between (a) and (d); (f) the mesh of the transformation function.
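As described in the discussion of Fig. 7, the extracted curve-pairs are modeled with cubic B-spline interpolation and then subdivided into point sets for matching; a minimal SciPy sketch of fitting and subdividing one such curve is shown below (the sample curve points are hypothetical, and the use of scipy.interpolate is an assumption, not necessarily the implementation of [32]):

```python
import numpy as np
from scipy.interpolate import splev, splprep

# Sample points along an extracted curve (hypothetical values)
cx = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
cy = np.array([0.0, 1.0, 0.0, -1.0, 0.0])

# Fit an interpolating cubic B-spline through the curve points (s=0)
tck, u = splprep([cx, cy], s=0, k=3)

# Subdivide the curve into evenly spaced parameter samples, yielding the
# point set that is then matched against the corresponding curve-pair
u_new = np.linspace(0.0, 1.0, 50)
xs, ys = splev(u_new, tck)
```

In practice the subdivision density would follow the curvature and length of each curve, as the text notes, rather than being a fixed 50 samples.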

the registered image obtained from the proper mapping of points, contours and curves in the source and target images. Fig. 7(e) is the difference between the target image and the resultant registered image, while (f) shows a grid of the transformation function. This grid is obtained by applying the registration method to a regular grid and shows the form of the deformation. The feature-based registration in Fig. 7 uses non-uniform cubic B-spline interpolation to model the extracted curve-pairs. Interpolation makes the parameter space almost the same in the extracted curve-pair. After interpolation, the extracted curve-pairs are subdivided into a set of points (according to the curvature and length of the curves) to achieve the best similarity measure between curves.

B. Intensity-based image registration

Intensity-based registration is currently the most widely used method, in which image intensity, i.e. the scalar values in the image pixels or voxels, is considered for registration. Registration methods based on image intensity directly operate on image pixel or voxel values (image gray values) without considering sparse feature landmarks [33]. Within a certain space of transformations, these methods search for maximum similarity measures between the source and target images. Intensity-based image registration methods use different matching parameters for establishing the correspondence of similar intensity values between the source and target images. These matching parameters include the sum of squared or absolute differences (SSD and SAD, respectively), cross-correlation (CCor), correlation coefficient (CCoef), mutual information (MI), normalized mutual information (NMI), normalized correlation (NC) and mean squared difference (MSD). All the mentioned matching parameters play an important role in intensity-based image registration by maximizing intensity similarity measures and by reducing the cost function. Fig. 8 shows the registration of MRI (a) and CT (b) images using their intensities [34]. The intensity values of hard tissues (bone) are higher in the CT image and lower in MRI. Therefore, the bones are more visible in the CT image and less

Fig. 8 – Intensity-based registration of MRI and CT images. The target CT image (b), which shows a brighter bone structure, is registered with the source MR image (a), which has a darker bone structure. Intensity-based registration transforms midlevel gray values into high-level values, and high-level gray values (bone) into low values. Image (c) is the remapped CT with an approximation of the MR intensity distribution.
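A few of the matching parameters listed above (SSD, normalized correlation, mutual information) can be computed directly from two images; a minimal NumPy sketch follows (illustrative only; the bin count and the random test images are assumptions):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: 0 when the images agree exactly."""
    return float(np.sum((a - b) ** 2))

def ncc(a, b):
    """Normalized cross-correlation: +/-1 for perfectly linearly related images."""
    a0, b0 = a - a.mean(), b - b.mean()
    return float(np.sum(a0 * b0) / (np.linalg.norm(a0) * np.linalg.norm(b0)))

def mutual_information(a, b, bins=32):
    """Mutual information from the joint intensity histogram; it stays high even
    when one modality's gray values are a nonlinear remapping of the other's,
    which is why MI suits multimodal (e.g. CT-MR) registration."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

# A random test image and a contrast-inverted copy (a crude stand-in for a
# second modality with reversed bone contrast, as in the CT vs MR example)
rng = np.random.default_rng(1)
img = rng.random((64, 64))
inverted = 1.0 - img
```

Note how the inverted image has NCC close to -1 with the original, yet its mutual information with the original remains high: the joint histogram is still sharply structured, which is the property intensity-based multimodal registration exploits.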

visible in the MR image. Intensity-based registration maps mid-level gray values into high-level values, and high-level gray values (bone) into low values. The resultant registered image is a remapped CT with an approximation of the MR intensity distribution, as shown in Fig. 8(c).

Intensity-based registration methods operate on image intensity values and the transformation is performed iteratively. At each iteration, the similarity measures between voxel intensities of the source and target images are optimized. This iterative transformation of image intensity values involves interpolation between sample points and maps both the position and the related intensity value at that particular position [35]. In feature-based methods, the delineation of feature landmarks is important and sometimes limits the accuracy of registration. Intensity-based registration methods, on the other hand, provide high accuracy by taking more image information into account. Performing retrospective registration with intensity-based methods needs a minimal amount of preprocessing or user interaction [36]. As a result, these methods are easier to automate than point-based or surface-based registration algorithms. However, the lack of human supervision in intensity-based methods may also produce inaccurate registration results. Registration of single- and multi-modal images, registration of images of the same or different dimensionality (2D–2D, 2D–3D, 3D–3D) and registration of rigid and deformable models are the application areas where intensity-based registration methods are most widely and successfully applied.

Reductive registration and full image content-based registration are the two types of intensity-based medical image registration. The moments and principal orientations method is a popular example of reductive registration which directly operates on image gray values [37]. The moments and principal orientations method is also called reduction-to-scalars/vectors registration because it directly reduces image gray levels to representative scalars/vectors. In image-guided surgery, reductive registration is used to register medical images of different modalities by mapping their corresponding volumes, points or surfaces. Speed, easy implementation and automatic behavior are the main features of reductive registration. On the other hand, reductive registration methods create difficulties for surgeons in several types of image-guided surgeries, such as marginal hypometabolic swelling in PET, and handle dissimilarities in the scanned volume and in image-to-image registration poorly. The full image content-based method uses similarity measures such as cross-correlation, image uniformity, squared intensity differences and intensity variance for transformation and for establishing correspondence between images. In image-guided surgical procedures, this method provides improved visualization in the registration of inter-subject and atlas images. However, the computational cost is high in some clinical applications, such as the registration of 3D–3D data [38]. Moreover, the full image content-based method has not yet been introduced in the registration of time-constrained applications such as intraoperative 2D–3D.

C. Segmentation-based registration

Medical image registration based on segmentation extracts corresponding features and organ surfaces from the source and target images and aligns them properly. The alignment of corresponding structures between two images is performed with either rigid or deformable models. Segmentation-based registration using rigid models is simple and widely used in clinical applications. Deformable models, on the other hand, are complex but are successfully used for organs with large deformation. Segmentation is performed prior to registration, and either identifies the boundaries of a structure or categorizes every voxel on the basis of its intensity properties. In medical image analysis, segmentation is used for the localization of pathology, quantification of tissue volumes, study of anatomical structure, computer-aided diagnosis and treatment planning, and image-guided surgery [39].

In image-guided surgery, segmentation is used for the extraction of 3D anatomical data, which is necessary for
Fig. 9 – Segmentation-based registration: a labeled image (the atlas) is registered with an input image. The labels from the atlas are then overlaid on the deformed input image so that the segmented structures of the atlas are also available in the resultant image.

planning and guiding interventions [1]. Registration methods based on image segmentation are widely used in IGS and minimally invasive intervention. In segmentation-based registration, the anatomical structures and other regions of interest in the preoperative and intraoperative images are segmented before the actual transformation [14]. Segmentation is performed on the basis of corresponding landmarks in the images. In the second step, the transformation function is repeatedly applied to the images until the corresponding landmarks are aligned. Fig. 9 shows atlas-based segmentation and registration [40]. In the figure, an atlas source image is registered with another target input image. Registration is performed on the basis of the voxel representation of both images. In the first step, a segmented atlas image is aligned with the original input image and a new deformed input image is generated. The deformed input image contains all the segmented structures labeled in the atlas image. In the next step, labels from the atlas are overlaid on the deformed input image so the segmented structures of the atlas are also available in the resultant image.

Segmentation-based registration methods are more successful than feature-based and intensity-based registration methods when the images contain little or missing information about human anatomy. These registration methods are computationally efficient and support multi-modal registration. Their accuracy is also high but depends on segmentation accuracy, because continuous splitting of the input images sometimes compromises accuracy. Furthermore, the available methods for segmentation-based registration are not fully automatic, because the segmentation step is mostly performed semi-automatically.

D. Fluoroscopy-based registration

C-arm-based fluoroscopy is an imaging technique that uses X-rays to create images of the internal organs in real time [41]. It is the most commonly used interventional system, available in all electrophysiology laboratories. During a fluoroscopic examination, a computer creates continuous images of structures on the screen so that the body part and its motion can be seen in detail. Contrast material such as barium or iodine is used to enhance the quality of the images [42]. Fluoroscopic techniques have replaced invasive open surgical procedures with minimally invasive and non-invasive image-guided procedures. With each incremental advancement in the technology, smaller vessels and more subtle contrast differences can be visualized in real time, often with a low radiation dose [43].

The fluoroscope allows physicians and surgeons to see the internal structure and function of different organs such as the heart, lungs, kidneys, bones, muscles and joints. For example, the fluoroscope is used to watch the pumping action of the heart or the motion of swallowing. Similarly, in cardiac catheterization, fluoroscopy allows the physicians to view the flow of blood through the coronary arteries in order to evaluate the presence of arterial blockages. Fluoroscopy is useful for both diagnosis and therapy in angiography, general radiology, interventional radiology and image-guided interventions (IGI). In fluoroscopy, 2D real-time high-resolution X-ray images of human anatomy are obtained through an imaging scanner intensifier called a C-arm. The 2D C-arm allows the physician to monitor progress and immediately make any corrections. However, due to the lack of 3D spatial information, it is not sufficient for the proper visualization of complex structures and their spatial relationship in
human anatomy. In other words, accurate path planning on the C-arm AP-view image is difficult. Therefore, registration of fluoroscopic images with improved preoperative imaging (high-quality 3D CT/MR images) of the relevant anatomy would provide accurate and efficient guidance in complex structures and their spatial relationship [44].

2D–3D registration is an important and commonly used technique of medical image registration. 2D–3D medical image registration is performed by the alignment of preoperative 3D data sets (e.g. CT/MR volumes) with intraoperative 2D fluoroscopic X-ray images (e.g. CBCT volumes). In 2D–3D registration, the C-arm is accurately posed on the patient anatomy and the associated surgical plan for successful image-guided navigation [45]. Fast and accurate registration of intraoperative 2D fluoroscopic images and preoperative 3D data sets is useful for intraoperative guidance and can greatly help interventionists during the surgical procedure. The available 2D–3D registration approaches provide a useful decision support system for quick localization of the target surgical site, minimum radiation exposure, reduced contrast dose and low risk of wrong-site surgery [46]. Integration of 3D CT/MR image data sets with fluoroscopic images (2D–3D registration) has potentially overcome the limitations of 2D angiography through proper visualization of complex vascular structures [47].

The alignment of preoperative 3D images and intraoperative 2D fluoroscopic live images is, however, a challenging task because finding intermodal correspondence is a nontrivial problem. Furthermore, the nonlinear nature of the underlying optimization and the estimation of the 3D CT/MR volume pose of an object from its 2D X-ray projections are also challenging problems in 2D–3D registration. 2D–3D rigid registration works precisely in the initial alignment and in rigid structures, e.g. bones, but might not be enough for the precise alignment of non-rigid structures, e.g. the heart, due to breathing and natural movement [48].

Modern fluoroscopy consists of new and innovative imaging techniques with advanced real-time 3D capabilities. Real-time 3D fluoroscopy (3D–3D) has addressed some of the shortcomings of 2-dimensional fluoroscopy and increased the accuracy of image-guided surgical procedures. This technique is used to obtain intraoperative real-time 3D images and perform automatic registration of real-time 3D CT/MR data with 3D live fluoroscopic images. Co-registering both 3D CT/MR images with the fluoroscopic image space and placing them in the same co-ordinate system helps to obtain real-time 2D and 3D pictures of a human organ. Generally, a CT/MR C-arm system (Ziehm or Siemens Healthcare) is used to register 3D CT/MR data with 3D live fluoroscopic images. Following successful image fusion, the anatomic landmarks marked in the preoperative images are overlaid on the live fluoroscopy. The accuracy of image registration is determined by measuring the distance between the overlay markers and a reference point in the image [49].

Registration of 3D MRI or CT data with 3D fluoroscopic images provides high spatial and temporal resolution along with promising accuracy and feasibility [50]. Co-registration of 3D CT/MR images and real-time 3D fluoroscopy (3D–3D registration) is a reliable, efficient and accurate technique in real-time image-guided interventions. Real-time 3D fluoroscopy (using a C-arm) could offer significant advantages in the catheterization laboratory by potentially reducing procedure time, contrast and radiation dose. This technique is successfully used for non-rigid registration and soft-tissue deformation, e.g. in structural heart disease and coronary artery interventions [49]. On the other hand, the success of this technique is relatively low in the treatment of patients with congenital heart disease.

Real-time 3D fluoroscopy guidance with cone-beam CT is a new, promising, and feasible technique for needle interventions. Because of the integration of the information of cone-beam CT and live fluoroscopy, real-time 3D fluoroscopy can be used for real-time needle intervention with a high degree of accuracy. Cone-beam CT (3D–3D registration) makes use of an intraoperative rotational scan. The intraoperative scan is done with the patient on the table in the position that is needed for the procedure. In the registration process, the operator tracks the position and movement of the C-arm, which precisely identifies the position of the region of interest in the patient. Furthermore, the pre-operative CT/MR volume is merged with the intraoperative cone-beam CT based on anatomical landmarks visible in both 3D scans. Real-time 3D fluoroscopy guidance with cone-beam CT is shown in Fig. 10 [51]. In the figure, the cone-beam computed tomography (CT; gray) is superimposed on the CT angiography reconstruction (red). The matching of bony structures and arterial calcifications is clearly visible in the figure.

Fig. 10 – 3D–3D registration in axial (a) and coronal (b) planes. The cone-beam computed tomography (CT; gray) is
superimposed on the CT angiography reconstruction (red). Note the matching of bony structures and arterial calcifications
[51].
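The landmark-based fusion just described — merging a preoperative CT/MR volume with the intraoperative cone-beam CT through anatomical landmarks visible in both scans — reduces to a paired-point rigid registration problem. The sketch below solves it with the standard SVD-based (Kabsch/Horn) least-squares solution in NumPy; the landmark coordinates, the function name `rigid_register` and the 10° test rotation are invented for illustration and are not taken from the cited systems.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding 3D landmark positions, e.g.
    points marked in a preoperative CT and relocated in an intraoperative
    cone-beam CT. Solves min sum ||R @ src_i + t - dst_i||^2 via SVD.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical landmark positions (mm) in the preoperative image ...
preop = np.array([[10., 0., 0.], [0., 15., 5.], [-5., 5., 20.], [8., -7., 3.]])
# ... and the same anatomy located in the intraoperative scan
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.],
                   [np.sin(angle),  np.cos(angle), 0.],
                   [0., 0., 1.]])
intraop = preop @ R_true.T + np.array([2., -3., 1.])

R, t = rigid_register(preop, intraop)
# Registration accuracy as the mean distance between mapped and actual
# landmarks (cf. the overlay-marker-to-reference distance in the text)
residual = np.linalg.norm(preop @ R.T + t - intraop, axis=1).mean()
print(round(residual, 6))  # ~0 for these noise-free landmarks
```

With noisy clinical landmarks the residual would be non-zero, and a clinical system would add outlier handling and independent validation before relying on the transform.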
Real-time 3D fluoroscopy is currently a key technology in image-guided radiation therapy, radiosurgery, minimally invasive surgery, endoscopy, and interventional radiology. In endoscopy, 3D virtual images of human organs and vessels are generated from preoperative 3D CT/MR images and registered to real-time live endoscopic images. This type of registration provides an augmented-reality environment, which shows anatomical structures that are hidden from direct view by the currently exposed tissues. Similarly, in interventional radiology, real-time 3D fluoroscopic registration allows 3D visualization of tools, like catheters and needles, which can greatly improve guidance. In image-guided radiation therapy, real-time 3D fluoroscopy allows registration of preoperative CT data and intra-interventional images. This allows precise patient positioning, which is of utmost importance for exact dose delivery to the target and for avoiding irradiation of healthy critical tissue [52]. The complexities and challenges of 3D fluoroscopy include: accurate navigation in supra-aortic vessels and visceral branches, misregistration of preoperative and intraoperative images, respiration-related and cardiac cycle-related vessel displacement, vessel elongation, and displacement by stiff devices and patient movement.

Fig. 11 – Issues and challenges in medical image registration.
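The patient-positioning step described for image-guided radiation therapy is commonly driven by comparing projections simulated from the preoperative CT with acquired 2D images. As a toy illustration only — assuming a parallel-beam geometry, a synthetic volume, and a single integer translation parameter, none of which reflect a clinical system — an intensity-based 2D–3D search can be sketched as:

```python
import numpy as np

def drr(volume, shift=0):
    """Parallel-beam digitally reconstructed radiograph: integrate the
    volume along axis 0 after an integer in-plane shift (a stand-in for
    one rigid pose parameter)."""
    return np.roll(volume, shift, axis=1).sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two 2D projections."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

rng = np.random.default_rng(0)
volume = rng.random((16, 32, 32))        # synthetic "CT" volume
volume[:, 10:20, 12:22] += 5.0           # a bright, bone-like block

target = drr(volume, shift=3)            # simulated "intraoperative" image

# Exhaustive 1-parameter search: keep the pose whose simulated
# projection best matches the target projection.
best = max(range(-5, 6), key=lambda s: ncc(drr(volume, s), target))
print(best)  # 3
```

Real systems estimate all six rigid parameters with perspective (cone-beam) projection models and far more robust optimizers, but the structure — render, compare, update — is the same.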
3. Issues and challenges

The development of highly sophisticated data scanning devices and advancements in imaging techniques raise more challenges in the area of medical image registration. The main challenge is the development of more accurate and efficient registration methods within clinically acceptable time-frames [53]. Moreover, real-time registration of preoperative and intraoperative images is difficult to achieve due to differences in image modality and dimensionality. This section investigates issues and challenges in medical image registration and presents different solutions provided by researchers. In order to be an effective instrument for clinical practice, registration algorithms must be computationally efficient, accurate and, most importantly, robust to the multiple biases affecting medical images. We have investigated several types of issues and challenges in medical image registration (see Fig. 11). These issues and challenges are further discussed in the subsections below.

3.1. Efficiency, accuracy and robustness

An efficient, accurate and robust registration between preoperative images and intraoperative patient anatomy is critical for successful and effective image-guided surgery. Efficient, accurate and robust registration of corresponding information from different types of images provides a basis for diagnostic and medical decision-making, treatment monitoring, and healthcare support [4]. A registration method cannot be accepted as a clinical tool for patient healthcare and management until it has been proved efficient, accurate and robust. Computational efficiency (performance), accuracy in the alignment of images, and robustness against the multiple biases affecting medical images are the three main issues in non-rigid registration [54]. The natural behavior of a medical image is not consistent due to the effects of noise, blur and organ movement. Therefore, highly robust and consistent registration is required to manage small variations between the source and target images during IGS. Similarly, without high accuracy in the medical image registration method, it is not possible to obtain successful results [14]. Accuracy is always affected by the introduction of errors into the medical images during the registration process. Similarly, robustness is greatly affected by intensity variation and missing data in the input images [54].

The performance, robustness and accuracy of medical image registration methods depend on several parameters, including modality, effects on image contents, similarity measures, transformation, optimization and implementation mechanism [55]. These complex parameters are interdependent and it is difficult to assess the effect of each one on the registration method. However, an initial assessment, up to some level, of the influence of these parameters is important prior to registration. Registration methods based on image features are more efficient than intensity-based methods but have lower accuracy and robustness.

3.2. Similarity measures

The similarity measure is an important criterion for the evaluation of medical images during the registration process. In other words, it is the criterion used to evaluate how similar two or more images of the same organ are. In the registration process, similarity measures statistically evaluate and relate the source/preoperative and target/intraoperative images during IGS [56]. Various similarity measures such as the sum of squared differences, mutual information, the correlation coefficient and joint entropy
are used to evaluate voxel intensity differences between source and target images. Among the various similarity measures, the sum of squared differences (SSD) [57] is the simplest one, which evaluates the differences between the transformed source image and the fixed target image as

SSD = \sum_{x_k \in f_T} \left( f_S(g(x_k)) - f_T(x_k) \right)^2    (3)

where fS is the transformed source image, fT is the target image and xk runs over the pixels that belong to the overlap of the source image and the transformed target image. The transformation function g( ) is applied to the source image, and the transformed image is denoted by fS(g(xk)). Similarity measures use measurements of absolute scale and registration position during the process. For absolute scale measurements, the correlation coefficient produces values ranging between +1 and −1. These values indicate how well the two images (source and target) are related. The similarity between two images is also evaluated on the basis of registration position, by a maximum or minimum value at the registration location. This is done by the correlation function and the sum of the absolute value of the difference between the two images.

3.2.1. Mutual information
Mutual information (MI) is an intensity-based similarity measure, which automatically estimates the similarity in multi-modal images [58]. In general, mutual information shows the statistical relationship between two variables and measures the amount of information that one variable contains about the other. In image-guided surgical intervention, MI is a widely used similarity measure which estimates the corresponding relationship between the preoperative image and the intraoperative patient anatomy. During the registration process, MI is maximized to achieve effective matching of intensity values between the two images. In the process, when MI reaches a maximum level, it finds the most complex corresponding regions between the images [59]. MI as a similarity measure is successfully used in image-guided surgery because it is robust to outliers and efficiently calculated.

In high-volume multi-modal images the corresponding points vary greatly, which results in differences in intensities. These differences require the estimation of a joint histogram, which in turn increases the computation time of registration. Moreover, in the registration of multimodal images, local intensity variations also degrade the performance of mutual information because the joint histogram computation is adversely affected. Another issue related to the estimation of similarity measures in multimodal image registration is the exclusion of spatial and geometrical information about the voxel. Like intensity information, the estimation of spatial and geometrical information is also important because it may provide additional cues about the optimal registration.

There exists a lot of work on the integration of spatial information while estimating mutual information (MI) as a similarity measure. Pluim et al. [60] described an approach which combines spatial information by multiplying the MI with an external local gradient. Pluim et al. further incorporated both gradient magnitude and orientation for the calculation of MI. Rueckert et al. [61] proposed a framework which considered the six nearest neighbors of each voxel to calculate MI for the improvement of robustness and accuracy in non-rigid registration. Rueckert et al.'s framework applied second-order MI to the problem of 2D registration by using a 4D joint histogram. Computationally, the proposed approach is not suitable to register 3D images because a large number of samples is required to compute a high-dimensional histogram.

Russakoff et al. [59] extended mutual information as a similarity measure to incorporate spatial information by considering regional MI on both corresponding pixels and their neighbors. Although the method proposed by Russakoff et al. efficiently takes regions of corresponding pixels into account, it may not be reliable when the number of samples is small. The main reason is the use of a high-dimensional histogram to perform registration. Yi et al. [62] incorporated spatial relationships into the registration by the addition of global normalized mutual information and a local matching statistic. Loeckx et al. [63] proposed conditional mutual information (cMI) as a similarity measure for multi-modal non-rigid image registration. The proposed method integrates both intensity and spatial dimensions to express the location of the joint intensity pair. The Loeckx et al. approach overcomes several problems inherent to the use of global MI, as shown on artificial and clinical images, but its main drawback is increased computation time compared to traditional MI measures. Another approach, presented by Zhuang et al. [64], uses spatially encoded mutual information in the computation of a spatial variable and its associated entropy measures. In this method, spatial variable information is combined into the computation of the joint histogram and a hierarchical weighting scheme is used to regularize the locality of the weighting function. Similarly, Myronenko et al. [65] also developed a new similarity measure for image registration, which accounts for complex spatially-varying intensity distortions and non-stationarities of the images. The method proposed by Myronenko et al. is simple and fast but only applicable to mono-modal registration.

The recent work proposed by Woo et al. [66] addresses the problems of local intensity variations and missing spatial and geometric information about the voxel. In order to achieve better MI, Woo et al. incorporated spatial and geometric information via a 3D Harris operator which decomposes the image into three disjoint and geometrically distinct regions. Computationally, the proposed approach may not be suitable when applied to large population studies and routine clinical practice. The reason is its implementation on CPU instead of on GPU or with the use of parallel computing.

3.2.2. Correlation coefficient
The correlation coefficient is another similarity measure for medical image registration. It symmetrically measures the linear dependence between the image intensities of corresponding voxels in the two images. In order to determine how strongly the pixels of two images are related, the correlation coefficient uses a value that can range between −1 and +1. If the correlation coefficient value is in the negative range, then the relationship between the pixels is negatively correlated, i.e. as one value increases, the other decreases. On the other hand, if the correlation coefficient value falls in the positive range, the
relationship between the pixels is positively correlated, i.e. both values increase or decrease together. This means that the greater the absolute value of a correlation coefficient, the stronger the linear relationship between the two images. Pearson originally developed the mathematical formula for the correlation coefficient, which estimates the degree of relationship between two quantities [67]. For monochrome images, the formula is defined as

r = \frac{\sum_i (x_i - x_m)(y_i - y_m)}{\sqrt{\sum_i (x_i - x_m)^2} \, \sqrt{\sum_i (y_i - y_m)^2}}    (4)

where xi and yi are the intensity values of the ith pixel in the source and target image respectively, while xm and ym are the mean intensity values of the said images. The value of r shows the level of similarity between the two images, i.e. in the case of absolute similarity the value will be 1; if they are completely unrelated the value will be 0.

The correlation coefficient accurately and efficiently evaluates the accuracy of mono-modal medical image registration. For the registration of multi-modal images, however, the correlation coefficient is not a favorable similarity measure because of poor statistical and computational efficiency [68]. In image-guided surgery and radiotherapy, the available images mostly belong to the same type of modality. Therefore, the correlation coefficient as a similarity measure in these applications is a useful choice for clinicians. Registration of medical images with the correlation coefficient as the similarity metric provides several advantages, including easy implementation, no need to estimate probability densities at every iteration, and insensitivity to geometric distortion, intensity inhomogeneity and missing data. On the other hand, the correlation coefficient is greatly affected by outliers, which consequently degrade registration performance. Moreover, local extrema and large errors in registration also affect the performance of the correlation coefficient. In order to avoid such problems in registration, appropriate techniques are required for sampling and visual inspection.

3.2.3. Joint entropy
Combining images with misaligned structures results in an image with duplicated information. The basic purpose of registration is to reduce the duplicated information and make the result simpler and more informative. Registration uses several types of metrics for this kind of information measure in multiple images. Joint entropy is a commonly used information measure in digital image processing [5]. The measurement of uncertainty in both the joint distribution and the conditional distribution of a pair of random variables is performed with joint entropy. The relative transformation of the source image to the target image always occurs where the joint entropy is minimum [69]. During transformation, the volume of overlap between the source image and the target image also changes as they are transformed relative to one another. The relative transformation and volume overlap greatly affect the reliability of alignment and registration. A solution for this problem was provided by Collignon et al. [70] and Wells et al. [71] using mutual information (MI) as the registration metric.

In order to align a source image s with a target image t in the registration process, the two symbols at each voxel location are used for the estimation of the transformation T. The alignment of source image s with target image t results in a combined image. Joint entropy estimates the amount of information in the combined image. If s and t are totally unrelated, then the joint entropy will be the sum of the entropies of the individual images. The more similar (i.e. less independent) the images are, the lower the joint entropy compared to the sum of the individual entropies. The joint histogram calculated from images s and t is used for the visualization of joint entropy as

H(s, t) \le H(s) + H(t)    (5)

In entropy-based image registration, only pixel intensity values are used for alignment and image histograms are used for computation [72]. The use of pixel intensity values alone as the alignment measure neglects the spatial information in the images, which may affect alignment accuracy. Similarly, entropy-based measures are more complicated than simpler measures and are therefore computationally more expensive.

3.3. Registration of multimodal images

Each modality exhibits different characteristics, i.e. CT, ultrasound (US), rotational C-arm angiography and MRI are used for anatomical imaging while PET, SPECT and functional magnetic resonance imaging (fMRI) are used for functional imaging. The appropriate matching of diverse features (functional and anatomical contents) in multimodal images is important for successful registration. However, proper matching in multimodal image registration is a much harder task. The appropriate matching of corresponding features is still an issue because images obtained from multiple modalities differ in spatial resolution. Some approaches, such as information-theoretic approaches and reduction to mono-modal registration, have been proposed to solve the problem of appropriate feature matching in multimodal image registration. Information-theoretic approaches use mutual information (MI) as the similarity metric to solve the problem of mismatching in multimodal registration. However, MI is a general similarity metric: it is neither overlap invariant nor does it assume any relationship between the image intensities. Similarly, the reduction-to-mono-modal registration approach was developed, which aims to simplify the problem and find its solution, i.e. by simulating one modality from another or by mapping multi-modal images into a common domain.

In multi-modal image registration, the association between the intensity values of related pixels is also complex and unknown. The absence of features in one image and their presence in another, and the mapping of a single intensity value in one image to multiple values in another image, are challenging issues in multi-modal image registration [73]. These issues greatly affect the proper computation of similarity measures based on intensity values in medical image registration [74].
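The similarity measures surveyed in Section 3.2 — SSD (Eq. (3)), the correlation coefficient (Eq. (4)), joint entropy (Eq. (5)) and mutual information — can all be computed directly from image intensities or from a joint histogram. The NumPy sketch below evaluates them on two small synthetic images; the 32-bin histogram and the synthetic data are illustrative choices, not recommendations for clinical use.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences, cf. Eq. (3)."""
    return float(((a - b) ** 2).sum())

def corr_coef(a, b):
    """Pearson correlation coefficient r, cf. Eq. (4)."""
    am, bm = a - a.mean(), b - b.mean()
    return float((am * bm).sum() / np.sqrt((am**2).sum() * (bm**2).sum()))

def entropies(a, b, bins=32):
    """Marginal entropies H(s), H(t), joint entropy H(s, t), and mutual
    information MI = H(s) + H(t) - H(s, t), in bits, all estimated from
    the joint histogram of the two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_joint = joint / joint.sum()
    p_a, p_b = p_joint.sum(axis=1), p_joint.sum(axis=0)
    h = lambda p: float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    H_a, H_b, H_ab = h(p_a), h(p_b), h(p_joint)
    return H_a, H_b, H_ab, H_a + H_b - H_ab

rng = np.random.default_rng(1)
src = rng.random((64, 64))
tgt = 1.0 - src   # a perfectly (inversely) related "other modality"

print(ssd(src, src))                   # 0.0 for identical images
print(round(corr_coef(src, tgt), 3))   # -1.0: exact negative correlation
H_s, H_t, H_st, mi = entropies(src, tgt)
print(H_st <= H_s + H_t)               # True, as in Eq. (5)
```

Note how the inverted image defeats SSD and has a negative correlation coefficient, yet yields high mutual information — the property that makes MI the usual choice for the multimodal registration problems discussed above.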
3.4. Detection of reliable landmarks

The reliable identification of anatomical landmarks in multi-modal (CT, MR, PET, etc.) 3D images is essential and is one of the important first steps in medical image registration. Landmarks are detected either manually or automatically [75]. Manual identification of landmarks requires medical expertise and takes more time. The available automatic methods for landmark identification are fast and can reliably detect landmarks in medical images. Automatic landmark selection methods mostly depend on machine learning approaches; therefore, the quality of the training data sets plays an important role in the reliable identification of anatomical landmarks. In computer vision it is easy to obtain large training data sets, but in the medical field creating a large database of images is challenging and requires a lot of effort and time.

3.5. Outlier rejection

The basic aim of medical image registration is to find the optimal transformation between two images by maximizing a similarity measure such as mutual information (MI), entropy or the correlation coefficient. However, mutual information is always affected by the presence of outliers (objects present in one image but not in the other) in the source and target images. In medical image registration, the presence of unpredictable outliers in preoperative and intraoperative images greatly affects mutual information [76]. Therefore, several approaches have been used for the rejection of outliers in medical image registration. The most prominent among them include the consistency test [77], intensity transformation [78], gradient-based asymmetric multi-feature MI [79], graph-based multi-feature MI [80], the joint saliency map (JSM) [76] and normalized gradients [81]. The rejection of outliers is a challenging task in medical image registration because a large number of outliers are present in image-guided surgery applications. Therefore, more effort is required to improve the robustness of the available similarity measures against outliers.

3.6. Convergence of optimization methods to local maxima

In the registration process, optimization searches for the transformation that best aligns the source and target images. Several types of optimization techniques are available which solve various problems in medical image registration; they include Powell's method, steepest/gradient descent, conjugate gradient, quasi-Newton, Gauss–Newton and stochastic gradient descent. Each method uses its own mechanism to properly extract and match the corresponding features in medical image registration. The selection of the optimization technique plays an important role in the performance of the registration process [82]. Powell's method is one of the common choices for solving optimization problems in image registration due to its proven performance and simple implementation; it reduces a multidimensional problem to a sequence of one-dimensional optimization tasks.

Sometimes registration accuracy is compromised during optimization by the presence of local maxima. Similarly, in elastic transformation, inaccurately extracted landmarks also produce registration errors in the presence of local maxima [83]. The available optimization methods avoid the problem of local maxima and improve similarity measures; however, further investigation is needed to develop advanced optimization methods for medical image registration.

3.7. Guidance to clinicians

In image-guided surgery and radiotherapy, clinicians face several problems while taking pre-operative and intra-operative measures. The main problem is the accurate mapping of contrast information in multi-modal images, i.e., when an organ is scanned multiple times with different scanners. In such a scenario, it is difficult for clinicians to know exactly the location and orientation of the patient with respect to the different imaging systems. Image registration and fusion in the treatment room provide more guidance and help to clinicians while operating on patient data. With image-to-patient registration, data are associated precisely and the treatment is given to the patient according to the pre-operative plan [35].

In a surgical guidance system, registration methods process information obtained from physical devices, and this information is processed with an algorithmic procedure to find the optimal transformation between images. Thanks to advances in medical image registration methods, most transformations are close to optimal, but they are not ideal; as a result, the chance of error, called target registration error (TRE), is high. Similarly, the role of image registration is also highly important when surgical guidance is based on preoperative images: accurate registration is required, because an inaccurate surgical guidance system is useless and dangerous to the patient's life. In typical image-guided surgery (IGS), the anatomy of the patient captured in the preoperative image is assumed to remain rigid from image acquisition to the surgical procedure. Non-rigid registration, which is used successfully for image-to-image registration, needs further research and improvement for image-to-patient registration [84]. For accurate registration and transformation of corresponding points from image to patient, further improvement is required in surgical guidance systems, especially in the case of non-rigid registration.

3.8. Relating contrasting information

Relating contrasting information in different types of medical images is a challenging task in multimodal image registration. In IGS, the patient's organ is scanned multiple times with different types of imaging modalities, which creates difficulties in identifying and fixing the patient's location and orientation with respect to the different imaging systems. Therefore, it is necessary to develop more advanced registration methods which can easily remove the differences in patient positioning and relate information from different types of images.

3.9. Parameter determination and correspondence

Parameters such as points, landmarks and curves are components of an image, and their proper determination and mapping are essential for accurate registration. Image registration algorithms determine the corresponding parameters in
both the source and target images and align them properly [85]. The correspondence between two images is either functional or structural: the former lines up the same functional regions, while the latter relates equivalent anatomical structures in the two images. Image registration algorithms which determine a high number of corresponding parameters are more flexible; however, such algorithms are slow and require more computation time. Rigid and affine registration algorithms are computationally efficient because they use fewer parameters for correspondence. On the other hand, non-rigid registration algorithms are mostly slow because they determine a large number of parameters by matching voxel intensities in the images. Moreover, the transformation in non-rigid registration algorithms is asymmetric, and there is no guarantee of mapping each landmark/point in the source image to its corresponding position in the target image.

3.10. Automatic image registration

Automatic registration of medical images aligns the commonly detected features in preoperative and intraoperative images without user interaction. Automatic registration methods are widely used in medical image processing, and several types of image-guided surgery are successfully performed with automatic medical image registration methods. The performance of automatic image registration is high because it requires less time and minimal effort from the user while aligning the subject images. Moreover, the points/landmarks in automatic registration methods are transformed globally and with high efficiency [14]. The accuracy of automatic image registration methods is also high, but it greatly depends on the precision and optimization of the algorithms. Automatic image registration is still an open problem in medical imaging; some of the challenges include the proper selection of 3D landmarks, the extraction of the same features in multi-modal images, variable/limited anatomical coverage and low contrast to noise.

3.11. Other issues and challenges

Medical image registration also faces several other important issues and challenges, which are highlighted below.

- 'Feature-based' methods require the difficult extraction of specific features (e.g. blood vessel outlines) from the 3D and 2D images before registration. This is very difficult to achieve accurately and robustly using clinical images, and it has limited the clinical adoption of such methods. An alternative, 'intensity-based' approach matches images on raw image intensity values.
- During surgical procedures, the registration between the preoperative images and the tracking system can become inaccurate due to movement of the patient, deformation of tissue, shifting of tracking equipment, and the like. Correcting an inaccurate registration traditionally requires a time-consuming interruption to the surgical procedure. The interruption may be omitted, but doing so introduces inaccuracies in image registration and may result in medical personnel being provided with incorrect information as to the location of surgical instruments such as imaging probes.
- Soft tissue deformation that occurs during image-guided surgery causes errors in registering preoperative images to the current position of the patient during the surgery. Therefore, it is desirable to estimate tissue deformation during surgery and compensate for it. One solution to this problem is to use stereo images: preoperative MRI or CT images can be registered with stereo images taken from the surface of the soft tissue during surgery.
- The alignment of functional images of low quality and the determination of functional abnormality are often difficult tasks in medical image registration. The proper identification of functional abnormality in organs is performed by registering functional images with anatomical images of the same organ [86].
- In computer-assisted surgery and radiotherapy, efficiency is always required while aligning multiple images. However, the processing time of current registration algorithms is long due to the complex imaging modalities and anatomical structures of the human body. Moreover, most current registration algorithms are also sensitive to the initial positioning of the images.
- For accurate registration, a proper understanding of the healthy and pathological states of organs before image mapping at different scales is necessary.
- Online image-to-physical-space registration and the interfacing of computational devices for the guidance of surgical operations need further research work.
- The lack of accurate correspondence of parameters in the source and target images due to asymmetric transformations is also a big issue in medical image registration.
- A registration method which is robust to geometric distortions, i.e. non-linear and local ones, aligns two images more accurately. However, registration of medical images with highly complex nonlinear and local distortions often produces inaccurate results. Moreover, the absence of knowledge of correspondences between images also affects the accuracy of registration.
- In feature-based registration, point pattern matching creates difficulties when medical images contain noise and missing data. Similarly, feature-based image registration methods are less robust in feature extraction and less accurate in feature matching.
- Image segmentation is often required in feature-based methods before registration. Manual segmentation procedures require skilled human interaction, which creates difficulties for the automation of the registration process; with automatic segmentation, errors may occur in the registration process.
- Registration methods based on extrinsic landmarks require high skill from the surgeon and often produce less accurate results due to skin movement during surgery. Similarly, registration methods based on anatomic landmarks require user interaction for the identification of landmarks.

4. Research opportunities in alleviating issues and challenges

In order to make IGS a practical reality, the issues and challenges in medical image registration must be addressed.
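As a concrete illustration of the similarity-maximization loop discussed in Sections 3.5 and 3.6, the sketch below recovers a 2-D translation by maximizing MI with SciPy's implementation of Powell's method. It is an illustrative toy (the function names and parameter choices are ours), not one of the surveyed methods:

```python
import numpy as np
from scipy import ndimage, optimize

def mutual_information(a, b, bins=32):
    """MI estimated from the joint intensity histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    p_a = p.sum(axis=1)                 # marginal of a
    p_b = p.sum(axis=0)                 # marginal of b
    nz = p > 0                          # avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(p_a, p_b)[nz])))

def register_translation(source, target, x0=(0.0, 0.0)):
    """Estimate the translation aligning `source` to `target` by
    maximizing MI with Powell's derivative-free method (Section 3.6)."""
    def cost(shift):
        moved = ndimage.shift(source, shift, order=1, mode='nearest')
        return -mutual_information(moved, target)   # minimize negative MI
    result = optimize.minimize(cost, x0=np.asarray(x0), method='Powell')
    return result.x
```

Because Powell's method is derivative-free, it pairs naturally with histogram-based MI, whose gradient is awkward to compute; but, as Section 3.6 notes, it can still stall in a local maximum if the initial misalignment exceeds the similarity measure's capture range.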
To the best of our knowledge, this paper describes the first attempt to highlight the issues and challenges in medical image registration in a comprehensive manner. We have also made an effort to provide a comprehensive survey of image registration concepts and methods and their applications in image-guided surgery (IGS). In Section 2 we described general concepts and methods for medical image registration in the field of IGS. We presented the main issues and challenges, and their possible solutions, in medical image registration in general and in image-to-patient registration in particular in Section 3. After a comprehensive analysis of medical image registration methods and the outstanding issues and challenges, this section describes some guidelines that would be helpful for the development of new, advanced registration methods. The adoption of these guidelines in clinics would greatly help surgeons in preoperative planning and intraoperative surgical navigation. These guidelines include:

1. The accurate transfer of corresponding features from the preoperative image to the intraoperative patient anatomy is a challenging task in image-guided surgery (IGS). Although the location of the virtual resection line is visible and easy to estimate in IGS, it is difficult to directly locate the tumor and relevant vessels because they are hidden underneath the organ. Advancement in preoperative imaging techniques and intraoperative navigation systems will support surgeons in precisely and directly visualizing the spatial relationship of surgical instruments to anatomical structures.
2. The occlusion of tissue or small objects in IGS creates a challenge for registration. In the occluded scenario, the regions of interest are either not present or not clearly visible in all preoperative and intraoperative images. Although registration of images with minor occlusions produces reasonable results, registering images with major occlusions is a challenging task and it is important to address this issue.
3. One of the most important questions concerning medical image registration is its use in real clinical settings. Clinical data is always affected by intensity inconsistencies such as noise, motion and intensity inhomogeneity. The currently available registration algorithms provide limited capability to cope efficiently and accurately with these issues in real clinical settings. In order to increase the use of registration in clinical practice and make it an effective instrument against the above issues, accurate, robust and computationally efficient algorithms are desired.
4. In medical images, landmarks provide anatomy-specific constraints and guide the deformation process in regions with uneven information. However, the detection and extraction of significant landmarks to perform an accurate registration remains a very challenging task.
5. In multimodal registration, features in images of the same subject obtained from different scanning devices are aligned. Due to the different scanning devices, images of the same subject show different feature characteristics, i.e. functional and anatomical. Therefore, accurate correspondence of features between the source and target images in multimodal registration remains a challenge in image-guided surgery. Several types of image registration methods based on mutual information are available which create a statistical relationship among the features in the source and target images. Although mutual information is a standard similarity measure for multimodal image registration, its performance degrades when the images contain local intensity variations. Moreover, mutual information considers only intensity information in the images and ignores spatial information. Therefore, the development of advanced techniques in which mutual information can easily cope with local intensity variations and fully consider spatial information along with intensity information will bring a great change in multimodal medical image registration.
6. The correlation coefficient is one of the important statistical similarity measures for the registration of medical images in image-guided surgery. The correlation coefficient accurately and efficiently evaluates the similarity between two images in mono-modal medical image registration. However, in multi-modal registration, the performance of the correlation coefficient is greatly affected by outliers, local extrema and large errors. An enhancement of the available correlation measures and the development of advanced techniques will solve such problems in multi-modal medical image registration.
7. Joint entropy is another similarity measure, which estimates the amount of information in two or more combined images. However, image registration based on joint entropy is complicated and computationally expensive, and it considers only pixel intensity values in the images. Considering only pixel intensity values and neglecting spatial information greatly affects alignment accuracy. Therefore, more research is needed in the area of image registration with joint entropy to overcome the above issues.
8. The alignment of functional images of low quality and the determination of functional abnormality are often difficult tasks in medical image registration. Therefore, the resolution of functional images and the accuracy of functional analysis techniques need further improvement.
9. In medical image registration, the identification of reliable landmarks is performed with either manual or automatic methods. The former require medical expertise and take more time, while the latter are fast but depend on machine learning approaches. In other words, automatic methods depend on the quality of training data sets, which are easily obtained in computer vision but require more effort and time in the medical field. Therefore, the development and availability of large databases of images in the medical field would largely solve the reliable landmark identification problem in automatic methods.
10. In medical image registration, the optimal transformation is found by maximizing the mutual information between the source and target images. In the case of pre-operative and intra-operative images, mutual information is greatly affected by the presence of unpredictable outliers. Although several types of approaches have been developed for the rejection of outliers, this is still a challenging issue in image-guided surgery. The development of new techniques for the minimization of the large number of outliers in IGS will reduce their effect on MI.
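The sensitivity of the correlation coefficient to outliers noted in guidelines 6 and 10 is easy to demonstrate numerically. The sketch below (illustrative only, not from the cited works) compares the measure on a clean, well-aligned pair and on the same pair with a simulated outlier region:

```python
import numpy as np

def correlation_coefficient(a, b):
    """Pearson correlation of two images' intensities - a standard
    mono-modal similarity measure (cf. guideline 6)."""
    a = a.ravel().astype(float) - a.mean()
    b = b.ravel().astype(float) - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
fixed = rng.normal(size=(32, 32))
moving = fixed + 0.1 * rng.normal(size=(32, 32))  # well-aligned, mild noise
corrupted = moving.copy()
corrupted[:8, :8] += 50.0                         # simulated bright outlier patch
cc_clean = correlation_coefficient(fixed, moving)
cc_outlier = correlation_coefficient(fixed, corrupted)
```

Here `cc_clean` is close to 1, while the single outlier patch collapses `cc_outlier` toward 0 even though the images are still perfectly aligned — the behavior that motivates the outlier-rejection schemes cited in Section 3.5.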
11. Most of the optimization methods in medical image registration converge to local maxima, which is not desired. Further research is needed for the development of advanced optimization methods for medical image registration which can easily avoid local maxima.
12. In a surgical guidance system, the transformation of corresponding points is usually optimal but not ideal. An ideal transformation (i.e. one that maps every point in the image space onto its correct counterpart in physical space and vice versa) is only possible if the target registration error (TRE) is very low. Therefore, further research is required to minimize TRE in image-to-physical-space registration.
13. In image-guided surgery, it is difficult to relate contrasting information in multi-modal images due to differences in the images and in patient positioning. The main reason is the identification/fixation of patient location and orientation with respect to the different imaging systems. Therefore, it is necessary to develop more advanced registration methods which can easily remove the differences in patient positioning and relate information from different types of images.
14. Parameter determination and correspondence in non-rigid registration are not as computationally efficient as in rigid registration. The efficiency of non-rigid registration is limited by the identification of a large number of parameters and by the asymmetric transformation. The computational efficiency of non-rigid registration can be improved by using symmetric transformation algorithms and by introducing techniques which use a minimum number of parameters for correspondence.
15. Despite their widespread use, accuracy and performance, automatic image registration methods are still an open problem. Some of the challenges in automatic image registration include the proper selection of 3D landmarks, the extraction of the same features in multi-modal images, variable/limited anatomical coverage and low contrast to noise. Moreover, the accuracy of automatic image registration methods also depends on the precision and optimization of the algorithms. The investigation of new optimization algorithms for automatic image registration, as well as the development of advanced schemes for 3D landmark selection, feature extraction, anatomical coverage and contrast-to-noise handling, will improve the performance of automatic image registration.

5. Conclusion

The registration problem is one of the great challenges that must be addressed in order to make image-guided surgery (IGS) a practical reality. In this paper, medical image registration, its use and importance in IGS, and its prominent issues and challenges have been discussed. We have discussed what has been done to alleviate these issues, and what needs to be done, in the form of different research opportunities and guidelines. One of the most significant findings of this study is that researchers from the medical imaging community have done a lot of work to cope with the issues and challenges in medical image registration. They have also developed solutions that have mostly resolved these issues. However, there is substantial room for improvement in image registration in clinical applications, because current methods struggle to achieve registration fast and accurate enough to satisfy clinical demand. Therefore, further research and development are needed for the advancement of image registration methods and their proper implementation in clinical applications.

References

[1] Cleary K, Peters TM. Image-guided interventions: technology review and clinical applications. Annu Rev Biomed Eng 2010;12:119–42.
[2] Lindseth F, Langø T, Selbekk T, Hansen R, Reinertsen I, Askeland C, et al. Ultrasound-based guidance and therapy. In: Gunarathne GPP, editor. Advancements and breakthroughs in ultrasound imaging. 2013.
[3] Castro Pareja CR. Real-time 3D elastic image registration. The Ohio State University; 2004.
[4] Bali RK. Clinical knowledge management: opportunities and challenges. Idea Group Pub.; 2005.
[5] Bankman I. Handbook of medical imaging: processing and analysis management. Elsevier Science; 2000.
[6] Miga MI, Clements LW, Galloway RL. Apparatus and methods of compensating for organ deformation, registration of internal structures to images, and applications of same. Google Patents; 2008.
[7] Olver PJ, Tannenbaum A. Mathematical methods in computer vision. Springer; 2003.
[8] Thompson S, Penney G, Billia M, Challacombe B, Hawkes D, Dasgupta P. Design and evaluation of an image-guidance system for robot-assisted radical prostatectomy. BJU Int 2013;111:1081–90.
[9] Alam F, Rahman SU, Khusro S, Ullah S, Khalil A. Evaluation of medical image registration techniques based on nature and domain of the transformation. J Med Imaging Radiat Sci 2016;47:178–93.
[10] Dogra A, Patterh MS. CT and MRI brain images registration for clinical applications. Cancer Sci Ther 2014;6:18–26.
[11] Alam F, Rahman SU, Khalil A, Khusro S, Sajjad M. Deformable registration methods for medical images: a review based on performance comparison. Proc Pak Acad Sci A: Phys Comput Sci 2016;53:111–30.
[12] Alam F, Rahman SU, Khalil A, Ullah S, Khusro S. Quantitative evaluation of intrinsic registration methods for medical images. Sindh Univ Res J – SURJ (Sci Ser) 2017;49(1):43–8.
[13] Alam F, Rahman SU, Ullah S, Khalil A, Uddin A. A review on extrinsic registration methods for medical images. Tech J Univ Eng Technol Taxila 2016;21:110–9.
[14] Alam F, Rahman SU. Intrinsic registration techniques for medical images: a state-of-the-art review. J Postgrad Med Inst (Peshawar, Pakistan) 2016;30.
[15] Zitova B, Flusser J. Image registration methods: a survey. Image Vis Comput 2003;21:977–1000.
[16] Gonzalez RC. Digital image processing. Pearson Education; 2009.
[17] DeLorenzo C, Papademetris X, Staib LH, Vives KP, Spencer DD, Duncan JS. Image-guided intraoperative cortical deformation recovery using game theory: application to neocortical epilepsy surgery. IEEE Trans Med Imaging 2010;29:322–38.
[18] Risholm P, Golby AJ, Wells WM. Multi-modal image registration for pre-operative planning and image guided
neurosurgical procedures. Neurosurg Clin N Am 2011;22:197–206.
[19] Csapo I, Davis B, Shi Y, Sanchez M, Styner M, Niethammer M. Longitudinal image registration with non-uniform appearance change. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2012. Springer; 2012. p. 280–8.
[20] Patel PM, Shah VM. Image registration techniques: a comprehensive survey. Int J Innov Res Dev 2014;3:68–78.
[21] El-Samie FEA, Hadhoud MM, El-Khamy SE. Image super-resolution and applications. CRC Press; 2012.
[22] Goshtasby AA. 2-D and 3-D image registration: for medical, remote sensing, and industrial applications. Wiley; 2005.
[23] Deng H. Image feature detection and matching for biological object recognition. Oregon State University; 2007.
[24] Navab N, Jannin P. Information Processing in Computer-Assisted Interventions: First International Conference, IPCAI 2010. Proceedings. Springer; 2010.
[25] Zhao Q, Pizer S, Niethammer M, Rosenman J. Geometric-feature-based spectral graph matching in pharyngeal surface registration. Medical Image Computing and Computer-Assisted Intervention – MICCAI, vol. 17. 2014. pp. 259–66.
[26] Feng DD. Biomedical information technology. Elsevier Science; 2011.
[27] Yankeelov TE, Pickens DR, Price RR. Quantitative MRI in cancer. CRC Press; 2011.
[28] Maurer C, Maciunas RJ, Fitzpatrick JM. Registration of head CT images to physical space using a weighted combination of points and surfaces [image-guided surgery]. IEEE Trans Med Imaging 1998;17:753–61.
[29] Shamir RR, Joskowicz L, Shoshan Y. Fiducial optimization for minimal target registration error in image-guided neurosurgery. IEEE Trans Med Imaging 2012;31:725–37.
[30] Otake Y, Armand M, Sadowsky O, Armiger RS, Kazanzides P, Taylor RH. An iterative framework for improving the accuracy of intraoperative intensity-based 2D/3D registration for image-guided orthopedic surgery. In: Navab N, Jannin P, editors. Information Processing in Computer-Assisted Interventions: First International Conference, IPCAI 2010. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg; 2010. p. 23–33.
[31] Lindseth F, Langø T, Selbekk T, Hansen R, Reinertsen I, Askeland C, et al. Ultrasound-based guidance and therapy. Advancements and Breakthroughs in Ultrasound Imaging. 2013. pp. 28–82.
[32] Wen P. Medical image registration based on points, contours and curves. 2008 International Conference on BioMedical Engineering and Informatics. 2008. pp. 132–6.
[33] Chou Y-Y. Transitive and symmetric nonrigid image registration. Citeseer; 2004.
[34] Meyer J. Histogram transformation for inter-modality image registration. 2007 IEEE 7th International Symposium on BioInformatics and BioEngineering. 2007. pp. 1118–23.
[35] Hajnal JV, Hill DLG. Medical image registration. CRC Press; 2001.
[36] Beutel J. Handbook of medical imaging: medical image processing and analysis. Society of Photo Optical; 2000.
[37] Wilson D, Laxminarayan S. Handbook of biomedical image analysis: vol. 3: Registration models. Kluwer Academic/Plenum Publishers; 2007.
[38] Lee M-E, Kim S-H, Seo I-H. Intensity-based registration of medical images. 2009 International Conference on Test and Measurement. 2009. pp. 239–42.
[39] Guo Y. Medical image registration and application to atlas-based segmentation. Kent State University; 2007.
[40] Erdt M, Steger S, Sakas G. Regmentation: a new view of image segmentation and registration. J Radiat Oncol Inform 2012;4:1–23.
[41] Rivera T, Uruchurtu E. Radiation monitoring in interventional cardiology: a requirement. J Phys: Conf Ser 2017;012098.
[42] Lisle DA. Imaging for students. 4th ed. CRC Press; 2012.
[43] Gingold E. Modern fluoroscopy imaging systems. Image Wisely; 2014.
[44] Sra J. Cardiac image registration. J Atrial Fibrillation 2008;1(September–November):25.
[45] El Hakimi W. Accurate 3D-reconstruction and -navigation for high-precision minimal-invasive interventions. Technische Universität; 2016.
[46] Otake Y, Schafer S, Stayman J, Zbijewski W, Kleinszig G, Graumann R, et al. Automatic localization of vertebral levels in X-ray fluoroscopy using 3D–2D registration: a tool to reduce wrong-site surgery. Phys Med Biol 2012;57:5485.
[47] Fagan T, Truong U, Jone P, Bracken J, Quaife R, Hazeem A, et al. Multimodality 3-dimensional image integration for congenital cardiac catheterization. Methodist Debakey Cardiovasc J 2014;10:68–76. This study gives an overview of how fusion imaging can be used for planning and performing interventions in patients with congenital heart disease.
[48] Rivest-Henault D, Sundar H, Cheriet M. Nonrigid 2D/3D registration of coronary artery models with live fluoroscopy for guidance of cardiac interventions. IEEE Trans Med Imaging 2012;31:1557–72.
[49] Suntharos P, Setser RM, Bradley-Skelton S, Prieto LR. Real-time three dimensional CT and MRI to guide interventions for congenital heart disease and acquired pulmonary vein stenosis. Int J Cardiovasc Imaging 2017;1–8.
[50] Narayan SA, Qureshi S. Multimodality medical image fusion: applications in congenital cardiology. Future Medicine; 2017.
[51] van den Berg JC. Update on new tools for three-dimensional navigation in endovascular procedures. AORTA J 2014;2:279.
[52] Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012;16:642–61.
[53] Alam F, Rahman SU, Hassan M, Khalil A. An investigation towards issues and challenges in medical image registration. J Postgrad Med Inst (Peshawar, Pakistan) 2017;31:224–33.
[54] Liu Y. On the real-time performance, robustness and accuracy of medical image non-rigid registration. College of William & Mary; 2011.
[55] Neri E, Baert AL, Caramella D, Bartolozzi C. Image processing in radiology: current applications. Springer; 2007.
[56] Svedlow M, McGillem C, Anuta PE. Experimental examination of similarity measures and preprocessing methods used for image registration. LARS Symposia. 1976. p. 150.
[57] Suehling M, Huber M, Soza G. Method and system for semantics driven image registration. Google Patents; 2012.
[58] Keyvanpour M-R, Alehojat S. Analytical comparison of learning based methods to increase the accuracy and robustness of registration algorithms in medical imaging. Int J Adv Sci Technol 2012;41.
[59] Russakoff DB, Tomasi C, Rohlfing T, Maurer Jr CR. Image similarity using mutual information of regions. European Conference on Computer Vision. 2004. pp. 596–607.
[60] Pluim JPW, Maintz JBA, Viergever MA. Image registration by maximization of combined mutual information and gradient information. In: Delp SL, DiGoia AM, Jaramaz B, editors. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2000: Third International
Conference. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg; 2000. pp. 452–61.
[61] Rueckert D, Clarkson MJ, Hill DLG, Hawkes DJ. Non-rigid registration using higher-order mutual information. 2000. pp. 438–47.
[62] Yi Z, Soatto S. Nonrigid registration combining global and local statistics. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). 2009. pp. 2200–7.
[63] Loeckx D, Slagmolen P, Maes F, Vandermeulen D, Suetens P. Nonrigid image registration using conditional mutual information. IEEE Trans Med Imaging 2010;29:19–29.
[64] Zhuang X, Arridge S, Hawkes DJ, Ourselin S. A nonrigid registration framework using spatially encoded mutual information and free-form deformations. IEEE Trans Med Imaging 2011;30:1819–28.
[65] Myronenko A, Song X. Image registration by minimization of residual complexity. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009). 2009. pp. 49–56.
[66] Woo J, Stone M, Prince JL. Multimodal registration via mutual information incorporating geometric and spatial context. IEEE Trans Image Process 2015;24:757–69.
[67] Kaur A, Kaur L, Gupta S. Image recognition using coefficient of correlation and structural similarity index in uncontrolled environment. Int J Comput Appl 2012;59.
[68] Kim J. Intensity based image registration using robust similarity measure and constrained optimization: applications for radiation therapy. Citeseer; 2004.
[69] Bailey DL, Townsend DW, Valk PE, Maisey MN. Positron emission tomography: basic sciences. London: Springer; 2006.
[70] Collignon A, Maes F, Delaere D, Vandermeulen D, Suetens P, Marchal G. Automated multi-modality image registration based on information theory. Information Processing in Medical Imaging. 1995. pp. 263–74.
[71] Wells WM, Viola P, Atsumi H, Nakajima S, Kikinis R. Multi-modal volume registration by maximization of mutual information. Med Image Anal 1996;1:35–51.
[72] Sabuncu MR. Entropy-based image registration. Citeseer; 2004.
[73] Kim YS, Lee JH, Ra JB. Multi-sensor image registration based on intensity and edge orientation information. Pattern Recognit 2008;41:3356–65.
[74] Liu X, Lei Z, Yu Q, Zhang X, Shang Y, Hou W. Multi-modal image matching based on local frequency information. EURASIP J Adv Signal Process 2013;2013:1–11.
[75] Riegler G, Urschler M, Rüther M, Bischof H, Stern D. Anatomical landmark detection in medical applications driven by synthetic data. 2015 IEEE International Conference on Computer Vision Workshop (ICCVW). 2015. pp. 85–9.
[76] Qin B, Gu Z, Sun X, Lv Y. Registration of images with outliers using joint saliency map. IEEE Signal Process Lett 2010;17:91–4.
[77] Auer M, Regitnig P, Holzapfel GA. An automatic nonrigid registration for stained histological sections. IEEE Trans Image Process 2005;14:475–86.
[78] Likar B, Pernuš F. A hierarchical approach to elastic registration based on mutual information. Image Vis Comput 2001;19:33–44.
[79] Tomazevic D, Likar B, Pernus F. 3-D/2-D registration by integrating 2-D information in 3-D. IEEE Trans Med Imaging 2006;25:17–27.
[80] Staring M, Van Der Heide UA, Klein S, Viergever MA, Pluim JP. Registration of cervical MRI using multifeature mutual information. IEEE Trans Med Imaging 2009;28:1412–21.
[81] Pszczolkowski S, Zafeiriou S, Ledig C, Rueckert D. A robust similarity measure for nonrigid image registration with outliers. 2014 IEEE 11th International Symposium on Biomedical Imaging (ISBI). 2014. pp. 568–71.
[82] Kosiński W, Michalak P, Gut P. Robust image registration based on mutual information measure. J Signal Inf Process 2012;3:175.
[83] Xuan Y, Jihong P. Elastic image registration using attractive and repulsive particle swarm optimization. In: Wang T-D, Li X, Chen S-H, Wang X, Abbass H, Iba H, et al., editors. Simulated Evolution and Learning: 6th International Conference, SEAL 2006. Proceedings. Berlin, Heidelberg: Springer Berlin Heidelberg; 2006. pp. 782–9.
[84] Fitzpatrick JM. The role of registration in accurate surgical guidance. Proc Inst Mech Eng Part H: J Eng Med 2010;224:607–22.
[85] Crum WR, Hartkens T, Hill D. Non-rigid image registration: theory and practice. Br J Radiol 2014;77(s2):S140–253.
[86] Lüders H, Comair YG. Epilepsy surgery. Lippincott Williams & Wilkins; 2001.