RACSIT - 17

A REVIEW PAPER ON MEDICAL IMAGE PROCESSING


Shruthishree S.H *1, Harshvardhan Tiwari 2

*1 Assistant Professor, SET, Department of Information Science and Engineering, Jain University, Bangalore, India
2 Head of the Department of Information Science and Engineering, Jothy Institution of Technology, Bangalore, India

DOI: https://doi.org/10.5281/zenodo.572290

Abstract
Biomedical image processing has experienced dramatic expansion and has become an
interdisciplinary research field attracting expertise from applied mathematics, computer science,
engineering, statistics, physics, biology and medicine. Computer-aided diagnostic processing has
already become an important part of clinical routine. Accompanied by a rush of new
high-technology development and the use of various imaging modalities, more challenges arise;
for example, how to process and analyze a significant volume of images so that high-quality
information can be produced for disease diagnosis and treatment. The principal objectives of this
paper are to provide an introduction to basic concepts and techniques for medical image
processing and to promote interest in further study and research in medical image processing.
The rapid progress of medical science and the invention of various medicines have benefited
mankind and the whole of civilization. Modern science has also been doing wonders in the
surgical field. However, the proper and correct diagnosis of disease is the primary necessity
before treatment, and the more sophisticated the bio-instruments are, the better the diagnosis that
becomes possible. Medical images play an important role in clinical diagnosis and therapy as
well as in teaching and research. Medical imaging is often thought of as a way to represent
anatomical structures of the body with the help of X-ray computed tomography and magnetic
resonance imaging, but it is often even more useful for depicting physiological function rather
than anatomy. With the growth of computer and imaging technology, medical imaging has
greatly influenced the medical field. Because the quality of medical imaging affects diagnosis,
medical image processing has become a research hotspot, and clinical applications that need to
store and retrieve images for future use require a convenient process for storing those images in
detail.

Keywords: Medical Imaging; Bioimaging; Neuroimaging; Visualization; Giga-Voxel; Tera-Voxel; Picture Archiving and Communication Systems (PACS); Content-Based Image Retrieval (CBIR); Virtual Reality (VR); Graphics Processing Unit (GPU) Programming.

Cite This Article: Shruthishree S.H, and Harshvardhan Tiwari. (2017). “A REVIEW PAPER ON
MEDICAL IMAGE PROCESSING.” International Journal of Research - Granthaalayah, 5(4)
RACSIT, 21-29. https://doi.org/10.5281/zenodo.572290.

1. Introduction

Biomedical image processing has experienced dramatic expansion and has become an
interdisciplinary research field attracting expertise from applied mathematics, computer science,
engineering, statistics, physics, biology and medicine. Computer-aided diagnostic processing has
already become an important part of clinical routine. Accompanied by a rush of new
high-technology development and the use of various imaging modalities, more challenges arise;
for example, how to process and analyze a significant volume of images so that high-quality
information can be produced for disease diagnosis and treatment. The principal objectives of this
paper are to provide an introduction to basic concepts and techniques for medical image
processing and to promote interest in further study and research in medical image processing.
Recent advances in biomedical signal processing and image processing have frequently been
reviewed [21, 35, 36]. Usually, such review articles are driven by classifying the methods that
are used for processing pixel and voxel data, e.g., image segmentation, or their applications in
diagnostics, treatment planning and follow up studies. In contrast, this paper focuses on
processing large data volumes of medical images and the related challenges. In recent years, the
amount of medical image data has grown from kilobytes to terabytes. This is mainly due to
improvements in medical image acquisition systems with increasing pixel resolution and faster
reconstruction processing. For example, the SkyScan 2011 X-ray nano-tomograph has a
resolution of 200 nm per pixel, and high-resolution micro computed tomography (CT)
reconstructs images with 8,000 × 8,000 pixels per slice at 0.7 µm isotropic detail detectability.
This results in 64 megabytes (MB) per slice. New CT and magnetic resonance imaging (MRI)
systems can scale both the image resolution and the reconstruction time. Whole-human-body
scans at this resolution reach several gigabytes (GB) of data. Large medical image data arises
in two different ways: first, as a huge amount of image data from thousands of images, such as in
picture archiving and communication systems (PACS), and second, as a large amount of image
data from a single data set.

2. Background

We first define terminology that is used throughout the review, and we describe important issues
in the segmentation of medical images.

Definitions
An image is a collection of measurements in two-dimensional (2-D) or three-dimensional (3-D)
space. In medical images, these measurements or ‘image intensities’ can be radiation absorption
in X-ray imaging, acoustic pressure in ultrasound, or radio frequency (RF) signal amplitude in
MRI. If a single measurement is made at each location in the image, then the image is called a
scalar image.

Medical imaging has been undergoing a revolution in the past decade with the
advent of faster, more accurate, and less invasive devices. This has driven the need for
corresponding software development, which in turn has provided a major impetus for new
algorithms in signal and image processing. Many of these algorithms are based on partial
differential equations and curvature driven flows which will be the main topics of this survey
paper. Mathematical models are the foundation of biomedical computing. Basing those models
on data extracted from images continues to be a fundamental technique for achieving scientific
progress in experimental, clinical, biomedical, and behavioral research. Today, medical images
are acquired by a range of techniques across all biological scales, which go far beyond the visible
light photographs and microscope images of the early 20th century. Modern medical images may
be considered to be geometrically arranged arrays of data samples which quantify such diverse
physical phenomena as the time variation of hemoglobin deoxygenation during neuronal
metabolism, or the diffusion of water molecules through and within tissue. The broadening scope
of imaging as a way to organize our observations of the biophysical world has led to a dramatic
increase in our ability to apply new processing techniques and to combine multiple channels of
data into sophisticated and complex mathematical models of physiological function and
dysfunction. A key research area is the formulation of biomedical engineering principles based
on rigorous mathematical foundations in order to develop general-purpose software methods that
can be integrated into complete therapy delivery systems. Such systems support the more
effective delivery of many image-guided procedures such as biopsy, minimally invasive surgery,
and radiation therapy.

Types
The two types of methods used for image processing are analog and digital image processing.
Analog, or visual, techniques of image processing can be used for hard copies such as printouts
and photographs. Image analysts use various fundamentals of interpretation while applying these
visual techniques. The processing is not confined only to the area that has to be studied; it also
depends on the knowledge of the analyst. Association is another important tool in image
processing through visual techniques. Analysts therefore apply a combination of personal
knowledge and collateral data to image processing.

Digital processing techniques help in the manipulation of digital images by using computers.
Because raw data from imaging sensors (for example, on a satellite platform) contain deficiencies,
the data have to undergo various phases of processing to remove such flaws and recover the
original information. The three general phases that all types of data have to undergo when using
digital techniques are pre-processing, enhancement and display, and information extraction.

Fundamental steps in image processing (a short code sketch of this pipeline follows the list):
1) Image acquisition: to acquire a digital image.
2) Image preprocessing: to improve the image in ways that increase the chances for success
of the other processes.
3) Image segmentation: to partition an input image into its constituent parts or objects.
4) Image representation: to convert the input data to a form suitable for computer
processing.
5) Image description: to extract features that result in some quantitative information of
interest or features that are basic for differentiating one class of objects from another.
6) Image recognition: to assign a label to an object based on the information provided by its
descriptors.
7) Image interpretation: to assign meaning to an ensemble of recognized objects.
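
As a concrete illustration of these steps, the following is a minimal sketch in Python using scikit-image; the file name "scan.png", the Gaussian/Otsu choices and the toy recognition rule are assumptions made for this example, not part of the original pipeline description.

```python
# Minimal sketch of the acquisition-to-interpretation pipeline listed above.
# Assumes scikit-image is installed and a hypothetical grayscale file "scan.png".
from skimage import io, filters, measure, morphology

image = io.imread("scan.png", as_gray=True)            # 1) image acquisition
smoothed = filters.gaussian(image, sigma=1.0)          # 2) preprocessing (denoising)
mask = smoothed > filters.threshold_otsu(smoothed)     # 3) segmentation (Otsu threshold)
mask = morphology.remove_small_objects(mask, 64)       # remove spurious small regions
labels = measure.label(mask)                           # 4) representation (labelled regions)
regions = measure.regionprops(labels)                  # 5) description (area, shape features)
for region in regions:                                 # 6)-7) toy recognition/interpretation rule
    kind = "roughly circular object" if region.eccentricity < 0.5 else "elongated object"
    print(region.label, region.area, kind)
```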

Medical image computing typically operates on uniformly sampled data with regular x-y-z
spatial spacing (images in 2D and volumes in 3D, generically referred to as images). At each
sample point, data is commonly represented in integral form such as signed and unsigned short
(16-bit), although forms from unsigned char (8-bit) to 32-bit float are not uncommon. The
particular meaning of the data at a sample point depends on the modality: for example, a CT
acquisition collects radiodensity values, while an MRI acquisition may collect T1- or T2-weighted
images. Longitudinal, time-varying acquisitions may or may not acquire images with regular
time steps. Fan-like images due to modalities such as curved-array ultrasound are also common
and require different representational and algorithmic techniques to process. Other data forms
include sheared images due to gantry tilt during acquisition; and unstructured meshes, such as
hexahedral and tetrahedral forms, which are used in advanced biomechanical analysis (e.g.,
tissue deformation, vascular transport, bone implants).
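
A minimal sketch of the uniformly sampled volume representation described above, assuming NumPy; the array size, the 16-bit sample type and the spacing values below are invented purely for illustration.

```python
# Uniformly sampled 3-D volume stored as signed 16-bit samples plus regular x-y-z spacing.
import numpy as np

volume = np.zeros((512, 512, 300), dtype=np.int16)  # e.g. CT radiodensity value per voxel
spacing_mm = (0.7, 0.7, 1.0)                        # regular sample spacing along x, y, z (mm)

def world_coordinates(index, spacing):
    """Map an integer (i, j, k) sample index to physical x-y-z coordinates."""
    return tuple(i * s for i, s in zip(index, spacing))

print(volume.dtype, volume.shape, world_coordinates((10, 20, 5), spacing_mm))
```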

Segmentation
Segmentation is the process of partitioning an image into different segments. In medical
imaging, these segments often correspond to different tissue classes, organs, pathologies, or other
biologically relevant structures. Medical image segmentation is made difficult by low contrast,
noise, and other imaging ambiguities. Although there are many computer vision techniques for
image segmentation, some have been adapted specifically for medical image computing. Below
is a sampling of techniques within this field; the implementation relies on the expertise that
clinicians can provide.

The commonly used term “biomedical image processing” means the provision of digital image
processing for biomedical sciences. In general, digital image processing covers four major areas:
1) Image formation includes all the steps from capturing the image to forming a digital
image matrix.
2) Image visualization refers to all types of manipulation of this matrix, resulting in an
optimized output of the image.
3) Image analysis includes all the steps of processing, which are used for quantitative
measurements as well as abstract interpretations of biomedical images. These steps
require a priori knowledge of the nature and content of the images, which must be
integrated into the algorithms on a high level of abstraction. Thus, the process of image

analysis is very specific, and the developed algorithms can rarely be transferred directly into
other application domains.
4) Image management sums up all techniques that provide the efficient storage,
communication, transmission, archiving, and access (retrieval) of image data. Thus, the
methods of telemedicine are also a part of the image management.

In contrast to image analysis, which is often also referred to as high-level image processing,
low-level processing denotes manual or automatic techniques that can be realized without a priori
knowledge of the specific content of images. This type of algorithm has similar effects
regardless of the content of the images. For example, histogram stretching of a radiograph
improves the contrast just as it does on any holiday photograph. Therefore, low-level processing
methods are usually available in programs for image enhancement.
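
The kind of content-independent, low-level operation described above can be sketched in a few lines; this percentile-based contrast stretch is an illustrative assumption, not the specific routine used in any particular program.

```python
# Simple histogram (contrast) stretching: maps the 2nd-98th intensity percentiles of any
# image onto the full 8-bit display range, regardless of what the image shows.
import numpy as np

def stretch_contrast(image, low_pct=2, high_pct=98):
    lo, hi = np.percentile(image, [low_pct, high_pct])
    scaled = np.clip((image.astype(float) - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return (scaled * 255).astype(np.uint8)
```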

3. Algorithms used for Image Processing at Different Stages

Image Registration
Image registration is one of the most common algorithms in medical imaging and the one with
the most GPU implementations. One reason for this is the GPU’s hardware support for linear
interpolation, which makes it possible to transform images and volumes very efficiently.
Hastreiter and Ertl (1998) were among the first to take advantage of the GPU for image
registration, mainly for its ability to perform fast interpolation in 3D. A common approach is to
let the GPU calculate a similarity measure, often mutual information (Viola and Wells, 1997;
Pluim et al., 2003; Mellor and Brady, 2005), over the images in parallel while the CPU runs a
serial optimization algorithm to find the parameters (e.g. translations and rotations) that give the
best match between the two images. The mutual information between two discrete variables a
and b is defined as

MI(a, b) = Σ_a Σ_b p(a, b) log [ p(a, b) / (p(a) p(b)) ],

where p(a, b) is the joint probability distribution of a and b, and p(a) and p(b) are the
corresponding marginal distributions.
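
A small CPU-only sketch of this similarity measure, estimating mutual information from a joint histogram with NumPy; the bin count and the histogram-based estimator are illustrative choices, and in the GPU setting described above this computation would be parallelized.

```python
# Mutual information of two same-sized grayscale images, estimated from a joint histogram.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                      # joint probability p(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)           # marginal p(a)
    p_b = p_ab.sum(axis=0, keepdims=True)           # marginal p(b)
    nz = p_ab > 0                                   # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

In a registration loop, an optimizer would repeatedly transform one image and re-evaluate such a measure until the transform parameters maximize it, as the text describes.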

1) Edge Detection Technique

Edge detection is an image processing technique for finding the boundaries of objects within
images. It works by detecting discontinuities in brightness. Edge detection is used for image
segmentation and data extraction in areas such as image processing, computer vision, and
machine vision. Common edge detection algorithms include Sobel, Canny, Prewitt, Roberts,
and fuzzy logic methods. Consider the ideal case of a bright object O on a dark background. The
physical object is represented by its projections on the image I. The characteristic function 1_O of
the object is the ideal segmentation, and since the object is contrasted against the background, the
variations of the intensity I are large on the boundary ∂O. It is therefore natural to characterize
the boundary ∂O as the locus of points where the norm of the gradient |∇I| is large. In fact, if ∂O
is piecewise smooth then |∇I| is a singular measure whose support is exactly ∂O. This is the
approach taken in the 60’s and 70’s by Roberts [81] and Sobel [91] who proposed slightly
different discrete convolution masks to approximate the gradient of digital images.
Disadvantages of these approaches are that edges are not precisely localized and may be
corrupted by noise. In Figure 2, note the thickness of the boundary of the heart ventricle as well
as the presence of “spurious edges” due to noise. Canny proposed adding a smoothing
pre-processing step (to reduce the influence of the noise) as well as a thinning post-processing
phase (to ensure that the edges are uniquely localized). See [26] for a survey and evaluation of edge detectors
using gradient techniques. A slightly different approach, initially motivated by psychophysics,
was proposed by Marr and Hildreth, where edges are defined as the zeros of ΔG_σ ∗ I, the
Laplacian of a smoothed version of the image. One can give a heuristic justification by assuming
that the edges are smooth curves; more precisely, assume that near an edge the image is of the
form I = ϕ(S/ε), where S is a smooth function with |∇S| = O(1) which vanishes on the edge, ε is a
small parameter proportional to the width of the edge, and ϕ : R → [0, 1] is a smooth increasing
function.

Figure 2: Result of two edge detectors on a heart MRI image

The most powerful of these edge-detection methods is the Canny method. The Canny method
differs from the other edge-detection methods in that it uses two different thresholds (to detect
strong and weak edges) and includes the weak edges in the output only if they are connected to
strong edges. This method is therefore less likely than the others to be fooled by noise, and more
likely to detect true weak edges.


Figure 3: Canny edge detection applied to the original image
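
A hedged sketch of the two-threshold Canny detector described above (with a Sobel gradient map for comparison), using scikit-image; the file name and threshold values are assumptions chosen for illustration.

```python
# Sobel and Canny edge detection on a grayscale slice; Canny keeps weak edges only if
# they connect to strong edges (hysteresis thresholding), after Gaussian smoothing.
from skimage import io, filters, feature

image = io.imread("heart_mri.png", as_gray=True)     # hypothetical MRI slice
sobel_edges = filters.sobel(image) > 0.1             # gradient-magnitude edges, single threshold
canny_edges = feature.canny(image,
                            sigma=2.0,               # smoothing step (reduces noise influence)
                            low_threshold=0.05,      # weak-edge threshold
                            high_threshold=0.15)     # strong-edge threshold
```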

2) Feature Extraction

Contrast limited adaptive histogram equalization (CLAHE) is a popular technique in biomedical
image processing, since it is very effective in making the usually interesting salient parts more
visible. The image is split into disjoint regions, and in each region local histogram equalization is
applied. Then, the boundaries between the regions are eliminated with a bilinear interpolation.
The main objective of this method is to define a point transformation within a fairly large local
window, with the assumption that the intensity values within it are a statistical representation of
the local distribution of intensity values of the whole image. The local window is assumed to be
unaffected by the gradual variation of intensity between the image center and edges. The point
transformation distribution is localized around the mean intensity of the window and it covers the
entire intensity range of the image. Consider a running sub-image W of N × N pixels centered on
a pixel p(i, j); the image is filtered to produce another sub-image p_n of N × N pixels according
to the equation below:

p_n(i, j) = 255 · [Φ_w(p(i, j)) − Φ_w(Min)] / [Φ_w(Max) − Φ_w(Min)],  where  Φ_w(p) = [1 + exp((μ_w − p) / σ_w)]⁻¹

Here Max and Min are the maximum and minimum intensity values in the whole image, while μ_w
and σ_w indicate the local window mean and standard deviation, defined as
μ_w = (1/N²) Σ_{(k,l)∈W} p(k, l) and σ_w = √[ (1/N²) Σ_{(k,l)∈W} (p(k, l) − μ_w)² ].

As a result of this adaptive histogram equalization, dark areas of the input image that were poorly
illuminated become brighter in the output image, while regions that were highly illuminated
remain the same or are reduced, so that the overall illumination of the image becomes more uniform.
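
A brief sketch of CLAHE with scikit-image; the tile size, clip limit and file name below are illustrative assumptions rather than values from the text.

```python
# CLAHE: local histogram equalization on tiles with a clip limit, blended across tile
# boundaries, which brightens poorly illuminated regions without over-amplifying noise.
from skimage import io, exposure, img_as_float

image = img_as_float(io.imread("fundus.png", as_gray=True))  # hypothetical retinal image
enhanced = exposure.equalize_adapthist(image,
                                       kernel_size=64,   # size of the local regions
                                       clip_limit=0.02)  # limits contrast amplification
```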

3) Super-Pixel Classification using Slic Algorithm

This paper uses the simple linear iterative clustering (SLIC) algorithm to aggregate nearby pixels
into superpixels in retinal fundus images. Compared with other superpixel methods, SLIC is fast,
is memory efficient and has excellent boundary adherence. The number of desired superpixels is
the main parameter of SLIC, and this single parameter is the main reason the algorithm is simple
to use. SLIC uses a k-means clustering approach to generate superpixels. Compared with
conventional methods, it is faster and more memory efficient, improves segmentation
performance, and is straightforward to extend to supervoxel generation. SLIC is simple to use
and understand. By default, the only parameter of the algorithm is k, the desired number of
approximately equally sized superpixels. To produce roughly equally sized superpixels in an
image of N pixels, the grid interval is S = √(N/k). For color images in the CIELAB color space,
the clustering procedure begins with an initialization step where k initial cluster centers
C_i = [l_i, a_i, b_i, x_i, y_i]^T are sampled on a regular grid spaced S pixels apart. The centers
are then moved to seed locations corresponding to the lowest gradient position in a small
neighborhood. This is done to avoid centering a superpixel on an edge, and to reduce the chance
of seeding a superpixel with a noisy
pixel. Next, in the assignment step, each pixel i is associated with the nearest cluster center
whose search region overlaps its location. This is the key to speeding up the algorithm, because
limiting the size of the search region significantly reduces the number of distance calculations,
resulting in a significant speed advantage over conventional k-means clustering, where each
pixel must be compared with all cluster centers. This is only possible through the introduction of
a distance measure D, which determines the nearest cluster center for each pixel.
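
A short sketch of SLIC superpixel generation with scikit-image; the number of segments and the compactness weight are illustrative assumptions.

```python
# SLIC: k-means clustering of pixels in the combined CIELAB color + (x, y) space,
# producing roughly equally sized superpixels with good boundary adherence.
from skimage import io, segmentation, color

rgb = io.imread("fundus_rgb.png")                       # hypothetical color fundus image
labels = segmentation.slic(rgb,
                           n_segments=400,              # k, the desired number of superpixels
                           compactness=10.0,            # weights spatial vs. color distance in D
                           start_label=1)
averaged = color.label2rgb(labels, rgb, kind='avg')     # visualize superpixels by mean color
```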

4. Circular Hough Transformation

In order to detect small circular spots in the image, we implemented an approach called the
circular Hough transformation (CHT). Circles are detected in the images using the circular
Hough transformation; with this technique, a set of circular objects can be extracted from the
image. The circular shape of the optic disk is computed using the circle equation given below:

(x − a)² + (y − b)² = r²

where r represents the radius of the circle and (a, b) represents the coordinates of the center of
the circular object. To find a circular disk in the image, it is necessary to collect votes in the
three-dimensional parameter space (a, b, r). The CHT transforms the image coordinates into a set
of accumulated votes in this parameter space. Votes are calculated and accumulated for every
candidate combination, and the highest-voting point is taken as the center of the circle, giving its
center coordinates (a, b) and radius r.
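
A small sketch of this voting procedure using scikit-image's circular Hough transform; the radius range and file name are assumptions chosen for illustration.

```python
# Circular Hough transform: vote in (a, b, r) space over an edge map and keep the
# highest-voting circle as the detected optic disk.
import numpy as np
from skimage import io, feature, transform

image = io.imread("fundus_gray.png", as_gray=True)     # hypothetical fundus image
edges = feature.canny(image, sigma=2.0)                # edge map that casts the votes
radii = np.arange(30, 80, 2)                           # candidate radii r (pixels)
accumulator = transform.hough_circle(edges, radii)     # one vote plane per candidate radius
_, cx, cy, r = transform.hough_circle_peaks(accumulator, radii, total_num_peaks=1)
print("optic disk center (a, b) =", (cx[0], cy[0]), "radius r =", r[0])
```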

5. Conclusion

In this paper, we sketched some of the fundamental concepts of medical image processing. It is
important to emphasize that none of these problem areas has been satisfactorily solved, and all of
the algorithms we have described are open to considerable improvement. In particular,
segmentation remains a rather ad hoc procedure with the best results being obtained via
interactive programs with considerable input from the user. Nevertheless, progress has been
made in the field of automatic analysis of medical images over the last few years thanks to
improvements in hardware, acquisition methods, signal processing techniques, and of course
mathematics. Curvature driven flows have proven to be an excellent tool for a number of image
processing tasks and have definitely had a major impact on the technology base. The
mathematical challenges in medical imaging are still considerable and the necessary techniques
touch just about every major branch of mathematics. In summary, we can use all the help we can
get!

References

[1] L. Alvarez, F. Guichard, P. L. Lions, and J. M. Morel, Axiomes et équations fondamentales du
traitement d'images, C. R. Acad. Sci. Paris 315 (1992), 135–138.
[2] L. Alvarez, F. Guichard, P. L. Lions, and J. M. Morel, Axioms and fundamental equations of
image processing, Arch. Rat. Mech. Anal. 123 (1993), no. 3, 199–257.
[3] L. Alvarez, P. L. Lions, and J. M. Morel, Image selective smoothing and edge detection by
nonlinear diffusion, SIAM J. Numer. Anal. 29 (1992), 845–866.
[4] L. Alvarez and J-M. Morel, Formalization and computational aspects of image analysis, Acta
Numerica 3 (1994), 1–59.
[5] L. Ambrosio, A compactness theorem for a special class of functions of bounded variation, Boll.
Un. Mat. It. 3-B (1989), 857–881.
[6] L. Ambrosio, Lecture notes on optimal transport theory, Euro Summer School, Mathematical
Aspects of Evolving Interfaces, CIME Series of Springer Lecture Notes, Springer, July 2000.
[7] S. Ando, Consistent gradient operators, IEEE Transactions on Pattern Analysis and Machine
Intelligence 22 (2000), no. 3, 252–265.
[8] S. Angenent, S. Haker, and A. Tannenbaum, Minimizing flows for the Monge-Kantorovich
problem, SIAM J. Math. Anal. 35 (2003), no. 1, 61–97 (electronic).
