
A

Major Project Report


On
DECOMPOSITION AND RECONSTRUCTION OF
MEDICAL IMAGES USING DWT METHOD
Submitted in partial fulfillment of the requirements for the
award of
Bachelor of Technology
In
Electronics and Communication Engineering
By
B.LIKHITHA (13R01A0465)

G.MANISHA (13R01A04D3)

S.BHARGAVI (14R05A0409)

Under the esteemed guidance of


Mrs. P.Surya Kumari
Assistant Professor

Department of Electronics and Communication Engineering
CMR INSTITUTE OF TECHNOLOGY
(Permanently Affiliated to JNTU Hyderabad and Accredited by NBA New Delhi)
Kandlakoya (V), Medchal Road, Hyderabad – 501 401
2013-2017
Department of Electronics and Communication Engineering

CERTIFICATE

This is to certify that the major project report entitled ‘Decomposition and
Reconstruction of Medical Images Using DWT Method’ is the bona fide work done
and submitted by

B.LIKHITHA (13R01A0465)

G.MANISHA (13R01A04D3)

S.BHARGAVI (14R05A0409)

in partial fulfillment of the requirements for the award of Bachelor of
Technology in Electronics and Communication Engineering from Jawaharlal
Nehru Technological University, Hyderabad. The results embodied in this
dissertation have not been submitted to any other university or organization for
the award of any other degree.

Guide Head of Department Principal


Mrs. P.Surya kumari Dr. M. Gurunadha Babu Dr. M. Janga Reddy

External Examiner
DECLARATION

We hereby declare that the major project entitled


“DECOMPOSITION AND RECONSTRUCTION OF MEDICAL IMAGES
USING DWT METHOD” is carried out by us during the academic year 2016–
2017 in partial fulfillment of the requirements for the award of Bachelor of
Technology in Electronics and Communication Engineering from CMR Institute
of Technology, affiliated to Jawaharlal Nehru Technological University
Hyderabad. We have not submitted the same to any other university or
organization for the award of any other degree.

B.LIKHITHA (13R01A0465)

G.MANISHA (13R01A04D3)

S.BHARGAVI (14R05A0409)

DEPARTMENT OF ECE I CMRIT


ACKNOWLEDGEMENT

We are deeply indebted to Mrs. P. Surya Kumari, Assistant Professor,
Department of Electronics and Communication Engineering, the guiding force
behind this project; we want to thank her for giving us the opportunity to work
under her. In spite of her busy schedule in the Department, she was always
available to share with us her deep insights, wide knowledge and extensive
experience. Her advice has value lasting much beyond this project.

We would like to express our deep gratitude to Mr. Nagaraja Kumar Pateti,
Assistant Professor, Department of ECE (Project Coordinator), for providing
us the opportunity to take up this work and for his guidance in our college.

We express our respects to Dr. M. Gurunadha Babu, Head of Department,


Electronics and Communication Engineering (ECE), for encouraging us
throughout our project and for his support.

We are very thankful to Dr. M. Janga Reddy, Principal of CMR Institute of


Technology for providing us with the opportunity and facilities required to
accomplish our project.

In addition, we would like to thank all teaching and non-teaching members of


ECE Department for their generous help in various ways for the completion of
this thesis. They have been great sources of inspiration to us and we thank them
sincerely.

Finally, yet importantly, we would like to thank our parents. They were our first
teachers when we came into this world, and they taught us the value of hard work
by their own example. We also thank our friends, whose support was very
valuable in the completion of this work.

B.LIKHITHA (13R01A0465)

G.MANISHA (13R01A04D3)

S.BHARGAVI (14R05A0409)



ABSTRACT

Fusion of medical images derives useful information from medical
images whose data has important clinical significance for doctors during their
analysis. The idea behind image fusion is to improve the image content by fusing
two images, such as MRI (magnetic resonance imaging) and CT (computed
tomography) images, to provide useful and precise information to doctors for
their clinical treatment.

In this project, the Discrete Wavelet Transform (DWT) method has been
used to fuse two medical images by decomposing the functional and anatomical
images. The fused image contains both functional information and more spatial
characteristics with no colour distortion. Experimental results show that the best
fusion performance is given by the discrete wavelet transform.



INDEX
TABLE OF CONTENTS

CHAPTER 1 ....................................................................................................................................... 1
INTRODUCTION ............................................................................................................................... 1
1.1 BACKGROUND ....................................................................................................................... 1
1.2 SCOPE OF THESIS ................................................................................................................... 1
1.3 SIGNIFICANCE ........................................................................................................................ 1
1.4 ORGANIZATION OF THESIS .................................................................................................... 2
CHAPTER 2 ....................................................................................................................................... 3
INTRODUCTION TO IMAGE PROCESSING ........................................................................................ 3
2.1 IMAGE PROCESSING .............................................................................................................. 3
2.2 TYPES OF IMAGE PROCESSING .............................................................................................. 3
2.3 DIGITAL IMAGE FUNDAMENTALS .......................................................................................... 5
2.3.1 DIGITAL IMAGE REPRESENTATION ................................................................................ 5
2.3.2 COORDINATE CONVENTIONS ........................................................................................ 5
2.3.4 IMAGES AS MATRICES .................................................................................................... 6
2.3.5 RELATIONS BETWEEN PIXELS ........................................................................ 7
2.4 IMAGE TRANSFORMS IN SPATIAL DOMAIN.......................................................................... 9
2.5 HISTOGRAM EQUALIZATION .............................................................................................. 10
CHAPTER 3 ..................................................................................................................................... 13
IMAGE RESTORATION AND IMAGE SEGMENTATION .................................................................... 13
3.1 IMAGE RESTORATION ......................................................................................................... 13
3.1.1 DEGRADATION MODEL ................................................................................................. 13
3.2 IMAGE SEGMENTATION ...................................................................................................... 14
3.2.1 NON CONTEXTUAL THRESHOLDING ............................................................................. 15
3.2.2 CONTEXTUAL SEGMENTATION ..................................................................................... 17
3.3 MORPHOLOGICAL IMAGE PROCESSING .............................................................................. 18
3.4 IMAGE COMPRESSION ........................................................................................................ 22
3.4.1 MODELS ....................................................................................................................... 23



3.4.2 ERROR FREE COMPRESSION ........................................................................................ 24
CHAPTER-4..................................................................................................................................... 25
INTRODUCTION TO IMAGE FUSION .............................................................................................. 25
4.1 IMAGE FUSION .................................................................................................................... 25
4.2 IMAGE FUSION TECHNIQUES .............................................................................................. 26
4.3 TYPES OF IMAGE FUSION .................................................................................................... 29
4.4 APPLICATIONS AND USES OF IMAGE FUSION ..................................................................... 30
4.5 ADVANTAGES AND DISADVANTAGES OF IMAGE FUSION ................................................... 30
4.5.1ADVANTAGES ................................................................................................................ 30
4.5.2 DISADVANTAGES .......................................................................................................... 31
CHAPTER-5..................................................................................................................................... 32
DECOMPOSITION AND RECONSTRUCTION OF MEDICAL IMAGES ............................................... 32
5.1 DECOMPOSITION OF WAVELETS ......................................................................................... 32
5.2 MEDICAL IMAGES LIKE MRI AND CT .................................................................................... 36
5.2.1 MAGNETIC RESONANCE IMAGING .............................................................................. 36
5.2.2 COMPUTED TOMOGRAPHY(CT) ................................................................................... 37
CHAPTER-6..................................................................................................................................... 39
DISCRETE WAVELET TRANSFORM ................................................................................................. 39
6.1 INTRODUCTION TO WAVELETS............................................................................................ 39
6.2 DEFINITION OF WAVELET .................................................................................................... 40
6.3 WAVELET TRANSFORMS ...................................................................................................... 41
6.4 DISCRETE WAVELET TRANSFORM ....................................................................................... 42
6.5 APPLICATIONS OF WAVELETS .............................................................................................. 43
6.6 DWT of Images .................................................................................................................... 45
CHAPTER-7..................................................................................................................................... 47
SOFTWARE TOOLS ......................................................................................................................... 47
7.1 INTRODUCTION TO MATLAB ................................................................................. 47
7.2 MATLAB WINDOWS:............................................................................................................ 48
7.3 BASIC INSTRUCTIONS IN MATLAB ...................................................................................... 50
7.3: DIP USING MATLAB ............................................................................................................ 54
7.4 MATLAB FUNCTIONS: ......................................................................................................... 55



CHAPTER-8..................................................................................................................................... 67
FUSION OF MEDICAL IMAGES IN MATLAB .................................................................................... 67
8.2 PERFORMANCE ASSESSMENT ............................................................................................. 68
8.2.1 STANDARD DEVIATION ................................................................................................. 68
8.2.2 SIGNAL TO NOISE RATIO .............................................................................................. 70
CHAPTER-9..................................................................................................................................... 72
RESULT AND DISCUSSION .............................................................................................................. 72
CHAPTER-10................................................................................................................................... 75
CONCLUSION AND FUTURE SCOPE................................................................................................ 75



LIST OF FIGURES

Figure 2.1 Digital image processing ................................................................................................. 3


Figure 2.2 Digital image processing basic block diagram ................................................................. 4
Figure 2.3 Fundamental steps in digital image processing ............................................................. 4
Figure 2.4 Coordinate conventions used in Image Processing Toolbox. ........................................ 6
Figure 2.5 Adjacency ....................................................................................................................... 8
Figure 2.6 Connectivity ................................................................................................................... 9
Figure 2.7 Tone-scale adjustments. .............................................................................................. 10
Figure 2.8 The original image and its histogram, and the equalized versions. ............................. 10
Figure 2.9 Transfer function for an ideal low pass filter. ............................................................. 11
Figure 2.10 Transfer function for homomorphic filtering. ........................................................... 12
Figure 3.1 Block diagram .............................................................................................................. 13
Figure 3.2 Greyscale and binary mapping ..................................................................................... 15
Figure 3.3 Simple thresholding ...................................................................................................... 16
Figure 3.4 Color thresholding .......................................................................... 17
Figure 3.5 4-8 Neighbourhood ...................................................................................................... 18
Figure 3.6 Structuring elements .................................................................................................... 19
Figure 3.7 Examples of structuring elements ............................................................................... 20
Figure 3.8 Erosion and dilation ...................................................................................................... 20
Figure 3.9 Set operations on binary image ................................................................................... 21
Figure 3.10 Compression and decompression ............................................................................. 23
Figure 3.11 Source encoder........................................................................................................... 23
Figure 3.12 Error free compression ............................................................................................... 24
Figure 5.1 Three-level wavelet decomposition tree ..................................................................... 34
Figure 5.2 First stage of step 1 wavelet decomposition ................................................................ 34
Figure 5.3 Final stage of step 1 wavelet decomposition ............................................................... 35
Figure 5.4 Block Diagram of 1 step 2-D DWT ................................................................. 35
Figure 5.5 2-Step decomposition .................................................................................................. 36
Figure 5.6 MRI Image.................................................................................................................... 37
Figure 5.7 CT Image ...................................................................................................................... 38



Figure 6.1 Seismic Wavelet........................................................................................................... 39
Figure 7.1 Matlab toolbox ............................................................................................................. 47
Figure 7.2 Matlab window............................................................................................................. 48
Figure 7.3 Edit window .................................................................................................................. 49
Figure 7.4 Plot figure ..................................................................................................................... 51
Figure 8.1 Image fusion process in toolbox ................................................................................... 67
Figure 8.2 Block diagram ............................................................................................................... 68
Figure 8.3 Signal to noise ratio ...................................................................................................... 71
Figure 9.1 MRI Image..................................................................................................................... 72
Figure 9.2 CT Image ..................................................................................................... 73
Figure 9.3 Fused image.................................................................................................................. 74



Decomposition and reconstruction of medical images using DWT

CHAPTER 1
INTRODUCTION
1.1 BACKGROUND
Image processing is a method to convert an image into digital form
and perform operations on it in order to obtain an enhanced image or to extract
useful information from it. It is a type of signal processing in which the input is
an image, such as a video frame or photograph, and the output may be an image
or characteristics associated with that image.

Usually an image processing system treats images as two-dimensional
signals while applying established signal processing methods to them. Image
processing is among today's rapidly growing technologies, with applications in
various aspects of business, and it also forms a core research area within the
engineering and computer science disciplines.

1.2 SCOPE OF THESIS


The fusion of multi-modality imaging plays an increasingly important role
in the medical imaging field as the clinical use of various medical imaging
systems expands. Different medical imaging techniques may provide scans with
complementary and occasionally redundant information. The fusion of medical
images can lead to additional clinical information not apparent in the individual
images. However, useful fusion cannot be achieved by merely piling up image
processing algorithms, so many solutions to medical diagnostic image fusion
have been proposed.

1.3 SIGNIFICANCE
The purposes of image processing can be grouped as follows:

1. Visualization - Observe objects that are not visible.
2. Image sharpening and restoration - Create a better image.
3. Image retrieval - Seek the image of interest.
4. Image recognition - Distinguish the objects in an image.

1.4 ORGANIZATION OF THESIS

Chapter 2: Introduction to image processing

Chapter 3: Image restoration and image segmentation

Chapter 4: Introduction to image fusion

Chapter 5: Decomposition and reconstruction of medical images

Chapter 6: Discrete wavelet transforms

Chapter 7: Software tools

Chapter 8: Fusion of medical images

Chapter 9: Results

Chapter 10: Conclusion and Future scope


CHAPTER 2
INTRODUCTION TO IMAGE PROCESSING
2.1 IMAGE PROCESSING
Image processing is a method to convert an image into digital form and perform operations on
it in order to obtain an enhanced image or to extract useful information from it. It is a type of signal
processing in which the input is an image, such as a video frame or photograph, and the output may
be an image or characteristics associated with that image. Usually an image processing system treats
images as two-dimensional signals while applying established signal processing methods to them.
Image processing is among today's rapidly growing technologies, with applications in various
aspects of business, and it also forms a core research area within the engineering and computer
science disciplines.

2.2 TYPES OF IMAGE PROCESSING


The two types of methods used for image processing are analog and digital. Analog or
visual techniques of image processing can be used for hard copies like printouts and photographs.
Image analysts use various fundamentals of interpretation while using these visual techniques; the
processing is not confined to the area being studied but also draws on the knowledge of the analyst.
Association is another important tool in image processing through visual techniques, so analysts
apply a combination of personal knowledge and collateral data to image processing.

Digital techniques, by contrast, manipulate digital images using computers. To get over
flaws in the raw data and to recover the original information, the data has to undergo various phases
of processing. The three general phases that all types of data undergo while using digital techniques
are pre-processing, enhancement and display, and information extraction.

Figure 2.1 Digital image processing


Figure 2.2 Digital image processing basic block diagram

Figure 2.3 Fundamental steps in digital image processing


2.3 DIGITAL IMAGE FUNDAMENTALS


2.3.1 DIGITAL IMAGE REPRESENTATION
An image may be defined as a two-dimensional function f(x, y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity of
the image at that point. The term gray level is often used to refer to the intensity of monochrome
images. Colour images are formed by a combination of individual images. For example, in the RGB
colour system a colour image consists of three individual monochrome images, referred to as the
red (R), green (G), and blue (B) primary (or component) images. An image may be continuous with
respect to the x- and y-coordinates, and also in amplitude. Converting such an image to digital form
requires that the coordinates, as well as the amplitude, be digitized. Digitizing the coordinate values
is called sampling; digitizing the amplitude values is called quantization. Thus, when x, y, and the
amplitude values of f are all finite, discrete quantities, we call the image a digital image.
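Sampling and quantization can be made concrete with a few lines of code. The sketch below (Python/NumPy, used here purely for illustration; the amplitude values are hypothetical) quantizes continuous amplitudes in [0, 1) into 8 discrete grey levels:

```python
import numpy as np

# Quantization: continuous amplitudes are mapped onto a finite set of
# discrete grey levels (here 8 levels, 0-7). The sample values below are
# hypothetical stand-ins for an already-sampled image.
continuous = np.array([[0.00, 0.34, 0.71],
                       [0.12, 0.58, 0.99]])   # amplitudes in [0, 1)

levels = 8
digital = np.floor(continuous * levels).clip(0, levels - 1).astype(int)
print(digital)   # each entry is now one of the finite levels 0..7
```

With both coordinates and amplitudes finite and discrete, the array satisfies the definition of a digital image given above.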

2.3.2 COORDINATE CONVENTIONS


The result of sampling and quantization is a matrix of real numbers. We use two
principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the
resulting image has M rows and N columns. We say that the image is of size M×N. The values of
the coordinates are discrete quantities. For notational clarity and convenience, we use integer
values for these discrete coordinates. In many image processing books, the image origin is defined
to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are
(x, y) = (0, 1). The notation (0, 1) is used to signify the second sample along the first row. It does
not mean that these are the actual values of physical coordinates when the image was sampled.
Figure 2.4(a) shows this coordinate convention. Note that x ranges from 0 to M−1 and y from 0 to
N−1 in integer increments.

The coordinate convention used in the Image Processing Toolbox to denote arrays is
different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox
uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates
is the same as the order discussed in the previous paragraph, in the sense that the first element of a
coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the
origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M, and c from 1 to N, in
integer increments. Figure 2.4(b) illustrates this coordinate convention.


Image Processing Toolbox documentation refers to the coordinates in Fig. 2.4(b) as pixel
coordinates. Less frequently, the toolbox also employs another coordinate convention, called
spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of
our use of the variables x and y.

Figure 2.4 Coordinate conventions used in Image Processing Toolbox.

With a few exceptions, we do not use the toolbox's spatial coordinate convention here, but
many MATLAB functions do, and you will definitely encounter it in toolbox and MATLAB
documentation.

2.3.4 IMAGES AS MATRICES


The coordinate system in Fig. 2.4 and the preceding discussion lead to the following representation
for a digitized image:

    f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
                f(1, 0)      f(1, 1)      ...  f(1, N-1)
                ...
                f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]

The right side of this equation is a digital image by definition. Each element of this array
is called an image element, picture element, pixel, or pel. The terms image and pixel are used
throughout the rest of our discussion to denote a digital image and its elements. A digital image
can be represented as a MATLAB matrix:

    f = [ f(1, 1)   f(1, 2)   ...  f(1, N)
          f(2, 1)   f(2, 2)   ...  f(2, N)
          ...
          f(M, 1)   f(M, 2)   ...  f(M, N) ]


where f(1, 1) = f(0, 0) (note the use of a monospace font to denote MATLAB quantities). Clearly,
the two representations are identical, except for the shift in origin.

The notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2) is the
element in the sixth row and second column of the matrix f. Typically, we use the letters M and N,
respectively, to denote the number of rows and columns in a matrix. A 1×N matrix is called a row
vector, whereas an M×1 matrix is called a column vector. A 1×1 matrix is a scalar.
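The shift in origin between the two representations is easy to demonstrate. The sketch below (Python/NumPy for illustration; NumPy shares the 0-based f(x, y) convention, while MATLAB uses the 1-based (r, c) convention) builds a small hypothetical M×N image matrix and extracts a row vector and a column vector:

```python
import numpy as np

# Hypothetical 3x4 digital image represented as a matrix.
f = np.arange(12).reshape(3, 4)   # M = 3 rows, N = 4 columns
M, N = f.shape

# NumPy, like the f(x, y) convention above, is 0-based: f[0, 0] is the
# origin, whereas MATLAB's f(1, 1) refers to the same element.
origin = f[0, 0]
row_vector = f[0, :]   # a 1xN row vector
col_vector = f[:, 1]   # an Mx1 column vector
print(M, N, origin)
```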

2.3.5 RELATIONS BETWEEN PIXELS

1. Adjacency
2. Connectivity

• Neighbours of a Pixel

There are two different ways to define the neighbours of a pixel p located at (x, y):

• 4-neighbours

The 4-neighbours of pixel p, denoted by N4(p), are the four pixels located at (x−1, y), (x+1, y),
(x, y−1) and (x, y+1), which are, respectively, above (north), below (south), to the left (west) and
to the right (east) of the pixel p.

• 8-neighbours

The 8-neighbours of pixel p, denoted by N8(p), include the four 4-neighbours and the four pixels
along the diagonal directions located at (x−1, y−1) (northwest), (x−1, y+1) (northeast),
(x+1, y−1) (southwest) and (x+1, y+1) (southeast).
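As a sketch of the two definitions above (Python used for illustration; the helper function is ours, not from any toolbox), the following returns the 4- or 8-neighbours of a pixel, discarding coordinates that fall outside the image:

```python
def neighbours(x, y, M, N, kind=4):
    """Return the valid 4- or 8-neighbours of pixel (x, y) in an MxN image.

    Coordinates follow the text's convention: x indexes rows (0..M-1),
    y indexes columns (0..N-1).
    """
    n4 = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    diag = [(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)]
    cand = n4 if kind == 4 else n4 + diag
    # keep only coordinates inside the image bounds
    return [(i, j) for (i, j) in cand if 0 <= i < M and 0 <= j < N]

print(neighbours(0, 0, 3, 3))          # a corner pixel keeps only 2 of its 4-neighbours
print(len(neighbours(1, 1, 3, 3, 8)))  # an interior pixel has all 8
```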


• Adjacency

Two pixels are connected if they are neighbours and their gray levels satisfy some
specified criterion of similarity. For example, in a binary image two pixels are connected if they
are 4-neighbours and have the same value (0/1). Let V be a set of intensity values used to define
adjacency and connectivity. In a binary image, V = {1} if we are referring to adjacency of pixels
with value 1. In a gray scale image the idea is the same, but V typically contains more elements,
for example V = {180, 181, 182, ..., 200}. If the possible intensity values range from 0 to 255, V
could be any subset of these 256 values.

Figure 2.5 Adjacency


• Connectivity

Let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if
there exists a path between them consisting entirely of pixels in S. Two image subsets S1 and S2
are adjacent if some pixel in S1 is adjacent to some pixel in S2.

Figure 2.6 Connectivity
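Connectivity in S can be sketched as a breadth-first search that repeatedly visits 4-adjacent foreground pixels (an illustrative Python helper; the binary image below is hypothetical):

```python
from collections import deque

def connected_set(img, start):
    # Collect the connected subset containing `start`: two foreground
    # pixels are connected if a path of 4-adjacent 1-valued pixels links them.
    M, N = len(img), len(img[0])
    seen, queue = {start}, deque([start])
    while queue:
        x, y = queue.popleft()
        for i, j in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= i < M and 0 <= j < N and img[i][j] == 1 and (i, j) not in seen:
                seen.add((i, j))
                queue.append((i, j))
    return seen

img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
region = connected_set(img, (0, 0))
print(sorted(region))   # the diagonal 1 at (2, 2) is NOT 4-connected to this set
```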

2.4 IMAGE TRANSFORMS IN SPATIAL DOMAIN


The value of a pixel with coordinates (x, y) in the enhanced image 𝐹̂ is the result of performing
some operation on the pixels in the neighbourhood of (x, y) in the input image, F.

Neighbourhoods can be any shape, but usually they are rectangular.

• Grey scale manipulation

The simplest form of operation is when the operator T acts only on a 1×1 pixel neighbourhood in
the input image, that is, 𝐹̂(x, y) depends only on the value of F at (x, y). This is a grey scale
transformation or mapping.

The simplest case is thresholding, where the intensity profile is replaced by a step function, active
at a chosen threshold value. In this case any pixel with a grey level below the threshold in the
input image is mapped to 0 in the output image; other pixels are mapped to 255.


Figure 2.7 Tone-scale adjustments.
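The thresholding mapping described above can be sketched in a couple of lines (Python/NumPy for illustration; the image values and the threshold of 128 are hypothetical):

```python
import numpy as np

def threshold(f, t):
    # Step-function mapping: grey levels below t go to 0, all others to 255.
    return np.where(f < t, 0, 255).astype(np.uint8)

f = np.array([[ 12, 200,  90],
              [130,  45, 255]], dtype=np.uint8)
print(threshold(f, 128))
```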

2.5 HISTOGRAM EQUALIZATION


Histogram equalization is a common technique for enhancing the appearance of images.
Suppose we have an image which is predominantly dark. Then its histogram would be skewed
towards the lower end of the grey scale, and all the image detail is compressed into the dark end of
the histogram. If we could 'stretch out' the grey levels at the dark end to produce a more uniformly
distributed histogram, then the image would become much clearer. Histogram equalization involves
finding a grey scale transformation function that creates an output image with a uniform histogram
(or nearly so).

Figure 2.8 The original image and its histogram, and the equalized versions.
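A minimal sketch of histogram equalization for an 8-bit image (Python/NumPy for illustration; the tiny dark test image is hypothetical) uses the normalized cumulative histogram as the grey scale transformation function:

```python
import numpy as np

def equalize(f):
    # Histogram of the 256 possible grey levels.
    hist = np.bincount(f.ravel(), minlength=256)
    # The normalized cumulative distribution becomes the mapping function,
    # stretching a dark, compressed histogram toward a uniform one.
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]
    mapping = np.round(255 * cdf).astype(np.uint8)
    return mapping[f]

# Predominantly dark test image: all values crowded into 0..40.
f = np.array([[10, 10, 20], [30, 40, 40]], dtype=np.uint8)
g = equalize(f)
print(g)   # grey levels are spread across far more of 0..255
```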


• Image transform in frequency domain


Image enhancement in the frequency domain is straightforward. We simply compute the Fourier
transform of the image to be enhanced, multiply the result by a filter (rather than convolve in the
spatial domain), and take the inverse transform to produce the enhanced image.

• Filtering

Low pass filtering involves the elimination of the high frequency components in the image. It results
in blurring of the image (and thus a reduction in sharp transitions associated with noise). An ideal
low pass filter would retain all the low frequency components, and eliminate all the high frequency
components. However, ideal filters suffer from two problems: blurring and ringing. These problems
are caused by the shape of the associated spatial domain filter, which has a large number of
undulations. Smoother transitions in the frequency domain filter, such as the Butterworth filter,
achieve much better results.

Figure 2.9 Transfer function for an ideal low pass filter.
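The compute-FFT, multiply-by-filter, inverse-FFT recipe can be sketched as follows (Python/NumPy for illustration; the cutoff radius and test image are arbitrary). The transfer function H is 1 inside the cutoff radius D0 and 0 outside, i.e. the ideal low pass filter of Figure 2.9:

```python
import numpy as np

def ideal_lowpass(f, d0):
    # Transform the image and centre the zero frequency.
    F = np.fft.fftshift(np.fft.fft2(f))
    M, N = f.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    # Distance of each frequency from the centre.
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H = (D <= d0).astype(float)   # 1 inside the cutoff, 0 outside
    # Multiply by the filter (instead of convolving spatially) and invert.
    g = np.fft.ifft2(np.fft.ifftshift(F * H))
    return np.real(g)

rng = np.random.default_rng(0)
f = rng.random((64, 64))
g = ideal_lowpass(f, 8)   # a blurred version of f, same size
```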

• Homomorphic filtering

Images normally consist of light reflected from objects. The basic nature of the image F(x,y) may
be characterized by two components: (1) the amount of source light incident on the scene being
viewed, and (2) the amount of light reflected by the objects in the scene. These portions of light are
called the illumination and reflectance components, and are denoted i(x,y) and r(x,y) respectively.
The functions i and r combine multiplicatively to give the image function F: F(x,y) = i(x,y)r(x,y).


We cannot easily use the above product to operate separately on the frequency components of
illumination and reflectance, because the Fourier transform of the product of two functions is not
separable; that is,

F{F(x,y)} ≠ F{i(x,y)} F{r(x,y)}.

Suppose, however, that we define

z(x,y) = ln F(x,y) = ln i(x,y) + ln r(x,y).

Then

Z(u,v) = I(u,v) + R(u,v),

where Z, I and R are the Fourier transforms of z(x,y), ln i(x,y) and ln r(x,y) respectively. The function Z
represents the Fourier transform of the sum of two images: a low frequency illumination image and
a high frequency reflectance image.
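The whole pipeline (take logarithms, filter in the frequency domain with a high-emphasis transfer function, exponentiate) can be sketched as follows; the Gaussian-shaped transfer function and all parameter values are illustrative assumptions, not this project's code:

```python
import numpy as np

def homomorphic_filter(img, cutoff=15.0, gamma_low=0.5, gamma_high=2.0):
    """log -> FFT -> high-emphasis filter -> inverse FFT -> exp.
    The filter attenuates low frequencies (illumination, gamma_low < 1)
    and boosts high frequencies (reflectance, gamma_high > 1)."""
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = gamma_low + (gamma_high - gamma_low) * (1.0 - np.exp(-D2 / (2.0 * cutoff ** 2)))
    Z = np.fft.fftshift(np.fft.fft2(np.log1p(img)))    # z = ln(1 + i*r)
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.expm1(s)                                  # undo the logarithm

illum = np.outer(np.linspace(0.2, 1.0, 64), np.ones(64))   # slowly varying light
reflect = np.random.default_rng(2).random((64, 64))        # scene detail
enhanced = homomorphic_filter(illum * reflect)
```

The log step turns the multiplicative i·r model into a sum, so illumination and reflectance can be weighted separately by one linear filter, exactly as derived above.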

Figure 2.10 Transfer function for homomorphic filtering.


CHAPTER 3
IMAGE RESTORATION AND IMAGE
SEGMENTATION

3.1 IMAGE RESTORATION


The purpose of image restoration is to "compensate for" or "undo" defects which degrade an
image. Degradation comes in many forms such as motion blur, noise, and camera misfocus. In cases
like motion blur, it is possible to come up with a very good estimate of the actual blurring function
and "undo" the blur to restore the original image. In cases where the image is corrupted by noise,
the best we may hope to do is to compensate for the degradation it caused. In this project, we will
introduce and implement several of the methods used in the image processing world to restore
images.

3.1.1 DEGRADATION MODEL


The block diagram for our general degradation model is

Figure 3.1 Block diagram

where g is the corrupted image obtained by passing the original image f through a low pass filter
(blurring function) b and adding noise to it. We present four different ways of restoring the image.
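The degradation model g = f * b + n described above can be simulated in a few lines (an illustrative NumPy sketch; the box kernel and noise level are arbitrary choices, not the project's):

```python
import numpy as np

def degrade(f, b, noise_sigma=0.01, seed=0):
    """g = f * b + n: blur the original image f with the kernel b
    (circular convolution via the FFT) and add Gaussian noise n."""
    B = np.fft.fft2(b, s=f.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(f) * B))
    noise = np.random.default_rng(seed).normal(0.0, noise_sigma, f.shape)
    return blurred + noise

f = np.zeros((32, 32)); f[12:20, 12:20] = 1.0   # simple test scene
b = np.ones((3, 3)) / 9.0                        # 3x3 box blurring function
g = degrade(f, b)
```

Because the kernel sums to one, the blur preserves the total image intensity; only the noise perturbs it.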


I. Inverse Filter

In this method we look at an image assuming a known blurring function.

II. Wiener Filtering

In this section we implement image restoration using Wiener filtering, which provides the
optimal trade-off between de-noising and inverse filtering. We will see that the result is in general
better than with straight inverse filtering.
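A minimal frequency-domain Wiener filter, under the assumption that the blurring function b is known, might look like this (an illustrative sketch, not the project's implementation; K is a tunable noise-to-signal estimate):

```python
import numpy as np

def wiener_restore(g, b, K=1e-4):
    """Frequency-domain Wiener filter:
    F_hat(u,v) = conj(B) / (|B|^2 + K) * G(u,v).
    For K -> 0 this tends to the straight inverse filter G/B,
    which blows up wherever B(u,v) is close to zero."""
    G = np.fft.fft2(g)
    B = np.fft.fft2(b, s=g.shape)
    return np.real(np.fft.ifft2(np.conj(B) / (np.abs(B) ** 2 + K) * G))

f = np.zeros((32, 32)); f[12:20, 12:20] = 1.0            # original image
b = np.ones((3, 3)) / 9.0                                 # known blur
g = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(b, s=f.shape)))
restored = wiener_restore(g, b)
```

The constant K regularizes the division: frequencies where the blur response is weak are attenuated instead of amplified, which is exactly the trade-off between de-noising and inverse filtering mentioned above.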

III. Wavelet Restoration

We implement three wavelet based algorithms to restore the image.

IV. Blind Deconvolution

In this method, we assume nothing about the image: we have no information about the
blurring function or the additive noise. We will see that restoring an image when we know
nothing about it is very hard.

3.2 IMAGE SEGMENTATION


Segmentation partitions an image into distinct regions, each containing pixels with similar
attributes. To be meaningful and useful for image analysis and interpretation, the regions should
strongly relate to depicted objects or features of interest. Meaningful segmentation is the first step
from low-level image processing transforming a greyscale or colour image into one or more other
images to high-level image description in terms of features, objects, and scenes. The success of
image analysis depends on reliability of segmentation, but an accurate partitioning of an image is
generally a very challenging problem.


3.2.1 NON CONTEXTUAL THRESHOLDING


Thresholding is the simplest non-contextual segmentation technique. With a single threshold,
it transforms a greyscale or colour image into a binary image considered as a binary region map.
The binary map contains two possibly disjoint regions, one of them containing pixels with input
data values smaller than a threshold and another relating to the input values that are at or above the
threshold. The former and latter regions are usually labelled with zero (0) and non-zero (1) labels,
respectively.

The segmentation depends on the image property being thresholded and on how the threshold is
chosen. Generally, the non-contextual thresholding may involve two or more thresholds as well as
produce more than two types of regions such that ranges of input image signals related to each region
type are separated with thresholds.

• Simple thresholding

The most common image property to threshold is pixel grey level: g(x,y) = 0 if f(x,y) < T and
g(x,y) = 1 if f(x,y) ≥ T, where T is the threshold. Using two thresholds, T1 < T2, a range of grey levels
related to region 1 can be defined: g(x,y) = 0 if f(x,y) < T1 OR f(x,y) > T2 and g(x,y) = 1 if T1 ≤ f(x,y)
≤ T2.
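These two mappings can be written directly as array comparisons (a hypothetical NumPy sketch):

```python
import numpy as np

def threshold(f, T):
    """Single-threshold binary map: 1 where f >= T, else 0."""
    return (f >= T).astype(np.uint8)

def band_threshold(f, T1, T2):
    """Two thresholds: label 1 for grey levels inside [T1, T2]."""
    return ((f >= T1) & (f <= T2)).astype(np.uint8)

f = np.array([[10, 120], [200, 60]], dtype=np.uint8)
g1 = threshold(f, 100)           # -> [[0, 1], [1, 0]]
g2 = band_threshold(f, 50, 150)  # -> [[0, 1], [0, 1]]
```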

Figure 3.2 Greyscale and binary mapping


The main problems are whether it is possible and, if so, how to choose an adequate threshold
or a number of thresholds to separate one or more desired objects from their background. In many
practical cases simple thresholding is unable to segment objects of interest, as shown in the
images below.

Figure 3.3 Simple thresholding

• Adaptive thresholding

Since the threshold separates the background from the object, the adaptive separation may
take account of empirical probability distributions of object (e.g. dark) and background (bright)
pixels. Such a threshold has to equalise two kinds of expected errors: of assigning a background
pixel to the object and of assigning an object pixel to the background. More complex adaptive


thresholding techniques use a spatially varying threshold to compensate for local spatial context effects.

• Colour thresholding

Colour segmentation may be more accurate because there is more information at the pixel level
compared to greyscale images. The standard Red-Green-Blue (RGB) colour representation has
strongly interrelated colour components, and a number of other colour systems (e.g. HSI: Hue-
Saturation-Intensity) have been designed in order to exclude redundancy, determine actual object /
background colours irrespective of illumination, and obtain more stable segmentation.

Figure 3.4 Colour thresholding

3.2.2 CONTEXTUAL SEGMENTATION


Region growing
Non-contextual thresholding groups pixels with no account of their relative locations in the image
plane. Contextual segmentation can be more successful in separating individual objects because it
accounts for closeness of pixels that belong to an individual object.


Two basic approaches to contextual segmentation are based on signal discontinuity or
similarity. Discontinuity-based techniques attempt to find complete boundaries enclosing relatively
uniform regions assuming abrupt signal changes across each boundary. Similarity-based techniques
attempt to directly create these uniform regions by grouping together connected pixels that satisfy
certain similarity criteria. Both the approaches mirror each other, in the sense that a complete
boundary splits one region into two.

Figure 3.5 4-8 Neighbourhood

Generally, a "good" complete segmentation must satisfy the following criteria:

1. All pixels have to be assigned to regions.


2. Each pixel has to belong to a single region only.
3. Each region is a connected set of pixels.
4. Each region has to be uniform with respect to a given predicate.
5. Any merged pair of adjacent regions has to be non-uniform.

3.3 MORPHOLOGICAL IMAGE PROCESSING


Binary images may contain numerous imperfections. In particular, the binary regions
produced by simple thresholding are distorted by noise and texture.


Morphological image processing pursues the goal of removing these imperfections by
accounting for the form and structure of the image. These techniques can be extended to grey scale
images.

Morphological image processing is a collection of non-linear operations related to the shape
or morphology of features in an image. According to Wikipedia, morphological operations rely only
on the relative ordering of pixel values, not on their numerical values, and are therefore especially
suited to the processing of binary images. Morphological operations can also be applied to grey
scale images whose light transfer functions are unknown, so that their absolute pixel values are of
no or minor interest.

Figure 3.6 Structuring elements

A morphological operation on a binary image creates a new binary image in which a pixel
has a non-zero value only if the test of the structuring element succeeds at that location in the input image.

The structuring element is a small binary image, i.e. a small matrix of pixels, each with a value of
zero or one:

• The matrix dimensions specify the size of the structuring element.

• The pattern of ones and zeros specifies the shape of the structuring element.
• The origin of the structuring element is usually one of its pixels, although in general the origin
can lie outside the structuring element.


Figure 3.7 Examples of structuring elements

A common practice is to have odd dimensions of the structuring matrix, with the origin defined
as the centre of the matrix. Structuring elements play the same role in morphological image
processing as convolution kernels play in linear image filtering.

Erosion and dilation

The erosion of a binary image f by a structuring element s (denoted f ⊖ s) produces a new
binary image g = f ⊖ s with ones in all locations (x,y) of the structuring element's origin at which that
structuring element s fits the input image f, i.e. g(x,y) = 1 if s fits f and 0 otherwise, repeating for all
pixel coordinates (x,y).
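A direct (unoptimized) sketch of this fit test, with dilation as its dual hit test, for an odd-sized structuring element (hypothetical helpers; zero padding is assumed outside the image):

```python
import numpy as np

def erode(f, s):
    """Erosion f ⊖ s: output is 1 where s, centred at (x,y),
    fits entirely inside the foreground of f."""
    sr, sc = s.shape
    pr, pc = sr // 2, sc // 2
    padded = np.pad(f, ((pr, pr), (pc, pc)), constant_values=0)
    out = np.zeros_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            window = padded[x:x + sr, y:y + sc]
            out[x, y] = 1 if np.all(window[s == 1] == 1) else 0
    return out

def dilate(f, s):
    """Dilation f ⊕ s: output is 1 where s, centred at (x,y),
    hits (overlaps) at least one foreground pixel of f."""
    sr, sc = s.shape
    pr, pc = sr // 2, sc // 2
    padded = np.pad(f, ((pr, pr), (pc, pc)), constant_values=0)
    out = np.zeros_like(f)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            window = padded[x:x + sr, y:y + sc]
            out[x, y] = 1 if np.any(window[s == 1] == 1) else 0
    return out

s = np.ones((3, 3), dtype=np.uint8)   # 3x3 square structuring element
f = np.zeros((7, 7), dtype=np.uint8)
f[2:5, 2:5] = 1                       # 3x3 foreground square
```

Eroding the 3x3 square with a 3x3 element leaves only its centre pixel; dilating it grows it to a 5x5 square.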

Figure 3.8 Erosion and dilation


Compound operations

Many morphological operations are represented as combinations of erosion, dilation, and simple
set-theoretic operations such as the complement of a binary image:

f c(x,y) = 1 if f(x,y) = 0, and f c(x,y) = 0 if f(x,y) = 1,

the intersection h = f ∩ g of two binary images f and g:

h(x,y) = 1 if f(x,y) = 1 and g(x,y) = 1, and h(x,y) = 0 otherwise,

and the union h = f ∪ g of two binary images f and g:

h(x,y) = 1 if f(x,y) = 1 or g(x,y) = 1, and h(x,y) = 0 otherwise.

Figure 3.9 Set operations on a binary image

A pixel belonging to an object is preserved by the hit-and-miss transform, which uses a pair of
structuring elements (s1, s2), if and only if s1 translated to that pixel fits inside the object AND s2
translated to that pixel fits outside the object. It is assumed that s1 and s2 do not intersect; otherwise
it would be impossible for both fits to occur simultaneously. Morphological filtering of a binary
image is conducted by considering compound operations like opening and closing as filters. They
may act as filters of shape: for example, opening with a disc structuring element smooths corners
from the inside, and closing with a disc smooths corners from the outside. These operations can also
filter out from an image any details that are smaller in size than the structuring element, e.g. opening
filters the binary image at a scale defined by the size of the structuring element.
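Opening (erosion then dilation) and closing (dilation then erosion) can be sketched for the special case of a 3x3 square structuring element, where erosion and dilation reduce to a minimum and maximum over the nine neighbours (illustrative NumPy code, not the project's implementation):

```python
import numpy as np

def erode3(f):
    """Erosion by the 3x3 square SE = minimum over the 9 neighbours."""
    p = np.pad(f, 1, constant_values=0)
    stack = [p[i:i + f.shape[0], j:j + f.shape[1]]
             for i in range(3) for j in range(3)]
    return np.min(stack, axis=0)

def dilate3(f):
    """Dilation by the 3x3 square SE = maximum over the 9 neighbours."""
    p = np.pad(f, 1, constant_values=0)
    stack = [p[i:i + f.shape[0], j:j + f.shape[1]]
             for i in range(3) for j in range(3)]
    return np.max(stack, axis=0)

def opening(f):   # erosion then dilation: removes small bright specks
    return dilate3(erode3(f))

def closing(f):   # dilation then erosion: fills small dark holes
    return erode3(dilate3(f))

f = np.zeros((9, 9), dtype=np.uint8)
f[2:7, 2:7] = 1        # a 5x5 object...
f[0, 0] = 1            # ...plus a 1-pixel noise speck
opened = opening(f)
```

Opening removes the isolated speck (it is smaller than the structuring element) while returning the 5x5 object intact, illustrating "filtering at the scale of the structuring element".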


Only those portions of the image that fit the structuring element are passed by the filter;
smaller structures are blocked and excluded from the output image. The size of the structuring
element must be chosen so as to eliminate noisy details without damaging the objects of interest.

3.4 IMAGE COMPRESSION


Images contain extreme amounts of data. A 512*512 image is made up of 0.25·10^6 pixels;
with 1 byte per colour this already results in 0.75 MByte of data. At 25 images per second, 1 minute of
video at that resolution already yields 1.125 GBytes of data. Scanning an A4 (210*297 mm) piece
of paper at 300 dpi (dots per inch) in black and white gives 8.75 Mbits, or 1.1 MBytes; scanning
in three colours gives 26 MBytes. There is an obvious necessity to compress images for both storage
and transportation over communication channels.
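The arithmetic above can be checked directly (illustrative only; the scan resolution and prefixes follow the figures in the text):

```python
# Rough data volumes from the figures above (illustrative arithmetic only).
pixels = 512 * 512                      # 262 144 = 0.25 * 2**20 pixels
rgb_bytes = pixels * 3                  # 1 byte per colour -> 0.75 MByte
video_minute = rgb_bytes * 25 * 60      # 25 frames/s for 60 s ≈ 1.1 GByte

a4_dots = round(210 / 25.4 * 300) * round(297 / 25.4 * 300)   # 300 dpi scan
bw_mbits = a4_dots / 1e6                # ≈ 8.7 Mbit in black and white
colour_mbytes = a4_dots * 3 / 1e6       # ≈ 26 MByte in three colours
```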

In image compression, and in data compression in general, we make use of the difference between
information and data. Information is what is actually essential for an image or data set: that which
we really need to have for whatever we would like to do with it next. What that information is thus
depends on what the further use of the image will be.

Whether a satellite photo is used by an agricultural specialist to check cultivated crops or by
a geographer to map the urbanization of rural areas, the relevant information in the image is different
for each purpose.

To assess "lossy" compression methods on the suitability for certain applications we often use
quality metrics:

Another criteria could be based on the applications of the images and can be objective or
subjective, for example, judgment by a panel of human observers could occur in terms as excellent,
good, acceptable or poor.
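Two commonly used objective quality metrics, mean squared error and peak signal-to-noise ratio, can be sketched as follows (illustrative helper names, not a prescribed part of this project):

```python
import numpy as np

def mse(f, g):
    """Mean squared error between original f and reconstruction g."""
    return np.mean((f.astype(float) - g.astype(float)) ** 2)

def psnr(f, g, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    return 10.0 * np.log10(peak ** 2 / mse(f, g))

f = np.full((8, 8), 100, dtype=np.uint8)
g = f.copy(); g[0, 0] = 110               # one pixel off by 10
```

For this tiny example the MSE is 100/64 = 1.5625 and the PSNR is about 46 dB; higher PSNR means a reconstruction closer to the original.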


3.4.1 MODELS

A general system model for compression and decompression is:

Figure 3.10 Compression and decompression

It is customary to use the names "encoder" and "decoder", which have their roots in the field of
Information Theory, rather than names such as "compressor" and "decompressor". If the transmission
or storage channel is error-free, the channel encoder and decoder are omitted. Otherwise, extra data
bits can be added to detect (for example parity bits, Cyclic Redundancy Checks) or correct
(Error Correcting Codes for memory) errors, often using special hardware. We shall not pay any
more attention to channel encoders and decoders. With "lossless" compression it holds that g(x,y) = f(x,y).

A general model for a source encoder is:

Figure 3.11 Source encoder

The "mapper" transforms the data to a format suitable for reducing the inter-pixel redundancies.
This step is generally irreversible and can reduce the amount of data; used for Run Length Encoding,
but not in transformations to the Fourier or Discrete Cosinus domains.


3.4.2 ERROR FREE COMPRESSION


For many applications, such as documents, text and computer programs, error free compression is
the only acceptable manner. For images, on the other hand, lossy compression is often acceptable,
because images are already an approximation of reality, with spatial and intensity quantization and
errors in the projection system.

Figure 3.12 Error free compression


CHAPTER-4
INTRODUCTION TO IMAGE FUSION

4.1 IMAGE FUSION


In the image fusion process, a fused, better visualized image is formed by combining two or
more images so as to retrieve the vital information from them. Image fusion techniques merge and
integrate the complementary information from multiple image sensor data and make the image more
suitable for visual perception and processing. The fusion process extracts all the useful
information, minimizing redundancy and reducing uncertainty in the source images. Image fusion
can thus combine information from two or more images into a single composite image which is
more informative and more suitable for computer processing and visual perception for further analysis
and diagnosis. It is, however, necessary to align the images accurately before they are fused. Fusion
should preserve all features of the source images and should not introduce any inconsistency or
artifacts that could distract the observer. The advantages of image fusion are improved
capability and reliability: the fused image should not have any undesired features, and the idea behind
the image fusion concept is that the fused image should possess all relevant information.
The fusion of multi-modality imaging plays an increasingly important role in the medical imaging
field as the clinical use of various medical imaging systems extends. Different medical imaging
techniques may provide scans with complementary and occasionally redundant information, and the
fusion of medical images can lead to additional clinical information not apparent in the individual
images. However, the diagnostic benefit of image fusion is difficult to obtain when image processing
algorithms are merely piled up, and many solutions to medical diagnostic image fusion have therefore
been proposed. Registered MRI and CT images of the same patient and the same spatial part are used
for fusion. The fusion of medical images acquired from different instrument modalities, such as
MRI (magnetic resonance imaging), CT (computed tomography), X-rays and PET (positron
emission tomography), of the same objects is often needed, and a number of fusion techniques have
been reported in the literature. These range from simple pixel averaging to more complicated methods
such as wavelet transform fusion and principal component analysis. Pixel level fusion is comparatively
easy to implement and the resultant image contains much of the original information. Many simple
image fusion algorithms based on the wavelet transform have also been proposed.


In the wavelet transform method the image is decomposed into spatial frequency bands at different
scales: the low-low (LL), low-high (LH), high-low (HL) and high-high (HH) bands. The average image
information is given by the low-low band; the other bands (HH, HL, LH) contain directional
information due to spatial orientation. In the high bands, larger absolute values of the wavelet
coefficients correspond to salient features such as edges or lines. The common idea in almost all of
these methods is the use of wavelet transforms to decompose images into a multi-resolution scheme.
MRI images provide greater contrast of the soft tissues of the brain than CT images, but the
brightness of hard tissues such as bone is higher in CT images. CT and MRI images individually have
shortcomings: MRI images do not concentrate on hard tissues, and in CT images soft tissues are not
clearly visible. In this project, image fusion of CT and MRI images has been carried out so that the
fused image, which combines soft and hard tissues, serves as a focused image for doctors and their
clinical treatment. The fused image quality is further evaluated quantitatively through two
performance measures: Standard Deviation (SD) and SNR.

4.2 IMAGE FUSION TECHNIQUES


• IHS (Intensity-Hue-Saturation) Transform
• Principal Component Analysis (PCA)
• Pyramid techniques
• High pass filtering
• Wavelet Transform
• Artificial Neural Networks
• Discrete Cosine Transform

• IHS (INTENSITY-HUE-SATURATION) TRANSFORM
Intensity, Hue and Saturation are the three properties of a colour that give a controlled visual
representation of an image. The IHS transform method is the oldest method of image fusion. In IHS
space, hue and saturation need to be carefully controlled because they contain most of the spectral
information. For the fusion of a high resolution PAN image and multispectral images, the detail
information of high spatial resolution is added to the spectral information.


Many IHS transformation techniques based on different colour models have been presented in the
literature, including HSV, IHS1, IHS2, IHS3, IHS4, IHS5, IHS6 and YIQ. Based on these different
formulas, the IHS transformation gives different results.

• PYRAMID TECHNIQUE
Image pyramids can be described as a model for the binocular fusion in the human visual
system. By forming the pyramid structure, an original image is represented at different levels. A
composite image is formed by applying a pattern-selective approach to image fusion. First, the
pyramid decomposition is performed on each source image. The decompositions are integrated to form
a composite representation, and the inverse pyramid transform is then applied to get the resultant
image: image fusion is carried out at each level of decomposition to form a fused pyramid, and the
fused image is obtained from it. A MATLAB implementation of the pyramid technique has been
reported in the literature.

• HIGH PASS FILTERING (HPF)

High resolution multispectral images are obtained by high pass filtering: the
high frequency information from the high resolution panchromatic image (HRPI) is added to the low
resolution multispectral image to obtain the resultant image. The detail is extracted either by filtering
the HRPI with a high pass filter or by taking the original HRPI and subtracting a low resolution
(smoothed) version of it. The spectral information contained in the low frequencies of the
multispectral image is preserved by this method.

• PRINCIPAL COMPONENT ANALYSIS (PCA)

Despite being similar to the IHS transform, the advantage of the PCA method over the IHS
method is that an arbitrary number of bands can be used. This is one of the most popular methods
for image fusion. Uncorrelated principal components are formed from the low resolution
multispectral images. The first principal component (PC1) contains the information that is common to
all the bands used; it has the highest variance, so it carries the most information about the panchromatic
image. The high resolution PAN component is stretched to have the same variance as PC1 and replaces
it. Then an inverse PCA transform is employed to get the high resolution multispectral image.


• WAVELET TRANSFORM

The wavelet transform is considered an alternative to the short time Fourier transform. It is
advantageous over the Fourier transform in that it provides the desired resolution in the time
domain as well as in the frequency domain, whereas the Fourier transform gives good resolution
only in the frequency domain.

In the Fourier transform the signal is decomposed into sine waves of different frequencies, whereas the
wavelet transform decomposes the signal into scaled and shifted forms of the mother wavelet or
wavelet function. In image fusion using the wavelet transform, the input images are decomposed into
approximation and detail coefficients using the DWT at some specific level. A fusion rule is applied
to combine the two sets of coefficients, and the resultant image is obtained by taking the inverse
wavelet transform.
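A toy version of this fusion scheme, using a single-level 2-D Haar transform, an averaging rule for the approximation band and a max-abs rule for the detail bands, might look like this (an illustrative sketch; the fusion rule is one common choice among many, and all helper names are hypothetical):

```python
import numpy as np

def haar2(f):
    """One level of the 2-D Haar DWT: returns LL, LH, HL, HH subbands."""
    a = (f[0::2, :] + f[1::2, :]) / 2.0     # row pairs: average
    d = (f[0::2, :] - f[1::2, :]) / 2.0     # row pairs: difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    f = np.empty((a.shape[0] * 2, a.shape[1]))
    f[0::2, :], f[1::2, :] = a + d, a - d
    return f

def fuse(img1, img2):
    """Average the approximation (LL) bands; for the detail bands keep
    the coefficient of larger magnitude ('max-abs' rule), since large
    detail coefficients correspond to edges and lines."""
    b1, b2 = haar2(img1), haar2(img2)
    LL = (b1[0] + b2[0]) / 2.0
    details = [np.where(np.abs(c1) >= np.abs(c2), c1, c2)
               for c1, c2 in zip(b1[1:], b2[1:])]
    return ihaar2(LL, *details)

rng = np.random.default_rng(0)
mri = rng.random((8, 8))      # stand-ins for registered MRI and CT slices
ct = rng.random((8, 8))
fused = fuse(mri, ct)
```

Note that fusing an image with itself returns the image unchanged, and the transform pair is exactly invertible, which is what makes the decompose–combine–reconstruct pipeline lossless apart from the fusion rule itself.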

• DISCRETE COSINE TRANSFORM (DCT)

The discrete cosine transform has found importance for compressed images in
formats such as MPEG and JVT. By taking the discrete cosine transform, the spatial domain image is
converted into a frequency domain image. Chu-Hui Lee and Zheng-Wei Zhou divided the frequency
components into three parts: low frequency, medium frequency and high frequency. Average
illumination is represented by the DC value, and the AC values are the coefficients of the higher
frequencies. The RGB image is divided into blocks of 8*8 pixels; the image is then grouped
into the matrices of red, green and blue and transformed to a grey scale image.

• ARTIFICIAL NEURAL NETWORKS (ANN)

Artificial Neural Networks (ANN) have found their importance in pattern recognition. Here a
nonlinear response function is used, in a Pulse Coupled Neural Network (PCNN), which consists
of a feedback network. This network is divided into three parts, namely the receptive field, the
modulation field and the pulse generator. Each neuron corresponds to a pixel of the input image,
and the matching pixel's intensity is used as an external input to the PCNN.


This method is advantageous in terms of robustness against noise, independence of geometric
variations and the capability of bridging minor intensity variations in input patterns. The PCNN has
biological significance and is used in medical imaging, as the method is feasible and gives real time
system performance.

4.3 TYPES OF IMAGE FUSION

Single Sensor

A single sensor captures the real world as a sequence of images. The set of images is fused
together to generate a new image with optimum information content. For example, in an illumination-
variant and noisy environment, a human operator may not be able to detect objects of interest,
which can be highlighted in the resultant fused image. The shortcoming of this type of system lies
in the limitations of the imaging sensor being used: the conditions under which the system can operate,
its dynamic range, resolution, etc. are all restricted by the competency of the sensor. For example, a
visible-band sensor such as a digital camera is appropriate for a brightly illuminated environment such
as daylight scenes, but is not suitable for poorly illuminated situations found during night time, or
under adverse conditions such as fog or rain.

Multi Sensor

A multi-sensor image fusion scheme overcomes the limitations of single sensor image
fusion by merging the images from several sensors to form a composite image. For example, an
infrared camera can accompany a digital camera, and their individual images are merged to obtain
a fused image. This approach overcomes the issues referred to above: the digital camera is suitable
for daylight scenes, while the infrared camera is appropriate in poorly illuminated environments.
Multi-sensor fusion is used in military applications, machine vision (e.g. object detection), robotics
and medical imaging, wherever the information of several images must be merged.

Multiview Fusion

Here images of the same scene are taken from multiple or different views at the same time.

Multimodal Fusion

Here the images come from different modalities, such as panchromatic, multispectral, visible,
infrared and remote sensing images.


Common methods of image fusion

• Weighted averaging pixel wise

• Fusion in transform domain

• Object level fusion

Multifocus Fusion

Here images are taken of a 3-D scene with different focal lengths. The original image can be divided
into regions such that every region is in focus in at least one channel of the image.

4.4 APPLICATIONS AND USES OF IMAGE FUSION


1) Fusion is basically used in the remote sensing and satellite area for a proper view of the satellite vision.

2) It is used in medical imaging, where diseases are analysed through imaging from spatial resolution
and frequency perspectives.

3) Image fusion is used in military areas, where all the perspectives are used to detect threats and in
other resolution-based work.

4) For machine vision it is effectively used to visualize the two states and conclude whether the image
is suited for human vision.

5) In the robotics field, fused images are mostly used to analyse the frequency variations in the view of
images.

6) Image fusion is used with artificial neural networks in 3-D, where the focal length varies according to
the wavelength transformation.

4.5 ADVANTAGES AND DISADVANTAGES OF IMAGE FUSION

4.5.1 ADVANTAGES
1. It is the easiest to interpret.

2. The fused image is true in colour.

3. It is best for identification and recognition.


4. It is low in cost.

5. It has a high resolution, used for multiscale images.

6. Image fusion gives improved fused images in fog.

7. Image fusion maintains the ability to read out signs in all fields.

8. Image fusion has many contrast advantages; basically, it enhances the image from all perspectives.

9. It increases situational or conditional awareness.

10. Image fusion reduces data storage and data transmission.

4.5.2 DISADVANTAGES
1. Fused images have less capability in adverse weather conditions; this commonly occurs when image
fusion is done with a single sensor fusion technique.

2. Scenes are not easily visible at night; this is mainly due to camera aspects, whether in day or night.

3. More source energy is necessary for the good visualization of images based on spatial frequency.

4. Due to rain or fog, visualization is not clear; if the two source images are captured in this type of
weather, fusion will give a poor output.

5. In this process there is a large chance of data loss.


CHAPTER-5
DECOMPOSITION AND RECONSTRUCTION OF
MEDICAL IMAGES

5.1 DECOMPOSITION OF WAVELETS


Wavelet transforms (WT) convert spatial domain information into frequency domain information.
The Fourier transformed signal X_FT(f) gives the global frequency distribution of the time signal
x(t):

X_FT(f) = ∫ x(t) e^(−j2πft) dt.

The original signal can be reconstructed using the inverse Fourier transform:

x(t) = ∫ X_FT(f) e^(j2πft) df.

Before the wavelet transform, the best known method for this purpose was the Fourier transform (FT).
The limitations of the FT have been overcome by the Short Time Fourier Transform (STFT), which is
able to retrieve both frequency and time information from a signal. In the STFT, a windowing concept
is used along with the FT: the FT is applied over a windowed part of the signal, and the window is
then moved along the signal.

The advantage of the wavelet transform over the Fourier transform is local analysis: wavelet analysis
can reveal signal aspects like discontinuities, breakdown points etc. more clearly than the FT.

A wavelet basis set starts with two orthogonal functions: the scaling function or father wavelet φ(t)
and the wavelet function or mother wavelet ψ(t); by scaling and translation of these two orthogonal
functions we obtain a complete basis set. The wavelet transform can be expressed as

X_WT(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt,


where ψ* is the complex conjugate and the function ψ is called the wavelet function or mother
wavelet.

The wavelet transform can be implemented in two ways: the continuous wavelet transform and the
discrete wavelet transform. The continuous wavelet transform (CWT) is defined by

X_WT(τ, s) = (1/√|s|) ∫ x(t) ψ*((t − τ)/s) dt.

The transformed signal X_WT(τ, s) is a function of the translation parameter τ and the scale parameter
s. The mother wavelet is denoted by ψ; the * indicates the complex conjugate.

Where the CWT performs analysis by contraction and dilation of the mother function, the scenario in
the discrete wavelet transform (DWT) is different: the DWT uses filter banks to analyse and reconstruct
the signal. This appealing procedure was presented by S. Mallat in 1989 and utilizes the decomposition
of the wavelet transform in terms of low pass (averaging) filters and high pass (differencing) filters. A
filter bank separates a signal into different frequency bands. The DWT of a discrete time-domain signal
is computed by successive low pass and high pass filtering, as shown in the figure; this is known as the
Mallat tree decomposition. In the figure, the signal is denoted by the sequence x[n], where n is an
integer. The low pass filter is denoted by L0 and the high pass filter by H0. At each level, the high pass
filter produces the detail information or detail coefficients d[n], while the low pass filter associated with
the scaling function produces the approximation coefficients a[n]. The input data is passed through the
set of low pass and high pass filters, and the outputs of the filters are down sampled by 2. Increasing
the rate of an already sampled signal is called up sampling, whereas decreasing the rate is called down
sampling.
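The successive filtering and down sampling can be sketched with the Haar analysis pair as the low pass L0 and high pass H0 (illustrative NumPy code; a real implementation would typically use a wavelet library such as PyWavelets):

```python
import numpy as np

def analysis_step(x, L0, H0):
    """One level of the Mallat tree: filter with the low pass L0 and the
    high pass H0, then down sample by 2 (keep every second output)."""
    a = np.convolve(x, L0)[1::2]   # approximation coefficients a[n]
    d = np.convolve(x, H0)[1::2]   # detail coefficients d[n]
    return a, d

def wavedec(x, L0, H0, levels):
    """Successive low pass / high pass filtering: at each level the
    approximation output is fed back into the same filter pair."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = analysis_step(a, L0, H0)
        coeffs.append(d)
    coeffs.append(a)
    return coeffs

# Haar analysis filters (orthonormal low/high pass pair)
L0 = np.array([1.0, 1.0]) / np.sqrt(2.0)
H0 = np.array([1.0, -1.0]) / np.sqrt(2.0)

x = np.array([4.0, 4.0, 2.0, 2.0, 6.0, 6.0, 0.0, 0.0])
a1, d1 = analysis_step(x, L0, H0)
```

Because each pair of samples in x is constant, all the detail coefficients at the first level are zero, and since the filter pair is orthonormal the signal energy is preserved across the two bands.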


Figure 5.1 Three-level wavelet decomposition tree

The DWT of an image represents the image as a sum of wavelets. Here four isometries S0, SH, SV
and SD with mutually orthogonal ranges are used, satisfying the sum rule

S0 S0* + SH SH* + SV SV* + SD SD* = I,

with I denoting the identity operator in an appropriate Hilbert space. Human eyes are less sensitive to
high frequency details. Here the Haar DWT, the simplest type of DWT, has been applied. In the
1D-DWT, the average of the fine details in a small area is recorded.

In the case of the 2-D DWT, we first perform one step of the transform on all rows. The left side of
the matrix then contains the down-sampled low-pass coefficients of each row; the right side contains
the high-pass coefficients, as shown in Figure 5.2.

Figure 5.2 First stage of step 1 wavelet decomposition


Next, we apply one step to all columns. This results in four types of coefficients:
LL, HL, LH and HH, as follows:

Figure 5.3 Final stage of step 1 wavelet decomposition

Figure 5.4 Block diagram of 1-step 2-D DWT


The subdivided squares represent the application of the pyramid subdivision algorithm
to image processing, as it operates on squares of pixels. At each subdivision step the top
left-hand square holds averages of neighbouring pixel values, taken with respect to the
chosen low-pass filter, while the three directions, horizontal, vertical and diagonal,
hold the detail differences, each represented by a separate band and filter. The
coefficients produced by low-pass filtering in both directions can be decomposed further
in the next step.
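The row-then-column procedure of Figures 5.2 to 5.4 can be sketched as follows (an illustrative Python sketch with Haar filters; the project itself uses MATLAB, and the sub-band labels follow one common convention):

```python
import math

def haar_1d(seq):
    """One level of 1-D Haar analysis: (low-pass half, high-pass half)."""
    h = 1 / math.sqrt(2)
    lo = [h * (seq[i] + seq[i + 1]) for i in range(0, len(seq) - 1, 2)]
    hi = [h * (seq[i] - seq[i + 1]) for i in range(0, len(seq) - 1, 2)]
    return lo, hi

def haar_2d_step(img):
    """One step of the 2-D DWT: transform all rows, then all columns,
    yielding the four sub-bands LL, HL, LH, HH."""
    rows = [haar_1d(r) for r in img]
    L = [lo for lo, _ in rows]          # left half: row low-pass
    H = [hi for _, hi in rows]          # right half: row high-pass

    def column_split(block):
        cols = list(map(list, zip(*block)))              # transpose
        pairs = [haar_1d(c) for c in cols]
        top = list(map(list, zip(*[p[0] for p in pairs])))     # column low-pass
        bottom = list(map(list, zip(*[p[1] for p in pairs])))  # column high-pass
        return top, bottom

    LL, LH = column_split(L)   # low-pass rows, split by columns
    HL, HH = column_split(H)   # high-pass rows, split by columns
    return LL, HL, LH, HH
```

On a constant 2x2 image all three detail bands come out zero and LL carries the scaled average, as expected for an averaging/differencing scheme.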

Figure 5.5 2-Step decomposition

5.2 MEDICAL IMAGES LIKE MRI AND CT


5.2.1 MAGNETIC RESONANCE IMAGING
Magnetic resonance imaging (MRI) is a medical imaging technique used
in radiology to form pictures of the anatomy and the physiological processes of the
body in both health and disease. MRI scanners use strong magnetic fields, radio
waves, and field gradients to generate images of the organs in the body.

MRI is based upon the science of nuclear magnetic resonance (NMR).


Certain atomic nuclei can absorb and emit radio-frequency energy when placed in an
external magnetic field. In clinical and research MRI, hydrogen atoms are most often
used to generate a detectable radio-frequency signal that is received by antennas close
to the anatomy being examined. Hydrogen atoms exist naturally in people and other
biological organisms in abundance, particularly in water and fat. For this reason, most
MRI scans essentially map the location of water and fat in the body.
Pulses of radio waves excite the nuclear spin energy transition, and magnetic field
gradients localize the signal in space.


By varying the parameters of the pulse sequence, different contrasts can be
generated between tissues based on the relaxation properties of the hydrogen atoms
therein.

Since its early development in the 1970s and 1980s, MRI has proven to be a
highly versatile imaging technique. While MRI is most prominently used in diagnostic
medicine and biomedical research, it can also be used to form images of non-living
objects. MRI scans are capable of producing a variety of chemical and physical data,
in addition to detailed spatial images.

MRI is widely used in hospitals and clinics for medical diagnosis, staging of
disease and follow-up without exposing the body to ionizing radiation.

Figure 5.6 MRI Image

5.2.2 COMPUTED TOMOGRAPHY (CT):

A CT scan makes use of computer-processed combinations of many X-ray images taken
from different angles to produce cross-sectional (tomographic) images (virtual
"slices") of specific areas of a scanned object, allowing the user to see inside the
object without cutting. Other terms include computed axial tomography (CAT scan) and
computer-aided tomography.


Digital geometry processing is used to generate a three-dimensional image of the
inside of the object from a large series of two-dimensional radiographic images taken
around a single axis of rotation. Medical imaging is the most common application of
X-ray CT; its cross-sectional images are used for diagnostic and therapeutic purposes
in various medical disciplines. X-ray CT also has industrial applications, such as
industrial computed tomography scanning.

The term "computed tomography" (CT) is often used to refer to X-ray CT, because it is
the most commonly known form. However, many other types of CT exist, such as positron
emission tomography (PET) and single-photon emission computed tomography (SPECT).
X-ray tomography is one form of radiography, along with many other forms of
tomographic and non-tomographic radiography.

CT produces a volume of data that can be manipulated in order to demonstrate various
bodily structures based on their ability to block the X-ray beam. Although
historically the images generated were in the axial or transverse plane, perpendicular
to the long axis of the body, modern scanners allow this volume of data to be
reformatted in various planes or even as volumetric (3-D) representations of structures.
Although most common in medicine, CT is also used in other fields, such as
non-destructive materials testing. Another example is archaeological use, such as
imaging the contents of sarcophagi. Individuals responsible for performing CT exams
are called radiographers or radiologic technologists.

Figure 5.7 CT Image


CHAPTER-6
DISCRETE WAVELET TRANSFORM

6.1 INTRODUCTION TO WAVELETS


A wavelet is a wave-like oscillation with an amplitude that begins at zero,
increases, and then decreases back to zero. It can typically be visualized as a "brief
oscillation" like one recorded by a seismograph or heart monitor. Generally, wavelets
are purposefully crafted to have specific properties that make them useful for signal
processing. Wavelets can be combined, using a "reverse, shift, multiply and integrate"
technique called convolution, with portions of a known signal to extract information
from the unknown signal.

Figure 6.1 Seismic Wavelet

For example, a wavelet can be created having a frequency of middle C and a short
duration of roughly a 32nd note. If this wavelet were convolved with a signal created
from the recording of a song, the resulting signal would be useful for determining when
the middle C note was being played in the song.


Mathematically, the wavelet will correlate with the signal if the unknown signal
contains information of similar frequency. This concept of correlation is at the core of
many practical applications of wavelet theory.
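The correlation idea can be sketched very simply (illustrative Python; the toy signal and the names `correlate`, `w` and `s` are made up for this example): sliding a short wavelet along a signal and recording the inner product at each offset produces a peak where the signal locally resembles the wavelet.

```python
def correlate(signal, wavelet):
    """Slide the wavelet along the signal; record the inner product at
    each offset. Peaks mark where the signal resembles the wavelet."""
    n, m = len(signal), len(wavelet)
    return [sum(signal[i + j] * wavelet[j] for j in range(m))
            for i in range(n - m + 1)]

w = [1.0, -1.0]                    # a tiny "wavelet"
s = [0.0, 0.0, 1.0, -1.0, 0.0]     # signal containing the pattern at offset 2
scores = correlate(s, w)           # the largest score occurs at offset 2
```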
As a mathematical tool, wavelets can be used to extract information from many
different kinds of data, including, but certainly not limited to, audio signals and
images. Sets of wavelets are generally needed to analyze data fully. A set of
"complementary" wavelets will decompose data without gaps or overlap, so that the
decomposition process is mathematically reversible. Thus, sets of complementary
wavelets are useful in wavelet-based compression/decompression algorithms, where it
is desirable to recover the original information with minimal loss. In formal terms,
this representation is a wavelet series representation of a square-integrable function
with respect to either a complete, orthonormal set of basis functions, or an
overcomplete set or frame of a vector space, for the Hilbert space of
square-integrable functions. This is accomplished through coherent states.

6.2 DEFINITION OF WAVELET

There are a number of ways of defining a wavelet (or a wavelet family).

Scaling filter:
An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite
impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets,
separate decomposition and reconstruction filters are defined.
For analysis with orthogonal wavelets, the high-pass filter is calculated as the
quadrature mirror filter of the low-pass filter, and the reconstruction filters are the
time reverse of the decomposition filters. Daubechies and Symlet wavelets can be
defined by their scaling filter.
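The quadrature-mirror relation mentioned above can be written down directly. A minimal sketch (illustrative Python; `qmf_highpass` is a name chosen for this example): given an orthogonal scaling (low-pass) filter g, the analysis high-pass filter is obtained by reversing g and alternating signs.

```python
def qmf_highpass(g):
    """Quadrature mirror filter: h[n] = (-1)**n * g[N-1-n] derives the
    high-pass analysis filter from the scaling (low-pass) filter g."""
    N = len(g)
    return [(-1) ** n * g[N - 1 - n] for n in range(N)]

# Haar scaling filter (length 2, sum 1):
qmf_highpass([0.5, 0.5])   # -> [0.5, -0.5]
```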

Scaling function:
Wavelets are defined by the wavelet function ψ (t) (i.e. the mother wavelet) and
scaling function φ (t) (called father wavelet) in the time domain.
The wavelet function is in effect a band-pass filter and scaling it for each level halves
its bandwidth.


This creates the problem that, in order to cover the entire spectrum, an infinite
number of levels would be required. The scaling function filters the lowest level of
the transform and ensures that the entire spectrum is covered.
For a wavelet with compact support, φ(t) can be considered finite in length and
is equivalent to the scaling filter g. Meyer wavelets can be defined by their scaling
functions.

Wavelet function:
The wavelet has only a time-domain representation, as the wavelet function ψ(t).
For instance, Mexican hat wavelets can be defined directly by a wavelet function.

6.3 WAVELET TRANSFORMS


A wavelet is a mathematical function used to divide a given function or
continuous-time signal into different scale components. Usually one can assign a
frequency range to each scale component. Each scale component can then be studied
with a resolution that matches its scale. A wavelet transform is the representation of a
function by wavelets. The wavelets are scaled and translated copies (known as
"daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known
as the "mother wavelet"). Wavelet transforms have advantages over traditional Fourier
transforms for representing functions that have discontinuities and sharp peaks, and
for accurately deconstructing and reconstructing finite, non-periodic and/or non-
stationary signals.
Wavelet transforms are classified into discrete wavelet transforms (DWTs) and
continuous wavelet transforms (CWTs). Note that both DWT and CWT are
continuous-time (analog) transforms. They can be used to represent continuous-time
(analog) signals. CWTs operate over every possible scale and translation whereas
DWTs use a specific subset of scale and translation values or representation grid.
There are a large number of wavelet transforms each suitable for different
applications. For a full list see list of wavelet-related transforms but the common ones
are listed below:


 Continuous wavelet transform (CWT)
 Discrete wavelet transform (DWT)
 Fast wavelet transform (FWT)
 Lifting scheme & Generalized Lifting Scheme
 Wavelet packet decomposition (WPD)
 Stationary wavelet transform (SWT)
 Fractional Fourier transform (FRFT)

 Generalized transforms:
There are a number of generalized transforms of which the wavelet transform is
a special case. For example, Joseph Segman introduced scale into the Heisenberg
group, giving rise to a continuous transform space that is a function of time, scale, and
frequency. The CWT is a two-dimensional slice through the resulting 3-D
time-scale-frequency volume.

Another example of a generalized transform is the chirplet transform, in which
the CWT is also a two-dimensional slice through the chirplet transform. An important
application area for generalized transforms involves systems in which high frequency
resolution is crucial. For example, dark-field electron optical transforms intermediate
between direct and reciprocal space have been widely used in the harmonic analysis
of atom clustering, i.e. in the study of crystals and crystal defects. Now that
transmission electron microscopes are capable of providing digital images with
picometer-scale information on atomic periodicity in nanostructures of all sorts, the
range of pattern recognition and strain/metrology applications for intermediate
transforms with high frequency resolution (like brushlets and ridgelets) is growing
rapidly.

The fractional wavelet transform (FRWT) is a generalization of the classical
wavelet transform in the fractional Fourier transform domains. This transform is
capable of providing the time- and fractional-domain information simultaneously and
representing signals in the time-fractional-frequency plane.

6.4 DISCRETE WAVELET TRANSFORM


In numerical analysis and functional analysis, a discrete wavelet
transform (DWT) is any wavelet transform for which the wavelets are discretely
sampled.


As with other wavelet transforms, a key advantage it has over Fourier transforms is
temporal resolution: it captures both frequency and location information (location in
time).

Hungarian mathematician Alfréd Haar invented the first DWT. For an input
represented by a list of numbers, the Haar wavelet transform may be considered to
pair up input values, storing the difference and passing the sum. This process is
repeated recursively, pairing up the sums to provide the next scale, which leads
to a hierarchy of differences and a final sum. Other forms of discrete wavelet
transform include the non-decimated (undecimated) wavelet transform, where
down-sampling is omitted, and the Newland transform, where an orthonormal basis of
wavelets is formed from appropriately constructed top-hat filters in frequency space.
Wavelet packet transforms are also related to the discrete wavelet transform, and the
complex wavelet transform is another form.
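The pair-up-and-recurse description above translates almost literally into code. An illustrative Python sketch (`haar_pairs` is a name chosen for this example), using unnormalized sums and differences for clarity and assuming the input length is a power of two:

```python
def haar_pairs(x):
    """Recursive Haar DWT as described: pair values, keep the differences,
    recurse on the sums. Returns [total sum, coarse diffs, ..., fine diffs]."""
    if len(x) == 1:
        return list(x)
    sums  = [x[i] + x[i + 1] for i in range(0, len(x), 2)]
    diffs = [x[i] - x[i + 1] for i in range(0, len(x), 2)]
    return haar_pairs(sums) + diffs

haar_pairs([9, 7, 3, 5])   # -> [24, 8, 2, -2]
```

Here 24 is the final sum, 8 is the coarse difference (16 − 8), and 2 and −2 are the finest-scale differences.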

The Haar DWT illustrates the desirable properties of wavelets in general. First,
it can be performed in O(n) operations; second, it captures not only a notion of the
frequency content of the input, by examining it at different scales, but also temporal
content, i.e. the times at which these frequencies occur. Combined, these two
properties make the fast wavelet transform (FWT) an alternative to the
conventional fast Fourier transform (FFT).
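Because the pairing stores sums and differences, each level is exactly invertible, which is what reconstruction relies on. A sketch of one inverse level under the same unnormalized convention (illustrative Python; given s = a + b and d = a − b, recover a and b):

```python
def inverse_haar_level(sums, diffs):
    """Invert one analysis level of the unnormalized Haar pairing:
    from s = a + b and d = a - b, recover a = (s+d)/2 and b = (s-d)/2."""
    x = []
    for s, d in zip(sums, diffs):
        x.append((s + d) / 2)
        x.append((s - d) / 2)
    return x

inverse_haar_level([16, 8], [2, -2])   # -> [9.0, 7.0, 3.0, 5.0]
```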

6.5 APPLICATIONS OF WAVELETS


Generally, an approximation to DWT is used for data compression if a signal is
already sampled, and the CWT for signal analysis. Thus, DWT approximation is
commonly used in engineering and computer science, and the CWT in scientific
research.

Like some other transforms, wavelet transforms can be used to transform data
and then encode the transformed data, resulting in effective compression. For
example, JPEG 2000 is an image compression standard that uses biorthogonal
wavelets. This means that although the frame is overcomplete, it is a tight frame (see
types of frames of a vector space), and the same frame functions (except for
conjugation in the case of complex wavelets) are used for both analysis and synthesis,
i.e., in both the forward and inverse transform. For details, see wavelet compression.


A related use is for smoothing/denoising data based on wavelet coefficient
thresholding, also called wavelet shrinkage. By adaptively thresholding the wavelet
coefficients that correspond to undesired frequency components, smoothing and/or
denoising operations can be performed. Wavelet transforms are also starting to be used
for communication applications. Wavelet OFDM is the basic modulation scheme used
in HD-PLC (a power line communications technology developed by Panasonic) and
in one of the optional modes included in the IEEE 1901 standard. Wavelet OFDM can
achieve deeper notches than traditional FFT OFDM, and it does not require a guard
interval (which usually represents significant overhead in FFT OFDM systems).

As a representation of a signal:

Often, signals can be represented well as a sum of sinusoids. However, consider
a non-continuous signal with an abrupt discontinuity; this signal can still be
represented as a sum of sinusoids, but it requires an infinite number of them, an
observation known as the Gibbs phenomenon. This, then, requires an infinite number of
Fourier coefficients, which is not practical for many applications, such as
compression. Wavelets are more useful for describing these signals with
discontinuities because of their time-localized behaviour (both Fourier and wavelet
transforms are frequency-localized, but wavelets have an additional time-localization
property). Because of this, many types of signals in practice may be non-sparse in the
Fourier domain, but very sparse in the wavelet domain. This is particularly useful in
signal reconstruction, especially in the recently popular field of compressed sensing.
(Note that the short-time Fourier transform (STFT) is also localized in time and
frequency, but there are often problems with the frequency-time resolution trade-off.
Wavelets are better signal representations because of multiresolution analysis.)

This motivates why wavelet transforms are now being adopted for a vast
number of applications, often replacing the conventional Fourier transform. Many
areas of physics have seen this paradigm shift, including molecular dynamics,
ab initio calculations, astrophysics, density-matrix localisation, and seismology,
optics, turbulence and quantum mechanics. This change has also occurred in image
processing, EEG, EMG, ECG analyses, brain rhythms, DNA analysis, protein
analysis, climatology, human sexual response analysis, general signal processing,
speech recognition, computer graphics, multifractal analysis, and sparse coding.


In computer vision and image processing, the notion of scale-space representation and
Gaussian derivative operators is regarded as a canonical multi-scale representation.

Wavelet denoising

Suppose we measure a noisy signal x = s + v, where v is additive Gaussian noise.
Assume s has a sparse representation in a certain wavelet basis, so that the vector of
wavelet coefficients p = W^T s has most elements equal to 0 or close to 0. Since W is
orthogonal, the estimation problem amounts to the recovery of a sparse signal in
i.i.d. Gaussian noise. As p is sparse, one method is to apply a Gaussian mixture model
for p.

Assume σ1² is the variance of the "significant" coefficients and σ2² is the variance of
the "insignificant" coefficients. The resulting shrinkage factor, which depends on these
prior variances, is such that small coefficients are set nearly to 0 while large
coefficients are left unaltered. Small coefficients are mostly noise, whereas large
coefficients contain the actual signal. Finally, the inverse wavelet transform is
applied to obtain the estimate ŝ = W p̂.
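The shrinkage just described can be illustrated with the simpler soft-thresholding rule, which has the same qualitative effect: small, mostly-noise coefficients go to 0 and large ones are kept. This is an illustrative Python sketch, not the Gaussian-mixture estimator itself:

```python
def soft_threshold(coeffs, t):
    """Wavelet shrinkage by soft thresholding: coefficients with magnitude
    at most t are set to 0; larger ones are shrunk toward 0 by t."""
    out = []
    for c in coeffs:
        if abs(c) <= t:
            out.append(0.0)
        else:
            out.append(c - t if c > 0 else c + t)
    return out

soft_threshold([5.0, 0.3, -0.2, -4.0], 1.0)   # -> [4.0, 0.0, 0.0, -3.0]
```

In a full denoising pipeline this rule is applied to the detail coefficients, after which the inverse wavelet transform reconstructs the smoothed signal.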

6.6 DWT of Images


Here one comes back to dyadic multiresolution: a wavelet orthonormal basis
of L²(R²) is built up from (tensor) products involving

 a scaling function ϕ associated to a multiresolution {V_j}, j ∈ Z, of L²(R),
 a wavelet ψ such that L²(R) = ⊕_j W_j.

For this purpose, one defines three wavelets:

 ψ¹(x1, x2) = ϕ(x1)ψ(x2) (horizontal),
 ψ²(x1, x2) = ψ(x1)ϕ(x2) (vertical),
 ψ³(x1, x2) = ψ(x1)ψ(x2) (diagonal).

Moreover, one puts, for 1 ≤ k ≤ 3,

 ψ^k_{j,n}(x) = 2^j ψ^k(2^j x1 − n1, 2^j x2 − n2).

The family {ψ¹_{j,n}, ψ²_{j,n}, ψ³_{j,n}} forms an orthonormal basis of the subspace of
details

 W²_j = (V_j ⊗ W_j) ⊕ (W_j ⊗ V_j) ⊕ (W_j ⊗ W_j)

at scale j, and L²(R²) = ⊕_j W²_j. The whole image s(x) decomposes as

 s(x) = Σ_{k,j,n} d_k(j,n) ψ^k_{j,n}(x).


The coefficient

 d_k(j,n) = ∫_{R²} 2^j ψ^k(2^j x1 − n1, 2^j x2 − n2) s(x) d²x = ⟨ψ^k_{j,n}, s⟩,

as a function of the three discrete variables k, j and n, is the discrete wavelet transform
of the image.


CHAPTER-7
SOFTWARE TOOLS

7.1 INTRODUCTION TO MATLAB


MATLAB is a software package for high-performance numerical computation and
visualization. It provides an interactive environment with hundreds of built-in
functions for technical computation, graphics and animation. The name MATLAB
stands for MATrix LABoratory.

Figure 7.1 MATLAB toolbox

At its core, MATLAB is essentially a set (a "toolbox") of routines (called "m-files"
or "mex files") that sit on your computer, and a window that allows you to create
new variables with names (e.g. voltage and time) and process those variables with any
of those routines (e.g. plot voltage against time, find the largest voltage, etc.).


It also allows you to put a list of your processing requests together in a file and
save that combined list with a name so that you can run all of those commands in the
same order at some later time. Furthermore, it allows you to run such lists of
commands such that you pass in data and/or get data back out (i.e. the list of
commands is like a function in most programming languages). Once you save a
function, it becomes part of your toolbox (i.e. it now looks to you as if it were part of
the basic toolbox that you started with). For those with computer programming
backgrounds: Note that MATLAB runs as an interpretive language (like the old
BASIC). That is, it does not need to be compiled. It simply reads through each line of
the function, executes it, and then goes on to the next line. (In practice, a form of
compilation occurs when you first run a function, so that it can run faster the next time
you run it.)

7.2 MATLAB Windows:


MATLAB works through three basic windows:

Figure 7.2 MATLAB window


Command window: This is the main window, characterized by the MATLAB command
prompt >>. When you launch the application, MATLAB puts you in this window. All
commands, including those for user-written programs, are typed in this window at the
MATLAB prompt.

Graphics window: The output of all graphics commands typed in the command
window is flushed to the graphics or figure window, a separate gray window with
a white background color. The user can create as many figure windows as the system
memory will allow.

Edit window: This is where you write, edit, create and save your own programs, in
files called M-files.

Figure 7.3 Edit window


Input-output: MATLAB supports interactive computation, taking input from the
screen and flushing the output to the screen. In addition, it can read input files and
write output files.

Data type: The fundamental data type in MATLAB is the array. It encompasses
several distinct data objects: integers, real numbers, matrices, character strings,
structures and cells. There is no need to declare variables as real or complex;
MATLAB automatically sets the variable to be real.


Dimensioning: Dimensioning is automatic in MATLAB. No dimension statements
are required for vectors or arrays. We can find the dimensions of an existing matrix
or vector with the size and length commands.

Where to work in MATLAB

All programs and commands can be entered either in the

a) Command window

b) As an M-file using the MATLAB editor

Note: Save all M-files in the folder 'work' in the current directory; otherwise you
will have to locate the file when running it.

Typing quit at the command prompt (>> quit) will close the MATLAB Development
Environment.

For clarification regarding built-in functions such as plot, type help followed by the
topic, e.g. help plot.

7.3 BASIC INSTRUCTIONS IN MATLAB


1. T = 0: 1:10

This instruction indicates a vector T which as initial value 0 and final value 10 with
an increment of 1

Therefore T = [0 1 2 3 4 5 6 7 8 9 10]

2. F= 20: 1: 100

Therefore F = [20 21 22 23 24 ……… 100]

3. T= 0:1/pi: 1

Therefore T= [0, 0.3183, 0.6366, 0.9549]

4. zeros (1, 3)

The above instruction creates a vector of one row and three columns whose values are
zero


Output= [0 0 0]

5. zeros(2,4)

Output = 0 0 0 0
         0 0 0 0

6. ones(5,2)

The above instruction creates a matrix of five rows and two columns whose values are one

Output = 1 1
         1 1
         1 1
         1 1
         1 1

7. a = [1 2 3], b = [4 5 6]

a.*b = [4 10 18]

8. If C = [2 2 2]

b.*C results in [8 10 12]

9. plot (t, x)

If x = [6 7 8 9] t = [1 2 3 4]

Figure 7.4 Plot figure

This instruction will display a figure window which indicates the plot of x versus t


10. stem(t,x): This instruction will display a figure window with a discrete stem plot of x versus t

11. subplot: This function divides the figure window into rows and columns.

subplot(2,2,1) divides the figure window into 2 rows and 2 columns; the 1 represents the
number of the figure.

subplot(3,1,2) divides the figure window into 3 rows and 1 column; the 2 represents the
number of the figure.

12. Conv
Syntax: w = conv(u,v)
Description: w = conv(u,v) convolves vectors u and v. Algebraically, convolution is
the same operation as multiplying the polynomials whose coefficients are the elements
of u and v.
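The polynomial-multiplication view of conv can be checked with a short sketch (illustrative Python mirroring MATLAB's conv; `conv` here is a hand-written example function, not a library call):

```python
def conv(u, v):
    """Full convolution of u and v; equivalent to multiplying the
    polynomials whose coefficient vectors are u and v."""
    w = [0.0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            w[i + j] += a * b   # accumulate each product at its shifted index
    return w

conv([1, 2], [1, 3])   # -> [1.0, 5.0, 6.0], i.e. (x+2)(x+3) = x^2 + 5x + 6
```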
13. Disp
Syntax: disp(X)
Description: disp(X) displays an array, without printing the array name. If X contains
a text string, the string is displayed. Another way to display an array on the screen is
to type its name, but this prints a leading "X=," which is not always desirable. Note
that disp does not display empty arrays.
14. xlabel
Syntax: xlabel('string')
Description: xlabel('string') labels the x-axis of the current axes.
15. ylabel
Syntax : ylabel('string')
Description: ylabel('string') labels the y-axis of the current axes.

16. Title


Syntax : title('string')

Description: title('string') outputs the string at the top and in the center of the current
axes.

17. grid on

Syntax: grid on

Description: grid on adds major grid lines to the current axes.

18. FFT Discrete Fourier transform.

FFT(X) is the discrete Fourier transform (DFT) of vector X. For matrices, the FFT
operation is applied to each column. For N-D arrays, the FFT operation operates on
the first non-singleton dimension.

FFT(X,N) is the N-point FFT, padded with zeros if X has less than N points and
truncated if it has more.

19. ABS Absolute value.

ABS(X) is the absolute value of the elements of X. When X is complex, ABS(X) is the
complex modulus (magnitude) of the elements of X.

20. ANGLE Phase angle.

ANGLE (H) returns the phase angles, in radians, of a matrix with complex elements.
21. INTERP Resample data at a higher rate using low pass interpolation.

Y = INTERP(X, L) resamples the sequence in vector X at L times the original sample
rate. The resulting resampled vector Y is L times longer: LENGTH(Y) = L*LENGTH(X).

22. DECIMATE Resample data at a lower rate after low pass filtering.

Y = DECIMATE(X, M) resamples the sequence in vector X at 1/M times the original
sample rate. The resulting resampled vector Y is M times shorter, i.e., LENGTH(Y)
= CEIL(LENGTH(X)/M). By default, DECIMATE filters the data with an 8th-order
Chebyshev Type I low-pass filter with cutoff frequency 0.8*(Fs/2)/M before
resampling.


7.4 DIP Using MATLAB


MATLAB deals with

1. Basic flow control and the programming language
2. How to write scripts (main functions) with MATLAB
3. How to write functions with MATLAB
4. How to use the debugger
5. How to use the graphical interface
6. Examples of useful scripts and functions for image processing

After learning about MATLAB, we will be able to use it as a tool to help us with our
maths, electronics, signal & image processing, statistics, neural networks, control and
automation.

MATLAB resources

Language: High-level matrix/vector language with

 Scripts and main programs
 Functions
 Flow statements (for, while)
 Control statements (if, else)
 Data structures (struct, cells)
 Input/outputs (read, write, save)
 Object-oriented programming

Environment

 Command window
 Editor
 Debugger
 Profiler (evaluate performance)

Mathematical libraries

 Vast collection of functions


API

 Call C functions from MATLAB
 Call MATLAB functions from C
Scripts and main programs

In MATLAB, scripts are the equivalent of main programs. The variables declared
in a script are visible in the workspace and can be saved. Scripts can therefore
take a lot of memory if you are not careful, especially when dealing with images. To
create a script, start the editor, write your code and run it.

7.5 MATLAB Functions:


ADDPATH :
Add directories to MATLAB search path

Graphical Interface:

As an alternative to the ‘addpath’ function, use the Set Path dialog box. To
open it, select Set Path from the File menu in the MATLAB desktop.

Syntax:

 addpath('directory')
 addpath('dir','dir2','dir3' ...)
 addpath('dir','dir2','dir3' ...'-flag')
 addpath dir1 dir2 dir3 ... -flag

Description:

 Addpath('directory') adds the specified directory to the top (also called front) of
the current MATLAB search path. Use the full pathname for directory.
 Addpath('dir','dir2','dir3' ...) adds all the specified directories to the top of the
path. Use the full pathname for each dir.

 Addpath('dir','dir2','dir3' ...'-flag') adds the specified directories to either the top


or bottom of the path, depending on the value of flag.

 Addpath dir1 dir2 dir3 ... -flag is the unquoted form of the syntax.


INPUT :
Request user input

Syntax:

 user_entry = input('prompt')
 user_entry = input('prompt', 's')

Description:

 The response to the input prompt can be any MATLAB expression, which is
evaluated using the variables in the current workspace.
 user_entry = input('prompt') displays prompt on the screen, waits for input from
the keyboard, and returns the value entered in user_entry.
 user_entry = input('prompt', 's') returns the entered string as a text variable
rather than as a variable name or numerical value.

DISP:

Display text or array

Syntax:

 disp(X)

Description:

 disp(X) displays an array, without printing the array name. If X contains a text
string, the string is displayed.
 Another way to display an array on the screen is to type its name, but this prints
a leading "X =," which is not always desirable.

EXIST:

Check if a variable or file exists


Syntax:

 a = exist('item')
 ident = exist('item','kind')

Description:

 a = exist('item') returns the status of the variable or file item:

0 If item does not exist.

1 If the variable item exists in the workspace.

2 If item is an M-file or a file of unknown type.

3 If item is a MEX-file.

4 If item is a MDL-file.

5 If item is a built-in MATLAB function.

6 If item is a P-file.

7 If item is a directory.

 exist('item') returns 2 if item is on the MATLAB search path. item may be a
MATLABPATH-relative partial pathname. item may be item.ext, but the filename
extension (ext) cannot be mdl, p, or mex.
 ident = exist('item','kind') returns logical true (1) if an item of the
specified kind is found, and returns 0 otherwise. kind may be:


var: checks only for variables.

builtin: checks only for built-in functions.

file: checks only for files.

dir: checks only for directories.

UIGETFILE:

Interactively retrieve a filename

Syntax:

 uigetfile
 uigetfile('FilterSpec')
 uigetfile('FilterSpec','DialogTitle')
 uigetfile('FilterSpec','DialogTitle',x,y)
 [fname,pname] = uigetfile(...)

Description:

 uigetfile displays a dialog box used to retrieve a file. The dialog box lists the
files and directories in the current directory.
 uigetfile('FilterSpec') displays a dialog box that lists files in the current
directory. FilterSpec determines the initial display of files and can be a full filename
or include the * wildcard. For example, '*.m' (the default) causes the dialog box list to
show only MATLAB M-files.


 uigetfile('FilterSpec','DialogTitle') displays a dialog box that has the


title DialogTitle.
 uigetfile('FilterSpec','DialogTitle',x,y) positions the dialog box at position [x,y],
where x and y are the distance in pixel units from the left and top edges of the screen.
Note that some platforms may not support dialog box placement.
 [fname,pname] = uigetfile(...) returns the name and path of the file selected in
the dialog box. After you press the Done button, fname contains the name of the file
selected and pname contains the name of the path selected. If you press
the Cancel button or if an error occurs, fname and pname are set to 0.

IMREAD: Read image from graphics file

Synopsis:

 A = imread(filename,fmt)
 [X,map] = imread(filename,fmt)
 [...] = imread(filename)
 [...] = imread(...,idx) (TIFF only)
 [...] = imread(...,ref) (HDF only)
 [...] = imread(...,'BackgroundColor',BG) (PNG only)
 [A,map,alpha] = imread(...) (PNG only)

Description:

 A = imread(filename,fmt) reads a grayscale or truecolor image
named filename into A. If the file contains a grayscale intensity image, A is a two-
dimensional array. If the file contains a truecolor (RGB) image, A is a three-
dimensional (m-by-n-by-3) array.
 [X,map] = imread(filename,fmt) reads the indexed image
in filename into X and its associated colormap into map. The colormap values are
rescaled to the range [0,1]. X and map are two-dimensional arrays.
 [...] = imread(filename) attempts to infer the format of the file from its content.
filename is a string that specifies the name of the graphics file, and fmt is a string that
specifies the format of the file. If the file is not in the current directory or in a directory
on the MATLAB path, specify the full pathname of a location on your system.


Format           File type

'bmp'            Windows Bitmap (BMP)

'hdf'            Hierarchical Data Format (HDF)

'jpg' or 'jpeg'  Joint Photographic Experts Group (JPEG)

'pcx'            Windows Paintbrush (PCX)

'png'            Portable Network Graphics (PNG)

'tif' or 'tiff'  Tagged Image File Format (TIFF)

'xwd'            X Windows Dump (XWD)
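A minimal round-trip sketch of imread (the filename tiny.png is arbitrary, and the snippet assumes write permission in the current directory):

```matlab
% Write a small 8-bit grayscale image as a lossless PNG, then read it back.
A = uint8([0 64; 128 255]);
imwrite(A, 'tiny.png');       % hypothetical filename in the current directory
B = imread('tiny.png');       % B is a 2-by-2 uint8 array
isequal(A, B)                 % true: PNG round-trips losslessly
```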

STRCAT:

String concatenation

Syntax:

 t = strcat(s1,s2,s3,...)

Description:

 t = strcat(s1,s2,s3,...) horizontally concatenates corresponding rows of the
character arrays s1, s2, s3, etc. Trailing padding is ignored. All the inputs must
have the same number of rows (or any can be a single string). When the inputs are all
character arrays, the output is also a character array.
 When any of the inputs is a cell array of strings, strcat returns a cell array of
strings formed by concatenating corresponding elements of s1, s2, etc. The inputs must
all have the same size (or any can be a scalar). Any of the inputs can also be a character
array.
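Both behaviors can be seen in a short sketch (the strings here are illustrative):

```matlab
% Character-array concatenation; with cell arrays, elements are paired up.
t = strcat('DWT', '_', 'fusion');       % 'DWT_fusion'
c = strcat({'CT', 'MRI'}, '.png');      % cell array {'CT.png', 'MRI.png'}
```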

SUBPLOT:

Create and control multiple axes

Syntax:

 subplot(m,n,p)
 subplot(h)
 subplot('Position',[left bottom width height])
 h = subplot(...)

Description:

 subplot divides the current figure into rectangular panes that are numbered row-
wise. Each pane contains an axes. Subsequent plots are output to the current pane.
 subplot(m,n,p) creates an axes in the p-th pane of a figure divided into an m-by-
n matrix of rectangular panes. The new axes becomes the current axes.
 subplot(h) makes the axes with handle h current for subsequent plotting
commands.
 subplot('Position',[left bottom width height]) creates an axes at the position
specified by a four-element vector. left, bottom, width, and height are in normalized
coordinates in the range from 0.0 to 1.0.
 h = subplot(...) returns the handle to the new axes.

IMSHOW:
Display image

Syntax:

 imshow(I)
 imshow(I,[low high])
 imshow(RGB)
 imshow(BW)


 imshow(X,map)
 imshow(filename)
 himage = imshow(...)
 imshow(...,param1,val1,param2,val2)

Description:

 imshow(I) displays the grayscale image I.


 imshow(I,[low high]) displays the grayscale image I, specifying the display
range for I in [low high]. The value low (and any value less than low) displays as
black; the value high (and any value greater than high) displays as white. Values in
between are displayed as intermediate shades of gray, using the default number of
gray levels. If you use an empty matrix ([]) for [low high], imshow uses [min(I(:))
max(I(:))]; that is, the minimum value in I is displayed as black, and the maximum
value is displayed as white.
 imshow(RGB) displays the truecolor image RGB.
 imshow(BW) displays the binary image BW. imshow displays pixels with the
value 0 (zero) as black and pixels with the value 1 as white.
 imshow(X,map) displays the indexed image X with the colormap map. A
color map matrix may have any number of rows, but it must have exactly three
columns. Each row is interpreted as a color, with the first element specifying the
intensity of red light, the second green, and the third blue. Color intensity can be
specified on the interval 0.0 to 1.0.
 imshow(filename) displays the image stored in the graphics file filename. The
file must contain an image that can be read
by imread or dicomread. imshow calls imread or dicomread to read the image from
the file, but does not store the image data in the MATLAB workspace. If the file
contains multiple images, the first one will be displayed. The file must be in the
current directory or on the MATLAB path.
 himage = imshow(...) returns the handle to the image object created
by imshow.


Parameter Value

'DisplayRange' Two-element vector [LOW HIGH] that controls the display range of
an intensity image. See above for more details about how to set [LOW HIGH].
Note: including the parameter name is optional, except when the image is
specified by a filename. The syntax imshow(I,[LOW HIGH]) is equivalent
to imshow(I,'DisplayRange',[LOW HIGH]). However, the 'DisplayRange' parameter
must be specified when calling imshow with a filename, for
example imshow(filename,'DisplayRange',[LOW HIGH]).

'InitialMagnification' Any scalar value, or the text string 'fit', that specifies
the initial magnification used to display the image. When set to 100, imshow
displays the image at 100% magnification (one screen pixel for each image pixel).
When set to 'fit', imshow scales the entire image to fit in the window.
Initially, imshow always displays the entire image. If the magnification value is
large enough that the image would be too big to display on the screen, imshow
warns and displays the image at the largest magnification that fits on the screen.
By default, the initial magnification parameter is set to the value returned
by iptgetpref('ImshowInitialMagnification').

'XData' Two-element vector that establishes a nondefault spatial coordinate
system by specifying the image XData. The value can have more than two elements,
but only the first and last elements are actually used.

'YData' Two-element vector that establishes a nondefault spatial coordinate
system by specifying the image YData. The value can have more than two elements,
but only the first and last elements are actually used.

IMRESIZE:
Resize image

Syntax

 B = imresize(A,m)
 B = imresize(A,m,method)
 B = imresize(A,[mrows ncols],method)
 B = imresize(...,method,n)
 B = imresize(...,method,h)

Description:

 B = imresize(A,m) returns image B that is m times the size of A. A can be an
indexed, grayscale, RGB, or binary image. If m is between 0 and
1.0, B is smaller than A. If m is greater than 1.0, B is larger than A. When resizing the
image, imresize uses nearest-neighbor interpolation by default.
 B = imresize(A,m,method) returns an image that is m times the size of A using
the interpolation method specified by method. method is a string that can have one of
these values. The default value is enclosed in braces ({}).


Value Description

{'nearest'} Nearest-neighbor interpolation

'bilinear' Bilinear interpolation

'bicubic' Bicubic interpolation

 B = imresize(A,[mrows ncols],method) returns an image of the size specified
by [mrows ncols]. If the specified size does not have the same aspect ratio as the
input image, the output image is distorted.
 B = imresize(...,method,n) When the specified output size is smaller than the
size of the input image, and method is 'bilinear' or 'bicubic', imresize applies a
lowpass filter before interpolation to reduce aliasing. n is an integer scalar specifying
the size of the filter, which is n-by-n. If n is 0 (zero), imresize omits the filtering step.
If you do not specify a size, the default filter size is 11-by-11.
 B = imresize(...,method,h) When the specified output size is smaller than the
size of the input image, and method is 'bilinear' or 'bicubic', you can also specify a
two-dimensional FIR filter, h, such as those returned by ftrans2, fwind1, fwind2,
or fsamp2.
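A short sketch of the scale-factor syntax described above, on a small synthetic image (this assumes the Image Processing Toolbox is installed):

```matlab
% Upsample a 4-by-4 grayscale image by a factor of 2 with bilinear
% interpolation; the result has twice as many rows and columns.
A = uint8(16 * magic(4));        % small synthetic test image
B = imresize(A, 2, 'bilinear');  % B is 8-by-8
size(B)                          % [8 8]
```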

WAITBAR:
Display waitbar

Syntax:

 h = waitbar(x,'title')
 waitbar(x,'title','CreateCancelBtn','button_callback')
 waitbar(...,property_name,property_value,...)
 waitbar(x)
 waitbar(x,h)
 waitbar(x,h,'updated title')


Description

 A waitbar shows what percentage of a calculation is complete, as the calculation


proceeds.
 h = waitbar(x,'title') displays a waitbar of fractional length x. The handle to the
waitbar figure is returned in h. x must be between 0 and 1.
 waitbar(x,'title','CreateCancelBtn','button_callback') specifying CreateCancel
Btn adds a cancel button to the figure that executes the MATLAB commands specified
in button_callback when the user clicks the cancel button or the close figure
button. waitbar sets both the cancel button callback and the
figure CloseRequestFcn to the string specified in button_callback.
 waitbar(...,property_name,property_value,...) optional arguments property_name
and property_value enable you to set the corresponding waitbar figure properties.
 waitbar(x) subsequent calls to waitbar(x) extend the length of the bar to the new
position x.
 waitbar(x,h) extends the length of the bar in the wait bar h to the new position x.


CHAPTER-8
FUSION OF MEDICAL IMAGES IN
MATLAB
The principle of image fusion using wavelets is to merge the wavelet
decompositions of the two original images, applying fusion rules to the
approximation coefficients and to the detail coefficients. Wavelet Toolbox
software is a collection of many functions built on the MATLAB technical
computing environment; it provides tools for the analysis and synthesis of
deterministic and random signals and images using wavelets and wavelet packets.
The image fusion process is carried out in MATLAB using this wavelet toolbox.

8.1 STEPS FOR IMAGE FUSION PROCESS

STEP 1 - Register the CT and MRI medical images

STEP 2 - Perform the wavelet decomposition

STEP 3 - Merge the two images after decomposition

STEP 4 - Restore the new image using image fusion
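The steps above can be sketched in MATLAB as follows. This is a minimal single-level sketch, assuming the Wavelet Toolbox is available and that the two inputs ('mri.png' and 'ct.png' are hypothetical filenames) are pre-registered grayscale images of equal size:

```matlab
% Single-level 2-D DWT fusion: average the approximation coefficients,
% and keep the larger-magnitude detail coefficient at each position.
img1 = double(imread('mri.png'));   % hypothetical registered MRI image
img2 = double(imread('ct.png'));    % hypothetical registered CT image

[cA1, cH1, cV1, cD1] = dwt2(img1, 'db2');   % decompose image 1
[cA2, cH2, cV2, cD2] = dwt2(img2, 'db2');   % decompose image 2

cA = (cA1 + cA2) / 2;                       % mean rule for approximations
pick = @(x, y) x .* (abs(x) >= abs(y)) + y .* (abs(x) < abs(y));
cH = pick(cH1, cH2);  cV = pick(cV1, cV2);  cD = pick(cD1, cD2);

fused = idwt2(cA, cH, cV, cD, 'db2');       % reconstruct the fused image
imshow(uint8(fused))                        % display the fused result
```

The maximum-magnitude rule for details is one common choice; other merge rules (minimum, weighted mean) can be substituted in the same structure.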

Figure 8.1 Image fusion process in toolbox


Figure 8.2 Block diagram

8.2 PERFORMANCE ASSESSMENT


8.2.1 STANDARD DEVIATION

In statistics, the standard deviation (SD, also represented by the Greek
letter sigma σ or the Latin letter s) is a measure used to quantify the amount of
variation or dispersion of a set of data values. A low standard deviation indicates
that the data points tend to be close to the mean (also called the expected value) of
the set, while a high standard deviation indicates that the data points are spread out
over a wider range of values.

The standard deviation of a random variable, statistical population, data set,


or probability distribution is the square root of its variance. It
is algebraically simpler, though in practice less robust, than the average absolute


deviation. A useful property of the standard deviation is that, unlike the variance,
it is expressed in the same units as the data.

There are also other measures of deviation from the norm, including the mean
absolute deviation, which has different mathematical properties from the standard
deviation.

In addition to expressing the variability of a population, the standard deviation


is commonly used to measure confidence in statistical conclusions. For example,
the margin of error in polling data is determined by calculating the expected
standard deviation in the results if the same poll were to be conducted multiple
times. This derivation of a standard deviation is often called the "standard error" of
the estimate or "standard error of the mean" when referring to a mean. It is computed
as the standard deviation of all the means that would be computed from that
population if an infinite number of samples were drawn and a mean for each sample
were computed. It is very important to note that the standard deviation of a
population and the standard error of a statistic derived from that population (such as
the mean) are quite different but related (related by the inverse of the square root of
the number of observations). The reported margin of error of a poll is computed
from the standard error of the mean (or alternatively from the product of the standard
deviation of the population and the inverse of the square root of the sample size,
which is the same thing) and is typically about twice the standard deviation—the
half-width of a 95 percent confidence interval. In science, researchers
commonly report the standard deviation of experimental data, and only
effects that fall much farther than two standard deviations away from what would
have been expected are considered statistically significant—normal random error or
variation in the measurements is in this way distinguished from likely genuine
effects or associations. The standard deviation is also important in finance, where the
standard deviation on the rate of return on an investment is a measure of
the volatility of the investment.
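As a small numeric sketch of the definition above (the data values are illustrative), the population standard deviation can be computed two equivalent ways in MATLAB; std(x,1) normalizes by N, the population form:

```matlab
% Population standard deviation of a small data set, two equivalent ways.
x  = [2 4 4 4 5 5 7 9];
mu = mean(x);                      % mean = 5
sd = sqrt(mean((x - mu).^2));      % square root of the variance = 2
sd_builtin = std(x, 1);            % same value via the built-in (normalize by N)
```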


8.2.2 SIGNAL TO NOISE RATIO


Signal-to-noise ratio, often abbreviated SNR, is an engineering term for the
ratio between the maximum possible power of a signal and the power of the corrupting
noise that affects the fidelity of its representation; defined with the maximum
signal power in this way, it is usually called the peak signal-to-noise ratio (PSNR).
Because many signals have a very wide dynamic range, SNR is usually expressed on the
logarithmic decibel scale.
SNR is most commonly used to measure the quality of reconstruction of lossy
compression codecs (e.g., for image compression). The signal in this case is the
original data, and the noise is the error introduced by compression. When comparing
compression codecs, SNR is an approximation to human perception of reconstruction
quality.
Although a higher SNR generally indicates that the reconstruction is of higher
quality, in some cases it may not. One has to be extremely careful with the range of
validity of this metric; it is only conclusively valid when it is used to compare results
from the same codec (or codec type) and same content.
SNR is most easily defined via the mean squared error (MSE). Given a noise-
free m×n monochrome image I and its noisy approximation K, the MSE is defined as

MSE = (1/(m·n)) Σi Σj [I(i,j) − K(i,j)]²

and the peak signal-to-noise ratio is

PSNR = 10 · log10( MAXI² / MSE )

Here, MAXI is the maximum possible pixel value of the image. When the pixels are
represented using 8 bits per sample, this is 255. More generally, when samples are
represented using linear PCM with B bits per sample, MAXI is 2^B − 1. For color
images with three RGB values per pixel, the definition of PSNR is the same except
that the MSE is the sum over all squared value differences divided by the image size
and by three. Alternatively, for color images the image is converted to a different
color space and the SNR is reported for each channel of that color space, e.g.,
YCbCr or HSL.
Typical values for the PSNR in lossy image and video compression are between
30 and 50 dB, provided the bit depth is 8 bits, where higher is better. For 16-bit data,
typical values are between 60 and 80 dB. Acceptable values for wireless
transmission quality loss are considered to be about 20 dB to 25 dB. In the absence of
noise, the two images I and K are identical, and thus the MSE is zero; in this case the
PSNR is infinite (or undefined; see division by zero).
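A short sketch of the MSE/PSNR computation described above for 8-bit data (the array values are illustrative):

```matlab
% MSE and PSNR between a reference image I and a noisy approximation K,
% for 8-bit data (MAXI = 255).
I = uint8(100 * ones(4));                        % reference 4-by-4 image
K = I;  K(1,1) = 110;                            % corrupt one pixel
mse = mean((double(I(:)) - double(K(:))).^2);    % mean squared error = 6.25
psnr_db = 10 * log10(255^2 / mse);               % about 40.2 dB
```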


Figure 8.3 Signal to noise ratio


CHAPTER-9
RESULT AND DISCUSSION

INPUT

Example1:

The image1 shown below is an MRI image

Figure 9.1 MRI Image


Using the discrete wavelet transform method, input image1 is taken as the MRI
image for the further fusion process and is fused with the CT image.


The image2 shown below is a CT image

Figure 9.2 CT Image


Here, input image2 is taken as the CT image and fused with the MRI image using the
image fusion processing technique.


OUTPUT

Figure 9.3 Fused image

As shown in the figure, input image1 (MRI) and input image2 (CT) are fused using the
image fusion processing technique for further analysis. The output image obtained is
the fused image.


CHAPTER-10

CONCLUSION AND FUTURE SCOPE

CONCLUSION

In conclusion, using the wavelet transform and weighted fusion, we obtain a
good fused image from the CT and MRI inputs; compared to a single CT or MRI image,
it lets us precisely discover and locate the diseased region in the human body. This
leads to far better accuracy and to the extraction of maximum information as compared
to other methods. The wavelet transform also shows better resemblance to the human
visual system, resulting in improved visualization and interpretation.

FUTURE SCOPE

The use of Computed Tomography (CT), Magnetic Resonance (MR) and Single
Photon Emission Computed Tomography (SPECT) techniques in biomedical imaging
has revolutionized the process of medical diagnosis in recent years. Further
advancements in biomedical imaging are being made through the development of new
image processing tools. One of these tools is image fusion. The fusion of CT, MRI and
SPECT images can make medical diagnosis much easier and more accurate.


REFERENCES

[1] Yong Yang, "Performing Wavelet Based Image Fusion through Different
Integration Scheme", International Journal of Digital Content Technology and its
Applications, Vol. 5, No. 3, 2011, pp. 156-166.

[2] H. Irshad, et al., "Image fusion using computational intelligence: A survey",
2nd International Conference on Environmental and Computer Science, IEEE Computer
Society, 2009, pp. 128-132.

[3] The Image Fusion website [Online]. Available: http://www.Imagefusion.org.

[4] R. Maruthi, K. Sankarasubramanian, "Multifocus Image based on the information
level in the region of the images", JATIT (2005-2007), pp. 162-172.

[5] R. Gonzalez and R. E. Woods, Digital Image Processing, 2nd Edition, Prentice
Hall, New Jersey, 2002.

[6] Gao, Z. Liu and T. Ren, "A new image fusion scheme based on wavelet
transform", 3rd Int. Conference on Innovative Computing Information and

[7] Maruturi Haribabu, CH. Hima Bindu, K. Satya Prasad, "Multimodal Medical Image
Fusion of MRI - PET using Wavelet", International Conference on Advances in Mobile
Network, Communication and Its

[8] Neetu Mittal and Rachana Gupta, "Comparative Analysis of Medical Images Fusion
Using Different Fusion Methods for Daubechies Complex Wavelet Transform", Int. J.
of Advanced Research in Computer Science and Software Engineering (IJARCSSE),

[9] Changtao He, Quanxi Liu, Hongliang Li, Haixu Wange, "Multimodal medical image
fusion based on IHS & PCA", Procedia Engineering, Vol. 7, 2010, pp. 280-285.

[10] Xiaoqing Zhang, Yongguo Zheng, Yanjun Peng, Weike Liu, Changqiang Yang,
"Research on multi-mode medical image fusion algorithm based on wavelet transform
and edge characteristics of images", IEEE, Coll. of Inf. Sci. & Eng., Shandong
Univ. of Sci. & Technology, 2009.
