CERTIFICATE
This is to certify that the major project report entitled ‘Decomposition and
reconstruction of medical images using DWT method’ is the bonafide work done
and submitted by
B.LIKHITHA (13R01A0465)
G.MANISHA (13R01A04D3)
S.BHARGAVI (14R05A0409)
towards the partial fulfillment of the requirement for the award of Bachelor of
Technology in Electronics and Communication Engineering from Jawaharlal
Nehru Technological University, Hyderabad. The results embodied in this
dissertation have not been submitted to any other university or organization for the
award of any other degree.
External Examiner
DECLARATION
B.LIKHITHA (13R01A0465)
G.MANISHA (13R01A04D3)
S.BHARGAVI (14R05A0409)
We would like to express our deep gratitude to Mr. Nagaraja Kumar Pateti,
Assistant Professor, Department of ECE (Project Coordinator), for providing
us the opportunity to work on this project and for his guidance throughout our time at the college.
Finally, yet importantly, we would like to thank our parents, our first teachers
when we came into this world, who taught us the value of hard work by their own
example, and our friends, whose support was invaluable in the completion of this work.
B.LIKHITHA (13R01A0465)
G.MANISHA (13R01A04D3)
S.BHARGAVI (14R05A0409)
images containing data which has important clinical significance for doctors
during their analysis. The idea behind image fusion is to improve the image
content by fusing two images, such as MRI (magnetic resonance imaging) and CT
(computed tomography) images, so as to combine the functional and anatomical
information they carry. The fused image contains both functional information and more spatial
characteristics with no colour distortion. Experimental results show the best fusion
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND
1.2 SCOPE OF THESIS
1.3 SIGNIFICANCE
1.4 ORGANIZATION OF THESIS
CHAPTER 2
INTRODUCTION TO IMAGE PROCESSING
2.1 IMAGE PROCESSING
2.2 TYPES OF IMAGE PROCESSING
2.3 DIGITAL IMAGE FUNDAMENTALS
2.3.1 DIGITAL IMAGE REPRESENTATION
2.3.2 COORDINATE CONVENTIONS
2.3.4 IMAGES AS MATRICES
2.3.5 RELATIONS BETWEEN PIXELS
2.4 IMAGE TRANSFORMS IN SPATIAL DOMAIN
2.5 HISTOGRAM EQUALIZATION
CHAPTER 3
IMAGE RESTORATION AND IMAGE SEGMENTATION
3.1 IMAGE RESTORATION
3.1.1 DEGRADATION MODEL
3.2 IMAGE SEGMENTATION
3.2.1 NON CONTEXTUAL THRESHOLDING
3.2.2 CONTEXTUAL SEGMENTATION
3.3 MORPHOLOGICAL IMAGE PROCESSING
3.4 IMAGE COMPRESSION
3.4.1 MODELS
CHAPTER 1
INTRODUCTION
1.1 BACKGROUND
Image processing is a method to convert an image into digital form
and perform some operations on it, in order to get an enhanced image or to extract
some useful information from it. It is a type of signal processing in which the input
is an image, such as a video frame or photograph, and the output may be an image or
characteristics associated with that image.
1.3 SIGNIFICANCE
The purpose of image processing is divided into 5 groups. They are:
1. Visualization - observe the objects that are not visible.
2. Image sharpening and restoration - to create a better image.
3. Image retrieval - seek for the image of interest.
4. Measurement of pattern - measure various objects in an image.
5. Image recognition - distinguish the objects in an image.
CHAPTER 2
INTRODUCTION TO IMAGE PROCESSING
2.1 IMAGE PROCESSING
Image processing is a method to convert an image into digital form and perform some
operations on it, in order to get an enhanced image or to extract some useful information from it. It
is a type of signal processing in which the input is an image, such as a video frame or photograph, and
the output may be an image or characteristics associated with that image. Usually an Image Processing system
includes treating images as two dimensional signals while applying already set signal processing
methods to them. It is among rapidly growing technologies today, with its applications in various
aspects of a business. Image Processing forms core research area within engineering and computer
science disciplines too.
The coordinate convention used in the Image Processing Toolbox to denote arrays is
different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox
uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is
the same as the order discussed in the previous paragraph, in the sense that the first element of a
coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the
origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M, and c from 1 to N, in
integer increments. Figure 2.1(b) illustrates this coordinate convention.
Image Processing Toolbox documentation refers to the coordinates in Fig. 2.1(b) as pixel
coordinates. Less frequently, the toolbox also employs another coordinate convention, called
spatial coordinates, that uses x to refer to columns and y to refer to rows. This is the opposite of
our use of variables x and y.
With a few exceptions, we do not use the toolbox’s spatial coordinate convention in this book,
but many MATLAB functions do, and you will definitely encounter it in toolbox and MATLAB
documentation.
The right side of this equation is a digital image by definition. Each element of this array
is called an image element, picture element, pixel, or pel. The terms image and pixel are used
throughout the rest of our discussions to denote a digital image and its elements. A digital image
can be represented as a MATLAB matrix:

f = [ f(1,1)   f(1,2)   ...   f(1,N)
      f(2,1)   f(2,2)   ...   f(2,N)
      ...      ...      ...   ...
      f(M,1)   f(M,2)   ...   f(M,N) ]

where f(1, 1) = f(0, 0) (note the use of a monospace font to denote MATLAB quantities). Clearly,
the two representations are identical, except for the shift in origin.
The notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2) is the
element in the sixth row and second column of matrix f. Typically, we use the letters M and N,
respectively, to denote the number of rows and columns in a matrix. A 1×N matrix is called a row
vector, whereas an M×1 matrix is called a column vector. A 1×1 matrix is a scalar.
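A minimal sketch of these indexing conventions, using a magic square as a stand-in for a small image:

f = magic(4);        % a 4x4 matrix standing in for a small image
[M, N] = size(f);    % M rows and N columns (here both 4)
f(1, 1)              % the origin, (r, c) = (1, 1)
f(3, 2)              % element in the third row, second column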
1. Adjacency
2. Connectivity
Neighbours of a Pixel
There are two different ways to define the neighbours of a pixel P located at (x, y):
4-neighbours
8-neighbours
The 8-neighbours of pixel p, denoted by N8(p), include the four 4-neighbours and the four pixels along the diagonals.
Adjacency
Two pixels are connected if they are neighbours and their gray levels satisfy some
specified criterion of similarity. For example, in a binary image two pixels are connected if they are
4-neighbours and have the same value (0/1). Let V be a set of intensity values used to define adjacency and
connectivity. In a binary image, V = {1} if we are referring to adjacency of pixels with value 1. In a
grayscale image the idea is the same, but V typically contains more elements, for example V = {180,
181, 182, ..., 200}. If the possible intensity values range from 0 to 255, V could be any subset of these 256
values.
Connectivity
Let S represent a subset of pixels in an image, two pixels p and q are said to be connected in S if
there exists a path between them. Two image subsets S1 and S2 are adjacent if some pixel in S1 is
adjacent to some pixel in S2.
The simplest form of operation is when the operator T acts only on a 1×1 pixel neighbourhood in the
input image, that is, F̂(x, y) depends only on the value of F at (x, y). This is a grey scale
transformation or mapping.
The simplest case is thresholding where the intensity profile is replaced by a step function, active at
a chosen threshold value. In this case any pixel with a grey level below the threshold in the input
image gets mapped to 0 in the output image. Other pixels are mapped to 255.
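As an illustrative sketch of this mapping (the image name and threshold value are arbitrary choices; cameraman.tif ships with the Image Processing Toolbox):

F = imread('cameraman.tif');    % an 8-bit grey scale image
T = 128;                        % chosen threshold value
Fhat = uint8(F >= T) * 255;     % pixels below T map to 0, all others to 255
imshow(Fhat)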
Figure 2.8 The original image and its histogram, and the equalized versions.
Filtering
Low pass filtering involves the elimination of the high frequency components in the image. It results
in blurring of the image (and thus a reduction in sharp transitions associated with noise). An ideal
low pass filter would retain all the low frequency components, and eliminate all the high frequency
components. However, ideal filters suffer from two problems: blurring and ringing. These problems
are caused by the shape of the associated spatial domain filter, which has a large number of
undulations. Smoother transitions in the frequency domain filter, such as the Butterworth filter,
achieve much better results.
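A rough sketch of such frequency-domain low-pass filtering with a Butterworth transfer function; the test image, cutoff D0 and order n are illustrative assumptions:

F = im2double(imread('cameraman.tif'));
[M, N] = size(F);
[u, v] = meshgrid(-floor(N/2):ceil(N/2)-1, -floor(M/2):ceil(M/2)-1);
D  = sqrt(u.^2 + v.^2);            % distance from the centre of the spectrum
D0 = 30;  n = 2;                   % cutoff frequency and filter order
H  = 1 ./ (1 + (D ./ D0).^(2*n));  % Butterworth low-pass transfer function
G  = ifft2(ifftshift(H .* fftshift(fft2(F))));  % filter in the frequency domain
imshow(real(G))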
Homomorphic filtering
Images normally consist of light reflected from objects. The basic nature of the image F(x,y) may
be characterized by two components: (1) the amount of source light incident on the scene being
viewed, and (2) the amount of light reflected by the objects in the scene. These portions of light are
called the illumination and reflectance components, and are denoted i(x,y) and r(x,y) respectively.
The functions i and r combine multiplicatively to give the image function F: F(x,y) = i(x,y)r(x,y).
We cannot easily use this product to operate separately on the frequency components of
illumination and reflectance, because the Fourier transform of the product of two functions is not
separable; that is,

F{ F(x,y) } ≠ F{ i(x,y) } F{ r(x,y) }

We therefore define z(x,y) = ln F(x,y) = ln i(x,y) + ln r(x,y). Then

Z(u,v) = I(u,v) + R(u,v)

where Z, I and R are the Fourier transforms of z(x,y), ln i(x,y) and ln r(x,y) respectively. The function Z
represents the Fourier transform of the sum of two images: a low frequency illumination image and
a high frequency reflectance image.
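A possible homomorphic-filtering sketch along these lines, using a Gaussian-shaped high-emphasis filter rather than a Butterworth one; the gain values are illustrative assumptions:

F = im2double(imread('cameraman.tif')) + 1e-6;   % small offset avoids log(0)
[M, N] = size(F);
Z = fftshift(fft2(log(F)));                      % Z = I + R in the notation above
[u, v] = meshgrid(-floor(N/2):ceil(N/2)-1, -floor(M/2):ceil(M/2)-1);
H = 0.5 + 1.5 * (1 - exp(-(u.^2 + v.^2) / (2*30^2)));  % attenuate illumination, boost reflectance
G = exp(real(ifft2(ifftshift(H .* Z)))) - 1e-6;  % back from the log domain
imshow(mat2gray(G))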
CHAPTER 3
IMAGE RESTORATION AND IMAGE
SEGMENTATION
In the degradation model, g(x,y) = b(x,y) * f(x,y) + n(x,y) (with * denoting convolution), where g is
the corrupted image obtained by passing the original image f through a low pass filter
(blurring function) b and adding noise n to it. We present four different ways of restoring the image.
I. Inverse Filter
In this section we implement image restoration using Wiener filtering, which provides us with the
optimal trade-off between de-noising and inverse filtering. We will see that the result is in general
better than with straight inverse filtering.
In this method, we assume nothing about the image. We do not have any information about the
blurring function or on the additive noise. We will see that restoring an image when we know
nothing about it is very hard.
The segmentation depends on image property being thresholded and on how Threshold is
chosen. Generally, the non-contextual thresholding may involve two or more thresholds as well as
produce more than two types of regions such that ranges of input image signals related to each region
type are separated with thresholds.
Simple thresholding
The most common image property to threshold is the pixel grey level: g(x,y) = 0 if f(x,y) < T and
g(x,y) = 1 if f(x,y) ≥ T, where T is the threshold. Using two thresholds, T1 < T2, a range of grey levels
related to region 1 can be defined: g(x,y) = 0 if f(x,y) < T1 OR f(x,y) > T2, and g(x,y) = 1 if T1 ≤ f(x,y)
≤ T2.
The main problems are whether it is possible and, if yes, how to choose an adequate threshold
or a number of thresholds to separate one or more desired objects from their background. In many
practical cases the simple thresholding is unable to segment objects of interest, as shown in the
above images.
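A direct transcription of the two thresholding rules above as a MATLAB sketch (image name and threshold values are arbitrary):

f = imread('cameraman.tif');
T = 100;                        % single threshold
g1 = f >= T;                    % g = 1 where f >= T, else 0
T1 = 80;  T2 = 160;             % double threshold for region 1
g2 = (f >= T1) & (f <= T2);     % g = 1 where T1 <= f <= T2
imshow(g2)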
Adaptive thresholding
Since the threshold separates the background from the object, the adaptive separation may
take account of empirical probability distributions of object (e.g. dark) and background (bright)
pixels. Such a threshold has to equalise two kinds of expected errors: of assigning a background
pixel to the object and of assigning an object pixel to the background. More complex adaptive
thresholding techniques use a spatially varying threshold to compensate for local spatial context
effects.
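One common data-driven threshold choice in MATLAB is Otsu's method, which is related to, though not identical with, the error-balancing idea above; coins.png is a toolbox demo image:

f = imread('coins.png');     % bright objects on a dark background
T = graythresh(f);           % Otsu threshold, normalised to [0, 1]
bw = im2bw(f, T);            % binarise (imbinarize in newer releases)
imshow(bw)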
Colour thresholding
Colour segmentation may be more accurate because there is more information at the pixel level
compared to greyscale images. The standard Red-Green-Blue (RGB) colour representation has
strongly interrelated colour components, and a number of other colour systems (e.g. HSI, Hue-
Saturation-Intensity) have been designed in order to exclude redundancy, determine actual object /
background colours irrespective of illumination, and obtain more stable segmentation.
A morphological operation on a binary image creates a new binary image in which a pixel
has a non-zero value only if the test is successful at that location in the input image.
The structuring element is a small binary image, i.e. a small matrix of pixels, each with a value of
zero or one:
A common practice is to have odd dimensions of the structuring matrix and the origin defined
as the centre of the matrix. Structuring elements play in morphological image processing the same
role as convolution kernels in linear image filtering.
Compound operations
Many morphological operations are represented as combinations of erosion, dilation, and simple
set-theoretic operations such as the complement of a binary image:
A pixel belonging to an object is preserved by the hit and miss transform if and only if s1
translated to that pixel fits inside the object AND s2 translated to that pixel fits outside the object. It
is assumed that s1 and s2 do not intersect; otherwise it would be impossible for both fits to occur
simultaneously. Morphological filtering of a binary image is conducted by considering compound
operations like opening and closing as filters. They may act as filters of shape. For example, opening
with a disc structuring element smooths corners from the inside, and closing with a disc smooths
corners from the outside. These operations can also filter out from an image any details that are
smaller in size than the structuring element; e.g. opening filters the binary image at a scale
defined by the size of the structuring element.
Only those portions of the image that fit the structuring element are passed by the filter;
smaller structures are blocked and excluded from the output image. The size of the structuring
element is most important to eliminate noisy details but not to damage objects of interest.
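A small sketch of such shape filtering with a disc structuring element, assuming the Image Processing Toolbox and its circles.png demo image:

bw = imread('circles.png');    % a binary demo image
se = strel('disk', 5);         % disc structuring element of radius 5
opened = imopen(bw, se);       % smooths corners from the inside, removes small details
closed = imclose(bw, se);      % smooths corners from the outside, fills small gaps
imshowpair(opened, closed, 'montage')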
In image compression, and in data compression generally, we make use of the difference between information
and data. Information is what is actually essential for an image or data set, that which we really need
to have for whatever we would like to proceed to do with it. What that information is thus depends on
what the further use of the image will be.
To assess "lossy" compression methods on the suitability for certain applications we often use
quality metrics:
Another criterion could be based on the applications of the images and can be objective or
subjective; for example, judgment by a panel of human observers could be expressed in terms such as excellent,
good, acceptable or poor.
3.4.1 MODELS
It is customary to use the names "encoder" and "decoder", which have their roots in the field of
Information Theory, rather than names such as "compressor" and "decompressor". If the transmission or
storage channel is error-free, the channel encoder and decoder are omitted. Otherwise, extra data
bits can be added to detect errors (for example parity bits, Cyclic Redundancy Checks) or correct
them (Error Correcting Codes for memory), often using special hardware. We shall not pay any
more attention to encoders and decoders. With "lossless" compression it holds that g(x,y) = f(x,y).
The "mapper" transforms the data to a format suitable for reducing the inter-pixel redundancies.
This step is generally reversible and may reduce the amount of data, as in Run Length Encoding,
but need not, as in transformations to the Fourier or Discrete Cosine domains.
CHAPTER-4
INTRODUCTION TO IMAGE FUSION
In the wavelet transform method, the image is decomposed into spatial frequency bands at different
scales: low-low, low-high, high-low and high-high.
The average image information is given by the low-low band. The other bands (high-high,
high-low, low-high) contain directional information due to their spatial orientation.
In high bands higher absolute values of wavelet coefficients correspond to salient
features such as edges or lines. The common idea in almost all of them is the use of wavelet
transforms to decompose images into a multi-resolution scheme. MRI images provide greater
contrast of the soft tissues of the brain than CT images, but the brightness of hard tissues such as bone is
higher in CT images. CT and MRI images individually have shortcomings: MRI images do
not concentrate on hard tissues, and in CT images the soft tissues are not clearly visible. In this work,
image fusion of CT and MRI images has been carried out so that the fused image, which is the
combination of soft and hard tissues, serves as a focused image for doctors and their clinical
treatment. The fused image quality is then evaluated quantitatively through two
performance measures, Standard Deviation (SD) and SNR.
IHS(INTENSITY-HUE-SATURATION) TRANSFORM
Intensity, Hue and Saturation are the three properties of a color that give controlled visual
representation of an image. IHS transform method is the oldest method of image fusion. In the IHS
space, hue and saturation need to be carefully controlled because they contain most of the spectral
information. For the fusion of a high resolution PAN image and multispectral images, the detail
information of high spatial resolution is added to the spectral information.
This work presents many IHS transformation techniques based on different color models.
These techniques include HSV, IHS1, IHS2, IHS3, IHS4, IHS5, IHS6 and YIQ. Based on these different
formulas, the IHS transformation gives different results.
PYRAMID TECHNIQUE
Image pyramids can be described as a model for the binocular fusion for human visual
system. By forming the pyramid structure an original image is represented in different levels. A
composite image is formed by applying a pattern selective approach of image fusion. Firstly, the
pyramid decomposition is performed on each source image. All these images are integrated to form
a composite image and then inverse pyramid transform is applied to get the resultant image. The
MATLAB implementation of the pyramid technique is shown in this work. Image fusion is carried
out at each level of decomposition to form a fused pyramid and the fused image is obtained from it.
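A possible one-level sketch of this idea (not necessarily the exact implementation referred to above; the file names are hypothetical, and imresize stands in for the pyramid reduce/expand steps):

X1 = im2double(imread('mri.png'));      % hypothetical registered input 1
X2 = im2double(imread('ct.png'));       % hypothetical registered input 2
r1 = imresize(X1, 0.5);  r2 = imresize(X2, 0.5);   % pyramid "reduce" step
L1 = X1 - imresize(r1, size(X1));       % detail (Laplacian) level of image 1
L2 = X2 - imresize(r2, size(X2));       % detail (Laplacian) level of image 2
mask = abs(L1) >= abs(L2);              % pattern-selective rule: keep the stronger detail
LF = L1 .* mask + L2 .* ~mask;          % fused detail level
XF = imresize((r1 + r2) / 2, size(X1)) + LF;   % fused base (average) plus fused detail
imshow(mat2gray(XF))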
The high resolution multispectral images (HRMI) are obtained by high pass filtering: the
high frequency information from the high resolution panchromatic image (HRPI) is added to the low
resolution multispectral image to obtain the resultant image. The high frequency information is
obtained either by filtering the HRPI with a high pass filter or by taking the original HRPI and
subtracting its low resolution version (LRPI) from it. The spectral information contained in the low
frequency component of the HRMI is preserved by this method.
Despite being similar to the IHS transform, the PCA method has the advantage over the IHS
method that an arbitrary number of bands can be used. This is one of the most popular methods
for image fusion. Uncorrelated Principal components are formed from the low resolution
multispectral images. The first principal component (PC1) has the information that is common to
all bands used. It contains high variance such that it gives more information about panchromatic
image. A high resolution PAN component is stretched to have the same variance as PC1 and replaces
PC1. Then an inverse PCA transform is employed to get the high resolution multispectral image.
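A rough sketch of PCA-based fusion along the lines just described (not necessarily the exact method referenced; the file names are hypothetical and the inputs are assumed to be registered, same-size grayscale images):

X1 = im2double(imread('mri.png'));    % hypothetical registered input 1
X2 = im2double(imread('ct.png'));     % hypothetical registered input 2
C = cov([X1(:), X2(:)]);              % 2x2 covariance of the stacked pixel vectors
[V, D] = eig(C);
[~, i] = max(diag(D));                % index of the principal component
w = abs(V(:, i)) / sum(abs(V(:, i))); % normalised weights from its eigenvector
XF = w(1) * X1 + w(2) * X2;           % weighted fusion of the two images
imshow(mat2gray(XF))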
WAVELET TRANSFORM
In Fourier transform, the signal is decomposed into sine waves of different frequencies whereas the
wavelet transform decomposes the signal into scaled and shifted forms of the mother wavelet or
function. In the image fusion using wavelet transform, the input images are decomposed into
approximation and detail coefficients using the DWT at some specific level. A fusion rule is applied
to combine these two sets of coefficients, and the resultant image is obtained by taking the inverse wavelet
transform.
The Discrete Cosine Transform has found importance for compressed images in
the form of MPEG, JVT etc. By taking the discrete cosine transform, the spatial domain image is
converted into a frequency domain image. Chu-Hui Lee and Zheng-Wei Zhou divided the
images into three parts: low frequency, medium frequency and high frequency. Average
illumination is represented by the DC value, and the AC values are the coefficients of the higher frequencies.
The RGB image is divided into blocks of 8×8 pixels. The image is then grouped
by the matrices of red, green and blue and transformed to a grey scale image.
Single Sensor
A single sensor captures the real world as a sequence of images. The set of images is fused
together to generate a new image with optimum information content. For example, in an illumination-variant
and noisy environment, a human operator may not be able to detect objects of interest,
which can be highlighted in the resultant fused image. The shortcoming of this type of system lies
in the limitations of the imaging sensor being used. The conditions in which the system can operate,
its dynamic range, resolution, etc. are all restricted by the competency of the sensor. For example, a visible-
band sensor such as a digital camera is appropriate for a brightly illuminated environment such
as daylight scenes, but is not suitable for poorly illuminated situations found during night time, or
under adverse conditions such as fog or rain.
Multi Sensor
A multi-sensor image fusion scheme overcomes the limitations of single sensor image
fusion by merging the images from several sensors to form a composite image. For example, an infrared camera
may accompany a digital camera, and their individual images are merged to obtain a fused
image. This approach overcomes the issues referred to before: the digital camera is suitable for
daylight scenes, while the infrared camera is appropriate in poorly illuminated environments. Multi-sensor
fusion is used in military applications, in machine vision (e.g. object detection), in robotics and in
medical imaging, wherever information from several images must be merged.
Multiview Fusion
In multiview fusion, images of the same scene are taken from multiple or different views at the same time.
Multimodal Fusion
Images come from different modalities, such as panchromatic, multispectral, visible, infrared and remote sensing images.
Multifocus Fusion
Images come from 3D views with different focal lengths. The original image can be divided into regions such that
every region is in focus in at least one channel of the image.
2) It is used in medical imaging, where diseases are analysed through imaging from both spatial-resolution
and frequency perspectives.
3) Image fusion is used in military areas, where all the perspectives are used to detect threats and in other
resolution-based work.
4) For machine vision it is effectively used to visualize the two states after fusion, so that the image is
suited to human vision.
5) In the robotics field, fused images are mostly used to analyse the frequency variations in the view of
images.
6) Image fusion is used in artificial neural networks in 3D, where the focal length varies according to
the wavelength transformation.
4.5.1 ADVANTAGES
1. It is the easiest to interpret.
4. It is low in cost.
8. Image fusion has many contrast advantages; basically it enhances the image from all
perspectives.
10. Image fusion reduces data storage and data transmission.
4.5.2 DISADVANTAGES
1. Images have less capability in adverse weather conditions; this commonly occurs when image
fusion is done with a single-sensor fusion technique.
2. Images are not easily visible at night; this is mainly due to camera aspects, whether in day or night.
3. More source energy is necessary for good visualization of images based on spatial frequency.
4. Due to rain or fog, visualization is not clear; if one captures the two source images in this type of
weather, the result will be poor.
CHAPTER-5
DECOMPOSITION AND RECONSTRUCTION OF
MEDICAL IMAGES
Before the wavelet transform, the best-known method for this purpose was the Fourier transform (FT).
Limitations of the FT have been overcome in the Short Time Fourier Transform (STFT), which is able to
retrieve both frequency and time information from a signal. In the STFT a windowing concept is used
along with the FT: the FT is applied over a windowed part of the signal, and the window is then moved
along the signal.
The advantage of the wavelet transform over the Fourier transform is local analysis: wavelet analysis can
reveal signal aspects such as discontinuities and breakdown points more clearly than the FT.
A wavelet basis set starts with two orthogonal functions: the scaling function or father wavelet φ(t)
and the wavelet function or mother wavelet ψ(t); by scaling and translation of these two orthogonal
functions we obtain a complete basis set.
The wavelet transform can be implemented in two ways: the continuous wavelet transform and the
discrete wavelet transform. The continuous wavelet transform (CWT) can be defined by

X_WT(τ, s) = (1 / √|s|) ∫ x(t) ψ*((t - τ) / s) dt

The transformed signal X_WT(τ, s) is a function of the translation parameter τ and the scale parameter
s. The mother wavelet is denoted by ψ, and the * indicates the complex conjugate.
Whereas the CWT performs analysis by contraction and dilation of the mother function, in the discrete wavelet
transform (DWT) the scenario is different: the DWT uses filter banks to analyse and reconstruct the signal.
This appealing procedure was presented by S. Mallat in 1989; it utilizes the decomposition of the
wavelet transform in terms of low pass (averaging) filters and high pass (differencing) filters. A
filter bank separates a signal into different frequency bands. The DWT of a discrete time-domain signal is
computed by successive low pass and high pass filtering, as shown in the figure, which is known as the
Mallat tree decomposition. In the figure, the signal is denoted by the sequence x[n], where n is an
integer. The low pass filter is denoted by L0 while the high pass filter is denoted by H0. At each
level, the high pass filter produces detail information, or detail coefficients d[n], while the low pass
filter associated with the scaling function produces approximation coefficients a[n]. The input data is
passed through the set of low pass and high pass filters, and their outputs are
downsampled by 2. Increasing the rate of an already sampled signal is called upsampling, whereas
decreasing the rate is called downsampling.
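A one-level sketch of this filter-bank analysis on a 1-D signal, assuming the Wavelet Toolbox:

n = 0:255;
x = sin(2*pi*n/32) + 0.1*randn(1, 256);   % a noisy test signal
[a, d] = dwt(x, 'db4');     % low pass gives approximation a[n], high pass gives detail d[n]
xrec = idwt(a, d, 'db4');   % the synthesis filter bank reconstructs the signal
max(abs(x - xrec))          % reconstruction error is at round-off level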
The DWT of an image represents the image as a sum of wavelets. Here four isometries S0, SH, SV
and SD with mutually orthogonal ranges are used, satisfying the sum rule

S0 S0* + SH SH* + SV SV* + SD SD* = I

with I denoting the identity operator in an appropriate Hilbert space. Human eyes are
less sensitive to high frequency details. Here the Haar DWT, the simplest type of DWT, has been
applied. In the 1D-DWT the average of the fine details over a small area is recorded.
In the case of the 2D-DWT, we first perform one step of the transform on all rows. The left side of the matrix
then contains the downsampled low pass coefficients of each row, and the right side contains the high pass
coefficients, as shown in the figure.
Next, we apply one step to all columns. This results in four types of coefficients:
LL, HL, LH and HH.
The subdivided squares represent the application of the pyramid subdivision algorithm
to image processing, as it is used on pixel squares. At each subdivision step the top
left-hand square represents averages of nearby pixel values, averages taken with
respect to the chosen low-pass filter, while the three directions (horizontal, vertical
and diagonal) represent detail differences, each represented by a separate band
and filter. We can continue decomposing the coefficients from low pass filtering
in both directions further in the next step.
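A one-level 2-D decomposition sketch showing the four sub-bands, assuming the Wavelet Toolbox; dwt2 returns them as cA (the LL band) plus the cH, cV and cD detail bands:

X = im2double(imread('cameraman.tif'));
[cA, cH, cV, cD] = dwt2(X, 'haar');   % LL approximation plus H/V/D detail bands
Xrec = idwt2(cA, cH, cV, cD, 'haar'); % the inverse transform reconstructs the image
imshow([mat2gray(cA), mat2gray(cH); mat2gray(cV), mat2gray(cD)])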
Since its early development in the 1970s and 1980s, MRI has proven to be a
highly versatile imaging technique. While MRI is most prominently used in diagnostic
medicine and biomedical research, it can also be used to form images of non-living
objects. MRI scans are capable of producing a variety of chemical and physical data,
in addition to detailed spatial images.
MRI is widely used in hospitals and clinics for medical diagnosis, staging of
disease and follow-up without exposing the body to ionizing radiation.
The term "computed tomography" (CT) is often used to refer to X-ray CT,
because it is the most commonly known form. But, many other types of CT exist, such
as positron emission tomography (PET) and single-photon emission computed
tomography (SPECT). X-ray tomography is one form of radiography, along with
many other forms of tomographic and non-tomographic radiography.
Figure 5.7 CT Image
CHAPTER-6
DISCRETE WAVELET TRANSFORM
Mathematically, the wavelet will correlate with the signal if the unknown signal
contains information of similar frequency. This concept of correlation is at the core of
many practical applications of wavelet theory.
As a mathematical tool, wavelets can be used to extract information from many
different kinds of data, including – but certainly not limited to – audio signals and
images. Sets of wavelets are generally needed to analyze data fully. A set of
"complementary" wavelets will decompose data without gaps or overlap so that the
decomposition process is mathematically reversible. Thus, sets of complementary
wavelets are useful in wavelet based compression/decompression algorithms where it
is desirable to recover the original information with minimal loss. In formal terms,
this representation is a wavelet series representation of a square-integrable function with
respect to either a complete, orthonormal set of basis functions, or an overcomplete
set or frame of a vector space, for the Hilbert space of square-integrable functions. This
is accomplished through coherent states.
Scaling filter:
An orthogonal wavelet is entirely defined by the scaling filter – a low-pass finite
impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets,
separate decomposition and reconstruction filters are defined.
For analysis with orthogonal wavelets the high pass filter is calculated as the
quadrature mirror filter of the low pass, and reconstruction filters are the time reverse
of the decomposition filters. The scaling filter can define Daubechies and Symlet
wavelets.
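For example, the four filters of an orthogonal wavelet can be inspected in MATLAB; note that the toolbox normalises the scaling filter to sum to √2 rather than to 1:

[Lo_D, Hi_D, Lo_R, Hi_R] = wfilters('db2');  % decomposition and reconstruction filters
sum(Lo_D)                     % approximately sqrt(2) under MATLAB's normalisation
isequal(Lo_R, fliplr(Lo_D))   % reconstruction filters are time-reversed decomposition filters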
Scaling function:
Wavelets are defined by the wavelet function ψ (t) (i.e. the mother wavelet) and
scaling function φ (t) (called father wavelet) in the time domain.
The wavelet function is in effect a band-pass filter and scaling it for each level halves
its bandwidth.
This creates the problem that in order to cover the entire spectrum, an infinite
number of levels would be required. The scaling function filters the lowest level of
the transform and ensures the entire spectrum is covered.
For a wavelet with compact support, φ (t) can be considered finite in length and
is equivalent to the scaling filter g. Meyer wavelets can be defined by scaling functions
Wavelet function:
The wavelet only has a time domain representation as the wavelet function ψ
(t). For instance, Mexican hat wavelets can be defined by a wavelet function. See a
list of a few Continuous wavelets.
Generalized transforms:
There are a number of generalized transforms of which the wavelet transform is
a special case. For example, Joseph Segman introduced scale into the Heisenberg
group, giving rise to a continuous transform space that is a function of time, scale, and
frequency. The CWT is a two-dimensional slice through the resulting 3d time-scale-
frequency volume.
Hungarian mathematician Alfred Haar invented the first DWT. For an input
represented by a list of numbers, the Haar wavelet transform may be considered to
pair up input values, storing the difference and passing the sum. This process is
repeated recursively, pairing up the sums to provide the next scale, which leads
to differences and a final sum.
the non- or undecimated wavelet transform (where down sampling is omitted),
the Newland transform (where an orthonormal basis of wavelets is formed from
appropriately constructed top-hat filters in frequency space). Wavelet packet
transforms are also related to the discrete wavelet transform. Complex wavelet
transform is another form.
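A toy illustration of the Haar pairing described above, with unnormalised sums and differences:

x = [4 6 10 12 8 8 0 2];        % input list of numbers
s = x(1:2:end) + x(2:2:end);    % pairwise sums, passed on to the next scale
d = x(1:2:end) - x(2:2:end);    % pairwise differences, stored as detail
disp(s), disp(d)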
The Haar DWT illustrates the desirable properties of wavelets in general. First,
it can be performed in O(n) operations; second, it captures not only a notion of the
frequency content of the input, by examining it at different scales, but also temporal
content, i.e. the times at which these frequencies occur. Combined, these two
properties make the Fast wavelet transform (FWT) an alternative to the
conventional fast Fourier transform (FFT).
Like some other transforms, wavelet transforms can be used to transform data,
and then encode the transformed data, resulting in effective compression. For
example, JPEG 2000 is an image compression standard that uses biorthogonal
wavelets. This means that although the frame is overcomplete, it is a tight frame (see
types of frames of a vector space), and the same frame functions (except for
conjugation in the case of complex wavelets) are used for both analysis and synthesis,
i.e., in both the forward and inverse transform. For details, see wavelet compression.
As a representation of a signal:
This motivates why wavelet transforms are now being adopted for a vast
number of applications, often replacing the conventional Fourier transform. Many
areas of physics have seen this paradigm shift, including molecular dynamics,
ab initio calculations, astrophysics, density-matrix localisation, seismology,
optics, turbulence and quantum mechanics. This change has also occurred in image
processing, EEG, EMG, ECG analyses, brain rhythms, DNA analysis, protein
analysis, climatology, human sexual response analysis, general signal processing,
speech recognition, computer graphics, multifractal analysis, and sparse coding.
Wavelet denoising
CHAPTER-7
SOFTWARE TOOLS
It also allows you to put a list of your processing requests together in a file and
save that combined list with a name so that you can run all of those commands in the
same order at some later time. Furthermore, it allows you to run such lists of
commands such that you pass in data and/or get data back out (i.e. the list of
commands is like a function in most programming languages). Once you save a
function, it becomes part of your toolbox (i.e. it now looks to you as if it were part of
the basic toolbox that you started with). For those with computer programming
backgrounds: Note that MATLAB runs as an interpretive language (like the old
BASIC). That is, it does not need to be compiled. It simply reads through each line of
the function, executes it, and then goes on to the next line. (In practice, a form of
compilation occurs when you first run a function, so that it can run faster the next time
you run it.)
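As a toy example of such a saved list of commands, a function file (saved, say, as addtwo.m on the search path) that takes data in and passes data back out might look like:

function y = addtwo(a, b)
%ADDTWO Toy function: return the elementwise sum of two arrays.
y = a + b;
end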
Graphics window: the output of all graphics commands typed in the command
window is flushed to the graphics or figure window, a separate gray window with a
white background color. The user can create as many windows as the system memory
will allow.
Edit window: This is where you write, edit, create and save your own programs, in
files called M-files.
Data type: the fundamental data type in MATLAB is the array. It encompasses
several distinct data objects: integers, real numbers, matrices, character strings,
structures and cells. There is no need to declare variables as real or complex;
MATLAB automatically sets the variable to be real.
a) Command window
Note: Save all M-files in the folder 'work' in the current directory; otherwise you
will have to locate the file when running it.
Typing quit at the command prompt (>> quit) will close the MATLAB Development
Environment.
For any clarification regarding plot etc., which are built-in functions, type help topic,
e.g. help plot.
1. T = 0:1:10
This instruction creates a vector T with initial value 0, final value 10 and an
increment of 1.
Therefore T = [0 1 2 3 4 5 6 7 8 9 10]
2. F= 20: 1: 100
3. T= 0:1/pi: 1
4. zeros (1, 3)
The above instruction creates a vector of one row and three columns whose values are
zero
Output= [0 0 0]
5. zeros(2, 4)
Output = [0 0 0 0
          0 0 0 0]
6. ones(5, 2)
The above instruction creates a vector of five rows and two columns whose entries are one.
Output = [1 1
          1 1
          1 1
          1 1
          1 1]
7. a = [1 2 3], b = [4 5 6]
a.*b = [4 10 18]
8. If C = [2 2 2]
b.*C results in [8 10 12]
9. plot(t, x)
If x = [6 7 8 9] and t = [1 2 3 4], this instruction will display a figure window
showing the plot of x versus t.
10. stem(t, x): This instruction will display a discrete stem plot in a figure window as shown.
11. subplot: This function divides the figure window into rows and columns.
subplot(2, 2, 1) divides the figure window into 2 rows and 2 columns; the 1 is the
number of the sub-figure.
subplot(3, 1, 2) divides the figure window into 3 rows and 1 column; the 2 is the
number of the sub-figure.
12. Conv
Syntax: w = conv(u,v)
Description: w = conv(u,v) convolves vectors u and v. Algebraically, convolution is
the same operation as multiplying the polynomials whose coefficients are the elements
of u and v.
13. disp
Syntax: disp(X)
Description: disp(X) displays an array, without printing the array name. If X contains
a text string, the string is displayed. Another way to display an array on the screen is
to type its name, but this prints a leading "X=," which is not always desirable. Note
that disp does not display empty arrays.
14. xlabel
Syntax: xlabel('string')
Description: xlabel('string') labels the x-axis of the current axes.
15. ylabel
Syntax : ylabel('string')
Description: ylabel('string') labels the y-axis of the current axes.
16. title
Syntax : title('string')
Description: title('string') outputs the string at the top and in the center of the current
axes.
17. grid on
Syntax: grid on
FFT(X) is the discrete Fourier transform (DFT) of vector X. For matrices, the FFT
operation is applied to each column. For N-D arrays, the FFT operation operates on
the first non-singleton dimension.
FFT(X,N) is the N-point FFT, padded with zeros if X has less than N points and
truncated if it has more.
ANGLE (H) returns the phase angles, in radians, of a matrix with complex elements.
21. INTERP Resample data at a higher rate using low pass interpolation.
22. DECIMATE Resample data at a lower rate after low pass filtering.
Matlab resources
Command window.
Editor
Debugger
Profiler (evaluate performances)
Mathematical libraries
API
In MATLAB, scripts are the equivalent of main programs. The variables declared
in a script are visible in the workspace and they can be saved. Scripts can therefore
take a lot of memory if you are not careful, especially when dealing with images. To
create a script, you will need to start the editor, write your code and run it.
Graphical Interface:
As an alternative to the ‘addpath’ function, use the Set Path dialog box. To
open it, select Set Path from the File menu in the MATLAB desktop.
Syntax:
addpath('directory')
addpath('dir','dir2','dir3' ...)
addpath('dir','dir2','dir3' ...'-flag')
addpath dir1 dir2 dir3 ... -flag
Description:
addpath('directory') adds the specified directory to the top (also called the front) of
the current MATLAB search path. Use the full pathname for directory.
addpath('dir','dir2','dir3' ...) adds all the specified directories to the top of the
path. Use the full pathname for each dir.
addpath dir1 dir2 dir3 ... -flag is the unquoted form of the syntax.
INPUT :
Request user input
Syntax:
user_entry = input('prompt')
user_entry = input('prompt', 's')
Description:
The response to the input prompt can be any MATLAB expression, which is
evaluated using the variables in the current workspace.
user_entry = input('prompt') displays prompt as a prompt on the screen, waits
for input from the keyboard, and returns the value entered in user_entry.
user_entry = input('prompt', 's') returns the entered string as a text variable
rather than as a variable name or numerical value.
DISP:
Syntax:
disp(X)
Description:
disp(X) displays an array, without printing the array name. If X contains a text
string, the string is displayed.
Another way to display an array on the screen is to type its name, but this prints
a leading "X =," which is not always desirable.
EXIST:
Syntax:
a = exist('item')
ident = exist('item','kind')
Description:
a = exist('item') returns a code indicating the type of item; among the possible values:
3 if item is a MEX-file.
4 if item is an MDL-file.
6 if item is a P-file.
7 if item is a directory.
UIGETFILE:
Syntax:
uigetfile
uigetfile('FilterSpec')
uigetfile('FilterSpec','DialogTitle')
uigetfile('FilterSpec','DialogTitle',x,y)
[fname,pname] = uigetfile(...)
Description:
uigetfile displays a dialog box used to retrieve a file. The dialog box lists the
files and directories in the current directory.
uigetfile('FilterSpec') displays a dialog box that lists files in the current
directory. FilterSpec determines the initial display of files and can be a full filename
or include the * wildcard. For example, '*.m' (the default) causes the dialog box list to
show only MATLAB M-files.
Synopsis:
A = imread(filename,fmt)
[X,map] = imread(filename,fmt)
[...] = imread(filename)
[...] = imread(...,idx) (TIFF only)
[...] = imread(...,ref) (HDF only)
[...] = imread(...,'BackgroundColor',BG) (PNG only)
[A,map,alpha] = imread(...) (PNG only)
Description: A = imread(filename,fmt) reads a grayscale or truecolor image from the
file specified by the string filename, with fmt specifying the file format;
[X,map] = imread(filename,fmt) additionally returns the colormap of an indexed image.
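A small usage sketch combining uigetfile (described above) with imread; the filter string and dialog title are illustrative:

[fname, pname] = uigetfile('*.png', 'Pick an image');  % user selects a file
A = imread(fullfile(pname, fname));                    % read the chosen image
imshow(A)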
STRCAT:
String concatenation
Syntax:
t = strcat(s1,s2,s3,...)
Description:
t = strcat(s1,s2,s3,...) concatenates the corresponding elements of its inputs, which must
all have the same size (or any can be a scalar). Any of the inputs can also be a character
array.
SUBPLOT:
Syntax:
subplot(m,n,p)
subplot(h)
subplot('Position',[left bottom width height])
h = subplot(...)
Description:
subplot divides the current figure into rectangular panes that are numbered row-
wise. Each pane contains an axes. Subsequent plots are output to the current pane.
subplot(m,n,p) creates an axes in the p-th pane of a figure divided into an m-by-
n matrix of rectangular panes. The new axes becomes the current axes.
subplot(h) makes the axes with handle h current for subsequent plotting
commands.
subplot('Position',[left bottom width height]) creates an axes at the position
specified by a four-element vector. left, bottom, width, and height are in normalized
coordinates in the range from 0.0 to 1.0.
h = subplot(...) returns the handle to the new axes.
IMSHOW:
Display image
Syntax:
imshow(I)
imshow(I,[low high])
imshow(RGB)
imshow(BW)
imshow(X,map)
imshow(filename)
himage = imshow(...)
imshow(...,param1,val1,param2,val2)
Description:
imshow(I) displays the grayscale (intensity) image I; imshow(RGB) displays the truecolor
image RGB; imshow(X,map) displays the indexed image X with colormap map; and
imshow(filename) displays the image stored in the graphics file filename. Optional
parameter/value pairs control display properties.
IMRESIZE:
Resize image
Syntax
B = imresize(A,m)
B = imresize(A,m,method)
B = imresize(A,[mrows ncols],method)
B = imresize(...,method,n)
B = imresize(...,method,h)
Description:
B = imresize(A,m) returns an image B that is m times the size of A, while
B = imresize(A,[mrows ncols],method) returns an image of the specified size; method
selects the interpolation scheme (e.g. 'nearest', 'bilinear' or 'bicubic').
WAITBAR:
Display waitbar
Syntax:
h = waitbar(x,'title')
waitbar(x,'title','CreateCancelBtn','button_callback')
waitbar(...,property_name,property_value,...)
waitbar(x)
waitbar(x,h)
waitbar(x,h,'updated title')
Description
h = waitbar(x,'title') displays a waitbar of fractional length x (between 0 and 1) with
the given title and returns a handle h; later calls such as waitbar(x,h) update the bar.
CHAPTER-8
FUSION OF MEDICAL IMAGES IN
MATLAB
The principle of image fusion using wavelets is to merge the wavelet
decompositions of the two original images using fusion methods applied to the
approximation coefficients and the detail coefficients. Wavelet Toolbox software,
a collection of many functions built on the MATLAB technical computing
environment, provides tools for this analysis and also helps to synthesise
deterministic and random signals and images using wavelets and wavelet packets
in the MATLAB language. The image fusion process in MATLAB using the
Wavelet Toolbox is as follows:
STEP 1 - Register the CT and MRI medical images
STEP 2 - Perform the wavelet decomposition
STEP 3 - Merge the two images after decomposition
STEP 4 - Restore the new image using image fusion
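A minimal sketch of these four steps, assuming already-registered, same-size CT and MRI images in the hypothetical files mri.png and ct.png, and the Wavelet Toolbox function wfusimg:

X1 = im2double(imread('mri.png'));   % step 1: registered MRI image (assumed)
X2 = im2double(imread('ct.png'));    % step 1: registered CT image (assumed)
wname = 'db2';  lev = 2;             % wavelet and decomposition level
XFUS = wfusimg(X1, X2, wname, lev, 'max', 'max');  % steps 2-4: decompose, merge, reconstruct
imshow(mat2gray(XFUS)), title('Fused CT/MRI image')

Here 'max' keeps the larger coefficient at each position for both the approximation and the detail sub-bands; other rules such as 'mean' can be substituted.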
A useful property of the standard deviation is that, unlike the variance,
it is expressed in the same units as the data.[2][3]
There are also other measures of deviation from the norm, including mean
absolute deviation, which provide different mathematical properties from standard
deviation.
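A possible computation of the two quality measures named earlier (SD and SNR), using X1 and XFUS from the fusion sketch above; SNR definitions vary between papers, and this is one common form:

sd = std2(XFUS);                 % standard deviation of the fused image
err = X1 - XFUS;                 % deviation from the reference image
snr_db = 10 * log10(sum(X1(:).^2) / sum(err(:).^2));  % signal-to-noise ratio in dB
fprintf('SD = %.3f, SNR = %.2f dB\n', sd, snr_db)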
CHAPTER-9
RESULT AND DISCUSSION
INPUT
Example1:
OUTPUT
As shown in the figure, the input image 1 (MRI) and input image 2 (CT) are fused using
the image fusion processing technique for further analysis. The output image obtained is
a fused image.
CHAPTER-10
CONCLUSION
FUTURE SCOPE
REFERENCES
[1] Yong Yang, "Performing Wavelet Based Image Fusion through Different Integration Scheme", International J. of Digital Content Technology and its Applications, Vol. 5, No. 3, 2011, pp. 156-166.
[3] The Image fusion website [Online]. Available: http://www.Imagefusion.org.
[5] R. Gonzalez & R. E. Woods, Digital Image Processing, 2nd Edition, Prentice Hall, New Jersey, 2002.
[6] Gao, Z. Liu and T. Ren, "A new image fusion scheme based on wavelet transform", 3rd Int. Conference on Innovative Computing Information and
[9] Changtao He, Quanxi Liu, Hongliang Li, Haixu Wange, "Multimodal medical image fusion based on IHS & PCA", Procedia Engineering, Vol. 7, 2010, pp. 280-285.
[10] Xiaoqing Zhang; Yongguo Zheng; Yanjun Peng; Weike Liu; Changqiang Yang, "Research on multi-mode medical image fusion algorithm based on wavelet transform and edge characteristics of images", IEEE, Coll. of Inf. Sci. & Eng., Shandong Univ. of Sci. & Technology, 2009.