
2020 5th International Conference on Devices, Circuits and Systems (ICDCS)

Medical Image Fusion using Transform Techniques


B. Ashwanth
ECE Department, Vasavi College of Engineering
Ibrahimbagh, Hyderabad, India
ashu.bolli12@gmail.com

K. Veera Swamy
ECE Department, Vasavi College of Engineering
Ibrahimbagh, Hyderabad, India
k.veeraswamy@staff.vce.ac.in

Abstract - Medical image fusion is the process of gathering relevant information from two or more images; the important information from the source images is combined, which results in a better and more reliable medical image. This paper presents medical image fusion in the transform domain using the DWT (Discrete Wavelet Transform) and the SWT (Stationary Wavelet Transform). An edge and energy rule is applied to the decomposed bands in DWT- and SWT-based image fusion. The new fusion rule gives a better fused image. Entropy is used to assess the performance of the proposed algorithm. Two images (MRI, PET) are fused using the DWT and the SWT with the proposed edge and energy algorithm, so the decomposed band that contains the higher information is present in the fused image.

Keywords - Image Fusion, DWT, SWT, Average and Maximum, Edge and Energy

I. Introduction

With the developments in medical imaging equipment, different imaging modalities are available: X-ray, which is used to display bones; Computed Tomography (CT), which can be used to display hard tissues; Magnetic Resonance Imaging (MRI), which can display soft tissues; and Positron Emission Tomography (PET), which can display physiological and pathological content. By using medical image fusion, the information present in the fused image is better than that in the original source images, so diagnosis becomes easier. Image fusion is also used in different applications such as space research, defence, and remote sensing.

In the transform domain, images are transferred from the spatial domain to the frequency domain. Wavelet transforms are multi-resolution image decomposition tools that provide different channel representations of image features using different frequency sub-bands [7]. When decomposition is performed on the image, the detailed and approximation coefficients are separated. The DWT and SWT first convert the image from the spatial domain into the transform domain; this domain conveys the sharpness and the edges. The DWT is proven to be efficient for 1-D singularities [1] and gives better spectral content. It is a shift-variant transform, so the DWT requires down-sampling. A drawback of the DWT is its limited directionality. The DWT is not a time-invariant transform; the way to get back [5] the invariance is to use the un-decimated DWT, called the SWT.

Many researchers have developed different algorithms based on these transforms, such as the average and maximum method. The proposed edge and energy algorithm gives better results than the average and maximum method. In the edge and energy method, the higher edge information and the higher energy of the decomposed bands are used for the fusion, which gives better performance than the average and maximum method.

II. Transforms

A. DWT

The wavelet transform quantifies the matching of the signal with wavelets [1]. If the values of the signal match the wavelet values, larger transform values are obtained [1]; if the values of the signal do not match the wavelet, smaller transform values are obtained. The transform is used to detect the regional characteristics of the image.

The DWT decomposes the image into bands by using decimation. The 2-D transform is obtained by applying two 1-D transforms [1]. The source image is first filtered along the rows and decimated by two, and then the same filtering and decimation are applied along the columns. The decomposition of the input image gives four sub-bands whose size is half that of the original signal; scaling is implemented to achieve this. The DWT is a multi-resolution transform. The LL band is almost like the spatial image, and the higher bands represent various frequencies. The decomposition and reconstruction of the image are done using 'db2'. The 2-D wavelet transform is given as

$W_{\phi}(j,u,v) = \frac{1}{\sqrt{NN}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\,\phi_{j,u,v}(x,y)$    (1)

$W_{\psi}(j,u,v) = \frac{1}{\sqrt{NN}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y)\,\psi_{j,u,v}(x,y)$    (2)

where the scaling function is $\phi_{j,u,v}(x,y) = 2^{j}\,\phi(2^{j}x-u,\,2^{j}y-v)$ and the wavelet (translation) function is $\psi_{j,u,v}(x,y) = 2^{j}\,\psi(2^{j}x-u,\,2^{j}y-v)$. The values of u and v range from 0 to $2^{j}-1$.

Fig. 1. DWT decomposition: the input image is passed through the filter pair g(n), h(n) with down-sampling by 2 along rows and then columns, producing the LL, LH, HL, and HH sub-bands.
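As an illustration, the following is a minimal sketch of this single-level 'db2' decomposition using the PyWavelets library; the input array is synthetic, and the LL/LH/HL/HH names are simply bindings for what PyWavelets returns as the approximation and the horizontal, vertical, and diagonal detail coefficients.

```python
import numpy as np
import pywt

# Synthetic 256x256 array standing in for a pre-processed MRI/PET slice.
image = np.random.rand(256, 256)

# Single-level 2-D DWT with the 'db2' wavelet used in the paper.
# dwt2 returns the approximation band (LL) and a tuple of detail bands.
LL, (LH, HL, HH) = pywt.dwt2(image, 'db2')
print(LL.shape, LH.shape, HL.shape, HH.shape)   # each band is roughly half the input size

# The four sub-bands reconstruct the image via the inverse transform.
reconstructed = pywt.idwt2((LL, (LH, HL, HH)), 'db2')
```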

978-1-7281-6368-0/20/$31.00 ©2020 IEEE 303


2020 5th International Conference on Devices, Circuits and Systems (ICDCS)

B. SWT

The SWT is called the un-decimated DWT. Instead of down-sampling the signal, it up-samples the filters by adding zeros in between the filter coefficients [5]. The algorithm in which zeros are added between the filter coefficients is called 'a trous', which means 'with holes'.

In the SWT, the decomposed image gives sub-bands with one set of approximation coefficients and three sets of detail coefficients. The size of the sub-bands is the same as the size of the original input image. The approximation coefficients of the images from the un-decimated algorithm are therefore represented as levels in a parallelepiped [6], with the spatial resolution of the image becoming coarser at every higher level while the size of the sub-band remains the same.

The 2-D SWT is based on the idea of not using decimation. It applies the discrete wavelet transform without the down-sampling during decomposition and applies up-sampling during the reconstruction with the inverse transform. To be precise, it applies the transform at every point of the image, saves the detail coefficients, and uses the low-frequency information at every level [5].

Fig. 2. SWT decomposition: the input image is filtered into the LL, LH, HL, and HH sub-bands without down-sampling.
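For comparison with the DWT sketch above, a minimal one-level SWT decomposition with PyWavelets (synthetic input; variable names are illustrative) looks like this.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)   # side length must be divisible by 2**level for swt2

# One-level stationary (un-decimated) wavelet transform: no down-sampling,
# so every sub-band keeps the full 256x256 size of the input image.
coeffs = pywt.swt2(image, 'db2', level=1)
cA, (cH, cV, cD) = coeffs[0]
print(cA.shape, cH.shape, cV.shape, cD.shape)   # all (256, 256)

# Inverse SWT rebuilds the image from the un-decimated sub-bands.
reconstructed = pywt.iswt2(coeffs, 'db2')
```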
III. Methodology

A. Average and Maximum

In the average and maximum algorithm [7], the average is taken for the LL bands of the decomposed images and the maximum operation is applied to the other bands. The LL band obtained by averaging and the other bands formed by applying the maximum operation are used to construct the fused image.
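A minimal sketch of this baseline rule with a single-level DWT is shown below (PyWavelets; taking the element-wise maximum of the detail coefficients is one common reading of the "maximum operation", assumed here).

```python
import numpy as np
import pywt

def fuse_average_maximum(img_a, img_b, wavelet='db2'):
    """Average-and-maximum rule [7]: average the LL bands and take the
    element-wise maximum of the corresponding detail bands."""
    ll_a, details_a = pywt.dwt2(img_a, wavelet)
    ll_b, details_b = pywt.dwt2(img_b, wavelet)

    fused_ll = (ll_a + ll_b) / 2.0                       # average rule for LL
    fused_details = tuple(np.maximum(da, db)             # maximum rule for the other bands
                          for da, db in zip(details_a, details_b))

    return pywt.idwt2((fused_ll, fused_details), wavelet)

# Two equal-sized, pre-registered inputs (synthetic stand-ins for MRI and PET).
fused = fuse_average_maximum(np.random.rand(256, 256), np.random.rand(256, 256))
```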
B. Proposed Algorithm

To perform the fusion rules, two different input images are needed; here the MRI and PET images of the same source are considered. Pre-processing of the input images is done by resizing all input images to the same size, i.e., 256 × 256.

Step-1: Apply the transform to the input images, which decomposes each input image into four sub-bands, namely LL, LH, HL, and HH.

Step-2: Apply the proposed fusion rule, that is, calculate the edge information of the low-level (LL) sub-band and calculate the energy of the other three sub-bands. In the proposed algorithm the edge information is calculated by using the Sobel operator,

$G_x = I * h_x, \qquad G_y = I * h_y$

where I is the source image. The edge intensity is calculated as

$G = \sqrt{G_x^2 + G_y^2}$

where

$h_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad h_y = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{bmatrix}$

The energy of the band is calculated as

$E = \sum_{i}\sum_{j} \left| I(i,j) \right|$

where I is the source image sub-band and the sum runs over all coefficients of the band.
Step-3: Select the sub-band with the higher edge information for the low-level sub-band and select the sub-bands with more energy for the other three sub-bands.

Step-4: The sub-bands with the higher edge information and the higher energy are used for generating the fused image by applying the inverse transform.
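Putting Steps 1-4 together, a compact single-level DWT version of the proposed rule might look like the sketch below. It uses PyWavelets plus the edge and energy scores from the previous sketch (restated so the snippet is self-contained); whole-band selection follows the wording of Steps 3 and 4.

```python
import numpy as np
import pywt
from scipy.signal import convolve2d

HX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
HY = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def edge_score(band):
    gx = convolve2d(band, HX, mode='same', boundary='symm')
    gy = convolve2d(band, HY, mode='same', boundary='symm')
    return np.sqrt(gx ** 2 + gy ** 2).sum()

def energy_score(band):
    return np.abs(band).sum()

def fuse_edge_energy(img_a, img_b, wavelet='db2'):
    """Step-1: decompose both inputs; Steps 2-3: keep the LL band with the
    higher edge score and each detail band with the higher energy score;
    Step-4: apply the inverse transform to the selected bands."""
    ll_a, details_a = pywt.dwt2(img_a, wavelet)
    ll_b, details_b = pywt.dwt2(img_b, wavelet)

    fused_ll = ll_a if edge_score(ll_a) >= edge_score(ll_b) else ll_b
    fused_details = tuple(da if energy_score(da) >= energy_score(db) else db
                          for da, db in zip(details_a, details_b))

    return pywt.idwt2((fused_ll, fused_details), wavelet)

# Pre-processing: both inputs registered and resized to the same 256x256 grid
# (synthetic arrays here stand in for the MRI and PET slices).
mri, pet = np.random.rand(256, 256), np.random.rand(256, 256)
fused = fuse_edge_energy(mri, pet)
```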
Fig. 3. Suggested system block diagram: input images A and B are pre-processed, decomposed by the DWT/SWT into low-pass (LPF) and high-pass (HPF) sub-bands, the low-pass bands are combined by edge-based fusion and the high-pass bands by energy-based fusion, and the fused image is obtained through the IDWT/ISWT.
IV. Results

A. ENTROPY

The entropy gives a measure of the information available in the fused image. For an 8-bit representation of the image, the entropy value ranges from 0 to 8; a greater entropy value indicates that more information is present in the fused image.

$H_e = -\sum_{i=0}^{L} h_{if}(i)\,\log_2 h_{if}(i)$    (3)
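A minimal sketch of this metric for an 8-bit fused image is shown below (NumPy histogram over gray levels 0-255; normalizing h_if to a probability distribution is assumed).

```python
import numpy as np

def entropy(image, levels=256):
    """Shannon entropy of an 8-bit image: H = -sum(h(i) * log2(h(i))),
    with h the normalized gray-level histogram."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                       # empty bins contribute 0 * log 0 := 0
    return float(-(p * np.log2(p)).sum())

print(entropy(np.random.randint(0, 256, (256, 256))))
```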
The operations on the following input data sets have been performed and the results are tabulated in Table I.

TABLE I: RESULTS

Metric    SET   DWT (Avg & Max) [7]   DWT (proposed)   SWT (Avg & Max)   SWT (proposed)
Entropy    1          4.6939              4.8321            4.6468            4.9143
           2          4.8110              4.8830            4.7416            4.9746
           3          5.1973              5.4090            5.0590            5.4385
           4          4.8922              4.9035            4.5749            4.9818
           5          4.9196              5.0149            4.6166            5.0120

B. OUTPUTS FOR DWT

Fig. 4. Outputs for the DWT for SET-1 to SET-5.


C. OUTPUTS FOR SWT

Fig. 5. Outputs for the SWT for SET-1 to SET-5: (a) MRI image-1, (b) PET image-2, (c) fused image using the average and maximum rule, (d) fused image using the edge and energy rule.

V. Conclusion

Image fusion using edge and energy features is implemented in the transform domain. The LL band selection is based on edges, and the higher bands are selected using energy. Entropy is improved by about 5 percent. This method can be extended by decomposing into several levels. Experiments are performed with the DWT and the SWT; the SWT is more efficient than the DWT for image fusion.

REFERENCES

[1] K. Koteswara Rao and K. Veera Swamy, "Multimodal Medical Image Fusion using NSCT and DWT Fusion Frame Work," Volume-9, Issue-2, December 2019, pp. 3644-3645.
[2] Jagalingam P. and Arkal Vittal Hegde, "A Review of Quality Metrics for Fused Image," 2015, pp. 135-137.
[3] Heba El-Hoseny, El-Sayed M. Rabaie, Wael Abd Elrahman, and Fathi Abd El-Samie, "Medical image fusion techniques based on combined discrete transform domain," 2017, pp. 482-493.
[4] Tian Lan, Zhe Xiao, Yi Li, Yi Ding, and Zhiguang, "Medical image fusion techniques based on combined discrete transform domains," 2016, pp. 788-881.
[5] www.rroioj.com
[6] ijcaonline.org
[7] Meda Balachandra Mule and Padmavathi N. B., "Basic Medical Image Fusion Methods," 2015, pp. 1046-1048.

