
ASSIGNMENT 2

IMAGE FUSION

I. What is image fusion?


Image fusion is the process of extracting the necessary information from multiple images and combining it into a single image. The resulting image is more informative than any of the inputs because redundant and unnecessary data are omitted.

II. Types of image fusion:


Image fusion can be classified into different levels:
• Signal level fusion: Signals from different sources are combined to create a new signal. The resultant signal has a better signal-to-noise ratio than the original signals.
• Pixel or data level fusion: Raw data from multiple sources is combined into single-resolution data that is more accurate and also provides information about the changes between data sets acquired at different times.

• Feature level fusion: Features such as edges, corners, lines and texture parameters are extracted from the different image sources and combined to obtain one or more feature maps, which can be used for further processing.
• Decision level fusion: The results of different algorithms are combined to obtain the final decision. Decision fusion methods include voting methods, statistical methods and fuzzy-logic-based methods.

III. Literature survey:


[1] A region-based fusion scheme is one such method. First, image segmentation is performed to obtain a set of regions. By computing various properties of each region, it can be determined which regions should be included in the fused image. This approach has numerous advantages over pixel-based methods.
[2] This paper demonstrates the use of the DWT, i.e. the discrete wavelet transform, in which the wavelets are discretely sampled. For multispectral images, the lower-frequency components can be merged with higher-spatial-resolution images. In DWT-based fusion, different fusion rules are adopted for different parts of the image after decomposition.
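To illustrate the idea in [2], the following is a minimal sketch of single-level DWT fusion of two registered grayscale images, assuming NumPy and PyWavelets (pywt) are available. The fusion rule used here (averaging the approximation band, keeping the larger-magnitude detail coefficients) is only one common choice and is not prescribed by the paper.

import numpy as np
import pywt

def dwt_fuse(img1, img2, wavelet="haar"):
    # Single-level 2-D DWT of each source image.
    cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)
    # Low-frequency (approximation) band: simple average.
    cA = 0.5 * (cA1 + cA2)
    # High-frequency (detail) bands: keep the larger-magnitude coefficient.
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
    cH, cV, cD = pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2)
    # Inverse DWT gives the fused image.
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)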
[3] It proposes improved guided image fusion, especially for computed tomography and magnetic resonance imaging. Different modifications to the basic filter and to the weight maps are proposed.
[4] Here a non-linear translation-variant filter is used to approach exposure fusion. First, fine details are extracted using the guided filter. Next, all the base layers are fused using a multi-resolution pyramid. The final image is obtained by integrating the fused detail layer and the fused base layer.

[5] A guided filter is proposed in this paper. The resultant image is a linear transform of the guidance image, and the filter has efficient smoothing ability.
[6] Comparative image fusion analysis is proposed in this paper.
[7] This provides a combination of the fuzzy logic concept and the wavelet transform.
[8] This uses support vector machines for images with different focus.

IV. Proposed design:


The proposed design uses the guided filter for multi-focus image fusion.
The guided filter is an edge-preserving filter whose computation time does not depend on the filter size. The output image is assumed to be a linear transformation of the guidance image, and the filter can be viewed as a generalized form of the bilateral filter.
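A minimal sketch of the guided filter itself is given below, assuming NumPy and SciPy; the function name guided_filter and the use of uniform_filter for the box averaging are choices made here, not part of the cited papers. It implements the local linear model (output = a*I + b within each window) described above.

import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    # Box mean over a (2r+1) x (2r+1) window.
    mean = lambda x: uniform_filter(x, size=2 * r + 1)
    mean_I, mean_p = mean(I), mean(p)
    # Covariance of (I, p) and variance of I in each window.
    cov_Ip = mean(I * p) - mean_I * mean_p
    var_I = mean(I * I) - mean_I ** 2
    # Per-window linear coefficients of the model q = a*I + b.
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    # Average the coefficients over all windows covering each pixel.
    return mean(a) * I + mean(b)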
First, a two-scale representation of each source image is obtained using an averaging filter.

To get the base layer:

B_i = I_i * Z

where I_i is the i-th source image and Z is the averaging filter.

Now, to get the detail layer, simply subtract the base layer from the source image:

D_i = I_i - B_i
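A minimal sketch of this two-scale decomposition, assuming OpenCV (cv2) and float-valued grayscale inputs; the 31x31 window size is illustrative, not prescribed here.

import cv2

def two_scale_decompose(I, ksize=31):
    # Base layer: B_i = I_i * Z, where Z is an averaging (box) filter.
    B = cv2.blur(I, (ksize, ksize))
    # Detail layer: D_i = I_i - B_i.
    D = I - B
    return B, D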
Then the base layers and detail layers are fused with the aid of a guided-filtering-based weighted average.
To construct the weight maps for this weighted average:

W_i^B = G(r1, α1)
W_i^D = G(r2, α2)

where G denotes guided filtering and r1, α1, r2, α2 are the parameters of the guided filters used for the base-layer and detail-layer weight maps, respectively.
The large-scale intensity variations of each source image go into the base layer, while the small-scale details constitute the detail layer. Then the two-scale reconstruction is performed.
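A sketch of one way to build these weight maps is given below, reusing the guided_filter function defined earlier and assuming OpenCV and NumPy. The saliency measure (Gaussian-smoothed absolute Laplacian), the winner-take-all initial maps P_i and the parameter values are assumptions borrowed from the guided-filtering fusion literature, not statements of this report.

import cv2
import numpy as np

def weight_maps(sources, r1=45, eps1=0.3, r2=7, eps2=1e-6):
    # Saliency of each source: Gaussian-smoothed magnitude of the Laplacian.
    sal = np.stack([cv2.GaussianBlur(np.abs(cv2.Laplacian(I, cv2.CV_32F)),
                                     (11, 11), 5) for I in sources])
    # Initial weight map P_i: 1 where source i is the most salient, else 0.
    P = (sal == sal.max(axis=0, keepdims=True)).astype(np.float32)
    # Refine with guided filtering, using each source image as the guide:
    # W_i^B uses (r1, eps1), W_i^D uses (r2, eps2).
    WB = np.stack([guided_filter(I, p, r1, eps1) for I, p in zip(sources, P)])
    WD = np.stack([guided_filter(I, p, r2, eps2) for I, p in zip(sources, P)])
    # Normalize so the weights over all sources sum to one at each pixel.
    WB /= WB.sum(axis=0, keepdims=True) + 1e-12
    WD /= WD.sum(axis=0, keepdims=True) + 1e-12
    return WB, WD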
To get the fused image:

F = B + D

i.e. the fused base layer plus the fused detail layer.
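Putting the pieces together, a sketch of the full fusion step, reusing the two_scale_decompose and weight_maps helpers defined above (both are illustrative names introduced here, not standard API):

import numpy as np

def fuse(sources, ksize=31):
    # Decompose every source into base and detail layers.
    layers = [two_scale_decompose(I, ksize) for I in sources]
    B = np.stack([b for b, _ in layers])
    D = np.stack([d for _, d in layers])
    # Guided-filtering-based weight maps for base and detail layers.
    WB, WD = weight_maps(sources)
    # Weighted-average fusion of each layer, then F = fused base + fused detail.
    return (WB * B).sum(axis=0) + (WD * D).sum(axis=0)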

V. Comparison with other techniques:
1. Complementary information from different sources can be well preserved.
2. It can refine noisy and inaccurate weights by exploiting the strong correlation between adjacent pixels.
3. It can preserve the colour and detail information of the source images.

VI. Conclusion:
Image fusion can be used to collect the best characteristics of several images and combine them, or to fuse data from different sensors.
The method to be used depends on the type of source imagery, the specifications required of the final image, and the experience of the user.
The main aim of image fusion is to obtain the highest spatial content without disrupting the original spectral quality.

VII. References:
[1] J. J. Lewis, R. J. O'Callaghan, S. G. Nikolov, D. R. Bull, C. N. Canagarajah, "Region-Based Image Fusion Using Complex Wavelets", 2004.
[2] A. L. Choodarathnakara, T. Ashok Kumar, Shivaprakash Koliwad, C. G. Patil, "Image Fusion by Means of DWT for Improving Classification Accuracy of RS Data", 2012.
[3] Amina Jameel, Abdul Ghafoor, Muhammad Mohsin Riaz, "Improved Guided Image Fusion for Magnetic Resonance and Computed Tomography Imaging", 2013.

[4] Harbinder Singh, Vinay Kumar, Sunil Bhooshan, "A Novel Approach for Detail-Enhanced Exposure Fusion Using Guided Filter", 2013.
[5] D. Prasanthi, "Guided Fastest Edge Preserving Filter", International Journal of Computer Science & Communication Networks.
[6] F. Sadjadi, "Comparative Image Fusion Analysis", Computer Vision and Pattern Recognition Workshops.
[7] Yong-Guang, M. A., Zhi-xin, L., "A New Arithmetic for Image Fusion Based on Fuzzy Wavelet Transform", Information and Computing Science, ICIC '09, 2009.
[8] Yong-Guang, M. A., Zhi-xin, L., "A New Arithmetic for Image Fusion Based on Fuzzy Wavelet Transform", Information and Computing Science, 2009.
