
SIViP (2009) 3:137–144

DOI 10.1007/s11760-008-0065-4
ORIGINAL PAPER
A modified statistical approach for image fusion using wavelet transform
S. Arivazhagan · L. Ganesan · T. G. Subash Kumar
Received: 31 March 2007 / Revised: 9 February 2008 / Accepted: 8 May 2008 / Published online: 4 June 2008
© Springer-Verlag London Limited 2008
Abstract The fusion of images is an important technique within many disparate fields such as remote sensing, robotics and medical applications. For image fusion, selecting the required region from the input images is a vital task. Recently, wavelet-based fusion techniques have been used effectively to integrate the perceptually important information generated by different imaging systems about the same scene. In this paper, a modified wavelet-based region-level fusion algorithm for multi-spectral and multi-focus images is discussed. Here, the low-frequency sub-bands are combined, not averaged, based on the edge information present in the high-frequency sub-bands, so that blur in the fused image can be eliminated. The absolute mean and standard deviation of each image patch over a 3 × 3 window in the high-frequency sub-bands are computed as activity measurements and are used to integrate the approximation band. The performance of the proposed algorithm is evaluated using entropy, fusion symmetry and peak signal-to-noise ratio, and is compared with recently published results. The experimental results show that the proposed algorithm performs better in many applications.
S. Arivazhagan (B)
Department of Electronics and Communication Engineering,
Mepco Schlenk Engineering College, Mepco Engineering College
Post, Sivakasi 626 005, Tamil Nadu, India
e-mail: s_arivu@yahoo.com
L. Ganesan
Department of Computer Science and Engineering,
Alagappa Chettiar College of Engineering and Technology,
Karaikudi 623 004, India
T. G. Subash Kumar
Jasmin Infotech Pvt Ltd, Chennai 600 100, India
Keywords Wavelet transform · Image fusion · Multi-focus images · Multi-spectral images · Fusion performance measure
1 Introduction
Image fusion is the process of combining two or more images
of the same scene into a single image, retaining important
features, so that the fused image is more suitable for the
purpose of human visual perception and computer process-
ing. Compared with the input images, the new image con-
tains more comprehensive, more accurate and more stable
information. Therefore, with the availability of multi-sensor
data in many fields such as remote sensing, medical imaging, machine vision and military applications, multi-sensor image fusion has emerged as a new and promising research
area [1].
The definition of multi-sensor fusion is very broad and
fusion can take place at the pixel and feature level. Pixel-level
fusion can be used to increase the information content asso-
ciated with each pixel in an image through multiple image
combination. Feature-level fusion can be used to increase the
likelihood that a feature is retained and is used as a means
of creating additional composite features. If both the spectral
properties and the pixel spacing of input images are very sim-
ilar, then the two images would generally be fused at the pixel
level [2]. A number of pixel-level fusion methods have been proposed for merging multi-spectral and multi-focus images. In 1990, Carper et al. [3] fused multi-spectral images using an intensity-hue-saturation-based method. In the same year, Toet [4] discussed the fusion of images using Laplacian pyramid mergers. Haeberli [5] proposed a method for the fusion of multi-focus images using a pyramid-based technique, which is based on the fact that the intensity of the
focused portion of the image will be high. With its multi-resolution analysis capability, the wavelet transform plays an important role in image feature extraction, image compression and image fusion, since it performs well in dealing with time–frequency signals [6]. The ability of the wavelet transform to localize information as a function of space and scale comes in handy in fusing information from the two images [7]. Jiang et al. [8] proposed a wavelet-based
algorithm for the fusion of multi-focus images. A fusion rule based on the wavelet transform modulus maxima of the input images at different bandwidths and levels is discussed in [9]. A fusion algorithm based on Lifting Wavelet
Transform has been proposed in [10]. Hill et al. [11] showed in 2002 that better fusion can be obtained if the images are fused using complex wavelets. Du et al. [12] fused the
satellite images of different scales using wavelet transform.
A comparative study on various image fusion methods is
presented in [13]. Recently, Zheng et al. [14] used the Support Value Transform for multi-source image fusion, where the original multi-source images are decomposed into a sequence of support value images plus a low-frequency component. The decomposed low-frequency components of the multi-source images are merged into the resulting low-frequency component by an averaging method.
The fused images are blurred in such image fusion techniques, where the low-frequency components are normally combined by an averaging method. In this paper, an enhanced algorithm for region-based image fusion is proposed, which combines the low-frequency sub-bands based on the edge information present in the high-frequency sub-bands, so that blur in the fused image can be eliminated.
This paper is organized as follows. Section 2 gives a
brief introduction to Discrete Wavelet Transform (DWT).
Section 3 discusses the image fusion algorithm and Sect. 4
discusses the parameters to measure the fusion performance.
In Sect. 5, experimental results and discussion are provided. The performance of the proposed method is discussed in Sect. 6, and concluding remarks are given in Sect. 7.
2 Discrete wavelet transform
Wavelets are functions generated from one single function
by dilations and translations [15]. The basic idea of the
wavelet transform is to represent any arbitrary function as
a superposition of wavelets. Any such superposition decom-
poses the given function into different scale levels where each
level is further decomposed with a resolution adapted to that
level [16].
The DWT is identical to a hierarchical sub-band system where the sub-bands are logarithmically spaced in frequency and represent an octave-band decomposition.

Fig. 1 Image decomposition. a One level, b two level

By applying the DWT, the image is actually divided, i.e., decomposed into four sub-bands and critically sub-sampled, as shown in Fig. 1a. These four sub-bands arise from separable applications of vertical and horizontal filters. The sub-bands labeled LH1, HL1 and HH1 represent the finest-scale wavelet coefficients, i.e., the detail images, while the sub-band LL1 corresponds to the coarse-level coefficients, i.e., the approximation image. To obtain the next coarser level of wavelet coefficients, the sub-band LL1 alone is further decomposed and critically sampled. This results in the two-level wavelet decomposition shown in Fig. 1b. Similarly, to obtain further decomposition, LL2 is used. This process continues until some final scale is reached.
The values or transformed coefficients in the approximation and detail images (sub-band images) are essential features, which are useful for image fusion. The features derived from these DWT-transformed images are used for image fusion, as discussed in the next section.
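For readers who wish to reproduce such a decomposition, the following sketch (not part of the original paper) shows a two-level 2-D DWT using the PyWavelets library; the choice of the 'db4' wavelet and the random test image are arbitrary assumptions made only for illustration.

```python
# Illustrative two-level 2-D DWT decomposition (assumes NumPy and PyWavelets are installed).
# The 'db4' wavelet and the random test image are arbitrary stand-ins, not choices from the paper.
import numpy as np
import pywt

image = np.random.rand(256, 256)              # stand-in for a grayscale input image

# wavedec2 returns [LL2, (level-2 details), (level-1 details)];
# PyWavelets orders each detail tuple as (horizontal, vertical, diagonal).
coeffs = pywt.wavedec2(image, wavelet='db4', level=2)
LL2 = coeffs[0]                               # coarse approximation sub-band
details_level2 = coeffs[1]                    # detail sub-bands at level 2
details_level1 = coeffs[2]                    # detail sub-bands at level 1 (finest scale)

print(LL2.shape, details_level2[0].shape, details_level1[0].shape)

# The inverse transform reconstructs the image from the sub-bands.
reconstructed = pywt.waverec2(coeffs, wavelet='db4')
```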
3 Image fusion algorithm
Since the wavelet coefficients having large absolute values contain information about the salient features of the images, such as edges and lines, a good fusion rule is to take the maximum (absolute value) of the corresponding wavelet coefficients [17].
The block diagram of an image fusion system is shown in Fig. 2. Here, the wavelet transform is first applied to the input images. Then the wavelet coefficients of the sub-images (i.e., the different sub-bands at different scales) are fused using fusion rules. Finally, the fused image is obtained by applying the inverse DWT to the fused wavelet coefficients. There are two types of fusion rules for fusing the sub-images: (i) pixel-based and (ii) region-based. In pixel-based fusion, wavelet coefficients are fused pixel by pixel, either by selecting the maximum (or minimum) value of the corresponding coefficients in the two sub-images, or by taking a weighted combination of the coefficients of the two sub-images. In the region-based method, the fusion rule is based on the statistical quantities of local windows of size n × n.
Fig. 2 Image fusion system. The input images f_1(m, n) and f_2(m, n) are decomposed by the DWT, their sub-band coefficients are combined according to the fusion rules and fusion decision map, and the inverse DWT of the fused wavelet coefficients gives the fused image g(m, n)
Here, the statistical quantities of each image patch over 3 × 3 windows are computed as an
activity measurement associated with the pixel centered in
the window. The fusion rule is that if the activity measurement of a window in image 1 is greater (or smaller) than that of the corresponding window in image 2, then the value of the center pixel of the window in image 1 is taken as the new pixel value. The commonly used activity measurements are the average value, standard deviation and energy.
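As an illustration only (our own sketch, not taken from the paper), a region-based rule of this kind can be expressed with local statistics computed by a uniform filter; the function names, the SciPy-based implementation and the placeholder sub-bands below are assumptions made for the example.

```python
# Sketch of a region-based fusion rule using the 3x3 local standard deviation as activity measure.
# Assumes NumPy and SciPy; sub1/sub2 are placeholders for corresponding sub-bands of two images.
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(band, size=3):
    """Standard deviation of each size x size neighbourhood."""
    mean = uniform_filter(band, size)
    mean_sq = uniform_filter(band * band, size)
    return np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))

def fuse_by_activity(sub1, sub2, size=3):
    """Keep, per pixel, the coefficient whose surrounding window shows the larger activity."""
    act1, act2 = local_std(sub1, size), local_std(sub2, size)
    return np.where(act1 >= act2, sub1, sub2)

sub1 = np.random.rand(64, 64)   # placeholder sub-band of image 1
sub2 = np.random.rand(64, 64)   # placeholder sub-band of image 2
fused = fuse_by_activity(sub1, sub2)
```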
The image fusion algorithm proposed in this paper is given as follows. The input images are first subjected to two levels of DWT decomposition. The wavelet coefficient in the low-frequency sub-band, i.e., either LL_2^1 (of the first image) or LL_2^2 (of the second image), is chosen based on the combined edge information in the corresponding high-frequency sub-bands, i.e., LH_2^1, HL_2^1 and HH_2^1 or LH_2^2, HL_2^2 and HH_2^2. Here, the mean and standard deviation over 3 × 3 windows are used as activity measurements to find the edge information present in the high-frequency sub-bands. The 3 × 3 windows in the high-frequency sub-bands are centered at the corresponding pixel of LL_2. In cases where the activity measures do not satisfy the criterion given in step 2 of the algorithm, the average of the wavelet coefficients in LL_2^1 and LL_2^2 is taken. In addition, the wavelet coefficients in the high-frequency sub-bands, i.e., either LH_m^1 or LH_m^2, HL_m^1 or HL_m^2 and HH_m^1 or HH_m^2 (m = 1, 2 represents the level of DWT decomposition), are chosen or combined based on the standard deviation as activity measure, as given in steps 3 and 4 of the algorithm. Then, the final fused image is obtained by applying the inverse DWT on the fused wavelet coefficients.
Algorithm

Step 1 Apply DWT decomposition (level 2) to the input images, f_1(m, n) and f_2(m, n).

Step 2 For each pixel of LL_2^k(m, n) (i.e., the approximation or low-frequency sub-band), identify the 3 × 3 windows CLH_2^k(m, n), CHL_2^k(m, n) and CHH_2^k(m, n), centered at the corresponding pixels in the LH_2^k(m, n), HL_2^k(m, n) and HH_2^k(m, n) sub-bands (i.e., the detail or high-frequency sub-bands), where k = 1, 2 represents the images involved in fusion.

(a) Find CA_2^k(m, n) = abs{CLH_2^k(m, n)} + abs{CHL_2^k(m, n)} + abs{CHH_2^k(m, n)}.

(b) Find the mean MA_2^k(m, n) and the standard deviation SA_2^k(m, n) of CA_2^k(m, n) as activity levels.

(c) Fuse the wavelet coefficients of LL_2 using the fusion rule:

F(m, n) =
\begin{cases}
LL_2^1(m, n), & \text{if } MA_2^1(m, n) > MA_2^2(m, n) \text{ and } SA_2^1(m, n) > SA_2^2(m, n) \\
LL_2^2(m, n), & \text{if } MA_2^1(m, n) < MA_2^2(m, n) \text{ and } SA_2^1(m, n) < SA_2^2(m, n) \\
\{LL_2^1(m, n) + LL_2^2(m, n)\}/2, & \text{otherwise}
\end{cases}
Step 3 For each pixel of LH_m^k(m, n) (i.e., high-frequency sub-band), find the standard deviation SD_m^k over 3 × 3 windows, where k = 1, 2 represents the images involved in fusion and m = 1, 2 represents the level of DWT decomposition. Fuse the wavelet coefficients of the LH_m^k(m, n) sub-band using the fusion rule:

F(m, n) =
\begin{cases}
LH_m^1(m, n), & \text{if } SD_m^1(m, n) > SD_m^2(m, n) \\
LH_m^2(m, n), & \text{if } SD_m^1(m, n) < SD_m^2(m, n) \\
\{LH_m^1(m, n) + LH_m^2(m, n)\}/2, & \text{otherwise}
\end{cases}
Step 4 Repeat step 3 for the other high-frequency sub-bands.
Step 5 Apply the inverse DWT to get the fused image.
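The following sketch is our reading of steps 1–5 under stated assumptions (PyWavelets, the 'db4' wavelet, two decomposition levels, and per-pixel local statistics over 3 × 3 windows); the function names are ours and the code is illustrative rather than the authors' implementation.

```python
# Minimal sketch of steps 1-5 as described above; assumes NumPy, SciPy and PyWavelets.
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def _local_mean_std(band, size=3):
    m = uniform_filter(band, size)
    s = np.sqrt(np.maximum(uniform_filter(band * band, size) - m * m, 0.0))
    return m, s

def fuse_images(f1, f2, wavelet='db4'):
    # Step 1: two-level DWT of both input images.
    c1 = pywt.wavedec2(f1, wavelet, level=2)   # [LL2, (details level 2), (details level 1)]
    c2 = pywt.wavedec2(f2, wavelet, level=2)

    # Step 2: combined absolute level-2 detail activity guides the choice of LL2 coefficients.
    CA1 = sum(np.abs(d) for d in c1[1])
    CA2 = sum(np.abs(d) for d in c2[1])
    MA1, SA1 = _local_mean_std(CA1)
    MA2, SA2 = _local_mean_std(CA2)
    LL_1, LL_2 = c1[0], c2[0]
    fused_LL = np.where((MA1 > MA2) & (SA1 > SA2), LL_1,
               np.where((MA1 < MA2) & (SA1 < SA2), LL_2, (LL_1 + LL_2) / 2.0))

    # Steps 3-4: per-pixel choice of detail coefficients by the larger local standard deviation.
    fused = [fused_LL]
    for dets1, dets2 in zip(c1[1:], c2[1:]):
        level = []
        for d1, d2 in zip(dets1, dets2):
            _, s1 = _local_mean_std(d1)
            _, s2 = _local_mean_std(d2)
            level.append(np.where(s1 > s2, d1,
                         np.where(s1 < s2, d2, (d1 + d2) / 2.0)))
        fused.append(tuple(level))

    # Step 5: inverse DWT of the fused coefficients gives the fused image.
    return pywt.waverec2(fused, wavelet)
```

For two registered grayscale arrays of equal size, the call would simply be `fused = fuse_images(img1, img2)`.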
4 Parameters to measure the fusion performance
Standard deviation of the difference between the ideal image
and the fused image can be used as a performance measure
of the fusion scheme. However, in a practical situation, an
ideal image may not be available. Hence, a mutual information-based criterion is used as the measure for evaluating the performance [10]. In order to compare the effectiveness of the proposed fusion algorithm, entropy, fusion symmetry and peak signal-to-noise ratio are used for evaluation.
Entropy (H): The entropy of an image is a measure of the information present in the image and is defined as follows:
H = -\sum_{i=0}^{L-1} P(i) \log P(i)    (1)

where L denotes the number of gray levels in the image and P(i) denotes the ratio of the number of pixels with gray level i (n_i) to the total number of pixels in the image (n), i.e., P(i) = n_i/n.
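For illustration (our own example, not from the paper), Eq. (1) can be computed from the normalized gray-level histogram; an 8-bit image and a base-2 logarithm are assumed here.

```python
# Illustrative entropy computation (Eq. 1) for an 8-bit grayscale image;
# zero-probability bins are excluded and log base 2 is assumed.
import numpy as np

def entropy(image, levels=256):
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()          # P(i) = n_i / n
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```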
Fusion symmetry (FS): Suppose A and B are the input images and F is the fused image; the contribution of image A (or B) to the fused image F can be quantified using FS to evaluate the fusion performance.
FS = \mathrm{abs}\left( \frac{M_{AF}}{M_{AF} + M_{BF}} - 0.5 \right)    (2)
where M_{AF} denotes the mutual information of images A and F, and M_{BF} denotes the mutual information of images B and F. The measure of mutual information gives the amount of correlation between two distributions [18]. Given two images A and F, the mutual information is defined as

M_{AF} = \sum_{x,y} P_{AF}(x, y) \log \frac{P_{AF}(x, y)}{P_A(x) P_F(y)}    (3)
where P_A(x) and P_F(y) are the probability density functions of the individual images and P_{AF}(x, y) is the joint probability density function. Estimates of the joint and marginal density functions can be obtained by simple normalization of the joint and marginal histograms of the two images.
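As a sketch of how Eqs. (2) and (3) could be evaluated (our own illustration, with a base-2 logarithm and 256-bin histograms assumed), the joint and marginal densities can be estimated from a 2-D histogram:

```python
# Illustrative mutual information (Eq. 3) and fusion symmetry (Eq. 2);
# densities are estimated from normalized histograms, log base 2 assumed.
import numpy as np

def mutual_information(a, f, bins=256):
    joint, _, _ = np.histogram2d(a.ravel(), f.ravel(), bins=bins)
    p_af = joint / joint.sum()                     # joint density estimate
    p_a = p_af.sum(axis=1, keepdims=True)          # marginal of A
    p_f = p_af.sum(axis=0, keepdims=True)          # marginal of F
    nz = p_af > 0
    return np.sum(p_af[nz] * np.log2(p_af[nz] / (p_a @ p_f)[nz]))

def fusion_symmetry(a, b, f):
    m_af, m_bf = mutual_information(a, f), mutual_information(b, f)
    return abs(m_af / (m_af + m_bf) - 0.5)
```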
Peak signal-to-noise ratio (PSNR): The PSNR is defined as follows:

\mathrm{PSNR} = 20 \log_{10} \left( \frac{255}{\mathrm{RMSE}} \right)    (4)
where RMSE is the root mean square error. Suppose R is the standard reference image and F is the fused result image; then the RMSE is defined as

\mathrm{RMSE} = \sqrt{ \frac{ \sum_{i=1}^{M} \sum_{j=1}^{N} [R(i, j) - F(i, j)]^2 }{ M N } }    (5)
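A small sketch of Eqs. (4) and (5) follows (our own illustration, assuming 8-bit intensities so that the peak value is 255):

```python
# Illustrative RMSE (Eq. 5) and PSNR (Eq. 4) between a reference image and a fused image.
import numpy as np

def psnr(reference, fused):
    diff = reference.astype(float) - fused.astype(float)
    rmse = np.sqrt(np.mean(diff ** 2))
    return float('inf') if rmse == 0 else 20 * np.log10(255.0 / rmse)
```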
A better fusion performance is characterized by higher
entropy, lower fusion symmetry, smaller RMSE and higher
PSNR.
5 Experimental results and discussion
The proposed fusion algorithm, explained in Sect. 3, is applied to a set of multi-focus and multi-spectral images. For the purpose of comparison, and to show the effectiveness of the proposed method, the same set of input images is subjected to the image fusion algorithm used in [1], but using a separable orthogonal wavelet transform (the Daubechies 4-tap filter), hereafter called the old method, in which the LL or approximation sub-bands of the DWT-decomposed images are fused by a simple pixel-by-pixel averaging technique. In our proposed method, the approximation sub-bands are fused using information from the corresponding detail sub-bands following the modified algorithm discussed in Sect. 3.
5.1 Fusion of multi-focus images:
The proposed fusion algorithm is applied to a set of multi-focus images, namely (i) clock, (ii) book and (iii) bottle, and the results obtained are shown in Fig. 3, where columns (a) and (b) show the multi-focus input images, column (c) shows the corresponding fused image obtained with the old method, and column (d) shows the fused image obtained using the proposed method.

The right portions of the images in Fig. 3a are distinct, whereas the left portions are blurred. In contrast, the right portions of the images in Fig. 3b are blurred, whereas the left portions are distinct. Figure 3c shows the fused image obtained using the old method, in which both portions are distinct, but at the same time the left and right portions are slightly blurred compared to the distinct portions in Fig. 3b and Fig. 3a, respectively. Figure 3d shows the fused image obtained using the proposed method, in which the visual quality is better than that of the fused image obtained using the old method.
5.2 Fusion of multi-focus microscopic images:
Since the microscope has a limited depth of field, the image cannot be completely focused [19]. Various parts of the object are out of focus in each image when the images are acquired at different distances using the microscope [20]. The proposed fusion algorithm is applied to a set of multi-focus microscopic images, namely (i) cell and (ii) texture, and the results obtained are shown in Fig. 4, where columns (a) and (b) show the multi-focus input images, column (c) shows the corresponding fused image obtained with the old method, and column (d) shows the fused image obtained using the proposed method.

Figure 4a, b show a set of microscopic images at various depths. Figure 4c shows the fused image obtained using the old method, in which all portions of the cell are distinct but at the same time blurred. Figure 4d shows the fused image obtained using the proposed method, in which the visual quality of the image is better.
5.3 Fusion of infrared and visible images:
The proposed fusion algorithm is applied to the infrared (IR) and visible images obtained from [21].

Fig. 3 Results of multi-focus image fusion for clock, book and bottle images. a, b Multi-focus images; c fused image (old method); d fused image (proposed method)

Fig. 4 Results of multi-focus microscopic image fusion. a, b Multi-focus microscopic images; c fused image (old method); d fused image (proposed method)

Fig. 5 Results of fusion of infrared and visible images. a Infrared image; b visible image; c fused image (old method); d fused image (proposed method)

Figure 5a shows the infrared image, in which the contour of a person could be
seen, but the bridge could not be seen. In Fig. 5b, which is
a visible light image, the contour of the bridge could be seen,
whereas the person could not be seen.

Fig. 6 Results of fusion of FIR and NIR images. a FIR image; b NIR image; c fused image (old method); d fused image (proposed method)

Fig. 7 Results of fusion of aero and satellite images. a Aero image; b satellite image; c fused image (old method); d fused image (proposed method)

Fig. 8 Results of fusion of MMW and visible images. a MMW image; b visible image; c fused image (old method); d fused image (proposed method)

Figure 5c shows the fused image obtained using the old method, in which the person appears distinct, while the edges of the bridge are
blurred. Figure 5d shows the fused image obtained using the proposed method, in which both the person and the bridge are visible, with clearer edges than in the old method.
5.4 Fusion of FIR and NIR images:
Figure 6a, b show a pair of far-infrared (FIR) and near-infrared (NIR) images. The FIR image is bright, the objects in it are easily recognized, and its contrast is higher. In contrast, most parts of the NIR image are dim and it is difficult to recognize the objects, although the roads and the background in the image are bright. Figure 6c is the fused image obtained using the old method, which contains the features of both the FIR and NIR images and is also bright. Figure 6d shows the fused image obtained using the proposed method, in which the visual quality is improved compared with the old method.
5.5 Fusion of aero and satellite images:
Figure 7a, b show the aero image and the satellite image, respectively. The contrast of the aero image is higher and its edges are distinct, whereas the contrast of the satellite image is lower and its edges are blurred. Figure 7c is the fused image obtained using the old method. Figure 7d is the fused image obtained using the proposed method, which contains the features of both the aero and satellite images. Further, its contrast is higher and the edges of the objects can be recognized more easily, compared with the fused image shown in Fig. 7c.
5.6 Concealed weapon detection:
The effectiveness of the proposed algorithm can be demonstrated by extending it to the application of concealed weapon detection. Figure 8a, b show MMW and visible images of a group of persons. Figure 8c shows the fused image obtained using the old method, in which the image is darkened. Figure 8d shows the fused image obtained using the proposed method, in which the quality of the image is much improved. Moreover, the weapon, i.e., the pistol concealed by the third person from the left, can be easily identified visually.
From all the pictorial results given in Figs. 3–8, it is observed that the fusion results obtained with the old method are blurred. This is mainly due to the simple averaging of the approximation coefficients. In contrast, the proposed algorithm chooses or combines the approximation coefficients based on the edge information present in the high-frequency sub-bands, which yields better fusion results, as is evident in the pictorial results.
Table 1 Performance of multi-focus image fusion

Sl. no. | Images  | Old method (Entropy / Fusion symmetry / PSNR) | Proposed method (Entropy / Fusion symmetry / PSNR) | Visual quality (old / proposed)
1 | Clock   | 5.0707 / 0.0038 / 31.2046 | 5.0678 / 0.0837 / 33.3894 | Blurred / Good
2 | Book    | 4.9910 / 0.0025 / 29.5388 | 5.0545 / 0.0078 / 38.0545 | Blurred / Good
3 | Bottle  | 5.1832 / 0.0002 / 27.3026 | 4.2371 / 0.0002 / 31.2725 | Blurred / Good
4 | Texture | 5.0588 / 0 / 30.1730      | 5.1075 / 0.0143 / 32.6585 | Blurred / Good
5 | Cell    | 3.2771 / 0.0584 / –       | 3.2772 / 0 / –            | Blurred / Good
Table 2 Performance of multi-spectral image fusion

Sl. no. | Image 1  | Image 2   | Old method (Entropy / Fusion symmetry) | Proposed method (Entropy / Fusion symmetry) | Visual quality (old / proposed)
1 | Infrared | Visible   | 4.6324 / 0.0043 | 5.0097 / 0.0407 | Good / Improved
2 | Far-IR   | Near-IR   | 5.2025 / 0.0016 | 5.2313 / 0.0019 | Good / Good
3 | Aero     | Satellite | 4.4074 / 0.0012 | 4.5187 / 0.0732 | Good / Good
4 | MMW      | Visible   | 4.0874 / 0.0011 | 4.5136 / 0.0066 | Dim / Improved
6 Performance evaluation and analysis
The performances of the old and proposed fusion methods are compared using objective fidelity criterion measures, namely entropy, fusion symmetry and PSNR.

Table 1 shows the performance of the fusion of multi-focus images, where the old method refers to the algorithm proposed in [1]. Ideally, the entropy and PSNR of the fused image should be high, whereas the fusion symmetry should be small.
From Table 1, it is clear that when the multi-focus clock and bottle images are fused, the entropy is higher and the fusion symmetry is lower in the old method, whereas the PSNR and the visual quality of the fused image are much better in the proposed method than in the old method. Although the fusion symmetry is lower in the old method for the multi-focus book and microscopic texture images, the entropy, PSNR and visual quality of the fused image are much better in the proposed method than in the old method. In the fusion of the microscopic multi-focus cell images, the fusion symmetry is lower and the entropy is higher in the proposed method, and the visual quality of the fused image is much better than in the old method. Since the reference image (which could be formed by cutting out the required portions of the input images) is not available, the PSNR is not computed for this case.
Table 2 shows the performance of the fusion of multi-
spectral images. The fusion is performed on the set of (i)
infrared and visible, (ii) far infrared and near infrared, (iii)
aero and satellite and (iv) MMW and visible images. The
entropy of the fused images is higher in the proposed method, but the fusion symmetry is lower in the old method. However, the visual quality of the fused images is better in the proposed method than in the old method.
7 Conclusion
In this paper, a new, improved fusion algorithm based on the wavelet transform is presented. Fusion performance was evaluated using objective fidelity criteria such as entropy, fusion symmetry and PSNR, and the proposed method was compared with a recent algorithm. All fusion performance measures of the proposed method are comparable to or better than the measures obtained with the old method. This is mainly due to the fusion of the approximation coefficients based on the edge information in the high-frequency sub-bands instead of simple averaging of the approximation coefficients. The strength of the proposed method is further demonstrated on a larger number of multi-focus and multi-spectral images. Moreover, the visual quality of all the fused images obtained using the proposed method shows very good improvement, as is evident from the results shown.
References
1. Bin, L., Jiaxiong, P.: Image fusion method based on short support symmetric non-separable wavelet. Int. J. Wavel. Multi-Resolut. Inf. Process. 2(1), 87–98 (2004)
2. Cohen, A., Kovacevic, J.: Wavelets: the mathematical background. IEEE Proc. 84(4), 514–522 (1996)
3. Carper, J.W., Lilles, T.M., Kiefer, R.W.: The use of intensity-hue-saturation transformations for merging SPOT panchromatic and multi-spectral image data. Photogr. Eng. Remote Sens. 56, 459–467 (1990)
4. Toet, A.: Hierarchical image fusion. Mach. Vis. Appl. 3(1), 1–11 (1990)
5. Haeberli, P.: A multi-focus method for controlling depth of field (1994). http://www.sgi.com/grafica/depth
6. Bruce, L.M., Cheriyadat, A., Burns, M.: Wavelets: getting perspective. IEEE Potentials, pp. 24–27 (2003)
7. Daubechies, I.: The wavelet transform, time-frequency localization and signal analysis. IEEE Trans. Inf. Theory 36(5), 961–1005 (1990)
8. Zhi-guo, J., Dong-bing, H., Jin, C., Xiao-kuan, Z.: A wavelet
based algorithm for multi-focus micro-image fusion. In: Proceed-
ings of the Third International Conference on Image and Graphics
(ICIG04), 0-7695-2244-0/04 (2004)
9. Qu, G., Zhang, D., Yan, P.: Medical image fusion by wavelet transform modulus maxima. Opt. Express 9(4), 184 (2001)
10. Ranjith, T., Ramesh, C.: A lifting wavelet transform based algorithm for multi-sensor image fusion. CRL Tech. J. 3(3), 19–22 (2001)
11. Hill, P., Canagaraj, N., Bull, D.: Image fusion using complex wavelets. In: BMVC, pp. 487–496 (2002)
12. Du, Y., Vachon, P.W., Vander Sanden, J.J.: Satellite image fusion with multi-scale wavelet analysis for marine applications. Can. J. Remote Sens. 29(1), 14–23 (2003)
13. Wang, Z., Ziou, D., Armenakis, C., Li, D., Li, Q.: A comparative analysis of image fusion methods. IEEE Trans. Geosci. Remote Sens. 43(6), 1392–1402 (2005)
14. Zheng, S., Shi, W.-Z., Liu, J., Zhu, G.-X., Tian, J.-W.: Multisource image fusion method using support value transform. IEEE Trans. Image Process. 16(7), 1831–1839 (2007)
15. Rao, R.M., Bopardikar, A.S.: Wavelet transforms: introduction to
theory and applications. Pearson Education Asia (2000)
16. Antonini, M., Barlaud, M., Mathieu, P., Daubechies, I.: Image coding using wavelet transform. IEEE Trans. Image Process. 1(2), 205–220 (1992)
17. Nikolov, S.G., Bull, D.R., Canagarajah, C.N., Halliwell, M., Wells, P.N.T.: Fusion of 2-D images using their multi-scale edges. In: ICPR 2000, pp. 3045–3048 (2000)
18. Haykins, S.: Communication systems, 4th edn. Wiley, New York
(2001)
19. Image Analyzer plugins, http://meesoft.logicnet.dk/Analyzer/
plugins/
20. RVC-Image Archiving software producers. http://www.rvc.nl/en/
microscopy/multifocus.html
21. Fusion of visible and infrared images, http://www.geocities.com/
alexjouan/image_fusion.htm