
J. Shanghai Jiaotong Univ. (Sci.), 2010, 15(1): 6-12


DOI: 10.1007/s12204-010-7186-y

Review of Pixel-Level Image Fusion

YANG Bo 1∗, JING Zhong-liang 2, ZHAO Hai-tao 2


(1. College of Information and Electrical Engineering, Shandong University of Science and
Technology, Qingdao 266510, Shandong, China;
2. Institute of Aerospace Science and Technology, Shanghai Jiaotong University, Shanghai 200240, China)

© Shanghai Jiaotong University and Springer-Verlag Berlin Heidelberg 2010

Abstract: Image fusion can be performed at different levels: signal, pixel, feature and symbol levels. Almost
all image fusion algorithms developed to date fall into the pixel level. This paper provides an overview of the most
widely used pixel-level image fusion algorithms, together with comments on their relative strengths and weaknesses.
Particular emphasis is placed on multiscale-based methods. Some performance measures applicable to pixel-level
image fusion are also discussed. Finally, prospects for pixel-level image fusion are outlined.
Key words: image fusion, pixel-level, wavelets, performance evaluation
CLC number: TP 391 Document code: A

Received date: 2007-09-01
Foundation item: the National Natural Science Foundation of China (Nos. 60775022 and 60705006)
∗E-mail: yangbo.sd@gmail.com

1 Introduction

In recent years, multi-sensor image fusion has received significant attention for both military and non-military applications. Image fusion techniques intelligently combine multi-modality sensor imagery of the same scene to provide a single enhanced view of the scene with extended information content. In Ref. [1], a framework for the field of image fusion was proposed, in which the fusion process is performed at different levels of information representation, sorted in ascending order of abstraction: signal, pixel, feature and symbol levels. This paper focuses on pixel-level fusion. At this level the fused result, a single gray or chromatic image, is created by fusing individual pixels, usually after some form of processing has been applied to the source images (e.g. extraction of edges or texture). The fused image should be more useful for human visual or machine perception. Almost all image fusion algorithms, from the simplest weighted averaging to more advanced multiscale methods, belong to this level[2]. Currently, pixel-level image fusion systems are used extensively in the fields of military applications[3-4], medical imaging[5-6], remote sensing[7], and security and surveillance[8]. The benefits of image fusion include improved spatial awareness, increased accuracy in target detection and recognition, reduced operator workload, and increased system reliability[9-11].

In pixel-level image fusion, some general requirements[12] are imposed on the fused result: ① the fusion process should preserve (as far as possible) all salient information in the source images; ② the fusion process should not introduce any artifacts or inconsistencies; ③ the fusion process should be shift-invariant. Even if all the requirements listed above are fully satisfied and perfect fused images are produced, there remains one fundamental and overriding system-level issue to be considered: real-time operation. The large volume of input data generated by multisensor arrays makes the realization of a real-time fusion system very difficult. An algorithm designer runs the risk that the benefits of an excellent fusion algorithm will never be realized if its latency is unacceptably high[12]. Speed improvement has therefore become one of the main drivers for developing image fusion algorithms.

2 Pixel-Level Image Fusion System Architecture

A typical pixel-level image fusion system can be divided into six subsystems: imaging, registration, pre-processing, fusion, post-processing and display, as shown in Fig. 1. The fusion subsystem, which combines information from the source images and produces a single fused image, is of course the core of the system. Further details of pixel-level fusion algorithms are given in the next section. The five non-fusion subsystems should also be considered systematically at the system requirements and design stage.

The imaging subsystem is composed of multiple (normally two) imaging sensors.

Fig. 1 Generic pixel-level image fusion system architecture

Typical sensor combinations include: visible light (VL) and infrared (IR), millimeter wave (MMW) and IR, synthetic aperture radar (SAR) and IR, multispectral (MS) and panchromatic (Pan), and computerized tomography (CT) and magnetic resonance imaging (MRI).

The source sensors of most image fusion systems have different fields of view, resolutions, lens distortions and frame rates. It is vital to align the input images properly with each other, both spatially and temporally; this is the problem addressed by image registration[13]. To minimize this problem, the imaging sensors in many practical systems are rigidly mounted side by side and physically aligned as closely as possible. However, in more complex systems where the sensors are mobile, the registration of the input images becomes a very challenging problem, to some extent a larger one than the fusion algorithm itself.

Pre-processing before fusion is generally desirable. Most image fusion algorithms will enhance artefacts (which are equivalent to detail information) in the input images and thus produce unreliable fused output. Careful pre-processing effectively removes known artefacts (e.g. lining) introduced by the input sensors and significantly improves system performance[2]. In addition, the noise characteristics, sensitivity and dynamic range of the imaging sensors usually differ. All of these issues can be addressed by pre-processing.

Post-processing depends on the type of display and the purpose of the fusion system. The most basic post-processing is automatic gain and offset correction for the fused image. However, more subtle or display-specific processing might also be required, such as formatting the data for compatibility with a particular type of display and adding symbols to the fused image[2].

The display subsystem shows the fused image to observers. The quality of the final output image is to some extent determined by the display. Proper selection of the display makes the best use of the system and avoids nugatory processing.
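To make the data flow concrete, the following Python sketch strings the subsystems together. It is an illustration only, not part of the original paper: the crop-based "registration", the mean/variance normalization used as pre-processing, the plain averaging used as the fusion core, and the 8-bit rescaling used as post-processing are all placeholder assumptions that a real system would replace with proper registration and the fusion algorithms of Section 3.

```python
import numpy as np

def register(images):
    """Placeholder alignment: assumes rigidly mounted, pre-aligned sensors,
    so the images are only cropped to a common size."""
    h = min(im.shape[0] for im in images)
    w = min(im.shape[1] for im in images)
    return [im[:h, :w] for im in images]

def preprocess(image):
    """Simple pre-processing: normalize each source to zero mean and unit
    variance so differing sensor gains/offsets do not bias the fusion."""
    image = image.astype(np.float64)
    return (image - image.mean()) / (image.std() + 1e-12)

def fuse(images):
    """Fusion core (plain averaging here; Section 3 discusses better choices)."""
    return np.mean(images, axis=0)

def postprocess(fused):
    """Automatic gain/offset correction: rescale the fused result to 8 bits."""
    lo, hi = fused.min(), fused.max()
    return np.uint8(255.0 * (fused - lo) / (hi - lo + 1e-12))

def fusion_pipeline(sources):
    registered = register(sources)
    processed = [preprocess(im) for im in registered]
    return postprocess(fuse(processed))
```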
3 Pixel-Level Image Fusion Algorithms

Many fusion algorithms have been proposed for pixel-level fusion, from the simplest weighted averaging (WA) to more advanced multiscale methods. For convenience, we assume two source images A(x, y) and B(x, y) and a fused image C(x, y) produced from them, where x, y denote pixel coordinates; correspondingly, two decomposed sub-images S_A(x, y) and S_B(x, y), and the fused sub-image S_C(x, y).

3.1 Weighted Averaging

The simplest image fusion method is to take a weighted average of the source images pixel by pixel:

    C(x, y) = α_A A(x, y) + α_B B(x, y),    (1)

where α_A, α_B are scalar weights. The WA method is easy to implement and fast to execute. In addition, WA is able to suppress noise present in the source images. Unfortunately, it also suppresses the salient features that should be preserved in the fused image, producing a low-contrast result. This can be mitigated to some extent through the selection of "optimal" weights. One popular approach to finding such weights is principal component analysis (PCA)[14].
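The following is a minimal numpy sketch of Eq. (1), with the option of deriving the weights from the dominant principal component of the two sources. This is one common reading of the PCA approach; the exact procedure of Ref. [14] may differ, and the function names are ours.

```python
import numpy as np

def pca_weights(a, b):
    """Weights from the dominant eigenvector of the 2x2 covariance matrix of
    the two source images, normalized so that the components sum to 1."""
    data = np.stack([a.ravel(), b.ravel()]).astype(np.float64)
    cov = np.cov(data)                               # 2 x 2 covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])       # dominant component
    return v / v.sum()

def weighted_average_fusion(a, b, weights=(0.5, 0.5)):
    """Eq. (1): C = alpha_A * A + alpha_B * B."""
    alpha_a, alpha_b = weights
    return alpha_a * a.astype(np.float64) + alpha_b * b.astype(np.float64)

# Example use:
# fused = weighted_average_fusion(img_a, img_b, pca_weights(img_a, img_b))
```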
3.2 Multiscale-Based Schemes

The limitations of WA led to the development of multiscale schemes. Multiscale decomposition is very useful for extracting the salient features of images for the purpose of fusion. Originally, pyramidal decomposition methods were applied to image fusion; more recently, developments in wavelet theory have provided a new tool for fusion problems. Multiscale-based schemes generally produce sharp, high-contrast images with more information content than the WA method. Figure 2 illustrates a generic multiscale-based image fusion scheme[9], which can be divided into three steps. Firstly, both source images A and B are decomposed using a multiscale decomposition method, which may be a pyramidal transform or a wavelet transform. Secondly, the decomposed sub-images {S_A} and {S_B} are fused using a fusion rule θ. Finally, the fused image C is reconstructed by applying the inverse transform to the fused sub-images {S_C}.

Fig. 2 Generic multiscale-based image fusion scheme
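The three steps just described can be captured in a small generic skeleton; the function names and the use of plain callables are our own illustration, with concrete decompositions given in the following subsections.

```python
from typing import Any, Callable

def multiscale_fusion(image_a: Any, image_b: Any,
                      decompose: Callable,
                      fusion_rule: Callable,
                      reconstruct: Callable):
    """Generic three-step scheme of Fig. 2: (1) decompose both sources,
    (2) fuse the sub-images with a rule theta, (3) invert the transform."""
    sub_a = decompose(image_a)          # {S_A}
    sub_b = decompose(image_b)          # {S_B}
    sub_c = fusion_rule(sub_a, sub_b)   # {S_C}
    return reconstruct(sub_c)           # fused image C
```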

3.2.1 Multiscale Decomposition

(1) Pyramidal decomposition. The concept of an image pyramid was discussed in the early 1980s. Marr[15] and Burt et al.[16] showed that pyramidal decomposition can be useful in a number of image processing applications, as a fast way of representing multiscale information that simulates the multiple scales of processing in the human visual system. The most basic example is the Gaussian pyramid, constructed using a Gaussian kernel ω and a 2D downsampler. Let g^(i) be the ith level of the Gaussian pyramid for a source image g^(0). Then for i > 0,

    g^(i)(x, y) = [ω(x, y) ∗ g^(i−1)(x, y)]↓2,    (2)

where "∗" is the convolution operator and [·]↓2 is the downsampling operator by a factor of 2 along both rows and columns. The convolution and downsampling operations result in a loss of high-frequency information from one level to the next higher level. Accordingly, the Laplacian pyramid (LP) is defined to represent the information lost between adjacent levels of the Gaussian pyramid. As shown in Fig. 3, the LP is calculated by

    l^(i−1)(x, y) = g^(i−1)(x, y) − 4ω(x, y) ∗ [g^(i)(x, y)]↑2,    (3)

where [·]↑2 denotes the upsampling operator and the factor 4 is chosen to offset the power loss caused by downsampling. It is clear that the original image can be perfectly reconstructed from its LP and the top level (assuming n levels) of the Gaussian pyramid, {l^(0), l^(1), · · ·, l^(n−1), g^(n)}. This structure was initially used by Burt and Adelson for image compression, but its applicability to the image fusion task was soon realized and the LP fusion scheme was proposed in 1985[17].

Fig. 3 One stage of the Laplacian pyramid decomposition
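A minimal numpy/scipy sketch of Eqs. (2) and (3) follows; the 5×5 binomial approximation of the kernel ω and the reflective boundary handling are assumptions on our part.

```python
import numpy as np
from scipy import ndimage

# 5x5 binomial kernel, a common stand-in for the Gaussian-like kernel omega
_w1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
KERNEL = np.outer(_w1d, _w1d)

def reduce_level(g):
    """Eq. (2): blur with omega, then downsample by 2 in both directions."""
    return ndimage.convolve(g, KERNEL, mode="reflect")[::2, ::2]

def expand_level(g, shape):
    """Upsample by 2 (zero insertion), blur, and scale by 4 to restore power."""
    up = np.zeros(shape, dtype=np.float64)
    up[::2, ::2] = g
    return 4.0 * ndimage.convolve(up, KERNEL, mode="reflect")

def laplacian_pyramid(image, levels):
    """Gaussian levels g^(i) and Laplacian levels l^(i) of Eq. (3);
    returns [l0, ..., l_{n-1}, g_n] so the image can be rebuilt exactly."""
    g = image.astype(np.float64)
    pyramid = []
    for _ in range(levels):
        g_next = reduce_level(g)
        pyramid.append(g - expand_level(g_next, g.shape))
        g = g_next
    pyramid.append(g)            # top Gaussian level g^(n)
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid: repeatedly expand and add the Laplacian levels."""
    g = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        g = lap + expand_level(g, lap.shape)
    return g
```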
There are also some alternative pyramidal decomposition methods for image fusion, such as the gradient pyramid (GP)[18] and the ratio-of-low-pass (RoLP) pyramid[19]. The GP is obtained by applying a gradient operator to each level of the Gaussian pyramid; four gradient pyramids are generated, representing the horizontal, vertical and two diagonal directions. The RoLP pyramid introduced by Toet is very similar to the LP: instead of subtraction, division is performed between adjacent levels of the Gaussian pyramid.

(2) Wavelet decomposition. The wavelet representation introduced by Mallat[20] was used in image coding, pattern matching and texture analysis in the late 1980s. The wavelet representation provides a way of analyzing signals in both the time and frequency domains, which makes it ideal for representing non-stationary signals, to which most real-world signals belong. It is well known that the discrete wavelet transform (DWT) can be implemented by iterating a perfect reconstruction filter bank {h_0, h_1, k_0, k_1}[21]. Here h_0, h_1 denote the lowpass and bandpass analysis filters respectively, and k_0, k_1 are the corresponding synthesis filters. Let W_00^(0) be the original image. As shown in Fig. 4, a set of four sub-images {W_00^(j), W_10^(j), W_01^(j), W_11^(j)} is generated at the jth step of the iteration, where W_00^(j) (j = 1, 2, · · ·) contains the low-frequency content and the other sub-images contain the high-frequency content in three (horizontal, vertical and diagonal) directions. The reconstruction process of the DWT, the inverse of the decomposition, is based on the synthesis filters and upsamplers[21].

Fig. 4 One stage of the 2D DWT
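For readers who want to experiment, the decomposition and reconstruction described above map directly onto the PyWavelets package (assumed installed; the wavelet choice and decomposition depth below are arbitrary).

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def dwt_decompose(image, wavelet="db2", levels=3):
    """Iterate the analysis filter bank: wavedec2 returns the coarse
    approximation (low-frequency W00) followed by, for each level, the
    (horizontal, vertical, diagonal) detail sub-images."""
    return pywt.wavedec2(image.astype(np.float64), wavelet, level=levels)

def dwt_reconstruct(coeffs, wavelet="db2"):
    """Inverse transform built from the synthesis filters and upsamplers."""
    return pywt.waverec2(coeffs, wavelet)

# A single decomposition step shows the four sub-images explicitly:
# approx, (horiz, vert, diag) = pywt.dwt2(image, "db2")
```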

The first wavelet-based image fusion schemes[22-23] emerged in the mid-1990s and were reported to be both qualitatively and quantitatively superior to the standard pyramidal methods. Unfortunately, the DWT is not shift-invariant because of the downsampling used in its calculation. This results in ringing artifacts within frames and motion artifacts between frames. Rockinger et al.[12] overcame this problem by using the undecimated DWT (UDWT)[24], which achieves exact shift invariance by discarding the downsampling and upsampling operations. However, the UDWT method is much more expensive in computation and storage owing to its high degree of redundancy, making it impractical for very large data sets (such as image sequences). This method also presents a significant challenge when real-time implementation is considered.

Hill et al.[25] further developed the wavelet fusion scheme by introducing the dual-tree complex wavelet transform (DTCWT)[26], which is nearly shift-invariant but has lower redundancy than the UDWT. The filters used by the DTCWT must be designed strictly to achieve the appropriate delays, so that the frequency aliases from the two trees largely cancel each other. However, it is difficult for short finite impulse response (FIR) filters to satisfy these extra constraints in addition to the usual ones of orthogonality, linear phase (symmetry) and regularity. In addition, the decomposition coefficients from the two trees, serving as the real and imaginary parts of complex coefficients, must be combined to form a shift-invariant magnitude representation, which increases the burden of subsequent processing. The situation becomes rather complex in multidimensional cases, although good directional selectivity is achieved. Therefore, Yang et al.[27] developed another low-redundancy and nearly shift-invariant wavelet transform, which is related to the concept of frames and can be regarded as an optimal combination of the DWT and UDWT.
3.2.2 Fusion Rules
In addition to the selection of the image decomposition method, there is a great deal of flexibility in the selection of the fusion rule that is applied at each level of the decomposed pyramid or sub-images. Generally, there are three categories of rules for fusing decomposed images.

(1) Point-based rules. Point-based rules consider each pixel position separately in the decomposition domain. The simplest point-based rule is the choose-max rule[28], which can be written as

    S_C(x, y) = S_A(x, y) if |S_A(x, y)| > |S_B(x, y)|, and S_C(x, y) = S_B(x, y) otherwise.    (4)
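A sketch of Eq. (4) applied coefficient-wise is given below, together with a helper that walks the pywt coefficient lists of the earlier example. Treating the approximation band with choose-max is a simplification; many schemes average it instead.

```python
import numpy as np

def choose_max(sa, sb):
    """Eq. (4): keep, at each position, the coefficient of larger magnitude."""
    return np.where(np.abs(sa) > np.abs(sb), sa, sb)

def fuse_wavedec2(coeffs_a, coeffs_b):
    """Apply the point-based rule to every sub-image of two pywt.wavedec2
    coefficient lists (approximation first, then per-level detail triples)."""
    fused = [choose_max(coeffs_a[0], coeffs_b[0])]
    for details_a, details_b in zip(coeffs_a[1:], coeffs_b[1:]):
        fused.append(tuple(choose_max(da, db)
                           for da, db in zip(details_a, details_b)))
    return fused
```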
(2) Area-based rules. It is unreliable to depend on only a single pixel to determine a fused coefficient; therefore, area-based (or window-based) rules have been proposed. In this case, the decomposed sub-images are filtered with a small fixed window. One area-based rule was developed by Burt et al.[18]: a normalized correlation between the sub-images generated from the different source images is first computed over a small local area, and the fused coefficient is then calculated, based on this measure, as a weighted average of the two sub-images. Another well-known rule is the window-based verification rule proposed by Li et al.[22], in which a binary decision map for the selection of coefficients is created using a majority filter.
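The following is a rough, simplified stand-in for such a window-based rule, in the spirit of the match-and-salience idea of Burt et al.[18]; the window size, threshold and exact weighting here are our assumptions, not those of the cited papers.

```python
import numpy as np
from scipy import ndimage

def local_activity(sub, window=3):
    """Activity measure: local energy of the coefficients in a small window."""
    sub = np.asarray(sub, dtype=np.float64)
    return ndimage.uniform_filter(sub ** 2, size=window)

def area_based_fuse(sa, sb, window=3, threshold=0.75):
    """Simplified window-based rule: where the two sub-images are highly
    correlated locally, average them; elsewhere pick the coefficient whose
    neighbourhood has the larger activity."""
    sa = np.asarray(sa, dtype=np.float64)
    sb = np.asarray(sb, dtype=np.float64)
    ea, eb = local_activity(sa, window), local_activity(sb, window)
    # normalized local correlation ("match" measure)
    match = 2.0 * ndimage.uniform_filter(sa * sb, size=window) / (ea + eb + 1e-12)
    select = np.where(ea >= eb, sa, sb)      # salience-based selection
    average = 0.5 * (sa + sb)
    return np.where(match > threshold, average, select)
```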

(3) Region-based rules. Most applications of a fusion scheme are interested in features within the image, not in individual pixels or a fixed window, so it seems reasonable to incorporate feature information into the fusion process[29]. Therefore, a number of region-based fusion schemes have been proposed[29-31]. Edge detection and region segmentation are the most important steps for these rules. The development of edge detection techniques and region labeling algorithms makes it possible to fuse coefficients more intelligently.

4 Application Examples

4.1 Digital Camera Application

It is difficult to obtain an image that contains all relevant objects "in focus" because of the limited depth of focus of cameras. Pixel-level image fusion techniques have already been shown to be useful for solving this problem[9]. Figure 5 shows an example: Figs. 5(a) and (b) are two original images taken by a digital camera with different focus settings, and Fig. 5(c) shows the fused image obtained with a wavelet-based method. In addition, there are other applications of pixel-level image fusion in this area, such as deblurring and cloud removal[9].

4.2 Medical Imaging

With the increasing demands of clinical applications, researchers are paying more attention to medical image fusion. Medical diagnosis often benefits from the complementary information in images obtained by different medical imaging devices. For example, a CT image provides the best information on denser tissue with less distortion, while an MRI image provides better information on soft tissue with more distortion. Combining the two modalities can provide more precise and reliable information about the patient. Figure 6 shows an example of medical image fusion.

4.3 Surveillance and Navigation

Image fusion techniques are increasingly being employed in the fields of surveillance and navigation. The two most common sensors in these fields are IR and VL. The IR sensor provides little detail in areas of the scene with low thermal contrast, whereas the VL sensor may represent the background in great detail. The fusion of these two sensors provides a more accurate description of the scene than either individual image. Figure 7 shows an example of IR and VL fusion for video surveillance.

4.4 Remote Sensing Application

Another significant application of pixel-level image fusion techniques is in the remote sensing field. Here, color information is provided by MS satellite images with low spatial resolution, while detailed spatial information can be observed in a Pan image. Through a fusion process, a single image containing both high spatial resolution and color information can be obtained. Figure 8 shows an example of this case.

Fig. 5 Multifocus image fusion

Fig. 6 Medical image fusion

Fig. 7 IR and VL image fusion

Fig. 8 Remote sensing image fusion



5 Performance Evaluation

There is as yet no universally accepted measure for evaluating image fusion performance, even though algorithmic techniques for fusing images are now well known and understood. Image fusion results have usually been evaluated visually. Quantitative assessment is a complicated and difficult issue in practical applications because the ideal fused image is normally unknown. In Ref. [32], this issue was studied systematically and a fairly complete architecture for performance evaluation was developed. Here, four quantitative measures commonly used in pixel-level image fusion are reviewed.

(1) Root mean square error (RMSE). One way of assessing performance, especially in multifocus image fusion, is to synthetically generate a pair of distorted source images from a known test image and then compare the fused image with the original test image using the RMSE measure. The RMSE between the fused image and the ideal image is computed as

    RMSE = { Σ_{x=1}^{M} Σ_{y=1}^{N} [C(x, y) − I(x, y)]^2 / (MN) }^{1/2},    (5)

where I(x, y) is the ideal image, and M, N denote the image dimensions.
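Eq. (5) translates directly into a few lines of numpy:

```python
import numpy as np

def rmse(fused, ideal):
    """Eq. (5): root mean square error between fused image C and ideal image I."""
    c = fused.astype(np.float64)
    i = ideal.astype(np.float64)
    return np.sqrt(np.mean((c - i) ** 2))
```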
(2) Mutual information (MI). Performance analysis is more complicated for heterogeneous-sensor fusion problems, where the ground truth in the form of an ideal fused image is unknown. The MI[6] between the source images and the fused image can be computed instead. The MI metric is defined as

    MI = H(A, B) + H(C) − H(A, B, C),    (6)

where H(C) denotes the entropy of the fused image C, H(A, B) is the joint entropy of the source images A and B, and H(A, B, C) is the joint entropy of the source and fused images. The (joint) entropy can be computed from the normalized (joint) histogram of the corresponding image(s). For example, letting h_X(i) be the normalized gray-level histogram of an image X, the entropy of X is

    H(X) = − Σ_{i=0}^{255} h_X(i) log h_X(i).    (7)
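A sketch of Eqs. (6) and (7) using histogram estimates of the (joint) entropies follows; the bin count and the base-2 logarithm are our choices, since the paper leaves them unspecified.

```python
import numpy as np

def entropy(*images, bins=64):
    """(Joint) entropy from the normalized (joint) histogram, as in Eq. (7);
    log base 2, so the result is in bits."""
    data = [im.ravel() for im in images]
    hist, _ = np.histogramdd(data, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(a, b, c, bins=64):
    """Eq. (6): MI = H(A, B) + H(C) - H(A, B, C)."""
    return (entropy(a, b, bins=bins) + entropy(c, bins=bins)
            - entropy(a, b, c, bins=bins))
```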
(3) Edge information (EI). The EI-based metric proposed by Xydeas et al.[33] is another quality metric that does not require an ideal image. It is based on the idea of measuring how well the edge information of the inputs is preserved in the fused image. The metric is computed as

    EI = Σ_{x=1}^{M} Σ_{y=1}^{N} [Q_{AC}(x, y) w_A(x, y) + Q_{BC}(x, y) w_B(x, y)] / Σ_{x=1}^{M} Σ_{y=1}^{N} [w_A(x, y) + w_B(x, y)],    (8)

where Q_{AC} is the edge preservation value between images A and C, Q_{BC} is the corresponding value between B and C, and w_A and w_B are weight functions defined from the edge strengths of images A and B, respectively[33].

(4) Spatial frequency (SF). The SF-based evaluation metric[34-35] can be computed from the fused image alone:

    SF = (f_row^2 + f_col^2)^{1/2},    (9)

where f_row and f_col are the spatial frequencies of the image along its rows and columns respectively, given by

    f_row = { (1/(MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} [C(x, y) − C(x, y−1)]^2 }^{1/2},    (10)

    f_col = { (1/(MN)) Σ_{x=1}^{M} Σ_{y=1}^{N} [C(x, y) − C(x−1, y)]^2 }^{1/2}.    (11)
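Eqs. (9)-(11) likewise reduce to first differences of the fused image; boundary terms are dropped in this sketch.

```python
import numpy as np

def spatial_frequency(c):
    """Eqs. (9)-(11): row/column spatial frequencies from first differences."""
    c = c.astype(np.float64)
    f_row = np.sqrt(np.mean((c[:, 1:] - c[:, :-1]) ** 2))
    f_col = np.sqrt(np.mean((c[1:, :] - c[:-1, :]) ** 2))
    return np.sqrt(f_row ** 2 + f_col ** 2)
```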
6 Conclusion

In this paper, we reviewed techniques for pixel-level image fusion. Pixel-level image fusion has been developing rapidly and is gradually maturing; however, many issues still deserve further study. The explosive growth of computing power is making real-time pixel-level image fusion systems a reality, so system-level issues and practical matters associated with the implementation of a real-time image fusion system should be considered seriously. Although the algorithmic techniques for pixel-level image fusion have been well studied by researchers in this domain, challenges remain in developing intelligent fusion methods that adapt to vastly different situations. In addition, the development of pixel-level image fusion algorithms urgently demands widely accepted, objective quality metrics.

References

[1] Abidi M A, Gonzalez R C. Data fusion in robotics and machine intelligence [M]. San Diego, CA: Academic Press, 1992.
[2] Smith M I, Heather J P. Review of image fusion technology in 2005 [J]. Proc SPIE, 2005, 5783: 29-45.

[3] Ferris D D, McMillan R W, Currie N C, et al. Sensors for military special operations and law enforcement applications [J]. Proc SPIE, 1997, 3062: 173-180.
[4] Smith M I, Ball A, Hooper D. Real-time image fusion: A vision aid for helicopter pilotage [J]. Proc SPIE, 2002, 4713: 83-94.
[5] Hill D, Edwards P, Hawkes D. Fusing medical images [J]. Image Processing, 1994, 6(2): 22-24.
[6] Qu G H, Zhang D L, Yan P F. Medical image fusion by wavelet transform modulus maxima [J]. Opt Express, 2001, 9(4): 184-190.
[7] Daniel M M, Willsky A S. A multiresolution methodology for signal-level fusion and data assimilation with applications to remote sensing [J]. Proc IEEE, 1997, 85(1): 164-180.
[8] Slamani M A, Ramac L, Uner M, et al. Enhancement and fusion of data for concealed weapons detection [J]. Proc SPIE, 1997, 3068: 8-19.
[9] Zhang Z, Blum R S. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application [J]. Proc IEEE, 1999, 87(8): 1315-1326.
[10] Toet A, IJspeert J K, Waxman A M, et al. Fusion of visible and thermal imagery improves situational awareness [J]. Proc SPIE, 1997, 3088: 177-188.
[11] Toet A, Franken E M. Perceptual evaluation of different image fusion schemes [J]. Displays, 2003, 24: 25-37.
[12] Rockinger O, Fechner T. Pixel-level image fusion: The case of image sequences [J]. Proc SPIE, 1998, 3374: 378-388.
[13] Brown L G. A survey of image registration techniques [J]. ACM Computing Surveys, 1992, 24(4): 325-376.
[14] Jia Y H. Fusion of Landsat TM and SAR images based on principal component analysis [J]. Remote Sensing Technology and Application, 1998, 13(1): 46-49.
[15] Marr D. Vision [M]. San Francisco, CA: W H Freeman Press, 1982.
[16] Burt P J, Adelson E H. The Laplacian pyramid as a compact image code [J]. IEEE Trans Commun, 1983, 31(4): 532-540.
[17] Burt P J, Adelson E H. Merging images through pattern decomposition [J]. Proc SPIE, 1985, 575: 173-181.
[18] Burt P J, Kolczynski R J. Enhanced image capture through fusion [C]// Proc 4th Int Conf Computer Vision. Berlin, Germany: IEEE Press, 1993: 173-182.
[19] Toet A. Hierarchical image fusion [J]. Mach Vision Appl, 1990, 3: 1-11.
[20] Mallat S G. A theory for multiresolution signal decomposition: The wavelet representation [J]. IEEE Trans Pattern Anal Machine Intell, 1989, 11(7): 674-693.
[21] Vetterli M, Herley C. Wavelets and filter banks: Theory and design [J]. IEEE Trans Signal Process, 1992, 40(9): 2207-2232.
[22] Li H, Manjunath B S, Mitra S K. Multisensor image fusion using the wavelet transform [J]. Graph Models Image Process, 1995, 57(3): 235-245.
[23] Chipman L J, Orr T M, Lewis L N. Wavelets and image fusion [C]// Proc IEEE ICIP, Vol 3. Washington D C: IEEE Press, 1995: 248-251.
[24] Unser M. Texture classification and segmentation using wavelet frames [J]. IEEE Trans Image Process, 1995, 4(11): 1549-1560.
[25] Hill P, Canagarajah N, Bull D. Image fusion using complex wavelets [C]// Proc BMVC. Cardiff, UK: British Machine Vision Association Press, 2002: 487-496.
[26] Kingsbury N G. Complex wavelets for shift invariant analysis and filtering of signals [J]. Applied and Computational Harmonic Analysis, 2001, 10(3): 234-253.
[27] Yang B, Jing Z L. Image fusion using a low-redundant and nearly shift-invariant discrete wavelet frame [J]. Optical Engineering, 2007, 46(10): 107002.
[28] Huntsberger T, Jawerth B. Wavelet based sensor fusion [J]. Proc SPIE, 1993, 2059: 488-498.
[29] Piella G. A region-based multiresolution image fusion algorithm [C]// ISIF Fusion 2002 Conference. Annapolis, USA: ISIF, 2002: 1557-1564.
[30] Zhang Z, Blum R. Region-based image fusion scheme for concealed weapon detection [C]// Proc 31st Annual Conference on Information Sciences and Systems. Baltimore, USA: Johns Hopkins University Press, 1997: 168-173.
[31] Lewis J J, O'Callaghan R J, Nikolov S G, et al. Region based fusion using complex wavelets [C]// 7th International Conference on Information Fusion. Stockholm, Sweden: ISIF, 2004: 555-562.
[32] Xiao G, Jing Z L, Wu J M, et al. Synthetical evaluation system for multi-source image fusion and experimental analysis [J]. Journal of Shanghai Jiaotong University (Science), 2006, E-11(3): 263-270.
[33] Xydeas C S, Petrović V. Objective image fusion performance measure [J]. Electronics Lett, 2000, 36(4): 308-309.
[34] McKeown D M, Cochran S D, Ford S J, et al. Fusion of HYDICE hyperspectral data with panchromatic imagery for cartographic feature extraction [J]. IEEE Trans Geosciences and Remote Sensing, 1999, 37(3): 1261-1277.
[35] Laliberte F, Gagnon L, Sheng Y. Registration and fusion of retinal images: An evaluation study [J]. IEEE Trans Medical Imaging, 2003, 22(5): 661-673.
