
Defence Science Journal, Vol. 58, No. 3, May 2008, pp. 338-352
© 2008, DESIDOC

Pixel-level Image Fusion using Wavelets and Principal Component Analysis
V.P.S. Naidu and J.R. Raol
National Aerospace Laboratories, Bangalore-560 017
ABSTRACT

Image registration and fusion are of great importance in the defence and civilian sectors, e.g., in recognising a ground/air-force vehicle and in medical imaging. Pixel-level image fusion using wavelets and principal component analysis has been implemented and demonstrated in PC MATLAB. Different performance metrics, with and without a reference image, are implemented to evaluate the performance of the image fusion algorithms. As expected, the simple averaging fusion algorithm shows degraded performance. The ringing tone present in the fused image can be avoided by using wavelets with the shift-invariance property. It has been concluded that image fusion using wavelets with a higher level of decomposition showed better performance in some metrics, while in other metrics principal component analysis showed better performance.

Keywords: Pixel-level image fusion, wavelets transform, principal component analysis, multi-sensor image fusion

NOMENCLATURE

$\psi_{a,b}$ - Mother wavelet
$\lambda$ - Eigenvalue
$\mu_{I_r}$ - Mean of the reference image
$\sigma$ - Standard deviation
$\mathrm{cov}$ - Covariance function
$I, I_f, I_L, I_H, I_r$ - Source, fused, low-pass filtered, high-pass filtered, and reference images
$I_{LL}, I_{LH}, I_{HL}, I_{HH}$ - Average (approximation), horizontal, vertical, and diagonal information of $I$
$I_1, I_2$ - Input images to be fused
$m, n$ - Integers
$\mathrm{norm}$ - Norm operator
$M, N$ - Sizes of the image
$P_1, P_2$ - Principal components
$T$ - Matrix transpose
CORR - Correlation
CF - Column frequency of the image
$CE(I_j; I_f)$ - Cross entropy of the $j$th input image and the fused image
$E$ - Expectation operator
$f(x)$ - One-dimensional signal
$h_{I_r I_f}$ - Joint histogram of the reference and fused images
$(i, j)$ - Pixel index
$H_e$ - Entropy
MAE - Mean absolute error
RF - Row frequency of the image
PFE - Percentage fit error
PSNR - Peak signal-to-noise ratio
RMSE - Root mean square error
SSIM - Measure of structural similarity

Received 29 September 2006, revised 01 August 2007

1. INTRODUCTION

Multi-sensor image fusion (MIF) is a technique to combine registered images so as to increase the spatial resolution of acquired low-detail multi-sensor images while preserving their spectral information. Of late, MIF has emerged as a new and promising research area. The fields benefiting from MIF are: military, remote sensing, machine vision, robotics, and medical imaging. The problem that MIF tries to solve is to merge the information content of several images (or images acquired from different imaging sensors) of the same scene, in order to accomplish a fused image that contains the finest information coming from the original images1. Hence, the fused image would provide a superior image to any of the original source images.

Some generic requirements could be imposed on the fusion scheme: (a) the fusion process should preserve all relevant information contained in the source images, (b) the fusion process should not introduce any artifacts or inconsistencies which would mislead the human observer or the following processing stages, and (c) irrelevant features and noise should be suppressed to the maximum extent.

MIF could be performed at three different levels, viz., pixel level, feature level, and decision level2. In this paper, pixel-level-based MIF is presented; it represents a fusion process generating a single combined image containing a more truthful description than the individual source images. The simplest MIF is to take the average of the grey-level source images pixel by pixel; this technique would produce several undesired effects and reduced feature contrast. To overcome this problem, multiscale transforms such as wavelets, Laplacian pyramids, morphological pyramids, and gradient pyramids have been proposed. Multi-resolution wavelet transforms could provide good localisation in both spatial and frequency domains. Although wavelet theory was introduced as a mathematical tool in the 1980s, it has been extensively used in image processing. The discrete wavelet transform would provide directional information in decomposition levels and contain unique information at different resolutions3,4.

Principal component analysis (PCA) is a mathematical tool which transforms a number of correlated variables into a number of uncorrelated variables. The PCA is used extensively in image compression and image classification. An image fusion algorithm that utilises the PCA is described in this paper. The fusion is achieved by a weighted average of the images to be fused; the weights for each source image are obtained from the eigenvector corresponding to the largest eigenvalue of the covariance matrix of the sources.

One of the important prerequisites to be able to apply fusion techniques to source images is image registration, i.e., the information in the source images needs to be adequately aligned and registered prior to fusion. In this paper, it is assumed that the source images are already registered. Performance metrics are used to evaluate the wavelet-, PCA-, and simple-average-based image fusion algorithms.

2. FUSION ALGORITHMS

The details of the wavelet and PCA algorithms and their use in image fusion, along with the simple average fusion algorithm, are described in this section.

2.1 Wavelet Transform

Wavelet theory is an extension of Fourier theory in many aspects, and it was introduced as an alternative to the short-time Fourier transform (STFT). In Fourier theory, the signal is decomposed into sines and cosines, but in wavelets the signal is projected on a set of wavelet functions. The Fourier transform would provide good resolution in the frequency domain, whereas the wavelet transform would provide good resolution in both the time and frequency domains.

In Fourier analysis, the signal is decomposed into sine waves of different frequencies. In wavelet analysis, the signal is decomposed into scaled (dilated or expanded) and shifted (translated) versions of the chosen mother wavelet. A wavelet, as its name implies, is a small wave that grows and decays essentially in a limited time period. For a wavelet to be a small wave, it has to satisfy two basic properties: (i) the time integral must be zero,

$$\int_{-\infty}^{\infty} \psi(t)\,dt = 0 \qquad (1)$$

and (ii) the square of the wavelet integrated over time must be unity,

$$\int_{-\infty}^{\infty} \psi^{2}(t)\,dt = 1 \qquad (2)$$

The wavelet transform of a 1-D signal $f(x)$ onto a basis of wavelet functions is defined as

$$W_{a,b}\left(f(x)\right) = \int_{x=-\infty}^{\infty} f(x)\,\psi_{a,b}(x)\,dx \qquad (3)$$

The basis is obtained by translation and dilation of the mother wavelet as

$$\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right) \qquad (4)$$

The mother wavelet would localise in both the spatial and frequency domains and has to satisfy the zero-mean constraint. The basis functions are called wavelets; these are functions generated by translation and dilation of the mother wavelet. In the discrete wavelet transform (DWT), the dilation factor is $a = 2^{m}$ and the translation factor is $b = n2^{m}$, where $m$ and $n$ are integers. The DWT provides a multi-resolution decomposition of an image in a biorthogonal basis and results in a non-redundant image representation.

The wavelet transform separately filters and down-samples the 2-D data (image) in the vertical and horizontal directions (separable filter bank). The input (source) image $I(x,y)$ is filtered by a low pass filter $L$ and a high pass filter $H$ in the horizontal direction and then down-sampled by a factor of two (keeping the alternate samples) to create the coefficient matrices $I_L(x,y)$ and $I_H(x,y)$. The coefficient matrices $I_L(x,y)$ and $I_H(x,y)$ are then both low pass and high pass filtered in the vertical direction and down-sampled by a factor of two to create the sub-bands (sub-images) $I_{LL}(x,y)$, $I_{LH}(x,y)$, $I_{HL}(x,y)$, and $I_{HH}(x,y)$. The information flow in one level of 2-D image decomposition is illustrated in Fig. 1.

Figure 1. One level of 2-D image decomposition. (Notation: ↓C - keep one column out of two, i.e., down-sampling in columns; ↓R - keep one row out of two, i.e., down-sampling in rows; a block labelled X denotes convolution with filter X.)
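The one-level decomposition described above can be sketched in code. The following is a minimal illustrative sketch (not the authors' MATLAB implementation), assuming the orthonormal Haar filters $L = [1/\sqrt{2},\,1/\sqrt{2}]$, $H = [1/\sqrt{2},\,-1/\sqrt{2}]$ and even image dimensions; `analysis_1d` and `dwt2_haar` are our hypothetical helper names:

```python
import numpy as np

def analysis_1d(a, axis):
    """Filter along `axis` with the Haar low/high pass filters and keep
    alternate samples (assumes an even size along that axis)."""
    a = np.moveaxis(a, axis, 0)
    s = 1.0 / np.sqrt(2.0)
    low = s * (a[0::2] + a[1::2])   # convolve with L = [s, s], down-sample by 2
    high = s * (a[0::2] - a[1::2])  # convolve with H = [s, -s], down-sample by 2
    return np.moveaxis(low, 0, axis), np.moveaxis(high, 0, axis)

def dwt2_haar(img):
    """One level of the separable 2-D DWT: horizontal filtering with column
    down-sampling first, then vertical filtering with row down-sampling (Fig. 1)."""
    I_L, I_H = analysis_1d(img, axis=1)
    I_LL, I_LH = analysis_1d(I_L, axis=0)
    I_HL, I_HH = analysis_1d(I_H, axis=0)
    return I_LL, (I_LH, I_HL, I_HH)
```

For a constant image the three detail sub-bands vanish and the approximation carries a scaled copy of the mean, consistent with $I_{LL}$ being a smoothed, sub-sampled version of the source image.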

The $I_{LL}(x,y)$ sub-band contains the average image information corresponding to the low-frequency band of the multi-scale decomposition. It could be considered as a smoothed and sub-sampled version of the source image $I(x,y)$; it represents the approximation of the source image. The $I_{LH}(x,y)$, $I_{HL}(x,y)$, and $I_{HH}(x,y)$ are detail sub-images which contain directional (horizontal, vertical, and diagonal) information of the source image $I(x,y)$, owing to spatial orientation. Multi-resolution could be achieved by recursively applying the same algorithm to the low pass coefficients from the previous decomposition1,5,6.

The inverse 2-D wavelet transform is used to reconstruct the image $I(x,y)$ from the sub-images $I_{LL}(x,y)$, $I_{LH}(x,y)$, $I_{HL}(x,y)$, and $I_{HH}(x,y)$, as shown in Fig. 2. This involves column up-sampling (inserting zeros between samples) and filtering with the low pass filter $\tilde{L}$ and the high pass filter $\tilde{H}$ for each sub-image. Row up-sampling and filtering of the resulting images with the low pass filter $\tilde{L}$ and the high pass filter $\tilde{H}$, followed by summation of all the matrices, would reconstruct the image $I(x,y)$.

Figure 2. One level of 2-D image reconstruction. (Notation: ↑C - insert one column of zeros between columns, i.e., up-sampling in columns; ↑R - insert one row of zeros between rows, i.e., up-sampling in rows.)

2.1.1 Image Fusion by Wavelet Transform

The information flow diagram of the wavelet-based image fusion algorithm is shown in Fig. 3. In the wavelet image fusion scheme, the source images $I_1(x,y)$ and $I_2(x,y)$ are decomposed into approximation and detail coefficients at the required level using the DWT. The approximation and detail coefficients of both images are combined using a fusion rule $\phi$. The fused image $I_f(x,y)$ could then be obtained by taking the inverse discrete wavelet transform (IDWT) as:

$$I_f(x,y) = IDWT\left[\phi\left\{DWT\left(I_1(x,y)\right),\, DWT\left(I_2(x,y)\right)\right\}\right] \qquad (5)$$

The fusion rule used in this paper simply averages the approximation coefficients and picks, in each detail sub-band, the coefficient with the largest magnitude.

2.2 Principal Component Analysis

The PCA involves a mathematical procedure that transforms a number of correlated variables into a number of uncorrelated variables called principal components. It computes a compact and optimal description of the data set. The first principal component accounts for as much of the variance in the data as possible, and each succeeding component accounts for as much of the remaining variance as possible. The first principal component is taken to be along the direction with the maximum variance.
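The fusion rule $\phi$ of Eqn (5), average the approximations and select the larger-magnitude detail coefficient at each position, can be sketched for one decomposition level as follows. This is an illustrative sketch rather than the authors' MATLAB code; `fuse_coeffs` is a hypothetical helper name:

```python
import numpy as np

def fuse_coeffs(approx1, details1, approx2, details2):
    """Fusion rule phi: average the approximation coefficients; in each
    detail sub-band keep the coefficient with the largest magnitude."""
    fused_approx = 0.5 * (approx1 + approx2)
    fused_details = tuple(
        np.where(np.abs(d1) >= np.abs(d2), d1, d2)
        for d1, d2 in zip(details1, details2)
    )
    return fused_approx, fused_details
```

Applying the IDWT to the fused approximation and detail coefficients then yields the fused image $I_f(x,y)$ as in Eqn (5).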

Figure 3. Information flow diagram of the image fusion scheme employing multi-scale decomposition (registered source images $I_1$, $I_2$ → DWT → wavelet coefficients → fusion rules → fused wavelet coefficients → IDWT → fused image $I_f$).

The second principal component is constrained to lie in the subspace perpendicular to the first; within this subspace, it points in the direction of maximum variance. The third principal component is taken in the maximum-variance direction in the subspace perpendicular to the first two, and so on. The PCA is also called the Karhunen-Loève transform or the Hotelling transform. The PCA does not have a fixed set of basis vectors like the FFT, DCT, wavelets, etc.; its basis vectors depend on the data set.

Let $X$ be a $d$-dimensional random vector, and assume it has zero empirical mean. The orthonormal projection matrix $V$ would be such that $Y = V^{T}X$, with the constraints that $\mathrm{cov}(Y)$ is diagonal and that the inverse of $V$ is equivalent to its transpose ($V^{-1} = V^{T}$). Using matrix algebra7,

$$\mathrm{cov}(Y) = E\{YY^{T}\} = E\{(V^{T}X)(V^{T}X)^{T}\} = E\{(V^{T}X)(X^{T}V)\} = V^{T}E\{XX^{T}\}V = V^{T}\mathrm{cov}(X)\,V \qquad (6)$$

Multiplying both sides of Eqn (6) by $V$, one gets

$$V\,\mathrm{cov}(Y) = VV^{T}\mathrm{cov}(X)\,V = \mathrm{cov}(X)\,V \qquad (7)$$

One could write $V$ as $V = [V_1, V_2, \ldots, V_d]$ and $\mathrm{cov}(Y)$ as

$$\mathrm{cov}(Y) = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 & 0 \\ 0 & \lambda_2 & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & \lambda_{d-1} & 0 \\ 0 & 0 & \cdots & 0 & \lambda_d \end{bmatrix} \qquad (8)$$

Substituting Eqn (8) into Eqn (7) gives

$$[\lambda_1 V_1,\, \lambda_2 V_2,\, \ldots,\, \lambda_d V_d] = [\mathrm{cov}(X)V_1,\, \mathrm{cov}(X)V_2,\, \ldots,\, \mathrm{cov}(X)V_d] \qquad (9)$$

This could be rewritten as

$$\lambda_i V_i = \mathrm{cov}(X)\,V_i \qquad (10)$$

where $i = 1, 2, \ldots, d$ and $V_i$ is an eigenvector of $\mathrm{cov}(X)$.

2.2.1 PCA Algorithm

Let the source images (images to be fused) be arranged in two column vectors. The steps followed to project this data into a 2-D subspace are:

1. Organise the data into column vectors. The resulting matrix $Z$ is of dimension 2 x n, where $n$ is the length of each image vector.
2. Compute the empirical mean along each column. The empirical mean vector $M_e$ has a dimension of 1 x 2.
3. Subtract the empirical mean vector $M_e$ from each column of the data matrix $Z$. The resulting matrix $X$ is of dimension 2 x n.
4. Find the covariance matrix $C$ of $X$, i.e., $C = XX^{T}$ (whose expected value is $\mathrm{cov}(X)$).
5. Compute the eigenvectors $V$ and eigenvalues $D$ of $C$ and sort them by decreasing eigenvalue. Both $V$ and $D$ are of dimension 2 x 2.
6. Consider the first column of $V$, which corresponds to the larger eigenvalue, to compute $P_1$ and $P_2$ as:

$$P_1 = \frac{V(1)}{\sum V} \quad \text{and} \quad P_2 = \frac{V(2)}{\sum V} \qquad (11)$$

2.2.2 Image Fusion by PCA

The information flow diagram of the PCA-based image fusion algorithm is shown in Fig. 4. The input images (images to be fused) $I_1(x,y)$ and $I_2(x,y)$ are arranged in two column vectors and their empirical means are subtracted. The eigenvectors and eigenvalues of the resulting data are computed, and the eigenvector corresponding to the larger eigenvalue is obtained. The normalised components $P_1$ and $P_2$ (i.e., $P_1 + P_2 = 1$) are computed from the obtained eigenvector using Eqn (11). The fused image is:

$$I_f(x,y) = P_1 I_1(x,y) + P_2 I_2(x,y) \qquad (12)$$

Figure 4. Information flow diagram of the image fusion scheme employing PCA (registered source images $I_1$, $I_2$ → principal component analysis → weights $P_1$, $P_2$ → fused image $I_f = P_1 I_1 + P_2 I_2$).

2.3 Image Fusion by Simple Average

This is a basic and straightforward technique; fusion could be achieved by simply averaging the corresponding pixels in each input image as:

$$I_f(x,y) = \frac{I_1(x,y) + I_2(x,y)}{2} \qquad (13)$$

3. PERFORMANCE EVALUATION

3.1 With Reference Image

When the reference image is available, the performance of the image fusion algorithms can be evaluated using the metrics shown in Table 1.

3.2 Without Reference Image

When the reference image is not available, the metrics shown in Table 2 could be used to test the performance of the fusion algorithms.

4. RESULTS AND DISCUSSION

Two sets of source/input images are used to evaluate the image fusion algorithms.

4.1 Data Set 1 Analysis

The National Aerospace Laboratories' indigenous aircraft Hansa, shown in Fig. 5(a), is considered as the reference image to evaluate the performance of the fusion algorithms.
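The six steps of the PCA algorithm and the fusion rule of Eqn (12) can be sketched together as below. This is an illustrative sketch (not the authors' MATLAB code); `pca_fuse` is a hypothetical helper name, and same-sign eigenvector components are assumed (true for positively correlated source images) so that the weight normalisation is well defined:

```python
import numpy as np

def pca_fuse(img1, img2):
    """Fuse two registered images by PCA-weighted averaging, Eqns (11)-(12)."""
    # Steps 1-3: arrange each image as a row of a 2 x n matrix and
    # subtract the empirical mean of each image vector.
    z = np.stack([img1.ravel(), img2.ravel()]).astype(float)
    x = z - z.mean(axis=1, keepdims=True)
    # Step 4: 2 x 2 covariance matrix of the mean-removed data.
    c = x @ x.T / (x.shape[1] - 1)
    # Step 5: eigen-decomposition; eigh returns eigenvalues in ascending order.
    eigvals, eigvecs = np.linalg.eigh(c)
    v = eigvecs[:, -1]  # eigenvector of the largest eigenvalue
    # Step 6: normalised weights with P1 + P2 = 1.
    p1, p2 = v[0] / v.sum(), v[1] / v.sum()
    return p1 * img1 + p2 * img2, (p1, p2)
```

For two identical (or nearly identical) source images the weights reduce to about 0.5 each, which is consistent with the paper's observation that the PCA fusion becomes equivalent to simple averaging when the principal components are close to 0.5.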

Table 1. Metrics for performance evaluation when the reference image is available

- Root mean square error8: $RMSE = \sqrt{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - I_f(i,j)\right)^2}$. Computed as the root mean square error of the corresponding pixels in the reference image $I_r$ and the fused image $I_f$. A lower value implies better fusion.
- Mean absolute error8: $MAE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|I_r(i,j) - I_f(i,j)\right|$. Computed as the mean absolute error of the corresponding pixels in the reference and fused images. A lower value implies better fusion.
- Percentage fit error8: $PFE = \frac{\mathrm{norm}(I_r - I_f)}{\mathrm{norm}(I_r)} \times 100$, where norm is the operator that computes the largest singular value. This will be zero when the reference and fused images are exactly alike, and it will increase as the fused image deviates from the reference image.
- Signal-to-noise ratio9: $SNR = 20\log_{10}\left(\frac{\sum_{i=1}^{M}\sum_{j=1}^{N} I_r(i,j)^2}{\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - I_f(i,j)\right)^2}\right)^{1/2}$. Its value will be high when the fused and reference images are similar; a higher value implies better fusion.
- Peak signal-to-noise ratio9: $PSNR = 20\log_{10}\left(\frac{L^2}{\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - I_f(i,j)\right)^2}\right)$, where $L$ is the number of gray levels in the image. A larger value implies better image quality.
- Correlation8: $CORR = \frac{2C_{rf}}{C_r + C_f}$, where $C_r = \sum_{i=1}^{M}\sum_{j=1}^{N} I_r(i,j)^2$, $C_f = \sum_{i=1}^{M}\sum_{j=1}^{N} I_f(i,j)^2$, and $C_{rf} = \sum_{i=1}^{M}\sum_{j=1}^{N} I_r(i,j)\,I_f(i,j)$. This shows the correlation between the reference and fused images. The ideal value is one when the reference and fused images are exactly alike, and it will be less than one as the dissimilarity increases.
- Mutual information11: $MI = \sum_{i=1}^{M}\sum_{j=1}^{N} h_{I_r I_f}(i,j)\log_2\left(\frac{h_{I_r I_f}(i,j)}{h_{I_r}(i,j)\,h_{I_f}(i,j)}\right)$. A larger value implies better quality.
- Universal quality index12: $QI = \frac{4\sigma_{I_r I_f}\,\mu_{I_r}\mu_{I_f}}{\left(\sigma_{I_r}^2 + \sigma_{I_f}^2\right)\left(\mu_{I_r}^2 + \mu_{I_f}^2\right)}$, where $\mu_{I_r} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_r(i,j)$, $\mu_{I_f} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N} I_f(i,j)$, $\sigma_{I_r}^2 = \frac{1}{MN-1}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - \mu_{I_r}\right)^2$, $\sigma_{I_f}^2 = \frac{1}{MN-1}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_f(i,j) - \mu_{I_f}\right)^2$, and $\sigma_{I_r I_f} = \frac{1}{MN-1}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(I_r(i,j) - \mu_{I_r}\right)\left(I_f(i,j) - \mu_{I_f}\right)$. This measures how much of the salient information contained in the reference image has been transferred into the fused image. The range of this metric is -1 to 1, and the best value 1 would be achieved if and only if the reference and fused images are alike. The lowest value of -1 would occur when $I_f = 2\mu_{I_r} - I_r$.
- Measure of structural similarity10: $SSIM = \frac{\left(2\mu_{I_r}\mu_{I_f} + C_1\right)\left(2\sigma_{I_r I_f} + C_2\right)}{\left(\mu_{I_r}^2 + \mu_{I_f}^2 + C_1\right)\left(\sigma_{I_r}^2 + \sigma_{I_f}^2 + C_2\right)}$, where $C_1$ is a constant included to avoid instability when $\mu_{I_r}^2 + \mu_{I_f}^2$ is close to zero and $C_2$ is a constant included to avoid instability when $\sigma_{I_r}^2 + \sigma_{I_f}^2$ is close to zero. Natural image signals are highly structured, and their pixels reveal strong dependencies which carry vital information about the structure of the object. SSIM compares local patterns of pixel intensities that have been normalised for luminance and contrast. A higher value implies better fusion.
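Some of the reference-based metrics of Table 1 follow directly from their formulas. An illustrative sketch in plain NumPy (the function names are ours, not from the paper):

```python
import numpy as np

def rmse(ref, fused):
    """Root mean square error over corresponding pixels."""
    ref, fused = np.asarray(ref, float), np.asarray(fused, float)
    return float(np.sqrt(np.mean((ref - fused) ** 2)))

def mae(ref, fused):
    """Mean absolute error over corresponding pixels."""
    ref, fused = np.asarray(ref, float), np.asarray(fused, float)
    return float(np.mean(np.abs(ref - fused)))

def corr(ref, fused):
    """CORR = 2*C_rf / (C_r + C_f); equals 1.0 for identical images."""
    ref, fused = np.asarray(ref, float), np.asarray(fused, float)
    c_rf = np.sum(ref * fused)
    return float(2.0 * c_rf / (np.sum(ref ** 2) + np.sum(fused ** 2)))
```

A perfectly fused image relative to its reference gives RMSE and MAE of zero and CORR of one, matching the ideal values stated in the table.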

Table 2. Metrics for performance evaluation when the reference image is not available

- Standard deviation10: $\sigma = \sqrt{\sum_{i=0}^{L}\left(i - \bar{i}\right)^2 h_{I_f}(i)}$, with $\bar{i} = \sum_{i=0}^{L} i\,h_{I_f}(i)$, where $h_{I_f}(i)$ is the normalised histogram of the fused image $I_f(x,y)$ and $L$ is the number of frequency bins in the histogram. It is known that the standard deviation is composed of signal and noise parts; it measures the contrast in the fused image. An image with high contrast would have a high standard deviation. This metric would be more efficient in the absence of noise. A higher value implies better fusion.
- Entropy13: $H_e = -\sum_{i=0}^{L} h_{I_f}(i)\log_2 h_{I_f}(i)$. Entropy is used to measure the information content of an image; an image with high information content would have high entropy. Entropy is, however, sensitive to noise and other unwanted rapid fluctuations. A higher value implies better fusion.
- Cross entropy13: $CE(I_1; I_f) = \sum_{i=0}^{L} h_{I_1}(i)\log\left(\frac{h_{I_1}(i)}{h_{I_f}(i)}\right)$, and the overall cross entropy of the source images $I_1$, $I_2$ and the fused image $I_f$ is $CE(I_1, I_2; I_f) = \frac{CE(I_1; I_f) + CE(I_2; I_f)}{2}$. Cross entropy evaluates the similarity in information content between the input images and the fused image; fused and source images containing the same information would have a low cross entropy. A lower value implies better fusion.
- Spatial frequency14: $SF = \sqrt{RF^2 + CF^2}$, with row frequency $RF = \sqrt{\frac{1}{MN}\sum_{x=1}^{M}\sum_{y=2}^{N}\left[I_f(x,y) - I_f(x,y-1)\right]^2}$ and column frequency $CF = \sqrt{\frac{1}{MN}\sum_{y=1}^{N}\sum_{x=2}^{M}\left[I_f(x,y) - I_f(x-1,y)\right]^2}$. This frequency in the spatial domain indicates the overall activity level in the fused image. A higher value implies better fusion.
- Fusion mutual information15: if the joint histogram between $I_1(x,y)$ and $I_f(x,y)$ is defined as $h_{I_1 I_f}(i,j)$ and that between $I_2(x,y)$ and $I_f(x,y)$ as $h_{I_2 I_f}(i,j)$, then the mutual information between the source and fused images is $FMI = MI_{I_1 I_f} + MI_{I_2 I_f}$, where $MI_{I_1 I_f} = \sum_{i=1}^{M}\sum_{j=1}^{N} h_{I_1 I_f}(i,j)\log_2\left(\frac{h_{I_1 I_f}(i,j)}{h_{I_1}(i,j)\,h_{I_f}(i,j)}\right)$ and $MI_{I_2 I_f} = \sum_{i=1}^{M}\sum_{j=1}^{N} h_{I_2 I_f}(i,j)\log_2\left(\frac{h_{I_2 I_f}(i,j)}{h_{I_2}(i,j)\,h_{I_f}(i,j)}\right)$. It measures the degree of dependence of the two images. A larger measure implies better quality.
- Fusion quality index16: $FQI = \sum_{w \in W} c(w)\left(\lambda(w)\,QI(I_1, I_f \mid w) + \left(1 - \lambda(w)\right)QI(I_2, I_f \mid w)\right)$, where $\lambda(w) = \frac{\sigma_{I_1}^2}{\sigma_{I_1}^2 + \sigma_{I_2}^2}$ is computed over a window $w$, $C(w) = \max\left(\sigma_{I_1}^2, \sigma_{I_2}^2\right)$ over the same window, $c(w)$ is a normalised version of $C(w)$, and $QI(\cdot, I_f \mid w)$ is the quality index over the window for a given source image and the fused image. The range of this metric is 0 to 1; the value one indicates that the fused image contains all the information from the source images.
- Fusion similarity metric15: $FSM = \sum_{w \in W}\left[sim(I_1, I_2, I_f \mid w)\,QI(I_1, I_f \mid w) + \left(1 - sim(I_1, I_2, I_f \mid w)\right)QI(I_2, I_f \mid w)\right]$, where

$$sim(I_1, I_2, I_f \mid w) = \begin{cases} 0 & \text{if } \dfrac{\sigma_{I_1 I_f}}{\sigma_{I_1 I_f} + \sigma_{I_2 I_f}} < 0 \\[1ex] \dfrac{\sigma_{I_1 I_f}}{\sigma_{I_1 I_f} + \sigma_{I_2 I_f}} & \text{if } 0 \le \dfrac{\sigma_{I_1 I_f}}{\sigma_{I_1 I_f} + \sigma_{I_2 I_f}} \le 1 \\[1ex] 1 & \text{if } \dfrac{\sigma_{I_1 I_f}}{\sigma_{I_1 I_f} + \sigma_{I_2 I_f}} > 1 \end{cases}$$

It takes into account the similarity between the source and fused image blocks within the same spatial position. The range of this metric is zero to one; one indicates that the fused image contains all the information from the source images.
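The histogram-based and gradient-based metrics of Table 2 can be sketched as follows. An illustrative sketch for 8-bit-style gray images (non-negative integer levels); the function names are ours, and the spatial-frequency terms are normalised by the number of differences rather than by $MN$, a negligible difference for large images:

```python
import numpy as np

def entropy(img, levels=256):
    """H_e = -sum h(i) log2 h(i) over the normalised histogram (zero bins skipped)."""
    hist = np.bincount(np.asarray(img).ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2) from row-wise and column-wise pixel differences."""
    img = np.asarray(img, float)
    rf2 = np.mean(np.diff(img, axis=1) ** 2)  # row frequency term
    cf2 = np.mean(np.diff(img, axis=0) ** 2)  # column frequency term
    return float(np.sqrt(rf2 + cf2))
```

A constant image has zero entropy and zero spatial frequency, while images with more gray-level variety and edge activity score higher, as the table's interpretation suggests.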

The input images of data set 1 are derived from the reference image: the complementary pair has been created by blurring the reference image of size 295 x 400 with a Gaussian mask using a diameter of 12 pixels. The images are complementary in the sense that the blurring occurs in the left half and the right half, respectively. These input images, taken to evaluate the fusion algorithms, are shown in Figs 5(b) and 5(c).

Figure 5. Reference and source images (data set 1): (a) reference image $I_r(x,y)$; (b) source image $I_1(x,y)$; (c) source image $I_2(x,y)$.

The error (difference) image is computed by taking the corresponding pixel difference of the reference image and the fused image, i.e., $I_e(x,y) = I_r(x,y) - I_f(x,y)$. The fused and error images obtained by the simple average image fusion algorithm are shown in Fig. 6, and those obtained by the PCA algorithm are shown in Fig. 7. In Figs 6 to 10, the first column shows the fused images and the second column shows the error images. The principal components are [0.5069, 0.4931], i.e., approximately 0.5 each; in this situation, the PCA is equivalent to simple averaging [Eqn (11)] of the complementary pair. It is observed that the performance of the average and PCA fusion algorithms is similar for these images.

Figure 6. Fused and error images by simple average (data set 1).
Figure 7. Fused and error images by PCA algorithm (data set 1).

The fused and error images by wavelets with different levels of decomposition are shown in Figs 8 to 10. By observing the error images in Figs 8 to 10, it is seen that better fusion could be achieved with a higher level of decomposition; the reason could be the consideration of all frequency bands in the process of fusion. From Figs 8 to 10, it is also observed that there are increased oscillations (ringing tone) present in the fused image at the sharp edges, similar to Gibbs' phenomenon. These oscillations are high when wavelets with a higher level of decomposition are used in the fusion process. This ringing tone could be avoided by using wavelets with the shift-invariance property.

Figure 8. Fused and error images by wavelet 1 (wt1) (data set 1).
Figure 9. Fused and error images by wavelet 3 (wt3) (data set 1).
Figure 10. Fused and error images by wavelet 5 (wt5) (data set 1).

The performance metrics for evaluating the image fusion algorithms are shown in Table 3, and the other performance metrics, where the reference image is not considered, are shown in Table 4. The metrics shown in the tables in bold font are the better ones among the algorithms. From the tables, it is observed that, except in a few metrics, the wavelet transform with a higher level of decomposition performed well.
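The complementary source pair of data set 1 can be mimicked in code. A sketch using a crude separable box blur as a stand-in for the paper's 12-pixel Gaussian mask; `box_blur` and `complementary_pair` are our illustrative names:

```python
import numpy as np

def box_blur(img, k=5):
    """Crude separable box blur (a stand-in for the paper's Gaussian mask)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="same"), 1, np.asarray(img, float))
    out = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def complementary_pair(ref):
    """Blur the left half for I1 and the right half for I2 (cf. data set 1)."""
    ref = np.asarray(ref, float)
    h, w = ref.shape
    blurred = box_blur(ref)
    i1 = ref.copy()
    i1[:, : w // 2] = blurred[:, : w // 2]   # I1: left half blurred
    i2 = ref.copy()
    i2[:, w // 2 :] = blurred[:, w // 2 :]   # I2: right half blurred
    return i1, i2
```

Because the blurring in each source image is confined to one half, the two images are complementary: each half of the reference survives sharply in exactly one of the sources, which is what makes the pair useful for judging how well a fusion rule recovers the full sharp image.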

Table 3. Performance evaluation metrics to evaluate image fusion algorithms with reference image (data set 1): RMSE, PFE, MAE, CORR, SNR, PSNR, MI, QI, and SSIM for the Ave, PCA, and wt1-wt5 algorithms.

Table 4. Performance evaluation metrics to evaluate image fusion algorithms without reference image (data set 1): $H_e$, SD, CE, SF, FMI, FQI, and FSM for the Ave, PCA, and wt1-wt5 algorithms.

It is observed that the simple average image fusion algorithm shows degraded performance. It is also observed that wavelets with a higher level of decomposition show superior results, except in mutual information (MI) and universal quality index (QI), where the PCA shows better performance.

4.2 Data Set 2 Analysis

In this data set, a forward-looking infrared (FLIR) image and a low-light television (TV) image of size 512 x 512 are considered for evaluation of the fusion algorithms. These images are shown in Fig. 11. A reference image is not available for this data set. The fused images obtained by simple average and PCA are shown in Figs 12 and 13, respectively, and Figs 14 to 18 show the fused images by wavelets with different levels of decomposition. The performance metrics for this data set, where the reference image is not available, are shown in Table 5. It is observed that in some metrics, wavelets with a higher level of decomposition show better results, while in other metrics the PCA shows better performance. This could be due to the combination of source images taken from different modalities/sensors.

5. OBSERVATIONS

The performance of the image fusion algorithms could be estimated precisely when the ground truth (reference) image is available. The reference image is available for data set 1, and the corresponding performance metrics are shown in Table 3. For this data set, separate fusion algorithms were also run for each of the left and right halves of the picture, and it was found that the results were very close to those of Tables 3 and 4; hence, those results are not included in this paper.

Figure 11. Source images used for image fusion (data set 2): (a) FLIR image $I_1(x,y)$; (b) TV image $I_2(x,y)$.
Figure 12. Fused image by simple average (data set 2).
Figure 13. Fused image by PCA (data set 2).
Figure 14. Fused image by wavelet transform (wt1) (data set 2).
Figure 15. Fused image by wavelet transform (wt2) (data set 2).

The metrics SD, CE, and spatial frequency (SF) in Table 4, where the reference image is not considered in the estimation, also showed observations similar to those of Table 3. Hence, SD, CE, and SF would be appropriate metrics to assess the quality of the fused image when no reference image is available. The metrics SD, CE, and SF, however, show different results for data set 2, where the images to be fused are obtained from different modalities, as shown in Table 5. The other metrics, entropy ($H_e$), fusion mutual information (FMI), fusion quality index (FQI), and fusion similarity metric (FSM), failed to give a similar observation. The SD showed that the PCA is better, but by observing the fused image, one can see that it is not.

Figure 16. Fused image by wavelet transform (wt3) (data set 2).
Figure 17. Fused image by wavelet transform (wt4) (data set 2).
Figure 18. Fused image by wavelet transform (wt5) (data set 2).

Table 5. Performance evaluation metrics to evaluate image fusion algorithms without reference image (data set 2): $H_e$, SD, CE, SF, FMI, FQI, and FSM for the Ave, PCA, and wt1-wt5 algorithms.

It may be due to the consideration of contrast in the SD calculations; some further investigation is needed to resolve this issue. From Tables 4 and 5, it is observed that SD would be a better fusion indicator where the images to be fused are obtained from the same or similar modalities, whereas SF would be the best fusion performance indicator irrespective of the origin of the source images. The metric SF also showed that wavelets with a higher decomposition level give better performance.

6. CONCLUSIONS

Pixel-level image fusion using the wavelet transform and principal component analysis has been implemented in PC MATLAB. Different image fusion performance metrics, with and without a reference image, have been evaluated. The simple averaging fusion algorithm shows degraded performance. Image fusion using wavelets with a higher level of decomposition shows better performance in some metrics, while in other metrics the PCA shows better performance.

ACKNOWLEDGEMENTS

The authors would like to thank Dr Girija G. and Dr Jatinder Singh, Scientists, Flight Mechanics and Control Division, National Aerospace Laboratories, Bangalore, for technical discussions.

REFERENCES

1. Pajares, Gonzalo & de la Cruz, Jesus Manuel. A wavelet-based image fusion tutorial. Pattern Recognition, 2004, 37, 1855-872.
2. Varsheny, P.K. Multi-sensor data fusion. Elec. Comm. Engg., 1997, 9(12), 245-53.

3. Mallet, S.G. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intel., 1989, 11(7), 674-93.
4. Wang, H.; Peng, J. & Wu, W. Fusion algorithm for multisensor image based on discrete multiwavelet transform. IEE Proc. Visual Image Signal Process., 2002, 149(5).
5. Daubechies, I. Ten lectures on wavelets. Regular Conference Series in Applied Maths, Vol. 91. SIAM, Philadelphia, 1992.
6. Jalili-Moghaddam, Mitra. Real-time multi-focus image fusion using discrete wavelet transform and Laplacian pyramid transform. Chalmers University of Technology, Goteborg, Sweden, 2005. Masters thesis.
7. http://en.wikipedia.org/wiki/Principal_components_analysis
8. Naidu, V.P.S.; Girija, G. & Raol, J.R. Evaluation of data association and fusion algorithms for tracking in the presence of measurement loss. In AIAA Conference on Navigation, Guidance and Control, Austin, USA, 11-14 August 2003.
9. Arce, Gonzalo R. Nonlinear signal processing: A statistical approach. Wiley-Interscience Inc., USA, 2005.
10. Blum, Rick S. & Liu, Zheng. Multi-sensor image fusion and its applications. CRC Press, Taylor & Francis Group, Boca Raton, USA, 2006.
11. Cover, T.M. & Thomas, J.A. Elements of information theory. Wiley, New York, 1991.
12. Wang, Z. & Bovik, A.C. A universal image quality index. IEEE Signal Proc. Letters, March 2002, 9(3), 81-84.
13. Leung, Lau Wai; King, Bruce & Vohora, Vijay. Comparison of image data fusion techniques using entropy and INI. In 22nd Asian Conference on Remote Sensing, Singapore, 5-9 November 2001.
14. Eskicioglu, A.M. & Fisher, P.S. Image quality measures and their performance. IEEE Trans. Commu., 1995, 43(12), 2959-965.
15. Cvejic, Nedeljko; Loza, Artur; Bull, David & Canagarajah, Nishan. A similarity metric for assessment of image fusion algorithms. Int. J. Signal Process., 2005, 2(3), 178-82.
16. Piella, Gemma & Heijmans, Henk. A new quality metric for image fusion. In Proceedings of the IEEE International Conference on Image Processing, Barcelona, Spain, 2003, 173-76.

Contributors

Mr V.P.S. Naidu obtained his ME (Medical Electronics) from Anna University, Chennai, in 1997. He has been working as a Scientist at the National Aerospace Laboratories (NAL), Bangalore, since December 2001 in the area of multi-sensor data fusion and target tracking. His areas of interest are image registration, tracking, and data fusion.

Dr J.R. Raol obtained his BE and ME, both from the MS University, Baroda, in 1971 and 1974, and his PhD from McMaster University, Canada, in 1986. He worked at NAL, Bangalore, from 1975 to 1981, where he was actively involved in the multidisciplinary control group's activities on human pilot modelling in fix- and motion-based research simulators. He re-joined NAL in 1986 and served as a senior scientist and as Head of the Flight Mechanics and Control Division for several years. Currently, he is Professor Emeritus in the Dept of Instrumentation Technology of the MS Ramaiah Institute of Technology, Bangalore. He has carried out sponsored R&D in several areas and has jointly authored a book, "Modelling and Parameter Estimation of Dynamic Systems", published by IEE, UK, in 2004. He has 100 research publications to his credit. His current activities include modelling, parameter estimation, multi-sensor data fusion, fuzzy systems, genetic algorithms, and neural networks.