Fusion Algorithm for Remote Sensing Images Based on Nonsubsampled Contourlet Transform
YANG Xiao-Hui 1    JIAO Li-Cheng 1
Abstract  Considering the human visual system and the characteristics of remote sensing images, a novel image fusion strategy for panchromatic high-resolution images and multispectral images is presented in the nonsubsampled contourlet transform (NSCT) domain. By virtue of its good multiresolution, shift-invariance, and high directionality, the NSCT gives an asymptotically optimal representation of the edges and contours in an image. An adaptive intensity component addition method based on the LHS transform is introduced into the NSCT domain to preserve spatial resolution and color content simultaneously. Experiments show that the proposed method improves the spatial resolution while keeping the spectral information. There are improvements in both visual effect and quantitative analysis compared with the traditional PCA method, the IHS transform technique, the wavelet transform weighted fusion method, the corresponding wavelet transform-based fusion method, and the contourlet transform-based fusion method, respectively.

Key words  Image fusion, nonsubsampled contourlet transform, LHS transform, adaptive intensity component addition

Received June 16, 2007; in revised form September 6, 2007. Supported by the National Natural Science Foundation of P. R. China (60472084) and the National Key Program for Developing Basic Research (973 Program) of China (2001CB309403).
1. Institute of Intelligence Information Processing, Xidian University, Xi'an, Shaanxi 710071, P. R. China
DOI: 10.1360/aas-007-0583

Image fusion is a process of combining two or more source images from different modalities or instruments into a single image with more information. Successful fusion is of great importance in many applications, such as remote sensing, computer vision, and medical imaging. In pixel-level fusion, some generic requirements can be imposed on the fusion result [1]: 1) the fused image should preserve all relevant information contained in the source images as closely as possible; 2) the fusion process should not introduce any artifacts or inconsistencies that can distract or mislead the human observer or any subsequent image processing steps; 3) in the fused image, irrelevant features and noise should be suppressed to the maximum extent. Panchromatic (PAN) images of high spatial resolution provide detailed geometric information, such as the shapes, features, and structures of objects on the earth's surface, while multispectral (MS) images, usually of lower spatial resolution, provide the spectral information necessary for environmental applications; the different objects within images of high spectral resolution are easily identified. Data fusion methods aim to obtain images with high spatial and spectral resolution simultaneously. The fusion of PAN and MS remote sensing images differs from other fusion applications, such as image fusion in military missions or computer-aided quality control; its specificity is to preserve the spectral information for the subsequent classification of ground cover.

The classical fusion methods are principal component analysis (PCA) and the intensity-hue-saturation (IHS) transform. In recent years, with the development of wavelet transform (WT) theory and multiresolution analysis, two-dimensional separable wavelets have been widely used in image fusion and have achieved good results [2-4]. Nevertheless, the fusion algorithms mentioned above can hardly satisfy all of the requirements by themselves. They usually cause characteristic degradation, spectral loss, or color distortion. For example, the IHS transform can enhance the texture information and the spatial features of the fused image, but it suffers from considerable spectral distortion. The PCA method loses some original spectral features in the process of principal component substitution. The wavelet transform can preserve spectral information efficiently, but it cannot express the spatial characteristics well. Furthermore, the isotropic wavelets lack shift-invariance and multi-directionality and fail to provide an optimal expression of the highly anisotropic edges and contours in images.

The geometric multiresolution analysis (GMA) of images inherits the advantages of wavelet multiresolution analysis and can take full advantage of the geometric regularity of the intrinsic structures of an image. As a GMA tool, the contourlet transform (CT) has the characteristics of localization, multi-directionality, and anisotropy [5]. The CT gives an asymptotically optimal representation of contours and has been applied effectively in image fusion. However, the CT lacks shift-invariance, which results in artifacts along edges to some extent. The nonsubsampled contourlet transform (NSCT) uses nonsubsampled filter banks to achieve shift-invariance [6]. Therefore, the NSCT is more suitable for image fusion. To meet different application demands both for the human visual system and for statistical performance, we propose a highly effective NSCT-based fusion algorithm. Experimental results clearly demonstrate the superiority of the proposed method when compared with other fusion algorithms.

This paper discusses the fusion of multispectral and panchromatic remote sensing images. An NSCT-based panchromatic and multispectral image fusion method is presented after analyzing the basic principles of the remote sensing imaging system and the purpose of fusion. The rest of this paper is organized as follows. Section 1 describes the NSCT of images. Section 2 introduces the new NSCT fusion algorithm based on the LHS transform and an adaptive intensity component addition strategy, discusses the fusion rules for the detail and approximate NSCT coefficients, and reports fusion experiments on remote sensing image sets using the proposed algorithm and several traditional ones, together with subjective and objective evaluations. Conclusions are drawn in the final section.

1 Nonsubsampled contourlet transform of images

Image decomposition is an important step in image fusion and affects the quality of information extraction and even the quality of the whole fusion. In recent years, along with the development and application of wavelet theory, the favorable time-frequency localization of wavelets for expressing local signals has made them widely applicable in multi-sensor image fusion.


In the process of fusion, the wavelet transform shows its orthogonality and its efficiency for compression and directional information. However, wavelet bases are isotropic and have limited directions, and they fail to represent the highly anisotropic edges and contours in images well; the GMA emerged as the times required. The GMA comes from wavelet multiresolution analysis but goes beyond it: it can take full advantage of the geometric regularity of the intrinsic structures of an image and obtain an asymptotically optimal representation.

1.1 Contourlet transform

Do and Vetterli proposed a true two-dimensional transform called the contourlet transform, which is combined with nonseparable filter banks and provides an efficient directional multiresolution image representation. By virtue of Laplacian pyramids (LP) and directional filter banks (DFB), the CT provides the multiresolution decomposition and the directional decomposition, respectively. The CT can capture the intrinsic geometric structure information of images and achieves a better representation than the discrete wavelet transform (DWT), especially for edges and contours. The CT expresses an image by first applying a multiscale transform, followed by a local directional transform that gathers the nearby basis functions at the same scale into linear structures. For example, the Laplacian pyramid is first used to capture the point discontinuities, and a directional filter bank then links these point discontinuities into linear structures. In particular, contourlets have elongated supports at various scales, directions, and aspect ratios. The CT has a redundancy of no more than 33%, which comes from the oversampling of the LP. Contourlets satisfy the anisotropy principle and can approximate the geometric structure information of an image using contour segments. However, because of the downsampling and upsampling, the CT lacks shift-invariance and produces artifacts, whereas shift-invariance is desirable in image analysis applications such as edge detection, contour characterization, image fusion, and image enhancement [7]. In particular, in the realization of the CT, the analysis and synthesis filter banks of the LP decomposition are nonseparable biorthogonal filter banks with bandwidth larger than π/2. According to multirate sampling theory, downsampling the filtered image may then cause aliasing between the lowpass and highpass channels. This frequency aliasing affects the directional subbands, which are obtained by filtering the highpass subbands with the directional filter banks, and causes the information of one direction to appear in different directional subbands at the same time. This inevitably weakens the directional selectivity of contourlets.

1.2 Nonsubsampled contourlet transform

In order to remove the frequency aliasing of contourlets and enhance their directional selectivity and shift-invariance, Cunha, Zhou, and Do proposed the nonsubsampled contourlet transform, based on nonsubsampled pyramid decomposition and nonsubsampled filter banks. The NSCT is the shift-invariant version of the CT and is built upon iterated nonseparable two-channel nonsubsampled filter banks (NSFB) to obtain shift-invariance. The NSCT provides not only multiresolution analysis but also geometric and directional representation. In [8], Zhou et al. applied the NSCT to image enhancement, using the NSCT to capture the geometric information of images and to distinguish noise from weak edges.

Different from the CT, the multiresolution decomposition step of the NSCT is realized by shift-invariant filter banks satisfying the Bezout identity (perfect reconstruction), rather than by the LP filter banks. Because there is no decimation in the pyramid decomposition, the lowpass subband has no frequency aliasing even when the bandwidth of the lowpass filter is larger than π/2. Hence, the NSCT has better frequency characteristics than the CT. The two-level NSCT decomposition is shown in Fig. 1.

Fig. 1  Two-level nonsubsampled contourlet transform decomposition: (a) NSFB structure that implements the NSCT; (b) frequency partitioning obtained with the proposed structure

The core of the NSCT is the nonseparable two-channel nonsubsampled filter bank. Compared with the CT, it is easier and more flexible to design the required filter banks, which leads to an NSCT with better frequency selectivity and regularity. Based on a mapping approach and a fast ladder-structure implementation, the NSCT frame elements are regular and symmetric, and the frame is close to a tight frame. The multiresolution decomposition of the NSCT is realized by the nonsubsampled pyramid (NSP), which achieves a subband decomposition structure similar to that of the LP. At the j-th decomposition level, the ideal passband support of the lowpass filter is $[-\pi/2^j, \pi/2^j]^2$, and the corresponding passband support of the highpass filter is the complement of the lowpass support, that is, $[-\pi/2^{j-1}, \pi/2^{j-1}]^2 \setminus [-\pi/2^j, \pi/2^j]^2$. The filters of subsequent scales can be obtained by upsampling the filters of the first stage, which gives the multiscale property without the need to design additional filters. In terms of computational complexity, one bandpass image is produced at each stage, resulting in a redundancy of J + 1. By contrast, the corresponding nonsubsampled wavelet transform (NSWT) produces three directional images at each stage, resulting in a redundancy of 3J + 1.
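To make the multiscale construction concrete, the following Python sketch shows one way an à-trous-style nonsubsampled pyramid stage can be realized: the filters of later stages are obtained by upsampling the first-stage filter, no decimation is performed, and J levels produce J bandpass images plus one lowpass image (redundancy J + 1). The kernel and the helper names are assumptions for illustration, not the filters used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_kernel(h, j):
    """Insert 2^j - 1 zeros between the taps of a 2-D kernel h (upsampled filter)."""
    step = 2 ** j
    up = np.zeros(((h.shape[0] - 1) * step + 1, (h.shape[1] - 1) * step + 1))
    up[::step, ::step] = h
    return up

def nsp_decompose(x, h0, levels):
    """Nonsubsampled pyramid sketch: no downsampling, filters upsampled per level.

    Returns `levels` bandpass images plus one lowpass image (redundancy J + 1).
    """
    bandpass, low = [], x.astype(float)
    for j in range(levels):
        hj = atrous_kernel(h0, j)            # filter of stage j from the first-stage filter
        smooth = convolve(low, hj, mode="mirror")
        bandpass.append(low - smooth)        # highpass = identity minus lowpass
        low = smooth
    return bandpass, low

# Example: a separable 5-tap lowpass prototype (an assumption, not the paper's filter)
h1d = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
h0 = np.outer(h1d, h1d)
bands, approx = nsp_decompose(np.random.rand(64, 64), h0, levels=3)
```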

In this paper, the NSFB is built from a lowpass analysis filter $H_0(z)$ and $H_1(z) = 1 - H_0(z)$. The corresponding synthesis filters are $G_0(z) = G_1(z) = 1$. The perfect reconstruction (PR) condition is given as

$H_0(z)G_0(z) + H_1(z)G_1(z) = 1$  (1)
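As a quick illustration of this PR condition (a toy numerical check in Python with an assumed lowpass filter, not the paper's filters), the sketch below verifies that with $H_1(z) = 1 - H_0(z)$ and $G_0 = G_1 = 1$ the analysis-synthesis chain returns the input exactly.

```python
import numpy as np

# Toy 1-D lowpass analysis filter (assumed for illustration only)
h0 = np.array([0.25, 0.5, 0.25])

# H1(z) = 1 - H0(z): a delta minus h0, with the delta aligned to h0's center tap
h1 = -h0.copy()
h1[len(h0) // 2] += 1.0

x = np.random.rand(128)

# Analysis: two full-rate channels (no downsampling)
c0 = np.convolve(x, h0)
c1 = np.convolve(x, h1)

# Synthesis with G0 = G1 = 1 (identity filters): simply sum the channels
y = c0 + c1

# Since H0 + H1 equals a delta, the sum reproduces x (up to the filter delay)
delay = len(h0) // 2
assert np.allclose(y[delay:delay + len(x)], x)
```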

In this way, the system achieves perfect reconstruction. Since the NSCT is an oversampled, redundant representation, the PR condition is much easier to satisfy than that of critically sampled filter banks, and thus allows better filters to be designed. The directional filter bank of Bamberger and Smith [9] is constructed by combining critically sampled two-channel fan filter banks and resampling operations. The result is a tree-structured filter bank that splits the two-dimensional frequency plane into directional wedges. In the design of the DFB, a shift-invariant directional expansion is not obtained because of the downsamplers and upsamplers. A shift-invariant directional expansion is obtained with a nonsubsampled DFB (NSDFB), which is constructed by eliminating the downsamplers and upsamplers in the DFB tree structure and upsampling the filters accordingly. The upsampling is based on the quincunx lattice, in which an image consisting of rectangular lattices is split into the round-dot and square-dot subsets. The adopted sampling matrix, which has a rotation ability, is

$A = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$  (2)

Fig. 2  Quincunx upsampling: (a) the coordinate system before upsampling; (b) the coordinate system after upsampling

The coordinate systems before and after quincunx interpolation are shown in Fig. 2, where a, b, c, and d are four sampled points. The relation between the samples before and after interpolation is given by (3):

$y(n_1, n_2) = \begin{cases} x\left(\dfrac{n_1 + n_2}{2}, \dfrac{n_1 - n_2}{2}\right), & \text{when } n_1 + n_2 \text{ is even} \\ 0, & \text{otherwise} \end{cases}$  (3)

The corresponding z-transform is given by (4):

$Y(z_1, z_2) = X(z_1 z_2, z_1 z_2^{-1})$  (4)

The detailed proof can be found in [10].

If a two-level four-channel filter bank is considered, then in the second level the upsampled fan filters $U_j(z^Q)$, $j = 0, 1$, when combined with the filters $U_i(z)$, $i = 0, 1$, of the first level as $U_k^{eq}(z) = U_i(z)U_j(z^Q)$, give the four-directional frequency decomposition. The synthesis filter bank is obtained similarly. Just as for the critically sampled directional filter bank, all filter banks in the nonsubsampled directional filter bank (NSDFB) tree structure are obtained from a single NSFB with fan filters. Moreover, each filter bank in the NSDFB tree has the same computational complexity as that of the building-block NSFB.
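Equation (3) amounts to placing the input samples on a quincunx (checkerboard) lattice. The short Python sketch below illustrates this mapping; the output support and the index shift used to keep indices non-negative are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def quincunx_upsample(x):
    """Quincunx upsampling as in (3): y(n1, n2) = x((n1+n2)/2, (n1-n2)/2)
    when n1 + n2 is even, and 0 otherwise."""
    m, n = x.shape
    # Output support chosen large enough to hold all mapped samples (illustrative choice)
    y = np.zeros((m + n - 1, m + n - 1))
    for k1 in range(m):
        for k2 in range(n):
            # Invert the index map: n1 = k1 + k2, n2 = k1 - k2 (shifted to stay non-negative)
            y[k1 + k2, k1 - k2 + (n - 1)] = x[k1, k2]
    return y

x = np.arange(16, dtype=float).reshape(4, 4)
y = quincunx_upsample(x)          # samples of x placed on a quincunx (checkerboard) lattice
assert np.count_nonzero(y) <= x.size
```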

In order to simplify the computation, the NSCT adopts a mapping approach that factors the filters into ladder structures. The Euclidean algorithm enables us to factor the filters into the following ladder structure, shown in (5) [11, 12]:

$\begin{bmatrix} H_0^{(1D)}(x) \\ H_1^{(1D)}(x) \end{bmatrix} = \prod_{i=0}^{N} \begin{bmatrix} 1 & 0 \\ P_i^{(1D)}(x) & 1 \end{bmatrix} \begin{bmatrix} 1 & Q_i^{(1D)}(x) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix}$  (5)

where $H_0^{(1D)}(x)$ and $H_1^{(1D)}(x)$ are the adopted 1-D coprime prototype lowpass and highpass filters, and the corresponding synthesis filters $G_0^{(1D)}(x)$ and $G_1^{(1D)}(x)$ satisfy the Bezout identity

$H_0^{(1D)}(x)G_0^{(1D)}(x) + H_1^{(1D)}(x)G_1^{(1D)}(x) = 1$  (6)

Taking full consideration of the PR and anti-aliasing conditions in designing the filter banks, the needed prototype filters can be chosen as in (7):

$H_1^{(1D)}(x) = G_0^{(1D)}(x), \qquad G_1^{(1D)}(x) = H_0^{(1D)}(x)$  (7)

Concretely, we adopt the following prototype filters:

$H_1^{(1D)}(x) = \frac{1}{2}(x + 1)\left[\sqrt{2} + (1 - \sqrt{2})x\right]$

$G_1^{(1D)}(x) = \frac{1}{2}(x + 1)\left[\sqrt{2} + (4 - 3\sqrt{2})x + (2\sqrt{2} - 3)x^2\right]$  (8)


2 Remote sensing image fusion in the nonsubsampled contourlet domain

In this section, an adaptive panchromatic and multispectral remote sensing image fusion technique based on the NSCT and the LHS transform is presented after analyzing the basic principles of the remote sensing imaging system and the purpose of fusion. By adaptive we mean an adaptive intensity component addition: instead of adding the high-resolution information directly onto the RGB bands, the detail information of the high-resolution PAN image is added to the high-frequency subbands of the intensity component of the low-resolution image, so as to preserve some spectral information.

An image is usually represented in a computer by the RGB (red, green, blue) color system. However, the RGB color system does not agree with the perception and cognition habits of the human visual system. Humans recognize color through three features, namely intensity (I), hue (H), and saturation (S); the corresponding color system is called the IHS system. The H component is determined by the dominant wavelength of the spectrum and characterizes the nature of the color; the S component indicates the proportion of the dominant wavelength in the intensity; and the I component represents the brightness of the spectrum. In the IHS space, the spectral information is mostly reflected in the hue and the saturation. From the point of view of the visual system, we can conclude that a change in intensity has little effect on the spectral information and is easy to deal with. For the fusion of high-resolution and multispectral remote sensing images, the goal is to add the detail information of the high spatial resolution image on the premise of preserving the spectral information. Therefore, the fusion is more adequately treated in the IHS space. The IHS color space transform converts the image from the RGB space components into the spatial information (the I component) and the spectral information (the H and S components) of the IHS space. However, the general IHS color system has the disadvantage of neglecting two components when computing the brightness value, so that the brightness of a pure color is the same as that of an achromatic color. Therefore, we adopt the LHS color system to solve this problem. The LHS color system assigns a brightness of 255 to an achromatic pixel and 85 to a pure-color pixel [13].

2.1 High-resolution and multispectral remote sensing image fusion based on the LHS transform and the nonsubsampled contourlet transform

The detailed process of the new fusion algorithm is as follows (a sketch in code is given after the listing):

1) Perform polynomial interpolation to keep the edges of the linear landmarks and to bring the PAN and SPOT images to the same size.

2) Transform the RGB representation of the multispectral image by the LHS transformation into the intensity, hue, and saturation (L, H, S) components:

$L = \dfrac{r + g + b}{3}$  (9)

$S = 1 - \dfrac{3\min(r, g, b)}{r + g + b}$  (10)

$H = \arccos\left(\dfrac{0.5\left[(r - g) + (r - b)\right]}{\sqrt{(r - g)^2 + (r - b)(g - b)}}\right)$  (11)

The corresponding matrix expression is

$\begin{bmatrix} I \\ v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \\ -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{6}} & \frac{2}{\sqrt{6}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$  (12)

$H = \tan^{-1}\left(\dfrac{v_1}{v_2}\right)$  (13)

$S = \sqrt{v_1^2 + v_2^2}$  (14)

3) Apply histogram matching between the original panchromatic image and the multispectral intensity component to obtain the new panchromatic high-resolution (PAN HR) image and the multispectral intensity (MSI) component image.

4) Decompose the matched MSI image and the PAN HR image to obtain their NSCT decomposition coefficients.

5) Fuse the approximate and detail coefficients of the MSI and PAN HR images according to (15) and (16), respectively:

$Fuse_{low} = MSI_{low}$  (15)

$Fuse_{high} = MSI_{details} + PANHR_{details}$  (16)

6) Apply the inverse NSCT to the fused detail and approximate coefficients to reconstruct the new intensity component $I_{new}$.

7) Perform the inverse LHS transform on the new intensity component $I_{new}$ together with the hue and saturation components to obtain the fused RGB image:

$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 & -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\ 1 & -\frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{2}} \\ 1 & \frac{2}{\sqrt{6}} & 0 \end{bmatrix} \begin{bmatrix} I_{new} \\ v_1 \\ v_2 \end{bmatrix}$  (17)
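The following Python sketch outlines steps 2)-7) under stated assumptions: `nsct_decompose` and `nsct_reconstruct` are placeholders for an NSCT implementation (not specified here), images are float arrays in [0, 255], and `match_histograms` from scikit-image stands in for the histogram matching of step 3). It is meant only to make the data flow of the algorithm concrete, not to reproduce the authors' code.

```python
import numpy as np
from skimage.exposure import match_histograms

# Forward and inverse matrices of (12) and (17)
M_FWD = np.array([[1/3, 1/3, 1/3],
                  [-1/np.sqrt(6), -1/np.sqrt(6), 2/np.sqrt(6)],
                  [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])
M_INV = np.array([[1.0, -1/np.sqrt(6),  1/np.sqrt(2)],
                  [1.0, -1/np.sqrt(6), -1/np.sqrt(2)],
                  [1.0,  2/np.sqrt(6),  0.0]])

def fuse_pan_ms(pan, ms_rgb, nsct_decompose, nsct_reconstruct, levels=3):
    """LHS + NSCT fusion sketch following (12)-(17).

    `nsct_decompose(img, levels) -> (lowpass, details)` and
    `nsct_reconstruct(lowpass, details) -> img` are assumed interfaces.
    """
    # Step 2: RGB -> (I, v1, v2) via (12)
    ivv = np.einsum('ij,hwj->hwi', M_FWD, ms_rgb)
    intensity, v1, v2 = ivv[..., 0], ivv[..., 1], ivv[..., 2]

    # Step 3: histogram-match the PAN image to the intensity component
    pan_hr = match_histograms(pan, intensity)

    # Step 4: NSCT decomposition of both images
    msi_low, msi_det = nsct_decompose(intensity, levels)
    pan_low, pan_det = nsct_decompose(pan_hr, levels)

    # Step 5: fusion rules (15) and (16)
    fused_low = msi_low
    fused_det = [m + p for m, p in zip(msi_det, pan_det)]

    # Step 6: inverse NSCT gives the new intensity component
    i_new = nsct_reconstruct(fused_low, fused_det)

    # Step 7: inverse LHS transform (17) back to RGB
    ivv_new = np.stack([i_new, v1, v2], axis=-1)
    return np.einsum('ij,hwj->hwi', M_INV, ivv_new)
```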

2.2 Experiments and results

The test source images are a SPOT PAN image and a LANDSAT TM band 5, 4, 3 image of the same area. The TM image was acquired on 1993-02-17, and the SPOT PAN image was obtained on 1995-05-28. The two source images are geometrically registered and have a size of 256 × 256. The compared fusion methods are the traditional PCA and IHS methods, wavelet transform-based weighted fusion (WT-W), wavelet transform and LHS transform-based fusion (WT-LHS), contourlet transform and LHS transform-based fusion (CT-LHS), and nonsubsampled contourlet transform and LHS transform-based fusion (NSCT-LHS). Without loss of generality, the decomposition level of every adopted transform is three. The WT adopts the 9-7 biorthogonal wavelet. The LP filter banks of the CT and the NSCT both adopt the 9-7 filter banks obtained from the 9-7 1-D prototypes, and the DFBs adopt the pkva ladder filters proposed by Phoong et al. [14], with 0, 3, and 4 levels of directional decomposition corresponding to the three LP decomposition levels, respectively. The fusion results are shown in Fig. 3.

From the point of view of the human visual system, we can see that our fusion technique based on the NSCT and the LHS transform improves the spatial resolution and at the same time preserves the spectral information well. Our adaptive intensity (brightness) addition fusion technique based on the LHS transform is superior to the classical PCA fusion method, the IHS transform fusion method, and the wavelet-based weighted fusion method. The fused image carries more of the information of the source images, which is demonstrated in its spatial resolution, definition, micro-detail difference, and contrast.

The adaptive intensity component addition method preserves the whole spatial information and has the advantage of utilizing the detail information of both source images. Moreover, because the fusion method uses the high-resolution information only to adjust the intensity component, it better preserves the multispectral information and texture information while introducing the high-resolution characteristics into the multispectral image. Furthermore, the fusion algorithm based on NSCT-LHS retains more outstanding detail information than those based on WT-LHS and CT-LHS.

Fig. 3  Remote sensing image fusion based on the NSCT and LHS transform: (a) SPOT image; (b) TM image; (c) PCA fusion image; (d) IHS fusion image; (e) wavelet weighted fusion image; (f) wavelet + LHS fusion image; (g) contourlet + LHS fusion image; (h) nonsubsampled contourlet + LHS fusion image

Subjective visual perception gives a direct comparison, but it is easily influenced by visual and psychological factors. Therefore, the evaluation of image fusion should be based on subjective vision combined with objective quantitative evaluation criteria. For remote sensing images, a desired standard (reference) image cannot be acquired, so indexes such as the root mean square error and the peak signal-to-noise ratio are unusable. In this paper, we adopt the following statistical indexes to evaluate the fusion results comprehensively: mean value, standard deviation, information entropy, average gradient, correlation coefficient, spectrum distortion, bias index, weighted fusion quality index, and edge-dependent fusion quality index. A code sketch of the first seven indexes is given after their definitions.

(1) Mean value (MV): The MV is the gray mean value of the pixels in an image and corresponds to the average brightness perceived by the human eye. Suppose the size of the image is $M \times N$ and $I(i, j)$ is a pixel of the image; then the MV is defined as

$MV = \dfrac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} I(i, j)$  (18)

(2) Standard deviation (STD): The variance of an image reflects the dispersion degree of the gray values around the gray mean value, and the STD is the square root of the variance. The larger the STD is, the more dispersed the gray levels are. The STD is defined as

$STD = \sqrt{\dfrac{1}{MN} \sum_{i=0}^{M-1} \sum_{j=0}^{N-1} \left(I(i, j) - MV\right)^2}$  (19)

(3) Information entropy (IE): The IE of an image is an important index to measure the richness of the image information. Based on the principle of Shannon information theory, the IE of an image is defined as

$E = -\sum_{i=0}^{255} P_i \log_2 P_i$  (20)

where $P_i$ is the ratio of the number of pixels whose gray value equals $i$ to the total number of pixels. The IE reflects the amount of information carried by the image: the larger the IE is, the more information the image carries.

(4) Average gradient (AG): The AG reflects the ability to express small detail contrast and texture variation, as well as the definition of the image. It is calculated as

$\bar{g} = \dfrac{1}{(M-1)(N-1)} \sum_{i=1}^{M-1} \sum_{j=1}^{N-1} \sqrt{\left(\dfrac{\partial f}{\partial x}\right)^2 + \left(\dfrac{\partial f}{\partial y}\right)^2}$  (21)

Generally, the larger $\bar{g}$ is, the richer the hierarchy is and the more definite the fused image is.

(5) Correlation coefficient (CC): The CC denotes the degree of correlation between two images; the closer the CC is to 1, the higher the correlation degree is. It is defined as

$corr\left(\dfrac{A}{B}\right) = \dfrac{\sum_{j=1}^{N} \sum_{i=1}^{M} \left(x_{i,j} - \mu(A)\right)\left(x'_{i,j} - \mu(B)\right)}{\sqrt{\sum_{j=1}^{N} \sum_{i=1}^{M} \left(x_{i,j} - \mu(A)\right)^2 \sum_{j=1}^{N} \sum_{i=1}^{M} \left(x'_{i,j} - \mu(B)\right)^2}}$  (22)

where A and B are two images, $x_{i,j}$ and $x'_{i,j}$ denote the pixels of A and B, respectively, and $\mu(A)$ and $\mu(B)$ are the corresponding mean values of the two images.

(6) Spectrum distortion (SD): The SD measures the spectral distortion degree of a multispectral image and is defined as

$W = \dfrac{1}{MN} \sum_{j=1}^{N} \sum_{i=1}^{M} \left| I_f(i, j) - I(i, j) \right|$  (23)

where $I(i, j)$ and $I_f(i, j)$ are the pixels of the source and fused images, respectively. The larger W is, the higher the distortion is.

(7) Bias index: This index measures the degree of deviation between the fused image and the low-resolution multispectral image:

$Bias = \dfrac{1}{MN} \sum_{j=1}^{N} \sum_{i=1}^{M} \dfrac{\left| I_f(i, j) - I(i, j) \right|}{I(i, j)}$  (24)
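The seven statistical indexes above translate directly into numpy; the sketch below (with the gradient in (21) approximated by simple forward differences, an implementation choice not dictated by the paper) is illustrative rather than a reference implementation.

```python
import numpy as np

def mean_value(img):                      # (18)
    return img.mean()

def std_dev(img):                         # (19)
    return np.sqrt(((img - img.mean()) ** 2).mean())

def entropy(img):                         # (20), assuming 8-bit gray levels
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def average_gradient(img):                # (21), forward differences as the partial derivatives
    dx = img[1:, 1:] - img[:-1, 1:]
    dy = img[1:, 1:] - img[1:, :-1]
    return np.sqrt(dx ** 2 + dy ** 2).mean()

def correlation_coeff(a, b):              # (22)
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())

def spectrum_distortion(fused, src):      # (23)
    return np.abs(fused - src).mean()

def bias_index(fused, src):               # (24); src is assumed to have no zero pixels
    return (np.abs(fused - src) / src).mean()
```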

(8) Weighted fusion quality index (WFQI) and edge-dependent fusion quality index (EFQI) [15]: The WFQI and EFQI are evaluation indexes that need no standard reference image and take some aspects of the human visual system into account. Suppose $y'_A$, $y'_B$, and $y'_F$ are the edge images of the source images $y_A$, $y_B$ and of the fused image $y_F$, respectively. The WFQI is introduced to weight the feature information of the fused image that comes from the source images, while the EFQI focuses on the sensitivity of the human visual system to edge information. The two measures have a dynamic range of [-1, 1]; the closer the value is to 1, the higher the quality of the composite image is.

$Q_{WFQI}(y_A, y_B, y_F) = \sum_{\omega \in \Omega} c(\omega)\left[\lambda_A(\omega) Q_0(y_A, y_F \mid \omega) + \left(1 - \lambda_A(\omega)\right) Q_0(y_B, y_F \mid \omega)\right]$  (25)

$Q_{EFQI}(y_A, y_B, y_F) = Q_{WFQI}(y_A, y_B, y_F)^{1-\alpha} \cdot Q_{WFQI}(y'_A, y'_B, y'_F)^{\alpha}$  (26)

where $c(\omega) = C(\omega) / \sum_{\omega' \in \Omega} C(\omega')$ and $C(\omega) = \max\left(s(y_A \mid \omega), s(y_B \mid \omega)\right)$ denotes the overall saliency of a window, $\lambda_A(\omega) = s(y_A \mid \omega) / \left(s(y_A \mid \omega) + s(y_B \mid \omega)\right)$, and $s(y_A \mid \omega)$ is some salient feature of the image $y_A$ in the window $\omega$. In this paper, we select the energy as the salient feature, and the size of the window is 3 × 3; $\Omega$ is the set of all windows, and $Q_0$ is the general image quality index. The parameter $\alpha$ in (26) expresses the contribution of the edge images compared with the original images, and its variation range is [0, 1]; in this paper, we select $\alpha = 0.2$. Paper [9] adopted the LOG operator to obtain the edge image; however, the LOG operator cannot provide directional edge information and is sensitive to noise. Therefore, we select the Canny operator to detect the edge information, which detects edges by searching for local maxima of the image gradient. The Canny operator detects strong edges and weak edges with two thresholds, respectively, where the thresholds are selected automatically by the system, and weak edges are included in the output only when they are connected to strong edges. The Canny operator is not sensitive to noise and can detect the true weak edges.
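For completeness, a compact sketch of the windowed WFQI of (25) is given below. It assumes the local variance as the saliency (a stand-in for the window "energy" named above), a 3 × 3 sliding window with stride 1, and the Wang-Bovik index as $Q_0$; these implementation details are assumptions of this sketch, not specifications from the paper.

```python
import numpy as np

def q0(a, b, eps=1e-12):
    """Wang-Bovik image quality index on a single window."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return 4 * cov * ma * mb / ((va + vb) * (ma ** 2 + mb ** 2) + eps)

def wfqi(ya, yb, yf, win=3):
    """Sketch of the weighted fusion quality index of (25).

    Saliency s(.|w) is taken as the local variance; windows slide with stride 1.
    """
    num, den = 0.0, 0.0
    H, W = ya.shape
    for i in range(H - win + 1):
        for j in range(W - win + 1):
            wa = ya[i:i + win, j:j + win]
            wb = yb[i:i + win, j:j + win]
            wf = yf[i:i + win, j:j + win]
            sa, sb = wa.var(), wb.var()
            lam = sa / (sa + sb) if sa + sb > 0 else 0.5
            c = max(sa, sb)                      # overall saliency C(w)
            num += c * (lam * q0(wa, wf) + (1 - lam) * q0(wb, wf))
            den += c                             # normalization: c(w) = C(w) / sum C(w')
    return num / den if den > 0 else 0.0
```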

Table I  Comparison of fused results

Fusion methods       MV        STD      IE      AG       CC      SD       Bias    QW      QE
Source image: PAN    110.28    10.251   4.9623  9.4946   -       -        -       -       -
Source image: SPOT   92.452    9.5533   5.0436  12.749   -       -        -       -       -
PCA fusion           94.0803   46.6214  5.0873  18.7378  0.5211  58.4023  0.5829  0.5373  0.5133
IHS fusion           92.4500   50.8016  5.2449  13.8455  0.6564  38.2370  0.3625  0.5195  0.5033
WT-weighted fusion   102.8837  37.5430  4.9977  8.8577   0.8520  32.2293  0.2681  0.5157  0.4271
WT-LHS fusion        112.6562  54.3067  5.3260  16.2095  0.9171  14.7896  0.1390  0.5113  0.4392
CT-LHS fusion        112.6634  53.2326  5.3133  16.7984  0.9198  15.0153  0.1399  0.5194  0.4481
NSCT-LHS fusion      112.7205  52.6581  5.3046  16.7634  0.9268  14.2235  0.1329  0.5390  0.4704

From Table I, we can see that the quantitative evaluation indexes are in accordance with the visual effect. The fusion results based on our proposed adaptive fusion technique are superior to those of the traditional PCA and IHS fusion methods, which is embodied in the moderate brightness and dispersion of the gray values, the larger entropy, the stronger correlation, and the smaller spectrum distortion and bias. On the whole, and by virtue of the proposed adaptive fusion technique, the nonsubsampled contourlet transform-based fusion results are better than the wavelet transform-based and the contourlet transform-based results, respectively, especially for the preservation of the spectral information.

The histograms of the R, G, and B components of the TM multispectral image and of the NSCT-based fusion image are compared in Fig. 4. From the comparison of the R, G, B component histograms, we can conclude that the dynamic range of the fused image is larger than that of the source image, which shows that the fused image carries more distinguishable detail information and has a higher spatial resolution than the source image without introducing extraneous information.

Fig. 4  The R, G, B component histograms of the TM source image and of the NSCT-based fusion image: (a) R component of the TM source image; (b) R component of the NSCT fusion image; (c) G component of the TM source image; (d) G component of the NSCT fusion image; (e) B component of the TM source image; (f) B component of the NSCT fusion image

Conclusions

A novel panchromatic high-resolution and multispectral image fusion technique based on the nonsubsampled contourlet transform and the LHS transform is proposed in this paper. We take full advantage of the good multiresolution, shift-invariance, and multi-directional decomposition of the nonsubsampled contourlet transform, and an adaptive intensity component addition technique is introduced into the NSCT domain to improve the spatial resolution and to preserve the spectral and texture information simultaneously. From both the subjective and the objective evaluations, we can conclude that the proposed fusion technique is more effective than the other traditional fusion methods and yields improvements, especially in the preservation of spectral, texture, and contour information. The experiments show that the proposed technique is, to a certain extent, a preferable and effective remote sensing image fusion method.

References
1 Rockinger O. Pixel-level fusion of image sequences using wavelet frames. In: Mardia K V, Gill C A, Dryden I L, eds. Proceedings of Image Fusion and Shape Variability Techniques. Leeds, UK: Leeds University Press, 1996. 149-154
2 González-Audícana M, Saleta J L, García Catalán R, et al. Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing, 2004, 42(6): 1291-1299
3 Wang Z J, Ziou D, Armenakis C. A comparative analysis of image fusion methods. IEEE Transactions on Geoscience and Remote Sensing, 2005, 43(6): 1391-1402
4 Nunez J, Otazu X, Fors O, et al. Multiresolution-based image fusion with additive wavelet decomposition. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37(3): 1204-1211
5 Do M N, Vetterli M. The contourlet transform: an efficient directional multiresolution image representation. IEEE Transactions on Image Processing, 2005, 14(12): 2091-2106
6 da Cunha A L, Zhou J P, Do M N. The nonsubsampled contourlet transform: theory, design and application. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101
7 Simoncelli E P, Freeman W T, Adelson E H, et al. Shiftable multiscale transforms. IEEE Transactions on Information Theory, 1992, 38(2): 587-607
8 Zhou J P, da Cunha A L, Do M N. Nonsubsampled contourlet transform: construction and application in enhancement. In: Proceedings of the IEEE International Conference on Image Processing, 2005, 1: 469-472
9 Bamberger R H, Smith M J T. A filter bank for the directional decomposition of images: theory and design. IEEE Transactions on Signal Processing, 1992, 40(4): 882-893
10 Yang F S. Project Analysis and Application of Wavelet Transform. Beijing: Scientific Press, 1999. 121-126
11 Vaidyanathan P P. Multirate Systems and Filter Banks. Englewood Cliffs, NJ: Prentice-Hall, 1993
12 Kovačević J, Vetterli M. Nonseparable multidimensional perfect reconstruction filter banks and wavelet bases for R^n. IEEE Transactions on Information Theory, 1992, 38(2): 533-555
13 Tan Z, Bao F M, Li A G, et al. Digital Image Fusion. Xi'an: Xi'an Jiaotong University Press, 2004
14 Phoong S M, Kim C W, Vaidyanathan P P, et al. A new class of two-channel biorthogonal filter banks and wavelet bases. IEEE Transactions on Signal Processing, 1995, 43(3): 649-661
15 Piella G. New quality measures for image fusion. [Online], available: http://www.fusion2004.foi.se/papers/IF040542.pdf, 2004

YANG Xiao-Hui  Received her master degree from the College of Mathematics and Information Science, Henan University, in 2004. She is currently working toward the Ph. D. degree at the Institute of Intelligent Information Processing, Xidian University. Her research interests include multiresolution geometry analysis and its applications in image processing, and filter design. Corresponding author of this paper. E-mail: xhyang_lc@163.com

JIAO Li-Cheng  Professor and president of the Institute of Intelligent Information Processing, Xidian University. His main research interests include nonlinear science, intelligent information processing, and multiresolution geometry analysis. E-mail: lchjiao@mail.xidian.edu.cn
