
Image Fusion

Photogrammetry and Remote Sensing
Slides courtesy: Ms. M. Kumar, PRSD

Data Fusion
Data fusion is a process dealing with data and information from multiple sources to achieve refined/improved information for decision-making.

Often, in data fusion, not only remote sensing images are fused: further ancillary data (e.g. topographic maps, GPS coordinates, geophysical information) also contribute to the resulting image.

Image Fusion
Image fusion is the combination of two or more different images to form a new image by using a certain algorithm.

Related Terms
In the literature a large number of terms related to fusion can be found. Some define different approaches to image fusion, whilst others can be used equivalently. Image merging, image integration and multi-sensor data fusion are some of the expressions accepted in this context. Data integration comprises algorithms for image fusion too; this term is often used in the context of geographical information systems (GIS). The term information fusion refers to a different level of image processing: here the images are already interpreted to reach the information or knowledge level before being fused. Synergy (or synergism) of remote sensing data summarizes a very wide field of applications and approaches in image fusion; it requires the input of data that provide complementary rather than redundant information.

IMAGE FUSION: Why?
• Most satellite sensors operate in two modes: a multispectral mode and a panchromatic mode.
• The panchromatic mode corresponds to observation over a broad spectral band (similar to a typical black-and-white photograph); the multispectral (colour) mode corresponds to observation in a number of relatively narrower bands.
• In most satellite sensors the panchromatic mode has a better spatial resolution than the multispectral mode, while the multispectral mode usually has a better spectral resolution than the panchromatic mode.
• The better the spatial resolution, the more detailed the land-use information present in the imagery.
• To combine the advantages of the spatial and spectral resolutions of two different sensors, image fusion techniques are applied.

Aim and Use of Image Fusion
• Sharpening of images
  • improvement of spatial resolution by fusing high-resolution images with lower-resolution ones
  • preserve the interesting spectral resolution while incorporating a higher spatial resolution
• Enhance features not visible in either of the single data sets alone
  • the combined use of different data sources yields improved interpretation results, due to the different physical characteristics of the data

Aim and Use of Image Fusion (contd.)
• Change detection
  • combination of images taken at different moments in time (multitemporal data) gives information on changes that may have occurred
• Replace defective data
  • substitute missing information in one image with signals from another sensor image (e.g. clouds in NIR and shadows in SAR)

Image Fusion Approaches
• Pixel level
  • The fused image can be produced either through pixel-by-pixel fusion or through the fusion of an associated local neighbourhood of pixels in each of the component images.
• Feature level
  • If the sensors are measuring different physical phenomena, the sensor data must be fused at the feature/symbol level.
  • Typical features extracted from an image and used for fusion include edges and regions of similar intensity.
• Decision level
  • Represents a method that uses value-added data: the input images are processed individually for information extraction and then combined by applying decision rules.

The Prerequisites of Pixel-Based Fusion
The actual procedure of fusion does not consume much time and energy; it is the preprocessing work, and the effort of bringing the diverse data to a common platform, that require greater attention.

Proper pre-processing of the image data:
• Fusion involves the interaction of images having different spatial resolutions. These images have different pixel sizes, which creates problems while merging.
• Hence the image data are resampled to a common pixel spacing and map projection. The common pixel spacing should be the one required in the desired fused image.
• Besides, all sensor-specific corrections and enhancements of the image data have to be applied prior to image fusion. Any spatial enhancement performed prior to image fusion is of benefit to the resulting fused image. Therefore, as a general rule, one must attempt to produce the best single-sensor geometry and radiometry and then fuse the images.
• After the fusion process the contribution of each sensor cannot be distinguished or quantified.
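The resampling step above can be sketched in NumPy. This is a minimal illustration only: the integer zoom factor and nearest-neighbour scheme are simplifying assumptions, and a real workflow would use bilinear or cubic resampling onto a common map projection.

```python
import numpy as np

def resample_to_pan_grid(ms_band, factor):
    """Nearest-neighbour resampling of a coarse multispectral band onto
    the finer panchromatic grid (integer zoom factor). Both images are
    assumed co-registered; a production workflow would use bilinear or
    cubic resampling plus a common map projection."""
    return np.repeat(np.repeat(ms_band, factor, axis=0), factor, axis=1)

# Hypothetical example: 4x4 MS pixels onto a 16x16 pan grid (factor 4)
ms = np.arange(16, dtype=float).reshape(4, 4)
ms_up = resample_to_pan_grid(ms, 4)
print(ms_up.shape)   # (16, 16)
```

Each coarse pixel simply becomes a factor x factor block of identical fine pixels, which is enough to put both data sets on a common pixel spacing before fusion.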

Techniques of Pixel-Based Fusion
• Band substitution
• Multiplicative
• Color space transformations (RGB, IHS)
• Numerical: Brovey transform
• Statistical: PCA

Band Substitution
Procedure
• Geometric rectification/registration of both imageries using polynomial curve fitting
• Resampling of the low spatial resolution data to the high-resolution pixel size
• Since panchromatic data is a record of blue, green and red energy, it can be substituted directly for any one of the bands and displayed as three bands using R,G,B colour theory

Advantage of this method: the radiometric quality of the remaining data is not changed.
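The substitution step is little more than an array assignment. The sketch below assumes the multispectral stack has already been resampled to the pan pixel size; the choice of which band to replace (index 1 here) is illustrative.

```python
import numpy as np

def band_substitution(ms_stack, pan):
    """Substitute the panchromatic band for one multispectral band.
    ms_stack: (3, rows, cols) multispectral data already resampled to
    the pan pixel size; here pan replaces band index 1 (an arbitrary,
    illustrative choice). The other bands keep their original DNs,
    so their radiometric quality is untouched."""
    fused = ms_stack.copy()
    fused[1] = pan
    return fused

# Hypothetical 2x2 scene: three constant MS bands and a pan band
ms = np.ones((3, 2, 2)) * np.array([10.0, 20.0, 30.0])[:, None, None]
pan = np.full((2, 2), 99.0)
rgb = band_substitution(ms, pan)
```

The untouched bands come through bit-for-bit, which is exactly the radiometric advantage the slide claims for this method.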

Multiplicative Technique
The multiplicative equations are given by

DN_B1(new) = DN_B1 × DN_high-res
DN_B2(new) = DN_B2 × DN_high-res
DN_B3(new) = DN_B3 × DN_high-res

The algorithm operates on the original image. The result is an increased presence of the intensity component. For many applications this is desirable: people involved in urban or suburban studies, city planning and utilities routing often want roads and cultural features (which tend toward high reflection) to be pronounced in the image.

Crippen, Robert E., 1989. "A Simple Spatial Filtering Routine for the Cosmetic Removal of Scan-Line Noise from Landsat TM P-Tape Imagery." Photogrammetric Engineering & Remote Sensing, Vol. 55, No. 3: 327-331.
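The equations above translate directly into a band-wise product. In this sketch the rescaling to 0..255 after the multiplication is a common display convention, not part of the equations themselves.

```python
import numpy as np

def multiplicative_fusion(ms_stack, pan):
    """Multiplicative merge per the equations above: each resampled
    multispectral band is multiplied, pixel by pixel, by the
    high-resolution pan band. The raw product overflows the DN range,
    so each band is rescaled to 0..255 here -- a common display
    choice, not part of the equation itself."""
    fused = ms_stack.astype(float) * pan.astype(float)
    mins = fused.min(axis=(1, 2), keepdims=True)
    maxs = fused.max(axis=(1, 2), keepdims=True)
    return 255.0 * (fused - mins) / np.maximum(maxs - mins, 1e-12)

ms = np.random.rand(3, 4, 4) * 255.0      # synthetic MS bands
pan = np.random.rand(4, 4) * 255.0        # synthetic pan band
fused = multiplicative_fusion(ms, pan)
```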

Merged PAN + multispectral using the multiplicative technique.

Brovey Transform
The Brovey transform is a formula that normalizes the multispectral bands used for an RGB display and multiplies the result by any desired data to add the intensity or brightness component to the image. The Brovey transform equations are given by

DN_B1(new) = [DN_B1 / (DN_B1 + DN_B2 + DN_B3)] × DN_high-res
DN_B2(new) = [DN_B2 / (DN_B1 + DN_B2 + DN_B3)] × DN_high-res
DN_B3(new) = [DN_B3 / (DN_B1 + DN_B2 + DN_B3)] × DN_high-res

where B is a band.
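The three equations above can be implemented in a few lines. Note the built-in property: because each band is divided by the band sum, the fused bands sum back to the pan value at every pixel, which is also why the original scene radiometry is not preserved.

```python
import numpy as np

def brovey(b1, b2, b3, pan):
    """Brovey transform: normalize each band by the band sum, then
    multiply by the high-resolution image, per the equations above."""
    s = b1 + b2 + b3
    s = np.where(s == 0, 1e-12, s)   # guard against division by zero
    return b1 / s * pan, b2 / s * pan, b3 / s * pan

# Illustrative random data
b1 = np.random.rand(4, 4) + 0.1
b2 = np.random.rand(4, 4) + 0.1
b3 = np.random.rand(4, 4) + 0.1
pan = np.random.rand(4, 4)
f1, f2, f3 = brovey(b1, b2, b3, pan)
```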

Brovey Transform (contd.)
The Brovey transform was developed to visually increase contrast in the low and high ends of an image's histogram (i.e. to provide contrast in shadows, water and high-reflectance areas such as urban features). Consequently, it is good for producing RGB images with a higher degree of contrast in the low and high ends of the image histogram, and for producing visually appealing images. However, the Brovey transform should not be used if preserving the original scene radiometry is important. Since the Brovey transform is intended to produce RGB images, only three bands at a time should be merged from the input multispectral scene. The resulting merged image should then be displayed with bands 1, 2, 3 assigned to R, G, B.

Color Space Transformation
• There are three primary colors: red, green and blue. None of these colors can be produced by a linear combination of the others; in addition, any color can be produced by a linear combination of these three colors.
• This theory is also called the additive color theory; it is based on what happens when light is mixed.
• The other theory is the subtractive color theory, which is based on what happens when pigments are mixed. Thus, when red, green and blue pigments are mixed, we get a black color.

Image Fusion Using IHS Transformation
Procedure
1. RGB to IHS transformation: three bands of low spatial resolution data in RGB are transformed into the I, H, S components.
2. Contrast enhancement: the high spatial resolution (panchromatic) image is contrast stretched so that it has approximately the same variance and mean as the intensity (I) image.
3. Substitution: the stretched high spatial resolution image is substituted for the intensity (I) image.
4. IHS to RGB: the modified IHS dataset is transformed back to RGB color space using an inverse IHS transformation.
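The four steps above can be sketched compactly if we adopt the simple intensity model I = (R+G+B)/3 (one of several IHS variants, chosen here for brevity). With that model, substituting a stretched pan band for I and inverting the transform reduces to scaling each band by pan'/I, which leaves hue and saturation unchanged.

```python
import numpy as np

def ihs_fusion(rgb, pan):
    """IHS-style fusion sketch with the simple model I = (R+G+B)/3.
    Steps: compute I; stretch pan to I's mean/variance; substitute;
    invert. The inversion is a per-pixel scaling of each band."""
    r, g, b = rgb
    intensity = (r + g + b) / 3.0
    # step 2: stretch pan to the mean and variance of the I image
    pan_s = ((pan - pan.mean()) / max(pan.std(), 1e-12)
             * intensity.std() + intensity.mean())
    # steps 3-4: substitute pan for I and invert the transform
    ratio = pan_s / np.where(intensity == 0, 1e-12, intensity)
    return np.stack([r * ratio, g * ratio, b * ratio])

rgb = np.random.rand(3, 8, 8) + 0.1       # synthetic low-res RGB
pan = np.random.rand(8, 8)                # synthetic high-res pan
fused = ihs_fusion(rgb, pan)
```

Because the stretch preserves the mean, the fused image keeps the average intensity of the original RGB composite.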

PRINCIPAL COMPONENT ANALYSIS
• Multispectral image data are usually strongly correlated from one band to another and thus contain similar information. For example, Landsat MSS bands 4 and 5 (green and red, respectively) typically have similar visual appearances, since reflectances for the same surface cover types are almost equal.
• Principal component analysis is a pre-processing transformation that creates new images with the uncorrelated components as bands.
• The objective of this transformation is to reduce the dimensionality (i.e. the number of bands) of the data and to compress as much of the information in the original bands as possible into fewer bands.
• The "new" bands that result from this statistical procedure are called components.
• Principal component images may be analysed as separate black-and-white images, or any three component images may be colour coded to form a colour composite.
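The decorrelation described above can be demonstrated directly: the sketch below (a minimal NumPy illustration, not a production implementation) builds three strongly correlated synthetic bands, as in the Landsat example, and derives uncorrelated component images from the covariance matrix.

```python
import numpy as np

def principal_components(bands):
    """Forward PCA on an image stack (n_bands, rows, cols). Returns the
    component images plus the eigenvector matrix and band means needed
    to invert the transform."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    order = np.argsort(eigvals)[::-1]      # PC1 carries the most variance
    eigvecs = eigvecs[:, order]
    pcs = eigvecs.T @ (X - mean)
    return pcs.reshape(n, r, c), eigvecs, mean

# Three strongly correlated synthetic bands
base = np.random.rand(16, 16)
bands = np.stack([base + 0.05 * np.random.rand(16, 16) for _ in range(3)])
pcs, vecs, mean = principal_components(bands)
```

The component images are mutually uncorrelated and ordered by variance, so most of the shared information is compressed into PC1.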

Image Fusion Using PCA
Procedure
• Principal component images are derived from the raw image data set.
• The panchromatic data is stretched to have approximately the same variance and average as the first principal component.
• The stretched PAN image is substituted for the first principal component and the data are transformed back into RGB space. (The reason for substituting the PAN image for PC1 is that PC1 normally contains the information common to all the bands input to the PCA.)
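The three-step procedure above can be sketched end to end. This is an illustrative NumPy sketch under simplifying assumptions (bands already resampled to the pan grid; eigendecomposition of the band covariance used for the forward and inverse transforms).

```python
import numpy as np

def pca_fusion(bands, pan):
    """PCA pan-sharpening sketch following the steps above: forward
    PCA, stretch the pan band to PC1's mean and variance, substitute
    it for PC1, then apply the inverse transform."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    eigvecs = eigvecs[:, np.argsort(eigvals)[::-1]]
    pcs = eigvecs.T @ (X - mean)
    p = pan.reshape(-1).astype(float)
    p = (p - p.mean()) / max(p.std(), 1e-12) * pcs[0].std() + pcs[0].mean()
    pcs[0] = p                              # substitute pan for PC1
    fused = eigvecs @ pcs + mean            # inverse transform back
    return fused.reshape(n, r, c)

base = np.random.rand(16, 16)
bands = np.stack([base + 0.05 * np.random.rand(16, 16) for _ in range(3)])
pan = np.random.rand(16, 16)
fused = pca_fusion(bands, pan)
```

Since the stretch preserves PC1's mean, the fused bands retain the original per-band averages while taking their spatial detail from the pan image.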


Pixel-by-Pixel Addition of High-Frequency Data
Procedure
• A high-pass filter is applied to the high spatial resolution image.
• The resultant high-pass image contains high-frequency information that is related mostly to the spatial characteristics of the scene; the spatial filter removes most of the spectral information.
• The high-pass filtered results are added, pixel by pixel, to the lower spatial resolution scene.

The advantage of this method is that it optimally maintains the spectral information whilst sharpening the spatial detail.
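The high-pass step can be realized by subtracting a low-pass (box-filtered) version of the pan image from the pan image itself, then adding the residual to the resampled multispectral band. The 3x3 kernel size below is an illustrative default.

```python
import numpy as np

def high_pass_fusion(ms_band, pan, ksize=3):
    """HPF merge sketch: a box low-pass of the pan image is subtracted
    from the pan image itself to isolate high spatial frequencies; the
    result is added, pixel by pixel, to the resampled multispectral
    band."""
    pad = ksize // 2
    padded = np.pad(pan.astype(float), pad, mode='edge')
    low = np.zeros(pan.shape, dtype=float)
    for i in range(ksize):              # sum the ksize*ksize shifted
        for j in range(ksize):          # windows to form the box mean
            low += padded[i:i + pan.shape[0], j:j + pan.shape[1]]
    low /= ksize * ksize
    return ms_band + (pan - low)        # spectral base + spatial detail

# A flat pan image carries no high-frequency detail, so the MS band
# passes through unchanged:
ms = np.arange(25.0).reshape(5, 5)
pan = np.full((5, 5), 7.0)
fused = high_pass_fusion(ms, pan)
```

Because only the high-frequency residual of the pan image is injected, the low-frequency (spectral) content of the multispectral band is left intact, which is the advantage the slide describes.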

• Intensity is represented by the distance from the color point to the origin.
• Hue is the angle of the vector from the cone axis to the point, measured with respect to the red axis (0: red, 60: yellow, 120: green, 180: cyan, 240: blue, 300: magenta, 360: red).
• Saturation is represented by the distance from the point to the cone axis.

The coordinate system for the HSI model is cylindrical (Figure 1). A point at the apex (R=G=B=0) is black; the point S=0 and I=1 (situated on the achromatic axis) is white; intermediate values of I for S=0 are the grays. Note that when S=0, the value of H is undefined.

Changing H corresponds to selecting a pure color when S=I=1. For example, pure red is at H=0°, S=1 and I=1; pure blue has H=240°, S=1 and I=1. The value of S is a ratio ranging from 0 to 1 on the side of the color circle. Adding white to a pure color corresponds to decreasing S without changing I; shades are created by decreasing I while fixing S at unity; tones are created by decreasing both S and I. Low saturation results in a gray-looking image, middle saturation produces pastels, and high saturation results in vivid colors.

I) In the hue image, pixels that are predominantly red in the trichromatic image will be represented by high levels, pixels that are predominantly green by intermediate levels, and pixels that are predominantly blue by low levels. If the levels are instead represented by angles from 0° to 360°, red corresponds to 0°, green to 120°, blue to 240°, and red again to 360°.
II) In the saturation image, the areas of the tri-color image where the colors are purest will be represented by the high levels.
III) The intensity image is the one that most resembles each of the monochromatic components of the tri-color image. It expresses the average intensity of the three components in a gray scale.
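The geometry above can be made concrete with a per-pixel RGB-to-HSI conversion. This sketch uses the common arccos formulation of hue; it is one of several equivalent HSI parameterizations, shown here only to illustrate the angles and ratios just described.

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Per-pixel RGB -> HSI for components in [0, 1], following the
    model described above: I is the mean of the three components,
    S the departure from the gray axis, and H an angle in degrees
    measured from red (0) through green (120) and blue (240)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                      # H is undefined when S = 0
    else:
        h = float(np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))))
        if b > g:
            h = 360.0 - h
    return h, float(s), float(i)

print(rgb_to_hsi(1.0, 0.0, 0.0))   # pure red: H = 0, fully saturated
```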