Image Fusion

Prepared by: Rushabh P Jhaveri (15)

Introduction 
  

Developments in the field of sensing technology have led to multi-sensor systems in many applications such as remote sensing, medical imaging, and military systems. The result is an increase in the amount of available data. Can we reduce this increasing volume of information while simultaneously extracting all the useful information?

Basics of Image Fusion 

The aim of image fusion is to

reduce the amount of data,

retain important features, and

create a new image that is more suitable for human/machine perception or for further processing tasks.

Single-sensor image fusion system: a sequence of images is taken by one sensor and then fused into a single image. It has some limitations due to the capability of the sensor.

Multi-sensor image fusion system: images are taken by more than one sensor and then fused into a single image. It overcomes the limitations of the single-sensor system.

Fusion Categories

Multi-view fusion: images are taken from different viewpoints to build a 3D view.

Multi-modal fusion

Multi-focus fusion

Multi-modal fusion example: NMR and SPECT images combined into a fused image.

Multi-focus fusion example: differently focused source images combined into a fused image.

System-level considerations

Three key non-fusion processes:

Image registration

Image pre-processing

Image post-processing

Image registration is the process of aligning images so that their details overlap accurately. Pre-processing makes the images best suited for the fusion algorithm. The post-processing stage depends on the type of display, the fusion system being used, and the personal preference of a human operator.

Image registration

Fields of view, resolutions, lens distortions, and frame rates cannot be expected to match across sensors.

In all applications the fundamental problem is the same: to find a mapping between the pixels (x, y) in one image and the pixels (u, v) in another (a small mapping sketch follows).

Straightforward geometric translation or rotation is the simplest technique; affine, polynomial, and projective transformations are more advanced global approaches.
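As a concrete illustration, the sketch below applies one such global mapping, an affine transform, to a pixel coordinate using NumPy. The rotation, scale, and translation values are placeholders chosen for the example, not parameters from any real registration result.

    import numpy as np

    # Illustrative affine transform: maps a pixel (x, y) in the reference image
    # to (u, v) in the sensed image via [u, v, 1]^T = M @ [x, y, 1]^T.
    # The rotation/scale/translation values are placeholders for illustration.
    theta = np.deg2rad(5.0)          # small rotation between the two views
    scale = 1.02                     # slight scale difference between sensors
    tx, ty = 12.0, -7.5              # translation in pixels

    M = np.array([
        [scale * np.cos(theta), -scale * np.sin(theta), tx],
        [scale * np.sin(theta),  scale * np.cos(theta), ty],
        [0.0,                    0.0,                   1.0],
    ])

    def map_pixel(x, y):
        """Apply the affine mapping to a single pixel coordinate."""
        u, v, _ = M @ np.array([x, y, 1.0])
        return u, v

    print(map_pixel(100, 200))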

Methodology (a registration sketch follows this list)

Feature detection: the algorithm should be able to detect the same features in both images.

Feature matching: correspondence between the features detected in the sensed image and those detected in the reference image is established.

Transform model estimation: the type and parameters of the mapping functions are chosen.

Image resampling and transformation: the sensed image is transformed into the coordinate frame of the reference image.
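The four registration steps above can be sketched in Python. The snippet below assumes the OpenCV library, ORB features, and a homography as the transform model; these are illustrative choices rather than the specific detector or mapping used in the presentation.

    import cv2
    import numpy as np

    def register(reference, sensed):
        """Feature-based registration sketch: detect, match, estimate, resample."""
        # 1. Feature detection: ORB keypoints and descriptors in both images.
        orb = cv2.ORB_create(1000)
        kp_ref, des_ref = orb.detectAndCompute(reference, None)
        kp_sen, des_sen = orb.detectAndCompute(sensed, None)

        # 2. Feature matching: brute-force Hamming matcher, keep the best matches.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_sen, des_ref),
                         key=lambda m: m.distance)[:100]

        # 3. Transform model estimation: fit a homography with RANSAC.
        src = np.float32([kp_sen[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # 4. Image resampling and transformation: warp the sensed image
        #    into the reference image's coordinate frame.
        h, w = reference.shape[:2]
        return cv2.warpPerspective(sensed, H, (w, h))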

Example: how do we register these two images? The user specifies and pairs corresponding points in the two images.

Methods of Image Fusion

Classification

Spatial domain fusion:

Weighted pixel averaging

Brovey method

Principal component analysis (PCA)

Intensity-Hue-Saturation (IHS)

Transform domain fusion:

Laplacian pyramid

Curvelet transform

Discrete wavelet transform (DWT)

Weighted pixel averaging

The simplest image fusion technique: F(x, y) = W_A * A(x, y) + W_B * B(x, y), where W_A and W_B are scalar weights.

It has the advantage of suppressing any noise present in the source imagery.

However, it also suppresses salient image features, inevitably producing a low-contrast fused image with a 'washed-out' appearance.
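A minimal NumPy sketch of the formula above; equal weights W_A = W_B = 0.5 are assumed purely for illustration, and the images are assumed to be co-registered and 8-bit.

    import numpy as np

    def weighted_average_fusion(A, B, w_a=0.5, w_b=0.5):
        """F(x, y) = W_A * A(x, y) + W_B * B(x, y) for co-registered images."""
        F = w_a * A.astype(np.float64) + w_b * B.astype(np.float64)
        # Clip back to the 8-bit display range.
        return np.clip(F, 0, 255).astype(np.uint8)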

Pyramidal Method

Produces sharp, high-contrast images that are clearly more appealing and have greater information content than simpler ratio-based schemes.

An image pyramid is essentially a data structure consisting of a series of low-pass or band-pass copies of an image, each representing pattern information of a different scale.

Flow of the pyramidal method:

Start from the source image G_0; convolve with a Gaussian kernel k and sub-sample to obtain G_1, repeating until G_N.

For each level k, expand G_(k+1) by duplicating each row and column and convolving with k to obtain E_k, then form the band-pass level L_k = G_k - E_k, repeating until L_(N-1).
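The flow above can be sketched with OpenCV's pyrDown/pyrUp, which perform the Gaussian smoothing, sub-sampling, and expansion steps. The coefficient-selection rule used when merging the two pyramids (largest magnitude wins) is an assumption made for illustration; the slides do not specify one.

    import cv2
    import numpy as np

    def laplacian_pyramid(img, levels=4):
        """Build G_0..G_N by Gaussian smoothing + sub-sampling, and L_k = G_k - E_k."""
        g = img.astype(np.float32)
        gaussian = [g]
        for _ in range(levels):
            g = cv2.pyrDown(g)                               # Gaussian blur + sub-sample
            gaussian.append(g)
        laplacian = []
        for k in range(levels):
            size = (gaussian[k].shape[1], gaussian[k].shape[0])
            e_k = cv2.pyrUp(gaussian[k + 1], dstsize=size)   # expand G_(k+1)
            laplacian.append(gaussian[k] - e_k)              # L_k = G_k - E_k
        laplacian.append(gaussian[-1])                       # keep the coarsest level G_N
        return laplacian

    def fuse_pyramids(pyr_a, pyr_b):
        """Assumed rule: keep the coefficient with the larger magnitude at each pixel."""
        return [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pyr_a, pyr_b)]

    def reconstruct(pyr):
        """Collapse the pyramid: G_k = L_k + expand(G_(k+1))."""
        img = pyr[-1]
        for lap in reversed(pyr[:-1]):
            size = (lap.shape[1], lap.shape[0])
            img = cv2.pyrUp(img, dstsize=size) + lap
        return np.clip(img, 0, 255).astype(np.uint8)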

Discrete Wavelet Transform method

The DWT represents any arbitrary function x(t) as a superposition of a set of wavelets or basis functions, generated from a single mother wavelet by dilation or contraction (scaling) and translation (shifts).

Advantages of DWT in Image Fusion

Well suited to handling different image resolutions.

Allows the image to be decomposed into different kinds of coefficients while preserving the image information.

Coefficients coming from different images can be appropriately combined to obtain new coefficients, where the information in the merged coefficients is also preserved.

The final fused image is obtained through the inverse DWT (IDWT).

Algorithm (a simplified sketch follows)

Compute a 2-level DWT of each image.

The mean and standard deviation over 3 × 3 windows are used as the activity measurement to find the edge information.

The low-frequency sub-band is chosen based on the combined edge information in the corresponding high-frequency sub-bands.

The final fused image is obtained by applying the inverse DWT to the fused wavelet coefficients.
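A simplified sketch of this algorithm, assuming the PyWavelets (pywt) library. It keeps the overall structure (2-level DWT, fuse the coefficients, inverse DWT) but uses an averaging rule for the approximation sub-band and a maximum-absolute rule for the detail sub-bands, rather than the 3 × 3 edge-activity measure described above.

    import numpy as np
    import pywt

    def dwt_fusion(A, B, wavelet="db2", levels=2):
        """2-level DWT fusion sketch: average approximations, max-abs details."""
        coeffs_a = pywt.wavedec2(A.astype(np.float64), wavelet, level=levels)
        coeffs_b = pywt.wavedec2(B.astype(np.float64), wavelet, level=levels)

        # Approximation (low-frequency) sub-band: simple average (assumed rule).
        fused = [(coeffs_a[0] + coeffs_b[0]) / 2.0]

        # Detail (high-frequency) sub-bands: pick the larger-magnitude coefficient.
        for (cha, cva, cda), (chb, cvb, cdb) in zip(coeffs_a[1:], coeffs_b[1:]):
            fused.append(tuple(
                np.where(np.abs(a) >= np.abs(b), a, b)
                for a, b in ((cha, chb), (cva, cvb), (cda, cdb))
            ))

        # Final fused image via the inverse DWT.
        F = pywt.waverec2(fused, wavelet)
        return np.clip(F, 0, 255).astype(np.uint8)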

Results

MMW and visible source images with the fused result.

IR and visible source images with the fused result.

Applications of Image Fusion

Medical image fusion

Helps physicians to extract features from multi-modal images.

Two types of images are used: structural (MRI, CT) and functional (PET, SPECT).

Example: MRI-T2 and PET images with the fused result.

Remote sensing

Remote sensing systems measure and record data about a scene; they are powerful tools for monitoring the Earth's surface and atmosphere.

Different types of images are taken by different sensors, but multi-spectral and multi-polarization images are the most important because they increase the separation between the segments.

So what is the requirement for image fusion in remote sensing?

Objectives of image fusion in remote sensing

Improve the spatial resolution.

Improve classification accuracy.

Improve the geometric precision.

Enhance the capabilities of feature display.

Enhance the visual interpretation.

Enhance the capability of change detection.

Replace or repair defective image data.

Example: PAN (1 m) and color (4 m) images with the fused result.
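The PAN plus multispectral pairing above is the classic pan-sharpening setting. Below is a minimal sketch of the Brovey method listed earlier, assuming the multispectral bands have already been resampled to the PAN resolution; using the per-pixel band mean as the intensity is an implementation choice, not taken from the slides.

    import numpy as np

    def brovey_pansharpen(ms, pan, eps=1e-6):
        """Brovey transform sketch: scale each multispectral band by PAN / intensity.

        ms  -- multispectral image, shape (H, W, bands), resampled to PAN size
        pan -- panchromatic image, shape (H, W)
        """
        ms = ms.astype(np.float64)
        pan = pan.astype(np.float64)
        intensity = ms.mean(axis=2) + eps          # per-pixel intensity of the MS bands
        ratio = pan / intensity                    # injection ratio from the PAN band
        fused = ms * ratio[..., np.newaxis]        # F_b = MS_b * PAN / I
        return np.clip(fused, 0, 255).astype(np.uint8)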

Conclusion

Thank you
