
Low Complexity Detail Preserving Multi-Exposure Image Fusion

Submitted by

Shrinivas Chidrawar
Shajid Islam
Hithesh Reddy Velkooru

Supervisor
Prof. Bin Chen
Abstract

In this project we first register the input images and then fuse them. Image registration aligns the images so that the fusion can be performed correctly.

Typical displays and cameras have a limited dynamic range, whereas real-world scenes can have a very large one. Multi-exposure image fusion therefore forms an important part of high dynamic range photography. The field has advanced considerably in recent years; however, efforts continue to develop algorithms with lower computational complexity and better fusion quality. The project includes a study of the existing fusion algorithms.

The proposed low complexity algorithm does not involve any filtering or transformation, and therefore consumes little computational time. It is fully automatic and does not require any user input. Experimental results show that the algorithm gives comparable fusion quality at very low computational cost. Finally, the performance of the algorithms is compared using both subjective and objective measures; the results are comparable with those of existing algorithms and show that the proposed low complexity algorithm is better at preserving details.
Introduction
Data fusion is an important engineering methodology that helps to analyze a system more efficiently. One everyday example of data fusion is an audio-visual clip, which gives the viewer a richer experience and a clearer idea of what is being communicated. Multi-exposure image fusion is an image fusion technique used to fuse images captured at varying exposures. Image fusion algorithms are mainly classified by the nature of the input images and by the way in which the fusion is carried out. The fusion process can be pixel-based or feature-based. Pixel-based image fusion operates at the lowest level of abstraction, where each pixel has an impact on the final output. Pixel-based fusion can be carried out in either the spatial or the transform domain. Although the scope of a pixel-based algorithm is local, transform-based methods create the fused image globally; i.e., a change in a single coefficient in the transform domain propagates throughout the image in the spatial domain. Based on the nature of the input images and the objective, fusion algorithms are further classified as multi-temporal, multi-sensor, multi-focus, and multi-exposure image fusion. Multi-temporal image fusion integrates image information captured at different instants of time. Multi-focus image fusion combines images captured at various focal lengths to recover information present at various depths. Multi-sensor image fusion integrates data acquired from different sensors. Multi-exposure image fusion fuses images captured with different exposures.

When we capture an image, we are mapping the luminance of the scene onto a 2-D plane at a higher level of abstraction (image pixel values). Since most current digital devices operate at a fixed resolution (usually 8 bits per channel for display devices), the final pixel values are the product of several non-linear mappings encountered during image formation [1]. These pixel values therefore do not measure the true relative radiance in the scene, and a scene with a higher dynamic range aggravates this problem. At a fixed exposure, a captured high dynamic range scene will be either over-exposed or under-exposed, resulting in a loss of detail in one of the two extreme intensity regions. Figures 1 and 2 depict this scenario.

Figure 1: Mapping between Radiance Values and Integer Pixel Value


In Figure 2, it can be seen that the details near the window, where the radiance is high, are captured well in the under-exposed image, while the inner-room details with relatively low radiance are clearly visible in the over-exposed image. A photographer would therefore have to choose a range of radiance values and adjust the camera exposure accordingly. Another way to capture the complete information of a scene is to photograph it at various exposures and pick the required details from the available sequence. Multi-exposure image fusion is a way to do this.

The target applications of multi-exposure image fusion are HDR photography and computer vision. It is therefore necessary to build algorithms with minimum processing time. Another practical aspect to be considered is image registration. Since we process a sequence of images taken over some interval of time, there is always a possibility of the camera tilting between shots. The algorithm should therefore be able to produce a registered fusion. The work presented in this report is focused on pixel-level image fusion; its objective is to develop fusion schemes that produce a highly detailed image.

Figure 2: Example of underexposed and overexposed images


The simplest possible way to obtain a fused image is to take a weighted sum of the bracketed images. However, to obtain a seamless fusion without resorting to any transformation, the weights must be smooth. It is also important that all the important features of a particular image receive proper weights.
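As a point of reference, a minimal sketch of such a weighted sum with equal weights is shown below (in NumPy; the function name and array conventions are our own illustration, not part of the report):

```python
import numpy as np

def naive_fusion(images):
    """Equal-weight average of an exposure stack.

    images: list of H x W x 3 float arrays normalized to [0, 1].
    With constant weights the result tends to look washed out,
    which motivates the smooth, exposure-dependent weight maps below.
    """
    stack = np.stack(images, axis=0)   # shape (N, H, W, 3)
    return stack.mean(axis=0)          # per-pixel average
```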

Under-exposed and over-exposed images lack information in the low-radiance and high-radiance areas of a scene, respectively. Hence, their corresponding weight maps should have lower weights in these regions. This simple fact governs the proposed algorithm. We use a smooth Gaussian function to generate weight maps for the given sequence of images; the choice of the peak and the spread of this function for a particular image determines the quality of the output. We further limit the scope of the algorithm to images with balanced exposures, i.e., to sequences of images whose average intensities are evenly distributed without any bias.
Method
Consider I_k, k = 1, 2, ..., N, to be a set of N color images captured at multiple exposures, with I_{Mono,k} denoting their grayscale versions. The pixel values are assumed to be normalized to the range 0 to 1. We use a Gaussian function to construct the weight maps of these images. Let m_k denote the average gray level of the kth image. The weight map function for the kth image is given by

W_k(x, y) = \exp\left( -\frac{(I_{Mono,k}(x, y) - \mu_k)^2}{2\sigma^2} \right)    (1)

where

\mu_k = 1 - \frac{m_k - \min\{m_k,\ k = 1, \ldots, N\}}{\max\{m_k,\ k = 1, \ldots, N\} - \min\{m_k,\ k = 1, \ldots, N\}},
\qquad
\sigma = \begin{cases} 1/N, & N \le 5 \\ 0.2, & N > 5 \end{cases}    (2)

The final fused image F is obtained by a pixel-by-pixel weighted sum of the input images:

F(x, y) = \sum_{k=1}^{N} W_{Norm,k}(x, y) \, I_k(x, y)    (3)

Figure 3: Input image sequence

Figure 4: Weight map functions for the input sequence

Figure 5: Weight maps according to equation (1)

Here, I_{Mono,k}(x, y) denotes the gray level of the kth image at pixel location (x, y), and W_k(x, y) represents the weight assigned to that pixel. The parameter μ_k determines the location of the peak of the Gaussian weight map function depending on the image exposure: for the most overexposed image the peak is located at 0, and as the exposure decreases the peak moves towards 1, so that for the most underexposed image the peak is located at 1. The parameter σ controls the spread of the Gaussian weight map function and is determined from the image mean and standard deviation; the selection of sigma is presented in the next section. The weight maps so obtained are normalized by

W_{Norm,k}(x, y) = \frac{W_k(x, y)}{\sum_{k'=1}^{N} W_{k'}(x, y)}    (4)

This ensures that the weight maps sum to one at each pixel location (x, y). An example of weight maps is shown in the figures above: Figure 3 shows the input sequence, Figure 4 shows the corresponding weight map functions based on the exposure of the images, and Figure 5 shows the weight maps according to equation (1). One can clearly observe that these weight maps are consistent and have no spurious transitions. This enables fusion without any need for multi-resolution blending, thereby reducing computation. For color images, equation (3) is applied separately to the R, G, and B color channels to obtain the fused RGB image.
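Putting equations (1) through (4) together, a hedged end-to-end sketch of the fusion step could look like this (the function name, the array layout, and the small epsilon guarding against a zero denominator are our additions):

```python
import numpy as np

def fuse_exposures(color_stack, gray_stack, sigma):
    """Fuse N exposures per equations (1)-(4).

    color_stack: (N, H, W, 3) color images, values in [0, 1].
    gray_stack:  (N, H, W) grayscale versions, values in [0, 1].
    """
    # Equation (2): peak location from each image's mean gray level.
    means = gray_stack.reshape(len(gray_stack), -1).mean(axis=1)
    lo, hi = means.min(), means.max()
    mus = 1.0 - (means - lo) / (hi - lo)

    # Equation (1): one Gaussian weight map per image.
    weights = np.exp(-((gray_stack - mus[:, None, None]) ** 2)
                     / (2.0 * sigma ** 2))

    # Equation (4): normalize so the weights sum to one at every pixel.
    weights = weights / (weights.sum(axis=0, keepdims=True) + 1e-12)

    # Equation (3), applied separately to the R, G and B channels.
    return (weights[..., None] * color_stack).sum(axis=0)
```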
Selection of sigma (σ)
The choice of the peak μ_k depends upon the average intensity of the image and is given by equation (2). Sigma (σ) determines the proportion in which the images are blended together. An improper choice of sigma can result in spurious transitions in the output, or may create a washed-out or dark image. In the proposed algorithm the value of sigma is calculated from the average intensity of the image. Equation (5) gives the value of sigma for the kth image.

(5)

where μ_k is the peak of the kth weight map as in equation (1), τ_k = m_k / m_1, and m_k is the average intensity of the kth grayscale input image.

Results
Result for the first image sequence: weight map graph and weight map images, followed by the fused output.

Result for the second image sequence: input images and weight maps, followed by the fused output.

Results with Registration:


Here we test the algorithm on tilted pictures with different exposures. The registration step is performed first, and the fusion is applied afterwards.
Input Images
Output Registered Image
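The report does not state which registration method was used. One plausible sketch, assuming OpenCV's ECC alignment with a Euclidean motion model (the termination criteria and filter size below are our assumptions, not the report's), would be:

```python
import cv2
import numpy as np

def register_to_reference(ref_gray, mov_gray, mov_color):
    """Warp one exposure onto a reference frame before fusion.

    ref_gray, mov_gray: float32 grayscale images in [0, 1].
    mov_color: the corresponding color image to be warped.
    """
    warp = np.eye(2, 3, dtype=np.float32)  # identity initial guess
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    # Estimate a Euclidean (rotation + translation) transform by ECC.
    _, warp = cv2.findTransformECC(ref_gray, mov_gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria, None, 5)
    h, w = ref_gray.shape
    return cv2.warpAffine(mov_color, warp, (w, h),
                          flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```

Each registered exposure can then be passed to the fusion step described in the Method section.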
Conclusion and future work
In this dissertation, a study of different multi-exposure fusion techniques was performed and
few of the known techniques were implemented. The aim of the dissertation was to build algorithms
which give better fusion result at low computational cost. As there is no ground truth to the fusion
result, the obtained results were compared using different quality measures. A survey of different fusion
measures was performed and few of them were implemented. Subjective analysis was also done to
make a thorough comparison.

The proposed detail preserving algorithm is driven by pixel-by-pixel fusion based on simple weight maps. It consumes very little computational time owing to the absence of any filtering or transformation. Results show that the proposed algorithm gives comparable results and has good detail preserving ability even in the extreme dark and bright intensity regions of an image. Moreover, the algorithm is automatic and does not require any human intervention. The proposed motion-blur-free fusion algorithm is good at eliminating the ghosting artifacts caused by moving objects.
Bibliography
[1] S. Deshmukh and A. Vanmali, "Low complexity detail preserving model for image fusion."

[2] M. Kalcic and J. F. Tasic, "Colour spaces: perceptual, historical and applicational background," EUROCON 2003: Computer as a Tool, The IEEE Region 8, vol. 1, pp. 304-308, Sept. 2003.

[3] A. Goshtasby and S. Nikolov, "Image fusion: Advances in the state of the art," Information Fusion, vol. 8, pp. 114-118, 2007.

[4] S. Mann and R. W. Picard, "On being undigital with digital cameras: extending dynamic range by combining differently exposed pictures," in Proc. IS&T's 48th Annual Conference, pp. 422-428, 1995.

[5] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, "Photographic tone reproduction for digital images," ACM Transactions on Graphics (Proc. SIGGRAPH), vol. 21, no. 3, pp. 267-276, July 2002.

[6] K. K. Biswas and S. Pattanaik, "A simple spatial tone mapping operator for high dynamic range images," 13th Color Imaging Conference: Color Science and Engineering Systems, pp. 1107-11, 2005.

[7] R. Szeliski, Computer Vision: Algorithms and Applications, Sept. 3, 2010. Available: szeliski.org/Book/drafts/SzeliskBook_20100903_draft.pdf [May 21, 2013].

[8] R. Shen, I. Cheng, J. Shi, and A. Basu, "Generalized random walks for fusion of multi-exposure images," IEEE Transactions on Image Processing, vol. 20, no. 12, pp. 3634-3646, Dec. 2011.

[9] T. Kartalov, A. Petrov, Z. Ivanovski, and L. Panovski, "A real time algorithm for exposure fusion of digital images," 15th IEEE Mediterranean Electrotechnical Conference, pp. 641-646, April 2010.
