Objective Performance Evaluation of Feature Based Video Fusion
https://sites.google.com/site/journalofcomputing/
www.journalofcomputing.org
Abstract— An objective fusion measure should extract all the perceptually important information present in the input images and measure how accurately the fusion process transfers this information into the output image. Most fusion evaluation algorithms designed for still image fusion explicitly aim to represent spatial information from the inputs in the fused image as accurately as possible. This does not preclude their use in video fusion, since they can be applied to each multi-sensor frame independently. This paper deals with the objective evaluation of multi-sensor video fusion. For this purpose, an established static image fusion evaluation framework, based on edge information alone rather than regional information, is used. The metric reflects the quality of visual information obtained from the fusion of the input images, and we use it to compare the performance of feature-level video fusion against pixel-level video fusion algorithms.
—————————— ——————————
1 INTRODUCTION
3.2 Implementation
Consider two input images A and B, and a resulting fused
image F. Note that the following methodology can be eas-
ily applied to more than two input images. A Sobel edge
operator is applied to yield the edge strength g(n,m) and orientation α(n,m) information for each pixel p(n,m), 1 ≤ n ≤ N and 1 ≤ m ≤ M. Thus, for an input image A:

g_A(n,m) = sqrt( s_A^x(n,m)^2 + s_A^y(n,m)^2 )
α_A(n,m) = tan^{-1}( s_A^y(n,m) / s_A^x(n,m) )

where s_A^x(n,m) and s_A^y(n,m) are the horizontal and vertical Sobel responses at pixel p(n,m). Perceptual importance weights are taken as w_A(n,m) = [g_A(n,m)]^L, where L is a constant.

Fig. 3. Fusion using (a) Average, (b) Block, (c) Maxima, (d) Wavelet
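The per-pixel edge strength and orientation described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the choice of edge-replicating border padding and the use of atan2 (to avoid division by zero where the horizontal response vanishes) are assumptions.

```python
import numpy as np

def sobel_edges(img):
    """Return per-pixel edge strength g(n,m) and orientation alpha(n,m)."""
    kx = np.array([[-1., 0., 1.],
                   [-2., 0., 2.],
                   [-1., 0., 1.]])   # horizontal Sobel kernel
    ky = kx.T                        # vertical Sobel kernel
    H, W = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")  # replicate borders
    sx = np.zeros((H, W))
    sy = np.zeros((H, W))
    # Cross-correlate with the 3x3 kernels by summing shifted windows.
    for dn in range(3):
        for dm in range(3):
            window = padded[dn:dn + H, dm:dm + W]
            sx += kx[dn, dm] * window
            sy += ky[dn, dm] * window
    g = np.sqrt(sx**2 + sy**2)       # edge strength
    alpha = np.arctan2(sy, sx)       # edge orientation
    return g, alpha
```

For a vertical step edge, the strength map peaks along the transition and is zero in the flat regions, as expected.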
3.3 Results
The Xydeas-Petrovic measure is an index of the quality with which edge information from the inputs is represented in the fused result. The experimental results obtained with this metric are shown to agree with preference scores obtained from informal subjective tests. This indicates that the Xydeas-Petrovic fusion measure is perceptually meaningful.
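A simplified sketch of computing such an edge-preservation score Q^{AB/F} from the per-pixel strength and orientation maps is given below. The sigmoid form and its constants (Γ, κ, σ for strength and orientation) follow the general shape of the published metric, but the specific values and the small-denominator guards here are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def edge_preservation(g_in, a_in, g_f, a_f,
                      Gg=0.9994, kg=-15.0, sg=0.5,
                      Ga=0.9879, ka=-22.0, sa=0.8):
    """Per-pixel preservation of edge strength and orientation, in [0, 1]."""
    eps = 1e-12
    # Relative strength: ratio of the weaker to the stronger edge response.
    G = np.minimum(g_in, g_f) / np.maximum(np.maximum(g_in, g_f), eps)
    # Orientation agreement: 1 when orientations coincide, 0 when orthogonal.
    A = np.abs(np.abs(a_in - a_f) - np.pi / 2) / (np.pi / 2)
    # Sigmoid mappings to perceptual preservation values (constants assumed).
    Qg = Gg / (1.0 + np.exp(kg * (G - sg)))
    Qa = Ga / (1.0 + np.exp(ka * (A - sa)))
    return Qg * Qa

def q_abf(gA, aA, gB, aB, gF, aF, L=1.0):
    """Overall score: preservation weighted by input edge strength w = g^L."""
    QAF = edge_preservation(gA, aA, gF, aF)
    QBF = edge_preservation(gB, aB, gF, aF)
    wA, wB = gA ** L, gB ** L
    return float(np.sum(QAF * wA + QBF * wB) / max(np.sum(wA + wB), 1e-12))
```

With this construction a fused image identical to the inputs scores close to 1, while a fused image that destroys all input edges scores close to 0, matching the intended behaviour of the metric.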
TABLE I
PERFORMANCE RESULTS FOR THE DIFFERENT FUSION TECHNIQUES

Fusion technique    Score
Wavelet             0.49374
REFERENCES
[1] V. Petrović, T. Cootes, "Objectively Adaptive Image Fusion", Information Fusion, Vol. 8(2), Elsevier, 2007, pp. 168-176.
[2] N. Cvejić, D. Bull, C. N. Canagarajah, "A New Metric for Multimodal Image Sensor Fusion", Electronics Letters, Vol. 43(2), pp. 95-96, IET, 2007.
[3] G. Qu, D. Zhang, P. Yan, "Information Measure for Performance of Image Fusion", Electronics Letters, Vol. 38(7), pp. 313-315, IEE, 2002.
[4] V. Petrović, T. Cootes, "Information Representation for Image Fusion Evaluation", Proceedings of Fusion 2006, Florence, ISIF, July 2006.
[5] C. Pohl, J. L. van Genderen, "Multisensor Image Fusion in Remote Sensing: Concepts, Methods and Applications", International Journal of Remote Sensing, Vol. 19(5), pp. 823-854, 1998.
[6] G. Corsini, M. Diani, A. Masini, M. Cavallini, "Enhancement of Sight Effectiveness by Dual Infrared System: Evaluation of Image Fusion Strategies", ICTA'05.
[7] C. Xydeas, V. Petrović, "Objective Pixel-level Image Fusion Performance Measure", Proc. of SPIE, Vol. 4051, April 2000, pp. 89-99.