
December 30, 2012

Fusion Algorithm for Underwater Image Enhancement

Contents

0.1 Introduction
0.2 Weight Map Calculation
  0.2.1 Contrast
  0.2.2 Saliency
  0.2.3 Well-exposedness
  0.2.4 Local Contrast Measure
0.3 Image Enhancement
0.4 CODE
0.5 Results
References


0.1 Introduction

Image fusion is the process of combining information from two or more images of a scene into a single enhanced image. The aim of the fusion process is to integrate complementary data in order to enhance the information present in the individual source images. The method is based on the one described by Ancuti et al., 2012, with small changes. Here the two input images of the fusion process are derived from a gray-world algorithm and a global contrast stretching algorithm. The final image is a weighted blend of the two input images.
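The two input derivations can be sketched as follows. This is an illustrative NumPy version, not the linked code: the function names and the percentile limits used for the stretch are assumptions.

```python
import numpy as np

def gray_world(img):
    """Gray-world white balance: scale each channel so its mean
    matches the mean gray level of the whole image."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, img.shape[2]).mean(axis=0)
    gray_mean = channel_means.mean()
    balanced = img * (gray_mean / channel_means)
    return np.clip(balanced, 0, 255)

def contrast_stretch(img, low_pct=1, high_pct=99):
    """Global contrast stretching: map the low/high percentiles
    of each channel onto the full [0, 255] range."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        lo, hi = np.percentile(img[..., c], [low_pct, high_pct])
        out[..., c] = (img[..., c] - lo) * 255.0 / max(hi - lo, 1e-6)
    return np.clip(out, 0, 255)
```

Each helper takes an H x W x 3 array; the two outputs serve as the fusion inputs I_1 and I_2.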

0.2 Weight Map Calculation

A weight map is a scalar image derived from an input image to aid in the fusion process. A set of weight maps is derived from each input image. The weights are calculated from some local or global feature of each pixel.


Some commonly used features are contrast, saturation and exposedness. For each pixel, the information from the different measures is combined into a single scalar weight map using simple additive or multiplicative functions.

0.2.1 Contrast

The contrast of the image is estimated by applying a Laplacian filter to each channel and taking the absolute value of the filter response. The Laplacian filter enhances the edges and texture in the image: edges and textured regions give high contrast values, while homogeneous regions give low values. This measure cannot distinguish between a ramp and a flat region of the image, but it does capture step edges. For underwater images this map predominantly carries low weights.
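A minimal sketch of this contrast measure, assuming the standard 3x3 Laplacian kernel (the document does not state which kernel the original code uses):

```python
import numpy as np

# Standard 3x3 discrete Laplacian kernel (assumed).
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

def laplacian_contrast(gray):
    """Absolute response of a 3x3 Laplacian filter on one channel.
    Edges and texture give large values; flat regions give ~0."""
    h, w = gray.shape
    padded = np.pad(gray.astype(np.float64), 1, mode='edge')
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.abs(out)
```

As the text notes, a step edge produces a strong response while a flat region (or a perfect ramp) produces none.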

Figure 1: Contrast weights for the input images: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.


0.2.2 Saliency

The saliency weight map aims to emphasize discriminating objects that lose prominence in an underwater scene. The common method to detect saliency is to measure the contrast difference between an image region and its surroundings, known as center-surround contrast. The saliency algorithm is taken from Achanta et al., 2009.
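The frequency-tuned method of Achanta et al. computes saliency as the per-pixel distance between the mean image colour and a Gaussian-blurred version of the image. The paper works in Lab space; the sketch below operates on raw channels for brevity, and the blur kernel is an assumed binomial approximation.

```python
import numpy as np

def separable_blur(img, kernel):
    """Apply a 1-D kernel along both spatial axes of an H x W x C image."""
    pad = len(kernel) // 2
    out = img.astype(np.float64)
    for axis in (0, 1):
        widths = [(pad, pad) if a == axis else (0, 0) for a in range(out.ndim)]
        padded = np.pad(out, widths, mode='edge')
        acc = np.zeros_like(out)
        for i, k in enumerate(kernel):
            sl = [slice(None)] * out.ndim
            sl[axis] = slice(i, i + out.shape[axis])
            acc += k * padded[tuple(sl)]
        out = acc
    return out

def frequency_tuned_saliency(img):
    """Saliency sketch after Achanta et al. (2009): Euclidean distance
    between the mean image colour and the blurred image at each pixel."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=np.float64) / 16.0  # binomial approx. of a Gaussian
    blurred = separable_blur(img, kernel)
    mean_vec = img.reshape(-1, img.shape[2]).mean(axis=0)
    return np.sqrt(((blurred - mean_vec) ** 2).sum(axis=2))
```

A uniform image has zero saliency everywhere; a small bright patch on a dark background scores high near the patch.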

Figure 2: Saliency weights for the input images: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.


0.2.3 Well-exposedness

Looking at just the raw intensities within a channel reveals how well a pixel is exposed. The aim is to retain pixels that are neither over- nor under-exposed. Each pixel is weighted by how close it is to 128, using a Gaussian function. The Gaussian is applied to each color channel separately, and the results are multiplied or added depending on the requirements of the application.
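A sketch of this weight, assuming 8-bit data and the sigma = 0.2 value used in Mertens et al.'s exposure fusion (the document does not state the actual parameter), with the per-channel results multiplied:

```python
import numpy as np

def exposedness(img, sigma=0.2):
    """Gaussian weight around mid-gray (128 for 8-bit data).
    Pixels near 0.5 after scaling to [0, 1] score near 1;
    over- and under-exposed pixels score near 0. Channel
    scores are multiplied, as in exposure fusion."""
    norm = img.astype(np.float64) / 255.0
    w = np.exp(-((norm - 0.5) ** 2) / (2.0 * sigma ** 2))
    return w.prod(axis=2)
```

A mid-gray pixel gets a weight near 1, while a black or saturated pixel is strongly suppressed.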

Figure 3: Exposedness weights for the input images: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.


0.2.4 Local Contrast Measure

For underwater images the global contrast measure alone is not sufficient to represent the contrast effectively. A local contrast measure better represents the contrast at a pixel location. The effect of this measure is to strengthen the local contrast, capturing ramp transitions in highlighted and shadowed parts that are not captured by the global contrast method. The local contrast measure is computed as the deviation between the pixel luminance and its local average over a pre-defined neighborhood. This can be computed easily by first obtaining a low-pass filtered version of the image; the absolute difference between the image and its low-pass version is then used as the local contrast map.
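A sketch of the local contrast map, using a box filter as the low-pass stage; the document does not specify the filter or the neighborhood size, so both choices are illustrative.

```python
import numpy as np

def local_contrast(lum, radius=2):
    """Local contrast weight: absolute difference between the
    luminance and a box-filtered (low-pass) version of it,
    i.e. the deviation from the local mean over a
    (2*radius+1) x (2*radius+1) neighborhood."""
    size = 2 * radius + 1
    h, w = lum.shape
    padded = np.pad(lum.astype(np.float64), radius, mode='edge')
    low = np.zeros((h, w))
    for dy in range(size):
        for dx in range(size):
            low += padded[dy:dy + h, dx:dx + w]
    low /= size * size
    return np.abs(lum - low)
```

Flat regions give zero; any transition, including slow ramps wider than the Laplacian's 3x3 support, produces a nonzero response within the neighborhood.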

Figure 4: Local contrast weights for the input images: (a) input image 1, (b) input image 2, (c) weight 1, (d) weight 2.

The weight maps are normalized so that the weights at each pixel location sum to 1:

\[ \bar{W}_k(x,y) = \frac{W_k(x,y)}{\sum_{k'=1}^{K} W_{k'}(x,y)} \]
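The normalization step, sketched in NumPy (the small epsilon guarding against all-zero pixels is an assumption, not taken from the linked code):

```python
import numpy as np

def normalize_weights(weights, eps=1e-12):
    """Normalize a list of K per-image weight maps so the K
    weights sum to 1 at every pixel; eps guards against
    pixels where every map is zero."""
    stack = np.stack(weights).astype(np.float64)
    total = stack.sum(axis=0) + eps
    return [w / total for w in stack]
```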


0.3 Image Enhancement

The enhanced image R(x, y) is obtained by fusing the defined inputs with the weight measures at every location:

\[ R(x,y) = \sum_{k=1}^{K} \bar{W}_k(x,y)\, I_k(x,y) \]

In the present application the two input images are derived from a modified global contrast stretched image and a white-balanced image in Lab color space. Four weight maps are obtained for each input image, based on saliency, exposedness, global contrast and local contrast. Thus we have a linear combination of 2 input images and 4 weight maps, a weighted combination of 8 images in total. Below, results are shown considering the individual measures with Laplacian blending, as well as naive blending over all the masks. The code for Laplacian blending was taken from Roy and arnon, 2010. Some artifacts are visible with naive blending; the Laplacian blending method reduces this effect. Thus, depending on computational efficiency and requirements, one may choose the method corresponding to any one of the results.
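A sketch of the naive blend, combining the inputs with their normalized weights directly at full resolution. Blending this way can produce the seam artifacts mentioned above; Laplacian-pyramid blending of the same terms is what reduces them.

```python
import numpy as np

def naive_fusion(inputs, weights, eps=1e-12):
    """Naive blend: R = sum_k Wbar_k * I_k, with the raw weight
    maps normalized to sum to 1 at every pixel. `inputs` are
    H x W x 3 images, `weights` the matching H x W maps."""
    total = sum(weights) + eps
    fused = np.zeros_like(inputs[0], dtype=np.float64)
    for img, w in zip(inputs, weights):
        fused += (w / total)[..., None] * img.astype(np.float64)
    return fused
```

With two constant inputs and weights in ratio 1:3, the result is the expected 25/75 mix of the two.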

0.4 CODE

For the code for the above routines, refer to http://code.google.com/p/m19404/source/browse/FusionEnhancement/


0.5 Results

Each example below shows (a) the original image, (b) input 1, (c) input 2, the fusion result using each individual weight measure, (d) global contrast, (e) saliency, (f) exposedness, (g) local contrast, and (h) the result using naive blending over all the masks.

Figure 5: Example 1
Figure 6: Example 2
Figure 7: Example 3
Figure 8: Example 4
Figure 9: Example 5
Figure 10: Example 5
Figure 11: Example 6
Figure 12: Example 8
Figure 13: Example 9
Figure 14: Example 10
Figure 15: Example 10
Figure 16: Example 11
Figure 17: Example 12

Bibliography

[1] Radhakrishna Achanta et al. "Frequency-tuned Salient Region Detection". In: IEEE International Conference on Computer Vision and Pattern Recognition (CVPR 2009). Miami Beach, Florida, 2009, pp. 1597-1604. doi: 10.1109/CVPR.2009.5206596. url: http://ivrg.epfl.ch/supplementary_material/RK_CVPR09/index.html
[2] Cosmin Ancuti et al. "Enhancing Underwater Images and Videos by Fusion". In: CVPR. 2012.
[3] Tom Mertens, Jan Kautz, and Frank Van Reeth. "Exposure Fusion". In: Proceedings of the Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2007), Maui, Hawaii, USA, October 29 - November 2, 2007. Ed. by Marc Alexa, Steven J. Gortler, and Tao Ju. IEEE Computer Society, 2007, pp. 382-390. isbn: 978-0-7695-3009-3. doi: 10.1109/PG.2007.17.
[4] Roy and arnon. Simple Laplacian blender using OpenCV. 2010. url: http://www.morethantechnical.com/2011/11/13/just-a-simple-laplacian-pyramid-blender-using-opencv-wcode/

