
Pranam Janney & Guang Deng
Department of Electronic Engineering, La Trobe University, Melbourne, Australia
pranamjanney@yahoo.com

ABSTRACT:

Analysis is performed on weighted median filters applied to a group of predictors. Tests were performed on a variety of test images. The report also briefly explains why taking the weighted median of a group of predictors is proposed as an alternative and competitive adaptive image prediction method.

1. INTRODUCTION:

Signal processing always provides challenges when it comes to approximating or predicting the next sample. The main reason is that successive samples can change abruptly, so predicting these abrupt changes is an arduous task. Linear filters were introduced to counter these challenges, but linear filters are not an optimal class of filters and are often unable to recover the desired signal effectively when the distribution of the corrupting noise is other than Gaussian [1]. Weighted median (WM) filters were therefore introduced as a generalisation of standard median filters, in which a non-negative integer weight is assigned to each position in the filter window and the median value is chosen using the samples and their corresponding weights.

2. ANALYSIS OF WM FILTERS:

In lossless compression the algorithm scans the input image matrix row by row, predicting each pixel as a linear combination of previously encoded pixels and encoding the prediction error. The standard uses eight different predictors, listed in Table 1 [2]. Even with these eight predictors, we have to select the best possible predictor for a particular image, so the selection criterion for the best predictor is based on different parameters. Moreover, even the best predictor selected from the group in [2] is not optimal; we have to improve on the selected predictor to optimise it. This leads to the optimisation problem in which we have to maximise:

max Prob( a | {S[n]}, I )    (eqn 2.1)

where a is the vector of predictor coefficients, {S[n]} the observed samples, and I the background information.
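The eight predictors of Table 1 can be sketched as follows. This is a minimal illustration using the selection values and formulas of the JPEG lossless standard [2], where A is the left neighbour, B the neighbour above, and C the neighbour above-left of the pixel being predicted; `jpeg_predict` is a hypothetical helper name.

```python
# Minimal sketch of the eight JPEG lossless predictors (Table 1 [2]).
# A = left neighbour, B = neighbour above, C = neighbour above-left.

def jpeg_predict(mode, A, B, C):
    predictions = [
        0,                 # mode 0: no prediction
        A,                 # mode 1
        B,                 # mode 2
        C,                 # mode 3
        A + B - C,         # mode 4
        A + (B - C) // 2,  # mode 5
        B + (A - C) // 2,  # mode 6
        (A + B) // 2,      # mode 7
    ]
    return predictions[mode]

print(jpeg_predict(4, A=100, B=110, C=105))  # -> 105
```

Selecting among these modes per image is exactly the predictor-selection problem described above.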

Let X be the original image and P the prediction output. Then the error e is given by e = X − P. The predicted output can be expressed as a combination of the individual predictions P_i:

P = Σ_i a_i P_i

The mean square error of the prediction is:

L = E[ e² ]


Table 1. JPEG Predictors for lossless coding [2]

where E[·] denotes expectation. For discrete variables, the above can be written as L = Σ_i e_i² P( e_i ), and the minimum error satisfies dL/da_i = 0. Considering one of the predictors:

Ŝ[n] = a1 S[n−1] + a2 S[n−2]    (eqn 2.2)

L = E[ ( S[n] − Ŝ[n] )² ]    (eqn 2.3)

Setting dL/da_i = 0 gives:

E[ S[n] · S[n−1] ] = a1 E[ S[n−1]² ] + a2 E[ S[n−2] · S[n−1] ]    (eqn 2.4)


For the first-order predictor coefficient, i = 1: if R(k) denotes the correlation at lag k, then

R(1) = a1 R(0) + a2 R(1)

and for i = 2:

R(2) = a1 R(1) + a2 R(0)

Representing the correlation coefficients in matrix form, R a = r, where

R = | R(0)     R(1)    ...  R(n−1) |
    | R(1)     R(0)    ...  R(n−2) |    (eqn 2.5)
    |  :        :              :   |
    | R(n−1)   R(n−2)  ...  R(0)  |

a = [a1, a2, ..., an]ᵀ and r = [R(1), R(2), ..., R(n)]ᵀ. The equation can also be generalised for any n > 2.

The matrix R is in symmetric Toeplitz form. Assuming a Markov model for the correlation, we have:

R(k) = σ² ρ^|k|    (eqn 2.6)

where σ and ρ are constants. The optimisation problem (eqn 2.1) then reduces to:

max Prob( σ², ρ | {S[n]}, I )    (eqn 2.7)
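As a concrete check of the normal equations R a = r under the Markov model, a short pure-Python sketch can be used; the function name and the normalisation σ² = 1 are assumptions for illustration.

```python
# Sketch: solving the Yule-Walker equations R a = r for a second-order
# predictor, assuming the Markov correlation model R(k) = rho**abs(k)
# (sigma**2 normalised to 1). Pure Python, no external libraries.

def yule_walker_2(rho):
    """Solve [R0 R1; R1 R0] [a1, a2]^T = [R1, R2]^T."""
    R0, R1, R2 = 1.0, rho, rho ** 2
    det = R0 * R0 - R1 * R1            # determinant of the 2x2 Toeplitz matrix
    a1 = (R1 * R0 - R2 * R1) / det     # Cramer's rule
    a2 = (R0 * R2 - R1 * R1) / det
    return a1, a2

print(yule_walker_2(0.9))  # -> approximately (0.9, 0.0)
```

With an exponentially decaying correlation the second coefficient vanishes, consistent with the process being first-order Markov.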


From equation (eqn 2.7), we can see that the probability is a function of the constants, which means that by varying the constants we can optimise the predictor. These constants are henceforth called the weights. For a Laplacian model we have [3]:

Prob( x_k | σ_k , I ) = ( 1 / 2σ_k ) e^( −| x_k | / σ_k )    (eqn 2.8)

The optimised prediction is then given by the weighted median filter output of the predictions, using weights λ_i = 1/σ_i [4]:

P̂ = WM( { P_i , λ_i } i=1:N )    (eqn 2.9)

In our case, the parameter of interest is the optimised predictor, thus we have:

P̂_i = WM( { P_i , λ_i } i=1:N )    (eqn 2.10)

Now, considering a simple case of the Laplacian distribution with equal scale parameters,

σ_i² = σ_j²    ( if i ≠ j )    (eqn 2.11)

the weighted median reduces to a simple median; therefore the optimised prediction solution reduces to:

P̂ = Median( { P_i } )    (eqn 2.12)

The median is the maximum likelihood estimate of the signal level in the presence of uncorrelated additive biexponentially distributed noise [5]. Weighted median filters belong to the broader class of stack filters. In the binary domain, WM filters are self-dual, linearly separable positive Boolean functions [5].

3.1 Definition:

3.1.1 Positive integer weights: For the discrete-time, continuous-valued input vector X = [X1, X2, ..., XN], the output Y of the WM filter of width N with corresponding integer weights W = [W1, W2, ..., WN] is given by the filtering procedure [5]:

Y = MED[ W1 ◊ X1, W2 ◊ X2, ..., WN ◊ XN ]    (eqn 3.1)

where MED is the median operation and ◊ denotes replication, i.e. Wi ◊ Xi means Xi repeated Wi times. The median value is chosen from the sequence of samples replicated according to their corresponding weights.

3.1.2 Positive non-integer weights: The weighted median of X is the value β minimising the expression [5]:

L( β ) = Σ i=1..N  Wi | Xi − β |    (eqn 3.2)

3.1.3 Filtering procedure: Sort the samples inside the filter window; add up the corresponding weights from the upper end of the sorted set until the running sum just exceeds half of the total sum of weights, i.e. (1/2) Σ i=1..N Wi; the output of the WM filter is the sample at which this threshold is crossed.
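The filtering procedure above can be sketched in a few lines; this is a minimal illustration assuming positive weights, and `weighted_median` is a hypothetical helper name.

```python
# Sketch of the WM filtering procedure: sort the samples, accumulate the
# corresponding weights from the upper end of the sorted set, and output
# the sample at which the running sum just exceeds half the total weight.

def weighted_median(samples, weights):
    total = sum(weights)
    acc = 0.0
    for x, w in sorted(zip(samples, weights), reverse=True):
        acc += w
        if acc > total / 2:   # sum just exceeds half of the total weight
            return x

# With equal weights the weighted median reduces to the ordinary median
# (cf. eqn 2.12):
print(weighted_median([3, 1, 4, 1, 5], [1, 1, 1, 1, 1]))  # -> 3
```

For integer weights this agrees with the replication definition of eqn 3.1: a weight of 3 on a sample has the same effect as including that sample three times before taking the median.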

4. EXPERIMENTS:

Median filtering was performed on various images using these weights. The analysis was performed in MATLAB (ver. 6.1). For evaluation, the entropy of the prediction errors and the signal-to-noise ratio of the predicted image were used. The signal-to-noise ratio (dB) was calculated by:

SNR = 10 log10( Σ_ij x_ij² / Σ_ij ( x_ij − P_ij )² )    (eqn 4.1)

where x_ij is the original pixel value and P_ij is the predicted pixel value. The signal-to-noise ratio is calculated for the predicted image with respect to the original image. The histogram of an image can be defined by:

n_pv = m_pv / N    (eqn 4.2)

where n is the normalised histogram of the image, m_pv is the number of image pixels with pixel value pv, and N is the total number of image pixels. The entropy was then calculated using:

E = − Σ i=1..Np  n_i log2( n_i )    (eqn 4.3)
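The evaluation measures of eqns 4.1-4.3 can be sketched as follows; this is a toy illustration on flat lists of pixel values, and the function names are assumptions.

```python
import math

# Sketch of the evaluation measures: SNR of the predicted image (eqn 4.1)
# and entropy of the prediction errors (eqns 4.2-4.3).

def snr_db(original, predicted):
    signal = sum(x * x for x in original)
    noise = sum((x - p) ** 2 for x, p in zip(original, predicted))
    return 10 * math.log10(signal / noise)   # eqn 4.1

def entropy_bits(values):
    # normalised histogram n_pv = m_pv / N   (eqn 4.2)
    counts = {}
    for v in values:
        counts[v] = counts.get(v, 0) + 1
    N = len(values)
    # E = -sum n_i log2(n_i)   (eqn 4.3)
    return -sum((m / N) * math.log2(m / N) for m in counts.values())

errors = [0, 0, 1, -1]        # a toy prediction-error signal
print(entropy_bits(errors))   # -> 1.5
```

A sharper error histogram gives a lower entropy, which is why entropy of the prediction errors is used below as the quality measure for each weighting scheme.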


Table 2. Entropy of the prediction errors


The weights in each case were assigned using different parameters. Two kinds of weights were assigned: global weights (entropy (Ent), variance (Var), random (rand), and a simple median using the number of predictors (N)) and local weights (sum of squared errors (SSE)). Weights computed over the whole image are called global weights; the SSE weight is called a local weight because the SSE is computed independently for each pixel. Experimental tests were conducted on various images and the results are shown in Table 2 and Table 3.

Table 2 shows the entropy of the prediction errors for different images. The entropy of the prediction error is lowest when the weights are assigned using the variance parameter. Random weights gave good results for some images, but they cannot be relied upon: the methodology for obtaining them is a random process, and the probability of getting the best result is very low. Localised weights also gave better results in cases where the image was large (e.g. Saturn.tif, 328 x 438). Thus, by using variance as the weights for a medium-sized image, or localised weights for a large image, the prediction error entropy decreases, indicating the best possible prediction.

Table 3. Signal-to-noise ratio (dB) of the predicted image for different images.

From Table 3, we can see that the signal-to-noise ratio is consistently high for predicted images derived using variance-assigned weights, especially for medium-sized images. Here again, localised weights gave better results in cases where the image was very large (e.g. Saturn, 328 x 438). A higher signal-to-noise ratio indicates that, even though noise is added by the prediction process, the noise-rejection capacity is high; thus using variance weights over the whole image, or localised weights for a large image, results in better noise rejection than the other schemes.

5. CONCLUSION:

From the experimental results (Table 2 and Table 3), we can see that using particular weights for prediction can result in lower prediction errors and hence a better signal-to-noise ratio with respect to the original image.

In conclusion, the weighted median of a group of predictors can be an alternative and competitive method for adaptive image prediction. From our experiments we suggest that variance can be used as the global weights to attain the best possible weighted median filter, with better prediction and noise-rejection capacity, especially for medium-sized images; where the image is large, localised weights appear to perform better than variance.

6. REFERENCES:

[1] L. Yin, Y. Neuvo, "Fast adaptation and performance characteristics of FIR-WOS hybrid filters," IEEE Trans. Signal Processing, vol. 42, no. 7, pp. 1610-1628, July 1994.
[2] R. Ansari, N. Memon, "The JPEG Lossless Standards," http://isis.poly.edu/memon/pdf/7.pdf
[3] D. S. Sivia, Data Analysis: A Bayesian Tutorial, Clarendon Press, Oxford, 1996.
[4] G. Deng, H. Ye, "Maximum likelihood based framework for second-level adaptive prediction," IEE Proc.-Vis. Image Signal Process., vol. 150, no. 3, pp. 193-197, June 2003.
[5] L. Yin et al., "Weighted median filters: a tutorial," IEEE Trans. Circuits and Systems II, vol. 43, no. 3, pp. 157-192, March 1996.

