6 Statistical Signal Processing

Yih-Fang Huang
Department of Electrical Engineering, University of Notre Dame, Notre Dame, Indiana, USA
6.1 Introduction
6.2 Bayesian Estimation
    6.2.1 Minimum Mean-Squared Error Estimation • 6.2.2 Maximum a Posteriori Estimation
6.3 Linear Estimation
6.4 Fisher Statistics
    6.4.1 Likelihood Functions • 6.4.2 Sufficient Statistics • 6.4.3 Information Inequality and Cramér-Rao Lower Bound • 6.4.4 Properties of MLE
6.5 Signal Detection
    6.5.1 Bayesian Detection • 6.5.2 Neyman-Pearson Detection • 6.5.3 Detection of a Known Signal in Gaussian Noise
6.6 Suggested Readings
References
6.1 Introduction
Statistical signal processing is an important subject in signal processing that enjoys a wide range of applications, including communications, control systems, medical signal processing, and seismology. It plays an important role in the design, analysis, and implementation of adaptive filters, such as adaptive equalizers in digital communication systems. It is also the foundation of multiuser detection, which is considered an effective means for mitigating multiple-access interference in spread-spectrum-based wireless communications.

The fundamental problems of statistical signal processing are those of signal detection and estimation, which aim to extract information from the received signals and help make decisions. The received signals (especially those in communication systems) are typically modeled as random processes due to the existence of uncertainties. The uncertainties usually arise in the process of signal propagation or due to noise in measurements. Since the signals are considered random, statistical tools are needed. As such, the techniques used to solve these problems are derived from the principles of mathematical statistics, namely, hypothesis testing and estimation.

This chapter presents a brief introduction to some basic concepts critical to statistical signal processing.
To begin with, the principles of Bayesian and Fisher statistics relevant to estimation are presented. In the discussion of Bayesian statistics, emphasis is placed on two types of estimators: the minimum mean-squared error (MMSE) estimator and the maximum a posteriori (MAP) estimator. An important class of linear estimators, namely the Wiener filter, is also presented as a linear MMSE estimator. In the discussion of Fisher statistics, emphasis is placed on the maximum likelihood estimator (MLE), the concept of sufficient statistics, and the information inequality, which can be used to measure the quality of an estimator.

This chapter also includes a brief discussion of signal detection. The signal detection problem is presented as one of binary hypothesis testing. Both Bayesian and Neyman-Pearson optimum criteria are presented and shown to be implemented with likelihood ratio tests. The principles of hypothesis testing also find a wide range of applications that involve decision making.
6.2 Bayesian Estimation
Bayesian estimation methods are generally employed to estimate a random variable (or parameter) based on another
random variable that is usually observable. Derivations of Bayesian estimation algorithms depend critically on the a posteriori distribution of the underlying signals (or signal parameters) to be estimated. Those a posteriori distributions are obtained by employing Bayes' rule, thus the name Bayesian estimation.

Consider a random variable X that is some function of another random variable S. In practice, X is what is observed, and S is the signal (or signal parameter) that is to be estimated. Denote the estimate as $\hat{S}(X)$ and the error as $e \triangleq S - \hat{S}(X)$. Generally, the cost is defined as a function of the estimation error that clearly depends on both X and S, thus $J(e) = J(S, X)$. The objective of Bayesian estimation is to minimize the Bayes' risk $\mathcal{R}$, which is defined as the ensemble average (i.e., the expected value) of the cost. In particular, the Bayes' risk is defined by:

$$\mathcal{R} = E\{J(e)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} J(s, x)\, f_{S,X}(s, x)\, dx\, ds,$$

where $f_{S,X}(s, x)$ is the joint probability density function (pdf) of the random variables S and X. In practice, the joint pdf is not directly obtainable. However, by Bayes' rule:

$$f_{S,X}(s, x) = f_{S|X}(s|x)\, f_X(x), \tag{6.1}$$

and the a posteriori pdf can be used to facilitate the derivation of Bayesian estimates. With the a posteriori pdf, the Bayes' risk can now be expressed as:

$$\mathcal{R} = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} J(s, x)\, f_{S|X}(s|x)\, ds \right] f_X(x)\, dx. \tag{6.2}$$

Because the cost function is, in general, non-negative and so are the pdfs, minimizing the Bayes' risk is equivalent to minimizing:

$$\int_{-\infty}^{\infty} J(s, x)\, f_{S|X}(s|x)\, ds. \tag{6.3}$$

Depending on how the cost function is defined, the Bayesian estimation principle leads to different kinds of estimation algorithms. Two of the most commonly used cost functions are the following:

$$J_{MS}(e) = |e|^2. \tag{6.4a}$$

$$J_{MAP}(e) = \begin{cases} 0 & \text{if } |e| < \Delta/2 \\ 1 & \text{if } |e| \ge \Delta/2 \end{cases} \qquad \text{where } \Delta \ll 1. \tag{6.4b}$$

These example cost functions result in two popular estimators, namely, the minimum mean-squared error (MMSE) estimator and the maximum a posteriori (MAP) estimator. These two estimators are discussed in more detail below.

6.2.1 Minimum Mean-Squared Error Estimation

When the cost function is defined to be the mean-squared error (MSE), as in equation 6.4a, the Bayesian estimate can be derived by substituting $J(e) = |e|^2 = (s - \hat{s}(x))^2$ into equation 6.2. Hence the following is true:

$$\mathcal{R}_{MS} = \int_{-\infty}^{\infty} \left[ \int_{-\infty}^{\infty} (s - \hat{s}(x))^2 f_{S|X}(s|x)\, ds \right] f_X(x)\, dx. \tag{6.5}$$

Denote the resulting estimate as $\hat{s}_{MS}(x)$, and use the same argument that leads to equation 6.3. A necessary condition for minimizing $\mathcal{R}_{MS}$ results as:

$$\frac{\partial}{\partial \hat{s}} \int_{-\infty}^{\infty} (s - \hat{s}(x))^2 f_{S|X}(s|x)\, ds = 0. \tag{6.6}$$

The differentiation in equation 6.6 is evaluated at $\hat{s} = \hat{s}_{MS}(x)$. Consequently:

$$\int_{-\infty}^{\infty} (s - \hat{s}_{MS}(x))\, f_{S|X}(s|x)\, ds = 0;$$

and

$$\hat{s}_{MS}(x) = \int_{-\infty}^{\infty} s\, f_{S|X}(s|x)\, ds = E\{S|x\}. \tag{6.7}$$

In essence, the estimate that minimizes the MSE is the conditional-mean estimate. If the a posteriori distribution is Gaussian, then the conditional-mean estimator is a linear function of X, regardless of the functional relation between S and X. The following theorem summarizes a very familiar result.
Theorem
Let $X = [X_1, X_2, \ldots, X_K]^T$ and S be jointly Gaussian with zero means. The MMSE estimate of S based on X is $E[S|X]$, and

$$E[S|X] = \sum_{i=1}^{K} a_i X_i,$$

where the $a_i$ are chosen such that $E\big[(S - \sum_{i=1}^{K} a_i X_i)\, X_j\big] = 0$ for any $j = 1, 2, \ldots, K$.
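To make the theorem concrete, the orthogonality conditions can be expanded into a set of linear equations for the coefficients. This short derivation is added here for illustration and assumes the covariance matrix of X is nonsingular:

$$E\left[\Big(S - \sum_{i=1}^{K} a_i X_i\Big) X_j\right] = E[S X_j] - \sum_{i=1}^{K} a_i\, E[X_i X_j] = 0, \qquad j = 1, 2, \ldots, K.$$

In matrix form, with $R_X \triangleq E[XX^T]$ and $r_{XS} \triangleq E[XS]$, these are the normal equations:

$$R_X\, a = r_{XS}, \qquad a = R_X^{-1} r_{XS}, \qquad E[S|X] = a^T X = r_{XS}^T R_X^{-1} X.$$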
6.2.2 Maximum a Posteriori Estimation
If the cost function is defined as in equation 6.4b, the Bayes' risk is as follows:

$$\mathcal{R}_{MAP} = \int_{-\infty}^{\infty} f_X(x)\left[1 - \int_{\hat{s}_{MAP}-\Delta/2}^{\hat{s}_{MAP}+\Delta/2} f_{S|X}(s|x)\, ds\right] dx.$$

To minimize $\mathcal{R}_{MAP}$, we maximize $\int_{\hat{s}_{MAP}-\Delta/2}^{\hat{s}_{MAP}+\Delta/2} f_{S|X}(s|x)\, ds$. When $\Delta$ is extremely small (as is required), this is equivalent to maximizing $f_{S|X}(s|x)$. Thus, $\hat{s}_{MAP}(x)$ is the value of $s$ that maximizes $f_{S|X}(s|x)$.

It is often more convenient (for algebraic manipulation) to consider $\ln f_{S|X}(s|x)$, especially since $\ln(\cdot)$ is a monotone nondecreasing function of its argument. A necessary condition for maximizing $\ln f_{S|X}(s|x)$ is as written here:

$$\left. \frac{\partial}{\partial s} \ln f_{S|X}(s|x) \right|_{s=\hat{s}_{MAP}(x)} = 0. \tag{6.8}$$

Equation 6.8 is often referred to as the MAP equation. Employing Bayes' rule, the MAP equation can also be written as:

$$\left. \left[ \frac{\partial}{\partial s} \ln f_{X|S}(x|s) + \frac{\partial}{\partial s} \ln f_S(s) \right] \right|_{s=\hat{s}_{MAP}(x)} = 0. \tag{6.9}$$
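When the a posteriori pdf is not Gaussian, equations 6.8 and 6.9 generally have no closed-form solution, and the MAP estimate can be found numerically. The following Python sketch is added purely for illustration: it assumes a Gaussian likelihood (as in the example below) but a Laplacian prior on S, and maximizes ln f_{X|S}(x|s) + ln f_S(s), the quantity whose stationary point equation 6.9 characterizes, by a simple grid search. All names and parameter values are hypothetical.

import numpy as np

# Illustrative model only: X_i = S + N_i with iid Gaussian noise of known
# variance, and a Laplacian (double-exponential) prior on S, for which the
# MAP estimate differs from the sample mean.
rng = np.random.default_rng(0)
sigma_n = 1.0        # assumed noise standard deviation
b = 0.5              # assumed Laplacian prior scale
s_true = 0.8
x = s_true + sigma_n * rng.standard_normal(20)   # K = 20 observations

def log_posterior(s):
    # ln f_{X|S}(x|s) + ln f_S(s), dropping terms that do not depend on s
    log_likelihood = -np.sum((x - s) ** 2) / (2.0 * sigma_n ** 2)
    log_prior = -abs(s) / b
    return log_likelihood + log_prior

# Grid search for the maximizer, i.e., the MAP estimate s_MAP(x).
grid = np.linspace(-5.0, 5.0, 20001)
s_map = grid[np.argmax([log_posterior(s) for s in grid])]
print("sample mean =", x.mean(), " MAP estimate =", s_map)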
Example (van Trees, 1968)
Let $X_i$, $i = 1, 2, \ldots, K$, be a sequence of random variables modeled as follows:

$$X_i = S + N_i, \qquad i = 1, 2, \ldots, K,$$

where S is a zero-mean Gaussian random variable with variance $\sigma_s^2$ and $\{N_i\}$ is a sequence of independent and identically distributed (iid) zero-mean Gaussian random variables with variance $\sigma_n^2$. Denote $X = [X_1\, X_2\, \cdots\, X_K]^T$. Then:

$$f_{X|S}(x|s) = \prod_{i=1}^{K} \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left[-\frac{(x_i - s)^2}{2\sigma_n^2}\right],$$

$$f_S(s) = \frac{1}{\sqrt{2\pi}\,\sigma_s} \exp\left[-\frac{s^2}{2\sigma_s^2}\right],$$

$$f_{S|X}(s|x) = \frac{f_{X|S}(x|s)\, f_S(s)}{f_X(x)}.$$

Completing the square in the exponent yields:

$$f_{S|X}(s|x) = C(x) \exp\left[-\frac{1}{2\sigma_p^2}\left(s - \frac{\sigma_s^2}{\sigma_s^2 + \sigma_n^2/K}\cdot\frac{1}{K}\sum_{i=1}^{K} x_i\right)^2\right], \tag{6.10}$$

where

$$\sigma_p^2 \triangleq \left(\frac{1}{\sigma_s^2} + \frac{K}{\sigma_n^2}\right)^{-1} = \frac{\sigma_s^2 \sigma_n^2}{K\sigma_s^2 + \sigma_n^2}.$$

From equation 6.10, it can be seen clearly that the conditional-mean estimate and the MAP estimate are equivalent. In particular:

$$\hat{s}_{MS}(x) = \hat{s}_{MAP}(x) = \frac{\sigma_s^2}{\sigma_s^2 + \sigma_n^2/K}\cdot\frac{1}{K}\sum_{i=1}^{K} x_i.$$

In this example, the MMSE estimate and the MAP estimate are equivalent because the a posteriori distribution is Gaussian. Some useful insights into Bayesian estimation can be gained through this example (van Trees, 1968).
Remarks
1. If $\sigma_s^2 \ll \sigma_n^2/K$, the a priori knowledge is more useful than the observed data, and the estimate is very close to the a priori mean (i.e., 0). In this case, the observed data have almost no effect on the value of the estimate.
2. If $\sigma_s^2 \gg \sigma_n^2/K$, the estimate is directly related to the observed data, as it is the sample mean, while the a priori knowledge is of little value (both regimes are illustrated in the numerical sketch following these remarks).
3. The equivalence of $\hat{s}_{MAP}(x)$ to $\hat{s}_{MS}(x)$ is not restricted to the case of a Gaussian a posteriori pdf. In fact, if the cost function is symmetric and nondecreasing, and if the a posteriori pdf is symmetric and unimodal and satisfies $\lim_{s\to\infty} J(s, x) f_{S|X}(s|x) = 0$, then the resulting Bayesian estimate (e.g., the MAP estimate) is equivalent to $\hat{s}_{MS}(x)$.
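The closed-form estimate of the example, and the two limiting regimes in remarks 1 and 2, can be checked numerically. The following is a minimal sketch added for illustration; the variable names and parameter values are assumptions, not from the original text.

import numpy as np

def mmse_map_estimate(x, sigma_s2, sigma_n2):
    # Closed-form conditional-mean / MAP estimate for X_i = S + N_i with a
    # zero-mean Gaussian prior on S: shrink the sample mean toward zero.
    K = len(x)
    return sigma_s2 / (sigma_s2 + sigma_n2 / K) * np.mean(x)

rng = np.random.default_rng(0)
K, sigma_n2 = 10, 1.0
s = 1.5                                          # one (assumed) realization of S
x = s + np.sqrt(sigma_n2) * rng.standard_normal(K)

# Remark 1 regime: sigma_s2 << sigma_n2/K, estimate is pulled toward the prior mean 0.
# Remark 2 regime: sigma_s2 >> sigma_n2/K, estimate is essentially the sample mean.
for sigma_s2 in (1e-3, 1e3):
    print("sigma_s2 =", sigma_s2,
          " estimate =", round(mmse_map_estimate(x, sigma_s2, sigma_n2), 4),
          " sample mean =", round(x.mean(), 4))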
6.3 Linear Estimation
Since the Bayesian estimators (the MMSE and MAP estimates) presented in the previous section are usually not linear, it may be impractical to implement them. An alternative is to restrict consideration to the class of linear estimators and then find the optimum estimator in that class. As such, the notion of optimality deviates from that of Bayesian estimation, which minimizes the Bayes' risk.

One of the most commonly used optimization criteria is the MSE (i.e., minimizing the error variance). This approach leads to the class of Wiener filters, which includes the Kalman filter as a special case. In essence, the Kalman filter is a realizable Wiener filter, as it is derived with a realizable (state-space) model. These linear estimators are much more appealing in practice due to their reduced implementation complexity and the relative simplicity of their performance analysis.

The problem can be described as follows. Given a set of zero-mean random variables $X_1, X_2, \ldots, X_K$, it is desired to estimate a random variable S (also zero-mean). The objective here is to find an estimator $\hat{S}$ that is linear in the $X_i$ and that is optimum in some sense, like MMSE. Clearly, if $\hat{S}$ is constrained to be linear in the $X_i$, it can be expressed as $\hat{S} = \sum_{i=1}^{K} a_i X_i$. This expression can be used independently of the model that governs the relation between X and S. One can see that once the coefficients $a_i$, $i = 1, 2, \ldots, K$, are determined for all i, $\hat{S}$ is unambiguously (uniquely) specified. As such, the problem of finding an optimum estimator becomes one of finding the optimum set of coefficients, and estimation of a random signal becomes estimation of a set of deterministic parameters.
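As a minimal sketch of this coefficient-finding view (added for illustration; the data model, gains, and sample size are assumptions, not from the original text), the optimum coefficients can be obtained from second-order statistics alone by solving the normal equations E[XX^T] a = E[XS], here estimated from sample averages.

import numpy as np

rng = np.random.default_rng(1)

# Assumed toy model: S is zero-mean Gaussian and each X_i is a scaled copy of S
# plus independent noise. Only second-order statistics enter the solution.
N, K = 100_000, 3
s = rng.standard_normal(N)                              # realizations of S
h = np.array([1.0, 0.5, -0.3])                          # assumed gains
X = s[:, None] * h + 0.5 * rng.standard_normal((N, K))  # realizations of X (N-by-K)

# Normal equations R_X a = r_XS, with the moments estimated by sample averages.
R_X = X.T @ X / N            # estimate of E[X X^T]
r_XS = X.T @ s / N           # estimate of E[X S]
a = np.linalg.solve(R_X, r_XS)

s_hat = X @ a                # linear MMSE estimates of S
print("coefficients a     :", a)
print("empirical MSE      :", np.mean((s - s_hat) ** 2))
# Orthogonality check (theorem of Section 6.2.1): the estimation error should be
# (approximately) uncorrelated with each X_j.
print("E[(S - S_hat) X_j] :", (X * (s - s_hat)[:, None]).mean(axis=0))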
