Chapter 2

SOLUTIONS TO CHAPTER 2: Background

2.1 The DFT of a sequence x(n) of length N may be expressed in matrix form as follows,
$$X = Wx$$
where $x = [x(0), x(1), \ldots, x(N-1)]^T$ is a vector containing the signal values and $X$ is a vector containing the DFT coefficients $X(k)$.
(a) Find the matrix $W$.
(b) What properties does the matrix $W$ have?
(c) What is the inverse of $W$?

Solution
(a) The DFT of a sequence x(n) of length N is
$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j\frac{2\pi}{N}nk} = \sum_{n=0}^{N-1} x(n)\, W_N^{nk}$$
where $W_N = e^{-j2\pi/N}$. If we define
$$\mathbf{w}_k = \left[1,\; W_N^{-k},\; W_N^{-2k},\; \ldots,\; W_N^{-(N-1)k}\right]^T$$
then $X(k)$ is the inner product $X(k) = \mathbf{w}_k^H x$. Arranging the DFT coefficients in a vector we have
$$X = \begin{bmatrix} X(0) \\ X(1) \\ \vdots \\ X(N-1) \end{bmatrix} = \begin{bmatrix} \mathbf{w}_0^H x \\ \mathbf{w}_1^H x \\ \vdots \\ \mathbf{w}_{N-1}^H x \end{bmatrix} = Wx$$
where
$$W = \begin{bmatrix} 1 & 1 & \cdots & 1 \\ 1 & W_N & \cdots & W_N^{N-1} \\ \vdots & \vdots & & \vdots \\ 1 & W_N^{N-1} & \cdots & W_N^{(N-1)^2} \end{bmatrix}$$
(b) The matrix $W$ is symmetric and nonsingular. In addition, due to the orthogonality of the complex exponentials,
$$\sum_{n=0}^{N-1} e^{j\frac{2\pi}{N}n(k-l)} = \begin{cases} N & k = l \\ 0 & k \neq l \end{cases}$$
it follows that the columns of $W$ are orthogonal, $\mathbf{w}_k^H \mathbf{w}_l = N\,\delta(k-l)$, i.e., $W W^H = NI$.

(c) Due to the orthogonality of the columns of $W$, the inverse is
$$W^{-1} = \frac{1}{N} W^H$$
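These three results are easy to verify numerically. The sketch below (numpy assumed; $N = 8$ is an arbitrary choice) builds $W$ explicitly and checks symmetry, orthogonality of the columns, and the inverse:

```python
import numpy as np

N = 8
n = np.arange(N)
# W[k, n] = exp(-j*2*pi*n*k/N), so that X = W @ x is the DFT of x
W = np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.random.default_rng(0).standard_normal(N)
X = W @ x

assert np.allclose(W, W.T)                         # (b) W is symmetric
assert np.allclose(W @ W.conj().T, N * np.eye(N))  # columns orthogonal: W W^H = N I
x_rec = (W.conj().T / N) @ X                       # (c) inverse is (1/N) W^H
assert np.allclose(x_rec, x)
assert np.allclose(X, np.fft.fft(x))               # same convention as numpy's FFT
```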
2.2 Prove or disprove each of the following statements:
(a) The product of two upper triangular matrices is upper triangular.
(b) The product of two Toeplitz matrices is Toeplitz.
(c) The product of two centrosymmetric matrices is centrosymmetric.

Solution
(a) If $A$ is upper triangular, then $a_{ij} = 0$ for all $i > j$. If $B$ is also upper triangular, then the $(i,j)$th element of the product $C = AB$ is
$$c_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$$
For $i > j$ we may split the sum as
$$c_{ij} = \sum_{k=1}^{i-1} a_{ik} b_{kj} + \sum_{k=i}^{n} a_{ik} b_{kj}$$
The first summation is equal to zero since $a_{ik} = 0$ for $k = 1, \ldots, i-1$, and the second summation is equal to zero since $b_{kj} = 0$ for $k = i, \ldots, n$ (note that $k \geq i > j$). Therefore, $c_{ij} = 0$ for $i > j$ and $C$ is upper triangular.

(b) The product of two Toeplitz matrices is not necessarily Toeplitz. This may be easily demonstrated by example. Let $A$ and $B$ be the $3 \times 3$ Toeplitz matrices
$$A = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \qquad B = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$$
The product is
$$AB = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$
which is not Toeplitz, since the main diagonal is not constant.

(c) If $A$ and $B$ are centrosymmetric matrices, then
$$A = JAJ \qquad \text{and} \qquad B = JBJ$$
where $J$ is the exchange matrix. Since $JJ = I$, then
$$AB = (JAJ)(JBJ) = JA(JJ)BJ = J(AB)J$$
which means that $AB$ is centrosymmetric.
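A quick numerical spot-check of all three claims (numpy assumed; the specific matrices below are arbitrary choices with the required structure):

```python
import numpy as np

# (a) the product of two upper triangular matrices is upper triangular
A = np.triu(np.arange(1., 10.).reshape(3, 3))
B = np.triu(np.arange(2., 11.).reshape(3, 3))
assert np.allclose(np.tril(A @ B, -1), 0)

# (b) counterexample: two Toeplitz (shift) matrices whose product is not Toeplitz
T1 = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
T2 = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
C = T1 @ T2
assert C[0, 0] != C[2, 2]      # main diagonal [1, 1, 0] is not constant

# (c) centrosymmetry (A = JAJ) is preserved under products
J = np.fliplr(np.eye(3))
Ac = np.array([[1., 2., 3.], [4., 5., 4.], [3., 2., 1.]])
Bc = np.array([[2., 0., 1.], [3., 7., 3.], [1., 0., 2.]])
assert np.allclose(J @ Ac @ J, Ac) and np.allclose(J @ Bc @ J, Bc)
assert np.allclose(J @ (Ac @ Bc) @ J, Ac @ Bc)
```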
2.3 Find the minimum norm solution to the following set of underdetermined linear equations,
$$Ax = b$$
with $A$ and $b$ as given.

Solution

Since the rows of $A$ are linearly independent, the matrix $AA^H$ is invertible and the minimum norm solution is unique and given by
$$x_0 = A^H \left(AA^H\right)^{-1} b$$
Forming the $2 \times 2$ matrix $AA^H$, inverting it, and carrying out the indicated products yields the minimum norm solution.
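The computation can be sketched numerically. The system below is a hypothetical stand-in (the book's exact coefficients are not reproduced here); it shows that $x_0 = A^H(AA^H)^{-1}b$ solves the equations, matches the pseudoinverse solution, and has the smallest norm among all solutions:

```python
import numpy as np

# hypothetical underdetermined system: 2 equations, 3 unknowns
A = np.array([[1., 0., 2.], [1., 1., 0.]])
b = np.array([1., 1.])

# minimum norm solution: x0 = A^H (A A^H)^{-1} b
x0 = A.T @ np.linalg.solve(A @ A.T, b)
assert np.allclose(A @ x0, b)                   # x0 solves the equations
assert np.allclose(x0, np.linalg.pinv(A) @ b)   # same as the pseudoinverse solution

# any other solution x0 + z, with z in the nullspace of A, has a larger norm
z = np.array([2., -2., -1.])                    # A @ z = 0
assert np.allclose(A @ z, 0)
assert np.linalg.norm(x0) < np.linalg.norm(x0 + z)
```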
2.4 Consider the set of inconsistent linear equations $Ax = b$ given by
$$\begin{bmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = b$$
(a) Find the least squares solution to these equations.
(b) Find the projection matrix $P_A$.
(c) Find the best approximation $\hat{b} = P_A b$ to $b$.
(d) Consider the matrix $P_A^{\perp} = I - P_A$. Find the vector $b^{\perp} = P_A^{\perp} b$ and show that it is orthogonal to $\hat{b}$. What does the matrix $P_A^{\perp}$ represent?

Solution

(a) Since the columns of $A$ are linearly independent, the least squares solution is unique and given by $x_0 = (A^H A)^{-1} A^H b$. With
$$A^H A = \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$
it follows that
$$(A^H A)^{-1} = \frac{1}{3}\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix}$$
and, therefore,
$$x_0 = \frac{1}{3}\begin{bmatrix} 2 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} b = \frac{1}{3}\begin{bmatrix} 1 & -1 & 2 \\ 1 & 2 & -1 \end{bmatrix} b$$
(b) The projection matrix is
$$P_A = A(A^H A)^{-1} A^H = \frac{1}{3}\begin{bmatrix} 1 & 1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 & -1 & 2 \\ 1 & 2 & -1 \end{bmatrix} = \frac{1}{3}\begin{bmatrix} 2 & 1 & 1 \\ 1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix}$$
(c) The best approximation to $b$ is $\hat{b} = P_A b$, found by applying the projection matrix above to the given vector $b$.

(d) The matrix $P_A^{\perp} = I - P_A$ is
$$P_A^{\perp} = \frac{1}{3}\begin{bmatrix} 1 & -1 & -1 \\ -1 & 1 & 1 \\ -1 & 1 & 1 \end{bmatrix}$$
and $b^{\perp} = P_A^{\perp} b$. The inner product between $\hat{b}$ and $b^{\perp}$ is
$$\hat{b}^H b^{\perp} = b^H P_A^H (I - P_A)\, b = b^H \left(P_A - P_A^2\right) b = 0$$
since $P_A$ is Hermitian and idempotent. Therefore, $\hat{b}$ is orthogonal to $b^{\perp}$. The matrix $P_A^{\perp}$ is a projection matrix that projects a vector onto the space that is orthogonal to the space spanned by the columns of $A$.
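The decomposition $b = \hat{b} + b^{\perp}$ can be checked numerically. The right-hand side $b$ below is a hypothetical choice for illustration (numpy assumed):

```python
import numpy as np

A = np.array([[1., 1.], [0., 1.], [1., 0.]])
b = np.array([1., 2., 3.])    # hypothetical b, for illustration only

x0 = np.linalg.solve(A.T @ A, A.T @ b)           # least squares solution
assert np.allclose(x0, np.linalg.lstsq(A, b, rcond=None)[0])

PA = A @ np.linalg.inv(A.T @ A) @ A.T            # projection onto range(A)
assert np.allclose(PA, np.array([[2, 1, 1], [1, 2, -1], [1, -1, 2]]) / 3)

b_hat = PA @ b                                   # best approximation to b
b_perp = (np.eye(3) - PA) @ b                    # component orthogonal to range(A)
assert np.isclose(b_hat @ b_perp, 0)             # b_hat is orthogonal to b_perp
assert np.allclose(b_hat + b_perp, b)
```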
2.5 Consider the problem of trying to model a sequence $x(n)$ as the sum of a constant plus a complex exponential of frequency $\omega_0$,
$$\hat{x}(n) = c + a\, e^{jn\omega_0} \qquad n = 0, 1, \ldots, N-1$$
where $c$ and $a$ are unknown. We may express the problem of finding the values for $c$ and $a$ as one of solving a set of overdetermined linear equations,
$$\begin{bmatrix} 1 & 1 \\ 1 & e^{j\omega_0} \\ \vdots & \vdots \\ 1 & e^{j(N-1)\omega_0} \end{bmatrix} \begin{bmatrix} c \\ a \end{bmatrix} = \begin{bmatrix} x(0) \\ x(1) \\ \vdots \\ x(N-1) \end{bmatrix}$$
(a) Find the least squares solution for $c$ and $a$.
(b) If $N$ is even and $\omega_0 = 2\pi k/N$ for some integer $k$, find the least squares solution for $c$ and $a$.

Solution

(a) Assuming that $\omega_0 \neq 0, 2\pi, \ldots$, the columns of the matrix $A$ are linearly independent, and the least squares solution for $c$ and $a$ is given by
$$\begin{bmatrix} c \\ a \end{bmatrix} = (A^H A)^{-1} A^H x$$
Since
$$A^H A = \begin{bmatrix} N & \displaystyle\sum_{n=0}^{N-1} e^{jn\omega_0} \\ \displaystyle\sum_{n=0}^{N-1} e^{-jn\omega_0} & N \end{bmatrix} = \begin{bmatrix} N & \dfrac{1 - e^{jN\omega_0}}{1 - e^{j\omega_0}} \\ \dfrac{1 - e^{-jN\omega_0}}{1 - e^{-j\omega_0}} & N \end{bmatrix}$$
the determinant of $(A^H A)$ is
$$\det(A^H A) = N^2 - \left|\frac{1 - e^{jN\omega_0}}{1 - e^{j\omega_0}}\right|^2 = N^2 - \frac{1 - \cos N\omega_0}{1 - \cos \omega_0}$$
Therefore, the inverse of $(A^H A)$ is
$$(A^H A)^{-1} = \frac{1}{N^2 - \dfrac{1 - \cos N\omega_0}{1 - \cos \omega_0}} \begin{bmatrix} N & -\dfrac{1 - e^{jN\omega_0}}{1 - e^{j\omega_0}} \\ -\dfrac{1 - e^{-jN\omega_0}}{1 - e^{-j\omega_0}} & N \end{bmatrix}$$
and, with
$$A^H x = \begin{bmatrix} \displaystyle\sum_{n=0}^{N-1} x(n) \\ \displaystyle\sum_{n=0}^{N-1} x(n)\, e^{-jn\omega_0} \end{bmatrix}$$
the least squares solution becomes
$$\begin{bmatrix} c \\ a \end{bmatrix} = \frac{1}{N^2 - \dfrac{1 - \cos N\omega_0}{1 - \cos \omega_0}} \begin{bmatrix} N & -\dfrac{1 - e^{jN\omega_0}}{1 - e^{j\omega_0}} \\ -\dfrac{1 - e^{-jN\omega_0}}{1 - e^{-j\omega_0}} & N \end{bmatrix} \begin{bmatrix} \displaystyle\sum_{n=0}^{N-1} x(n) \\ \displaystyle\sum_{n=0}^{N-1} x(n)\, e^{-jn\omega_0} \end{bmatrix}$$
(b) If $\omega_0 = 2\pi k/N$ and $k \neq 0$, then
$$\frac{1 - e^{jN\omega_0}}{1 - e^{j\omega_0}} = 0 \qquad \text{and} \qquad \frac{1 - \cos N\omega_0}{1 - \cos \omega_0} = 0$$
so $A^H A = NI$. Therefore,
$$c = \frac{1}{N} \sum_{n=0}^{N-1} x(n) \qquad\qquad a = \frac{1}{N} \sum_{n=0}^{N-1} x(n)\, e^{-jn\omega_0}$$
2.6 It is known that the sum of the squares of $n$ from $n = 1$ to $N - 1$ has a closed form expression of the following form:
$$\sum_{n=0}^{N-1} n^2 = a_0 + a_1 N + a_2 N^2 + a_3 N^3$$
Given that a third-order polynomial is uniquely determined in terms of the values of the polynomial at four distinct points, derive a closed form expression for this sum by setting up a set of linear equations and solving these equations for $a_0, a_1, a_2, a_3$. Compare your solution to that given in Table 2.3.

Solution

Assuming that
$$\sum_{n=0}^{N-1} n^2 = a_0 + a_1 N + a_2 N^2 + a_3 N^3$$
we may evaluate this sum for $N = 1, 2, 3, 4$ and write down the following set of four equations in four unknowns,
$$\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 4 & 8 \\ 1 & 3 & 9 & 27 \\ 1 & 4 & 16 & 64 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 5 \\ 14 \end{bmatrix}$$
Solving these equations for $a_0, a_1, a_2, a_3$, we find
$$a_0 = 0 \qquad a_1 = \tfrac{1}{6} \qquad a_2 = -\tfrac{1}{2} \qquad a_3 = \tfrac{1}{3}$$
which gives the following closed-form expression for the sum,
$$\sum_{n=0}^{N-1} n^2 = \frac{N}{6} - \frac{N^2}{2} + \frac{N^3}{3} = \frac{N(N-1)(2N-1)}{6}$$
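The derivation above can be reproduced numerically: build the $4 \times 4$ Vandermonde system from the sums for $N = 1, \ldots, 4$, solve for the coefficients, and confirm the closed form (numpy assumed):

```python
import numpy as np

Ns = np.array([1, 2, 3, 4])
S = np.array([sum(n**2 for n in range(N)) for N in Ns])   # [0, 1, 5, 14]
V = np.vander(Ns, 4, increasing=True).astype(float)       # columns: 1, N, N^2, N^3
a = np.linalg.solve(V, S)
assert np.allclose(a, [0, 1/6, -1/2, 1/3])

# the fitted cubic reproduces the sum for any N, matching N(N-1)(2N-1)/6
for N in [5, 10, 100]:
    assert np.isclose(a @ [1, N, N**2, N**3], N * (N - 1) * (2 * N - 1) / 6)
```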
2.7 Show that a projection matrix $P_A$ has the following two properties:
1. It is idempotent, $P_A^2 = P_A$.
2. It is Hermitian.

Solution

Given a matrix $A$ with linearly independent columns, the projection matrix $P_A$ is
$$P_A = A(A^H A)^{-1} A^H$$
Therefore,
$$P_A^2 = A(A^H A)^{-1} A^H A(A^H A)^{-1} A^H = A(A^H A)^{-1} A^H = P_A$$
and it follows that $P_A$ is idempotent. Also, since $A^H A$ is Hermitian, then so is its inverse, and
$$P_A^H = \left[A(A^H A)^{-1} A^H\right]^H = A\left[(A^H A)^{-1}\right]^H A^H = A(A^H A)^{-1} A^H = P_A$$
Thus, $P_A$ is Hermitian.
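Both properties are easy to confirm on a randomly generated complex matrix with independent columns (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
P = A @ np.linalg.inv(A.conj().T @ A) @ A.conj().T

assert np.allclose(P @ P, P)        # idempotent
assert np.allclose(P, P.conj().T)   # Hermitian
# and P leaves the columns of A unchanged, as a projection onto range(A) should
assert np.allclose(P @ A, A)
```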
2.8 Let $A > 0$ and $B > 0$ be positive definite matrices. Prove or disprove the following statements.
(a) $A^2 > 0$.
(b) $A^{-1} > 0$.
(c) $A + B > 0$.

Solution

(a) Let $v_k$ be an eigenvector and $\lambda_k$ the corresponding eigenvalue of $A$. Since
$$A^2 v_k = A(A v_k) = \lambda_k A v_k = \lambda_k^2 v_k$$
then $v_k$ is an eigenvector of $A^2$ and $\lambda_k^2$ is the corresponding eigenvalue. Since $A > 0$ implies $\lambda_k > 0$, it follows that $\lambda_k^2 > 0$ and, therefore, $A^2 > 0$.

(b) If $A > 0$, then the eigenvalues of $A$ are positive, $\lambda_k > 0$. In addition, $A^{-1}$ exists and the eigenvalues of $A^{-1}$ are $\lambda_k^{-1}$. Since $\lambda_k > 0$, it follows that $\lambda_k^{-1} > 0$ and, therefore, $A^{-1} > 0$.

(c) Let $v \neq 0$ be an arbitrary vector. Then
$$v^H (A + B)\, v = v^H A v + v^H B v$$
If $A > 0$ and $B > 0$, then both terms on the right are positive. Therefore,
$$v^H (A + B)\, v > 0$$
and it follows that $(A + B) > 0$.
2.9 (a) Prove that each eigenvector of a symmetric Toeplitz matrix is either symmetric or antisymmetric, i.e., $v_k = \pm J v_k$.
(b) What property can you state about the eigenvectors of a Hermitian Toeplitz matrix?

Solution

(a) If $A$ is a symmetric Toeplitz matrix, then
$$A = JAJ$$
where $J$ is the exchange matrix. If $v_k$ is an eigenvector of $A$ with eigenvalue $\lambda_k$, then
$$A v_k = \lambda_k v_k$$
and, using the identity above, we have
$$JAJ v_k = \lambda_k v_k$$
Since $J$ is unitary, $J^T J = I$, if we multiply both sides of this equation on the left by $J$, it follows that
$$A (J v_k) = \lambda_k (J v_k)$$
Therefore, if $v_k$ is an eigenvector with eigenvalue $\lambda_k$, then $J v_k$ is also an eigenvector with the same eigenvalue. Consequently, if the eigenvalue $\lambda_k$ is distinct, then $v_k$ and $J v_k$ must be equal to within a constant,
$$J v_k = c\, v_k$$
However, since the exchange matrix simply reverses the order of the elements of the vector $v_k$ (so that $J^2 = I$), the only possible values for this constant are $c = \pm 1$. Therefore,
$$J v_k = \pm v_k$$
and the eigenvector $v_k$ is either symmetric or antisymmetric.

Now let us consider the case in which the eigenvalue $\lambda_k$ is not distinct. We will assume that the multiplicity is two; the discussion is easily generalized to higher multiplicities. In this case, $v_k$ and $J v_k$ span a two-dimensional space, and any two linearly independent vectors in this space may be selected as the eigenvectors. Therefore, we may choose
$$v_{k_1} = v_k + J v_k \qquad \text{and} \qquad v_{k_2} = v_k - J v_k$$
as the two eigenvectors. Note that $v_{k_1}$ is symmetric and $v_{k_2}$ is antisymmetric. This completes the proof.

(b) In the case of Hermitian Toeplitz matrices, the eigenvectors are either Hermitian or anti-Hermitian, i.e.,
$$J v_k^* = \pm v_k$$
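Part (a) can be illustrated numerically on a symmetric Toeplitz matrix with (generically) distinct eigenvalues; the entries below are an arbitrary choice (numpy assumed):

```python
import numpy as np

c = np.array([3.0, 1.0, 0.5, 0.2])
T = np.array([[c[abs(i - j)] for j in range(4)] for i in range(4)])  # symmetric Toeplitz
J = np.fliplr(np.eye(4))
assert np.allclose(J @ T @ J, T)     # the identity A = JAJ

w, V = np.linalg.eigh(T)
for k in range(4):
    v = V[:, k]
    # each eigenvector is symmetric (Jv = v) or antisymmetric (Jv = -v)
    assert np.allclose(J @ v, v) or np.allclose(J @ v, -v)
```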
2.10 (a) Find the eigenvalues and eigenvectors of the real $2 \times 2$ symmetric Toeplitz matrix
$$A = \begin{bmatrix} a & b \\ b & a \end{bmatrix}$$
(b) Find the eigenvalues and eigenvectors of the $2 \times 2$ Hermitian matrix
$$A = \begin{bmatrix} a & b \\ b^* & a \end{bmatrix}$$

Solution

(a) The eigenvalues are the roots of the characteristic equation
$$\det(A - \lambda I) = (a - \lambda)^2 - b^2 = 0$$
Expanding the quadratic in $\lambda$ we have
$$\lambda^2 - 2a\lambda + a^2 - b^2 = \left[\lambda - (a+b)\right]\left[\lambda - (a-b)\right] = 0$$
Therefore, the eigenvalues are $\lambda_1 = a + b$ and $\lambda_2 = a - b$. For the first eigenvector, $v_1$, we have
$$\begin{bmatrix} a & b \\ b & a \end{bmatrix} \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} = (a + b) \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix}$$
which gives $v_{11} = v_{12}$, or
$$v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
Similarly, the eigenvector $v_2$ is found to be
$$v_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
(b) With
$$A = \begin{bmatrix} a & b \\ b^* & a \end{bmatrix}$$
the eigenvalues are the roots of the characteristic equation
$$\det(A - \lambda I) = (a - \lambda)^2 - |b|^2 = \lambda^2 - 2a\lambda + a^2 - |b|^2 = \left[\lambda - (a + |b|)\right]\left[\lambda - (a - |b|)\right] = 0$$
Thus, $\lambda_1 = a + |b|$ and $\lambda_2 = a - |b|$. The eigenvector that has eigenvalue $\lambda_1$ is the solution to
$$\begin{bmatrix} a & b \\ b^* & a \end{bmatrix} \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix} = (a + |b|) \begin{bmatrix} v_{11} \\ v_{12} \end{bmatrix}$$
The first equation gives $b\, v_{12} = |b|\, v_{11}$, or $v_{12} = \dfrac{b^*}{|b|}\, v_{11}$, so
$$v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ b^*/|b| \end{bmatrix}$$
Similarly, for $v_2$ we have
$$v_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -b^*/|b| \end{bmatrix}$$
2.11 Establish Property 5 on p. 45.

Solution

Let $B$ be an $n \times n$ matrix with eigenvalues $\lambda_k$ and eigenvectors $v_k$. With
$$A = B + \alpha I$$
note that
$$A v_k = B v_k + \alpha v_k = \lambda_k v_k + \alpha v_k = (\lambda_k + \alpha)\, v_k$$
Therefore, $A$ and $B$ have the same eigenvectors, and the eigenvalues of $A$ are $\lambda_k + \alpha$.
2.12 A necessary and sufficient condition for a Hermitian matrix $A$ to be positive definite is that there exists a nonsingular matrix $W$ such that
$$A = W^H W$$
(a) Prove this result.
(b) Find a factorization of the form $A = W^H W$ for the matrix
$$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$

Solution

(a) If $A > 0$, then $A$ may be factored as
$$A = V \Lambda V^H$$
where $V$ is a unitary matrix of eigenvectors and $\Lambda = \text{diag}\{\lambda_1, \ldots, \lambda_n\}$ with $\lambda_k > 0$. Therefore, $\Lambda$ may be factored as
$$\Lambda = \Lambda^{1/2} \Lambda^{1/2} \qquad \text{where} \qquad \Lambda^{1/2} = \text{diag}\{\lambda_1^{1/2}, \ldots, \lambda_n^{1/2}\} > 0$$
Thus, we may write
$$A = V \Lambda^{1/2} \Lambda^{1/2} V^H = W^H W$$
where $W = \Lambda^{1/2} V^H$ is nonsingular, being the product of two nonsingular matrices.

Conversely, suppose that $A = W^H W$ where $W$ is a nonsingular matrix. Then, for any vector $v \neq 0$,
$$v^H A v = v^H W^H W v = \|Wv\|^2 > 0$$
since $Wv \neq 0$ when $W$ is nonsingular. Therefore, $A > 0$.

(b) The eigenvalues of $A$ are $\lambda_1 = 3$ and $\lambda_2 = 1$, and the normalized eigenvectors are
$$v_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \qquad\qquad v_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -1 \end{bmatrix}$$
Therefore,
$$W = \Lambda^{1/2} V^H = \begin{bmatrix} \sqrt{3} & 0 \\ 0 & 1 \end{bmatrix} \cdot \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} = \frac{1}{\sqrt{2}} \begin{bmatrix} \sqrt{3} & \sqrt{3} \\ 1 & -1 \end{bmatrix}$$
and it is easily verified that $W^H W = A$.
2.13 Consider the $2 \times 2$ matrix
$$A = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$
(a) Find the eigenvalues and eigenvectors of $A$.
(b) Are the eigenvectors unique? Are they linearly independent? Are they orthogonal?
(c) Diagonalize $A$, i.e., find $V$ and $D$ such that
$$V^H A V = D$$
where $D$ is a diagonal matrix.

Solution

(a) The eigenvalues are the roots of the characteristic equation
$$\det(A - \lambda I) = \lambda^2 + 1 = 0$$
which are $\lambda = \pm j$. The eigenvector corresponding to the eigenvalue $\lambda_1 = j$ satisfies the equation
$$\begin{bmatrix} -j & 1 \\ -1 & -j \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = 0$$
which implies that $v_2 = j v_1$. Therefore, the normalized eigenvector is
$$\mathbf{v}_1 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ j \end{bmatrix}$$
Similarly, for the eigenvector corresponding to the eigenvalue $\lambda_2 = -j$ we have
$$\mathbf{v}_2 = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ -j \end{bmatrix}$$
(b) The eigenvectors are unique (to within a scale factor), linearly independent, and orthogonal:
$$\langle \mathbf{v}_1, \mathbf{v}_2 \rangle = \mathbf{v}_1^H \mathbf{v}_2 = \tfrac{1}{2}\left[1 + (-j)(-j)\right] = \tfrac{1}{2}(1 - 1) = 0$$
(c) With $V$ the matrix of normalized eigenvectors,
$$V = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ j & -j \end{bmatrix}$$
we have
$$V^H A V = D \qquad \text{where} \qquad D = \begin{bmatrix} j & 0 \\ 0 & -j \end{bmatrix}$$
2.14 Find the eigenvalues and eigenvectors of the matrix
$$A = \begin{bmatrix} 1 & 1 \\ -2 & 4 \end{bmatrix}$$

Solution

The eigenvalues of a matrix $A$ are the roots of the characteristic equation
$$\det(A - \lambda I) = 0$$
For the given matrix, we have
$$\det(A - \lambda I) = \det \begin{bmatrix} 1 - \lambda & 1 \\ -2 & 4 - \lambda \end{bmatrix} = (1 - \lambda)(4 - \lambda) + 2 = \lambda^2 - 5\lambda + 6 = (\lambda - 3)(\lambda - 2)$$
Therefore, the eigenvalues are $\lambda_1 = 3$ and $\lambda_2 = 2$. The eigenvectors are found by solving the equations
$$A v_i = \lambda_i v_i \qquad i = 1, 2$$
For $\lambda_1 = 3$, the first equation is
$$(1 - 3)\, v_{11} + v_{12} = 0 \qquad \text{or} \qquad v_{12} = 2 v_{11}$$
Therefore, the eigenvector is
$$v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$$
Repeating for $\lambda_2 = 2$ we find
$$v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
2.15 Consider the following $3 \times 3$ symmetric matrix
$$A = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 2 & 1 \\ 0 & 1 & 1 \end{bmatrix}$$
(a) Find the eigenvalues and eigenvectors of $A$.
(b) Find the determinant of $A$.
(c) Find the spectral decomposition of $A$.
(d) What are the eigenvalues of $A + I$ and how are the eigenvectors related to those of $A$?

Solution

(a) The eigenvalues are found from the roots of the characteristic equation
$$\det(A - \lambda I) = 0$$
The roots are $\lambda = 3, 1, 0$. Given the eigenvalues, the eigenvectors are found by solving the equations $A v_i = \lambda_i v_i$ for $i = 1, 2, 3$. The eigenvectors (unnormalized) are
$$v_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} \qquad v_2 = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix} \qquad v_3 = \begin{bmatrix} 1 \\ -1 \\ 1 \end{bmatrix}$$
(b) The determinant is equal to the product of the eigenvalues,
$$\det A = \prod_{i=1}^{3} \lambda_i = 0$$
(c) The spectral decomposition for $A$ is
$$A = \sum_{i=1}^{3} \lambda_i\, \mathbf{v}_i \mathbf{v}_i^T$$
where $\mathbf{v}_i$ are the normalized eigenvectors of $A$. Since $\lambda_3 = 0$, this decomposition becomes
$$A = 3 \cdot \frac{1}{6} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix} + 1 \cdot \frac{1}{2} \begin{bmatrix} 1 & 0 & -1 \\ 0 & 0 & 0 \\ -1 & 0 & 1 \end{bmatrix}$$
(d) If the eigenvalues of $A$ are $\lambda_i$, then the eigenvalues of $A + I$ are $\lambda_i + 1$, and the eigenvectors are the same. Therefore, the eigenvalues of $A + I$ are $\lambda = 4, 2, 1$.
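All four parts can be checked numerically for the matrix as reconstructed above (numpy assumed):

```python
import numpy as np

A = np.array([[1., 1., 0.], [1., 2., 1.], [0., 1., 1.]])
w, V = np.linalg.eigh(A)                 # eigenvalues in ascending order
assert np.allclose(w, [0., 1., 3.])      # (a) eigenvalues 0, 1, 3
assert np.isclose(np.linalg.det(A), 0.)  # (b) determinant is the product = 0

# (c) spectral decomposition: A = sum_i lambda_i v_i v_i^T
A_rec = sum(w[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
assert np.allclose(A_rec, A)

# (d) eigenvalues of A + I are lambda_i + 1, with the same eigenvectors
wI = np.linalg.eigvalsh(A + np.eye(3))
assert np.allclose(wI, [1., 2., 4.])
```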
2.16 Suppose that an $n \times n$ matrix $A$ has eigenvalues $\lambda_1, \ldots, \lambda_n$ and eigenvectors $v_1, \ldots, v_n$.
(a) What are the eigenvalues and eigenvectors of $A^2$?
(b) What are the eigenvalues and eigenvectors of $A^{-1}$?

Solution

(a) With $v_i$ an eigenvector of $A$ with eigenvalue $\lambda_i$, note that
$$A^2 v_i = A(A v_i) = \lambda_i A v_i = \lambda_i^2 v_i$$
Therefore, the eigenvectors of $A^2$ are the same as those of $A$, and the eigenvalues are $\lambda_i^2$.

(b) Since
$$A v_i = \lambda_i v_i$$
then, assuming that $A^{-1}$ exists,
$$v_i = \lambda_i A^{-1} v_i \qquad \text{or} \qquad A^{-1} v_i = \frac{1}{\lambda_i} v_i$$
Therefore, $A^{-1}$ has the same eigenvectors as $A$, and the eigenvalues are $1/\lambda_i$.
2.17 Find a matrix whose eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 4$ with eigenvectors $v_1 = [2, 1]^T$ and $v_2 = [3, 1]^T$.

Solution

From the given information, we have
$$A \begin{bmatrix} 2 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix} \qquad \text{and} \qquad A \begin{bmatrix} 3 \\ 1 \end{bmatrix} = 4 \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 12 \\ 4 \end{bmatrix}$$
Writing $A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, this gives the four equations
$$2a_{11} + a_{12} = 2 \qquad 3a_{11} + a_{12} = 12 \qquad 2a_{21} + a_{22} = 1 \qquad 3a_{21} + a_{22} = 4$$
Subtracting the first equation from the second gives $a_{11} = 10$ and therefore $a_{12} = -18$; subtracting the third from the fourth gives $a_{21} = 3$ and therefore $a_{22} = -5$. Thus,
$$A = \begin{bmatrix} 10 & -18 \\ 3 & -5 \end{bmatrix}$$
2.18 Gerschgorin's circle theorem states that every eigenvalue of a matrix $A$ lies in at least one of the circles $C_1, \ldots, C_N$ in the complex plane, where $C_i$ has its center at the diagonal entry $a_{ii}$ and its radius is $r_i = \sum_{j \neq i} |a_{ij}|$.

1. Prove this theorem by using the eigenvalue equation $Ax = \lambda x$ to write
$$(\lambda - a_{ii})\, x_i = \sum_{j \neq i} a_{ij} x_j$$
and then use the triangle inequality.

2. Use this theorem to establish the bound on $\lambda_{\max}$ given in Property 7.

3. The matrix
$$A = \begin{bmatrix} 4 & 1 & 2 \\ 2 & 3 & 0 \\ 3 & 2 & 6 \end{bmatrix}$$
is said to be diagonally dominant since $|a_{ii}| > r_i$. Use Gerschgorin's circle theorem to show that this matrix is nonsingular.

Solution

1. Let $x = [x_1, \ldots, x_N]^T$ be an eigenvector, and $\lambda$ the corresponding eigenvalue for the matrix $A$. Assume that $x_i$ is the largest component of $x$, i.e., $|x_i| \geq |x_j|$ for all $j \neq i$. With $Ax = \lambda x$, it follows that
$$\sum_{j=1}^{N} a_{ij} x_j = \lambda x_i$$
or,
$$(\lambda - a_{ii})\, x_i = \sum_{j \neq i} a_{ij} x_j$$
Therefore, by the triangle inequality,
$$|\lambda - a_{ii}| \leq \sum_{j \neq i} |a_{ij}| \frac{|x_j|}{|x_i|}$$
Since $|x_j| \leq |x_i|$ for all $j \neq i$, the ratios $|x_j|/|x_i|$ are less than or equal to one, and $\lambda$ lies in the $i$th circle defined by
$$|\lambda - a_{ii}| \leq r_i \qquad \text{where} \qquad r_i = \sum_{j \neq i} |a_{ij}|$$
2. From Gerschgorin's circle theorem, for each eigenvalue $\lambda$ there is an $i$ such that
$$|\lambda - a_{ii}| \leq \sum_{j \neq i} |a_{ij}|$$
Since
$$|\lambda| \leq |\lambda - a_{ii}| + |a_{ii}|$$
then
$$|\lambda| \leq \sum_{j=1}^{n} |a_{ij}|$$
Therefore,
$$|\lambda_{\max}| \leq \max_i \sum_{j=1}^{n} |a_{ij}|$$
3. Let $A$ be a matrix that is diagonally dominant, $|a_{ii}| > r_i$. Assume that one of the eigenvalues is zero ($A$ is singular). From Gerschgorin's circle theorem, we know that, for each eigenvalue,
$$|\lambda - a_{ii}| \leq r_i$$
However, if $\lambda = 0$, then
$$|\lambda - a_{ii}| = |a_{ii}| \leq r_i$$
for some $i$. Therefore, $A$ is not diagonally dominant, which contradicts the hypothesis. Thus, if $A$ is diagonally dominant, then it cannot have any zero eigenvalues and must, therefore, be nonsingular. For the matrix above, the radii are $r_1 = 3$, $r_2 = 2$, and $r_3 = 5$, each smaller than the corresponding diagonal entry, so the matrix is nonsingular.
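A numerical check, using the diagonally dominant matrix as reconstructed in part 3 (numpy assumed):

```python
import numpy as np

A = np.array([[4., 1., 2.], [2., 3., 0.], [3., 2., 6.]])
radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))   # Gerschgorin radii r_i

# every eigenvalue lies in at least one Gerschgorin disc
eigs = np.linalg.eigvals(A)
for lam in eigs:
    assert any(abs(lam - A[i, i]) <= radii[i] + 1e-10 for i in range(3))

# the bound |lambda_max| <= max_i sum_j |a_ij|
assert max(abs(eigs)) <= np.max(np.sum(np.abs(A), axis=1)) + 1e-10

# diagonal dominance => no disc contains 0 => A is nonsingular
assert np.all(np.abs(np.diag(A)) > radii)
assert abs(np.linalg.det(A)) > 1e-9
```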
2.19 Consider the following quadratic function of two variables $z_1$ and $z_2$,
$$f(z_1, z_2) = 3z_1^2 + 3z_2^2 + 4z_1 z_2$$
Find the values of $z_1$ and $z_2$ that minimize $f(z_1, z_2)$ subject to the constraint that $z_1 + z_2 = 1$, and determine the minimum value of $f(z_1, z_2)$.

Solution

To minimize the function $f(z_1, z_2)$ subject to the constraint $z_1 + z_2 = 1$, we may use Lagrange multipliers as follows. If we define the objective function
$$Q(z_1, z_2, \lambda) = f(z_1, z_2) + \lambda(1 - z_1 - z_2)$$
then the values for $z_1$ and $z_2$ that minimize $f(z_1, z_2)$ may be found by solving the equations
$$6z_1 + 4z_2 - \lambda = 0$$
$$4z_1 + 6z_2 - \lambda = 0$$
$$z_1 + z_2 = 1$$
Writing the first two equations in matrix form we have
$$\begin{bmatrix} 6 & 4 \\ 4 & 6 \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \begin{bmatrix} \lambda \\ \lambda \end{bmatrix}$$
Solving for $z_1$ and $z_2$ we find
$$\begin{bmatrix} z_1 \\ z_2 \end{bmatrix} = \frac{\lambda}{20} \begin{bmatrix} 6 & -4 \\ -4 & 6 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \frac{\lambda}{10} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$
Plugging these values into the third equation above, we may solve for the Lagrange multiplier, $\lambda$, as follows,
$$z_1 + z_2 = \frac{\lambda}{10} + \frac{\lambda}{10} = 1 \qquad \Longrightarrow \qquad \lambda = 5$$
Given $\lambda$ we may explicitly evaluate $z_1$ and $z_2$,
$$z_1 = 1/2 \qquad\qquad z_2 = 1/2$$
Substituting these values into $f(z_1, z_2)$ we find that the minimum value of the function is
$$f\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \tfrac{3}{4} + \tfrac{3}{4} + 1 = \tfrac{5}{2}$$
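The constrained minimum can be checked by solving the Lagrange (KKT) system directly and comparing against a brute-force scan of the constraint line; the quadratic below is the one reconstructed above, $f(z) = z^T Q z$ with $Q = [[3,2],[2,3]]$ (numpy assumed):

```python
import numpy as np

Q = np.array([[3., 2.], [2., 3.]])                 # f(z) = z^T Q z
# KKT system: 6z1+4z2-lam=0, 4z1+6z2-lam=0, z1+z2=1
K = np.array([[6., 4., -1.], [4., 6., -1.], [1., 1., 0.]])
z1, z2, lam = np.linalg.solve(K, [0., 0., 1.])
assert np.allclose([z1, z2, lam], [0.5, 0.5, 5.0])

f_min = np.array([z1, z2]) @ Q @ np.array([z1, z2])
assert np.isclose(f_min, 2.5)                      # minimum value 5/2

# no feasible point (t, 1-t) does better
ts = np.linspace(-2, 3, 1001)
vals = [np.array([t, 1 - t]) @ Q @ np.array([t, 1 - t]) for t in ts]
assert f_min <= min(vals) + 1e-9
```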
SOLUTIONS TO CHAPTER 3: Discrete Time Random Processes
3.1 Let $x$ be a random variable with mean $m_x$ and variance $\sigma_x^2$. Let $x_i$ for $i = 1, 2, \ldots, N$ be $N$ independent measurements of the random variable $x$.
(a) With $\hat{m}_x$ the sample mean defined by
$$\hat{m}_x = \frac{1}{N} \sum_{i=1}^{N} x_i$$
determine whether or not the sample variance
$$\hat{\sigma}_x^2 = \frac{1}{N} \sum_{i=1}^{N} \left(x_i - \hat{m}_x\right)^2$$
is unbiased, i.e., is $E\{\hat{\sigma}_x^2\} = \sigma_x^2$?
(b) If $x$ is a Gaussian random variable, find the variance of the sample variance, $E\left\{\left(\hat{\sigma}_x^2 - E\{\hat{\sigma}_x^2\}\right)^2\right\}$.

Solution

(a) The expected value of the sample variance is
$$E\{\hat{\sigma}_x^2\} = \frac{1}{N} \sum_{i=1}^{N} E\left\{\left(x_i - \hat{m}_x\right)^2\right\}$$
Expanding the square we have
$$E\left\{\left(x_i - \hat{m}_x\right)^2\right\} = E\{x_i^2\} - 2E\{x_i \hat{m}_x\} + E\{\hat{m}_x^2\}$$
Since the measurements are assumed to be independent, then
$$E\{x_i x_j\} = \begin{cases} \sigma_x^2 + m_x^2 & i = j \\ m_x^2 & i \neq j \end{cases}$$
so that $E\{x_i \hat{m}_x\} = m_x^2 + \sigma_x^2/N$ and $E\{\hat{m}_x^2\} = m_x^2 + \sigma_x^2/N$, and the expression for $E\{\hat{\sigma}_x^2\}$ becomes
$$E\{\hat{\sigma}_x^2\} = \sigma_x^2 + m_x^2 - 2\left(m_x^2 + \frac{\sigma_x^2}{N}\right) + m_x^2 + \frac{\sigma_x^2}{N} = \frac{N-1}{N}\, \sigma_x^2$$
Therefore, although the sample variance is biased, it is asymptotically unbiased.

(b) Finding the variance of the sample variance directly is very tedious. A simpler way is as follows. With
$$\hat{\sigma}_x^2 = \frac{1}{N} \sum_{i=1}^{N} \left(x_i - \hat{m}_x\right)^2$$
it is well known that $N\hat{\sigma}_x^2/\sigma_x^2$ is a chi-square random variable with $N - 1$ degrees of freedom, which has a variance of $2(N-1)$. Therefore,
$$\text{Var}\left(\frac{N\hat{\sigma}_x^2}{\sigma_x^2}\right) = 2(N-1)$$
and, consequently, we have
$$\text{Var}\left(\hat{\sigma}_x^2\right) = \frac{2(N-1)}{N^2}\, \sigma_x^4$$
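Both results can be illustrated with a seeded Monte Carlo simulation (numpy assumed; the mean, variance, and trial count below are arbitrary choices, and the tolerances are loose relative to the Monte Carlo error):

```python
import numpy as np

rng = np.random.default_rng(42)
N, trials, var = 5, 200_000, 4.0
x = rng.normal(1.0, np.sqrt(var), size=(trials, N))

m_hat = x.mean(axis=1)
s2_hat = ((x - m_hat[:, None]) ** 2).mean(axis=1)   # biased sample variance

# (a) E{s2_hat} = (N-1)/N * var = 3.2, not 4.0: biased but asymptotically unbiased
assert abs(s2_hat.mean() - (N - 1) / N * var) < 0.05

# (b) Var(s2_hat) = 2(N-1)/N^2 * var^2 = 5.12 for Gaussian data
assert abs(s2_hat.var() - 2 * (N - 1) / N**2 * var**2) < 0.2
```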
3.2 Let $x(n)$ be a stationary random process with zero mean and autocorrelation $r_x(k)$. We form the process $y(n)$ as follows,
$$y(n) = x(n) + f(n)$$
where $f(n)$ is a known deterministic sequence. Find the mean $m_y(n)$ and the autocorrelation $r_y(k, l)$ of the process $y(n)$.

Solution

The mean of the process is
$$m_y(n) = E\{y(n)\} = E\{x(n)\} + f(n) = f(n)$$
and the autocorrelation is
$$r_y(k, l) = E\{y(k)\, y(l)\} = E\left\{\left[x(k) + f(k)\right]\left[x(l) + f(l)\right]\right\} = E\{x(k)\, x(l)\} + f(k) f(l) = r_x(k - l) + f(k) f(l)$$
where the cross terms vanish because $x(n)$ has zero mean.
3.3 A discrete-time random process $x(n)$ is generated as follows,
$$x(n) = \sum_{k=1}^{p} a(k)\, x(n-k) + w(n)$$
where $w(n)$ is a white noise process with variance $\sigma_w^2$. Another process, $z(n)$, is formed by adding noise to $x(n)$,
$$z(n) = x(n) + v(n)$$
where $v(n)$ is white noise with a variance of $\sigma_v^2$ that is uncorrelated with $w(n)$.
(a) Find the power spectrum of $x(n)$.
(b) Find the power spectrum of $z(n)$.

Solution

(a) Since $x(n)$ is the output of an all-pole filter driven by white noise, $x(n)$ is an AR($p$) process with a power spectrum
$$P_x(e^{j\omega}) = \frac{\sigma_w^2}{\left|A(e^{j\omega})\right|^2}$$
where
$$A(e^{j\omega}) = 1 - \sum_{k=1}^{p} a(k)\, e^{-jk\omega}$$
(b) The process $z(n)$ is a sum of two random processes,
$$z(n) = x(n) + v(n)$$
Since $x(n)$ is a linear combination of values of $w(n)$,
$$x(n) = \sum_{k=0}^{\infty} h(k)\, w(n-k)$$
where $h(n)$ is the unit sample response of the filter generating $x(n)$, and since $v(n)$ is uncorrelated with $w(n)$, then $v(n)$ is uncorrelated with $x(n)$, and we have
$$r_z(k) = r_x(k) + r_v(k)$$
Therefore,
$$P_z(e^{j\omega}) = P_x(e^{j\omega}) + \sigma_v^2 = \frac{\sigma_w^2}{\left|A(e^{j\omega})\right|^2} + \sigma_v^2$$
3.4 Suppose we are given a linear shift-invariant system having a system function
$$H(z) = \frac{1 - \tfrac{1}{2} z^{-1}}{1 - \tfrac{1}{3} z^{-1}}$$
that is excited by zero mean exponentially correlated noise $x(n)$ with an autocorrelation sequence
$$r_x(k) = \left(\tfrac{1}{2}\right)^{|k|}$$
Let $y(n)$ be the output process, $y(n) = x(n) * h(n)$.
(a) Find the power spectrum, $P_y(z)$, of $y(n)$.
(b) Find the autocorrelation sequence, $r_y(k)$, of $y(n)$.
(c) Find the cross-correlation, $r_{xy}(k)$, between $x(n)$ and $y(n)$.
(d) Find the cross-power spectral density, $P_{xy}(z)$, which is the $z$-transform of the cross-correlation $r_{xy}(k)$.

Solution

(a) The power spectrum of $x(n)$ is
$$P_x(z) = \frac{3/4}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{2} z\right)}$$
and the power spectrum of $y(n)$ is
$$P_y(z) = P_x(z)\, H(z)\, H(z^{-1}) = \frac{3/4}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{2} z\right)} \cdot \frac{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{2} z\right)}{\left(1 - \tfrac{1}{3} z^{-1}\right)\left(1 - \tfrac{1}{3} z\right)} = \frac{3/4}{\left(1 - \tfrac{1}{3} z^{-1}\right)\left(1 - \tfrac{1}{3} z\right)}$$
(b) The autocorrelation sequence for $y(n)$ may be easily found using the $z$-transform pair
$$a^{|k|} \;\longleftrightarrow\; \frac{1 - a^2}{(1 - a z^{-1})(1 - a z)}$$
Since
$$\left(\tfrac{1}{3}\right)^{|k|} \;\longleftrightarrow\; \frac{8/9}{\left(1 - \tfrac{1}{3} z^{-1}\right)\left(1 - \tfrac{1}{3} z\right)}$$
then
$$r_y(k) = \frac{3}{4} \cdot \frac{9}{8} \left(\tfrac{1}{3}\right)^{|k|} = \frac{27}{32} \left(\tfrac{1}{3}\right)^{|k|}$$
(c) The cross-correlation between $x(n)$ and $y(n)$ may be easily computed using $z$-transforms as follows,
$$P_{xy}(z) = P_x(z)\, H(z^{-1}) = \frac{3/4}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{2} z\right)} \cdot \frac{1 - \tfrac{1}{2} z}{1 - \tfrac{1}{3} z} = \frac{3/4}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{3} z\right)}$$
Writing this in terms of $z^{-1}$ and performing a partial fraction expansion gives
$$P_{xy}(z) = \frac{9/10}{1 - \tfrac{1}{2} z^{-1}} + \frac{3}{10} \cdot \frac{z}{1 - \tfrac{1}{3} z}$$
Inverse $z$-transforming gives
$$r_{xy}(k) = \frac{9}{10} \left(\tfrac{1}{2}\right)^{k} u(k) + \frac{9}{10}\, 3^{k}\, u(-k-1)$$
(d) The cross-power spectral density, as computed in part (c), is
$$P_{xy}(z) = \frac{3/4}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{3} z\right)}$$
3.5 Find the power spectrum for each of the following wide-sense stationary random processes that have the given autocorrelation sequences.
(a) $r_x(k) = 2\delta(k) - j\delta(k-1) + j\delta(k+1)$.
(b) $r_x(k) = \delta(k) + 2(0.5)^{|k|}$.
(c) $r_x(k) = 2\delta(k) + \cos(\pi k/4)$.
(d) $r_x(k) = \begin{cases} 10 - |k| & |k| < 10 \\ 0 & \text{otherwise} \end{cases}$

Solution

(a) This autocorrelation sequence is finite in length, and the power spectrum is simply
$$P_x(e^{j\omega}) = 2 - j e^{-j\omega} + j e^{j\omega} = 2 - 2\sin\omega$$
Note that, as required, $P_x(e^{j\omega})$ is real and nonnegative.

(b) With $\alpha$ real, using the DTFT pair
$$\alpha^{|k|} \;\longleftrightarrow\; \frac{1 - \alpha^2}{1 - 2\alpha\cos\omega + \alpha^2}$$
we have
$$P_x(e^{j\omega}) = 1 + 2 \cdot \frac{3/4}{\tfrac{5}{4} - \cos\omega} = 1 + \frac{6}{5 - 4\cos\omega} = \frac{11 - 4\cos\omega}{5 - 4\cos\omega}$$
(c) Since the DTFT of a complex exponential is an impulse,
$$e^{jn\omega_0} \;\longleftrightarrow\; 2\pi\delta(\omega - \omega_0)$$
it follows that the power spectrum of $r_x(k) = 2\delta(k) + \cos(\pi k/4)$ is
$$P_x(e^{j\omega}) = 2 + \pi\delta\left(\omega - \tfrac{\pi}{4}\right) + \pi\delta\left(\omega + \tfrac{\pi}{4}\right) \qquad |\omega| \leq \pi$$
(d) Observe that $r_x(k)$ is a triangle that is symmetric about $k = 0$ and extends from $k = -9$ to $k = 9$. Therefore, $r_x(k)$ may be written as the convolution of two pulses,
$$r_x(k) = p(k) * p(-k)$$
where
$$p(k) = \begin{cases} 1 & 0 \leq k \leq 9 \\ 0 & \text{else} \end{cases}$$
Since the DTFT of $p(k)$ is
$$P(e^{j\omega}) = \sum_{k=0}^{9} e^{-jk\omega} = \frac{1 - e^{-j10\omega}}{1 - e^{-j\omega}} = e^{-j\frac{9\omega}{2}}\, \frac{\sin 5\omega}{\sin \omega/2}$$
the power spectrum is
$$P_x(e^{j\omega}) = \left|P(e^{j\omega})\right|^2 = \frac{\sin^2 5\omega}{\sin^2 \omega/2}$$
3.6 Find the autocorrelation sequence corresponding to each of the following power spectral densities.
(a) $P_x(e^{j\omega}) = 3 + 2\cos\omega$.
(b) $P_x(e^{j\omega}) = \dfrac{1}{5 + 3\cos\omega}$.
(c) $P_x(z) = \dfrac{-2z^2 + 5z - 2}{3z^2 + 10z + 3}$.

Solution

(a) Expanding $P_x(e^{j\omega})$ in terms of complex exponentials,
$$P_x(e^{j\omega}) = 3 + e^{j\omega} + e^{-j\omega}$$
it follows that $r_x(0) = 3$ and $r_x(1) = r_x(-1) = 1$.

(b) Recall the DTFT pair
$$\alpha^{|k|} \;\longleftrightarrow\; \frac{1 - \alpha^2}{(1 + \alpha^2) - 2\alpha\cos\omega}$$
With $\alpha = -\tfrac{1}{3}$ we have $1 - \alpha^2 = \tfrac{8}{9}$ and
$$(1 + \alpha^2) - 2\alpha\cos\omega = \tfrac{10}{9} + \tfrac{2}{3}\cos\omega = \tfrac{2}{9}\left(5 + 3\cos\omega\right)$$
Therefore,
$$P_x(e^{j\omega}) = \frac{1}{5 + 3\cos\omega} = \frac{2}{9} \cdot \frac{9}{8} \cdot \frac{1 - \alpha^2}{(1 + \alpha^2) - 2\alpha\cos\omega}$$
and it follows that
$$r_x(k) = \frac{1}{4}\left(-\frac{1}{3}\right)^{|k|}$$
(c) With
$$P_x(z) = \frac{-2z^2 + 5z - 2}{3z^2 + 10z + 3} = \frac{-2z + 5 - 2z^{-1}}{3z + 10 + 3z^{-1}}$$
note that on the unit circle the denominator is $10 + 6\cos\omega = 2(5 + 3\cos\omega)$, and the numerator may be written as
$$-2z + 5 - 2z^{-1} = -\tfrac{2}{3}\left(3z + 10 + 3z^{-1}\right) + \tfrac{35}{3}$$
Therefore,
$$P_x(e^{j\omega}) = -\frac{2}{3} + \frac{35}{6} \cdot \frac{1}{5 + 3\cos\omega}$$
and, using the result of part (b),
$$r_x(k) = \frac{35}{24}\left(-\frac{1}{3}\right)^{|k|} - \frac{2}{3}\,\delta(k)$$
3.7 Let $x(n)$ be a zero mean WSS process with an $N \times N$ autocorrelation matrix $R_x$. Determine whether each of the following statements are True or False.
(a) If the eigenvalues of $R_x$ are equal, $\lambda_1 = \lambda_2 = \cdots = \lambda_N$, then $r_x(k) = 0$ for $k = 1, 2, \ldots, N-1$.
(b) If $\lambda_1 > 0$ and $\lambda_k = 0$ for $k = 2, 3, \ldots, N$, then $r_x(k) = A e^{jk\omega_0}$.

Solution

(a) This statement is True. To show this, the first step is to recall the eigenvalue interlacing (bordering) theorem which, for Toeplitz matrices, states that if $R_{p-1}$ is a $p \times p$ Toeplitz matrix with ordered eigenvalues $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_p$, and if $R_p$ is the $(p+1) \times (p+1)$ Toeplitz matrix that is formed by adding one row and column to $R_{p-1}$, then the ordered eigenvalues of $R_p$, denoted by $\bar{\lambda}_1 \leq \bar{\lambda}_2 \leq \cdots \leq \bar{\lambda}_{p+1}$, are interlaced with those of $R_{p-1}$ as follows:
$$\bar{\lambda}_1 \leq \lambda_1 \leq \bar{\lambda}_2 \leq \lambda_2 \leq \cdots \leq \lambda_p \leq \bar{\lambda}_{p+1}$$
What this implies is that if the eigenvalues of $R_p$ are all equal, then the eigenvalues of each of the lower-order Toeplitz matrices must also be equal.

The next step is to note that for any $2 \times 2$ Toeplitz matrix
$$R_1 = \begin{bmatrix} r_x(0) & r_x(1) \\ r_x(1) & r_x(0) \end{bmatrix}$$
if the eigenvalues are equal, then $r_x(1) = 0$. This follows easily from the fact that the eigenvalues of $R_1$ are $\lambda_1 = r_x(0) + r_x(1)$ and $\lambda_2 = r_x(0) - r_x(1)$: $\lambda_1$ will be equal to $\lambda_2$ if and only if $r_x(1) = 0$.

We may now establish the result by induction. Assume that $R_{k-1}$ is a $k \times k$ Toeplitz matrix with equal eigenvalues, so that
$$R_{k-1} = \text{Toep}\left\{r_x(0), 0, 0, \ldots, 0\right\}$$
We will show that if $R_k$ is a $(k+1) \times (k+1)$ Toeplitz matrix with equal eigenvalues, then $r_x(k) = 0$. The eigenvalues of $R_k$ are the roots of the polynomial
$$\det(R_k - \lambda I) = 0$$
If the eigenvalues are to be equal, then $\det(R_k - \lambda I) = (\lambda_0 - \lambda)^{k+1}$. However, since $r_x(k)$ appears only in the two corner entries of $R_k$, expanding the determinant gives
$$\det(R_k - \lambda I) = \left(r_x(0) - \lambda\right)^{k+1} - \left|r_x(k)\right|^2 \left(r_x(0) - \lambda\right)^{k-1}$$
which equals $(\lambda_0 - \lambda)^{k+1}$ if and only if $\lambda_0 = r_x(0)$ and $r_x(k) = 0$, as was to be shown.

(b) This statement is True. To show this, note that if $\lambda_1 > 0$ and $\lambda_k = 0$ for $k = 2, 3, \ldots, N$, then the autocorrelation matrix has the form
$$R_x = \lambda_1 \mathbf{v}_1 \mathbf{v}_1^H$$
where $\mathbf{v}_1$ is the eigenvector associated with the nonzero eigenvalue $\lambda_1$. Let $v_1(k)$ be the coefficients of the eigenvector $\mathbf{v}_1$. Then
$$R_x = \lambda_1 \begin{bmatrix} |v_1(1)|^2 & v_1(1) v_1^*(2) & \cdots & v_1(1) v_1^*(N) \\ v_1(2) v_1^*(1) & |v_1(2)|^2 & \cdots & v_1(2) v_1^*(N) \\ \vdots & \vdots & & \vdots \\ v_1(N) v_1^*(1) & v_1(N) v_1^*(2) & \cdots & |v_1(N)|^2 \end{bmatrix}$$
Since $R_x$ is Toeplitz, the terms along the main diagonal must be equal,
$$|v_1(1)|^2 = |v_1(2)|^2 = \cdots = |v_1(N)|^2$$
Therefore, the coefficients $v_1(k)$ must have the form
$$v_1(k) = c\, e^{j\phi_k}$$
for some constant magnitude $c$ and phases $\phi_k$. In addition, the Toeplitz structure of $R_x$ implies that the terms along the diagonal below the main diagonal must be equal,
$$v_1(2) v_1^*(1) = v_1(3) v_1^*(2) = \cdots = v_1(N) v_1^*(N-1)$$
Therefore, the phase increment $(\phi_{k+1} - \phi_k)$ must be a constant, say $\omega_0$, and it follows that
$$v_1(k) = c\, e^{j\left(\phi_1 + (k-1)\omega_0\right)}$$
Finally, note that since the first column of $R_x$ contains the autocorrelations $r_x(k)$ for $k = 0, 1, \ldots, N-1$, then
$$r_x(k) = \lambda_1 v_1(k+1)\, v_1^*(1) = \lambda_1 c^2\, e^{jk\omega_0} = A\, e^{jk\omega_0}$$
as was to be shown.
3.8 Consider the random process
$$x(n) = A\cos(n\omega + \phi) + w(n)$$
where $w(n)$ is zero mean white Gaussian noise with a variance $\sigma_w^2$. For each of the following cases, find the autocorrelation sequence and, if the process is WSS, find the power spectrum.
(a) $A$ is a Gaussian random variable with zero mean and variance $\sigma_A^2$, and both $\omega$ and $\phi$ are constants.
(b) $\phi$ is uniformly distributed over the interval $[-\pi, \pi]$, and both $A$ and $\omega$ are constants.
(c) $\omega$ is a random variable that is uniformly distributed over the interval $[\omega_0 - \Delta,\; \omega_0 + \Delta]$, and both $A$ and $\phi$ are constants.

Solution

(a) When $\omega$ and $\phi$ are constants, then
$$r_x(k, l) = E\{x(k)\, x(l)\} = E\{A^2\}\cos(k\omega + \phi)\cos(l\omega + \phi) + \sigma_w^2 \delta(k - l) = \sigma_A^2 \cos(k\omega + \phi)\cos(l\omega + \phi) + \sigma_w^2 \delta(k - l)$$
Note that since $r_x(k, l)$ does not depend only on the difference $k - l$, then $x(n)$ is not wide-sense stationary and the power spectrum is not defined for this process.

(b) When $A$ and $\omega$ are constants and $\phi$ is a random variable that is uniformly distributed over the interval $[-\pi, \pi]$, then the autocorrelation is
$$r_x(k, l) = E\{A^2 \cos(k\omega + \phi)\cos(l\omega + \phi)\} + \sigma_w^2 \delta(k - l) = \tfrac{1}{2} A^2 E\{\cos[(k+l)\omega + 2\phi]\} + \tfrac{1}{2} A^2 E\{\cos[(k-l)\omega]\} + \sigma_w^2 \delta(k - l)$$
However, since $E\{\cos[(k+l)\omega + 2\phi]\} = 0$, the autocorrelation is
$$r_x(k, l) = \tfrac{1}{2} A^2 \cos[(k - l)\omega] + \sigma_w^2 \delta(k - l)$$
Therefore, $r_x(k, l)$ depends only on the difference $(k - l)$, and the process is WSS. The power spectrum is
$$P_x(e^{j\omega'}) = \frac{\pi A^2}{2}\left[\delta(\omega' - \omega) + \delta(\omega' + \omega)\right] + \sigma_w^2$$
(c) As in parts (a) and (b), the autocorrelation of the process $x(n)$ is
$$r_x(k, l) = E\{A\cos(k\omega + \phi)\, A\cos(l\omega + \phi)\} + \sigma_w^2 \delta(k - l)$$
In this case, however, $\omega$ is a random variable, and the expectation of the product of the cosines is
$$E\{A^2 \cos(k\omega + \phi)\cos(l\omega + \phi)\} = \frac{A^2}{2}\Big[E\{\cos[(k-l)\omega]\} + E\{\cos[(k+l)\omega + 2\phi]\}\Big]$$
Since $\omega$ is uniformly distributed over the interval $[\omega_0 - \Delta,\; \omega_0 + \Delta]$, the expectation of the first term is
$$E\{\cos[(k-l)\omega]\} = \frac{1}{2\Delta}\int_{\omega_0 - \Delta}^{\omega_0 + \Delta} \cos[(k-l)\omega]\, d\omega = \frac{1}{2\Delta(k-l)}\Big[\sin[(k-l)(\omega_0 + \Delta)] - \sin[(k-l)(\omega_0 - \Delta)]\Big]$$
With $\phi$ a constant, the expectation of the second term is
$$E\{\cos[(k+l)\omega + 2\phi]\} = \frac{1}{2\Delta(k+l)}\Big[\sin[(k+l)(\omega_0 + \Delta) + 2\phi] - \sin[(k+l)(\omega_0 - \Delta) + 2\phi]\Big]$$
which depends on $k + l$. Therefore, $x(n)$ is not WSS. However, if $\phi$ is a random variable that is uniformly distributed over the interval $[-\pi, \pi]$, then this second expectation is zero, and the autocorrelation becomes
$$r_x(k) = \frac{A^2}{2} \cdot \frac{\sin k\Delta}{k\Delta}\, \cos k\omega_0 + \sigma_w^2 \delta(k)$$
and the process is WSS. Using the DTFT pair
$$\frac{\sin k\Delta}{\pi k} \;\longleftrightarrow\; \begin{cases} 1 & |\omega| < \Delta \\ 0 & \text{else} \end{cases}$$
it follows that the power spectrum of $x(n)$ is
$$P_x(e^{j\omega}) = \begin{cases} \dfrac{\pi A^2}{4\Delta} + \sigma_w^2 & \big||\omega| - \omega_0\big| < \Delta \\[2mm] \sigma_w^2 & \text{else} \end{cases}$$
3.9 Determine whether or not each of the following matrices, $R_1$ through $R_5$, is a valid autocorrelation matrix. If it is not, explain why not.

Solution

(a) Since $R_1$ is not symmetric, it is not a valid autocorrelation matrix.

(b) Since $R_2$ is symmetric and nonnegative definite, it is a valid autocorrelation matrix.

(c) Although
$$R_3 = \begin{bmatrix} 1 & 1+j \\ 1-j & 1 \end{bmatrix}$$
is Hermitian, note that its determinant is negative,
$$\det R_3 = 1 - (1+j)(1-j) = 1 - 2 = -1$$
Therefore, $R_3$ is not nonnegative definite and, consequently, it is not a valid autocorrelation matrix.

(d) $R_4$ is a valid autocorrelation matrix, since it is symmetric and nonnegative definite.

(e) The entries along the diagonal of an autocorrelation matrix must be real-valued (this follows from the Hermitian property, and the fact that the $i$th entry along the diagonal is equal to $E\{|x(i)|^2\}$, which is real). Since the middle diagonal element of $R_5$ is imaginary, it is not a valid autocorrelation matrix.
3.10 The input to a linear shift-invariant filter with unit sample response
$$h(n) = \delta(n) + \tfrac{1}{2}\delta(n-1) + \tfrac{1}{4}\delta(n-2)$$
is a zero mean wide-sense stationary process with autocorrelation
$$r_x(k) = \left(\tfrac{1}{2}\right)^{|k|}$$
(a) What is the variance of the output process?
(b) Find the autocorrelation of the output process, $r_y(k)$, for all $k$.

Solution

Before we find the variance, let's find the autocorrelation. With
$$r_y(k) = r_x(k) * h(k) * h(-k)$$
and
$$h(k) * h(-k) = \tfrac{21}{16}\delta(k) + \tfrac{5}{8}\delta(k-1) + \tfrac{5}{8}\delta(k+1) + \tfrac{1}{4}\delta(k-2) + \tfrac{1}{4}\delta(k+2)$$
it follows that
$$r_y(k) = \tfrac{21}{16}\left(\tfrac{1}{2}\right)^{|k|} + \tfrac{5}{8}\left(\tfrac{1}{2}\right)^{|k-1|} + \tfrac{5}{8}\left(\tfrac{1}{2}\right)^{|k+1|} + \tfrac{1}{4}\left(\tfrac{1}{2}\right)^{|k-2|} + \tfrac{1}{4}\left(\tfrac{1}{2}\right)^{|k+2|}$$
Finally, since $x(n)$ has zero mean, the variance is
$$\sigma_y^2 = r_y(0) = \tfrac{21}{16} + \tfrac{5}{16} + \tfrac{5}{16} + \tfrac{1}{16} + \tfrac{1}{16} = \tfrac{33}{16}$$
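The filter-and-correlate computation above can be reproduced numerically by convolving $r_x(k)$ with the deterministic autocorrelation of $h(n)$ (numpy assumed; the lag range is truncated, which is harmless here since $r_x$ decays geometrically):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])
ks = np.arange(-20, 21)
rx = 0.5 ** np.abs(ks)                    # r_x(k) = (1/2)^|k|

rh = np.correlate(h, h, mode='full')      # h(k) * h(-k)
assert np.allclose(rh, [1/4, 5/8, 21/16, 5/8, 1/4])

ry = np.convolve(rx, rh, mode='same')     # r_y(k) = r_x(k) * [h(k) * h(-k)]
ry0 = ry[len(ks) // 2]                    # value at lag k = 0
assert np.isclose(ry0, 33 / 16)           # sigma_y^2 = 33/16
```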
3.11 Consider a first-order AR process that is generated by the difference equation
$$y(n) = a\, y(n-1) + w(n)$$
where $|a| < 1$ and $w(n)$ is a zero mean white noise random process with variance $\sigma_w^2$.
(a) Find the unit sample response of the filter that generates $y(n)$ from $w(n)$.
(b) Find the autocorrelation of $y(n)$.
(c) Find the power spectrum of $y(n)$.

Solution

(a) The process $y(n)$ is generated by filtering white noise with a first-order filter that has a system function given by
$$H(z) = \frac{1}{1 - a z^{-1}}$$
Thus,
$$h(n) = a^n u(n)$$
(b) Since the autocorrelation sequence for $w(n)$ is $r_w(k) = \sigma_w^2 \delta(k)$, then
$$r_y(k) = r_w(k) * h(k) * h(-k) = \sigma_w^2 \sum_{n=0}^{\infty} h(n)\, h(n + |k|) = \sigma_w^2\, \frac{a^{|k|}}{1 - a^2}$$
(c) The power spectrum of $y(n)$ is
$$P_y(e^{j\omega}) = \frac{\sigma_w^2}{\left|1 - a e^{-j\omega}\right|^2} = \frac{\sigma_w^2}{1 + a^2 - 2a\cos\omega}$$
3.12 Consider an MA($q$) process that is generated by the difference equation
$$y(n) = \sum_{k=0}^{q} b(k)\, w(n-k)$$
where $w(n)$ is zero mean white noise with variance $\sigma_w^2$.
(a) Find the unit sample response of the filter that generates $y(n)$ from $w(n)$.
(b) Find the autocorrelation of $y(n)$.
(c) Find the power spectrum of $y(n)$.

Solution

(a) The process $y(n)$ is generated by filtering white noise with an FIR filter that has a system function given by
$$H(z) = \sum_{k=0}^{q} b(k)\, z^{-k}$$
Thus, the unit sample response is
$$h(n) = \sum_{k=0}^{q} b(k)\, \delta(n-k)$$
i.e., $h(n) = b(n)$ for $0 \leq n \leq q$ and $h(n) = 0$ otherwise.

(b) The autocorrelation sequence for $y(n)$ is
$$r_y(k) = r_w(k) * h(k) * h(-k) = \sigma_w^2 \sum_{l=0}^{q-|k|} b(l)\, b(l + |k|) \qquad |k| \leq q$$
and $r_y(k) = 0$ for $|k| > q$.

(c) The power spectrum of $y(n)$ is
$$P_y(e^{j\omega}) = \sigma_w^2 \left|B(e^{j\omega})\right|^2 \qquad \text{where} \qquad B(e^{j\omega}) = \sum_{k=0}^{q} b(k)\, e^{-jk\omega}$$
3.13 Suppose we are given a zero-mean process $x(n)$ with autocorrelation
$$r_x(k) = \frac{10}{9}\left(\tfrac{1}{2}\right)^{|k|} + \frac{1}{3}\left(\tfrac{1}{2}\right)^{|k-1|} + \frac{1}{3}\left(\tfrac{1}{2}\right)^{|k+1|}$$
(a) Find a filter which, when driven by unit variance white noise, will yield a random process with this autocorrelation.
(b) Find a stable and causal filter which, when excited by $x(n)$, will produce zero mean, unit variance, white noise.

Solution

(a) The power spectrum of $x(n)$ is
$$P_x(z) = \frac{\tfrac{3}{4}\left[\tfrac{10}{9} + \tfrac{1}{3}\left(z + z^{-1}\right)\right]}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{2} z\right)} = \frac{3}{4} \cdot \frac{\left(1 + \tfrac{1}{3} z^{-1}\right)\left(1 + \tfrac{1}{3} z\right)}{\left(1 - \tfrac{1}{2} z^{-1}\right)\left(1 - \tfrac{1}{2} z\right)}$$
Therefore, if
$$H(z) = \frac{\sqrt{3}}{2} \cdot \frac{1 + \tfrac{1}{3} z^{-1}}{1 - \tfrac{1}{2} z^{-1}} \qquad |z| > \tfrac{1}{2}$$
then $P_x(z) = H(z) H(z^{-1})$, and the response of this filter to unit variance white noise will be a random process with the given autocorrelation.

(b) Consider the filter having the system function
$$G(z) = \frac{1}{H(z)} = \frac{2}{\sqrt{3}} \cdot \frac{1 - \tfrac{1}{2} z^{-1}}{1 + \tfrac{1}{3} z^{-1}}$$
Clearly this filter is stable and causal, since its only pole is at $z = -\tfrac{1}{3}$. Furthermore, if we filter $x(n)$ with $g(n)$, then the power spectrum of the filtered signal will be
$$G(z)\, G(z^{-1})\, P_x(z) = \frac{P_x(z)}{H(z)\, H(z^{-1})} = 1$$
Therefore, $g(n)$ is the whitening filter that will produce unit variance white noise from $x(n)$.
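The spectral factorization and the whitening property can be checked on a frequency grid, using the spectrum and filters as reconstructed above (numpy assumed):

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 512)
z = np.exp(1j * w)

# P_x(z) = (3/4)(1 + z/3)(1 + z^{-1}/3) / ((1 - z^{-1}/2)(1 - z/2))
Px = 0.75 * (1 + z / 3) * (1 + 1 / (3 * z)) / ((1 - 0.5 / z) * (1 - 0.5 * z))
assert np.allclose(Px.imag, 0) and np.all(Px.real > 0)   # a valid power spectrum

H = (np.sqrt(3) / 2) * (1 + 1 / (3 * z)) / (1 - 0.5 / z)  # shaping filter (a)
assert np.allclose(np.abs(H) ** 2, Px.real)               # |H|^2 = P_x

G = 1 / H                                                 # causal, stable whitener (b)
assert np.allclose(np.abs(G) ** 2 * Px.real, 1.0)         # output spectrum is flat = 1
```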
3.14 For each of the following, determine whether or not the random process is

    1. Wide-sense stationary.
    2. Mean ergodic.

(a) x(n) = A, where A is a random variable with probability density function f_A(\alpha).
(b) x(n) = A cos(n\omega_0), where A is a Gaussian random variable with mean m_A and variance \sigma_A^2.
(c) x(n) = A cos(n\omega_0 + \phi), where \phi is a random variable that is uniformly distributed between -\pi and \pi.
(d) x(n) = A cos(n\omega_0) + B sin(n\omega_0), where A and B are uncorrelated zero-mean random variables with variance \sigma^2.
(e) A Bernoulli process with Pr{x(n) = 1} = p and Pr{x(n) = -1} = 1 - p.
(f) y(n) = x(n) - x(n-1), where x(n) is the Bernoulli process defined in part (e).
Solution
(a) We are given a process x(n) = A, where A is a random variable with probability density function f_A(\alpha). To check for wide-sense stationarity we need to compute the mean and autocorrelation of x(n). The mean of this process is

    m_x(n) = E{x(n)} = E{A}

which is a constant. The autocorrelation is

    r_x(k,l) = E{x(k)x(l)} = E{A^2}

which is also a constant. Therefore, x(n) is WSS. To check for ergodicity in the mean, note that the sample mean is

    \hat{m}_x(N) = \frac{1}{N} \sum_{n=0}^{N-1} x(n) = A

for every N. This equals the ensemble mean E{A} only when A is not random. Therefore, x(n) is ergodic in the mean only if the variance of A is zero, c_A = 0.

(b) With x(n) = A cos(n\omega_0), note that the mean of the process is

    m_x(n) = E{x(n)} = E{A cos(n\omega_0)} = E{A} cos(n\omega_0) = m_A cos(n\omega_0)

which depends on n. Thus, x(n) is not WSS and, therefore, not ergodic in the mean.

(c) For x(n) = A cos(n\omega_0 + \phi) with \phi a random variable that is uniformly distributed between -\pi and \pi, the mean of x(n) is

    m_x(n) = E{x(n)} = E{A cos(n\omega_0 + \phi)} = \frac{A}{2\pi} \int_{-\pi}^{\pi} cos(n\omega_0 + \phi) d\phi = 0

which is a constant. For the autocorrelation we have

    r_x(k,l) = E{A cos(k\omega_0 + \phi) A cos(l\omega_0 + \phi)}
             = \frac{1}{2} A^2 E{ cos([k-l]\omega_0) + cos([k+l]\omega_0 + 2\phi) }
             = \frac{1}{2} A^2 cos([k-l]\omega_0)

which is a function of (k-l). Therefore, x(n) is WSS. To check for ergodicity in the mean, note that

    \frac{1}{N} \sum_{k=0}^{N-1} c_x(k) = \frac{A^2}{2N} \sum_{k=0}^{N-1} cos(k\omega_0)
        = \frac{A^2}{4N} \sum_{k=0}^{N-1} \left( e^{jk\omega_0} + e^{-jk\omega_0} \right)
        = \frac{A^2}{4N} \left\{ \frac{1 - e^{jN\omega_0}}{1 - e^{j\omega_0}} + \frac{1 - e^{-jN\omega_0}}{1 - e^{-j\omega_0}} \right\}
        = \frac{A^2}{2N} \cdot \frac{\sin(N\omega_0/2)}{\sin(\omega_0/2)} \cos\left( \frac{N-1}{2}\omega_0 \right)

which goes to zero as N -> \infty, provided that \omega_0 \ne 0. If \omega_0 = 0, then x(n) = A cos(\phi) and c_x(k) = A^2/2, and x(n) is not ergodic in the mean. Therefore, x(n) is ergodic in the mean only if \omega_0 \ne 0.

(d) The mean of this process is

    E{x(n)} = E{A} cos(n\omega_0) + E{B} sin(n\omega_0)

Since E{A} = E{B} = 0, then E{x(n)} = 0. For the autocorrelation,

    r_x(k,l) = E{ [A cos(k\omega_0) + B sin(k\omega_0)][A cos(l\omega_0) + B sin(l\omega_0)] }
             = E{A^2} cos(k\omega_0) cos(l\omega_0) + E{B^2} sin(k\omega_0) sin(l\omega_0)
               + E{AB}[cos(k\omega_0) sin(l\omega_0) + sin(k\omega_0) cos(l\omega_0)]

Since A and B are uncorrelated and have zero mean, then E{AB} = 0 and E{A^2} = E{B^2} = \sigma^2. Therefore,

    r_x(k,l) = \sigma^2 [cos(k\omega_0) cos(l\omega_0) + sin(k\omega_0) sin(l\omega_0)] = \sigma^2 cos([k-l]\omega_0)

Since the mean is a constant and the correlation function r_x(k,l) depends only on the difference k-l, x(n) is a wide-sense stationary process. As in part (c), x(n) is ergodic in the mean only if \omega_0 \ne 0.

(e) With x(n) a Bernoulli process with Pr{x(n) = 1} = p and Pr{x(n) = -1} = 1-p, the mean of x(n) is

    m_x(n) = E{x(n)} = p - (1-p) = 2p - 1

and the autocorrelation is

    r_x(k,l) = E{x(k)x(l)} = E{x^2(k)} = p + (1-p) = 1        ;  k = l
    r_x(k,l) = E{x(k)}E{x(l)} = (2p-1)^2                      ;  k \ne l

or,

    r_x(k,l) = 4p(1-p)\delta(k-l) + (2p-1)^2

which is a function of k-l. Therefore, x(n) is WSS. With

    c_x(k) = r_x(k) - m_x^2 = 4p(1-p)\delta(k)

it follows that

    \frac{1}{N} \sum_{k=0}^{N-1} c_x(k) = \frac{1}{N} 4p(1-p) -> 0  as  N -> \infty

Thus, x(n) is ergodic in the mean.

(f) With y(n) = x(n) - x(n-1), where x(n) is a Bernoulli process, wide-sense stationarity may be checked using the direct approach taken in parts (a)-(e) of this problem. However, it is easier to note that since y(n) is the response of a linear shift-invariant system to an input that is a WSS process, y(n) will be WSS. For ergodicity in the mean, note that since

    c_x(k) = 4p(1-p)\delta(k)

then

    c_y(k) = c_x(k) * h(k) * h(-k) = 4p(1-p)[2\delta(k) - \delta(k-1) - \delta(k+1)]

and, clearly,

    \frac{1}{N} \sum_{k=0}^{N-1} c_y(k) -> 0  as  N -> \infty

Thus, y(n) is ergodic in the mean.
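Part (a) can be illustrated numerically: for x(n) = A every time average equals the realization of A itself, so the sample mean cannot converge to E{A} unless A is degenerate. The Gaussian distribution below is an arbitrary illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, size=5)      # five realizations of the random variable A
N = 10_000
# the time average of the constant record x(n) = A is just A, for any N
sample_means = np.array([np.full(N, a).mean() for a in A])
print(sample_means - A)               # essentially zero, realization by realization
```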
3.15 Determine which of the following correspond to a valid autocorrelation sequence for a WSS random process. For those that are not valid, state why not. For those that are valid, describe a way for generating a process with the given autocorrelation.

(a) r_x(k) = \delta(k-1) + \delta(k+1)
(b) r_x(k) = 3\delta(k) + 2\delta(k-1) + 2\delta(k+1)
(c) r_x(k) = \exp(jk\pi/4)
(d) r_x(k) = 1 for |k| < N, and r_x(k) = 0 otherwise
(e) r_x(k) = (N - |k|)/N for |k| < N, and r_x(k) = 0 otherwise
(f) r_x(k) = 2^{-k^2}
Solution
(a) This autocorrelation sequence is not valid, since any valid sequence must satisfy r_x(0) \ge |r_x(1)|; here r_x(0) = 0 while |r_x(1)| = 1.

(b) This autocorrelation sequence is not valid, since the power spectrum

    P_x(e^{j\omega}) = 3 + 2e^{-j\omega} + 2e^{j\omega} = 3 + 4\cos\omega

is not nonnegative for all \omega.

(c) This autocorrelation sequence is valid, and corresponds to a harmonic process. Given a random variable \phi that is uniformly distributed between -\pi and \pi, this process may be generated as

    x(n) = e^{j(n\pi/4 + \phi)}

(d) This autocorrelation sequence is not valid, since the power spectrum

    P_x(e^{j\omega}) = \frac{\sin[(N - \frac{1}{2})\omega]}{\sin(\omega/2)}

is not nonnegative.

(e) This autocorrelation sequence is valid, since r_x(k) is symmetric and its discrete-time Fourier transform is nonnegative for all \omega. This process may be generated by filtering unit variance white noise with the FIR filter that has a unit sample response

    h(n) = 1/\sqrt{N}  ;  0 \le n < N

(f) The sequence r_x(k) = 2^{-k^2} = e^{-(\ln 2)k^2} is a sampled Gaussian pulse. The DTFT of r_x(k) is an aliased Gaussian, which is positive for all \omega. Since r_x(k) is symmetric and P_x(e^{j\omega}) \ge 0, this represents a valid autocorrelation sequence. This process may be generated by filtering white noise with a linear shift-invariant system that has a Gaussian-shaped unit sample response.
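The invalid cases (b) and (d) can be confirmed by evaluating the candidate power spectra on a frequency grid; a valid autocorrelation must have a nonnegative DTFT. The value N = 4 below is an illustrative choice for case (d).

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 2001)

P_b = 3 + 4 * np.cos(w)              # DTFT of 3d(k) + 2d(k-1) + 2d(k+1)
print(P_b.min())                     # negative -> (b) is not a valid ACS

N = 4                                # illustrative order for case (d)
k = np.arange(-(N - 1), N)
P_d = np.sum(np.exp(-1j * np.outer(w, k)), axis=1).real   # DTFT of the rectangle
print(P_d.min())                     # also negative -> (d) is not valid
```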
Chapter 3

3.16 Show that the cross-correlation, r_{xy}(k), between two jointly wide-sense stationary processes x(n) and y(n) satisfies the following inequalities,

    r_{xy}^2(k) \le r_x(0) r_y(0)

    |r_{xy}(k)| \le \frac{1}{2} [ r_x(0) + r_y(0) ]

Solution
(a) Note that for any constant a,

    E{ [x(n+k) - a y(n)]^2 } \ge 0

Expanding the square we have

    r_x(0) - 2a r_{xy}(k) + a^2 r_y(0) \ge 0

This is a quadratic in a that is nonnegative for every a. Therefore, its discriminant must be nonpositive,

    4 r_{xy}^2(k) - 4 r_x(0) r_y(0) \le 0

and the first inequality follows.

(b) To establish the second inequality, note that

    E{ [x(n+k) \pm y(n)]^2 } \ge 0

Expanding the square it follows that

    r_x(0) \pm 2 r_{xy}(k) + r_y(0) \ge 0

Therefore,

    -\frac{1}{2}[r_x(0) + r_y(0)] \le r_{xy}(k) \le \frac{1}{2}[r_x(0) + r_y(0)]

and the second inequality follows.
3.17 Given a wide-sense stationary random process x(n), we would like to design a "linear predictor" that will predict the value of x(n+1) using a linear combination of x(n) and x(n-1). Thus, our predictor for x(n+1) is of the form

    \hat{x}(n+1) = a x(n) + b x(n-1)

where a and b are constants. Assume that the process has zero mean, E{x(n)} = 0, and that we want to minimize the mean-square error

    \xi = E{ [x(n+1) - \hat{x}(n+1)]^2 }

(a) With r_x(k) the autocorrelation of x(n), determine the optimum predictor of x(n+1) by finding the values of a and b that minimize the mean-square error.
(b) What is the minimum mean-square error of the predictor? Express your answer in terms of the autocorrelation r_x(k).
(c) If x(n+1) is uncorrelated with x(n), what form does your predictor take?
(d) If x(n+1) is uncorrelated with both x(n) and x(n-1), what form does your predictor take?

Solution

(a) The mean-square error that we want to minimize is

    \xi = E{ [x(n+1) - \hat{x}(n+1)]^2 } = E{ x^2(n+1) - 2 x(n+1)\hat{x}(n+1) + \hat{x}^2(n+1) }

Since the estimate of x(n+1) is

    \hat{x}(n+1) = a x(n) + b x(n-1)

then setting the derivatives of \xi with respect to a and b equal to zero we have

    \frac{\partial\xi}{\partial a} = -2E{x(n+1)x(n)} + E{2\hat{x}(n+1)x(n)} = 0
    \frac{\partial\xi}{\partial b} = -2E{x(n+1)x(n-1)} + E{2\hat{x}(n+1)x(n-1)} = 0

Dividing by two and substituting for \hat{x}(n+1) gives

    a E{x^2(n)} + b E{x(n)x(n-1)} = E{x(n+1)x(n)}
    a E{x(n)x(n-1)} + b E{x^2(n-1)} = E{x(n+1)x(n-1)}

Putting these equations in matrix form we have

    [ r_x(0)  r_x(1) ] [ a ]   [ r_x(1) ]
    [ r_x(1)  r_x(0) ] [ b ] = [ r_x(2) ]

Solving for a and b we find

    a = \frac{r_x(0) r_x(1) - r_x(1) r_x(2)}{r_x^2(0) - r_x^2(1)}  ;  b = \frac{r_x(0) r_x(2) - r_x^2(1)}{r_x^2(0) - r_x^2(1)}

(b) For the minimum mean-square error we have

    \xi_{min} = E{ [x(n+1) - \hat{x}(n+1)] [x(n+1) - \hat{x}(n+1)] }
              = E{ [x(n+1) - \hat{x}(n+1)] x(n+1) } - E{ [x(n+1) - \hat{x}(n+1)] \hat{x}(n+1) }

Note that for the values of a and b that minimize the mean-square error, the derivatives of \xi with respect to a and b are equal to zero, which implies that the second term in the equation above is equal to zero. Therefore, the minimum mean-square error is

    \xi_{min} = E{ [x(n+1) - a x(n) - b x(n-1)] x(n+1) } = r_x(0) - a r_x(1) - b r_x(2)

(c) If x(n+1) and x(n) are uncorrelated, then r_x(1) = 0, and the values for a and b become

    a = 0  ;  b = \frac{r_x(2)}{r_x(0)}

In this case, the linear predictor is

    \hat{x}(n+1) = \frac{r_x(2)}{r_x(0)} x(n-1)

(d) If x(n+1) is uncorrelated with both x(n) and x(n-1), then r_x(1) = r_x(2) = 0, the values for a and b are

    a = b = 0

and the linear predictor is

    \hat{x}(n+1) = 0

which is equal to the expected value of x(n+1),

    \hat{x}(n+1) = E{x(n+1)} = 0
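The 2x2 normal equations are easy to solve numerically. The autocorrelation r(k) = 0.9^{|k|} below is an illustrative choice (it corresponds to an AR(1) process), for which the optimal predictor should reduce to a = 0.9, b = 0 with minimum error r(0) - a r(1) - b r(2) = 0.19.

```python
import numpy as np

r = lambda k: 0.9 ** abs(k)                # illustrative AR(1) autocorrelation
R = np.array([[r(0), r(1)],
              [r(1), r(0)]])
a, b = np.linalg.solve(R, np.array([r(1), r(2)]))
mmse = r(0) - a * r(1) - b * r(2)
print(a, b, mmse)                          # 0.9, 0.0, 0.19
```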
3.18 True or False: If x(n) is a WSS process and y(n) is the process that is formed by filtering x(n) with a stable, linear shift-invariant filter h(n), then

    \sigma_y^2 = \sigma_x^2 \sum_{n=-\infty}^{\infty} |h(n)|^2

where \sigma_x^2 and \sigma_y^2 are the variances of the processes x(n) and y(n), respectively.

Solution

If a WSS process x(n) is filtered with a filter that has a unit sample response h(n), then the autocorrelation of the output process is

    r_y(k) = r_x(k) * h(k) * h^*(-k)

Assuming that x(n) has zero mean, then \sigma_y^2 = r_y(0). Therefore, the question is whether or not the following relationship is true:

    r_y(0) = r_x(0) \sum_{n=-\infty}^{\infty} |h(n)|^2

If this were true, then \sigma_y^2 would depend only on r_x(0), and not on r_x(k) for k \ne 0. Clearly, this is not true unless r_x(k) = 0 for |k| > 0, i.e., unless x(n) is white noise. Therefore, the statement is False.
Chapter 3

3.19 Show that a sufficient condition for a wide-sense stationary process to be ergodic in the mean is that the autocovariance be absolutely summable,

    \sum_{k=-\infty}^{\infty} |c_x(k)| < \infty

Solution

Beginning with the variance of the sample mean,

    Var{ \hat{m}_x(N) } = \frac{1}{N} \sum_{k=-N+1}^{N-1} \left( 1 - \frac{|k|}{N} \right) c_x(k) \le \frac{1}{N} \sum_{k=-N+1}^{N-1} |c_x(k)|

note that if

    \sum_{k=-\infty}^{\infty} |c_x(k)| < \infty

then

    \lim_{N->\infty} Var{ \hat{m}_x(N) } \le \lim_{N->\infty} \frac{1}{N} \sum_{k=-N+1}^{N-1} |c_x(k)| = 0

and the process is ergodic in the mean.
3.20 For each of the following, determine whether the statements are True or False.

(a) All wide-sense stationary moving average processes are ergodic in the mean.
(b) All wide-sense stationary autoregressive processes are ergodic in the mean.

Solution

(a) This statement is true. Recall that a WSS moving average process has an autocovariance that is finite in length, c_x(k) = 0 for all |k| > k_0 for some k_0. Therefore, if we let

    C = \sum_{k=0}^{k_0} c_x(k)

then

    \lim_{N->\infty} \frac{1}{N} \sum_{k=0}^{N-1} c_x(k) = \lim_{N->\infty} \frac{C}{N} = 0

and ergodicity is established.

(b) This statement is true and may be shown as follows. An autoregressive process is formed by filtering finite variance white noise w(n) with a stable, causal all-pole filter h(n),

    x(n) = h(n) * w(n)

Assuming, without any loss in generality, that w(n) has zero mean, the covariance of x(n) is

    c_x(k) = \sigma_w^2 h(k) * h(-k)

where \sigma_w^2 < \infty is the variance of w(n). The condition that h(n) is stable guarantees that c_x(k) is absolutely summable and, therefore, x(n) is ergodic in the mean.
3.21 Let x(n) be a real WSS Gaussian random process with autocovariance function c_x(k). Show that x(n) will be correlation ergodic if and only if

    \lim_{N->\infty} \frac{1}{N} \sum_{k=0}^{N-1} c_x^2(k) = 0

Hint: Use the moment factoring theorem for real Gaussian random variables, which states that

    E{x_1 x_2 x_3 x_4} = E{x_1 x_2}E{x_3 x_4} + E{x_1 x_3}E{x_2 x_4} + E{x_1 x_4}E{x_2 x_3}

Solution

We are given a WSS Gaussian random process, x(n), that is ergodic in the mean. For convenience, let us assume that the process has zero mean (so that c_x(k) = r_x(k)). In this problem we will be using the moment factoring theorem for Gaussian random variables: if x(m), x(n), x(k), and x(l) are zero-mean jointly distributed Gaussian random variables, then the fourth-order moment is

    E{x(m)x(n)x(k)x(l)} = E{x(m)x(n)}E{x(k)x(l)} + E{x(m)x(k)}E{x(n)x(l)} + E{x(m)x(l)}E{x(n)x(k)}

Now, for a fixed value of k, let

    y(n) = x(n+k)x(n)

Since E{y(n)} = r_x(k), it follows that x(n) will be correlation ergodic,

    \lim_{N->\infty} \frac{1}{N} \sum_{n=0}^{N-1} x(n+k)x(n) = r_x(k)

if and only if(1)

    \lim_{N->\infty} \frac{1}{2N+1} \sum_{l=-N}^{N} c_y(l) = 0        (P3.21-1)

where

    c_y(l) = E{y(m+l)y(m)} - E{y(m+l)}E{y(m)}
           = E{x(m+l+k)x(m+l)x(m+k)x(m)} - r_x^2(k)

Using the moment factoring theorem, we have

    c_y(l) = E{x(m+l+k)x(m+l)}E{x(m+k)x(m)} + E{x(m+l+k)x(m+k)}E{x(m+l)x(m)}
             + E{x(m+l+k)x(m)}E{x(m+l)x(m+k)} - r_x^2(k)
           = r_x^2(l) + r_x(k+l) r_x(k-l)

Therefore, x(n) will be correlation ergodic if and only if

    \lim_{N->\infty} \frac{1}{N} \sum_{l=0}^{N-1} [ r_x^2(l) + r_x(k+l) r_x(k-l) ] = 0        (P3.21-2)

What we would like to establish is the equivalence of Eq. (P3.21-2) and the stated condition on c_x^2(k). It is clear that if Eq. (P3.21-2) holds for all k, then the stated condition holds; we may see this by setting k = 0 in Eq. (P3.21-2). To establish the converse, we use the inequality

    |r_x(k+l) r_x(k-l)| \le \frac{1}{2} [ r_x^2(k+l) + r_x^2(k-l) ]

Therefore,

    \frac{1}{N} \sum_{l=0}^{N-1} |r_x(k+l) r_x(k-l)| \le \frac{1}{2N} \sum_{l=0}^{N-1} r_x^2(k+l) + \frac{1}{2N} \sum_{l=0}^{N-1} r_x^2(k-l)
        \le \frac{1}{N} \sum_{l=0}^{N+k-1} r_x^2(l) + \frac{k}{N} r_x^2(0)

where, in the last inequality, we used the fact that r_x^2(l) \le r_x^2(0) for all l. Thus, as N -> \infty, the right-hand side goes to zero whenever the stated condition holds, and we have established the equivalence of Eq. (P3.21-2) and the stated condition.

(1) Note that we are assuming that y(n) is wide-sense stationary. This follows because a WSS Gaussian process is also strict-sense stationary and, therefore, y(n) is WSS.
3.22 Ergodicity in the mean depends on the asymptotic behavior of the autocovariance of a process, c_x(k). The asymptotic behavior of c_x(k), however, is related to the behavior of the power spectrum P_x(e^{j\omega}) at \omega = 0. Show that x(n) is ergodic in the mean if and only if P_x(e^{j\omega}) is continuous at the origin, \omega = 0.

Hint: Express Eq. (3.65) as a limit, as N -> \infty, of the convolution of c_x(k) with a pulse

    p_N(n) = 1/N for 0 \le n < N, and p_N(n) = 0 otherwise

with the convolution being evaluated at k = N.

Solution

A necessary and sufficient condition for a WSS process x(n) with autocovariance c_x(k) to be ergodic in the mean is

    \lim_{N->\infty} \frac{1}{N} \sum_{k=0}^{N-1} c_x(k) = 0

To show that x(n) is ergodic in the mean if and only if P_x(e^{j\omega}) is continuous at the origin, let

    s_N(k) = c_x(k) * p_N(k)

where p_N(n) is the pulse defined above. Note that

    s_N(N) = \frac{1}{N} \sum_{k=0}^{N-1} c_x(k)

Since s_N(k) is the convolution of c_x(k) with p_N(k), then

    S_N(e^{j\omega}) = P_x(e^{j\omega}) P_N(e^{j\omega})

where

    P_N(e^{j\omega}) = e^{-j(N-1)\omega/2} \frac{\sin(N\omega/2)}{N \sin(\omega/2)}

Therefore,

    s_N(N) = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_N(e^{j\omega}) e^{jN\omega} d\omega
           = \frac{1}{2\pi} \int_{-\pi}^{\pi} P_x(e^{j\omega}) e^{-j(N-1)\omega/2} \frac{\sin(N\omega/2)}{N \sin(\omega/2)} e^{jN\omega} d\omega

Note that the term multiplying the power spectrum P_x(e^{j\omega}) inside the integral is bounded by one in magnitude; as N -> \infty, it goes to zero for all \omega \ne 0, while at \omega = 0 it is equal to one. Therefore, s_N(N) goes to zero as N -> \infty if and only if P_x(e^{j\omega})|_{\omega=0^+} = P_x(e^{j\omega})|_{\omega=0^-},
i.e., if and only if P_x(e^{j\omega}) is continuous at the origin.

3.23 In Section 3.2.4 it was shown that for real-valued zero-mean random variables the correlation coefficient is bounded by one in magnitude,

    |\rho_{xy}| \le 1

Show that this bound also applies when x and y are complex random variables with nonzero mean. Determine what relationship must hold between x and y if |\rho_{xy}| = 1.

Solution

Without any loss of generality, we may assume zero mean for both x and y. For nonzero-mean random variables, the following derivation is modified by replacing x with x - m_x and y with y - m_y. What we want to show is that

    |E{x y^*}|^2 \le E{|x|^2} E{|y|^2}

Note that for any constant a,

    E{ |x - a y|^2 } \ge 0

with equality if and only if x = a y with probability one. Expanding the square we have

    E{|x|^2} - a^* E{x y^*} - a E{x^* y} + |a|^2 E{|y|^2} \ge 0

Now let

    a = \frac{E{x y^*}}{E{|y|^2}}

Then

    E{|x|^2} - \frac{|E{x y^*}|^2}{E{|y|^2}} - \frac{|E{x y^*}|^2}{E{|y|^2}} + \frac{|E{x y^*}|^2}{E{|y|^2}} \ge 0

Cancelling terms and simplifying, this becomes

    E{|x|^2} E{|y|^2} \ge |E{x y^*}|^2

and the inequality follows. Moreover, |\rho_{xy}| = 1 holds if and only if equality holds above, i.e., if and only if x and y are linearly related, x - m_x = a(y - m_y) with probability one.
Chapter 3

3.24 Let P_x(e^{j\omega}) be the power spectrum of a wide-sense stationary process x(n) and let \lambda_k be the eigenvalues of the M x M autocorrelation matrix R_x. Szego's theorem states that if g(.) is a continuous function then

    \lim_{M->\infty} \frac{g(\lambda_1) + g(\lambda_2) + ... + g(\lambda_M)}{M} = \frac{1}{2\pi} \int_{-\pi}^{\pi} g[P_x(e^{j\omega})] d\omega

Using this theorem, show that

    \lim_{M->\infty} [\det R_x]^{1/M} = \exp\left\{ \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln[P_x(e^{j\omega})] d\omega \right\}

Solution

The determinant of a matrix is equal to the product of its eigenvalues,

    \det R_x = \prod_{k=1}^{M} \lambda_k

Therefore,

    (\det R_x)^{1/M} = \left( \prod_{k=1}^{M} \lambda_k \right)^{1/M}

Taking logarithms, we have

    \ln (\det R_x)^{1/M} = \frac{1}{M} \sum_{k=1}^{M} \ln \lambda_k

Using Szego's theorem with g(x) = \ln x yields

    \lim_{M->\infty} \ln (\det R_x)^{1/M} = \lim_{M->\infty} \frac{1}{M} \sum_{k=1}^{M} \ln \lambda_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} \ln[P_x(e^{j\omega})] d\omega

and the result follows.
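Szego's limit can be illustrated numerically with an AR(1) autocorrelation r(k) = a^{|k|} (an illustrative assumption, not from the text). For this sequence det R_M = (1 - a^2)^{M-1}, so det(R_M)^{1/M} -> 1 - a^2, which should match the exponential of the log-spectrum integral.

```python
import numpy as np

a, M = 0.5, 200
idx = np.arange(M)
R = a ** np.abs(idx[:, None] - idx[None, :])      # M x M Toeplitz autocorrelation
sign, logdet = np.linalg.slogdet(R)
lhs = np.exp(logdet / M)                          # det(R)^(1/M)

# exp{ (1/2pi) * integral of ln Px } via a mean over a uniform periodic grid
w = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
Px = (1 - a**2) / np.abs(1 - a * np.exp(-1j * w)) ** 2
rhs = np.exp(np.mean(np.log(Px)))
print(lhs, rhs)                                   # both near 1 - a^2 = 0.75
```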
3.25 In some applications, the data collection process may be flawed so that there are either missing data values or outliers that should be discarded. Suppose that we are given N samples of a WSS process x(n) with one value, x(n_0), missing. Let x be the vector containing the given sample values,

    x = [x(0), x(1), ..., x(n_0 - 1), x(n_0 + 1), ..., x(N)]^T

(a) Let R_x = E{x x^H} be the autocorrelation matrix for the vector x. Which of the following statements are true:
    1. R_x is Toeplitz.
    2. R_x is Hermitian.
    3. R_x is positive semidefinite.
(b) Given the autocorrelation matrix for x, is it possible to find the autocorrelation matrix for the vector

    \bar{x} = [x(0), x(1), ..., x(N)]^T

that does not have x(n_0) missing? If so, how would you find it? If not, explain why not.

Solution

(a) The matrix is not Toeplitz. This may be shown easily by example. If x = [x(0), x(2), x(3)]^T, then

    R_x = [ r_x(0)  r_x(2)  r_x(3) ]
          [ r_x(2)  r_x(0)  r_x(1) ]
          [ r_x(3)  r_x(1)  r_x(0) ]

which is clearly not Toeplitz. However, by definition, R_x is Hermitian,

    R_x^H = ( E{x x^H} )^H = E{x x^H} = R_x

Finally, R_x is positive semidefinite, which may be shown as follows. Let v be any vector. Then

    v^H R_x v = v^H E{x x^H} v = E{ |v^H x|^2 } \ge 0

Therefore, R_x \ge 0.

(b) Yes. Note that the first column of the matrix R_x formed from the vector with x(n_0) missing is

    [ r_x(0), r_x(1), ..., r_x(n_0 - 1), r_x(n_0 + 1), ..., r_x(N) ]^T

Therefore, all that we need is the missing correlation value r_x(n_0). Note, however, that this value appears elsewhere in R_x, as the correlation between samples whose indices differ by n_0 (see the example in part (a), where r_x(1) appears even though x(1) is missing). Given r_x(n_0) along with the values above, the Toeplitz matrix R_{\bar{x}} may then be formed.
3.26 The power spectrum of a wide-sense stationary process x(n) is

    P_x(e^{j\omega}) = \frac{25 - 24\cos\omega}{26 - 10\cos\omega}

Find the whitening filter H(z) that produces unit variance white noise when the input is x(n).

Solution

Expanding the cosines in the expression for the power spectrum we have

    P_x(e^{j\omega}) = \frac{25 - 12e^{j\omega} - 12e^{-j\omega}}{26 - 5e^{j\omega} - 5e^{-j\omega}}

or, in terms of z, the power spectrum becomes

    P_x(z) = \frac{25 - 12z - 12z^{-1}}{26 - 5z - 5z^{-1}} = \frac{(4 - 3z)(4 - 3z^{-1})}{(5 - z)(5 - z^{-1})} = G(z) G(z^{-1})

where

    G(z) = \frac{4 - 3z^{-1}}{5 - z^{-1}}

is the minimum-phase spectral factor, with its zero at z = 3/4 and its pole at z = 1/5, both inside the unit circle. Therefore, if

    H(z) = \frac{1}{G(z)} = \frac{5 - z^{-1}}{4 - 3z^{-1}}

then H(z) is stable and causal (its pole is at z = 3/4), and

    y(n) = h(n) * x(n)

will be unit variance white noise.
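A frequency-domain sanity check of the whitening filter: |H(e^{j\omega})|^2 P_x(e^{j\omega}) should be identically one.

```python
import numpy as np

w = np.linspace(-np.pi, np.pi, 2001)
z1 = np.exp(-1j * w)                         # z^{-1} on the unit circle
Px = (25 - 24 * np.cos(w)) / (26 - 10 * np.cos(w))
H = (5 - z1) / (4 - 3 * z1)                  # whitening filter from the solution
Pout = np.abs(H) ** 2 * Px
print(Pout.min(), Pout.max())                # both 1 (to rounding)
```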
3.27 We have seen that the autocorrelation matrix of a WSS process is positive semidefinite,

    R_x \ge 0

The spectral factorization theorem states that if P_x(e^{j\omega}) is continuous then the power spectrum may be factored as

    P_x(e^{j\omega}) = \sigma_0^2 |Q(e^{j\omega})|^2

where Q(e^{j\omega}) corresponds to a causal and stable filter.

(a) Assuming that \sigma_0^2 \ne 0 and Q(e^{j\omega}) is nonzero for all \omega, show that the autocorrelation matrix is positive definite.
(b) Give an example of a nontrivial process for which R_x is not positive definite.

Solution

(a) The positive definite property of R_x may be easily established with the help of the Eigenvalue Extremal Property, which states that the eigenvalues of the autocorrelation matrix of a zero-mean WSS random process are upper and lower bounded by the maximum and minimum values, respectively, of the power spectrum,

    \min_{\omega} P_x(e^{j\omega}) \le \lambda_i \le \max_{\omega} P_x(e^{j\omega})

If \sigma_0^2 \ne 0 and Q(e^{j\omega}) is nonzero for all \omega, then P_x(e^{j\omega}) > 0 for all \omega and, therefore,

    0 < \min_{\omega} P_x(e^{j\omega}) \le \lambda_i  ;  i = 1, 2, ..., n

Thus, it follows that R_x is positive definite.

(b) A nontrivial process for which R_x is not positive definite is a harmonic process such as

    x(n) = A e^{j(n\omega_0 + \phi)}

where \phi is a random variable that is uniformly distributed over the interval [-\pi, \pi]. For this process, the autocorrelation matrix R_x has a rank of one.
SOLUTIONS TO CHAPTER 4
Signal Modeling

4.1 Find the Pade approximation of second order to a signal x(n) that is given by

    x = [2, 1, 0, -1, 0, 1, 0, -1, 0, 1, ...]^T

i.e., x(0) = 2, x(1) = 1, x(2) = 0, and so on. In other words, using an approximation of the form

    H(z) = \frac{b(0) + b(1)z^{-1} + b(2)z^{-2}}{1 + a(1)z^{-1} + a(2)z^{-2}}

find the coefficients b(0), b(1), b(2), a(1), and a(2).

Solution

The Pade equations that must be solved are

    [  2  0  0 ]             [ b(0) ]
    [  1  2  0 ] [  1   ]    [ b(1) ]
    [  0  1  2 ] [ a(1) ] =  [ b(2) ]
    [ -1  0  1 ] [ a(2) ]    [  0   ]
    [  0 -1  0 ]             [  0   ]

The last two equations in this set are

    [  0  1 ] [ a(1) ]      [ -1 ]
    [ -1  0 ] [ a(2) ] = -  [  0 ]

Solving for a(1) and a(2) we have

    a(1) = 0  ;  a(2) = 1

Using the first three equations, we may solve for b(0), b(1), and b(2) as follows:

    [ b(0) ]   [ 2  0  0 ] [ 1 ]   [ 2 ]
    [ b(1) ] = [ 1  2  0 ] [ 0 ] = [ 1 ]
    [ b(2) ]   [ 0  1  2 ] [ 1 ]   [ 2 ]

Therefore, the model is

    H(z) = \frac{2 + z^{-1} + 2z^{-2}}{1 + z^{-2}}
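The computation generalizes to a small Pade solver (a sketch; the function name and interface are my own, not from the text):

```python
import numpy as np

def pade(x, p, q):
    """Match x(0)..x(p+q) exactly with H(z) = B(z)/A(z), p poles and q zeros."""
    x = np.asarray(x, dtype=float)
    # denominator: x(n) + sum_k a(k) x(n-k) = 0 for n = q+1, ..., q+p
    X = np.array([[x[q + i - j] if q + i - j >= 0 else 0.0
                   for j in range(1, p + 1)]
                  for i in range(1, p + 1)])
    a = np.concatenate(([1.0], np.linalg.solve(X, -x[q + 1:q + p + 1])))
    # numerator: b(n) = sum_k a(k) x(n-k) for n = 0, ..., q
    b = np.array([sum(a[k] * x[n - k] for k in range(min(n, p) + 1))
                  for n in range(q + 1)])
    return a, b

a, b = pade([2, 1, 0, -1, 0, 1, 0], p=2, q=2)
print(a, b)          # a = [1, 0, 1],  b = [2, 1, 2]
```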
4.2 A third-order all-pole Pade approximation to a signal x(n) has been determined to be

    H(z) = \frac{1}{1 + 2z^{-1} + z^{-2} + 3z^{-3}}

What information about x(n) can be determined from this model?

Solution

The Pade approximation using p poles and q zeros matches the first p + q + 1 values of x(n) exactly (assuming that the Pade equations are nonsingular). Therefore, all that can be determined from the Pade model

    H(z) = \frac{1}{1 + 2z^{-1} + z^{-2} + 3z^{-3}}

which has p = 3 poles and q = 0 zeros, are the first 4 values of x(n). These may be determined by finding the inverse z-transform of H(z) or, alternatively, from the Pade equations,

    [ x(0)   0     0     0    ]             [ 1 ]
    [ x(1)  x(0)   0     0    ] [  1   ]    [ 0 ]
    [ x(2)  x(1)  x(0)   0    ] [ a(1) ] =  [ 0 ]
    [ x(3)  x(2)  x(1)  x(0)  ] [ a(2) ]    [ 0 ]
    [ x(4)  x(3)  x(2)  x(1)  ] [ a(3) ]    [ 0 ]

Substituting the given values for a(k) and b(k), these may be solved by back substitution as follows. From the first equation,

    x(0) = 1

Next, given x(0), we see from the second equation that

    x(1) + 2x(0) = 0

or x(1) = -2. Then, from the third equation we find

    x(2) + 2x(1) + x(0) = 0

or x(2) = 3. Finally, from the last equation we have

    x(3) + 2x(2) + x(1) + 3x(0) = 0

or x(3) = -7. Therefore,

    x = [1, -2, 3, -7, ...]^T
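The back substitution can be reproduced by simply running the model's difference equation, x(n) = \delta(n) - 2x(n-1) - x(n-2) - 3x(n-3); only the first p + q + 1 = 4 values are guaranteed to match the original signal.

```python
import numpy as np

a = [1.0, 2.0, 1.0, 3.0]                  # denominator coefficients of H(z)
x = np.zeros(6)
for n in range(len(x)):
    delta = 1.0 if n == 0 else 0.0
    x[n] = delta - sum(a[k] * x[n - k] for k in range(1, 4) if n - k >= 0)
print(x[:4])                              # [ 1. -2.  3. -7.]
```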
4.3 Suppose that a signal x(n) is known to be of the form

    x(n) = \sum_{k=1}^{L} c_k (\lambda_k)^n u(n)

where the \lambda_k are distinct complex numbers.

(a) Show that the Pade approximation method can be used to determine the parameters c_k and \lambda_k for k = 1, 2, ..., L. Is the answer unique?
(b) The first eight values of a signal x(n), which is known to be of the form given above with L = 3, are

    x = [32, 16, 8, 12, 18, 33, 64.5, 128.25]^T

Determine c_k and \lambda_k for k = 1, 2, 3.

Solution

(a) With

    x(n) = \sum_{k=1}^{L} c_k (\lambda_k)^n u(n)

the z-transform is

    X(z) = \sum_{k=1}^{L} \frac{c_k}{1 - \lambda_k z^{-1}} = \frac{b(0) + b(1)z^{-1} + ... + b(L-1)z^{-(L-1)}}{1 + a(1)z^{-1} + ... + a(L)z^{-L}} = \frac{B(z)}{A(z)}

which is a rational function of z of order (L-1) in the numerator and order L in the denominator. Therefore, the Pade approximation may be used to find the polynomials A(z) and B(z) provided p \ge L and q \ge L-1; with p = L and q = L-1 the answer is unique (up to the ordering of the terms), since the model then has exactly as many parameters as the data it matches. The coefficients \lambda_k are the roots of the polynomial A(z), and the coefficients c_k may be found from a partial fraction expansion of X(z).

(b) The Pade equations for the denominator coefficients are

    [ x(q)      x(q-1)  ...  x(q-p+1) ] [ a(1) ]       [ x(q+1) ]
    [ x(q+1)    x(q)    ...  x(q-p+2) ] [ a(2) ]  = -  [ x(q+2) ]
    [  ...                            ] [ ...  ]       [  ...   ]
    [ x(q+p-1)  ...          x(q)     ] [ a(p) ]       [ x(q+p) ]

If L = 3, then we set p = 3 and q = 2. With x = [32, 16, 8, 12, 18, 33, 64.5, 128.25]^T the Pade equations become

    [  8  16  32 ] [ a(1) ]       [ 12 ]
    [ 12   8  16 ] [ a(2) ]  = -  [ 18 ]
    [ 18  12   8 ] [ a(3) ]       [ 33 ]

The solution for a = [1, a(1), a(2), a(3)]^T is

    a(1) = -1.5  ;  a(2) = -0.75  ;  a(3) = 0.375

For the numerator coefficients, the Pade equations are

    [ b(0) ]   [ x(0)   0     0   ] [  1   ]   [  32 ]
    [ b(1) ] = [ x(1)  x(0)   0   ] [ a(1) ] = [ -32 ]
    [ b(2) ]   [ x(2)  x(1)  x(0) ] [ a(2) ]   [ -40 ]

Thus, the model is

    H(z) = \frac{32 - 32z^{-1} - 40z^{-2}}{1 - 1.5z^{-1} - 0.75z^{-2} + 0.375z^{-3}}

The \lambda_k are then found as the roots of A(z), and the c_k follow from a partial fraction expansion of this model.
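The procedure in (a) is essentially Prony's method; here is an illustrative end-to-end check with modes and amplitudes of my own choosing (not the data of part (b)): build x(n), recover the \lambda_k as roots of the characteristic polynomial, then recover the c_k from a Vandermonde system.

```python
import numpy as np

lam = np.array([0.9, 0.5, -0.4])          # illustrative modes
c = np.array([2.0, -1.0, 3.0])            # illustrative amplitudes
n = np.arange(10)
x = (c[None, :] * lam[None, :] ** n[:, None]).sum(axis=1)

# x(n) + a1 x(n-1) + a2 x(n-2) + a3 x(n-3) = 0 for n >= 3
M = np.column_stack([x[2:5], x[1:4], x[0:3]])
a = np.linalg.solve(M, -x[3:6])
lam_hat = np.sort(np.roots(np.concatenate(([1.0], a))).real)

V = lam_hat[None, :] ** n[:3, None]       # Vandermonde in the recovered modes
c_hat = np.linalg.solve(V, x[:3])
print(lam_hat, c_hat)                     # [-0.4, 0.5, 0.9] and [3, -1, 2]
```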
4.4 A consumer electronics device includes a DSP chip that contains a linear shift-invariant digital filter that is implemented in ROM. In order to perform some reverse engineering on the product, it is necessary to determine the system function of the filter. Therefore, the unit sample response is measured and it is determined that the first eight values are as listed in the following table.

    n      0  1  2  3  4  5  6  7
    h(n)   1  2  3  2  1  2  0  1

Having no knowledge of the order of the filter, it is assumed that H(z) contains two poles and two zeros.

(a) Based on this assumption, determine a candidate system function, H(z), for the filter.
(b) Based on the solution found in (a) and the given values for h(n), is it possible to determine whether or not the hypothesis about the order of the system is correct? Explain.

Solution

(a) The Pade approximation may be used to find the system function of the filter. With p = q = 2 the Pade equations are

    [ 1  0  0 ]             [ b(0) ]
    [ 2  1  0 ] [  1   ]    [ b(1) ]
    [ 3  2  1 ] [ a(1) ] =  [ b(2) ]
    [ 2  3  2 ] [ a(2) ]    [  0   ]
    [ 1  2  3 ]             [  0   ]

Using the last two equations we have

    [ 3  2 ] [ a(1) ]      [ 2 ]
    [ 2  3 ] [ a(2) ] = -  [ 1 ]

Solving for a(1) and a(2) we find

    [ a(1) ]     1 [  3  -2 ] [ 2 ]   [ -4/5 ]
    [ a(2) ] = - - [ -2   3 ] [ 1 ] = [  1/5 ]
                 5

Next, solving for b(0), b(1), and b(2) using the first three equations we have

    [ b(0) ]   [ 1  0  0 ] [   1  ]   [  1  ]
    [ b(1) ] = [ 2  1  0 ] [ -4/5 ] = [ 6/5 ]
    [ b(2) ]   [ 3  2  1 ] [  1/5 ]   [ 8/5 ]

Therefore, the system function is

    H(z) = \frac{1 + \frac{6}{5}z^{-1} + \frac{8}{5}z^{-2}}{1 - \frac{4}{5}z^{-1} + \frac{1}{5}z^{-2}}

(b) We may check to see if the inverse z-transform of H(z) matches all of the given values of h(n). Alternatively, if the system function is correct, then the equation error

    e(n) = h(n) + a(1)h(n-1) + a(2)h(n-2)

should be equal to zero for n \ge 5. Since

    e(5) = h(5) + a(1)h(4) + a(2)h(3) = 2 - 4/5 + 2/5 = 8/5 \ne 0

the hypothesis about the model order is not correct. There must be more poles and/or zeros.
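The hypothesis test in (b) can be automated: fit a (2,2) model from h(0)-h(4) and check whether the equation error e(n) = h(n) + a(1)h(n-1) + a(2)h(n-2) vanishes for n >= 5.

```python
import numpy as np

h = np.array([1, 2, 3, 2, 1, 2, 0, 1], dtype=float)
A = np.array([[h[2], h[1]],
              [h[3], h[2]]])
a = np.linalg.solve(A, -h[3:5])           # a = [-4/5, 1/5]
e = np.convolve(np.r_[1.0, a], h)         # e(n) = h(n) + a1 h(n-1) + a2 h(n-2)
print(a, e[5:8])                          # e(5) = 8/5 != 0: order (2,2) is wrong
```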
4.5 The Pade approximation models a signal as the response of a filter to a unit sample input, \delta(n). Suppose, however, that we would like to model a signal x(n) as the step response of a filter, as shown in the following figure.

    [Figure: a unit step u(n) drives the filter H(z) = B(z)/A(z), producing the model output \hat{x}(n).]

In the following, assume that H(z) is a second-order filter having a system function of the form

    H(z) = \frac{b(0) + b(1)z^{-1} + b(2)z^{-2}}{1 + a(1)z^{-1} + a(2)z^{-2}}

(a) Using the Pade approximation method with a unit step input, derive the set of equations that must be solved so that

    \hat{x}(n) = x(n)  for  n = 0, 1, ..., 4

(b) If the first eight values of the signal x(n) are

    x = [1, 0, 2, -1, 2, 0, 1, 2]^T

find b(0), b(1), b(2), a(1), and a(2).

Solution

(a) What we would like to find are polynomials A(z) and B(z) so that

    \hat{X}(z) = \frac{B(z)}{A(z)} U(z) = \frac{B(z)}{A(z)} \cdot \frac{1}{1 - z^{-1}} = X(z)

or

    B(z) = (1 - z^{-1}) A(z) X(z)

Combining the term (1 - z^{-1}) with X(z), define x'(n) = x(n) - x(n-1), with x(n) = 0 for n < 0. Expressing the equation above in the time domain leads to the following set of linear equations that may be solved for the filter coefficients a(k) and b(k):

    [ x'(0)   0      0     ]             [ b(0) ]
    [ x'(1)  x'(0)   0     ] [  1   ]    [ b(1) ]
    [ x'(2)  x'(1)  x'(0)  ] [ a(1) ] =  [ b(2) ]
    [ x'(3)  x'(2)  x'(1)  ] [ a(2) ]    [  0   ]
    [ x'(4)  x'(3)  x'(2)  ]             [  0   ]

Alternatively, combining (1 - z^{-1}) with A(z) defines coefficients a'(k) = a(k) - a(k-1), with a(0) = 1 and a(k) = 0 for k < 0 and k > 2, and gives an analogous set of equations in terms of x(n) and a'(k). Yet another possibility is to write

    A(z) X(z) = \frac{1}{1 - z^{-1}} B(z)

where the right-hand side is the convolution of a step with the coefficients b(k), so that

    [ x(0)   0     0    ]             [ b(0)               ]
    [ x(1)  x(0)   0    ] [  1   ]    [ b(0) + b(1)        ]
    [ x(2)  x(1)  x(0)  ] [ a(1) ] =  [ b(0) + b(1) + b(2) ]
    [ x(3)  x(2)  x(1)  ] [ a(2) ]    [ b(0) + b(1) + b(2) ]
    [ x(4)  x(3)  x(2)  ]             [ b(0) + b(1) + b(2) ]

Any one of these sets of equations may be used to solve for the coefficients a(k) and b(k).

(b) Using the first approach derived in part (a), we first form the sequence x'(n),

    x' = [1, -1, 2, -3, 3, -2, 1, 1]^T

From the last two equations we have

    [ x'(2)  x'(1) ] [ a(1) ]      [ x'(3) ]            [  2  -1 ] [ a(1) ]   [  3 ]
    [ x'(3)  x'(2) ] [ a(2) ] = -  [ x'(4) ]    i.e.    [ -3   2 ] [ a(2) ] = [ -3 ]

Solving for a(1) and a(2) we find

    [ a(1) ]   [ 2  1 ] [  3 ]   [ 3 ]
    [ a(2) ] = [ 3  2 ] [ -3 ] = [ 3 ]

Finally, solving for b(0), b(1), and b(2) we have

    b(0) = x'(0) = 1
    b(1) = x'(1) + a(1) x'(0) = -1 + 3 = 2
    b(2) = x'(2) + a(1) x'(1) + a(2) x'(0) = 2 - 3 + 3 = 2

Therefore, the model is

    H(z) = \frac{1 + 2z^{-1} + 2z^{-2}}{1 + 3z^{-1} + 3z^{-2}}
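A direct check of part (b): drive the fitted model with a unit step and confirm that the first five output samples reproduce x(n).

```python
import numpy as np

b = [1.0, 2.0, 2.0]                        # numerator of the fitted model
a = [1.0, 3.0, 3.0]                        # denominator of the fitted model
y = np.zeros(8)
for n in range(len(y)):
    feed = sum(b[k] for k in range(3) if n - k >= 0)   # step input: u(n-k) = 1
    fb = sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
    y[n] = feed - fb
print(y[:5])                               # [ 1.  0.  2. -1.  2.]
```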
4.6 With a real-valued signal x(n) known only for n = 0, 1, ..., N, the backwards covariance method finds the coefficients of an all-pole model that minimize the backward prediction error

    E_p^- = \sum_{n=p}^{N} [e_p^-(n)]^2

where

    e_p^-(n) = x(n-p) + \sum_{k=1}^{p} a_p(k) x(n+k-p)

(a) Show that the coefficients a_p(k) that minimize E_p^- satisfy a set of normal equations of the form

    R_x a_p = -r_x

and find explicit expressions for the entries in R_x and r_x.
(b) Is the solution to the backwards covariance method the same as the solution to the covariance method? Why or why not?
(c) Consider a new error that is the sum of the forward and backward prediction errors,

    E_p^B = \sum_{n=p}^{N} { [e_p^+(n)]^2 + [e_p^-(n)]^2 }

where e_p^-(n) is the error defined above and e_p^+(n) is the forward prediction error used in the covariance method,

    e_p^+(n) = x(n) + \sum_{k=1}^{p} a_p(k) x(n-k)

Derive the normal equations for the coefficients that minimize this error. (This approach is known as the Modified Covariance Method.)
(d) Consider the signal

    x(n) = \beta^n  ;  n = 0, 1, ..., N

With p = 1, find the first-order all-pole model that minimizes E_1^B, and determine the minimum value of E_1^B. For what values of \beta is the model stable? What happens to the model and the modeling error as N -> \infty?

Solution

(a) With E_p^- = \sum_{n=p}^{N} [e_p^-(n)]^2, setting the partial derivative of E_p^- with respect to a_p(l) equal to zero gives

    \frac{\partial E_p^-}{\partial a_p(l)} = \sum_{n=p}^{N} 2 e_p^-(n) \frac{\partial e_p^-(n)}{\partial a_p(l)} = 2 \sum_{n=p}^{N} e_p^-(n) x(n+l-p) = 0

Dividing by two and substituting for e_p^-(n) yields

    \sum_{n=p}^{N} x(n-p) x(n+l-p) + \sum_{k=1}^{p} a_p(k) \sum_{n=p}^{N} x(n+k-p) x(n+l-p) = 0

If we define

    r_x(k,l) = \sum_{n=p}^{N} x(n+k-p) x(n+l-p) = \sum_{n=0}^{N-p} x(n+k) x(n+l)

then the normal equations become

    \sum_{k=1}^{p} a_p(k) r_x(k,l) = -r_x(0,l)  ;  l = 1, 2, ..., p

(b) No, the backwards covariance method does not give the same solution as the covariance method, since the definitions of r_x(k,l) are different in the two methods. Specifically, for the covariance method

    r_x(k,l) = \sum_{n=p}^{N} x(n-l) x(n-k)

(c) As before, we differentiate E_p^B with respect to a_p(l) and set the result equal to zero,

    \frac{\partial E_p^B}{\partial a_p(l)} = \sum_{n=p}^{N} 2 [ e_p^+(n) x(n-l) + e_p^-(n) x(n+l-p) ] = 0

Dividing by two and substituting for e_p^+(n) and e_p^-(n) we have

    0 = \sum_{n=p}^{N} [ x(n)x(n-l) + x(n-p)x(n+l-p) ] + \sum_{k=1}^{p} a_p(k) \sum_{n=p}^{N} [ x(n-k)x(n-l) + x(n+k-p)x(n+l-p) ]

Defining

    r_x(k,l) = \sum_{n=p}^{N} x(n-l) x(n-k)

we have

    r_x(l,0) + r_x(p-l, p) + \sum_{k=1}^{p} a_p(k) [ r_x(l,k) + r_x(p-l, p-k) ] = 0

Therefore, the normal equations are

    \sum_{k=1}^{p} a_p(k) [ r_x(l,k) + r_x(p-l, p-k) ] = - [ r_x(l,0) + r_x(p-l, p) ]  ;  l = 1, ..., p

(d) With p = 1 we have

    a(1) = - \frac{r_x(1,0) + r_x(0,1)}{r_x(1,1) + r_x(0,0)}

Since x(n) = \beta^n for n = 0, 1, ..., N, then

    r_x(0,0) = \sum_{n=1}^{N} x^2(n) = \beta^2 \frac{1 - \beta^{2N}}{1 - \beta^2}

    r_x(1,1) = \sum_{n=1}^{N} x^2(n-1) = \frac{1 - \beta^{2N}}{1 - \beta^2}

    r_x(0,1) = r_x(1,0) = \sum_{n=1}^{N} x(n) x(n-1) = \beta \frac{1 - \beta^{2N}}{1 - \beta^2}

Therefore,

    a(1) = - \frac{2\beta}{1 + \beta^2}

which does not depend on N. Note that for any value of \beta,

    |a(1)| = \frac{2|\beta|}{1 + \beta^2} \le 1

since (1 - |\beta|)^2 \ge 0. Thus, the model is stable for all values of \beta. For the modeling error, we have

    E_p^B = r_x(0,0) + r_x(p,p) + \sum_{k=1}^{p} a_p(k) [ r_x(0,k) + r_x(p, p-k) ]

Therefore,

    E_1^B = r_x(0,0) + r_x(1,1) + a(1) [ r_x(0,1) + r_x(1,0) ]
          = (1 + \beta^2) \frac{1 - \beta^{2N}}{1 - \beta^2} - \frac{2\beta}{1 + \beta^2} \cdot 2\beta \frac{1 - \beta^{2N}}{1 - \beta^2}
          = \frac{(1 - \beta^2)(1 - \beta^{2N})}{1 + \beta^2}

For |\beta| < 1 this converges to (1 - \beta^2)/(1 + \beta^2) as N -> \infty, and the modeling error does not go to zero as N -> \infty.
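Part (d) can be verified numerically; the data sums below implement r_x(k,l) for p = 1, and the resulting coefficient matches -2\beta/(1+\beta^2) for any N (the values of beta and N are illustrative).

```python
import numpy as np

beta, N = 0.8, 50
x = beta ** np.arange(N + 1)
num = 2.0 * np.sum(x[1:] * x[:-1])                  # r(1,0) + r(0,1)
den = np.sum(x[:-1] ** 2) + np.sum(x[1:] ** 2)      # r(1,1) + r(0,0)
a1 = -num / den
print(a1, -2 * beta / (1 + beta**2))                # the two agree
```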
Chapter
4 model for an unknown
75 system S using the
4.7 Suppose that we would like to derive a rational approach shown in the following figure
x(n)
S'
yen)
ern) A(z)
+'"\
./

B(z)
For a given input x(n) the output of the system, yen), is observed. The coefficients of the two FIR filters A(z) and B(z) that minimize the sum of the squares of the error signal ern) are then to be determined. Assume that the sum is for all n 2: 0 as in Eq. (i1. 73). (a) Derive the normal equations and B(z). that define the optimal solution for the coefficients of A(z)
(b) The philosophy of this method is that if the error is small, then B(z)/A(z) is a reasonable model for S. Suppose that S is a linear shift-invariant system with a rational system function. Show that this method will identify the parameters of S exactly, assuming that the orders of the filters A(z) and B(z) are chosen appropriately.

Solution

(a) Note that E(z) = Y(z)A(z) − X(z)B(z), so

e(n) = Σ_{k=0}^{p} a(k)y(n−k) − Σ_{k=0}^{q} b(k)x(n−k)
With

E = Σ_{n=0}^{∞} e²(n)

the normal equations are found by setting the derivatives of E with respect to a(k) and b(k) equal to zero. Thus,

∂E/∂a(k) = Σ_{n=0}^{∞} 2e(n)y(n−k)
         = 2 Σ_{n=0}^{∞} { Σ_{l=0}^{p} a(l)y(n−l) − Σ_{l=0}^{q} b(l)x(n−l) } y(n−k) = 0

and

∂E/∂b(k) = −Σ_{n=0}^{∞} 2e(n)x(n−k)
         = −2 Σ_{n=0}^{∞} { Σ_{l=0}^{p} a(l)y(n−l) − Σ_{l=0}^{q} b(l)x(n−l) } x(n−k) = 0
Dividing by two and rearranging the sums, we have

Σ_{l=0}^{p} a(l) { Σ_n y(n−l)y(n−k) } − Σ_{l=0}^{q} b(l) { Σ_n x(n−l)y(n−k) } = 0 ;  k = 1, ..., p

and

−Σ_{l=0}^{p} a(l) { Σ_n y(n−l)x(n−k) } + Σ_{l=0}^{q} b(l) { Σ_n x(n−l)x(n−k) } = 0 ;  k = 0, ..., q

If we define

r_xy(k,l) = Σ_{n=0}^{∞} x(n−l)y(n−k)
r_y(k,l)  = Σ_{n=0}^{∞} y(n−l)y(n−k)
r_x(k,l)  = Σ_{n=0}^{∞} x(n−l)x(n−k)

(and, similarly, r_yx(k,l) = Σ_{n=0}^{∞} y(n−l)x(n−k)), then these equations become

Σ_{l=0}^{p} a(l)r_y(k,l) − Σ_{l=0}^{q} b(l)r_xy(k,l) = 0 ;  k = 1, 2, ..., p

−Σ_{l=0}^{p} a(l)r_yx(k,l) + Σ_{l=0}^{q} b(l)r_x(k,l) = 0 ;  k = 0, 1, ..., q

Assuming that the coefficients have been normalized so that a(0) = 1, we have

Σ_{l=1}^{p} a(l)r_y(k,l) − Σ_{l=0}^{q} b(l)r_xy(k,l) = −r_y(k,0) ;  k = 1, 2, ..., p

−Σ_{l=1}^{p} a(l)r_yx(k,l) + Σ_{l=0}^{q} b(l)r_x(k,l) = r_yx(k,0) ;  k = 0, 1, ..., q
Writing these in matrix form we obtain

[  R_y    −R_xy ] [ a ]   [ −r_y  ]
[ −R_yx    R_x  ] [ b ] = [  r_yx ]

where aᵀ = [a(1), a(2), ..., a(p)], bᵀ = [b(0), b(1), ..., b(q)], r_yᵀ = [r_y(1,0), r_y(2,0), ..., r_y(p,0)], and r_yxᵀ = [r_yx(0,0), r_yx(1,0), ..., r_yx(q,0)]. Also, R_y is a p × p matrix with entries r_y(k,l), R_x is a (q+1) × (q+1) matrix with entries r_x(k,l), and R_xy is a p × (q+1) matrix with entries r_xy(k,l).

(b) Suppose S(z) = C(z)/D(z). Then

E(z) = [C(z)/D(z)] A(z)X(z) − B(z)X(z)

and the error can be made equal to zero if

B(z) = [C(z)/D(z)] A(z)

that is, if A(z) = D(z) and B(z) = C(z), which is possible when the orders of A(z) and B(z) are chosen appropriately.
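The exact-identification claim of part (b) can be illustrated numerically: for a noise-free rational system, a linear least-squares fit of the error equation recovers the parameters. This is a sketch, not part of the original solution; the system coefficients d, c and the random input are arbitrary test choices.

```python
import numpy as np

# S(z) = C(z)/D(z): an arbitrary stable rational test system.
d = np.array([1.0, -0.9, 0.2])   # D(z) coefficients
c = np.array([1.0, 0.5])         # C(z) coefficients
rng = np.random.default_rng(0)
x = rng.standard_normal(200)

# y(n): run x through S(z) by direct recursion with zero initial conditions.
y = np.zeros_like(x)
for n in range(len(x)):
    acc = sum(c[k] * x[n - k] for k in range(len(c)) if n - k >= 0)
    acc -= sum(d[k] * y[n - k] for k in range(1, len(d)) if n - k >= 0)
    y[n] = acc

def delayed(s, k):
    out = np.zeros_like(s)
    out[k:] = s[:len(s) - k]
    return out

# e(n) = y(n) + a(1)y(n-1) + a(2)y(n-2) - b(0)x(n) - b(1)x(n-1) = 0 ideally,
# so regress y(n) on [-y(n-1), -y(n-2), x(n), x(n-1)].
M = np.column_stack([-delayed(y, 1), -delayed(y, 2), x, delayed(x, 1)])
theta, *_ = np.linalg.lstsq(M, y, rcond=None)
print(theta)   # should recover [d(1), d(2), c(0), c(1)]
```

Because the data satisfy the error equation exactly, the residual is zero and the least-squares solution equals the true parameters.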
4.8 Consider a signal, x(n), which is the unit sample response of a causal all-pole filter with system function

H(z) = 1 / [(1 + 0.5z⁻¹)(1 + 0.75z⁻¹)(1 + 2z⁻¹)]

We observe x(n) over the interval 0 ≤ n ≤ N where N ≫ 1.

(a) Using the covariance method, we determine a third-order all-pole model for x(n). What, if anything, can you say about the location of the poles in the model? Do the pole locations depend on N? If so, where do the poles go in the limit as N → ∞?

(b) Repeat part (a) for the case in which you use the autocorrelation method.

Solution

Note that the sequence that we are trying to model is the unit sample response of a causal filter that has poles at z = −0.5, −0.75, −2.

(a) Since x(n) is the unit sample response of an all-pole filter, the covariance method will produce an exact model for the signal, independent of the value of N (assuming that N ≥ 6).

(b) For the autocorrelation method, the roots will always lie inside the unit circle and will vary with N. However, as N gets large, the roots will move towards the minimum phase solution, with poles at z = −0.5, −0.75, −0.5.
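The minimum phase claim in part (b) rests on the fact that reflecting a pole to its reciprocal location changes the all-pole spectrum only by a constant gain, so the autocorrelation sequence has the same shape. A short check of that fact (not part of the original solution, with the pole values used above):

```python
import numpy as np

# Reflecting the pole at z = -2 to z = -0.5 should change |H(e^jw)|^2 only
# by a constant factor across frequency.
w = np.linspace(0, np.pi, 256)
z = np.exp(1j * w)

def allpole_mag2(poles):
    den = np.ones_like(z)
    for p in poles:
        den *= (1 - p / z)          # factor (1 - p z^{-1})
    return 1.0 / np.abs(den) ** 2

ratio = allpole_mag2([-0.5, -0.75, -2.0]) / allpole_mag2([-0.5, -0.75, -0.5])
print(ratio[:4])   # constant across frequency
```

Since |1 + 2e^{−jω}|² = 4·|1 + 0.5e^{−jω}|², the ratio is the constant 1/4, which is why the autocorrelation method cannot distinguish the two pole sets and converges to the stable one.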
4.9 Equation (4.129) may be used to reduce the amount of computation required to set up the covariance normal equations.

(a) Show that the elements along the main diagonal may be computed recursively beginning with r_x(1,1).

(b) Show how the elements along the lower diagonals may be computed recursively beginning with r_x(k,1). How may the terms along the upper diagonals be obtained?

(c) Determine how many multiplies and adds are necessary to set up the covariance normal equations (do not forget the evaluation of the vector on the right-hand side).

Solution

Using the relationship between r_x(k+1, l+1) and r_x(k,l),

r_x(k+1, l+1) = r_x(k,l) − x(N−l)x*(N−k) + x(p−1−l)x*(p−1−k)

we may evaluate the terms in the covariance normal equations recursively.

(a) Beginning with r_x(1,1), the elements along the main diagonal of the covariance normal equations may be found recursively as follows

r_x(k+1, k+1) = r_x(k,k) − |x(N−k)|² + |x(p−1−k)|²

This requires two multiplications and two additions for each term.

(b) Beginning with r_x(k,1), the elements along the lower diagonals may be computed recursively as follows

r_x(k+1, 2) = r_x(k,1) − x(N−1)x*(N−k) + x(p−2)x*(p−1−k)

and

r_x(k+2, 3) = r_x(k+1,2) − x(N−2)x*(N−k−1) + x(p−3)x*(p−k−2)

or, in general,

r_x(k+l+1, l+2) = r_x(k+l, l+1) − x(N−l−1)x*(N−k−l) + x(p−l−2)x*(p−k−l−1)

As with the terms along the main diagonal, each term on the lower diagonals requires two multiplications and two additions. Note that the upper diagonals may be found using the conjugate symmetry of the covariance normal equations.

(c) The covariance normal equations require finding the elements of a p × p Hermitian matrix. As shown in parts (a) and (b), given the first column of the matrix, the remaining entries may be computed recursively. Given the elements in the first column, r_x(k,1), the (p−k) remaining terms along the diagonal that begins at r_x(k,1), i.e., r_x(k+l, 1+l) for l = 1, ..., p−k, require 2 multiplications and 2 additions each. This requires

Σ_{k=1}^{p−1} 2(p−k) = 2p(p−1) − p(p−1) = p(p−1)

multiplications, and the same number of additions. In addition, it is necessary to evaluate the p terms in the first column,

r_x(k,1) = Σ_{n=p}^{N} x(n−1)x*(n−k) ;  k = 1, 2, ..., p

This requires p(N−p+1) multiplications and p(N−p) additions. Therefore, the total number of multiplications is

# mults = p(N−p+1) + p(p−1) = Np

and the total number of additions is

# adds = p(N−p) + p(p−1) = Np − p
4.10 We want to model a signal x(n) using an all-pole model of the form

H(z) = b(0) / (1 + Σ_{k=1}^{p} a_p(k) z^{−k−N})

For example, with p = 2 the model is

H(z) = b(0) / (1 + a(1)z^{−N−1} + a(2)z^{−N−2})

Derive the normal equations that define the coefficients a_p(k) that minimize the Prony error

E_p = Σ_{n=0}^{∞} |e(n)|²

where

e(n) = x(n) + Σ_{l=1}^{p} a_p(l) x(n−l−N)

and derive an expression for the minimum error.

Solution

The equations for the coefficients a_p(k), k = 1, ..., p, that minimize the error E_p are found by setting the derivatives of E_p with respect to a_p(k) equal to zero. Thus, assuming that x(n) is real, we have
∂E_p/∂a_p(k) = Σ_{n=0}^{∞} 2e(n)x(n−k−N) = 0

Dividing by two, and substituting for e(n), we have

Σ_{n=0}^{∞} [ x(n) + Σ_{l=1}^{p} a_p(l)x(n−l−N) ] x(n−k−N) = 0

or

Σ_{l=1}^{p} a_p(l) [ Σ_{n=0}^{∞} x(n−l−N)x(n−k−N) ] = −Σ_{n=0}^{∞} x(n)x(n−k−N)

If we define

r_x(k,l) = Σ_{n=0}^{∞} x(n−l)x(n−k)

then it is easily shown that r_x(k,l) depends only on the difference k − l, and we may write

r_x(k) = Σ_{n=0}^{∞} x(n)x(n−k)

Thus, the normal equations become

Σ_{l=1}^{p} a_p(l) r_x(k−l) = −r_x(k+N) ;  k = 1, 2, ..., p

Finally, using the orthogonality condition

Σ_{n=0}^{∞} e(n)x(n−k−N) = 0

we have, for the minimum error,

{E_p}_min = Σ_{n=0}^{∞} e(n) [ x(n) + Σ_{l=1}^{p} a_p(l)x(n−l−N) ] = Σ_{n=0}^{∞} e(n)x(n)
          = Σ_{n=0}^{∞} [ x(n) + Σ_{l=1}^{p} a_p(l)x(n−l−N) ] x(n)

Therefore,

{E_p}_min = r_x(0) + Σ_{l=1}^{p} a_p(l) r_x(l+N)
4.11 Suppose that we would like to model a signal x(n) as shown in the following figure.

[Figure: a unit sample δ(n) drives an all-pole filter H(z), producing x(n).]

where h(n) is an all-pole filter that has a system function of the form

H(z) = b(0) / (1 + Σ_{k=1}^{p} a_p(k) z^{−2k})

Modify the Prony normal equations so that one can determine the coefficients a_p(k) in H(z) from a sequence of signal values, x(n).

Solution
To minimize the Prony error

E_p = Σ_{n=0}^{∞} |e(n)|²

where

e(n) = x(n) + Σ_{l=1}^{p} a_p(l) x(n−2l)

we set the derivative of E_p with respect to a_p(k) equal to zero,

∂E_p/∂a_p(k) = 2 Σ_{n=0}^{∞} e(n)x(n−2k) = 0

which gives

Σ_{l=1}^{p} a_p(l) r_x(2k−2l) = −r_x(2k) ;  k = 1, 2, ..., p

where

r_x(2k−2l) = Σ_{n=0}^{∞} x(n−2l)x(n−2k)

For example, with p = 2 the equations have the form

[ r_x(0)  r_x(2) ] [ a(1) ]     [ r_x(2) ]
[ r_x(2)  r_x(0) ] [ a(2) ] = − [ r_x(4) ]
4.12 Suppose that we would like to model a signal x(n) that we believe to be quasiperiodic. Based on our observations of x(n) we estimate the autocorrelations through lag k = 10 to be

r_x(k) = [1.0, 0.4, 0.4, 0.3, 0.2, 0.9, 0.4, 0.4, 0.2, 0.1, 0.7]ᵀ

(a) In formulating an all-pole model to take into account the suspected periodicity, let us consider a two-coefficient model of the form

H(z) = b(0) / (1 + a(5)z⁻⁵ + a(10)z⁻¹⁰)

Find the values for the coefficients a(5) and a(10) that minimize the all-pole modeling error.

(b) Compare the error obtained with the model found in (a) to the error that is obtained with a model of the form

H(z) = b(0) / (1 + a(1)z⁻¹ + a(2)z⁻²)

(c) Now consider an all-pole model of the form

H(z) = b(0) / (1 + a(N)z⁻ᴺ)

where both a(N) and N are considered to be model parameters. Find the values for a(N) and N that minimize the all-pole modeling error, and evaluate the modeling error.

Solution
(a) With an all-pole model of the form

H(z) = b(0) / (1 + a(5)z⁻⁵ + a(10)z⁻¹⁰)

we begin by defining the error that we want to minimize. Let

E = Σ_{n=0}^{∞} e²(n)

with

e(n) = x(n) + a(5)x(n−5) + a(10)x(n−10)

To find the coefficients a(5) and a(10) that minimize E, we set the partial derivatives of E with respect to a(5) and a(10) equal to zero,

∂E/∂a(5) = 2 Σ_{n=0}^{∞} e(n)x(n−5) = 2 Σ_{n=0}^{∞} [x(n) + a(5)x(n−5) + a(10)x(n−10)]x(n−5) = 0

and

∂E/∂a(10) = 2 Σ_{n=0}^{∞} e(n)x(n−10) = 2 Σ_{n=0}^{∞} [x(n) + a(5)x(n−5) + a(10)x(n−10)]x(n−10) = 0

Dividing by two and rearranging, we have

a(5) Σ_{n=0}^{∞} x²(n−5) + a(10) Σ_{n=0}^{∞} x(n−10)x(n−5) = −Σ_{n=0}^{∞} x(n)x(n−5)

and

a(5) Σ_{n=0}^{∞} x(n−5)x(n−10) + a(10) Σ_{n=0}^{∞} x²(n−10) = −Σ_{n=0}^{∞} x(n)x(n−10)

These equations may be written as

[ r_x(0)  r_x(5) ] [ a(5)  ]     [ r_x(5)  ]
[ r_x(5)  r_x(0) ] [ a(10) ] = − [ r_x(10) ]

where

r_x(k) = Σ_{n=0}^{∞} x(n)x(n−k)

Using the given autocorrelations, these become

[ 1   .9 ] [ a(5)  ]     [ .9 ]
[ .9   1 ] [ a(10) ] = − [ .7 ]

Solving for a(5) and a(10) we find

[ a(5), a(10) ]ᵀ = [ −1.4211, 0.5789 ]ᵀ

Finally, for the modeling error, we have

E = r_x(0) + a(5)r_x(5) + a(10)r_x(10) = 0.1263
(b) With a model of the form

H(z) = b(0) / (1 + a(1)z⁻¹ + a(2)z⁻²)

the normal equations are

[ r_x(0)  r_x(1) ] [ a(1) ]     [ r_x(1) ]
[ r_x(1)  r_x(0) ] [ a(2) ] = − [ r_x(2) ]

and the filter coefficients are the solutions to the equations

[ 1   .4 ] [ a(1) ]     [ .4 ]
[ .4   1 ] [ a(2) ] = − [ .4 ]

Thus,

[ a(1), a(2) ]ᵀ = [ −2/7, −2/7 ]ᵀ

Finally, the modeling error is

E = r_x(0) + a(1)r_x(1) + a(2)r_x(2) = 1 − (2/7)(0.4) − (2/7)(0.4) = 0.7714

which is considerably larger than the error found in part (a).
(c) Using a model of the form

H(z) = b(0) / (1 + a(N)z⁻ᴺ)

the value for the coefficient a(N) that minimizes the mean-square error is

a(N) = −r_x(N)/r_x(0)

and the minimum mean-square error is given by

{E}_min = [r_x²(0) − r_x²(N)] / r_x(0)

Therefore, the mean-square error is smallest when N = 5, which is the value of k for which |r_x(k)| is the largest. In other words, to minimize the error we want to find the value of N for which x(n) and x(n+N) have the highest correlation. With N = 5, a(5) = −0.9 and {E}_min = 0.19.
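The arithmetic of parts (a) through (c) can be reproduced directly from the given autocorrelation values (the lag-3 value 0.3 is assumed where the text is hard to read; it does not affect the quantities checked here):

```python
import numpy as np

r = np.array([1.0, 0.4, 0.4, 0.3, 0.2, 0.9, 0.4, 0.4, 0.2, 0.1, 0.7])

# (a) two-coefficient model with lags 5 and 10
a5, a10 = np.linalg.solve([[r[0], r[5]], [r[5], r[0]]], [-r[5], -r[10]])
E_a = r[0] + a5 * r[5] + a10 * r[10]

# (b) conventional second-order model
a1, a2 = np.linalg.solve([[r[0], r[1]], [r[1], r[0]]], [-r[1], -r[2]])
E_b = r[0] + a1 * r[1] + a2 * r[2]

# (c) one coefficient at delay N: error is (r(0)^2 - r(N)^2)/r(0)
N_best = int(np.argmax(np.abs(r[1:]))) + 1
E_c = (r[0] ** 2 - r[N_best] ** 2) / r[0]
print(a5, a10, E_a, E_b, N_best, E_c)
```

The model built around the suspected period (lags 5 and 10) gives a much smaller error than the conventional second-order model.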
4.13 We would like to build a predictor of digital waveforms. Such a system would form an estimate of a later sample (say n₀ samples later) by observing p consecutive data samples. Thus we would set

x̂(n+n₀) = Σ_{k=1}^{p} a_p(k) x(n−k)

The predictor coefficients a_p(k) are to be chosen to minimize

E_p = Σ_{n=0}^{∞} [x(n+n₀) − x̂(n+n₀)]²

(a) Derive the equations that define the optimum set of coefficients a_p(k).

(b) If n₀ = 0, how is your formulation of this problem different from Prony's method?

Solution

(a) We want to find the predictor coefficients a_p(k) that minimize the linear prediction error

E_p = Σ_{n=0}^{∞} [e(n)]²

where

e(n) = x(n+n₀) − x̂(n+n₀)

To find these coefficients, differentiate E_p with respect to a_p(k) and set the derivatives equal to zero as follows

∂E_p/∂a_p(k) = −Σ_{n=0}^{∞} 2e(n) ∂x̂(n+n₀)/∂a_p(k) = 0

Since

x̂(n+n₀) = Σ_{k=1}^{p} a_p(k)x(n−k)

then

∂x̂(n+n₀)/∂a_p(k) = x(n−k)

and we have

∂E_p/∂a_p(k) = −2 Σ_{n=0}^{∞} e(n)x(n−k) = 0

Dividing by two, and substituting for e(n), we have

Σ_{n=0}^{∞} { x(n+n₀) − Σ_{l=1}^{p} a_p(l)x(n−l) } x(n−k) = 0 ;  k = 1, 2, ..., p

Therefore, the normal equations are

Σ_{l=1}^{p} a_p(l) r_x(k,l) = r_x(k,−n₀)

where

r_x(k,l) = Σ_{n=0}^{∞} x(n−l)x(n−k)

(b) With n₀ = 0, these equations are the same as the all-pole normal equations of Prony's method, except that the right-hand side does not have a minus sign. Therefore, the solution differs in sign.
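The normal equations of part (a) are just the normal equations of a linear least-squares fit, which suggests a direct numerical cross-check (not part of the original solution; signal and sizes are test values):

```python
import numpy as np

# Compare the normal-equation solution with a direct least-squares fit of
# x(n+n0) from x(n-1), ..., x(n-p).
rng = np.random.default_rng(3)
p, n0, L = 3, 2, 300
x = np.zeros(L)
x[:60] = rng.standard_normal(60)

def shifted(s, k):
    # delay by k samples if k > 0, advance if k < 0 (zero-filled)
    out = np.zeros_like(s)
    if k >= 0:
        out[k:] = s[:len(s) - k]
    else:
        out[:k] = s[-k:]
    return out

cols = [shifted(x, k) for k in range(1, p + 1)]      # x(n-1), ..., x(n-p)
target = shifted(x, -n0)                             # x(n+n0)
a_lstsq, *_ = np.linalg.lstsq(np.column_stack(cols), target, rcond=None)

R = np.array([[np.dot(shifted(x, l), shifted(x, k)) for l in range(1, p + 1)]
              for k in range(1, p + 1)])
rhs = np.array([np.dot(shifted(x, -n0), shifted(x, k)) for k in range(1, p + 1)])
a_normal = np.linalg.solve(R, rhs)
print(a_lstsq, a_normal)
```

The two solutions coincide because R and the right-hand side are exactly the normal equations of the least-squares problem.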
4.14 You are told that it is always possible to determine whether or not a causal all-pole filter is stable from a finite number of values of its unit sample response. For example, if H(z) is a pth-order all-pole filter, given h(n) for n = 0, 1, ..., N, then the stability of H(z) may be determined. If this is true, explain the procedure and list any conditions that must be placed on p or N. If false, explain why it cannot be done.

Solution

It is true that the stability of a causal all-pole filter can be determined from a finite number of values of its unit sample response. Given a pth-order all-pole filter,

H(z) = b(0) / (1 + Σ_{k=1}^{p} a_p(k) z⁻ᵏ)

the coefficients a_p(k) may be found using the Padé approximation for an all-pole model,

[ h(0)     0       ...  0    ] [ a_p(1) ]     [ h(1) ]
[ h(1)     h(0)    ...  0    ] [ a_p(2) ]     [ h(2) ]
[  ...                       ] [  ...   ] = − [ ...  ]
[ h(p−1)   h(p−2)  ... h(0)  ] [ a_p(p) ]     [ h(p) ]

Thus, given h(n) for n = 0, 1, ..., p (i.e., provided N ≥ p), the coefficients may be found, and the roots of the polynomial

A(z) = 1 + Σ_{k=1}^{p} a_p(k) z⁻ᵏ

checked to see if they lie inside the unit circle.
4.15 Let H(z) be a first-order model for a real-valued signal

x(n) = δ(n) + δ(n−1)

with

H(z) = b(0) / (1 − a(1)z⁻¹)

and let

E_LS = Σ_{n=0}^{∞} [x(n) − x̂(n)]²

be the error that is to be minimized. By setting the derivatives of E_LS with respect to b(0) and a(1) equal to zero, try to find an analytic solution for the values of b(0) and a(1) that minimize E_LS. (This problem illustrates how difficult the direct method of signal modeling may be, even for a first-order model.)

Solution

We are given a signal x(n) = δ(n) + δ(n−1) that we would like to model as the unit sample response of the all-pole filter

H(z) = b(0) / (1 − a(1)z⁻¹)

Thus, the model for x(n) will be

x̂(n) = b(0)[a(1)]ⁿ u(n)

To find the values for a(1) and b(0) that minimize E_LS, we begin by setting the derivative of E_LS with respect to a(1) equal to zero,

∂E_LS/∂a(1) = −Σ_{n=0}^{∞} 2[x(n) − x̂(n)] n b(0) a^{n−1}(1) = 0

Dividing by two and substituting for x(n) and x̂(n), we have

Σ_{n=0}^{∞} [x(n) − x̂(n)] n b(0)a^{n−1}(1) = b(0)[1 − a(1)b(0)] − b²(0) Σ_{n=2}^{∞} n a^{2n−1}(1) = 0

Using Σ_{n=2}^{∞} n a^{2n−1} = a/(1 − a²)² − a, this simplifies to

b(0) − a(1)b²(0)/(1 − a²(1))² = 0    (P4.15-1)

Differentiating E_LS with respect to b(0) we have

∂E_LS/∂b(0) = −Σ_{n=0}^{∞} 2[x(n) − x̂(n)] aⁿ(1) = 0

Again, dividing by two and substituting for x(n) and x̂(n), we have

Σ_{n=0}^{∞} [x(n) − x̂(n)] aⁿ(1) = [1 − b(0)] + a(1)[1 − a(1)b(0)] − b(0)a⁴(1)/(1 − a²(1)) = 0

which may be simplified to 1 + a(1) = b(0)/(1 − a²(1)), or

b(0) = (1 + a(1))(1 − a²(1)) = 1 + a(1) − a²(1) − a³(1)

Combining this with Eq. (P4.15-1), which gives b(0) = (1 − a²(1))²/a(1), and multiplying through by a(1), we have

2a⁴(1) + a³(1) − 3a²(1) − a(1) + 1 = 0

which may be factored as follows

(a(1) + 1)²(a(1) − 1)(2a(1) − 1) = 0

Of these roots, clearly we want a(1) = 0.5. The value for b(0) is, therefore,

b(0) = 1 + a(1) − a²(1) − a³(1) = 9/8

Thus, our model for x(n) becomes

x̂(n) = (9/8)(0.5)ⁿ u(n)

with a squared error of

E_LS = (1 − 9/8)² + (1 − 9/16)² + Σ_{n=2}^{∞} (9/8)²(0.5)^{2n} = 5/16 = 0.3125
4.16 We have a signal x(n) for which we would like to obtain an all-pole model of the form

H(z) = b(0) / (1 + a(1)z⁻¹ + a(2)z⁻²)

Using the autocorrelation method, find explicit formulas for b(0), a(1), and a(2) in terms of r_x(0), r_x(1), and r_x(2).

Solution

The autocorrelation normal equations are

[ r_x(0)  r_x(1) ] [ a(1) ]     [ r_x(1) ]
[ r_x(1)  r_x(0) ] [ a(2) ] = − [ r_x(2) ]

Solving for the coefficients we have

a(1) = −r_x(1)[r_x(0) − r_x(2)] / [r_x²(0) − r_x²(1)]

a(2) = −[r_x(0)r_x(2) − r_x²(1)] / [r_x²(0) − r_x²(1)]

and

b(0) = sqrt{ r_x(0) + a(1)r_x(1) + a(2)r_x(2) }
4.17 If one is modeling a signal x(n) whose transform, X(z), contains zeros, then an all-pole model may be used to effectively model a zero with an infinite number of poles. In this problem, we look at how a zero is modeled with the autocorrelation method. Let

x(n) = δ(n) − αδ(n−1)

where |α| < 1 and α is real.

(a) Determine the pth-order all-pole model A_p(z) for x(n), where p is an arbitrary positive integer, and find the value for the squared error ε_p.

(b) For the all-pole model determined in part (a), what is the limit of A_p(z) as p → ∞? What does ε_p converge to as p → ∞? Justify your answers.

(c) Repeat parts (a) and (b) for |α| > 1.

Solution

(a) With x(n) = δ(n) − αδ(n−1), note that the autocorrelation sequence is

r_x(k) = (1 + α²)δ(k) − α[δ(k−1) + δ(k+1)]

Therefore, the autocorrelation normal equations for a pth-order all-pole model are

[ 1+α²   −α     0     ...   0     0   ] [   1    ]       [ 1 ]
[ −α     1+α²   −α    ...   0     0   ] [ a_p(1) ]       [ 0 ]
[ 0      −α     1+α²  ...   0     0   ] [ a_p(2) ] = ε_p [ 0 ]
[ ...                                 ] [  ...   ]       [...]
[ 0      0      0     ...   −α   1+α² ] [ a_p(p) ]       [ 0 ]

or, in matrix notation, a_p = ε_p R_p⁻¹ u₁, where R_p⁻¹u₁ is the first column of the inverse of the (p+1) × (p+1) autocorrelation matrix R_p. With Δ_j = det R_j (the determinant of the (j+1) × (j+1) tridiagonal matrix, and Δ_{−1} = 1), the cofactor expansion of this first column gives

a_p ∝ [ Δ_{p−1}, αΔ_{p−2}, α²Δ_{p−3}, ..., α^p Δ_{−1} ]ᵀ

Furthermore, since the first coefficient of a_p is equal to one, we must have

ε_p = Δ_p/Δ_{p−1}

and, for the kth coefficient,

a_p(k) = α^k Δ_{p−k−1}/Δ_{p−1}

Due to the tridiagonal Toeplitz structure of R_p we may find a closed-form expression for Δ_j as follows. First, note that for j = 0 and j = 1 we have

Δ₀ = 1 + α²
Δ₁ = (1 + α²)² − α² = 1 + α² + α⁴

Expanding det R_j along its first row gives the recursion

Δ_j = (1 + α²)Δ_{j−1} − α²Δ_{j−2}

and we may then show, by induction, that

Δ_j = Σ_{k=0}^{j+1} α^{2k} = (1 − α^{2(j+2)})/(1 − α²)

Specifically, assuming that this relation holds for Δ_{j−1} and Δ_{j−2},

Δ_j = (1 + α²) Σ_{k=0}^{j} α^{2k} − α² Σ_{k=0}^{j−1} α^{2k} = Σ_{k=0}^{j+1} α^{2k}

as was to be shown. Thus, for the coefficient a_p(k) we have

a_p(k) = α^k [1 − α^{2(p−k+1)}] / [1 − α^{2(p+1)}]    (P4.17-1)

and, for the squared error,

ε_p = Δ_p/Δ_{p−1} = [1 − α^{2(p+2)}] / [1 − α^{2(p+1)}]

(b) If we assume that |α| < 1, then as p → ∞ the term multiplying α^k in Eq. (P4.17-1) goes to one, and

a_p(k) → α^k

Therefore, in the limit as p → ∞, the model polynomial is

A_∞(z) = Σ_{k=0}^{∞} α^k z⁻ᵏ = 1/(1 − αz⁻¹)

so that 1/A_∞(z) = 1 − αz⁻¹, and the infinite-order all-pole model recovers the zero exactly.

(c) Now let us consider what happens when |α| > 1. The expression for a_p(k) given in Eq. (P4.17-1) still holds. However, as p → ∞ we have

a_p(k) → α^k · α^{2(p−k+1)}/α^{2(p+1)} = α⁻ᵏ

and, in the limit as p → ∞, the all-pole model is

A_∞(z) = Σ_{k=0}^{∞} α⁻ᵏ z⁻ᵏ = 1/(1 − α⁻¹z⁻¹)

which replaces the zero at z = α by its minimum phase counterpart at z = 1/α. For the squared error, we have

ε_p = [1 − α^{2(p+2)}] / [1 − α^{2(p+1)}]

Thus, for |α| < 1 we have

lim_{p→∞} ε_p = 1

and, for |α| > 1,

lim_{p→∞} ε_p = α²
4.18 Find a closed-form expression for the FIR least squares inverse of length N for each of the following systems.

(a) G(z) = 1/(1 − αz⁻¹) ;  |α| < 1

(b) G(z) = 1 − z⁻¹

(c) G(z) = (z⁻¹ − α)/(1 − αz⁻¹) ;  |α| < 1

Solution

(a) Since G(z) is an all-pole filter, the FIR least squares inverse is simply the denominator of G(z),

h_N(n) = δ(n) − αδ(n−1)

(b) To find the least squares inverse of

G(z) = 1 − z⁻¹

we must solve the linear equations

R_g h_N = g*(0) u₁

Since

r_g(k) = { 2 ;  k = 0
         { −1 ;  |k| = 1
         { 0 ;  |k| > 1

and g(0) = 1, these equations become

[  2  −1   0  ...  0 ] [ h_N(0)   ]   [ 1 ]
[ −1   2  −1  ...  0 ] [ h_N(1)   ]   [ 0 ]
[  0  −1   2  ...  0 ] [ h_N(2)   ] = [ 0 ]
[ ...                ] [   ...    ]   [...]
[  0   0   0  ...  2 ] [ h_N(N−1) ]   [ 0 ]

The solution to these equations (see Example 4.4.5) is of the form

h_N(n) = c₁ + c₂ n

where c₁ and c₂ are constants that are determined by the boundary conditions at n = 0 and n = N−1, i.e., the first and last equations

2h_N(0) − h_N(1) = 1
−h_N(N−2) + 2h_N(N−1) = 0

Using the given form for h_N(n), these boundary conditions become

c₁ − c₂ = 1
c₁ + N c₂ = 0

Solving for c₁ and c₂ we find

c₁ = N/(N+1) ;  c₂ = −1/(N+1)

Therefore, for n = 0, 1, ..., N−1, we have

h_N(n) = N/(N+1) − n/(N+1) = (N − n)/(N+1)

and h_N(n) = 0 for all other values of n.
(c) Again, to find the least squares inverse, we must solve the linear equations

R_g h_N = g*(0) u₁

Note, however, that G(e^{jω}) is an allpass filter,

|G(e^{jω})| = 1

Therefore,

r_g(k) = g(k) * g*(−k) = δ(k)

so R_g = I, and the least squares inverse is

h_N(n) = g*(0) δ(n) = −α δ(n)

Note that, up to a scale factor, the least squares inverse is the same for all systems that are related by an allpass filter.
4.19 An important application of least squares inverse filtering is deconvolution, which is concerned with the recovery of a signal d(n) that has been convolved with a filter g(n),

x(n) = d(n) * g(n)

The problem is to design a filter h_N(n) that may be used to produce an estimate of d(n) from x(n). One of the difficulties, however, is that noise in the observed signal may be amplified by the filter. For example, if we observe

y(n) = d(n) * g(n) + v(n)

then the filtered observations become

y(n) * h_N(n) = d(n) + u(n)

where

u(n) = v(n) * h_N(n)

is the filtered noise. One way to reduce this noise is to design a least squares inverse filter that minimizes

E = Σ_{n=0}^{∞} |e(n)|² + λ E{|u(n)|²}

where

e(n) = δ(n − n₀) − h_N(n) * g(n)

and λ > 0 is a parameter that is to be selected. Note that for large values of λ, minimizing E will force a large reduction in the filtered noise at the expense of a decrease in resolution, i.e., larger e(n), whereas smaller values of λ lead to higher resolution and larger noise.

(a) Assume that v(n) is zero-mean white noise with a variance σ_v². Show that

E{|u(n)|²} = σ_v² h_N^H h_N

where h_N is a vector containing the coefficients of the filter h_N(n).

(b) Derive the normal equations that result from minimizing the error

E = e^H e + λ σ_v² h_N^H h_N

where e = [e(0), e(1), ...]ᵀ, and show that they may be written in the form

(R_g + αI) h_N = g*_{n₀}

where α > 0 is a prewhitening parameter that depends upon the values of λ and σ_v², and g*_{n₀} is the vector on the right side of Eq. (4.101).
Solution

(a) From Eq. (3.90) on p. 101, we have

E{|u(n)|²} = h_N^H R_v h_N

Since v(n) is zero-mean white noise,

R_v = σ_v² I

then

E{|u(n)|²} = σ_v² h_N^H h_N

as was to be shown.

(b) The error that we want to minimize is

E = Σ_{n=0}^{∞} |e(n)|² + λ E{|u(n)|²}

where

e(n) = δ(n − n₀) − h_N(n) * g(n) = δ(n − n₀) − Σ_{l=0}^{N−1} h_N(l)g(n−l)

and

E{|u(n)|²} = σ_v² h_N^H h_N = σ_v² Σ_{l=0}^{N−1} |h_N(l)|²

To minimize the error, we set the derivative with respect to h_N*(k) equal to zero for k = 0, 1, ..., N−1,

∂E/∂h_N*(k) = −Σ_{n=0}^{∞} e(n)g*(n−k) + λσ_v² h_N(k) = 0

Substituting for e(n) we have

−Σ_{n=0}^{∞} [ δ(n−n₀) − Σ_{l=0}^{N−1} h_N(l)g(n−l) ] g*(n−k) + λσ_v² h_N(k) = 0

or

−g*(n₀−k) + Σ_{n=0}^{∞} [ Σ_{l=0}^{N−1} h_N(l)g(n−l) ] g*(n−k) + λσ_v² h_N(k) = 0

Interchanging the order of summation yields

−g*(n₀−k) + Σ_{l=0}^{N−1} h_N(l) [ Σ_{n=0}^{∞} g(n−l)g*(n−k) ] + λσ_v² h_N(k) = 0

With

r_g(k−l) = Σ_{n=0}^{∞} g(n−l)g*(n−k)

it follows that

Σ_{l=0}^{N−1} h_N(l) r_g(k−l) + λσ_v² h_N(k) = g*(n₀−k)

Written in matrix form, this becomes

(R_g + αI) h_N = g*_{n₀}

where α = λσ_v² > 0.
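The resolution-versus-noise trade-off can be seen numerically: increasing α shrinks the filter norm, and hence the noise gain σ_v² h^H h. This is an illustration only (not part of the original solution); the filter g and length N are arbitrary test choices.

```python
import numpy as np

# Solve (R_g + alpha*I) h = g*_{n0} for increasing alpha and watch ||h|| shrink.
g = np.array([1.0, -0.5, 0.25])
N, n0 = 6, 0
corr = np.convolve(g, g[::-1])          # r_g(k) for k = -(len(g)-1)..(len(g)-1)
rg = np.zeros(N)
rg[:len(g)] = corr[len(g) - 1:]         # r_g(0), r_g(1), ...
R = np.array([[rg[abs(i - j)] for j in range(N)] for i in range(N)])
rhs = np.zeros(N)
rhs[0] = g[0]                           # g(n0 - k) with n0 = 0
norms = [float(np.linalg.norm(np.linalg.solve(R + al * np.eye(N), rhs)))
         for al in (0.0, 0.1, 1.0)]
print(norms)
```

Writing h in the eigenbasis of R_g shows each component is scaled by 1/(λ_i + α), so the norm decreases monotonically in α.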
4.20 We are given a signal, x(n), that we want to model as the unit sample response of an all-pole filter. We have reason to believe that the signal is periodic and, consequently, that the poles of the model should lie on the unit circle. Thus, assuming a second-order model for the signal, the system function is constrained to have the form

H(z) = b(0) / (1 + a(1)z⁻¹ + z⁻²)

With |a(1)| < 2 this model produces a pair of poles on the unit circle at an angle θ defined by

2 cos θ = −a(1)

(a) Using the autocorrelation method, derive the normal equations that define the value of a(1) that minimizes the error

E_p = Σ_{n=0}^{∞} e²(n)

(b) Find an expression for the minimum error, {E_p}_min.

Solution

(a) The error that we want to minimize is

E_p = Σ_{n=0}^{∞} e²(n)

where

e(n) = x(n) + a(1)x(n−1) + x(n−2)

To find the value of a(1) that minimizes E_p, we set the derivative of E_p with respect to a(1) equal to zero,

∂E_p/∂a(1) = Σ_{n=0}^{∞} 2e(n) ∂e(n)/∂a(1) = 0

Since the partial derivative of e(n) with respect to a(1) is x(n−1), the normal equations are

Σ_{n=0}^{∞} e(n)x(n−1) = 0

or

Σ_{n=0}^{∞} [x(n) + a(1)x(n−1) + x(n−2)] x(n−1) = 0

With

r_x(k) = Σ_{n=0}^{∞} x(n)x(n−k)

these become

a(1)r_x(0) = −2r_x(1)

Therefore,

a(1) = −2r_x(1)/r_x(0)

(Note that |a(1)| ≤ 2, since |r_x(1)| ≤ r_x(0).)

(b) For the minimum error, we have

{E_p}_min = Σ_{n=0}^{∞} e(n)[x(n) + a(1)x(n−1) + x(n−2)]
          = Σ_{n=0}^{∞} e(n)[x(n) + x(n−2)]
          = Σ_{n=0}^{∞} [x(n) + a(1)x(n−1) + x(n−2)][x(n) + x(n−2)]
          = 2[r_x(0) + a(1)r_x(1) + r_x(2)]

where the second line follows from the orthogonality condition Σ e(n)x(n−1) = 0.
4.21 Voiced speech may be modeled as the output of an all-pole filter driven by an impulse train

p_{n₀}(n) = Σ_{k=1}^{K} δ(n − kn₀)

where the time between pulses, n₀, is known as the pitch period. Suppose that we have a segment of voiced speech and that we know the pitch period, n₀. We extract a subsequence, x(n), of length N = 2n₀ and model this signal as shown in the following figure

[Figure: the pulse train p_{n₀}(n) drives an all-pole filter with gain b(0), producing x(n).]

where the input, p_{n₀}(n), consists of two pulses,

p_{n₀}(n) = δ(n) + δ(n − n₀)

Find the normal equations that define the coefficients a_p(k) that minimize the error

E_p = Σ_{n=0}^{N−1} e²(n)

where

e(n) = a_p(n) * x(n) − b(n) * p_{n₀}(n)

and b(n) = b(0)δ(n).

Solution

If we define a_p(0) = 1, then the error e(n) is

e(n) = a_p(n) * x(n) − b(n) * p_{n₀}(n) = Σ_{l=0}^{p} a_p(l)x(n−l) − b(0)[δ(n) + δ(n−n₀)]

and the mean-square error that we want to minimize is

E_p = Σ_{n=0}^{2n₀−1} e²(n) = Σ_{n=0}^{2n₀−1} [ Σ_{l=0}^{p} a_p(l)x(n−l) − b(0)δ(n) − b(0)δ(n−n₀) ]²

Setting the derivative with respect to a_p(k) equal to zero, we have

∂E_p/∂a_p(k) = Σ_{n=0}^{2n₀−1} 2 [ Σ_{l=0}^{p} a_p(l)x(n−l) − b(0)δ(n) − b(0)δ(n−n₀) ] x(n−k) = 0

If we define

r_x(k,l) = Σ_{n=0}^{2n₀−1} x(n−l)x(n−k)

then the normal equations become (recall that a_p(0) = 1)

Σ_{l=1}^{p} a_p(l) r_x(k,l) − b(0)x(−k) − b(0)x(n₀−k) = −r_x(k,0) ;  k = 1, 2, ..., p

Assuming that x(n) = 0 for n < 0, with x = [x(n₀−1), x(n₀−2), ..., x(n₀−p)]ᵀ, the normal equations may be written in matrix form as follows

R_x a − b(0) x = −r_x

where a = [a_p(1), ..., a_p(p)]ᵀ, r_x = [r_x(1,0), ..., r_x(p,0)]ᵀ, and R_x is the p × p matrix with entries r_x(k,l). Finally, differentiating with respect to b(0) we have

∂E_p/∂b(0) = −Σ_{n=0}^{2n₀−1} 2 [ Σ_{l=0}^{p} a_p(l)x(n−l) − b(0)δ(n) − b(0)δ(n−n₀) ] [δ(n) + δ(n−n₀)] = 0

Thus,

x(0) − b(0) + x(n₀) + Σ_{l=1}^{p} a_p(l)x(n₀−l) − b(0) = 0

or, in vector form,

xᵀa − 2b(0) = −x(0) − x(n₀)

Putting all of these together in matrix form yields

[ R_x   −x ] [  a   ]     [      r_x      ]
[ xᵀ    −2 ] [ b(0) ] = − [ x(0) + x(n₀) ]