CHAPTER 1
1.1 Let

$$r_u(k) = E[u(n)u^*(n-k)] \qquad (1)$$

$$r_y(k) = E[y(n)y^*(n-k)] \qquad (2)$$

We are given that

$$y(n) = u(n+a) - u(n-a) \qquad (3)$$

Hence, substituting Eq. (3) into (2), and then using Eq. (1), we get

$$r_y(k) = E\big[(u(n+a) - u(n-a))(u^*(n+a-k) - u^*(n-a-k))\big] = 2r_u(k) - r_u(k+2a) - r_u(k-2a)$$
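The identity can be checked numerically. The following numpy sketch (an illustration, not part of the original solution) generates u(n) from an arbitrary AR(1) model, forms y(n) = u(n+a) − u(n−a), and compares sample correlations against the right-hand side; the model, the lag a, and the sample size are all assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate a long stationary process u(n): an arbitrary AR(1) model.
N, a = 200_000, 3
u = np.zeros(N)
for n in range(1, N):
    u[n] = 0.8 * u[n - 1] + rng.standard_normal()

# y(n) = u(n + a) - u(n - a), formed on the valid interior samples.
y = u[2 * a:] - u[:-2 * a]

def acorr(x, k):
    """Biased sample autocorrelation E[x(n) x(n - k)] for k >= 0."""
    return np.mean(x[k:] * x[:len(x) - k]) if k > 0 else np.mean(x * x)

for k in range(5):
    lhs = acorr(y, k)
    # For a real stationary process r_u(-m) = r_u(m), hence the abs().
    rhs = 2 * acorr(u, k) - acorr(u, k + 2 * a) - acorr(u, abs(k - 2 * a))
    print(f"k={k}: r_y(k)={lhs:.4f}  2r_u(k)-r_u(k+2a)-r_u(k-2a)={rhs:.4f}")
```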
1.2 We know that the correlation matrix R is Hermitian; that is,

$$R^H = R$$

Given that the inverse matrix R^{-1} exists, we may write

$$R^{-1}R^H = I$$

where I is the identity matrix. Taking the Hermitian transpose of both sides:

$$R(R^{-1})^H = I$$

Hence,

$$(R^{-1})^H = R^{-1}$$

That is, the inverse matrix R^{-1} is Hermitian.
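A two-line numerical confirmation (an added sketch, with an arbitrarily constructed Hermitian matrix):

```python
import numpy as np

rng = np.random.default_rng(1)

# Build an arbitrary Hermitian, positive-definite correlation-like matrix R.
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
R = A @ A.conj().T + 4 * np.eye(4)          # Hermitian by construction

R_inv = np.linalg.inv(R)
print(np.allclose(R_inv, R_inv.conj().T))   # True: the inverse is Hermitian
```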
1.3 For the case of a two-by-two matrix, we may write

$$R_u = R_s + R_\nu = \begin{bmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{bmatrix} + \begin{bmatrix} \sigma^2 & 0 \\ 0 & \sigma^2 \end{bmatrix} = \begin{bmatrix} r_{11}+\sigma^2 & r_{12} \\ r_{21} & r_{22}+\sigma^2 \end{bmatrix}$$

For R_u to be nonsingular, we require

$$\det(R_u) = (r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}r_{21} > 0$$

With r_{12} = r_{21} for real data, this condition reduces to

$$(r_{11}+\sigma^2)(r_{22}+\sigma^2) - r_{12}^2 > 0$$

Expanded, this is the quadratic inequality $\sigma^4 + (r_{11}+r_{22})\sigma^2 + \Delta r > 0$. Since this is quadratic in $\sigma^2$, we may impose the following condition on $\sigma^2$ for nonsingularity of R_u:

$$\sigma^2 > \frac{1}{2}(r_{11}+r_{22})\left[\left(1 - \frac{4\,\Delta r}{(r_{11}+r_{22})^2}\right)^{1/2} - 1\right]$$

where

$$\Delta r = r_{11}r_{22} - r_{12}^2$$
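The threshold can be sanity-checked numerically. In the sketch below (not part of the original solution), the correlation values are arbitrary choices with Δr < 0, so a strictly positive threshold exists; det(R_u) changes sign exactly at the bound.

```python
import numpy as np

# Arbitrary example with Delta_r < 0, so a strictly positive threshold exists.
r11, r22, r12 = 1.0, 2.0, 2.0
dr = r11 * r22 - r12**2                     # Delta_r = -2 here

s = r11 + r22
threshold = 0.5 * s * (np.sqrt(1 - 4 * dr / s**2) - 1)

for sig2 in (threshold * 0.9, threshold * 1.1):
    Ru = np.array([[r11 + sig2, r12], [r12, r22 + sig2]])
    print(f"sigma^2 = {sig2:.3f}: det(R_u) = {np.linalg.det(Ru):+.3f}")
# det is negative below the threshold and positive above it.
```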
1.4 We are given

$$R = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$$

This matrix is nonnegative definite because

$$a^T R a = [a_1, a_2]\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = a_1^2 + 2a_1a_2 + a_2^2 = (a_1 + a_2)^2 \ge 0$$

for all values of a_1 and a_2, with equality whenever a_2 = -a_1. (Positive definiteness, which requires a strictly positive quadratic form for all nonzero vectors, is stronger than nonnegative definiteness.) Moreover, the matrix R is singular because

$$\det(R) = (1)^2 - (1)^2 = 0$$

Hence, it is possible for a matrix to be nonnegative definite and yet singular.
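The spectral view makes the same point; this added check computes the eigenvalues directly:

```python
import numpy as np

R = np.array([[1.0, 1.0], [1.0, 1.0]])
print(np.linalg.eigvalsh(R))    # [0. 2.]: nonnegative, so R is nonnegative definite
print(np.linalg.det(R))         # 0.0: the zero eigenvalue makes R singular
```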
1.5 (a)

$$R_{M+1} = \begin{bmatrix} r(0) & r^H \\ r & R_M \end{bmatrix} \qquad (1)$$

Let

$$R_{M+1}^{-1} = \begin{bmatrix} a & b^H \\ b & C \end{bmatrix} \qquad (2)$$

where a, b and C are to be determined. Multiplying (1) by (2):

$$I_{M+1} = \begin{bmatrix} r(0) & r^H \\ r & R_M \end{bmatrix}\begin{bmatrix} a & b^H \\ b & C \end{bmatrix}$$

where I_{M+1} is the identity matrix. Therefore,

$$r(0)a + r^H b = 1 \qquad (3)$$

$$ra + R_M b = 0 \qquad (4)$$

$$rb^H + R_M C = I_M \qquad (5)$$

$$r(0)b^H + r^H C = 0^T \qquad (6)$$

From Eq. (4):
$$b = -R_M^{-1} r a \qquad (7)$$

Hence, from (3) and (7):

$$a = \frac{1}{r(0) - r^H R_M^{-1} r} \qquad (8)$$

Correspondingly,

$$b = \frac{-R_M^{-1} r}{r(0) - r^H R_M^{-1} r} \qquad (9)$$

From (5):

$$C = R_M^{-1} - R_M^{-1} r b^H = R_M^{-1} + \frac{R_M^{-1} r r^H R_M^{-1}}{r(0) - r^H R_M^{-1} r} \qquad (10)$$

As a check, the results of Eqs. (9) and (10) should satisfy Eq. (6):

$$r(0)b^H + r^H C = \frac{-r(0)\, r^H R_M^{-1}}{r(0) - r^H R_M^{-1} r} + r^H R_M^{-1} + \frac{r^H R_M^{-1} r\; r^H R_M^{-1}}{r(0) - r^H R_M^{-1} r} = 0^T$$

We have thus shown that

$$R_{M+1}^{-1} = \begin{bmatrix} 0 & 0^T \\ 0 & R_M^{-1} \end{bmatrix} + a\begin{bmatrix} 1 & -r^H R_M^{-1} \\ -R_M^{-1} r & R_M^{-1} r r^H R_M^{-1} \end{bmatrix} = \begin{bmatrix} 0 & 0^T \\ 0 & R_M^{-1} \end{bmatrix} + a\begin{bmatrix} 1 \\ -R_M^{-1} r \end{bmatrix}\begin{bmatrix} 1 & -r^H R_M^{-1} \end{bmatrix}$$

where the scalar a is defined by Eq. (8).
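The partitioned-inverse identity of part (a) can be verified numerically. The numpy sketch below (an added illustration) assumes r denotes the first column of R_{M+1} below r(0), consistent with the partition in Eq. (1); the test matrix is an arbitrary Hermitian positive-definite one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary Hermitian positive-definite (M+1)x(M+1) correlation matrix.
M = 4
A = rng.standard_normal((M + 1, M + 1)) + 1j * rng.standard_normal((M + 1, M + 1))
R_M1 = A @ A.conj().T + (M + 1) * np.eye(M + 1)

r0 = R_M1[0, 0].real                 # r(0)
r = R_M1[1:, 0].reshape(-1, 1)       # column vector r
R_M = R_M1[1:, 1:]                   # lower-right M x M block

R_M_inv = np.linalg.inv(R_M)
a = 1.0 / (r0 - (r.conj().T @ R_M_inv @ r).real.item())   # Eq. (8)

v = np.vstack([[1.0 + 0j], -R_M_inv @ r])                 # column [1; -R_M^{-1} r]
rhs = np.zeros((M + 1, M + 1), dtype=complex)
rhs[1:, 1:] = R_M_inv
rhs += a * (v @ v.conj().T)                               # outer-product form

print(np.allclose(rhs, np.linalg.inv(R_M1)))              # True
```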
(b)

$$R_{M+1} = \begin{bmatrix} R_M & r^{B*} \\ r^{BT} & r(0) \end{bmatrix} \qquad (11)$$

Let

$$R_{M+1}^{-1} = \begin{bmatrix} D & e \\ e^H & f \end{bmatrix} \qquad (12)$$

where D, e and f are to be determined. Multiplying (11) by (12):

$$I_{M+1} = \begin{bmatrix} R_M & r^{B*} \\ r^{BT} & r(0) \end{bmatrix}\begin{bmatrix} D & e \\ e^H & f \end{bmatrix}$$

Therefore,

$$R_M D + r^{B*} e^H = I \qquad (13)$$

$$R_M e + r^{B*} f = 0 \qquad (14)$$

$$r^{BT} e + r(0) f = 1 \qquad (15)$$

$$r^{BT} D + r(0) e^H = 0^T \qquad (16)$$

From (14):

$$e = -R_M^{-1} r^{B*} f \qquad (17)$$

Hence, from (15) and (17):

$$f = \frac{1}{r(0) - r^{BT} R_M^{-1} r^{B*}} \qquad (18)$$

Correspondingly,
$$e = \frac{-R_M^{-1} r^{B*}}{r(0) - r^{BT} R_M^{-1} r^{B*}} \qquad (19)$$

From (13):

$$D = R_M^{-1} - R_M^{-1} r^{B*} e^H = R_M^{-1} + \frac{R_M^{-1} r^{B*} r^{BT} R_M^{-1}}{r(0) - r^{BT} R_M^{-1} r^{B*}} \qquad (20)$$

As a check, the results of Eqs. (19) and (20) must satisfy Eq. (16). Thus

$$r^{BT} D + r(0)e^H = r^{BT} R_M^{-1} + \frac{r^{BT} R_M^{-1} r^{B*}\; r^{BT} R_M^{-1}}{r(0) - r^{BT} R_M^{-1} r^{B*}} - \frac{r(0)\, r^{BT} R_M^{-1}}{r(0) - r^{BT} R_M^{-1} r^{B*}} = 0^T$$

We have thus shown that

$$R_{M+1}^{-1} = \begin{bmatrix} R_M^{-1} & 0 \\ 0^T & 0 \end{bmatrix} + f\begin{bmatrix} -R_M^{-1} r^{B*} \\ 1 \end{bmatrix}\begin{bmatrix} -r^{BT} R_M^{-1} & 1 \end{bmatrix}$$

where the scalar f is defined by Eq. (18).

1.6 (a) We express the difference equation describing the first-order AR process u(n) as

$$u(n) = v(n) + w_1 u(n-1)$$

where w_1 = -a_1. Solving this equation by repeated substitution, we get

$$u(n) = v(n) + w_1 v(n-1) + w_1^2 u(n-2)$$
$$u(n) = v(n) + w_1 v(n-1) + w_1^2 v(n-2) + \cdots + w_1^{n-1} v(1) \qquad (1)$$

Here we have used the initial condition

$$u(0) = 0$$

or, equivalently,

$$u(1) = v(1)$$

Taking the expected value of both sides of Eq. (1) and using

$$E[v(n)] = \mu \quad \text{for all } n,$$

we get the geometric series

$$E[u(n)] = \mu\big(1 + w_1 + w_1^2 + \cdots + w_1^{n-1}\big) = \begin{cases} \mu\,\dfrac{1 - w_1^n}{1 - w_1}, & w_1 \ne 1 \\[2mm] \mu n, & w_1 = 1 \end{cases}$$

This result shows that if $\mu \ne 0$, then E[u(n)] is a function of time n. Accordingly, the AR process u(n) is not stationary. If, however, the AR parameter satisfies the condition

$$|a_1| < 1 \quad \text{or} \quad |w_1| < 1,$$

then

$$E[u(n)] \to \frac{\mu}{1 - w_1} \quad \text{as } n \to \infty$$

Under this condition, we say that the AR process is asymptotically stationary to order one.

(b) When the white noise process v(n) has zero mean, the AR process u(n) will likewise have zero mean. Then
$$\mathrm{var}[u(n)] = E[u^2(n)] \qquad (2)$$

Substituting Eq. (1) into (2), and recognizing that for the white noise process

$$E[v(n)v(k)] = \begin{cases} \sigma_v^2, & n = k \\ 0, & n \ne k \end{cases} \qquad (3)$$

we get the geometric series

$$\mathrm{var}[u(n)] = \sigma_v^2\big(1 + w_1^2 + w_1^4 + \cdots + w_1^{2n-2}\big) = \begin{cases} \sigma_v^2\,\dfrac{1 - w_1^{2n}}{1 - w_1^2}, & |w_1| \ne 1 \\[2mm] \sigma_v^2\, n, & |w_1| = 1 \end{cases}$$

When |a_1| < 1 or |w_1| < 1, then for large n

$$\mathrm{var}[u(n)] \approx \frac{\sigma_v^2}{1 - w_1^2} = \frac{\sigma_v^2}{1 - a_1^2}$$

(c) The autocorrelation function of the AR process u(n) equals E[u(n)u(n-k)]. Substituting Eq. (1) into this formula, and using Eq. (3), we get

$$E[u(n)u(n-k)] = \sigma_v^2\big(w_1^k + w_1^{k+2} + \cdots + w_1^{k+2n-2}\big) = \begin{cases} \sigma_v^2\, w_1^k\,\dfrac{1 - w_1^{2n}}{1 - w_1^2}, & |w_1| \ne 1 \\[2mm] \sigma_v^2\, n, & |w_1| = 1 \end{cases}$$
For |a_1| < 1 or |w_1| < 1, we may therefore express this autocorrelation function, for large n, as

$$r(k) = E[u(n)u(n-k)] \approx \frac{\sigma_v^2\, w_1^{|k|}}{1 - w_1^2}$$

Case 1: 0 < a_1 < 1

In this case, w_1 = -a_1 is negative, and r(k) varies with k as follows:

[Figure: r(k) plotted versus k for k = -4, ..., +4; the samples alternate in sign and decay in magnitude as |k| increases.]

Case 2: -1 < a_1 < 0

In this case, w_1 = -a_1 is positive, and r(k) varies with k as follows:

[Figure: r(k) plotted versus k for k = -4, ..., +4; the samples are all positive and decay as |k| increases.]
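A short simulation (added here as a sketch; w_1, σ_v² and the sample size are arbitrary) confirms the asymptotic variance and autocorrelation of the AR(1) process:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate u(n) = w1*u(n-1) + v(n) with zero-mean, unit-variance white v(n).
w1, sigma_v2, N = 0.6, 1.0, 500_000
u = np.zeros(N)
for n in range(1, N):
    u[n] = w1 * u[n - 1] + rng.standard_normal()

print("sample var :", np.var(u))
print("theory     :", sigma_v2 / (1 - w1**2))

for k in range(4):
    r_k = np.mean(u[k:] * u[:N - k])
    print(f"r({k}): sample={r_k:.3f}  theory={sigma_v2 * w1**k / (1 - w1**2):.3f}")
```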
1.7 (a) The second-order AR process u(n) is described by the difference equation

$$u(n) = u(n-1) - 0.5u(n-2) + v(n)$$

Hence,

$$w_1 = 1, \qquad w_2 = -0.5$$

and the AR parameters equal

$$a_1 = -1, \qquad a_2 = 0.5$$

Accordingly, we write the Yule-Walker equations as
$$\begin{bmatrix} r(0) & r(1) \\ r(1) & r(0) \end{bmatrix}\begin{bmatrix} 1 \\ -0.5 \end{bmatrix} = \begin{bmatrix} r(1) \\ r(2) \end{bmatrix}$$

(b) Writing the Yule-Walker equations in expanded form:

$$r(0) - 0.5r(1) = r(1)$$

$$r(1) - 0.5r(0) = r(2)$$

Solving the first relation for r(1):

$$r(1) = \frac{2}{3}r(0) \qquad (1)$$

Solving the second relation for r(2):

$$r(2) = r(1) - 0.5r(0) = \frac{1}{6}r(0) \qquad (2)$$

(c) Since the noise v(n) has zero mean, so will the AR process u(n). Hence,

$$\mathrm{var}[u(n)] = E[u^2(n)] = r(0)$$

We know that

$$\sigma_v^2 = \sum_{k=0}^{2} a_k r(k) = r(0) + a_1 r(1) + a_2 r(2) \qquad (3)$$

Substituting (1) and (2) into (3), and solving for r(0), we get

$$r(0) = \frac{\sigma_v^2}{1 + \frac{2}{3}a_1 + \frac{1}{6}a_2} = \frac{12}{5}\sigma_v^2 = 1.2$$

where the final numerical value follows for a noise variance $\sigma_v^2 = 0.5$.
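The values satisfy the Yule-Walker equations, as the following added check shows (σ_v² = 0.5 is the value consistent with r(0) = 1.2 above):

```python
import numpy as np

sigma_v2 = 0.5                      # noise variance consistent with r(0) = 1.2
a1, a2 = -1.0, 0.5

r0 = sigma_v2 / (1 + (2/3) * a1 + (1/6) * a2)
r1, r2 = (2/3) * r0, (1/6) * r0
print(r0, r1, r2)                   # 1.2 0.8 0.2

# Check the Yule-Walker equations with w = [1, -0.5].
R = np.array([[r0, r1], [r1, r0]])
w = np.array([1.0, -0.5])
print(R @ w, np.array([r1, r2]))    # both sides equal [0.8, 0.2]
```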
1.8 By definition,

P_0 = average power of the AR process u(n)

$$P_0 = E[|u(n)|^2] = r(0) \qquad (1)$$

where r(0) is the autocorrelation function of u(n) for zero lag. We note that the AR parameters stand in one-to-one correspondence with the normalized correlations:

$$\{a_1, a_2, \ldots, a_M\} \rightleftharpoons \left\{\frac{r(1)}{r(0)}, \frac{r(2)}{r(0)}, \ldots, \frac{r(M)}{r(0)}\right\}$$

Equivalently, except for the scaling factor r(0),

$$\{a_1, a_2, \ldots, a_M\} \rightleftharpoons \{r(1), r(2), \ldots, r(M)\} \qquad (2)$$

Combining Eqs. (1) and (2):

$$\{P_0, a_1, a_2, \ldots, a_M\} \rightleftharpoons \{r(0), r(1), r(2), \ldots, r(M)\} \qquad (3)$$
1.9 (a) The transfer function of the MA model of Fig. 2.3 is

$$H(z) = 1 + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}$$

(b) The transfer function of the ARMA model of Fig. 2.4 is

$$H(z) = \frac{b_0^* + b_1^* z^{-1} + b_2^* z^{-2} + \cdots + b_K^* z^{-K}}{1 + a_1^* z^{-1} + a_2^* z^{-2} + \cdots + a_M^* z^{-M}}$$

(c) The ARMA model reduces to an AR model when

$$b_1 = b_2 = \cdots = b_K = 0$$

It reduces to an MA model when

$$a_1 = a_2 = \cdots = a_M = 0$$
1.10 We are given

$$x(n) = v(n) + 0.75v(n-1) + 0.25v(n-2)$$

Taking the z-transforms of both sides:

$$X(z) = \big(1 + 0.75z^{-1} + 0.25z^{-2}\big)V(z)$$

Hence, the transfer function of the MA model is

$$\frac{X(z)}{V(z)} = 1 + 0.75z^{-1} + 0.25z^{-2} \qquad (1)$$

Using long division, we may expand the reciprocal of the MA polynomial in Eq. (1) as follows:

$$\big(1 + 0.75z^{-1} + 0.25z^{-2}\big)^{-1} = 1 - \tfrac{3}{4}z^{-1} + \tfrac{5}{16}z^{-2} - \tfrac{3}{64}z^{-3} - \tfrac{11}{256}z^{-4} + \tfrac{45}{1024}z^{-5} - \tfrac{91}{4096}z^{-6} + \tfrac{93}{16384}z^{-7} + \tfrac{85}{65536}z^{-8} - \tfrac{627}{262144}z^{-9} + \tfrac{1541}{1048576}z^{-10} - \cdots \qquad (2)$$

$$\approx 1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.0430z^{-4} + 0.0439z^{-5} - 0.0222z^{-6} + 0.0057z^{-7} + 0.0013z^{-8} - 0.0024z^{-9} + 0.0015z^{-10} - \cdots$$

(a) M = 2

Retaining terms in Eq. (2) up to z^{-2}, we may approximate the MA model with an AR model of order two as follows:

$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2}}$$

(b) M = 5

Retaining terms in Eq. (2) up to z^{-5}, we obtain the following approximation in the form of an AR model of order five:

$$\frac{X(z)}{V(z)} \approx \frac{1}{1 - 0.75z^{-1} + 0.3125z^{-2} - 0.0469z^{-3} - 0.0430z^{-4} + 0.0439z^{-5}}$$
(c) M = 10

Finally, retaining terms in Eq. (2) up to z^{-10}, we obtain the following approximation in the form of an AR model of order ten:

$$\frac{X(z)}{V(z)} \approx \frac{1}{D(z)}$$

where D(z) is given by the polynomial on the right-hand side of Eq. (2), truncated at z^{-10}.
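The long division is equivalent to the recurrence d_k = -0.75 d_{k-1} - 0.25 d_{k-2} with d_0 = 1, which the following added sketch uses to regenerate the coefficients of Eq. (2) and to compare the exact MA frequency response against the AR(10) approximation:

```python
import numpy as np

# MA(2) coefficients from Eq. (1), ascending powers of z^{-1}.
ma = np.array([1.0, 0.75, 0.25])

def ar_coeffs(M):
    """Truncated power series of 1/(1 + 0.75 z^-1 + 0.25 z^-2) via long division."""
    d = np.zeros(M + 1)
    d[0] = 1.0
    for k in range(1, M + 1):
        d[k] = -0.75 * d[k - 1] - (0.25 * d[k - 2] if k >= 2 else 0.0)
    return d

print(ar_coeffs(5).round(4))
# [ 1.     -0.75    0.3125 -0.0469 -0.043   0.0439]

# Compare frequency responses of the exact MA model and the AR(10) approximation.
w = np.linspace(0, np.pi, 5)
H_ma = np.polyval(ma[::-1], np.exp(-1j * w))            # 1 + 0.75e^{-jw} + 0.25e^{-2jw}
H_ar = 1.0 / np.polyval(ar_coeffs(10)[::-1], np.exp(-1j * w))
print(np.max(np.abs(H_ma - H_ar)))                      # small approximation error
```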
1.11 (a) The filter output is

$$x(n) = w^H u(n)$$

where u(n) is the tap-input vector. The average power of the filter output is therefore

$$E[|x(n)|^2] = E[w^H u(n) u^H(n) w] = w^H E[u(n)u^H(n)]\, w = w^H R w$$

(b) If u(n) is extracted from a zero-mean white noise of variance $\sigma^2$, we have

$$R = \sigma^2 I$$

where I is the identity matrix. Hence,

$$E[|x(n)|^2] = \sigma^2\, w^H w$$
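A quick Monte-Carlo check of part (b) (added sketch; the tap weights, filter length and noise variance are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

M, sigma2, N = 8, 2.0, 400_000
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # arbitrary tap weights

# White-noise tap inputs: each snapshot u(n) has covariance sigma2 * I.
U = np.sqrt(sigma2) * rng.standard_normal((N, M))
x = U @ w.conj()                                           # x(n) = w^H u(n)

print(np.mean(np.abs(x) ** 2))            # sample average power
print(sigma2 * np.vdot(w, w).real)        # theory: sigma^2 * w^H w
```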
1.12 (a) The process u(n) is a linear combination of Gaussian samples. Hence, u(n) is Gaussian.

(b) From inverse filtering, we recognize that v(n) may also be expressed as a linear combination of samples represented by u(n). Hence, if u(n) is Gaussian, then v(n) is also Gaussian.

1.13 (a) From the Gaussian moment factoring theorem:

$$E\big[(u_1^* u_2)^k\big] = E\big[\underbrace{u_1^* \cdots u_1^*}_{k}\,\underbrace{u_2 \cdots u_2}_{k}\big]$$
$$= k!\,\big(E[u_1^* u_2]\big)^k \qquad (1)$$

(b) Putting u_2 = u_1 = u, Eq. (1) reduces to

$$E\big[|u|^{2k}\big] = k!\,\big(E[|u|^2]\big)^k$$

1.14 It is not permissible to interchange the order of expectation and limiting operations in Eq. (1.113). The reason is that the expectation is a linear operation, whereas the limiting operation with respect to the number of samples N is nonlinear.
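For a zero-mean circularly-symmetric complex Gaussian with E[|u|²] = 1, the result of part (b) predicts E[|u|^{2k}] = k!. The following added sketch verifies this by simulation:

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(5)

# Zero-mean circularly-symmetric complex Gaussian with E[|u|^2] = 1.
N = 2_000_000
u = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

for k in range(1, 4):
    sample = np.mean(np.abs(u) ** (2 * k))
    print(f"k={k}: E[|u|^{2*k}] ~ {sample:.3f}   k! = {factorial(k)}")
```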
1.15 The filter output is

$$y(n) = \sum_i h(i)\, u(n-i)$$

Similarly, we may write

$$y(m) = \sum_k h(k)\, u(m-k)$$

Hence,

$$r_y(n, m) = E[y(n)y^*(m)] = E\Big[\sum_i h(i)u(n-i)\sum_k h^*(k)u^*(m-k)\Big] = \sum_i\sum_k h(i)h^*(k)\,E[u(n-i)u^*(m-k)] = \sum_i\sum_k h(i)h^*(k)\, r_u(n-i,\, m-k)$$
1.16 The mean-square value of the filter output in response to white-noise input is

$$P_o = \frac{2\sigma^2\,\Delta\omega}{\pi}$$

The value P_o is linearly proportional to the filter bandwidth $\Delta\omega$. This relation holds irrespective of how small $\Delta\omega$ is compared to the mid-band frequency of the filter.
1.17 (a) The variance of the filter output is

$$\sigma_y^2 = \frac{2\sigma^2\,\Delta\omega}{\pi}$$

We are given

$$\sigma^2 = 0.1\ \text{volt}^2, \qquad \Delta\omega = 2\pi\ \text{radians/sec}$$

Hence,

$$\sigma_y^2 = \frac{2 \times 0.1 \times 2\pi}{\pi} = 0.4\ \text{volt}^2$$

(b) The pdf of the filter output y is

$$f(y) = \frac{1}{\sqrt{2\pi}\,\sigma_y}\, e^{-y^2/2\sigma_y^2} = \frac{1}{0.63\sqrt{2\pi}}\, e^{-y^2/0.8}$$
1.18 (a) We are given

$$U_k = \sum_{n=0}^{N-1} u(n)\exp(-jn\omega_k), \qquad k = 0, 1, \ldots, N-1$$

where u(n) is real valued and

$$\omega_k = \frac{2\pi}{N}k$$

Hence,
$$E[U_k U_l^*] = E\Big[\sum_{n=0}^{N-1}\sum_{m=0}^{N-1} u(n)u(m)\exp(-jn\omega_k + jm\omega_l)\Big] = \sum_{n=0}^{N-1}\sum_{m=0}^{N-1} \exp(-jn\omega_k + jm\omega_l)\,E[u(n)u(m)]$$

$$= \sum_{n=0}^{N-1}\sum_{m=0}^{N-1} \exp(-jn\omega_k + jm\omega_l)\, r(n-m) = \sum_{m=0}^{N-1}\exp(jm\omega_l)\sum_{n=0}^{N-1} r(n-m)\exp(-jn\omega_k) \qquad (1)$$

By definition, we also have

$$S_k = \sum_{n=0}^{N-1} r(n)\exp(-jn\omega_k)$$

Moreover, since r(n) is periodic with period N, we may invoke the time-shifting property of the discrete Fourier transform to write

$$\sum_{n=0}^{N-1} r(n-m)\exp(-jn\omega_k) = \exp(-jm\omega_k)\,S_k$$

Thus, recognizing that $\omega_k = (2\pi/N)k$, Eq. (1) reduces to

$$E[U_k U_l^*] = S_k\sum_{m=0}^{N-1}\exp\big(jm(\omega_l - \omega_k)\big) = \begin{cases} S_k, & l = k \\ 0, & \text{otherwise} \end{cases}$$
(b) Part (a) shows that the complex spectral samples U_k are uncorrelated. If they are Gaussian, then they will also be statistically independent. Hence,

$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(2\pi)^N \det(\Lambda)}\exp\left(-\frac{1}{2}U^H \Lambda^{-1} U\right)$$

where

$$U = [U_0, U_1, \ldots, U_{N-1}]^T$$

$$\Lambda = \frac{1}{2}E[UU^H] = \frac{1}{2}\,\mathrm{diag}(S_0, S_1, \ldots, S_{N-1})$$

Therefore,

$$\det(\Lambda) = \frac{1}{2^N}\prod_{k=0}^{N-1} S_k$$

and

$$f_U(U_0, U_1, \ldots, U_{N-1}) = \frac{1}{(2\pi)^N\, 2^{-N}\prod_{k=0}^{N-1} S_k}\exp\left(-\sum_{k=0}^{N-1}\frac{|U_k|^2}{S_k}\right) = \pi^{-N}\exp\left(-\sum_{k=0}^{N-1}\left(\frac{|U_k|^2}{S_k} + \ln S_k\right)\right)$$
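The decorrelation claim of part (a) can be illustrated by simulation. The sketch below is a minimal synthesis example, assuming an arbitrary symmetric nonnegative spectrum S_k from which real realizations with circular (period-N) correlation are generated; the off-diagonal entries of the sample matrix E[U_k U_l*] come out near zero.

```python
import numpy as np

rng = np.random.default_rng(6)

N, trials = 16, 20_000
# A valid circular (period-N) autocorrelation: pick a symmetric spectrum S_k >= 0.
S = 1.0 + 0.8 * np.cos(2 * np.pi * np.arange(N) / N)

U_acc = np.zeros((N, N), dtype=complex)
for _ in range(trials):
    W = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    u = np.fft.ifft(np.sqrt(S) * W).real * np.sqrt(N)   # real, circularly stationary
    U = np.fft.fft(u)                                   # spectral samples U_k
    U_acc += np.outer(U, U.conj())

C = U_acc / trials                       # sample estimate of E[U_k U_l*]
off_diag = C - np.diag(np.diag(C))
print(np.max(np.abs(off_diag)) / np.max(np.abs(np.diag(C))))  # ~0: uncorrelated
```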
1.19 The mean-square value of the increment process $dz(\omega)$ is

$$E\big[|dz(\omega)|^2\big] = S(\omega)\,d\omega$$

Hence $E[|dz(\omega)|^2]$ is measured in watts.

1.20 The third-order cumulant of a process u(n) is

$$c_3(\tau_1, \tau_2) = E[u(n)u(n+\tau_1)u(n+\tau_2)]$$

that is, the third-order moment. All odd-order moments of a zero-mean Gaussian process are known to be zero; hence,
$$c_3(\tau_1, \tau_2) = 0$$

The fourth-order cumulant is

$$c_4(\tau_1, \tau_2, \tau_3) = E[u(n)u(n+\tau_1)u(n+\tau_2)u(n+\tau_3)] - E[u(n)u(n+\tau_1)]E[u(n+\tau_2)u(n+\tau_3)]$$
$$\qquad - E[u(n)u(n+\tau_2)]E[u(n+\tau_1)u(n+\tau_3)] - E[u(n)u(n+\tau_3)]E[u(n+\tau_1)u(n+\tau_2)]$$

For the special case of $\tau_1 = \tau_2 = \tau_3 = 0$, the fourth-order moment of a zero-mean Gaussian process of variance $\sigma^2$ is $3\sigma^4$, and its second-order moment is $\sigma^2$. Hence, the fourth-order cumulant is

$$c_4(0, 0, 0) = 3\sigma^4 - 3(\sigma^2)^2 = 0$$

Indeed, all cumulants of a Gaussian process higher than order two are zero.
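The vanishing fourth-order cumulant at zero lags is easy to confirm numerically (added sketch; the variance and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

sigma2, N = 2.0, 5_000_000
u = np.sqrt(sigma2) * rng.standard_normal(N)

m2 = np.mean(u**2)            # ~ sigma^2
m3 = np.mean(u**3)            # ~ 0 (odd moment)
m4 = np.mean(u**4)            # ~ 3 sigma^4
print(m2, m3, m4, m4 - 3 * m2**2)   # last value ~ 0: vanishing 4th-order cumulant
```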
1.21 The trispectrum is

$$C_4(\omega_1, \omega_2, \omega_3) = \sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty}\sum_{\tau_3=-\infty}^{\infty} c_4(\tau_1, \tau_2, \tau_3)\, e^{-j(\omega_1\tau_1 + \omega_2\tau_2 + \omega_3\tau_3)}$$

Let the process be passed through a three-dimensional band-pass filter centered on $\omega_1$, $\omega_2$, and $\omega_3$. We assume that the bandwidth (along each dimension) is small compared to the respective center frequency. The average power of the filter output is then proportional to the trispectrum $C_4(\omega_1, \omega_2, \omega_3)$.
1.22 (a) Starting with the formula

$$c_k(\tau_1, \tau_2, \ldots, \tau_{k-1}) = \gamma_k \sum_{i=-\infty}^{\infty} h_i\, h_{i+\tau_1} \cdots h_{i+\tau_{k-1}}$$

the third-order cumulant of the filter output is

$$c_3(\tau_1, \tau_2) = \gamma_3 \sum_{i=-\infty}^{\infty} h_i\, h_{i+\tau_1}\, h_{i+\tau_2}$$

where $\gamma_3$ is the third-order cumulant of the filter input. The bispectrum is
$$C_3(\omega_1, \omega_2) = \sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty} c_3(\tau_1, \tau_2)\, e^{-j(\omega_1\tau_1 + \omega_2\tau_2)} = \gamma_3 \sum_{i=-\infty}^{\infty}\sum_{\tau_1=-\infty}^{\infty}\sum_{\tau_2=-\infty}^{\infty} h_i\, h_{i+\tau_1}\, h_{i+\tau_2}\, e^{-j(\omega_1\tau_1 + \omega_2\tau_2)}$$

Hence,

$$C_3(\omega_1, \omega_2) = \gamma_3\, H\big(e^{j\omega_1}\big)\, H\big(e^{j\omega_2}\big)\, H^*\big(e^{j(\omega_1+\omega_2)}\big)$$

(b) From this formula, we immediately deduce that

$$\arg C_3(\omega_1, \omega_2) = \arg H\big(e^{j\omega_1}\big) + \arg H\big(e^{j\omega_2}\big) - \arg H\big(e^{j(\omega_1+\omega_2)}\big)$$
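The factorization can be checked directly for a short real FIR filter. In the added sketch below, the impulse response h and the input cumulant γ₃ are arbitrary; the double DTFT of c₃(τ₁, τ₂) is evaluated by brute force and compared with the product of frequency responses.

```python
import numpy as np

# Arbitrary real FIR impulse response and input third-order cumulant gamma_3.
h = np.array([1.0, -0.5, 0.25])
gamma3 = 1.7
H = lambda w: np.sum(h * np.exp(-1j * w * np.arange(len(h))))

def C3(w1, w2):
    """Bispectrum via the double DTFT of c3(tau1, tau2)."""
    taus = np.arange(-len(h) + 1, len(h))
    total = 0.0 + 0.0j
    for t1 in taus:
        for t2 in taus:
            c3 = gamma3 * sum(
                h[i] * h[i + t1] * h[i + t2]
                for i in range(len(h))
                if 0 <= i + t1 < len(h) and 0 <= i + t2 < len(h)
            )
            total += c3 * np.exp(-1j * (w1 * t1 + w2 * t2))
    return total

w1, w2 = 0.7, 1.3
print(C3(w1, w2))
print(gamma3 * H(w1) * H(w2) * np.conj(H(w1 + w2)))   # matches
```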
1.23 The output of a filter of impulse response $h_i$ due to an input u(i) is given by the convolution sum

$$y(n) = \sum_i h_i\, u(n-i)$$

The third-order cumulant of the filter output is, for example,

$$c_3(\tau_1, \tau_2) = E[y(n)y(n+\tau_1)y(n+\tau_2)] = E\Big[\sum_i h_i u(n-i)\sum_k h_k u(n+\tau_1-k)\sum_l h_l u(n+\tau_2-l)\Big]$$

$$= E\Big[\sum_i h_i u(n-i)\sum_k h_{k+\tau_1} u(n-k)\sum_l h_{l+\tau_2} u(n-l)\Big] = \sum_i\sum_k\sum_l h_i\, h_{k+\tau_1}\, h_{l+\tau_2}\, E[u(n-i)u(n-k)u(n-l)]$$

For an input sequence of independent and identically distributed random variables, we note that

$$E[u(n-i)u(n-k)u(n-l)] = \begin{cases} \gamma_3, & i = k = l \\ 0, & \text{otherwise} \end{cases}$$
Hence,

$$c_3(\tau_1, \tau_2) = \gamma_3 \sum_{i=-\infty}^{\infty} h_i\, h_{i+\tau_1}\, h_{i+\tau_2}$$

In general, we may thus write

$$c_k(\tau_1, \tau_2, \ldots, \tau_{k-1}) = \gamma_k \sum_{i=-\infty}^{\infty} h_i\, h_{i+\tau_1} \cdots h_{i+\tau_{k-1}}$$
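This result lends itself to a simulation check. The added sketch below filters an i.i.d. zero-mean sequence with a nonzero third-order cumulant (a shifted exponential, for which γ₃ = 2, is an arbitrary choice) and compares sample cumulants of the output against the formula:

```python
import numpy as np

rng = np.random.default_rng(8)

# i.i.d. zero-mean input with third-order cumulant gamma_3 = E[u^3] = 2.
N = 4_000_000
u = rng.exponential(1.0, N) - 1.0        # Exp(1) shifted to zero mean
gamma3 = 2.0

h = np.array([1.0, -0.5, 0.25])          # arbitrary FIR impulse response
y = np.convolve(u, h, mode="valid")

def c3_sample(y, t1, t2):
    m = len(y) - max(t1, t2)
    return np.mean(y[:m] * y[t1:t1 + m] * y[t2:t2 + m])

def c3_theory(t1, t2):
    return gamma3 * sum(
        h[i] * h[i + t1] * h[i + t2]
        for i in range(len(h))
        if i + t1 < len(h) and i + t2 < len(h)
    )

for t1, t2 in [(0, 0), (1, 0), (1, 2)]:
    print((t1, t2), round(c3_sample(y, t1, t2), 3), c3_theory(t1, t2))
```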
1.24 By definition,

$$r^{(\alpha)}(k) = \frac{1}{N}\sum_{n=0}^{N-1} E\big[u(n)u^*(n-k)e^{-j2\pi\alpha n}\big]\, e^{j\pi\alpha k}$$

Hence,

$$r^{(\alpha)}(-k) = \frac{1}{N}\sum_{n=0}^{N-1} E\big[u(n)u^*(n+k)e^{-j2\pi\alpha n}\big]\, e^{-j\pi\alpha k}$$

and

$$r^{(\alpha)*}(k) = \frac{1}{N}\sum_{n=0}^{N-1} E\big[u^*(n)u(n-k)e^{j2\pi\alpha n}\big]\, e^{-j\pi\alpha k}$$

We are told that the process u(n) is cyclostationary, which means that

$$E\big[u(n)u^*(n+k)e^{-j2\pi\alpha n}\big] = E\big[u^*(n)u(n-k)e^{j2\pi\alpha n}\big]$$

It follows, therefore, that

$$r^{(\alpha)}(-k) = r^{(\alpha)*}(k)$$

1.25 For $\alpha = 0$, the input to the time-average cross-correlator reduces to the squared amplitude of the output of a narrow-band filter with mid-band frequency $\omega$. Correspondingly, the time-average cross-correlator reduces to an average power meter. Thus, for $\alpha = 0$, the instrumentation of Fig. 1.16 reduces to that of Fig. 1.13.