(Monson H. Hayes) Statistical Digital Signal Processing and Modeling
... Use Gerschgorin's circle theorem to show that this matrix is nonsingular.

2.19. Consider the following quadratic function of two real-valued variables z1 and z2,

    f(z1, z2) = 3z1^2 + 3z2^2 + 4z1z2 + 8

Find the values of z1 and z2 that minimize f(z1, z2) subject to the constraint that z1 + z2 = 1,
and determine the minimum value of f(z1, z2).

DISCRETE-TIME RANDOM PROCESSES
3.1 INTRODUCTION
Signals may be broadly classified into one of two types—deterministic and random. A
deterministic signal is one that may be reproduced exactly with repeated measurements.
The unit sample response of a linear shift-invariant filter is an example of a deterministic
signal. A random signal, or random process, on the other hand, is a signal that is not
repeatable in a predictable manner. Quantization noise produced by an A/D converter or a
fixed-point digital filter is an example of a random process. Other examples include tape
hiss or turntable rumble in audio signals, background clutter in radar images, speckle noise
in synthetic aperture radar (SAR) images, and engine noise in speech transmissions from
the cockpit of an airplane. Some signals may be considered to be either deterministic or
random, depending upon the application. Speech, for example, may be considered to be a
deterministic signal if a specific speech waveform is to be processed or analyzed. However,
speech may also be viewed as a random process if one is considering the ensemble of all
possible speech waveforms in order to design a system that will optimally process speech
signals in general.
In this book, many of the signals that we will be characterizing and analyzing will be
random processes. Since a random process may only be described probabilistically or in
terms of its average behavior, in this chapter we develop the background that is necessary
to understand how a random process may be described and how its statistical properties
are affected by a linear shift-invariant system. Although it is assumed that the reader is
familiar with the basic theory of random variables, the chapter begins with a brief review
of random variables in Section 3.2, focusing primarily on the concepts of statistical av-
erages such as the mean, variance, and correlation. Discrete-time random processes are
introduced in Section 3.3 as an indexed sequence of random variables. The characterization
of a random process in terms of its ensemble averages, such as the mean, autocorrelation,
and autocovariance, is presented, and properties of these averages are explored. The notion
of stationarity and the concept of ergodicity are introduced next. Then, the power spectral
density, or power spectrum, of a discrete random process is defined. The power spectrum,
which is the Fourier transform of the autocorrelation function, provides an important char-
acterization of a random process and will appear frequently in this book. In Chapter 8, for
example, we consider techniques for estimating the power spectrum of a random process
from a single observation of the process. In Section 3.4, we consider the effect of filtering a
random process with a linear shift-invariant filter. Specifically, the relationship between the
autocorrelation of the input process and the autocorrelation of the output process is established.
Finally, Section 3.6 provides a description of a useful and important class of random
processes: those that may be generated by filtering white noise with a linear shift-invariant
filter. These include autoregressive (AR), moving average (MA), and autoregressive moving
average (ARMA) random processes.
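As a preview of Section 3.6, the idea of generating a random process by filtering white noise can be sketched in a few lines of Python. This is only an illustration, not material from the text; the AR(1) coefficient a = 0.8 and the sample size N are arbitrary choices.

```python
import numpy as np

# Sketch: generate an AR(1) process by filtering unit-variance white
# noise v(n) with the first-order recursion
#     x(n) = a * x(n-1) + v(n),   |a| < 1
rng = np.random.default_rng(0)

a = 0.8                       # AR(1) coefficient (assumed for illustration)
N = 50_000                    # number of samples (assumed for illustration)
v = rng.standard_normal(N)    # white Gaussian noise with variance 1

x = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + v[n]

# For this process the theoretical variance is 1 / (1 - a**2),
# and the sample variance should be close to it for large N.
print(np.var(x), 1.0 / (1.0 - a**2))
```

The recursion is the simplest member of the ARMA family described above: the filter has a single pole at z = a and no zeros.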
3.2 RANDOM VARIABLES
In this section, we introduce the concept of a random variable and develop some basic
properties and characteristics of random variables. These results are important for two
reasons. First, since random variables are pervasive and may be found in almost any practical
application or natural setting ranging from the theory of games to detection and estimation
theory and the restoration and enhancement of distorted signals, it is important to be able to
understand and manipulate random variables. The second reason is that the characterization
of random variables will serve as the starting point for our development of random processes
in Section 3.3.
3.2.1 Definitions
The concept of a random variable may best be illustrated by means of the following simple
example. Given a fair coin that is equally likely to result in Heads or Tails when flipped,
one would expect that if the coin were to be flipped repeatedly, then the outcome would be
Heads approximately half of the time and Tails the other half. For example, if flipping the
coin N times results in n_H Heads and n_T Tails, then, for N large, we should find that

    n_H / N ≈ 0.5   ;   n_T / N ≈ 0.5
Therefore, with a fair coin there is an equal probability of getting either Heads or Tails and
the following probability assignment is made for the two possible experimental outcomes
H = {Heads} and T = {Tails},
    Pr{H} = 0.5   ;   Pr{T} = 0.5        (3.1)
The set of all possible experimental outcomes is called the sample space or the certain event
and the sample space, denoted by Ω, is always assigned a probability of one,

    Pr{Ω} = 1
For the coin flipping experiment
    Ω = {H, T}
and
Pr{H,T} = 1
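The relative-frequency argument above can be checked with a quick simulation. This sketch is not from the text; the number of flips N = 100,000 is an arbitrary choice.

```python
import random

# Flip a fair coin N times and check that the relative frequencies
# n_H / N and n_T / N are both near 0.5, as the text argues.
random.seed(1)

N = 100_000
n_heads = sum(1 for _ in range(N) if random.random() < 0.5)
n_tails = N - n_heads

print(n_heads / N, n_tails / N)   # both close to 0.5 for large N
```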
Subsets of the sample space are called events, and events consisting of a single element are
called elementary events. For the coin tossing experiment there are only two elementary
events,

    ω1 = {H}   ;   ω2 = {T}
Given the coin flipping experiment, suppose that a real-valued variable, x, is defined in
such a way that a value of 1 is assigned to x if the outcome of a coin flip is Heads and a
value of —1 is assigned if the outcome is Tails. With this definition, a mapping has been
defined between the set of experimental outcomes, ωi ∈ Ω, and the set of real numbers, R,
i.e., f : Ω → R. This mapping is given by

    ω1 = {H}  ⇒  x = 1
    ω2 = {T}  ⇒  x = −1
Based on the probability assignments defined for the elementary events Heads and Tails in
Eq. (3.1), it follows that there is an equal probability that the variable x will be assigned a
value of 1 or —1
    Pr{x = 1} = 0.5
    Pr{x = −1} = 0.5        (3.2)
Since the only two possible outcomes of the coin flip are Heads and Tails, the only
possible values that may be assigned to the variable x are x = 1 and x = −1. Therefore,
for any number a that is different from 1 or −1, the probability that x = a is equal to zero,

    Pr{x = a} = 0   if a ≠ ±1

and the probability that x assumes one of the two values, x = 1 or x = −1, is equal to one,

    Pr{Ω} = Pr{x = ±1} = 1
With this probability assignment on each of the elementary events in the sample space Ω,
and with the mapping of each elementary event ωi ∈ Ω to a value of x, a random variable
has been defined that is specified in terms of the likelihood (probability) that it assumes a
particular value. This random variable has an equal probability of having a value of 1 or
−1. More generally, a random variable x that may assume only one of two values, x = 1
and x = −1, with

    Pr{x = 1} = p   and   Pr{x = −1} = 1 − p
is referred to as a Bernoulli random variable.
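A minimal sketch of drawing samples from such a Bernoulli random variable, not taken from the text; the value p = 0.3 is an arbitrary choice for illustration.

```python
import random

random.seed(2)

def bernoulli(p):
    """Return +1 with probability p, and -1 with probability 1 - p."""
    return 1 if random.random() < p else -1

p = 0.3   # assumed for illustration
samples = [bernoulli(p) for _ in range(100_000)]

# The fraction of +1 outcomes should be close to p for a large sample.
frac_ones = samples.count(1) / len(samples)
print(frac_ones)
```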
A slightly more complicated random variable may be defined by replacing the coin
tossing experiment with the roll of a fair die. Specifically, if the number that is obtained
with the roll of a fair die is assigned to the variable x, then x will be a random variable that
is equally likely to take on any integer value from one to six. In a similar fashion, a complex
random variable may be defined by assigning complex numbers to elementary events in
the sample space. For example, with an experiment consisting of a roll of two fair dice, one
black and one white, a complex random variable z may be defined with the assignment

    z = m + jn
where m is the number showing on the black die and n is the number showing on the white
die.
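The two-dice construction above can be sketched directly in Python, where the complex random variable z = m + jn is built from the two die faces. This is only an illustration of the definition in the text.

```python
import random

random.seed(3)

def roll_two_dice():
    """Roll two fair dice and map the outcome to z = m + jn."""
    m = random.randint(1, 6)   # number showing on the black die
    n = random.randint(1, 6)   # number showing on the white die
    return complex(m, n)       # the complex random variable z = m + jn

z = roll_two_dice()
print(z)   # real and imaginary parts are each integers from 1 to 6
```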
Each of the random variables considered so far is an example of a discrete random
variable, since the sample space Ω consists of a discrete set of events, ωi. Unlike a discrete
random variable, a random variable of the continuous type may assume a continuum of
values. An example of a random variable of the continuous type is the following. Consider
an infinite resolution roulette wheel for which any number in the interval [0, 1] may result
from the spin of the wheel. The sample space for this experiment is
    Ω = {α : 0 ≤ α ≤ 1}