DIGITAL COMMUNICATION AND COMMUNICATION NETWORKS
Vijay K. Garg
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA
Yih-Chen Wang
Lucent Technologies, Naperville, Illinois, USA
This section is concerned with digital communication and data communication networks. The first part of the section focuses on the essentials of digital communication, and the second part discusses important aspects of data communication networks. In a digital communication system, the information is processed so that it can be represented by a sequence of discrete messages. The digital source may be the result of sampling and quantizing an analog source, or it may be an inherently digital source such as the contents of a computer memory. When the source is binary, the two possible values, 0 and 1, are called bits. A digital signal can be processed independently of whether it represents a discrete data source or a digitized analog source. This implies that an essentially unlimited range of signal conditioning and processing options is available to the designer. Depending on the origination and destination of the information being conveyed, this processing may include source coding, encryption, pulse shaping for spectral control, forward error correction (FEC) coding, special modulation to spread the signal spectrum, and equalization to compensate for channel distortion. In most communication system designs, a general objective is to use the resources of bandwidth and transmitted power as efficiently as possible. Thus, we are interested in both bandwidth efficiency, defined as the ratio of data rate to signal bandwidth, and power efficiency, characterized by the probability of making a reception error as a function of signal-to-noise ratio (SNR). Modulation produces a continuous-time waveform suitable for transmission through the communication channel, whereas demodulation extracts the data from the received signal. We discuss various modulation schemes. Spread spectrum refers to any modulation scheme that produces a transmitted-signal spectrum much wider than the bandwidth of the information-bearing signal, independently of that bandwidth. We discuss the code division multiple access (CDMA) spread spectrum scheme.

In the second part of the section, our focus is on data communication networks, which usually involve a collection of computers and other devices operating autonomously and interconnected to provide computing or data services. The interconnection of computers and devices can be established with either wired or wireless transmission media. This part starts with an introduction to the fundamental concepts of data communications and network architecture. These concepts help readers understand how different system components work together to build a data communication network. Many data communication and networking terms are explained, which also helps readers understand advanced networking technologies. Two important network architectures are described: Open System Interconnection (OSI) and the TCP/IP architecture. OSI is a reference model referenced by all types of networks, while the TCP/IP architecture provides the framework upon which the Internet has been built. The network evolution could not have been so successful without the widespread use
of local networking. Local network technology, including the wireless local area network (WLAN), is then introduced, followed by various wireless network access technologies such as FDMA, TDMA, CDMA, and WCDMA. The convergence of a variety of networks under the Internet has begun. One important driver has been the introduction of the Session Initiation Protocol (SIP), which provides a framework for the IP Multimedia Subsystem (IMS). This convergence would not be possible without extensive use of the softswitch, the creation of an all-IP architecture, and the provision of broadband access using optical networking technologies.
1
Signal Types, Properties, and Processes
Vijay K. Garg
Department of Electrical and Computer Engineering, University of Illinois at Chicago, Chicago, Illinois, USA

Yih-Chen Wang
Lucent Technologies, Naperville, Illinois, USA

1.1 Signal Types
1.2 Energy and Power of a Signal
1.3 Random Processes
    1.3.1 Statistical Average of a Random Process • 1.3.2 Stationary Process • 1.3.3 Properties of an Autocorrelation Function • 1.3.4 Cross-correlation Functions • 1.3.5 Ergodic Processes
1.4 Transmission of a Random Signal Through a Linear Time-Invariant Filter
1.5 Power Spectral Density
    1.5.1 Properties of psd
1.6 Relation Between the psd of Input Versus the psd of Output
1.1 Signal Types
A signal is a function that carries some information. Mathematically, a signal can be classified as deterministic or random. For a deterministic signal, there is no uncertainty with respect to its value at any time; deterministic signals are modeled by explicit mathematical expressions. For a random signal, there is some degree of uncertainty before the signal occurs, so a probabilistic model is used to characterize it. Noise in a communication system is an example of a random signal.

A signal x(t) is periodic in time if there exists a constant T_0 > 0 such that:

x(t) = x(t + T_0) \quad \text{for } -\infty < t < \infty, \qquad (1.1)

where t is the time. The smallest value of T_0 that satisfies equation 1.1 is called the period of x(t). The period T_0 defines the duration of one complete cycle of x(t). A signal for which there is no value of T_0 that satisfies equation 1.1 is referred to as a nonperiodic signal.

1.2 Energy and Power of a Signal

The instantaneous power of an electrical signal is given as:

p(t) = x^2(t), \qquad (1.2)

where x(t) is either a voltage or a current of the signal.

Copyright © 2005 by Academic Press. All rights of reproduction in any form reserved.

The energy dissipated during the time interval (-T/2, T/2) by a real signal with instantaneous power given by equation 1.2 is written as:

E_x^T = \int_{-T/2}^{T/2} x^2(t)\, dt. \qquad (1.3)

The average power dissipated by the signal during the interval T will be as follows:

P_x^T = \frac{1}{T} \int_{-T/2}^{T/2} x^2(t)\, dt. \qquad (1.4)

In analyzing communication signals, it is often desirable to deal with waveform energy. We refer to x(t) as an energy signal only if it has nonzero but finite energy, 0 < E_x < \infty, for all time, where:

E_x = \lim_{T \to \infty} \int_{-T/2}^{T/2} x^2(t)\, dt = \int_{-\infty}^{\infty} x^2(t)\, dt. \qquad (1.5)

A signal is defined to be a power signal only if it has finite but nonzero power, 0 < P_x < \infty, for all time, where:

P_x = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} x^2(t)\, dt. \qquad (1.6)
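In discrete time, the integrals in equations 1.3 through 1.6 become sums. The following sketch (assuming numpy; the sinusoid parameters are illustrative choices, not from the text) approximates the energy and average power of a unit-amplitude sinusoid over a finite interval:

```python
import numpy as np

# Sampled sinusoid x(t) = cos(2*pi*f0*t) on (-T/2, T/2); parameters illustrative.
f0 = 5.0          # frequency in hertz
T = 2.0           # observation interval in seconds (an integer number of cycles)
dt = 1e-4         # sampling step
t = np.arange(-T / 2.0, T / 2.0, dt)
x = np.cos(2.0 * np.pi * f0 * t)

# Equation 1.3: energy over the interval, approximated as a Riemann sum.
energy = np.sum(x ** 2) * dt

# Equation 1.4: average power over the interval.
power = energy / T

print(energy)     # ~1.0 (A^2 * T / 2 with A = 1, T = 2)
print(power)      # ~0.5 (A^2 / 2)
```

Because the average power converges to a finite nonzero value while the energy grows without bound as T increases, a sinusoid is a power signal rather than an energy signal.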
1.3 Random Processes

A random process X(t, s) can be viewed as a function of two variables: the sample space s and time t. Figure 1.1 shows N sample functions of time, {X_k(t)}. Each of the sample functions can be considered as the output of a different noise generator. For a fixed sample point s_k, the graph of the function X(t, s_k) versus time t is called a realization or sample function of the random process. To simplify the notation, we denote this sample function as:

X_k(t) = X(t, s_k). \qquad (1.7)

We have an indexed ensemble (family) of random variables {X(t, s)}, which is called a random process. To simplify the notation, we suppress the s and use X(t) to denote a random process. A random process X(t) is an ensemble of time functions together with a probability rule that assigns a probability to any meaningful event associated with an observation of one sample function of the random process.

1.3.1 Statistical Average of a Random Process

A random process whose distribution functions are continuous can be described statistically with a probability density function (pdf). We define the mean of the random process X(t) as:

E\{X(t_k)\} = \int_{-\infty}^{\infty} x\, p_{X(t_k)}(x)\, dx = \mu_X(t_k), \qquad (1.8)

where X(t_k) is the random variable obtained by observing X(t) at time t_k and p_{X(t_k)}(x) is the pdf of X(t_k). We define the autocorrelation function of the random process X(t) to be a function of two variables, t_1 and t_2, as:

R_X(t_1, t_2) = E\{X(t_1) X(t_2)\}, \qquad (1.9)

where X(t_1) and X(t_2) are random variables obtained by observing X(t) at times t_1 and t_2, respectively. The autocorrelation function is a measure of the degree to which two time samples of the same random process are related.

1.3.2 Stationary Process

A random process X(t) is said to be stationary in the strict sense if none of its statistics is affected by a shift in the time origin. A random process is said to be wide-sense stationary (WSS) if two of its statistics (the mean and the autocorrelation function) do not vary with a shift in the time origin. Therefore, a random process is WSS if:

E\{X(t)\} = \mu_X = \text{constant}, \qquad (1.10)

and

R_X(t_1, t_2) = R_X(t_1 - t_2) = R_X(\tau), \qquad (1.11)

where \tau = t_1 - t_2. Strict-sense stationarity implies WSS, but WSS does not imply strict-sense stationarity.
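Conditions 1.10 and 1.11 can be checked by Monte Carlo over an ensemble. The sketch below (assuming numpy; the random-phase cosine process and all parameters are illustrative choices) estimates the ensemble mean at several instants and the autocorrelation at two pairs of instants with different lags:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of sample functions X(t) = cos(2*pi*f*t + phi), phi ~ U(0, 2*pi).
f = 1.0
N = 200_000                        # number of sample functions in the ensemble
t = np.array([0.10, 0.35, 0.70])   # three observation instants
phi = rng.uniform(0.0, 2.0 * np.pi, size=(N, 1))
X = np.cos(2.0 * np.pi * f * t + phi)      # shape (N, 3): one row per realization

# Equation 1.10: the ensemble mean is the same constant (here 0) at every instant.
print(X.mean(axis=0))

# Equation 1.11: R_X(t1, t2) depends only on the lag t1 - t2; for this process
# the theoretical value is 0.5 * cos(2*pi*f*(t1 - t2)).
R_lag_025 = np.mean(X[:, 0] * X[:, 1])     # instants 0.10 and 0.35, lag 0.25
R_lag_035 = np.mean(X[:, 1] * X[:, 2])     # instants 0.35 and 0.70, lag 0.35
print(R_lag_025, 0.5 * np.cos(2.0 * np.pi * f * 0.25))
print(R_lag_035, 0.5 * np.cos(2.0 * np.pi * f * 0.35))
```

Shifting all three observation instants by the same amount leaves the estimates unchanged (up to sampling noise), which is exactly what wide-sense stationarity asserts.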
1.3.3 Properties of an Autocorrelation Function

FIGURE 1.1 Random Noise Process: sample functions x_1(t), x_2(t), ..., x_N(t) plotted versus time.

For a WSS process, the autocorrelation function reads as:

R_X(\tau) = E[X(t + \tau) X(t)] \quad \text{for all } t. \qquad (1.12)

• The mean square value of a random process can be obtained from R_X(\tau) by substituting \tau = 0 in equation 1.12:

R_X(0) = E[X^2(t)]. \qquad (1.13)

• The autocorrelation function R_X(\tau) is an even function of \tau; that is:

R_X(\tau) = R_X(-\tau). \qquad (1.14)

• The autocorrelation function R_X(\tau) has its maximum magnitude at \tau = 0; that is:

|R_X(\tau)| \le R_X(0). \qquad (1.15)
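These properties can be observed on an estimated autocorrelation. The sketch below (assuming numpy; the moving-average noise process is an illustrative construction, not from the text) estimates R_X(tau) by time averaging a long realization and checks equations 1.13 and 1.15:

```python
import numpy as np

rng = np.random.default_rng(1)

# A long realization of a zero-mean process: white Gaussian noise smoothed by
# a 5-tap moving average, so neighboring samples are correlated.
n = 100_000
x = np.convolve(rng.standard_normal(n), np.ones(5) / 5.0, mode="same")

def autocorr(x, lag):
    """Time-average estimate of R_X(lag) = E[X(t + lag) X(t)]."""
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[lag:] * x[:-lag])

# Equation 1.13: R_X(0) is the mean square value (about 0.2 = 5 / 25 here).
R0 = autocorr(x, 0)
print(R0)

# Equation 1.15: |R_X(tau)| <= R_X(0) at every lag.
Rs = [autocorr(x, k) for k in range(1, 10)]
print(all(abs(r) <= R0 for r in Rs))
```

Equation 1.14 (evenness) holds by construction for this estimator on real data, since x[t + k] * x[t] and x[t - k] * x[t] run over the same set of products.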
To prove this property, we consider the nonnegative quantity:

E[\{X(t + \tau) \pm X(t)\}^2] \ge 0. \qquad (1.16)

Expanding, we obtain:

E[X^2(t + \tau)] \pm 2 E[X(t + \tau) X(t)] + E[X^2(t)] \ge 0. \qquad (1.17)

2 R_X(0) \pm 2 R_X(\tau) \ge 0. \qquad (1.18)

-R_X(0) \le R_X(\tau) \le R_X(0). \qquad (1.19)

|R_X(\tau)| \le R_X(0). \qquad (1.20)

1.3.4 Cross-correlation Functions

We consider two random processes X(t) and Y(t) with autocorrelation functions R_X(t, u) and R_Y(t, u), respectively. The two cross-correlation functions of X(t) and Y(t) are defined as:

R_{XY}(t, u) = E[X(t) Y(u)], \qquad (1.21)

and

R_{YX}(t, u) = E[Y(t) X(u)]. \qquad (1.22)

The correlation properties of the two processes can be displayed in the correlation matrix:

R(t, u) = \begin{bmatrix} R_X(t, u) & R_{XY}(t, u) \\ R_{YX}(t, u) & R_Y(t, u) \end{bmatrix}, \qquad (1.23)

or, when both processes are jointly wide-sense stationary,

R(\tau) = \begin{bmatrix} R_X(\tau) & R_{XY}(\tau) \\ R_{YX}(\tau) & R_Y(\tau) \end{bmatrix}, \qquad (1.24)

where \tau = t - u. The cross-correlation function is not generally an even function of \tau, and it does not have a maximum value at the origin. However, it obeys a symmetry relationship:

R_{XY}(\tau) = R_{YX}(-\tau). \qquad (1.25)

1.3.5 Ergodic Processes

We consider the sample function x(t) of a stationary process X(t), with observation interval -T/2 \le t \le T/2. The time average will be as written here:

\mu_x(T) = \frac{1}{T} \int_{-T/2}^{T/2} x(t)\, dt. \qquad (1.26)

The time average \mu_x(T) is a random variable, and its value depends on the observation interval and on which particular sample function of the random process X(t) is selected for use in equation 1.26. Since X(t) is assumed to be stationary, the mean of the time average \mu_x(T) is given as:

E[\mu_x(T)] = \frac{1}{T} \int_{-T/2}^{T/2} E[x(t)]\, dt = \frac{1}{T} \int_{-T/2}^{T/2} \mu_x\, dt = \mu_x, \qquad (1.27)

where \mu_x is the mean of the process X(t). We say that the process X(t) is ergodic in the mean if two conditions are satisfied:

(1) The time average \mu_x(T) approaches the ensemble average \mu_x in the limit as the observation time T goes to infinity:

\lim_{T \to \infty} \mu_x(T) = \mu_x. \qquad (1.28)

(2) The variance of \mu_x(T), treated as a random variable, approaches zero in the limit as the observation interval T goes to infinity:

\lim_{T \to \infty} \text{var}[\mu_x(T)] = 0. \qquad (1.29)

Likewise, the process X(t) is ergodic in the autocorrelation function if the following two limiting conditions are fulfilled:

(1) \lim_{T \to \infty} R_x(\tau, T) = R_x(\tau). \qquad (1.30)

(2) \lim_{T \to \infty} \text{var}[R_x(\tau, T)] = 0. \qquad (1.31)

Example 1

A random process is described by x(t) = A \cos(2\pi f t + \phi), where \phi is a random variable uniformly distributed on (0, 2\pi). Find the autocorrelation function and the mean square value.

Solution:

R_X(t_1, t_2) = E[A \cos(2\pi f t_1 + \phi)\, A \cos(2\pi f t_2 + \phi)]
= A^2 E\left\{\tfrac{1}{2} \cos 2\pi f (t_1 - t_2) + \tfrac{1}{2} \cos[2\pi f (t_1 + t_2) + 2\phi]\right\}
= \frac{A^2}{2} \cos 2\pi f (t_1 - t_2).

R_X(\tau) = \frac{A^2}{2} \cos 2\pi f \tau. \qquad R_X(0) = \frac{A^2}{2}.
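The random-phase sinusoid of Example 1 is also ergodic, so the ensemble results can be reproduced from a single long realization by time averaging, in the sense of equations 1.26 through 1.31. A minimal sketch (assuming numpy; the amplitude, frequency, and lag are illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)

# One long realization of Example 1: x(t) = A cos(2*pi*f*t + phi),
# with phi drawn once, uniformly on (0, 2*pi).
A, f = 2.0, 10.0
dt = 1e-4
T = 50.0                                  # observation window in seconds
t = np.arange(0.0, T, dt)
phi = rng.uniform(0.0, 2.0 * np.pi)
x = A * np.cos(2.0 * np.pi * f * t + phi)

# Equation 1.28: the time average approaches the ensemble mean (0).
print(np.mean(x))

# Equation 1.30: the time-average autocorrelation at lag tau approaches
# R_X(tau) = (A^2 / 2) cos(2*pi*f*tau).
tau = 0.012
k = int(round(tau / dt))
R_hat = np.mean(x[k:] * x[:-k])
print(R_hat, (A ** 2 / 2.0) * np.cos(2.0 * np.pi * f * tau))

# Mean square value: R_X(0) = A^2 / 2 = 2.0 for A = 2.
print(np.mean(x ** 2))
```

Note that the time averages here use one realization (one draw of phi), whereas equations 1.8 and 1.9 average across the ensemble; agreement between the two is precisely what ergodicity asserts.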
1.4 Transmission of a Random Signal Through a Linear Time-Invariant Filter

Figure 1.2 shows a random input signal X(t) passing through a linear time-invariant filter with impulse response h(t). The output signal Y(t) can be related to the input through the impulse response as:

Y(t) = \int_{-\infty}^{\infty} h(\tau_1) X(t - \tau_1)\, d\tau_1. \qquad (1.32)

FIGURE 1.2 Random Signal Through a Linear Time-Invariant Filter: the input X(t) drives a filter with impulse response h(t), producing the output Y(t).

The mean of Y(t) is as follows:

\mu_Y(t) = E[Y(t)] = E\left[\int_{-\infty}^{\infty} h(\tau_1) X(t - \tau_1)\, d\tau_1\right]. \qquad (1.33)

Provided that the expectation E[X(t)] is finite for all t and the system is stable, we may interchange the order of expectation and integration in equation 1.33, with the following result:

\mu_Y(t) = \int_{-\infty}^{\infty} h(\tau_1)\, E[X(t - \tau_1)]\, d\tau_1 = \int_{-\infty}^{\infty} h(\tau_1)\, \mu_X(t - \tau_1)\, d\tau_1. \qquad (1.34)

For a stationary process X(t), the mean \mu_X(t) is a constant \mu_X, so we may simplify equation 1.34 as:

\mu_Y = \mu_X \int_{-\infty}^{\infty} h(\tau_1)\, d\tau_1 = \mu_X\, H(0), \qquad (1.35)

where H(0) is the zero-frequency (dc) response of the system. Equation 1.35 states that the mean of the random process Y(t) produced at the output of a linear time-invariant system in response to X(t) acting as the input process is equal to the mean of X(t) multiplied by the dc response of the system, which is intuitively satisfying.

The autocorrelation function of the output random process Y(t) is as follows:

R_Y(t, u) = \int_{-\infty}^{\infty} h(\tau_1)\, d\tau_1 \int_{-\infty}^{\infty} h(\tau_2)\, d\tau_2\, E[X(t - \tau_1) X(u - \tau_2)]. \qquad (1.36)

Assuming that the mean square value E[X^2(t)] is finite for all t and the system is stable, we obtain:

R_Y(t, u) = \int_{-\infty}^{\infty} h(\tau_1)\, d\tau_1 \int_{-\infty}^{\infty} h(\tau_2)\, d\tau_2\, R_X(t - \tau_1, u - \tau_2). \qquad (1.37)

When the input X(t) is a stationary process, the autocorrelation function of X(t) is only a function of the difference between the observation times t - \tau_1 and u - \tau_2. Thus, with \tau = t - u, we have:

R_Y(\tau) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\, d\tau_1\, d\tau_2. \qquad (1.38)

R_Y(0) = E[Y^2(t)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} h(\tau_1)\, h(\tau_2)\, R_X(\tau_2 - \tau_1)\, d\tau_1\, d\tau_2, \qquad (1.39)

and R_Y(0) is a constant.
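Equation 1.35 is easy to verify numerically. The sketch below (assuming numpy; the input statistics and the filter taps are illustrative choices, not from the text) passes a stationary noise sequence with mean mu_X through a discrete FIR stand-in for h(t) and compares the output mean with mu_X * H(0):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stationary input: white Gaussian noise riding on a nonzero mean mu_X.
mu_X = 1.5
x = mu_X + rng.standard_normal(500_000)

# Discrete FIR filter standing in for h(t); its dc response H(0) is the tap sum.
h = np.array([0.5, 0.3, 0.2, 0.1])
H0 = float(h.sum())                   # H(0) = 1.1

# Discrete counterpart of equation 1.32: convolution of input with h.
y = np.convolve(x, h, mode="valid")

# Equation 1.35: mu_Y = mu_X * H(0) = 1.65 for these values.
print(np.mean(y), mu_X * H0)
```

The same experiment with a zero-sum tap set (so H(0) = 0) would drive the output mean to zero regardless of mu_X, which is the dc-blocking behavior equation 1.35 predicts.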
1.5 Power Spectral Density

Let H(f) denote the frequency response of the system; then:

h(\tau_1) = \int_{-\infty}^{\infty} H(f)\, e^{j 2\pi f \tau_1}\, df. \qquad (1.40)

Substituting equation 1.40 into equation 1.39 gives:

E[Y^2(t)] = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(f)\, e^{j 2\pi f \tau_1}\, h(\tau_2)\, R_X(\tau_2 - \tau_1)\, df\, d\tau_1\, d\tau_2
= \int_{-\infty}^{\infty} H(f)\, df \int_{-\infty}^{\infty} h(\tau_2)\, d\tau_2 \int_{-\infty}^{\infty} R_X(\tau_2 - \tau_1)\, e^{j 2\pi f \tau_1}\, d\tau_1. \qquad (1.41)

Let \tau = \tau_2 - \tau_1, and the following results:

E[Y^2(t)] = \int_{-\infty}^{\infty} H(f)\, df \int_{-\infty}^{\infty} h(\tau_2)\, e^{j 2\pi f \tau_2}\, d\tau_2 \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2\pi f \tau}\, d\tau
= \int_{-\infty}^{\infty} |H(f)|^2\, df \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2\pi f \tau}\, d\tau, \qquad (1.42)

where |H(f)| is the magnitude response of the filter. We define:

S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2\pi f \tau}\, d\tau.

S_X(f) is called the power spectral density (psd), or power spectrum, of the stationary process X(t). Now we can rewrite equation 1.42 as:

E[Y^2(t)] = \int_{-\infty}^{\infty} |H(f)|^2\, S_X(f)\, df. \qquad (1.43)

The mean square value of the output of a stable linear time-invariant filter in response to a stationary process is equal to the integral over all frequencies of the power spectral density of the input process multiplied by the squared magnitude response of the filter.

1.5.1 Properties of psd

We use the following pair of definitions:

S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2\pi f \tau}\, d\tau,

and

R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\, e^{j 2\pi f \tau}\, df.

Using the above expressions, we derive the following properties:

• Property 1: For f = 0, we get:

S_X(0) = \int_{-\infty}^{\infty} R_X(\tau)\, d\tau. \qquad (1.44)

• Property 2: The mean square value of a stationary process equals the total area under the graph of the psd:

R_X(0) = E[X^2(t)] = \int_{-\infty}^{\infty} S_X(f)\, df. \qquad (1.45)

• Property 3: The psd of a stationary process is always nonnegative:

S_X(f) \ge 0 \quad \text{for all } f. \qquad (1.46)

• Property 4: The psd of a real-valued random process is an even function of frequency:

S_X(-f) = S_X(f). \qquad (1.47)

To show this, write:

S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{j 2\pi f \tau}\, d\tau.

Substituting \tau \to -\tau and using R_X(\tau) = R_X(-\tau), we obtain:

S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\, e^{-j 2\pi f \tau}\, d\tau = S_X(f).

• Property 5: The psd, appropriately normalized, has the properties usually associated with a probability density function:

p_X(f) = \frac{S_X(f)}{\int_{-\infty}^{\infty} S_X(f)\, df} \ge 0 \quad \text{for all } f. \qquad (1.48)

1.6 Relation Between the psd of Input Versus the psd of Output

The psd of the output process Y(t) is:

S_Y(f) = \int_{-\infty}^{\infty} R_Y(\tau)\, e^{-j 2\pi f \tau}\, d\tau.

Substituting equation 1.38 for R_Y(\tau) and changing variables with \tau_0 = \tau + \tau_2 - \tau_1 gives:

S_Y(f) = H(f)\, H^*(f)\, S_X(f) = |H(f)|^2\, S_X(f). \qquad (1.49)

The psd of the output process Y(t) equals the power spectral density of the input X(t) multiplied by the squared magnitude response of the filter.

Example 2

Binary numbers 1 and 0 are represented by pulses of amplitude A and -A volts, respectively, each of duration T seconds. The pulses are not synchronized, so the starting time t_d of the first complete pulse for positive time is equally likely to lie anywhere between 0 and T seconds (see Figures 1.3 and 1.4). That is, t_d is the sample value of a uniformly distributed random variable T_d with its probability density function defined by:

f_{T_d}(t_d) = \frac{1}{T}, \quad 0 \le t_d \le T,
= 0, \quad \text{elsewhere}.

FIGURE 1.3 Pulse of Amplitude A: the random binary wave X(t).

FIGURE 1.4 Shifted Pulse of Amplitude A: the delayed wave X(t - \tau_1).

The autocorrelation function of the random binary wave is:

R_X(\tau) = A^2 \left(1 - \frac{|\tau|}{T}\right), \quad |\tau| < T,
= 0, \quad |\tau| \ge T.

R_X(0) = \text{total average power}.

FIGURE 1.5 Autocorrelation Function: a triangle of height A^2 over -T < \tau < T.

Taking the Fourier transform of R_X(\tau), the psd is:

S_X(f) = \int_{-T}^{T} A^2 \left(1 - \frac{|\tau|}{T}\right) e^{-j 2\pi f \tau}\, d\tau = A^2 T \left(\frac{\sin \pi f T}{\pi f T}\right)^2 = A^2 T\, \text{sinc}^2(f T),

and

\int_{-\infty}^{\infty} S_X(f)\, df = \text{total average power}.

These equations show that for a random binary wave in which the binary numbers 1 and 0 are represented by pulses g(t) and -g(t), respectively, the psd S_X(f) is equal to the energy spectral density \psi_g(f) of the symbol shaping pulse g(t) divided by the symbol duration T:

S_X(f) = \frac{\psi_g(f)}{T}.

Note that the autocorrelation function is shown in Figure 1.5.
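The closed-form result of Example 2 can be checked numerically against the Wiener-Khinchine definition of the psd. The sketch below (assuming numpy; the amplitude and bit duration are illustrative values) integrates the triangular autocorrelation directly and compares the result with A^2 T sinc^2(fT):

```python
import numpy as np

# Example 2 check: the Fourier transform of the triangular autocorrelation
# R_X(tau) = A^2 (1 - |tau|/T) for |tau| < T should equal A^2 * T * sinc^2(f*T).
A, T = 2.0, 1e-3            # amplitude in volts, bit duration in seconds

def S_X(f, n=20001):
    """Numerically evaluate S_X(f) = integral of R_X(tau) e^{-j 2 pi f tau} dtau."""
    tau = np.linspace(-T, T, n)
    dtau = tau[1] - tau[0]
    R = A ** 2 * (1.0 - np.abs(tau) / T)
    # R vanishes at the endpoints, so a plain Riemann sum is already trapezoidal.
    return float((np.sum(R * np.exp(-2j * np.pi * f * tau)) * dtau).real)

for f in (0.0, 250.0, 700.0):
    closed_form = A ** 2 * T * np.sinc(f * T) ** 2   # np.sinc(x) = sin(pi x)/(pi x)
    print(f, S_X(f), closed_form)
```

At f = 0 both expressions reduce to A^2 T, the area under the triangle (equation 1.44), and integrating S_X(f) over all frequencies returns R_X(0) = A^2, the total average power (equation 1.45).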