also from Digital Communications 4th Ed., J. G. Proakis, McGraw-Hill Int. Ed. 2001
CONTENT
Signals and channels (systems) that satisfy the condition that their bandwidth is much
smaller than the carrier frequency are termed narrowband band-pass signals and
channels (systems).
With no loss of generality and for mathematical convenience, it is desirable to reduce all
band-pass signals and channels to equivalent low-pass signals and channels.
Suppose that a real-valued signal s(t) has a frequency content concentrated in a narrow
band of frequencies in the vicinity of a frequency fc, as shown in Figure 4.1-1.
Figure 4.1-1 Spectrum |S(f)| of a band-pass signal, concentrated around ±fc.
Our objective is to develop a mathematical representation of such signals. First, we
consider a signal that contains only positive frequencies in s(t). Such a signal may be
expressed as
S+(f) = 2u(f) S(f)    (4.1-1)

where S(f) is the Fourier transform of s(t) and u(f) is the unit step function. The
equivalent time-domain expression is

s+(t) = ∫ S+(f) e^{j2πft} df = F⁻¹[2u(f)] * F⁻¹[S(f)]    (4.1-2)
The signal s+(t) is called the analytic signal or pre-envelope of s(t). We note that
F⁻¹[S(f)] = s(t)
F⁻¹[2u(f)] = δ(t) + j/(πt)    (4.1-3)

Hence,

s+(t) = [δ(t) + j/(πt)] * s(t) = s(t) + j (1/πt) * s(t)    (4.1-4)
We define

ŝ(t) = (1/πt) * s(t) = (1/π) ∫ s(τ)/(t − τ) dτ    (4.1-5)
The signal ŝ(t) may be viewed as the output of a filter with impulse response

h(t) = 1/(πt),  −∞ < t < ∞    (4.1-6)
when excited by the input signal s(t). Such a filter is called a Hilbert transformer. The
frequency response of this filter is simply

H(f) = ∫ h(t) e^{−j2πft} dt = { −j (f > 0);  0 (f = 0);  j (f < 0) }    (4.1-7)
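Equation 4.1-1 lends itself to a direct numerical check: zeroing the negative-frequency half of the DFT and doubling the positive half yields the pre-envelope, whose imaginary part is the Hilbert transform. A minimal NumPy sketch; the tone frequency and length are arbitrary illustrative choices:

```python
import numpy as np

def analytic_signal(s):
    """Pre-envelope via Eq. 4.1-1: S+(f) = 2 u(f) S(f), evaluated with the DFT."""
    N = len(s)
    S = np.fft.fft(s)
    w = np.zeros(N)
    w[0] = 1.0                    # DC bin kept once
    w[1:(N + 1) // 2] = 2.0       # positive-frequency bins doubled
    if N % 2 == 0:
        w[N // 2] = 1.0           # Nyquist bin is shared between +f and -f
    return np.fft.ifft(S * w)

t = np.arange(1024) / 1024.0
s = np.cos(2 * np.pi * 64 * t)    # 64 full cycles fit the window exactly
sp = analytic_signal(s)
# Re sp recovers s(t); Im sp is the Hilbert transform (a sine, per Eq. 4.1-4)
```

For the cosine input, the real part reproduces s(t) and the imaginary part is the corresponding sine, as Eq. 4.1-4 predicts.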
The analytical signal s+(t) is a band-pass signal. We may obtain an equivalent low-pass
representation by performing a frequency translation of S+(f). Thus we define Sl(f)
(l=low) as
Sl(f) = S+(f + fc)    (4.1-8)

or, equivalently,

sl(t) = s+(t) e^{−j2πfc t}    (4.1-9)

so that

s(t) + jŝ(t) = sl(t) e^{j2πfc t}    (4.1-10)

In general, sl(t) is complex-valued and may be expressed as

sl(t) = x(t) + jy(t)    (4.1-11)

Substituting Eq. 4.1-11 into Eq. 4.1-10 and equating real parts yields

s(t) = x(t) cos 2πfc t − y(t) sin 2πfc t    (4.1-12)
The expression (4.1-12) is the desired form for the representation of a band-pass signal.
The low-frequency signal components x(t) and y(t) may be viewed as amplitude
modulations impressed on the carrier components cos 2πfc t and sin 2πfc t. Since these
carrier components are in phase quadrature, x(t) and y(t) are called the quadrature
components of the band-pass signal s(t). The signal may also be written compactly as
s(t) = Re[sl(t) e^{j2πfc t}] (4.1-14), where sl(t) is called the complex envelope of s(t).
The equivalent low-pass signal may also be written in polar form as

sl(t) = a(t) e^{jθ(t)}    (4.1-15)

where

a(t) = √(x²(t) + y²(t))    (4.1-16)
θ(t) = tan⁻¹[y(t)/x(t)]    (4.1-17)

Then

s(t) = a(t) cos[2πfc t + θ(t)]    (4.1-18)

where a(t) is called the envelope of s(t) and θ(t) is called its phase. Therefore, 4.1-12,
4.1-14, and 4.1-18 are equivalent representations of band-pass signals.
The Fourier transform of sl(t) yields the basic relationship between the spectrum of the
real band-pass signal, S(f), and the spectrum of the equivalent low-pass signal, Sl(f):

S(f) = ½[Sl(f − fc) + Sl*(−f − fc)]    (4.1-21)
The energy in the signal s(t) is defined as

E = ∫ s²(t) dt = ∫ {Re[sl(t) e^{j2πfc t}]}² dt    (4.1-22)

When the identity {Re[ξ]}² = ½|ξ|² + ½Re[ξ²] (Eq. 4.1-20) is used in Eq. 4.1-22, we
obtain the following result:

E = ½ ∫ |sl(t)|² dt + ½ ∫ |sl(t)|² cos[4πfc t + 2θ(t)] dt    (4.1-23)
Consider the second integral in Eq. 4.1-23. Since the signal s(t) is narrow-band, the real
envelope a(t) ≡ |sl(t)| or, equivalently, a²(t) varies slowly relative to the rapid variations
exhibited by the cosine function. A graphical illustration of the integrand in the second
integral of Eq. 4.1-23 is shown in Fig. 4.1-2. The value of the integral is just the net area
under the cosine function modulated by a²(t). Since the modulating waveform a²(t) varies
slowly relative to the cosine function, the net area contributed by the second integral is
very small relative to the value of the first integral in Eq. 4.1-23 and, hence, it can be
neglected. Thus, for all practical purposes, the energy in the band-pass signal s(t),
expressed in terms of the equivalent low-pass signal sl(t), is

E = ½ ∫ |sl(t)|² dt    (4.1-24)

where |sl(t)| is just the envelope a(t) of s(t).
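The approximation of Eq. 4.1-24 can be verified numerically for a narrowband signal. In this sketch the envelope, phase, carrier frequency, and sampling rate are arbitrary illustrative choices:

```python
import numpy as np

fs, T, fc = 8192.0, 1.0, 400.0                 # sampling rate, duration, carrier
t = np.arange(0, T, 1 / fs)
a = np.exp(-((t - 0.5) ** 2) / 0.01)           # slowly varying envelope a(t)
theta = 0.3 * np.sin(2 * np.pi * 2 * t)        # slowly varying phase theta(t)
sl = a * np.exp(1j * theta)                    # equivalent low-pass signal
s = np.real(sl * np.exp(2j * np.pi * fc * t))  # band-pass signal
E_bp = np.sum(s ** 2) / fs                     # energy of s(t)
E_lp = 0.5 * np.sum(np.abs(sl) ** 2) / fs      # (1/2) * energy of sl(t)
```

The two energies agree to within the neglected double-frequency term, which is tiny because a²(t) varies slowly compared with cos[4πfc t + 2θ(t)].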
Example:
4.1.4
Representation of Band-Pass Stationary Stochastic Processes
Suppose that n(t) is a sample function of a wide-sense stationary stochastic process with
zero mean and power spectral density Φnn(f). The power spectral density is assumed to
be zero outside an interval of frequencies centered around fc. The stochastic process n(t)
is said to be a narrowband band-pass process if the width of the spectral density is much
smaller than fc. Under this condition, a sample function of n(t) can be represented by any
of the three equivalent forms

n(t) = a(t) cos[2πfc t + θ(t)]    (4.1-37)
     = x(t) cos 2πfc t − y(t) sin 2πfc t    (4.1-38)
     = Re[z(t) e^{j2πfc t}]    (4.1-39)

where a(t) is the envelope and θ(t) is the phase of the real-valued signal, x(t) and y(t) are
the quadrature components of n(t), and z(t) is called the complex envelope of n(t).
Let us consider 4.1-38 in more detail. First, we observe that if n(t) is zero mean, then x(t)
and y(t) must also have zero mean values. In addition, the stationarity of n(t) implies that
the autocorrelation and cross-correlation functions of x(t) and y(t) satisfy the following
properties:
φxx(τ) = φyy(τ)    (4.1-40)
φxy(τ) = −φyx(τ)    (4.1-41)

The autocorrelation function of n(t) is expressed in terms of the autocorrelation function
φzz(τ) of the complex envelope z(t) = x(t) + jy(t) as

φnn(τ) = Re[φzz(τ) e^{j2πfc τ}]    (4.1-50)

The power density spectrum of the stochastic process n(t) is the Fourier transform of
φnn(τ):

Φnn(f) = ∫ {Re[φzz(τ) e^{j2πfc τ}]} e^{−j2πfτ} dτ = ½[Φzz(f − fc) + Φzz(−f − fc)]    (4.1-51)
Representation of white noise
White noise is a stochastic process that is defined to have a flat (constant) power spectral
density over the entire frequency range. This type of noise cannot be expressed in terms
of quadrature components, as a result of its wide-band character.
In problems concerned with the demodulation of narrowband signals in noise, it is
mathematically convenient to model the additive noise process as white and to represent
the noise in terms of quadrature components. This can be accomplished by postulating
that the signals and noise at the receiving terminal have passed through an ideal band-
pass filter, having a pass-band that includes the spectrum of the signals but is much
wider. Such a filter will introduce negligible, if any, distortion on the signal, but it does
eliminate the noise frequency components outside of the passband.
The noise resulting from passing the white noise process through a spectrally flat (ideal)
band-pass filter is termed band-pass white noise and has power spectral density depicted
in Figure 4.1-3
After filtering the only remaining noise power is that contained within the filter
bandwidth (B)
Fig. 4.1-3 Power spectral density of band-pass white noise.

The equivalent low-pass noise z(t) has the power spectral density

Φzz(f) = { N0  (|f| ≤ ½B);  0  (|f| > ½B) }    (4.1-56)

and its autocorrelation function is

φzz(τ) = N0 sin(πBτ)/(πτ)    (4.1-57)

The limiting form of φzz(τ) as B approaches infinity is

φzz(τ) = N0 δ(τ)    (4.1-58)
4.2
SIGNAL SPACE REPRESENTATION
In this section, we demonstrate that signals have characteristics that are similar to vectors
and develop a vector representation for signal waveforms.
4.2.1
Vector Space Concepts
The inner product of two n-dimensional vectors v1 = [v11 v12 … v1n] and
v2 = [v21 v22 … v2n] is defined as

v1 · v2 = Σ_{i=1}^{n} v1i v2i    (4.2-2)

Two vectors are orthogonal if

v1 · v2 = 0    (4.2-3)

The norm of a vector is

||v|| = (Σ_{i=1}^{n} vi²)^{1/2}    (4.2-4)
A set of m vectors is said to be orthonormal if the vectors are orthogonal and each vector has a
unit norm. A set of m vectors is said to be linearly independent if no one vector can be
represented as a linear combination of the remaining vectors.
The norm square of the sum of two vectors satisfies

||v1 + v2||² = ||v1||² + ||v2||² + 2 v1 · v2    (4.2-7)

If v1 and v2 are orthogonal, then

||v1 + v2||² = ||v1||² + ||v2||²    (4.2-8)

This is the Pythagorean relation for two orthogonal n-dimensional vectors.
Finally, let us review the Gram-Schmidt procedure for constructing a set of orthonormal
vectors from a set of n-dimensional vectors vi, 1 ≤ i ≤ m. We begin by arbitrarily
selecting a vector from the set, say v1. By normalizing its length, we obtain the first
vector, say

u1 = v1/||v1||    (4.2-11)

Next, we may select v2 and, first, subtract the projection of v2 onto u1, obtaining
u2′ = v2 − (v2 · u1) u1 (4.2-12). Then we normalize u2′ to unit length:

u2 = u2′/||u2′||    (4.2-13)

The procedure continues by selecting v3 and subtracting the projections of v3 onto u1
and u2, which gives u3′ = v3 − (v3 · u1) u1 − (v3 · u2) u2 (4.2-14). Then

u3 = u3′/||u3′||    (4.2-15)
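The steps of Eqs. 4.2-11 to 4.2-15 can be collected into a short routine. This is a sketch (classical rather than numerically robust modified Gram-Schmidt), and the sample vectors are arbitrary:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt (Eqs. 4.2-11 to 4.2-15); linearly dependent
    vectors leave a ~zero residual and are dropped."""
    basis = []
    for v in vectors:
        u = np.asarray(v, dtype=float).copy()
        for q in basis:
            u = u - (u @ q) * q       # subtract projection onto earlier u_i
        n = np.linalg.norm(u)
        if n > tol:
            basis.append(u / n)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([2.0, 1.0, 1.0])]      # third = first + second (dependent)
basis = gram_schmidt(vs)
```

The dependent third vector is discarded, so only two orthonormal vectors survive, illustrating that the number of basis vectors equals the dimensionality of the span.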
These concepts extend to signal waveforms. The inner product of two (generally
complex-valued) signals x1(t) and x2(t) defined on the interval [a, b] is

⟨x1(t), x2(t)⟩ = ∫_a^b x1(t) x2*(t) dt    (4.2-16)

The norm of a signal is

||x(t)|| = (∫_a^b |x(t)|² dt)^{1/2}    (4.2-17)

The triangle inequality is

||x1(t) + x2(t)|| ≤ ||x1(t)|| + ||x2(t)||    (4.2-18)

and the Cauchy-Schwarz inequality is

|∫_a^b x1(t) x2*(t) dt| ≤ (∫_a^b |x1(t)|² dt)^{1/2} (∫_a^b |x2(t)|² dt)^{1/2}    (4.2-19)
4.2.3
Orthogonal Expansions of Signals
In this section, we develop a vector representation for signal waveforms, and, thus, we
demonstrate an equivalence between a signal waveform and its vector representation.
Suppose that s(t) is a deterministic, real-valued signal with finite energy

Es = ∫ [s(t)]² dt    (4.2-20)

Furthermore, suppose that there exists a set of functions {fn(t), n = 1, 2, …, K} that are
orthonormal. We may approximate the signal s(t) by a weighted linear combination of
these functions, i.e.,

ŝ(t) = Σ_{k=1}^{K} sk fk(t)    (4.2-22)
Let us select the coefficients {sk} so as to minimize the energy Ee of the approximation
error,

Ee = ∫ [s(t) − ŝ(t)]² dt = ∫ [s(t) − Σ_{k=1}^{K} sk fk(t)]² dt    (4.2-24)

The optimum coefficients in the series expansion of s(t) may be found by differentiating
Eq. 4.2-24 with respect to each of the coefficients {sk} and setting the first derivatives to
zero; the result is sk = ∫ s(t) fk(t) dt.
Under the condition that Emin = 0, we may express s(t) as

s(t) = Σ_{k=1}^{K} sk fk(t)    (4.2-29)
When every finite energy signal can be represented by a series expansion of the form
(4.2-29) for which Emin = 0, the set of orthonormal functions {fn(t)} is said to be complete.
Example 4.2-1. Trigonometric Fourier series
Consider a finite energy signal s(t) that is zero everywhere except in the range 0 t T
and has a finite number of discontinuities in this interval. Its periodic extension can be
represented in a Fourier series as
s(t) = a0/2 + Σ_{k=1}^{∞} [ak cos(2πkt/T) + bk sin(2πkt/T)]    (4.2-30)

where the coefficients {ak, bk} that minimize the mean square error are given by

ak = (2/T) ∫_0^T s(t) cos(2πkt/T) dt,  k ≥ 0
bk = (2/T) ∫_0^T s(t) sin(2πkt/T) dt,  k ≥ 1    (4.2-31)

The set of trigonometric functions {cos(2πkt/T), sin(2πkt/T)} is complete, and, hence,
the series expansion results in zero mean square error.
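The coefficient formulas of Eq. 4.2-31 can be checked numerically on a signal whose expansion is known in advance; the test signal below is an arbitrary choice:

```python
import numpy as np

T, N = 1.0, 4096
t = np.arange(N) * T / N
# a signal with known coefficients: a0 = 2 (mean 1), a1 = 1, b2 = 0.5
s = 1.0 + np.cos(2 * np.pi * t / T) + 0.5 * np.sin(4 * np.pi * t / T)

def a_k(k):
    # Eq. 4.2-31, rectangle-rule approximation of the integral
    return (2 / T) * np.sum(s * np.cos(2 * np.pi * k * t / T)) * (T / N)

def b_k(k):
    return (2 / T) * np.sum(s * np.sin(2 * np.pi * k * t / T)) * (T / N)
```

Because the sampling covers whole periods, the discrete sums reproduce the orthogonality of the trigonometric functions essentially exactly.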
Gram-Schmidt procedure
We have a set of finite energy signal waveforms {si(t), i = 1, 2, …, M} and we wish to
construct a set of orthonormal waveforms. The first orthonormal waveform is simply
constructed as

f1(t) = s1(t)/√E1    (4.2-32)

where E1 is the energy of s1(t). The second waveform is constructed from s2(t) by first
computing the projection of s2(t) onto f1(t), which is

c12 = ∫ s2(t) f1(t) dt    (4.2-33)

Then

f2′(t) = s2(t) − c12 f1(t)    (4.2-34)

and, after normalizing f2′(t) by its energy E2,

f2(t) = f2′(t)/√E2    (4.2-35)

In general,

fk(t) = fk′(t)/√Ek    (4.2-36)

where

fk′(t) = sk(t) − Σ_{i=1}^{k−1} cik fi(t)    (4.2-37)

and

cik = ∫ sk(t) fi(t) dt    (4.2-38)
Thus, the orthogonalization process is continued until all the M signal waveforms {si(t)}
have been exhausted and N M orthonormal waveforms have been constructed. The
dimensionality N of the signal space will be equal to M if all the signal waveforms are
linearly independent, i.e., none of the signal waveform is a linear combination of the
other signal waveforms.
Once we have constructed the set of orthonormal waveforms {fn(t)}, we can express the
M signals {sn(t)} as linear combinations of the {fn(t)}.
sk(t) = Σ_{n=1}^{N} skn fn(t),  k = 1, 2, …, M    (4.2-39)

and

Ek = ∫ [sk(t)]² dt = Σ_{n=1}^{N} skn² = ||sk||²    (4.2-40)

Based on the expression in Equation 4.2-39, each signal may be represented by the vector

sk = [sk1 sk2 … skN]    (4.2-41)
Let us apply the Gram-Schmidt procedure to the set of four waveforms illustrated in
Fig.4.2-1a.
s1 (t )
The waveform s1 (t ) has energy E1 = 2 , so that f1 (t ) = . Next, we observe that
2
s (t )
c12 = 0 ; hence s2 (t ) and f1 (t ) are orthogonal. Therefore f 2 (t ) = 2 . To obtain f 3 (t ) ,
2
we compute c13 and c23 , which are c13 = 2 and c23 = 0 . Thus,
f 3' (t ) = s3 (t ) 2 f1 (t ) = {0;1(;otherwise
( 2 t 3)
)
Since f 3' (t ) has unit energy, it follows that f 3 (t ) = f 3' (t ) . In determining f 4 (t ) , we find
that c14 = 2 , c24 = 0 , and c34 = 1 . Hence
f 4' (t ) = s4 (t ) + 2 f1 (t ) f3 (t ) = 0
Let us obtain the vector representation of the four signals shown in Fig. 4.2-1a by using
the orthonormal set of functions in Fig. 4.2-1b. Since the dimensionality of the signal
space is N = 3, each signal is described by three components. The signal s1(t) is
characterized by the vector s1 = (√2, 0, 0). Similarly, the signals s2(t), s3(t), and s4(t) are
characterized by the vectors s2 = (0, √2, 0), s3 = (√2, 0, 1), and s4 = (−√2, 0, 1),
respectively. These vectors are shown in Fig. 4.2-2. Their lengths are ||s1|| = √2,
||s2|| = √2, ||s3|| = √3, and ||s4|| = √3, and the corresponding signal energies are
Ek = ||sk||², k = 1, 2, 3, 4.
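The example can be reproduced in discrete time. The waveforms of Fig. 4.2-1a are not shown in these notes, so the code assumes rectangular pulses consistent with the computed vectors: s1 = 1 on [0, 2), s2 = +1 on [0, 1) and −1 on [1, 2), s3 = 1 on [0, 3), and s4 = −1 on [0, 2), +1 on [2, 3). This is an assumed reconstruction, not the figure itself:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 3, dt)

def rect(lo, hi):
    return ((t >= lo) & (t < hi)).astype(float)

# assumed waveforms of Fig. 4.2-1a (consistent with the vectors in the text)
s1 = rect(0, 2)
s2 = rect(0, 1) - rect(1, 2)
s3 = rect(0, 3)
s4 = -rect(0, 2) + rect(2, 3)

def inner(x, y):
    return float(np.sum(x * y) * dt)

f1 = s1 / np.sqrt(inner(s1, s1))                 # E1 = 2
c12 = inner(s2, f1)                              # ~ 0: s2 orthogonal to f1
f2 = s2 / np.sqrt(inner(s2, s2))
c13, c23 = inner(s3, f1), inner(s3, f2)          # ~ sqrt(2), ~ 0
f3p = s3 - c13 * f1 - c23 * f2
f3 = f3p / np.sqrt(inner(f3p, f3p))
s4_vec = [inner(s4, f) for f in (f1, f2, f3)]    # ~ (-sqrt(2), 0, 1)
```

The computed projections match the vectors s4 = (−√2, 0, 1) etc. to within the discretization error of the rectangle-rule integrals.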
We have demonstrated that a set of M finite energy waveforms {sn(t)} can be represented
by a weighted linear combination of orthonormal functions {fn(t)} of dimensionality
N ≤ M. The functions {fn(t)} are obtained by applying the Gram-Schmidt
orthogonalization procedure to {sn(t)}. It should be emphasized, however, that the
functions {fn(t)} obtained from the Gram-Schmidt procedure are not unique. If we alter the
order in which the orthogonalization of the signals {sn(t)} is performed, the orthonormal
waveforms will be different and the corresponding vector representation of the signals
{sn(t)} will depend on the choice of the orthonormal functions {fn(t)}. Nevertheless, the
vectors {sn} will retain their geometrical configuration and their lengths will be invariant
to the choice of orthonormal functions {fn(t)}.
The orthogonal expansions described above were developed for real-valued signal
waveforms.
Finally, let us consider the case in which the signal waveforms are band-pass and
represented as

sm(t) = Re[slm(t) e^{j2πfc t}],  m = 1, 2, …, M    (4.2-42)

where slm(t) denotes the equivalent low-pass signals. The signal energy may be
expressed in terms of slm(t) as

Em = ∫ sm²(t) dt = ½ ∫ |slm(t)|² dt    (4.2-43)
The similarity between any pair of signal waveforms, say sm(t) and sk(t), is measured by
the normalized cross correlation

(1/√(Em Ek)) ∫ sm(t) sk(t) dt = Re[(1/(2√(Em Ek))) ∫ slm(t) slk*(t) dt]    (4.2-44)

We define the complex-valued cross-correlation coefficient ρkm as

ρkm = (1/(2√(Em Ek))) ∫ slm(t) slk*(t) dt    (4.2-45)

Then,

Re(ρkm) = (1/√(Em Ek)) ∫ sm(t) sk(t) dt    (4.2-46)

or, equivalently,

Re(ρkm) = sm · sk / (||sm|| ||sk||) = sm · sk / √(Em Ek)    (4.2-47)
The Euclidean distance between a pair of signals is

d_km^(e) = ||sm − sk|| = {∫ [sm(t) − sk(t)]² dt}^{1/2}
         = {Em + Ek − 2√(Em Ek) Re(ρkm)}^{1/2}    (4.2-48)

When Em = Ek = E for all m and k, this distance reduces to

d_km^(e) = {2E[1 − Re(ρkm)]}^{1/2}    (4.2-49)

Thus, the Euclidean distance is an alternative measure of the similarity (or dissimilarity)
of the set of signal waveforms or the corresponding signal vectors.
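Equations 4.2-47 and 4.2-48 are easy to exercise on two of the vectors from the preceding example (s1 and s3):

```python
import numpy as np

def cross_corr(sm, sk):
    # Eq. 4.2-47: Re(rho) from the vector representations
    return (sm @ sk) / (np.linalg.norm(sm) * np.linalg.norm(sk))

def euclid(sm, sk):
    # Eq. 4.2-48: Euclidean distance between signal vectors
    return np.linalg.norm(sm - sk)

s1 = np.array([np.sqrt(2.0), 0.0, 0.0])
s3 = np.array([np.sqrt(2.0), 0.0, 1.0])
rho = cross_corr(s1, s3)        # sqrt(2/3)
d = euclid(s1, s3)              # 1
Em, Ek = s1 @ s1, s3 @ s3       # 2 and 3
```

Substituting into the right-hand side of Eq. 4.2-48 gives {2 + 3 − 2·√6·√(2/3)}^{1/2} = 1, consistent with the direct distance computation.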
In the following section, we describe digitally modulated signals and make use of the
signal space representation for such signals. We shall observe that digitally modulated
signals, which are classified as linear, are conveniently expanded in terms of two
orthonormal basis functions of the form
f1(t) = √(2/T) cos 2πfc t
f2(t) = −√(2/T) sin 2πfc t    (4.2-50)

Hence, if slm(t) is expressed as slm(t) = xl(t) + jyl(t), it follows that sm(t) in Eq. 4.2-42
may be expressed as

sm(t) = xl(t) f1(t) + yl(t) f2(t)    (4.2-51)
4.4
Determine the autocorrelation function of the stochastic process
x(t) = A sin(2πfc t + θ), where fc is a constant and θ is a uniformly distributed phase, i.e.,

p(θ) = 1/(2π),  0 ≤ θ ≤ 2π
Sol:
4.8
Determine the correlation coefficients ρkm among the four signal waveforms {si(t)}
shown in Fig. 4.2-1, and the corresponding Euclidean distances.
Sol:
4.17
Carry out the Gram-Schmidt orthogonalization of the signals in Fig.4.2-1a in the order
s4(t), s3(t), s2(t), s1(t), and, thus, obtain a set of orthonormal functions. Then, determine
the vector representations of the signals and determine the signal energies.
Sol:
4.3
REPRESENTATION OF DIGITALLY MODULATED SIGNALS
Modulator with memory: the mapping from the digital sequence {an} to the waveforms
is performed under the constraint that a waveform transmitted in any time interval
depends on one or more previously transmitted waveforms.

Memoryless modulator: the mapping from the sequence {an} to the waveforms {sm(t)}
is performed without any constraint imposed by previously transmitted waveforms.

Linear modulator: satisfies the principle of superposition.

Non-linear modulator: the principle of superposition does not hold.
4.3.1
Memoryless Modulation Methods
In M-ary PAM, the signal waveforms may be expressed as

sm(t) = Re[Am g(t) e^{j2πfc t}] = Am g(t) cos 2πfc t,  m = 1, 2, …, M,  0 ≤ t ≤ T    (4.3-1)

where the amplitudes take the M discrete values Am = (2m − 1 − M)d, m = 1, 2, …, M
(4.3-2), and 2d is the distance between adjacent signal amplitudes. The waveform g(t) is
a real-valued signal pulse whose shape influences the spectrum of the transmitted signal.
The symbol rate for PAM is R/k. This is the rate at which changes occur in the amplitude
of the carrier to reflect the transmission of new information. The time interval Tb = 1/R is
called the bit interval and the time interval T = k/R is called the symbol interval.
The M PAM signals have energies
Em = ∫_0^T sm²(t) dt = ½ Am² ∫_0^T g²(t) dt = ½ Am² Eg    (4.3-3)
These signals are one-dimensional and may be expressed as

sm(t) = sm f(t)    (4.3-4)

where f(t) is the unit-energy basis function

f(t) = √(2/Eg) g(t) cos 2πfc t    (4.3-5)

and

sm = Am √(½Eg),  m = 1, 2, …, M    (4.3-6)
The corresponding signal space diagram for M=2, M=4, and M=8 are shown in Fig.4.3-1. Digital
PAM is also called amplitude-shift keying (ASK).
The Euclidean distance between any pair of signal points is

d_mn^(e) = √((sm − sn)²) = √(½Eg) |Am − An| = d√(2Eg) |m − n|    (4.3-7)

Hence, the distance between a pair of adjacent signal points, i.e., the minimum Euclidean
distance, is

d_min^(e) = d√(2Eg)    (4.3-8)
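A quick numerical check of Eqs. 4.3-6 to 4.3-8; M, d, and Eg below are arbitrary illustrative values:

```python
import numpy as np

# M-ary PAM signal points s_m = A_m * sqrt(Eg/2), with A_m = (2m - 1 - M)d
M, d, Eg = 8, 1.0, 2.0
A = np.array([2 * m - 1 - M for m in range(1, M + 1)]) * d   # -7d ... +7d
s = A * np.sqrt(Eg / 2)
dists = np.abs(np.diff(s))          # distances between adjacent points
```

Every adjacent-point distance equals d√(2Eg), so the uniform amplitude spacing carries over to the signal space.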
The carrier-modulated PAM signal represented by Eq. 4.3-1 is a double-sideband (DSB) signal
and requires twice the channel bandwidth of the equivalent low-pass signal for transmission.
The digital PAM signal is also appropriate for transmission over a channel that does not require
carrier modulation. In this case, the signal is called a baseband signal and is represented as

sm(t) = Am g(t),  m = 1, 2, …, M    (4.3-10)

If M = 2, then the signals are called antipodal signals and have the special property

s1(t) = −s2(t)
sm(t) = Re[g(t) e^{j2π(m−1)/M} e^{j2πfc t}]
      = g(t) cos[2πfc t + (2π/M)(m − 1)]    (4.3-11)
      = g(t) cos[(2π/M)(m − 1)] cos 2πfc t − g(t) sin[(2π/M)(m − 1)] sin 2πfc t,
        m = 1, 2, …, M

where g(t) is the signal pulse shape and θm = 2π(m − 1)/M, m = 1, 2, …, M, are the M
possible phases of the carrier.
These signal waveforms have equal energy,

E = ∫_0^T sm²(t) dt = ½ ∫_0^T g²(t) dt = ½ Eg    (4.3-12)
sm (t ) = sm1 f1 (t ) + sm 2 f 2 (t ) (4.3-13)
where
f1(t) = √(2/Eg) g(t) cos 2πfc t    (4.3-14)
f2(t) = −√(2/Eg) g(t) sin 2πfc t    (4.3-15)

and the two-dimensional signal vectors are

sm = [√(Eg/2) cos((2π/M)(m − 1)),  √(Eg/2) sin((2π/M)(m − 1))]    (4.3-16)

If M = 2,

s1 = (√(Eg/2), 0),  s2 = (−√(Eg/2), 0)
Signal space diagrams for M=2,4,8 are shown in Fig.4.3-3. We note that M=2 corresponds to
one-dimensional signals, which are identical to binary PAM signals.
The Euclidean distance between a pair of signal points is

d_mn^(e) = ||sm − sn|| = {Eg[1 − cos((2π/M)(m − n))]}^{1/2}    (4.3-17)

The minimum Euclidean distance corresponds to the case in which |m − n| = 1, i.e.,
adjacent signal phases. In this case,

d_min^(e) = √(Eg[1 − cos(2π/M)])    (4.3-18)
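The same kind of check for M-PSK: the adjacent-point distance of the constellation of Eq. 4.3-16 matches the closed form of Eq. 4.3-18 (M and Eg are arbitrary illustrative values):

```python
import numpy as np

# M-PSK constellation points from Eq. 4.3-16, written as complex numbers
M, Eg = 8, 2.0
m = np.arange(1, M + 1)
pts = np.sqrt(Eg / 2) * np.exp(2j * np.pi * (m - 1) / M)
d_adj = np.abs(pts[1] - pts[0])                       # adjacent signal phases
d_min = np.sqrt(Eg * (1 - np.cos(2 * np.pi / M)))     # Eq. 4.3-18
```

Both expressions reduce to √(2Eg) sin(π/M), which is why they agree for every M.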
M
The QAM signal waveforms may be expressed as

sm(t) = Re[(Amc + jAms) g(t) e^{j2πfc t}]
      = Amc g(t) cos 2πfc t − Ams g(t) sin 2πfc t    (4.3-19)

where Amc and Ams are the information-bearing signal amplitudes of the quadrature
carriers and g(t) is the signal pulse. Alternatively,

sm(t) = Re[Vm e^{jθm} g(t) e^{j2πfc t}] = Vm g(t) cos(2πfc t + θm)    (4.3-20)

where Vm = √(Amc² + Ams²) and θm = tan⁻¹(Ams/Amc).
QAM signal waveforms may be viewed as combined amplitude and phase modulation.
As in the case of PSK signals, the QAM signal waveforms may be represented as a linear
combination of two orthonormal signal waveforms
sm (t ) = sm1 f1 (t ) + sm 2 f 2 (t ) (4.3-21)
where
f1(t) = √(2/Eg) g(t) cos 2πfc t
f2(t) = −√(2/Eg) g(t) sin 2πfc t    (4.3-22)

and

sm = [sm1 sm2] = [Amc √(Eg/2),  Ams √(Eg/2)]    (4.3-23)

The Euclidean distance between any pair of signal vectors is

d_mn^(e) = ||sm − sn|| = {½Eg[(Amc − Anc)² + (Ams − Ans)²]}^{1/2}    (4.3-24)
In the special case where the signal amplitudes take the set of discrete values
{(2m − 1 − M)d, m = 1, 2, …, M}, the signal space diagram is rectangular, as shown in
Fig. 4.3-5. In this case, the Euclidean distance between adjacent points, i.e., the
minimum distance, is

d_min^(e) = d√(2Eg)    (4.3-25)
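A sketch for rectangular 16-QAM, confirming Eq. 4.3-25 by brute force over all point pairs (d and Eg are arbitrary illustrative values):

```python
import numpy as np
from itertools import product

# Rectangular 16-QAM: amplitudes (2m - 1 - 4)d on each quadrature rail
d, Eg = 1.0, 2.0
rail = np.array([2 * m - 1 - 4 for m in range(1, 5)]) * d      # -3d, -d, d, 3d
pts = np.array([complex(a, b) for a, b in product(rail, rail)]) * np.sqrt(Eg / 2)
dmin = min(abs(p - q) for p, q in product(pts, pts) if p != q)
```

The exhaustive pairwise minimum equals d√(2Eg), the distance between horizontally or vertically adjacent points.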
It is apparent from the discussion above that the digital modulation of the carrier amplitude and
phase allows us to construct signal waveforms that correspond to two-dimensional vectors and
signal space diagrams. If we wish to construct signal waveforms corresponding to higher-
dimensional vectors, we may use either the time domain or the frequency domain or both in order
to increase the number of dimensions.
Suppose we have N-dimensional signal vectors. For any N, we may subdivide a time interval of
length T1=NT into N subintervals of length T=T1/N. In each subinterval of length T, we may use
binary PAM (a one-dimensional signal) to transmit an element of the N-dimensional signal
vector. Thus, the N time slots are used to transmit the N-dimensional signal vector. If N is even, a
time slot of length T may be used to simultaneously transmit two components of the N-
dimensional vector by modulating the amplitudes of quadrature carriers independently by the
corresponding components. In this manner, the N-dimensional signal vector is transmitted in
(1/2)NT seconds (i.e., in N/2 time slots).
Alternatively, a frequency band of width NΔf may be subdivided into N frequency slots, each of
width Δf. An N-dimensional signal vector can be transmitted over the channel by
simultaneously modulating the amplitudes of N carriers, one in each of the N frequency slots. Care
must be taken to provide sufficient frequency separation Δf between successive carriers so that
there is no cross-talk interference among the signals on the N carriers. If quadrature carriers are used
in each frequency slot, the N-dimensional vector (even N) may be transmitted in (1/2)N
frequency slots, thus reducing the channel bandwidth utilization by a factor of 2.
More generally, we may use both the time and frequency domains jointly to transmit an N-
dimensional signal vector. For example, Fig.4.3-6 illustrates a subdivision of the time and
frequency axes into 12 slots. Thus, an N=12 dimensional signal vector may be transmitted by
PAM or an N=24 dimensional signal vector may be transmitted by use of two quadrature
carriers (QAM) in each slot.
Orthogonal multidimensional signals (FSK)
As a special case of the construction of multidimensional signals, let us consider the construction
of M equal-energy orthogonal signal waveforms that differ in frequency and are represented as
sm(t) = Re[slm(t) e^{j2πfc t}] = √(2E/T) cos[2πfc t + 2πmΔf t],  m = 1, 2, …, M    (4.3-26)

where the equivalent low-pass signal waveforms are

slm(t) = √(2E/T) e^{j2πmΔf t}    (4.3-27)

The cross-correlation coefficient for the pair (slm(t), slk(t)) is

ρkm = (1/2E) ∫_0^T (2E/T) e^{j2π(m−k)Δf t} dt
    = [sin(πT(m − k)Δf) / (πT(m − k)Δf)] e^{jπT(m−k)Δf}    (4.3-28)

and its real part is

Re(ρkm) = [sin(πT(m − k)Δf) / (πT(m − k)Δf)] cos(πT(m − k)Δf)
        = sin(2πT(m − k)Δf) / (2πT(m − k)Δf)    (4.3-29)
For the case in which f = 1 / 2T , the M FSK signals are equivalent to the N-dimensional
vectors
s1 = [√E  0  0  …  0]
s2 = [0  √E  0  …  0]
s3 = [0  0  √E  …  0]
⋮
sN = [0  0  0  …  √E]    (4.3-30)

The distance between any pair of signal vectors is

d_km^(e) = √(2E)  for all m, k    (4.3-31)
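Orthogonality at Δf = 1/(2T) can be confirmed by direct numerical integration of two tones; the carrier fc = 10/T is an arbitrary illustrative value:

```python
import numpy as np

# Re(rho_km) of Eq. 4.3-29 vanishes for df = 1/(2T): the FSK tones are orthogonal
T, fs = 1.0, 100000
t = np.arange(int(fs * T)) / fs
df = 1.0 / (2 * T)
fc = 10.0 / T

def tone(m):
    # unit-energy tone at frequency fc + m*df over one symbol interval
    return np.sqrt(2.0 / T) * np.cos(2 * np.pi * (fc + m * df) * t)

r = np.sum(tone(1) * tone(3)) / fs    # Riemann-sum approximation of Re(rho)
```

Both the difference and sum frequencies complete whole cycles within T, so the cross correlation integrates to zero, as Eq. 4.3-29 predicts.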
A set of M biorthogonal signals can be constructed from (1/2)M orthogonal signals by simply
including the negatives of the orthogonal signals. Thus, we require N = (1/2)M dimensions for the
construction of a set of M biorthogonal signals.
We note that the correlation between any pair of waveforms is either ρr = −1 or 0. The
corresponding distances are d = 2√E or d = √(2E), with the latter being the minimum distance.
Simplex signals
Suppose we have a set of M orthogonal signal waveforms or, equivalently, their vector
representations {sm}. Their mean is

s̄ = (1/M) Σ_{m=1}^{M} sm    (4.3-32)

Now, let us construct another set of M signals by subtracting the mean from each of the M
orthogonal signals. Thus,

sm′ = sm − s̄    (4.3-33)

The effect of the subtraction is to translate the origin of the M orthogonal signals to the point s̄.
The resulting signal waveforms are called simplex signals and have the following properties.
First, the energy per waveform is

||sm′||² = ||sm − s̄||² = E − (2/M)E + (1/M)E = E(1 − 1/M)    (4.3-34)

Second, the cross correlation between any pair of signals is

Re(ρmn) = sm′ · sn′ / (||sm′|| ||sn′||) = (−1/M)/(1 − 1/M) = −1/(M − 1)    (4.3-35)

for all m, n.
Hence, the set of simplex waveforms is equally correlated and requires less energy, by the factor
1-1/M, than the set of orthogonal waveforms. Since only the origin was translated, the distance
between any pair of signal points is maintained at d = 2 E , which is the same as the distance
between any pair of orthogonal signals.
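The simplex construction and its two properties (Eqs. 4.3-32 to 4.3-35) take only a few lines to verify; M and E are arbitrary illustrative values:

```python
import numpy as np

# Simplex signals from M orthogonal vectors (Eqs. 4.3-32 to 4.3-35)
M, E = 4, 1.0
S = np.sqrt(E) * np.eye(M)            # rows: M orthogonal signal vectors
s_bar = S.mean(axis=0)                # Eq. 4.3-32
Sp = S - s_bar                        # Eq. 4.3-33: translate the origin
energy = float(Sp[0] @ Sp[0])         # E(1 - 1/M), Eq. 4.3-34
rho = float(Sp[0] @ Sp[1]) / energy   # -1/(M-1), Eq. 4.3-35
```

The energy saving factor 1 − 1/M and the uniform negative correlation −1/(M − 1) both fall out of the orthogonal starting set.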
Signal waveforms from binary codes
A set of M signaling waveforms can be generated from a set of M binary code words of
the form

Cm = [cm1 cm2 … cmN],  m = 1, 2, …, M    (4.3-36)

where cmj = 0 or 1 for all m and j. Each component of a code word is mapped into an
elementary binary PSK waveform as follows:
cmj = 1  →  smj(t) = √(2Ec/Tc) cos 2πfc t,  0 ≤ t ≤ Tc
cmj = 0  →  smj(t) = −√(2Ec/Tc) cos 2πfc t,  0 ≤ t ≤ Tc    (4.3-37)
where Tc=T/N and Ec=E/N. Thus, the M code words {Cm } are mapped into a set of M
waveforms {sm (t )}.
The waveforms can be represented in vector form as

sm = [sm1 sm2 … smN],  m = 1, 2, …, M    (4.3-38)

where smj = ±√(E/N) for all m and j. N is called the block length of the code, and it is
also the dimension of the M waveforms.
We note that there are 2^N possible waveforms that can be constructed from the 2^N
possible binary code words. We may select a subset of M < 2^N signal waveforms for
transmission of information. We also observe that the 2^N possible signal points
correspond to the vertices of an N-dimensional hypercube with its center at the origin.
Figure 4.3-11 illustrates the signal points in N = 2 and 3 dimensions.

Each of the M waveforms has energy E. The cross correlation between any pair of
waveforms depends on how we select the M waveforms from the 2^N possible waveforms.
Clearly, any pair of adjacent signal points, which differ in a single code bit, have a
cross-correlation coefficient

ρr = E(1 − 2/N)/E = (N − 2)/N    (4.3-39)

and a corresponding Euclidean distance

d_km^(e) = √(2E(1 − ρr)) = √(4E/N)    (4.3-40)
bk = ak ⊕ bk−1    (4.3-41)

where ⊕ denotes modulo-2 addition, ak is the input into the encoder, and bk is the output
of the encoder. When bk = 1, the transmitted waveform is a rectangular pulse of
amplitude A; when bk = 0, the transmitted waveform is a rectangular pulse of amplitude
−A. Hence, the output of the encoder is mapped into one of two waveforms in exactly
the same manner as for the NRZ signal.
The differential encoding operation introduces memory in the signal. The combination of
the encoder and the modulator operations may be represented by a state diagram (a
Markov chain) as shown in Fig.4.3-13.
The state diagram may be described by two transition matrices corresponding to the two
possible input bits {0,1}. We note that when ak=0, the encoder stays in the same state.
Hence, the state transition matrix for a zero is simply
T1 = [1 0; 0 1]    (4.3-42)

where tij = 1 if ak results in a transition from state i to state j, i = 1, 2 and j = 1, 2;
otherwise, tij = 0. Similarly, the state transition matrix for ak = 1 is

T2 = [0 1; 1 0]    (4.3-43)
Thus, these two state transition matrices characterize the NRZI signal.
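The differential encoding rule of Eq. 4.3-41 in code (the initial state b0 = 0 is an assumption for the sketch):

```python
# NRZI differential encoding, Eq. 4.3-41: b_k = a_k XOR b_{k-1}
def nrzi_encode(bits, b_prev=0):
    out = []
    for a in bits:
        b_prev = a ^ b_prev      # modulo-2 addition with the previous output
        out.append(b_prev)
    return out

# an input 1 toggles the transmitted level, an input 0 repeats it
encoded = nrzi_encode([1, 0, 1, 1, 0])   # -> [1, 1, 0, 1, 1]
```

The toggle-on-1 behavior is exactly what the transition matrices T1 (stay) and T2 (change) describe.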
Another way to display the memory is by a trellis diagram. The trellis provides the same
information concerning the signal dependence as the state diagram, but also depicts a
time evolution of the state transitions.
The signal generated by delay modulation also has memory. Delay modulation is
equivalent to encoding the data sequence by a run-length-limited code called a Miller
code and using NRZI to transmit the encoded data this type of modulation has been used
extensively in binary PSK. The signal may be described by a state diagram that has four
states as shown in Fig.4.3.-15a. There are two elementary waveforms s1(t) and s2(t) and
their negatives -s1(t) and -s2(t), which are used for transmitting the binary information.
These waveforms are illustrated in Fig.4.3-15b. The mapping from bits to corresponding
waveforms is illustrated in the state diagram. The state transition matrices that
characterize the memory of this encoding and modulation method are easily obtained
from the state diagram in Fig.4.3.-15. When ak=0, we have
T1 = [0 0 0 1; 0 0 0 1; 1 0 0 0; 1 0 0 0]    (4.3-44)

and, when ak = 1,

T2 = [0 1 0 0; 0 0 1 0; 0 1 0 0; 0 0 1 0]    (4.3-45)
Modulation techniques with memory, such as NRZI coding, are generally characterized by
a K-state Markov chain with stationary state probabilities {pi, i = 1, 2, …, K} and transition
probabilities {pij, i, j = 1, 2, …, K}. Thus, the transition probability pij denotes the
probability that signal waveform sj(t) is transmitted in a given signaling interval after the
transmission of the signal waveform si(t) in the previous signaling interval. The
transition probabilities may be arranged in the matrix P = [pij].
P = Σ_{i=1}^{2} qi Ti    (4.3-47)
For the NRZI signal with equal state probabilities p1=p2=1/2 and transition matrices
given by Equations 4.3-42 and 4.3-43, the transition probability matrix is
P = [½ ½; ½ ½]    (4.3-48)
Similarly, the transition probability matrix for the Miller-coded signal with equally likely
symbols (q1=q2=1/2 or, equivalently, p1=p2=p3=p4=1/4) is
P = [0 ½ 0 ½; 0 0 ½ ½; ½ ½ 0 0; ½ 0 ½ 0]    (4.3-49)
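Equation 4.3-47 for the NRZI chain, reproducing the matrix of Eq. 4.3-48:

```python
import numpy as np

# P = sum_i q_i T_i with q1 = q2 = 1/2 and the matrices of Eqs. 4.3-42/43
T1 = np.array([[1.0, 0.0], [0.0, 1.0]])   # input bit 0: remain in state
T2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # input bit 1: change state
P = 0.5 * T1 + 0.5 * T2
```

Each row of P sums to one, as required of a stochastic matrix, and every entry is ½: with equiprobable inputs the NRZI state is equally likely to stay or toggle.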
To avoid the use of signals having large spectral side lobes, the information-bearing
signal frequency-modulates a single carrier whose frequency is changed continuously
(continuous-phase FSK, CPFSK). This type of FSK signal has memory because the
phase of the carrier is constrained to be continuous.
In order to represent a CPFSK signal, we begin with a PAM signal

d(t) = Σ_n In g(t − nT)    (4.3-50)

where {In} denotes the sequence of amplitudes obtained by mapping k-bit blocks of
binary digits from the information sequence {an} into the amplitude levels
±1, ±3, …, ±(M − 1), and g(t) is a rectangular pulse of amplitude 1/2T and duration T
seconds. The signal d(t) is used to frequency-modulate the carrier. Consequently, the
equivalent low-pass waveform v(t) is expressed as

v(t) = √(2E/T) exp{j[4πTfd ∫_{−∞}^t d(τ) dτ + φ0]}    (4.3-51)

where fd is the peak frequency deviation and φ0 is the initial phase of the carrier.
The carrier-modulated signal corresponding to Eq. 4.3-51 may be expressed as

s(t) = √(2E/T) cos[2πfc t + φ(t; I) + φ0]    (4.3-52)

where φ(t; I) represents the time-varying phase of the carrier, defined as

φ(t; I) = 4πTfd ∫_{−∞}^t d(τ) dτ = 4πTfd ∫_{−∞}^t [Σ_n In g(τ − nT)] dτ    (4.3-53)
Note that, although d(t) contains discontinuities, the integral of d(t) is continuous. Hence,
we have a continuous-phase signal. The phase of the carrier in the interval
nT ≤ t ≤ (n + 1)T is determined by integrating Eq. 4.3-53. Thus,

φ(t; I) = 2πfd T Σ_{k=−∞}^{n−1} Ik + 2πfd (t − nT) In = θn + 2πh In q(t − nT)    (4.3-54)

where h = 2fd T is the modulation index (4.3-55),

θn = πh Σ_{k=−∞}^{n−1} Ik    (4.3-56)

and

q(t) = { 0  (t < 0);  t/2T  (0 ≤ t ≤ T);  1/2  (t > T) }    (4.3-57)
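The phase law of Eqs. 4.3-54 to 4.3-57 can be evaluated directly; the sketch below (h = 1/2 and a short symbol sequence chosen arbitrarily) confirms that the phase is continuous across symbol boundaries:

```python
import numpy as np

def q(tau, T=1.0):
    """Eq. 4.3-57: 0 for tau < 0, tau/2T on [0, T], 1/2 for tau > T."""
    return float(np.clip(tau / (2 * T), 0.0, 0.5))

def phase(t, I, h=0.5, T=1.0):
    """Eq. 4.3-54: theta_n + 2*pi*h*I_n*q(t - nT) within the n-th symbol."""
    n = min(int(t // T), len(I) - 1)
    theta_n = np.pi * h * sum(I[:n])       # Eq. 4.3-56
    return theta_n + 2 * np.pi * h * I[n] * q(t - n * T, T)

I = [1, -1, -1, 1]
# the phase is continuous at every symbol boundary despite the jumps in d(t)
```

At t = nT the ramp of the previous symbol has reached its full contribution πh I_{n−1}, which is exactly the increment absorbed into θn, so no phase jump occurs.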
When expressed in the form of Eq. 4.3-54, CPFSK becomes a special case of a general
class of continuous-phase modulated (CPM) signals in which the carrier phase is

φ(t; I) = 2π Σ_{k=−∞}^{n} Ik hk q(t − kT),  nT ≤ t ≤ (n + 1)T    (4.3-58)

where {Ik} is the sequence of M-ary information symbols selected from the alphabet
±1, ±3, …, ±(M − 1), {hk} is a sequence of modulation indices, and q(t) is some
normalized waveform shape.

When hk = h for all k, the modulation index is fixed for all symbols. When the modulation
index varies from one symbol to another, the CPM signal is called multi-h. In such a case,
the {hk} are made to vary in a cyclic manner through a set of indices.
The waveform q(t) may be represented in general as the integral of some pulse g(t), i.e.,

q(t) = ∫_0^t g(τ) dτ    (4.3-59)

If g(t) = 0 for t > T, the CPM signal is called full-response CPM. If g(t) ≠ 0 for t > T, the
modulated signal is called partial-response CPM. Fig. 4.3-16 illustrates several pulse
shapes for g(t) and the corresponding q(t). It is apparent that an infinite variety of CPM
signals can be generated by choosing different pulse shapes g(t) and by varying the
modulation index h and the alphabet size M.

We note that the CPM signal has memory that is introduced through the phase continuity.
For L > 1, additional memory is introduced in the CPM signal by the pulse g(t).
Three popular pulse shapes are given:
LREC:  g(t) = { 1/(2LT)  (0 ≤ t ≤ LT);  0  (otherwise) }

LRC:   g(t) = { (1/2LT)[1 − cos(2πt/LT)]  (0 ≤ t ≤ LT);  0  (otherwise) }

GMSK:  g(t) = Q[2πB(t − T/2)/(ln 2)^{1/2}] − Q[2πB(t + T/2)/(ln 2)^{1/2}]

where

Q(t) = ∫_t^∞ (1/√(2π)) e^{−x²/2} dx
LREC denotes a rectangular pulse of duration LT, where L is a positive integer. In this case,
L = 1 results in a CPFSK signal, with the pulses as shown in Fig. 4.3-16a. The LREC pulse
for L = 2 is shown in Fig. 4.3-16c. LRC denotes a raised-cosine pulse of duration LT. The
LRC pulses corresponding to L = 1 and L = 2 are shown in Fig. 4.3-16b and d,
respectively. The third pulse is called a Gaussian minimum-shift keying (GMSK) pulse
with bandwidth parameter B, which represents the −3 dB bandwidth of the Gaussian pulse.
Fig. 4.3-16e illustrates a set of GMSK pulses with time-bandwidth products BT ranging
from 0.1 to 1. We observe that the pulse duration increases as the bandwidth of the pulse
decreases, as expected. In practical applications, the pulse is usually truncated to some
specific fixed duration. GMSK with BT = 0.3 is used in the European digital cellular
communication system, called GSM. From Fig. 4.3-16e we observe that when BT = 0.3,
the GMSK pulse may be truncated at |t| = 1.5T with a relatively small error incurred for
|t| > 1.5T.
It is instructive to sketch the set of phase trajectories φ(t; I) generated by all possible
values of the information sequence {I_n}. For example, in the case of CPFSK with binary
symbols I_n = ±1, the set of phase trajectories beginning at time t = 0 is shown in Fig.4.3-
17. For comparison, the phase trajectories for quaternary CPFSK are illustrated in
Fig.4.3-18. These phase diagrams are called phase trees. We observe that the phase trees
for CPFSK are piecewise linear as a consequence of the fact that the pulse g(t) is
rectangular. Smoother phase trajectories and phase trees are obtained by using pulses that
do not contain discontinuities, such as the class of raised cosine pulses. For example, a
phase trajectory generated by the sequence (1,-1,-1,-1,1,1,-1,1) for a partial response,
raised cosine pulse of length 3T is illustrated in Fig.4.3-19. For comparison, the
corresponding phase trajectory generated by CPFSK is also shown.
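Because the CPFSK phase is piecewise linear, a phase trajectory is fully determined by its values at the symbol boundaries, θ_{n+1} = θ_n + πh I_n. The sketch below (not from the text) computes those boundary phases for the example sequence above; the choice h = 1/2 is illustrative.

```python
import math

def cpfsk_boundary_phases(symbols, h):
    # Full-response rectangular pulse (1REC): the phase trajectory is
    # piecewise linear, so it is determined by its symbol-boundary values:
    # theta_{n+1} = theta_n + pi * h * I_n
    phases = [0.0]
    for I in symbols:
        phases.append(phases[-1] + math.pi * h * I)
    return phases

# the example sequence from the text, with h = 1/2 as an illustrative choice
seq = (1, -1, -1, -1, 1, 1, -1, 1)
phases = cpfsk_boundary_phases(seq, 0.5)
print([round(p / math.pi, 2) for p in phases])  # boundary phases in units of pi
```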
Minimum-shift keying (MSK)
MSK is a special form of binary CPFSK (and, therefore, CPM) in which the modulation
index h = 1/2. The phase of the carrier in the interval nT ≤ t ≤ (n + 1)T is
$$\phi(t; I) = \frac{\pi}{2} \sum_{k=-\infty}^{n-1} I_k + \pi I_n q(t - nT) = \theta_n + \frac{\pi}{2} I_n \left( \frac{t - nT}{T} \right), \qquad nT \le t \le (n+1)T \tag{4.3-63}$$
$$s(t) = A \cos\left[ 2\pi f_c t + \theta_n + \frac{\pi}{2} I_n \left( \frac{t - nT}{T} \right) \right] = A \cos\left[ 2\pi \left( f_c + \frac{I_n}{4T} \right) t - \frac{n\pi}{2} I_n + \theta_n \right] \tag{4.3-64}$$
The expression (4.3-64) indicates that the binary CPFSK signal can be expressed as a
sinusoid having one of two possible frequencies in the interval nT ≤ t ≤ (n + 1)T. If we
denote these frequencies as

$$f_1 = f_c - \frac{1}{4T}, \qquad f_2 = f_c + \frac{1}{4T} \tag{4.3-65}$$
then the binary CPFSK signal given by Eq.4.3-64 may be written in the form

$$s_i(t) = A \cos\left[ 2\pi f_i t + \theta_n + \frac{n\pi}{2} (-1)^{i-1} \right], \qquad i = 1, 2 \tag{4.3-66}$$
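The phase continuity that distinguishes MSK from conventional FSK can be checked numerically from Eq. 4.3-64: with the phase recursion θ_{n+1} = θ_n + (π/2)I_n implied by Eq. 4.3-63, the signal values computed from adjacent symbol intervals agree at the boundary. The amplitude, symbol period, carrier frequency, and symbols below are illustrative choices, not values from the text.

```python
import math

A, T, fc = 1.0, 1.0, 2.0  # illustrative amplitude, symbol period, carrier

def msk_signal(t, n, theta_n, I_n):
    # Eq. 4.3-64: s(t) = A cos[2*pi*fc*t + theta_n + (pi/2)*I_n*(t - n*T)/T]
    return A * math.cos(2 * math.pi * fc * t + theta_n
                        + 0.5 * math.pi * I_n * (t - n * T) / T)

# symbols for the intervals n = 0 and n = 1
I0, I1 = 1, -1
theta0 = 0.0
theta1 = theta0 + 0.5 * math.pi * I0   # phase accumulated over interval 0

# the signal is continuous at the symbol boundary t = T
left = msk_signal(T, 0, theta0, I0)
right = msk_signal(T, 1, theta1, I1)
print(left, right)
```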
4.4.1 Power Spectra of Linearly Modulated Signals
Beginning with the relation

$$s(t) = \mathrm{Re}\left[ v(t)\, e^{j 2\pi f_c t} \right]$$

which relates the band-pass signal s(t) to the equivalent low-pass signal v(t), we may
express the autocorrelation function of s(t) as

$$\phi_{ss}(\tau) = \mathrm{Re}\left[ \phi_{vv}(\tau)\, e^{j 2\pi f_c \tau} \right] \tag{4.4-1}$$
The Fourier transform of Eq. 4.4-1 yields the desired expression for the power density
spectrum in the form
$$\Phi_{ss}(f) = \frac{1}{2}\left[ \Phi_{vv}(f - f_c) + \Phi_{vv}(-f - f_c) \right] \tag{4.4-2}$$
First we consider the linear digital modulation method for which v(t) is represented in the
general form
$$v(t) = \sum_{n=-\infty}^{\infty} I_n\, g(t - nT) \tag{4.4-3}$$
where the transmission rate is 1/T=R/k symbols/s and {In} represents the sequence of
symbols that results from mapping k-bit blocks into corresponding signal points selected
from the appropriate signal space diagram. Observe that in PAM, the sequence {In} is
real and corresponds to the amplitude values of the transmitted signal, but in PSK, QAM,
and combined PAM-PSK, the sequence {In} is complex-valued, since the signal points
have a two-dimensional representation.
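As a minimal numeric sketch of Eq. 4.4-3 (not from the text), the low-pass signal for a complex-valued QPSK symbol sequence with a rectangular pulse can be formed by superposing time-shifted pulses. The sample rate and symbol values are illustrative assumptions.

```python
import numpy as np

T = 1.0
sps = 8  # samples per symbol interval T (illustrative)

# QPSK signal points: complex-valued {I_n}, unit magnitude
symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

# rectangular pulse g(t) = 1 on [0, T); v(t) = sum_n I_n g(t - nT)
g = np.ones(sps)
v = np.concatenate([I * g for I in symbols])
print(v.shape, abs(v[0]))
```

With non-overlapping rectangular pulses the superposition reduces to holding each symbol for T seconds; overlapping pulses (partial response) would instead be summed sample by sample.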
$$\Phi_{vv}(f) = \frac{1}{T} \left| G(f) \right|^2 \Phi_{ii}(f) \tag{4.4-12}$$
where G(f) is the Fourier transform of g(t), and Φ_ii(f) denotes the power density
spectrum of the information sequence, defined as
$$\Phi_{ii}(f) = \sum_{m=-\infty}^{\infty} \phi_{ii}(m)\, e^{-j 2\pi f m T} \tag{4.4-13}$$
The result 4.4-12 illustrates the dependence of the power density spectrum of v(t) on the
spectral characteristics of the pulse g(t) and the information sequence {In}. That is, the
spectral characteristics of v(t) can be controlled by design of the pulse shape g(t) and by
design of the correlation characteristics of the information sequence.
The desired power density spectrum of v(t) when the sequence of information symbols is
uncorrelated is of the form
$$\Phi_{vv}(f) = \frac{\sigma_i^2}{T} \left| G(f) \right|^2 + \frac{\mu_i^2}{T^2} \sum_{m=-\infty}^{\infty} \left| G\!\left( \frac{m}{T} \right) \right|^2 \delta\!\left( f - \frac{m}{T} \right) \tag{4.4-18}$$

where σ_i² denotes the variance of an information symbol and μ_i denotes the mean.
The expression 4.4-18 for the power density spectrum is purposely separated into two
terms to emphasize the two different types of spectral components. The first term is the
continuous spectrum, and its shape depends only on the spectral characteristic of the
signal pulse g(t). The second term consists of discrete frequency components spaced 1/T
apart in frequency. Each spectral line has a power that is proportional to |G(f)|2 evaluated
at f = m/T. Note that the discrete frequency components vanish when the information
symbols have zero mean, i.e., μ_i = 0. This condition is usually desirable for the digital
modulation techniques under consideration, and it is satisfied when the information
symbols are equally likely and symmetrically positioned in the complex plane. Thus, the
system designer can control the spectral characteristics of the transmitted signal through
proper selection of the information sequence to be transmitted.
Example 4.4-1
To illustrate the spectral shaping resulting from g(t), consider the rectangular pulse
shown in Fig.4.4-1a. The Fourier transform of g(t) is
$$G(f) = AT\, \frac{\sin \pi f T}{\pi f T}\, e^{-j\pi f T}$$

Hence

$$\left| G(f) \right|^2 = (AT)^2 \left( \frac{\sin \pi f T}{\pi f T} \right)^2 \tag{4.4-19}$$
Note that it contains zeros at multiples of 1/T in frequency and that it decays inversely as
the square of the frequency variable. As a consequence of the spectral zeros in G(f), all
but one of the discrete spectral components in Eq.4.4-18 vanish. Thus upon substitution
for |G(f)|2 from Eq.4.4-19, Eq.4.4-18 reduces to
$$\Phi_{vv}(f) = A^2 T \sigma_i^2 \left( \frac{\sin \pi f T}{\pi f T} \right)^2 + A^2 \mu_i^2\, \delta(f) \tag{4.4-20}$$
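The two properties claimed for Eq. 4.4-19, spectral nulls at nonzero multiples of 1/T and 1/f² decay of |G(f)|², can be checked numerically. The sketch below (not from the text) uses the illustrative values A = T = 1.

```python
import math

A, T = 1.0, 1.0  # illustrative amplitude and symbol duration

def G2(f):
    # |G(f)|^2 = (A*T)^2 * (sin(pi*f*T) / (pi*f*T))^2, Eq. 4.4-19
    if f == 0.0:
        return (A * T) ** 2
    x = math.pi * f * T
    return (A * T) ** 2 * (math.sin(x) / x) ** 2

# nulls at f = m/T for nonzero integer m
print(G2(1.0 / T), G2(2.0 / T))

# between the nulls the envelope has already decayed as 1/f^2:
# near f = 2.5/T the lobe peak is about (A*T)^2 / (2.5*pi)^2
print(G2(2.5 / T))
```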
Example 4.4-2
As a second illustration of the spectral shaping resulting from g(t), we consider the raised
cosine pulse
$$g(t) = \frac{A}{2} \left[ 1 + \cos \frac{2\pi}{T} \left( t - \frac{T}{2} \right) \right], \qquad 0 \le t \le T \tag{4.4-21}$$
This pulse is graphically illustrated in Fig.4.4-2a. Its Fourier transform is easily derived,
and it may be expressed in the form
$$G(f) = \frac{AT}{2}\, \frac{\sin \pi f T}{\pi f T \left( 1 - f^2 T^2 \right)}\, e^{-j\pi f T} \tag{4.4-22}$$
Consequently, all the discrete spectral components in Eq.4.4-18, except those at f = 0
and f = ±1/T, vanish. When compared with the spectrum of the rectangular pulse, the
spectrum of the raised cosine pulse has a broader main lobe, but its tails decay inversely
as f⁶.
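The contrast in tail decay, 1/f² for |G(f)|² of the rectangular pulse (Eq. 4.4-19) versus 1/f⁶ for the raised cosine pulse (Eq. 4.4-22), can be verified by sampling both spectra at frequencies between the nulls and doubling f. This is an illustrative check with A = T = 1, not part of the text.

```python
import math

A, T = 1.0, 1.0  # illustrative values

def G2_rect(f):
    # |G(f)|^2 for the rectangular pulse, Eq. 4.4-19
    x = math.pi * f * T
    return (A * T) ** 2 * (math.sin(x) / x) ** 2

def G2_rc(f):
    # |G(f)|^2 for the raised cosine pulse, from Eq. 4.4-22
    x = math.pi * f * T
    return (A * T / 2) ** 2 * (math.sin(x) / (x * (1 - (f * T) ** 2))) ** 2

# sample at off-null frequencies f = (m + 1/2)/T, then double f
f1, f2 = 10.5 / T, 21.5 / T
r_rect = G2_rect(f1) / G2_rect(f2)  # ~ (f2/f1)^2 ≈ 4.2  (1/f^2 tail)
r_rc = G2_rc(f1) / G2_rc(f2)        # ~ (f2/f1)^6 ≈ 74   (1/f^6 tail)
print(r_rect, r_rc)
```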
Example 4.4-3
To illustrate that spectral shaping can also be accomplished by operations performed on
the input information sequence, we consider a binary sequence {bn} from which we form
the symbols
$$I_n = b_n + b_{n-1} \tag{4.4-23}$$
The {bn} are assumed to be uncorrelated random variables, each having zero mean and
unit variance. Then the autocorrelation function of the sequence {In} is
$$\phi_{ii}(m) = E(I_n I_{n+m}) = \begin{cases} 2, & m = 0 \\ 1, & m = \pm 1 \\ 0, & \text{otherwise} \end{cases} \tag{4.4-24}$$
and the corresponding power density spectrum for the (low-pass) modulated signal is
$$\Phi_{vv}(f) = \frac{4}{T} \left| G(f) \right|^2 \cos^2 \pi f T \tag{4.4-26}$$
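The spectral factor in Eq. 4.4-26 follows directly from the autocorrelation in Eq. 4.4-24: inserting φ_ii(0) = 2 and φ_ii(±1) = 1 into Eq. 4.4-13 gives Φ_ii(f) = 2 + 2 cos 2πfT = 4 cos² πfT. The short check below (illustrative, with T = 1) confirms this identity and the spectral null it places at f = 1/2T.

```python
import math

T = 1.0  # illustrative symbol duration

def Phi_ii(f):
    # Eq. 4.4-13 with phi_ii(0) = 2, phi_ii(+-1) = 1, all other terms zero:
    # Phi_ii(f) = 2 + e^{-j2pi f T} + e^{+j2pi f T} = 2 + 2 cos(2 pi f T)
    return 2.0 + 2.0 * math.cos(2.0 * math.pi * f * T)

# identity 2 + 2 cos(2x) = 4 cos^2(x), giving the cos^2(pi f T) factor of Eq. 4.4-26
for f in (0.0, 0.123, 0.35):
    print(f, Phi_ii(f), 4.0 * math.cos(math.pi * f * T) ** 2)
```

The correlation introduced by I_n = b_n + b_{n−1} thus shapes the spectrum toward low frequencies and forces a null at the band edge f = 1/2T, without changing the pulse g(t).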
4.4.2 Power Spectra of CPFSK and CPM Signals
4.4.3 Power Spectra of Modulated Signals with Memory
SELECTED PROBLEMS:
4.14
Consider an equivalent low-pass digitally modulated signal of the form

$$u(t) = \sum_{n} I_n\, g(t - nT)$$

where I_n takes on one of the four possible values (1 ± j)/√2 with equal probability, and
the sequence of information symbols {I_n} is statistically independent. Equivalently,
{a_n} and {b_n} are two sequences of statistically independent binary digits and g(t) is a
sinusoidal pulse. This type of signal is viewed as a four-phase PSK signal in which the
pulse shape is one-half cycle of a sinusoid. Each of the information sequences {a_n} and
{b_n} is transmitted at a rate of 1/2T bits/s and, hence, the combined transmission rate is
1/T bits/s. The two sequences are staggered in time by T seconds in transmission.
Consequently, the signal u(t) is called staggered four-phase PSK.

a) Determine and sketch the power density spectrum of u(t) when

$$g(t) = \begin{cases} A, & 0 < t < 2T \\ 0, & \text{otherwise} \end{cases}$$

b) Repeat (a) when

$$g(t) = \begin{cases} A \sin \dfrac{\pi t}{2T}, & 0 \le t \le 2T \\ 0, & \text{otherwise} \end{cases}$$
c) Compare the spectra obtained in (a) and (b) in terms of the 3-dB bandwidth and
the bandwidth to the first spectral zero.
Sol:
4.19
π/4-QPSK may be considered as two QPSK systems offset by π/4 radians.
Sol:
4.20
The power density spectrum of the cyclostationary process

$$v(t) = \sum_{n=-\infty}^{\infty} I_n\, g(t - nT)$$

was derived in Section 4.4.1 by averaging the autocorrelation function φ_vv(t + τ; t) over
the period T of the process and then evaluating the Fourier transform of the average
autocorrelation function. An alternative approach is to change the cyclostationary process
into a stationary process v_Δ(t) by adding a random variable Δ, uniformly distributed over
0 ≤ Δ < T, so that

$$v_{\Delta}(t) = \sum_{n=-\infty}^{\infty} I_n\, g(t - nT - \Delta)$$

and defining the spectral density of v_Δ(t) as the Fourier transform of the autocorrelation
function of the stationary process v_Δ(t). Derive the result in Eq. 4.4-11 by evaluating the
autocorrelation function of v_Δ(t) and its Fourier transform.
Sol:
4.22
The low-pass equivalent representation of a PAM signal is

$$u(t) = \sum_{n} I_n\, g(t - nT)$$

with

$$I_n = a_n - a_{n-2}$$

where {a_n} is a sequence of uncorrelated binary-valued (1,-1) random variables that occur
with equal probability.
a) Determine the autocorrelation function of the sequence {In}.
b) Determine the power density spectrum of u(t).
c) Repeat (b) if the possible values of the an are (0,1).
Sol:
4.30
Show that 16-QAM can be represented as a superposition of two four-phase constant-
envelope signals where each component is amplified separately before summing, i.e.,

where {A_n}, {B_n}, {C_n}, and {D_n} are statistically independent binary sequences with
elements from the set {+1,-1} and G is the amplifier gain. Thus, show that the resulting
signal is equivalent to

$$s(t) = I_n \cos 2\pi f_c t + Q_n \sin 2\pi f_c t$$
Sol:
4.36
The two signal waveforms for binary FSK signal transmission with discontinuous phase
are

$$s_0(t) = \sqrt{\frac{2\varepsilon_b}{T_b}} \cos\left[ 2\pi \left( f_c - \frac{\Delta f}{2} \right) t + \theta_0 \right], \qquad 0 \le t \le T$$

$$s_1(t) = \sqrt{\frac{2\varepsilon_b}{T_b}} \cos\left[ 2\pi \left( f_c + \frac{\Delta f}{2} \right) t + \theta_1 \right], \qquad 0 \le t \le T$$

where Δf = 1/T << f_c, and θ_0 and θ_1 are uniformly distributed random variables on the
interval (0, 2π). The signals s_0(t) and s_1(t) are equally probable.

a) Determine the power spectral density of the FSK signal.

b) Show that the power spectral density decays as 1/f² for f >> f_c.
Sol: