MILLIMETER-WAVE MEASUREMENT
GUY VERNET
Université Paris-Sud
GÉRARD BEAUDIN
Observatoire de Paris-Meudon
DOMINIQUE CROS
Université de Limoges
PAUL CROZAT
Université Paris-Sud
GILLES DAMBRINE
Institut d'Electronique et de
Microélectronique du Nord
(IEMN)
BERNARD HUYART
Ecole Nationale Supérieure des
Télécommunications (ENST)
JEAN-MICHEL NEBUS
Université de Limoges
The millimeter-wavelength spectral band covers the frequency range from 30 GHz (λ = 10 mm) to 300 GHz (λ = 1 mm). In the larger view, it can include a part of the submillimeter band: the extended range up to 1 THz (λ = 0.3 mm), which represents one of the least explored portions of the electromagnetic spectrum. The frontier between the millimeter/submillimeter region and the far-infrared region is arbitrary and variable. The distinction comes mainly from the detection techniques employed (coherent or incoherent detection). The millimeter spectrum is presented in Fig. 1.
In the microwave domain, the atmosphere is transparent at frequencies up to 40 GHz, except for a weak water vapor absorption line at 22 GHz. In the millimeter domain, however, there are several strong absorption lines: (1) a large and complex set of oxygen lines around 55-60 GHz, (2) a single oxygen line around 119 GHz, and (3) a water vapor line around 183 GHz. Above 300 GHz, several absorption lines exist, mainly due to water vapor. The spectral regions located between these lines, commonly called windows, become less and less transparent as the frequency increases.
Millimeter waves offer a solution to the increasing demand for frequency allocations caused by the saturation of the lower frequency bands and by the requirement for higher data rates. Moreover, high directivity can be obtained with small antennas associated with small-sized circuits that are more easily integrated. Applications are numerous, ranging from mobile communications, local-area
networks, and collision avoidance radars to satellite
communications, radio astronomy, radio altimetry, and
robotics.
In the millimeter-wave range up to 100GHz, the equip-
ment and methods of measurement have been extended
from the microwave domain. The major problems in the
millimeter field are due to the small size of the devices and
the transmission line losses. Above 100 GHz, as an alter-
native, other equipment and methods of measurement,
using a quasioptic setup, have been developed or adapted
from far-infrared techniques (dielectric waveguide cavity
resonator, free-space methods).
1. MILLIMETER-WAVE AUTOMATIC NETWORK ANALYZER
Like all other types of automatic network analyzers (ANAs), the millimeter-wave automatic network analyzer (MWANA) measures the magnitudes and phases of the scattering parameters (S parameters) of the device under test (DUT).
1.1. Main Types of MWANA
1.1.1. Broadband Coaxial Systems. In this group, single- or multiple-synthesized-sweeper network analyzers are commercially available [1]. The single-synthesized-source systems perform S-parameter measurements up to 50 GHz using 2.4-mm coaxial accessories, and up to 67 GHz using 1.85-mm coaxial elements (V connectors). The multiple-synthesized-source systems may cover the 40 MHz to 110 GHz frequency range using 1-mm coaxial elements (W connectors).
1.1.2. Rectangular Waveguide Systems. These network analyzers perform S-parameter measurements in the Q (33-50 GHz), U (40-60 GHz), V (50-75 GHz), and W (75-110 GHz) frequency ranges; the corresponding rectangular waveguide standards are WR-22, WR-19, WR-15, and WR-10, respectively.
In the multiple-source network analyzer, one synthesized source provides the radiofrequency (RF) stimulus signal and the second provides the local-oscillator (LO) signal. Figure 2 shows a simplified block diagram of this system, common to all waveguide bands. The system consists of a conventional network analyzer, two microwave sources (RF and LO), and a pair of band-dependent millimeter-wave test set modules covering the frequency bands given above. The RF signal, after amplification, is routed to the port 1 test set module for forward measurements (S11 and S21) or to the port 2 test set module for reverse measurements (S22 and S12). Components in the millimeter-wave test set module provide frequency multiplication, signal separation to sample the incident, reflected, and transmitted signals, and the harmonic mixers that accomplish the intermediate-frequency (IF) conversion (generally a first IF of a few MHz, e.g., 20 MHz). The second source provides the LO for the four harmonic mixers. This LO source is set such that the millimeter-wave RF test signal frequency and the appropriate LO harmonic are offset by exactly the IF (e.g., 20 MHz). For instance, in the case of the HP8510C MWANA [1] with V-band millimeter-wave test set modules, the frequencies of the two microwave sources (RF and LO) can be expressed as

RF = F_op / 4    and    LO = (F_op + 20 MHz) / 14

where F_op is the operating frequency.
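As a numerical illustration of this frequency plan, the short sketch below (a minimal Python helper of our own, not part of any instrument software; the multiplication factor, harmonic number, and IF value are taken from the example above) computes the two synthesizer frequencies for a few V-band operating frequencies.

```python
# Illustrative V-band frequency plan: RF source multiplied by 4,
# 14th LO harmonic offset from the test signal by the 20 MHz IF.
def vband_source_frequencies(f_op_ghz, if_mhz=20.0, rf_mult=4, lo_harmonic=14):
    """Return the RF and LO synthesizer frequencies (GHz) for an operating
    frequency f_op_ghz, assuming RF = F_op/4 and LO = (F_op + IF)/14."""
    rf = f_op_ghz / rf_mult
    lo = (f_op_ghz + if_mhz * 1e-3) / lo_harmonic
    return rf, lo

if __name__ == "__main__":
    for f_op in (50.0, 62.5, 75.0):  # GHz, V band
        rf, lo = vband_source_frequencies(f_op)
        print(f"F_op = {f_op:5.1f} GHz -> RF source = {rf:.4f} GHz, LO source = {lo:.6f} GHz")
```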
As compared with a single-source (coaxial) network analyzer, the rectangular waveguide system has inherent drawbacks. Indeed, the power of the RF signal injected into the DUT cannot be controlled, because of the frequency multiplication. This power may be close to 0 dBm (1 mW in a 50-Ω system) and may depend strongly on the frequency band. This feature may induce nonlinear phenomena (compression, distortion) when the DUT is an active device (transistor, amplifier, etc.). Moreover, the reactive impedance of a rectangular waveguide below its cutoff frequency may cause instability of an active DUT.
Figure 1. Atmospheric transmission in the millimeter domain. (The figure shows the electromagnetic spectrum from 300 MHz to 300 THz, with the VHF, UHF, SHF, and EHF bands, wavelengths from 1 m down to 1 µm, and the millimeter-wave region subdivided into the Ka, Q, V, W, and D bands between about 27 and 300 GHz.)
1.2. Dynamic Range of the Millimeter-Wave Automatic Network Analyzer

Dynamic range, which is the key consideration in most measurement systems, relates to the ability of a receiver to detect a signal accurately over a large amplitude range. The largest input signal is usually limited by compression in the input receiver, while the smallest signals that can be detected are limited by the noise floor and other undesirable signals. The dynamic range can be improved by increasing the number of measurement averages and by changing the video IF bandwidth. Table 1 summarizes the dynamic range of the HP8510C for transmission measurements as a function of the frequency band.
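To make the definitions used in Table 1 concrete, the minimal sketch below (our own helper functions, not an instrument feature) computes the receiver and system dynamic ranges from the nominal power levels, following footnotes (a) and (b) of the table.

```python
def receiver_dynamic_range(max_power_port2_dbm, noise_floor_dbm):
    """Ratio (dB) of the maximum signal at port 2 (0.1 dB compression)
    to the system noise floor, as in footnote (a) of Table 1."""
    return max_power_port2_dbm - noise_floor_dbm

def system_dynamic_range(ref_power_port1_dbm, noise_floor_dbm):
    """Ratio (dB) of the maximum signal at port 1 to the system
    noise floor, as in footnote (b) of Table 1."""
    return ref_power_port1_dbm - noise_floor_dbm

if __name__ == "__main__":
    # Nominal V-band (50-75 GHz) values from Table 1.
    print(receiver_dynamic_range(10.0, -75.0))  # 85 dB
    print(system_dynamic_range(0.0, -75.0))     # 75 dB
```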
1.3. On-Wafer Probing System
Commercially available coplanar probes cover the full
millimeter-wave band [2]. On-wafer probing in milli-
meter-wave measurement is by far the most precise tech-
nique, due to (1) better positioning and (2) better contact
repeatability. For millimeter-wave measurement, only the ground-signal-ground (GSG) topology is useful, since only fundamental modes must be excited at the probe tip. There are mainly two types of coplanar probes: coaxial-to-coplanar probe tips and waveguide-to-coplanar probe tips. The former uses the internal MWANA test set bias tee, while the latter may include a direct-current (DC) bias tee inside the probe (Fig. 3). Typical values for return loss and insertion loss are, respectively, within 10-15 dB and 1-2 dB [2]. The connection between the probe and the test set port must be kept as short as possible, since the millimeter-wave coaxial cable may easily add several decibels to the insertion loss.
1.4. Specific On-Wafer Calibration Technique
High-precision measurement relies on careful reference
plane definition and on-chip parasitic access determina-
tion [4]. Reference plane definition strongly correlates with
the calibration used. For SOLT (short, open, load, and
through standards) calibration, the reference plane is
Figure 2. Simplified synopsis of a MWANA test set, for an HP85106D V-band system (50-75 GHz). (Each port module contains an isolator and a ×4 multiplier on the RF path, directional couplers that sample the incident and reflected waves a1, b1 at port 1 and a2, b2 at port 2, and harmonic mixers driven by the LO that convert the sampled signals toward the 20-MHz IF section; the DUT is connected between port 1 and port 2.)
Table 1. Dynamic Range of HP8510C for Transmission Measurements as a Function of the Frequency Band

Frequency range (GHz)                               38-50     40-60     50-75     75-110
Maximum power measured at port 2, nominal value     12 dBm    10 dBm    10 dBm     0 dBm
Reference power at port 1, nominal value             0 dBm     0 dBm     0 dBm    -3 dBm
Minimum power measured at port 2                   -87 dBm   -87 dBm   -75 dBm   -79 dBm
Receiver dynamic range (a)                           99 dB     97 dB     85 dB     79 dB
System dynamic range (b)                             87 dB     87 dB     75 dB     75 dB

(a) Receiver dynamic range is defined as the ratio of the maximum signal level at port 2 for 0.1 dB compression to the system noise floor.
(b) System dynamic range is defined as the ratio of the maximum signal at port 1 to the system noise floor.
defined by the coherent values declared for the short and the open. Through losses must be kept low, and the delay declaration must be coherent with the reference plane positioning. Any inconsistency will lead to poor measurement. For TRL (through, reflect, line standards) or LRM (line, reflect, match standards) calibration, the reference plane is always located in the center of the through, but it may be moved to some other convenient place after calibration. When using on-chip standards, some or all of the on-chip access parasitics may be included in the calibration, while the use of separate standards (on an alumina substrate) implies subsequent determination (deembedding) of the access parasitics. In the latter case, the reference plane is usually located under the probe tip. The main error sources are (1) bad calibration and (2) bad access parasitic determination. Advanced calibration techniques have been devised to improve the calibration, while the use of specific on-wafer elements may improve the deembedding of the access parasitics.
1.4.1. Advanced Calibration Technique. These techniques use more standards in order to (1) obtain a better standards definition (SOLT calibration) and (2) perform vector error correction (LRM calibration and TRL calibration). None of these techniques is implemented in the ANA hardware, so specific computer programs are needed.
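As a flavor of what such programs compute, the sketch below solves the standard one-port three-term error model from three known standards and then corrects a raw measurement. This is generic vector error-correction theory (not the specific SOLT/LRM/TRL algorithms described below), and all names and numerical values are illustrative assumptions.

```python
import numpy as np

def solve_one_port_error_terms(gamma_std, gamma_meas):
    """Solve the 3-term one-port error model (e00, e11, delta = e00*e11 - e10e01)
    from three standards with known reflection coefficients gamma_std and
    raw measurements gamma_meas (complex sequences of length 3)."""
    g = np.asarray(gamma_std, dtype=complex)
    m = np.asarray(gamma_meas, dtype=complex)
    # Measurement model rearranged to be linear: m = e00 + g*m*e11 - g*delta
    A = np.column_stack([np.ones(3), g * m, -g])
    e00, e11, delta = np.linalg.solve(A, m)
    return e00, e11, delta

def correct_one_port(gamma_meas, e00, e11, delta):
    """Apply the error correction to a raw one-port measurement."""
    return (gamma_meas - e00) / (gamma_meas * e11 - delta)

if __name__ == "__main__":
    # Synthetic example: made-up error terms and ideal SOL standards.
    e00, e11, e10e01 = 0.05 + 0.02j, 0.1 - 0.05j, 0.9 * np.exp(-0.3j)
    raw = lambda g: e00 + e10e01 * g / (1 - e11 * g)
    standards = np.array([-1.0, 1.0, 0.0])          # short, open, load (ideal)
    measured = np.array([raw(g) for g in standards])
    terms = solve_one_port_error_terms(standards, measured)
    dut_true = 0.3 * np.exp(1j * 1.2)
    print(correct_one_port(raw(dut_true), *terms))   # recovers dut_true
```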
1.4.1.1. SOLT Enhancement. The reference plane is solely determined by the short declaration (usually 0 pH). An open-ended long-line measurement is performed with an incorrect open declaration, and an error model allows finding, for each frequency, the open declaration error, thus leading to a better frequency-dependent open declaration [5]. This allows precise measurement with SOLT up to 110 GHz.
1.4.1.2. LRRM Calibration. A standard LRM calibration is performed, a new reflect is measured (a short if the calibration reflect was an open), and a new set of error vectors is calculated [6]. This allows us to correct for a small probe misplacement in addition to the true load deviation.
1.4.1.3. NIST Multiline Calibration. The TRL calibration
technique is based only on the accurate knowledge of the
characteristic impedance of transmission line standards.
One of the main drawbacks of TRL is its relatively narrow
operating frequency range. To perform a very broadband
(up to 110 GHz) TRL calibration, a multiline calibration
technique has been proposed by the National Institute of
Standards and Technology (NIST) [7].
1.5. Deembedding of the Access Parasitics

The use of specific on-chip designs may allow precise determination of all access parasitics between a reference plane and a DUT port. This is an alternative to techniques based on the frequency dependence of the Y and Z parameters, which allow parasitic determination for transistor measurement [8]. The deembedding uses either direct S-parameter correction or correction through precise parasitic modeling using specially designed on-chip test devices.
1.5.1. Direct S-Parameter Correction. The measurement of an open and a short placed at the DUT port position allows direct S-parameter correction, using the S-to-Y transformation:

Y_device = [ (Y_meas - Y_open)^-1 - (Y_short - Y_open)^-1 ]^-1

This technique is frequently used for microwave measurement on silicon devices but is also of interest in millimeter-wave measurement. However, extreme care must be taken to compensate for the open capacitance (fringing field) and the short inductance (ground access) when designing the specific open and short devices.
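The open-short correction written above translates directly into a few lines of numpy; the sketch below (our own helper, assuming 2×2 Y matrices already converted from the corrected S parameters, and a simple shunt-pad plus series-access parasitic topology) checks the formula on synthetic data.

```python
import numpy as np

def open_short_deembed(y_meas, y_open, y_short):
    """Open-short de-embedding of on-wafer access parasitics.

    y_meas, y_open, y_short: 2x2 complex Y matrices of the embedded DUT,
    the open dummy, and the short dummy at the same frequency.
    Implements Y_dut = [ (Y_meas - Y_open)^-1 - (Y_short - Y_open)^-1 ]^-1.
    """
    inv = np.linalg.inv
    return inv(inv(y_meas - y_open) - inv(y_short - y_open))

if __name__ == "__main__":
    # Synthetic check with made-up parasitics (shunt pad + series access).
    f = 60e9
    y_pad = 1j * 2 * np.pi * f * 15e-15 * np.eye(2)          # 15 fF per pad
    z_acc = (2.0 + 1j * 2 * np.pi * f * 20e-12) * np.eye(2)  # 2 ohm + 20 pH per port
    y_dut = np.array([[0.02 + 0.01j, -0.015j], [0.03 - 0.02j, 0.01 + 0.005j]])
    y_meas = np.linalg.inv(np.linalg.inv(y_dut) + z_acc) + y_pad
    y_open = y_pad
    y_short = np.linalg.inv(z_acc) + y_pad
    print(np.allclose(open_short_deembed(y_meas, y_open, y_short), y_dut))  # True
```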
1.5.2. Precise Parasitic Modeling. This approach usually uses several short, open, and through devices. A careful modeling of all these elements allows us to find out the true access parasitics and the intrinsic device parasitics. Once the access parasitic models are known, corrections of the DUT measurement are obtained through the use of a linear simulator.
1.6. Specific Characterizations of Transistors in Millimeter
Wave
In view of the increasing number of applications in the centimeter-wave range, the millimeter-wave range is now largely used. MVDS (40.5-42.5 GHz), wireless local-area networks (60-GHz WLAN), and automotive radar (77 GHz) are among the millimeter-wave applications receiving the most attention today. In addition, advanced technologies are now available for manufacturing the integrated circuits used in this range. The main challenge is to design these integrated circuits accurately. To this end, reliable broadband transistor models are needed for designing a millimeter-wave integrated circuit. Linear models (or equivalent circuits) including high-frequency noise sources are usually deduced from S-parameter and noise-parameter on-wafer measurements. The accuracy of each element of such models depends on the measurement accuracy. The determination of equivalent circuit elements may be difficult and inaccurate in the millimeter-wave range. The key considerations in designing a reliable equivalent circuit of transistors in the millimeter-wave range are as follows:

1. The choice of calibration technique as a function of the topology of the transistor and the nature of the substrate
2. The choice of the equivalent circuit topology, including parasitic elements

Figure 3. Mechanical structure of the waveguide-coaxial transition and the coaxial probe (WR-10 waveguide, coaxial line, and internal bias network with a 10-pF capacitor and a 50-Ω resistor). (From Ref. 3.)
Another solution consists of establishing an equivalent circuit of the transistor from S parameters and noise parameters measured in a lower frequency range (for instance, up to 50 GHz). The main advantage is that the measurement accuracy in this frequency range is better controlled than in the millimeter-wave range.
To validate the reliability of such an equivalent circuit, we
calculate the S parameters and noise parameters from the
elements of the equivalent circuit and we compare these
calculated data with measured ones in the millimeter-
wave range.
1.7. Millimeter-Wave Cryogenic On-Wafer Measurement
There are basically two different solutions depending on
the temperature range. For measurements down to 200 K,
the setup is similar to that of the system used for high
temperature measurement. The system works at ambient
pressure, only the chuck is cold, and a local overpressure
of drier air or nitrogen is used to prevent icing of wafer
or probe tips. In this case, the temperature gradient is
mainly located on the probe itself, so cable length at low
temperatures is kept minimal. The calibration substrate
may be kept at room temperature.
For measurements down to a few kelvin, the device and
probes are kept under vacuum in a nitrogen or helium flow
cryostat. Probe displacement under vacuum is obtained
through the use of a bellow, cable length is significant, and
calibration and measurement must be made at the same
temperature.
2. VOLTAGE AND POWER RATIO TECHNIQUES: SIX-PORT NETWORK ANALYZER

The voltage and power ratio techniques and the six-port network analyzer (SPNA) are based on direct detection of the millimeter wave. The hardware configuration of these measurement systems is simple because it is composed of diode or thermal detectors and of directional couplers or probes. In contrast, heterodyne detection systems involve multiple frequency conversions requiring local oscillators. The complexity of such a measurement system makes random and systematic errors more difficult to estimate. That is why direct detection techniques provide much of the basis for precision microwave metrology. This article deals with the measurement of the scattering parameters Sij of n-port millimeter devices using a slotted line, a (tuned) reflectometer, and an SPNA.
2.1. Slotted Line

This is the oldest method for measuring the reflection coefficient S11 of an impedance. In the millimeter frequency range, the slotted line is realized using a piece of metallic rectangular waveguide with a slot located at the center of the broad wall of the guide. The electric field inside the guide is sampled with a wire antenna connected to a Schottky diode detector. The magnitude of S11 is given by the voltage standing-wave ratio (VSWR). The phase of S11 is given by the position of the antenna for which the detected voltage is minimum. This technique has been largely replaced by automated methods.
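As a small worked example of the slotted-line relations (these are the standard transmission-line formulas rather than expressions given in the text), the sketch below converts a measured VSWR and the position of a voltage minimum into a complex reflection coefficient; the function and variable names are ours.

```python
import cmath
import math

def reflection_from_slotted_line(vswr, d_min_mm, guide_wavelength_mm):
    """Complex reflection coefficient from slotted-line readings.

    vswr: measured voltage standing-wave ratio.
    d_min_mm: distance of a voltage minimum from the reference (load) plane.
    guide_wavelength_mm: guide wavelength at the test frequency.
    Uses |S11| = (VSWR - 1)/(VSWR + 1) and arg(S11) = 2*beta*d_min - pi.
    """
    mag = (vswr - 1.0) / (vswr + 1.0)
    beta = 2.0 * math.pi / guide_wavelength_mm
    return cmath.rect(mag, 2.0 * beta * d_min_mm - math.pi)

if __name__ == "__main__":
    # Example: VSWR = 3, first minimum 1.2 mm from the load, lambda_g = 4.2 mm.
    print(reflection_from_slotted_line(3.0, 1.2, 4.2))
```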
2.2. The Tuned Reflectometer

A simple reflectometer requires one or two directional couplers and power detectors in order to measure the magnitude of S11. These techniques suffer from the low directivity of the couplers and from the mismatches of the source and of the measurement port, Γ0. A tuned reflectometer includes tuners in order to overcome these difficulties. The measurement system is composed of a millimeter-wave source, one coupler of directivity D, one power detector, and two tuners. The detected power P may be written as follows:

P = K |(S11 + D) / (1 - S11·Γ0)|²

where K is a constant characterizing the measurement system.

The measurement procedure consists of successively connecting a sliding load and a sliding short in order to null D and Γ0 using the tuners. Thereafter, the magnitude of S11 is given by the power ratio

|S11| = √(P / Pcc)

where Pcc is the power measured when a short circuit is placed at the DUT port. For a frequency of 110 GHz, the measurement uncertainty (defined at 2σ) of |S11| using the tuned reflectometer is in the range 0.005-0.06 when |S11| varies from 0.01 to 0.5. In metrological laboratories, transmission measurements (S21) are performed using an IF attenuator (IF substitution method).
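Assuming the tuned condition described above (D and Γ0 nulled), the magnitude measurement reduces to a simple power ratio; the tiny sketch below (our own helper) applies it.

```python
import math

def s11_magnitude(p_dut_watts, p_short_watts):
    """|S11| from the tuned-reflectometer power ratio, assuming the directivity D
    and port match Gamma_0 have been nulled with the tuners:
    |S11| = sqrt(P / Pcc), Pcc being the power read with a short at the DUT port."""
    return math.sqrt(p_dut_watts / p_short_watts)

if __name__ == "__main__":
    # Example: 2.5 uW detected with the DUT, 250 uW with the short circuit.
    print(s11_magnitude(2.5e-6, 250e-6))  # 0.1
```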
2.3. Six-Port Network Analyzer

The term "six-port" refers to the six-port millimeter-wave junction (Fig. 4). At its four output ports it provides power readings P3 to P6, which are weighted additions of the incident wave a2 and the reflected wave b2. The complex value of S11 (= b2/a2) derives from the six-port equations:

Pi / P3 = Ki |(ai·a2 + bi·b2) / (a3·a2)|²,   i = 4, 5, 6

where ai and bi are the weighting factors of the waves a2 and b2 at the ith port and Ki is a constant of the power detector.
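A common way to exploit these equations is to rewrite each calibrated power ratio in the centered form ρi = Ci·|Γ − qi|², so that each reading defines a circle in the Γ plane and S11 is their common intersection; the sketch below solves this by linear least squares. The centered form, the q and C values, and the function names are illustrative assumptions consistent with standard six-port theory, not the specific calibration of any instrument cited here.

```python
import numpy as np

def six_port_gamma(power_ratios, q_points, c_consts):
    """Estimate the complex reflection coefficient from three six-port power
    ratios rho_i = P_i/P_3 (i = 4, 5, 6), assuming the calibrated model
    rho_i = C_i * |Gamma - q_i|**2. The three circles of radius
    sqrt(rho_i/C_i) centered at q_i are intersected by least squares."""
    rho = np.asarray(power_ratios, dtype=float)
    q = np.asarray(q_points, dtype=complex)
    c = np.asarray(c_consts, dtype=float)
    r2 = rho / c                               # squared circle radii
    # Subtract the first circle equation from the others to linearize.
    A = np.column_stack([2 * (q[0].real - q[1:].real),
                         2 * (q[0].imag - q[1:].imag)])
    b = (r2[1:] - r2[0]) + (abs(q[0]) ** 2 - abs(q[1:]) ** 2)
    x, y = np.linalg.lstsq(A, b, rcond=None)[0]
    return complex(x, y)

if __name__ == "__main__":
    # Synthetic check with made-up q points roughly 120 degrees apart.
    q = np.array([1.5, 1.5 * np.exp(2j * np.pi / 3), 1.5 * np.exp(-2j * np.pi / 3)])
    c = np.array([1.0, 0.8, 1.2])
    gamma_true = 0.4 * np.exp(1j * 0.7)
    rho = c * abs(gamma_true - q) ** 2
    print(six_port_gamma(rho, q, c))  # ~ gamma_true
```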
The four scattering parameters may be obtained by connecting two SPNAs to the two ports of the DUT, or by using a single SPNA in reflection or transmission mode.
2.4. Practical SPNA Junctions

Six-port theory is, in principle, applicable to an arbitrary design. However, for better accuracy, the following design objectives should be met:

* At one output port, the sampled wave (a3) is proportional to the incident wave a2.
* At the three remaining ports, |qi| is approximately 1.5 and the arguments of the qi are separated by about 120°, where qi = ai/bi for i = 4, 5, 6.
A simple six-port junction consists of one directional coupler and three voltage probes (as used in the slotted line) separated by about λ/6. A similar junction replaces the probes by a waveguide coupling structure [9]. This structure contains two E-plane T junctions at the upper broad wall of the main R320 (26.5-40 GHz) waveguide and one E-plane T junction at the lower broad wall. The distances between the T junctions are about λ/6.

Figure 5 shows a six-port junction using quasioptical techniques [10] at submillimeter wavelengths (300 GHz). Similar quasioptical techniques have been applied in the optical domain for a wavelength of 0.633 µm [11]. The beamsplitter may be replaced by directional couplers using a metallic waveguide or dielectric waveguide structure (94 GHz) [12].
A more wideband system [13] (75-110 GHz) has been realized by connecting five 3-dB 90° hybrid couplers. It can be shown that the qi points are frequency-independent and are equal to (j, 1 + j, 1 - j), assuming identical and symmetrical couplers with a coupling factor of 3 dB. This feature is interesting in the millimeter-wave range because the phase properties of commercial couplers are usually unknown.
Another technique is the multistate reectometer. It
consists of two directional couplers. The internal matched
termination on the fourth arm of one coupler has been re-
placed by a phase shifter. Three states of the phase shifter
provide the three equivalent power ratios of the six-port
technique. Currently, this system permits on-wafer mea-
surement at a frequency of 140GHz [14].
2.5. Experimental Results

Table 2 shows S11 measurement results obtained with different systems. The measurement results labeled SPNA, HP8510, or AB millimeter can be compared with the values labeled LCIE, given by the calibration center of the LCIE (Laboratoire Central des Industries Electriques, in France), which are arbitrarily considered to be the references. In this case, the magnitude of S11 was determined with a tuned reflectometer while the phase was obtained with a slotted line. The mean standard deviation is equal to 0.01 for the magnitude and 4° for the phase. The small differences may be due to temperature effects or to the nonrepeatability of the connections.
Figure 4. Six-port measurement system. It provides the complex value of the reflection coefficient S11 of the load connected at the measurement port. The power detector connected at each output port measures the power of the wave bi, where i = 3-6. (The six-port junction, fed by the source, delivers the readings P3-P6 at its output ports.)
Figure 5. Six-port millimeter-wave junction using quasioptical techniques. It comprises five horns and four dielectric sheets. Each of the dielectric sheets is a beamsplitter. A metallic mirror is placed on the fourth branch of each beamsplitter except the one involved in the measurement of the source signal. The distance between the mirror and the dielectric sheet gives the weight of the added signals.
2.6. Future Trends
The six-port junction may be realized using a microwave
monolithic integrated circuit (MMIC). The MMIC chips
can be used as a sensor in an antenna array or integrated
inside the tips of a probe station. In the latter case, the series losses of the probe tips and of the line connection do not decrease the measurement accuracy of the wafer probe station.
3. SOURCE-PULL AND LOAD-PULL TECHNIQUES
Large-signal millimeter-wave measurements of represen-
tative samples of semiconductor devices are of prime im-
portance for two main reasons: (1) accuracy and
consistency check of nonlinear transistor models for
CAD and (2) experimental optimization of transistor opti-
mum operating conditions without the use of any model.
Nonlinear devices demonstrate different aspects of
their behavior depending on the source and load match.
Therefore, large-signal measurement systems use either
computer-controlled tuners or active loads to change
source and load impedances of the DUT to reach the op-
timum matching conditions under large-signal operation
(load-pull system).
Tuner systems operating up to the W band are com-
mercially available. They are widely used for the design of
low-noise ampliers [15], power ampliers and oscillators
[16], and mixers [17]. However, such systems do not allow
synthesis of impedances close to the edge of the Smith
chart. This main drawback becomes more and more cru-
cial if the operating frequency increases (millimeter wave)
or if on-wafer measurements are performed. For these
reasons the active source and load-pull technique has
emerged. Going further in the large-signal characteriza-
tion, novel measurement systems allowing the extraction
of voltage/current waveforms at the DUTs ports have
been developed.
3.1. Basic Considerations on the Source and Load-Pull
Techniques
The principle of the large-signal characterization of any nonlinear two-port is sketched in Fig. 6. If a single-tone power source is used, the four power waves are expressed as follows:

a1(t) = Σ_n A1n cos(nωt + φ1n),   b1(t) = Σ_n B1n cos(nωt + θ1n)
a2(t) = Σ_n A2n cos(nωt + φ2n),   b2(t) = Σ_n B2n cos(nωt + θ2n)

A vector network analyzer (VNA) or a six-port reflectometer provides the measurements of the magnitudes |Ain| and |Bjn| (i, j = 1, 2) and of the power-wave ratios at the same frequency. From this information, powers, impedances, and gains can be calculated. Unfortunately, classical VNAs do not allow the measurement of the absolute phases φin and θin. As a consequence, time-domain waveforms cannot be extracted. A novel system allowing the measurement of time-domain waveforms will be described later.
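Given the measured wave magnitudes and ratios, the powers and impedances mentioned above follow from standard power-wave relations (assuming waves normalized so that |a|²/2 is the power carried into a 50-Ω reference); the helper below is an illustrative sketch with our own names, not part of any instrument software.

```python
import numpy as np

def load_reflection(a2n, b2n):
    """Load reflection coefficient seen at the DUT output at harmonic n:
    Gamma_L,n = a2n / b2n (b2 leaves the DUT, a2 is returned by the load)."""
    return a2n / b2n

def delivered_power(a_n, b_n):
    """Average power (W) delivered through the output port at harmonic n,
    for waves normalized so that P = (|b|^2 - |a|^2)/2."""
    return 0.5 * (abs(b_n) ** 2 - abs(a_n) ** 2)

if __name__ == "__main__":
    # Made-up fundamental-frequency waves at the output port (units of sqrt(W)).
    b21 = 1.2 * np.exp(1j * 0.3)
    a21 = 0.4 * np.exp(1j * 1.1)
    print("Gamma_L =", load_reflection(a21, b21))
    print("P_out   =", delivered_power(a21, b21), "W")
```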
First a conventional source and load-pull system (mea-
surements of impedances and powers) is considered. Re-
ferring to Fig. 6, a systematic approach for performing
large-signal characterization of a DUT is as follows:
1. Impose desired DC voltages or currents.
2. Tune the source and load networks.
3. Sweep the power level of the input source and mea-
sure powers, efciency, and gain.
Then the same procedure can be repeated for different op-
erating conditions. This implies the use of a fully auto-
mated measurement system.
3.2. Multiharmonic Active Source and Load-Pull System
Multiharmonic source and load-pull systems are very use-
ful in designing optimized nonlinear microwave circuits.
Both source and load impedances have a great influence
on the performance of DUTs in terms of efficiency and
linearity. The load-pull characterization has also become a
key step in the whole modeling process of semiconductor
devices.
3.2.1. Load Pull [18]. The block diagram of a multihar-
monic load-pull system is shown in Fig. 7. The measure-
ments of absolute powers and power wave ratios are
performed by using a VNA (receiver operation mode)
calibrated with a TRL procedure. The synthesis of the load impedances at the first three harmonics coming out of the DUT is performed by using active loops and by monitoring the complex gain of each loop with attenuators and phase shifters. Once the gains are fixed, a power sweep at the input of the DUT is performed, and the input/output power characteristics of the DUT are measured.

Figure 6. Source- and load-pull techniques: principle. (A single-tone power source and a dc supply drive the nonlinear two-port; a1(t), b1(t) and a2(t), b2(t) are the power waves at its input and output, the output being terminated by the load network.)

Table 2. Measurement Comparison Among Network Analyzers

Network Analyzer            93 GHz     94 GHz     95 GHz     96 GHz
LCIE, |S11|                  0.415      0.447      0.479      0.515
LCIE, phase                 100.8°     174.8°      91.4°      13°
SPNA, |S11|                  0.01       0.013      0.012      0.013
SPNA, phase                  0.1°       0.2°       0.8°       7°
HP8510, |S11|                0.005      0.002      0.01       0.002
HP8510, phase                5°         3°         5°         3°
AB millimeter, |S11|         0.006      0.007      0.02       0.01
AB millimeter, phase         0.1°       4°         3°         1°

(The SPNA, HP8510, and AB millimeter rows give the deviations of |S11| and of its phase from the LCIE reference values.)
3.2.2. Source Pull [19]. Figure 8 shows a measurement
system based on the use of six-port reflectometers. This system integrates both input and output active loops to perform source- and load-pull measurements. Depending on the position of switch 1, the input six-port measures either the input reflection coefficient of the DUT or the reflection coefficient of the source presented to the DUT. In both cases the error terms found by a classical calibration procedure are valid.
3.3. Time-Domain Waveform Measurement System [20]
As mentioned previously, conventional VNAs do not allow
the measurements of absolute phases of harmonically re-
lated signals. As a consequence, time-domain waveforms
cannot be extracted. Therefore, different institutes have
developed measurement systems to extract time-domain waveforms in one way or another, usually based on the HP microwave transition analyzer. The potential of the combination of the nonlinear network measurement system (NNMS) with active source- and load-pull techniques is under study.

This NNMS is mainly composed of a four-channel broadband downconverter followed by digitizers. It uses the harmonic mixing principle to convert the RF fundamental and harmonics into an IF fundamental and harmonics. This instrument takes the place of the VNA in the system previously presented.

Figure 7. Multiharmonic load-pull system block diagram. (Active loops at f0, 2f0, and 3f0, each comprising a directional coupler, a variable attenuator, a phase shifter, and a loop amplifier, synthesize the load impedances at the first three harmonics; a vector network analyzer measures the sampled waves at the DUT ports, and a microwave source at f0 drives the input.)

Figure 8. Source-pull implementation with a millimeter-wave six-port junction. (Two six-port reflectometers, connected through couplers 1 and 2, observe the source plane and the input plane of the DUT; an active loop with isolators, an attenuator, and an amplifier closes the input, and switch 1 selects whether the input six-port measures the reflection coefficient Γin of the DUT or the source reflection coefficient Γg presented to the DUT.)
The calibration of the system is performed in three
main steps:
1. Classical TRL calibration
2. Power calibration
3. Phase calibration
During the last step, a reference generator (step recovery
diode) is connected instead of the DUT [21]. The reference
generator is calibrated using the nose-to-nose calibration
procedure [22].
4. DIELECTRIC WAVEGUIDE CAVITY RESONATOR
At millimeter wavelengths, resonators are useful for a large number of applications in communication systems and in measurements of dielectric properties. In the millimeter-wave and submillimeter-wave ranges, difficulties arise from the very short wavelengths, and devices are difficult to machine with a high degree of accuracy. The problem is therefore to achieve a high circuit Q for volumetric or hybrid millimeter-wave integrated circuits.

Different resonator structures are used. Some of them are derived from low-frequency applications, like the cylindrical metallic cavity, but other devices have been developed specially for millimeter-wave measurement. In the following subsections we present mainly the devices shown in Fig. 9, which are often used.
4.1. Cylindrical Metallic Cavity

This structure, presented in Fig. 9a, is composed of a cylindrical metallic waveguide closed at the top and the bottom by a metallic plane. The resonant frequency depends on the dimensions of the cavity (diameter and height) and on the mode that is excited in the structure. These modes are chosen to be TE01n or TM01n modes and depend on the excitation line position. The unloaded Q factor of these resonators increases with the axial mode number n. But it is difficult to use axial numbers greater than five, because many modes are excited in the frequency band and it is difficult to obtain good frequency isolation. Typically, at room temperature and with a copper cavity, values of the unloaded Q factor are equal to 12,000 at 30 GHz and 7000 at 100 GHz on the TE013 mode.
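As an illustration of how the resonant frequency follows from the cavity dimensions and the mode indices, the sketch below evaluates the textbook TE01p resonance formula for an air-filled cylindrical cavity (this formula is standard waveguide theory, not quoted in the article, and the dimensions chosen are arbitrary).

```python
import math

C0 = 299_792_458.0       # speed of light in vacuum (m/s)
P01_PRIME = 3.8317       # first zero of J0' (Bessel), fixing the TE01 cutoff

def te01p_resonance(radius_m, height_m, p):
    """Resonant frequency (Hz) of the TE01p mode of an air-filled
    cylindrical cavity of given radius and height (standard formula)."""
    return (C0 / (2.0 * math.pi)) * math.sqrt(
        (P01_PRIME / radius_m) ** 2 + (p * math.pi / height_m) ** 2)

if __name__ == "__main__":
    # Arbitrary example: radius 3.2 mm, height 5.5 mm, TE013 mode (~100 GHz).
    f = te01p_resonance(3.2e-3, 5.5e-3, 3)
    print(f"TE013 resonance ~ {f / 1e9:.1f} GHz")
```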
4.2. Open Resonators

The most popular resonator of this type is the Fabry-Perot resonator, presented in Fig. 9b [23,24]. These resonators are used from the short microwave range to the optical domain [25]. The basic device is composed of two reflectors of arbitrary radius of curvature separated by a length d. At low frequencies the dimensions of the mirrors would be very large, so these devices are used essentially at very high frequencies. TEMplq modes are excited in these structures, where p, l, and q are, respectively, the radial, azimuthal, and axial variations of the energy, which is localized in the center between the two mirrors. In a great number of applications the TEM00q modes are used, and the resonant frequencies of these modes are periodic in the axial index q. As in metallic cavities, the unloaded Q factor increases with the number of axial variations, and Q factors greater than 10^6 are possible in millimeter-wave measurement.
4.3. Dielectric Resonators

For high frequencies the dimensions of resonators excited on conventional modes become impractically small. A solution consists of using dielectric resonators excited on whispering-gallery modes (WGMs), which are higher-order modes. The first advantage of this solution is the dimension of the resonator, which is approximately 10 times larger than that of a resonator excited on conventional modes. The geometry of the resonator is a disk with a diameter much greater than its thickness, as shown in Fig. 9c [26,27]. These resonators are therefore easy to integrate in planar circuits.

Moreover, with these modes the energy is confined at the periphery of the dielectric resonator, and radiation losses are negligible. Thus, unloaded Q factors are very large and limited only by the dielectric losses of the material used to realize the resonators.
Figure 9. Examples of millimeter-wave resonators: (a) cylindrical metallic cavity (fed by waveguide or coaxial line); (b) open resonator (two mirrors fed by waveguide); (c) whispering-gallery dielectric resonator (disk of diameter 2a much greater than its thickness h, with the energy localized at the periphery).
At room temperature and using quartz material, a
measured Q factor of 30,000 has been obtained at
100GHz. Placed in a metallic cavity and at 77 K, a Q
factor of 30,000,000 has been measured at 7 GHz with
sapphire.
4.4. Applications to Millimeter Devices
In millimeter-wave devices, a large number of applications
use resonator circuits. These elements are used in devices
such as filters or oscillators, or for material measurements
to determine complex permittivity and permeability. In
both cases, it is very interesting to have a high Q factor of
the resonance modes.
4.4.1. Filtering. Insertion losses and rejection depend on the Q factor of the resonators. To realize these circuits, cylindrical metallic cavities or WGM dielectric resonators are suitable because several resonators can be associated. At high frequencies, the topologies of these structures are the same as for low-frequency devices.
4.4.2. Oscillator. Frequency stabilization and low phase noise require a high Q value of the resonant device. For millimeter waves, dielectric resonators excited on WGMs give good results and are easy to integrate in the devices. With these modes, original oscillator topologies can be realized by using the wave propagation effect at the periphery of the resonator, which is another property of these modes.
4.4.3. Dielectric Material Measurement. These resonator devices are commonly used because they permit the complex permittivity of the material to be determined with good accuracy. For metallic cavities or open resonators, the method consists of comparing the resonant frequency and the unloaded Q factor of the empty and loaded resonators. This method is convenient if the thickness of the material under test is smaller than the wavelength. For material with a large thickness, methods using WGMs are suitable. In this case, measurements of the resonance frequency and Q factor are compared with results obtained with an electromagnetic simulator. These methods can be used for anisotropic dielectric or magnetic materials [28].
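For the empty-versus-loaded comparison described above, one common small-perturbation approximation (in the spirit of the cavity-perturbation technique of Ref. 28) is sketched below; the numerical factors depend on the cavity mode and sample shape, so this is only an illustrative form, and the function name and example values are ours.

```python
def permittivity_cavity_perturbation(f_empty, f_loaded, q_empty, q_loaded,
                                     v_cavity, v_sample):
    """Complex relative permittivity from the resonant-frequency and
    unloaded-Q shift between the empty and loaded resonator, using one
    common small-perturbation approximation (small sample at an
    electric-field maximum; constants depend on geometry)."""
    eps_real = 1.0 + (f_empty - f_loaded) / (2.0 * f_loaded) * (v_cavity / v_sample)
    eps_imag = (v_cavity / (4.0 * v_sample)) * (1.0 / q_loaded - 1.0 / q_empty)
    return complex(eps_real, eps_imag)

if __name__ == "__main__":
    # Arbitrary illustrative numbers (not taken from the article).
    print(permittivity_cavity_perturbation(
        f_empty=30.000e9, f_loaded=29.985e9,
        q_empty=12_000, q_loaded=9_500,
        v_cavity=2.0e-6, v_sample=1.0e-9))
```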
4.5. Future Trends
The performance of millimeter-wave resonator devices is limited by the difficulty of integrating resonators in devices (in particular, for cavity or open resonators) or by the losses of the metallic or dielectric materials. Since the late 1980s, with the development of new dielectric materials like sapphire in the microwave domain, performance has been improved with regard to the unloaded Q factor. Unfortunately, the characteristics of these materials change with temperature, and frequency stabilization is difficult to obtain without using temperature-regulating devices. In the future, with further technology development, we can hope to obtain materials with optimum characteristics.
5. FREE-SPACE METHODS: INTERFEROMETRY
Waveguide loss becomes important for millimeter waves;
free-space transmission has lower loss and is good for low-
noise applications as well as for high-power applications
(in addition, the larger area of the spread beam produces a lower power density). Free-space measurement is required
when contact is not possible. Such is the case in radiome-
try for measurement of temperature and chemical compo-
sition, as well as in interferometry and in radar detection
for measurement of distance, velocity, and position.
Very often for millimeter waves, the beam diameter is a
relatively small number of wavelengths; thus, diffraction
must be considered. A wide variety of components and
systems have been developed using quasioptical tech-
niques, either similar to waveguide devices or derived
from infrared and optical techniques [29,30].
5.1. Quasioptical Techniques
5.1.1. Gaussian Beams. Paraxial propagation of a beam
in free space is relatively simple to analyze if the transverse
electric field amplitude variation has a Gaussian form

E(r)/E(0) = exp[-(r/w)²]

where r is the distance from the axis of propagation and w is called the beam radius. A Gaussian beam is produced with, or focused to, a minimum size; this minimum beam radius w0 is called the beam waist.

The feedhorn is the best coupling device between the Gaussian beam and the guided wave (Fig. 10). The best coupling (98%) is obtained with a scalar feedhorn pattern. Several types of planar antennas (patch, bowtie, traveling-wave slot) can also be used; an associated lens reduces the beam size and increases the coupling efficiency.
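A small numerical companion to the Gaussian profile above: the sketch evaluates the relative field amplitude and the fraction of the beam power enclosed within a radius r (the enclosed-power expression follows directly from integrating the squared profile over the cross section and is not quoted in the article).

```python
import math

def relative_amplitude(r, w):
    """E(r)/E(0) for a Gaussian beam of radius w."""
    return math.exp(-(r / w) ** 2)

def enclosed_power_fraction(r, w):
    """Fraction of the total beam power inside radius r,
    from integrating |E|^2 ~ exp(-2 r^2 / w^2)."""
    return 1.0 - math.exp(-2.0 * (r / w) ** 2)

if __name__ == "__main__":
    w = 4.0  # mm, arbitrary beam radius
    for r in (0.0, w, 2 * w):
        print(r, relative_amplitude(r, w), enclosed_power_fraction(r, w))
```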
5.1.2. Quasioptical Components Used in Millimeter-Wave Measurement. Quasioptical components provide a wide variety of functions used for millimeter measurements:

* Beam transformations require focusing elements such as parabolic or ellipsoidal mirrors and lenses. To minimize the absorptive loss of lenses, low-loss dielectrics must be selected (PTFE, alumina, fused silica, etc.); and to obtain low reflection loss, a matching layer or grooves are essential, except for low-index materials.
* High-Q-factor (>10,000) resonant cavities can be formed with two spherical mirrors or with one spherical and one plane mirror.
* Signal filtering can be achieved by interferometers (see below) and by plate filters: perforated conductive plates or arrays of resonant patterns printed on a dielectric substrate.
* Polarizing grids are usual in quasioptical systems, often used as beamsplitters for a polarized signal. These grids can be formed with freestanding wires or with dielectric-supported conducting strips. Dielectric plates are also used as beamsplitters and can function as hybrids (90° phase shift between the reflected and transmitted beams).
* Different types of interferometers are built from beamsplitters and reflective devices: dual-beam interferometers or Fabry-Perot interferometers.
5.1.3. Quasioptical Bench. The purpose of the quasiop-
tical bench is to create a beam waveguide including a
sufficient measurement area. Figure 11 shows a basic bench, in which the measurement area is located between a signal generator and a detector equipped with free-space coupling devices (horns and lenses). The relative positions must be finely adjustable (in three orthogonal directions and two or three rotation angles) while staying extremely
stable. As in coaxial or waveguide measurements, generators and detectors can use frequency multipliers and heterodyne and phase-locked systems to increase the sensitivity and stability.

Figure 10. Devices for millimeter-wave and submillimeter-wave beam production (horn and open-structure examples): bimodal and corrugated horns (TE11/TM11 modes), a planar antenna on a substrate lens or extended hemispherical lens, and a corner-cube reflector.

Figure 11. Quasioptical bench. (The DUT sits in the measurement area between a generator and a detector, each equipped with a horn and lens; optional detectors monitor reflection as well as transmission, and a computer acquires the detected levels and optionally controls the generator's amplitude modulation, frequency, and phase.)
Free-space measurements may use calibration devices analogous to coaxial calibration set parts. The methods are identical, but special care is required to (1) decrease the multiple reflections by using absorbing shields and anechoic rooms, (2) take the VSWR into account, and (3) manage external (or internal) electromagnetic interference. Another source of error and instability is atmospheric absorption, when the measurement frequency band overlaps the absorption bands of an atmospheric molecule.
5.2. Free-Space Antenna Measurement

The antenna characteristics that have to be measured in the millimetric range are mainly the radiation patterns in co- and cross-polarization. Phase-center measurements of primary feeds, as well as the beam efficiency, are also of great importance for reflector antenna design. Measurement techniques are much the same as at lower frequencies, but with specific difficulties and requirements [31]. (See also RADIOMETRY; ELECTROMAGNETIC FIELD MEASUREMENT.)

5.2.1. Radiation Pattern Measurements. Far-field measurements must be performed outdoors if the antenna dimensions are large compared with the wavelength, which generally is the case for reflector antennas at millimetric frequencies. However, atmospheric attenuation and geographic implementation become prohibitive when the far-field distance exceeds 1 km. Compact antenna test ranges (CATRs) remedy this problem for medium-to-large reflector antennas in the millimetric range.
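For orientation, the far-field distance usually quoted is the Fraunhofer criterion 2D²/λ; this is standard antenna practice rather than a figure stated in the article, and the sketch below simply evaluates it for a few arbitrary aperture sizes at 94 GHz.

```python
def far_field_distance(diameter_m, freq_hz):
    """Conventional Fraunhofer far-field distance 2*D^2/lambda (meters)."""
    wavelength = 299_792_458.0 / freq_hz
    return 2.0 * diameter_m ** 2 / wavelength

if __name__ == "__main__":
    f = 94e9  # Hz
    for d in (0.05, 0.3, 1.0):  # antenna diameters in meters
        print(f"D = {d:4.2f} m -> far field starts at ~{far_field_distance(d, f):8.1f} m")
```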
For antennas of moderate (centimeter) dimensions, far-field measurements can be performed indoors. Horns and printed antennas are tested in anechoic chambers. In CATRs, a local plane wave is created in a zone called the quiet zone, by way of one or several reflectors used to collimate the beam of a smaller source. Various designs exist, ranging from the basic one with a single offset reflector to triple-reflector systems, according to the required cross-polarization and spillover levels and the size of the antennas under test. Diffraction at the edges of the reflectors is less critical than in the microwave range, but the reflector surface requirements are more stringent, because the root-mean-square (rms) surface error should be better than 1/100 of a wavelength to obtain good precision on the plane-wave phase. Corrugated or special multimode horns are used as sources.
Hologram CATRs are being developed. Reflectors are replaced by a hologram, with a surface accuracy requirement divided by 10. This technique is thus less expensive, but it is still very new and faces problems concerning the size of the required holograms as well as frequency bandwidth limitations (20-30%) and polarization difficulties.
In the near-field scanning technique, fields are measured close to the antenna under test, on either a planar, cylindrical, or spherical surface. This technique requires both amplitude and phase measurements, because the sampled fields are used to calculate the radiated far field through a near-field to far-field transformation. In the millimeter-wave domain, this technique encounters problems of measurement time and of precise phase measurement.
5.2.2. Other Antenna Performance Measurement. The beam efficiency measurement is performed by measuring the power radiated within the main beam of the antenna. It is especially important for radiometer antennas, which must have very low sidelobes. It requires both radiation pattern measurements, although not with wide-angle scanning, and absolute power measurements. The phase-center position measurement is useful only for horns used as primary sources in reflector antennas. It is performed by locating the center of the phase patterns measured in different planes along the axis of the horn. It requires precise phase measurements and mechanical positioners.
5.3. Quasioptical Measurement
5.3.1. Power Measurement. Most of the power detectors used in microwave measurement (Schottky diodes, for instance) still work in the millimeter-wave frequency range. Moreover, bolometers and calorimeters also operate in this range. These devices, mounted in a waveguide structure, can be associated with a horn to make up a beam detector. To increase the sensitivity, synchronous detection and heterodyne conversion may be used. Absolute calibration must be performed with photoacoustic detectors (used at the Brewster angle and with amplitude modulation).
5.3.2. Quasioptical Device Characteristics Measurement. A basic quasioptical bench allows us to measure the main millimeter-wave characteristics of a DUT inserted in the optical path: transmittance, loss and scattering by insertion, and reflection by comparison with a good reflector. Much attention must be paid to (1) the radial size of the DUT compared with the usable beam size, (2) the compensation for phase differences, and (3) the VSWR (which can be reduced by choosing an incidence angle other than zero).
For low losses, noise measurements provide better accuracy. The equivalent noise temperature T1 of a low-noise receiver is increased to T2 by the insertion of a DUT at its front end. Given the DUT physical temperature Td, the loss factor L is obtained from

T2 = Td(L - 1) + L·T1(1 - Γ²)

with Γ² = [(VSWR - 1)/(VSWR + 1)]².
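Solving the relation above for the loss factor gives the sketch below; it assumes the form of the equation reconstructed here (the exact bracketing of the mismatch term in the printed original is ambiguous), and the function name and example numbers are ours.

```python
import math

def loss_factor(t1_k, t2_k, t_dut_k, vswr):
    """Loss factor L of a DUT from noise-temperature measurements.

    t1_k: receiver equivalent noise temperature without the DUT (K).
    t2_k: equivalent noise temperature with the DUT inserted (K).
    t_dut_k: physical temperature of the DUT (K).
    vswr: measured VSWR, giving Gamma^2 = ((VSWR-1)/(VSWR+1))^2.
    Solves T2 = Td*(L - 1) + L*T1*(1 - Gamma^2) for L.
    """
    gamma2 = ((vswr - 1.0) / (vswr + 1.0)) ** 2
    return (t2_k + t_dut_k) / (t_dut_k + t1_k * (1.0 - gamma2))

if __name__ == "__main__":
    # Arbitrary example: 295 K DUT, 500 K receiver, VSWR 1.3.
    L = loss_factor(t1_k=500.0, t2_k=650.0, t_dut_k=295.0, vswr=1.3)
    print(f"L = {L:.3f} ({10 * math.log10(L):.2f} dB)")
```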
5.3.3. Noise Measurement. In addition to classic noise measurement using a noise source (diode, gas tube) associated in this case with a horn, the common noise measurement uses two absorbing targets with two different radiant temperatures. The target with the higher temperature Th, the hot load, takes the place of the target with the lower temperature Tc, the cold load, in front of the DUT. The respective output powers are Ph and Pc. With ideal targets the equivalent noise temperature of the DUT is

Tx = (Th·Pc - Tc·Ph) / (Ph - Pc)

Radiometer systems must be extremely stable over long periods and linear over the whole level range. The target size must be large enough to cover the whole beam. The temperature and absorption coefficients must be homogeneous over the target surface. At millimeter wavelengths, hν ≪ kT, so the brightness temperature is very near the physical temperature. Measurement uncertainties come from variations of the effective emissivity of the target and from the mismatch with the receiver. In addition, as a result of the standing-wave effect, the total noise entering the receiver becomes frequency-dependent.

The standard ferrite-loaded foam absorbers may be used as calibration targets in the lower frequency range, but at higher frequencies the reflected power unfortunately reaches -20 dB, depending on the polarization angle. For a single polarization and when the configuration is fixed, a specially developed ridged absorber or a dielectric surface at the Brewster angle acts as a quasiperfect absorber.

To achieve good precision (<1%), a number of specific targets have been developed based on the principle of a conical hole with an angle of less than 10° to increase the number of reflections.
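The hot/cold relation given above translates directly into code; the sketch below applies it with arbitrary example numbers (the function name and values are ours).

```python
def dut_noise_temperature(t_hot_k, t_cold_k, p_hot, p_cold):
    """Equivalent noise temperature from hot/cold absorbing-target readings:
    Tx = (Th*Pc - Tc*Ph) / (Ph - Pc).
    p_hot and p_cold may be in any consistent linear power unit."""
    return (t_hot_k * p_cold - t_cold_k * p_hot) / (p_hot - p_cold)

if __name__ == "__main__":
    # Example: 295 K and 77 K targets, detected powers in arbitrary linear units.
    print(dut_noise_temperature(295.0, 77.0, p_hot=1.80, p_cold=1.25))
```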
5.3.4. Other Quasioptical Measurements. The frequency
measurement may use a downconversion by means of a
millimeter mixer coupled with a local oscillator by a qua-
sioptical coupler (an interferometer or a simple dielectric
plate). The local oscillator and the low frequency counter
are phase/frequency locked on a reference ultrastable
oscillator. On the other hand, wavemeters may use very-
high-Q-factor cavities in quasioptical techniques.
The measurement of the polarization of a signal usually takes advantage of the sensitivity of the detector to one electromagnetic field direction (detection diodes and rectangular waveguide mounts). To increase the accuracy, or to use a nonpolarized detector, a polarizing plate (grid) may be inserted (at an incidence angle chosen to decrease a possible VSWR effect). This method requires us to rotate the whole receiver or to insert a waveguide twist behind the horn (causing a calibration problem). The other solution is to use a quasioptical polarization rotator in the optical path. This device uses three grids in transmission [32] or one grid and one reflector [33]. In this way the mechanical rotation is limited to one lightweight device, and the transition time may be very short (<10 ms). Circular polarization may be measured with a particular arrangement of the Martin-Puplett interferometer.
The knowledge of the insertion effects (loss and phase
variations) allows us to compute the complex dielectric
constant of a material. Other methods for measuring material characteristics use modifications of a cavity resonator's Q factor.
5.4. Interferometry
The resolution of an antenna has a diffraction limit of about λ/D, and an interferometer increases the resolution according to the area covered by two or several connected antennas. More generally, an interferometer can be used to measure the Fourier components of a brightness distribution. Since Ryle and Hewish [34] formulated the principle of aperture synthesis, many interferometers have been built or are under construction (mainly for radio astronomy). The aperture may be synthesized by multiplication, by physically moving the elements, or by using the rotation of the Earth. As baselines are increased, the major problem in millimeter-wave measurement is the maintenance of the phase stability of the local oscillators; for very-long-baseline interferometry (VLBI), the local oscillators are independent and need very accurate frequency standards.
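A tiny numerical illustration of the resolution argument above (λ/D for a single dish versus λ/B for an interferometer baseline B; the formula is the standard diffraction estimate and the numbers are arbitrary):

```python
import math

def angular_resolution_arcsec(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution ~ lambda/D, in arcseconds."""
    return math.degrees(wavelength_m / aperture_m) * 3600.0

if __name__ == "__main__":
    lam = 0.003  # 3 mm wavelength (100 GHz)
    print("30 m single dish:", angular_resolution_arcsec(lam, 30.0), "arcsec")
    print("1 km baseline   :", angular_resolution_arcsec(lam, 1000.0), "arcsec")
```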
BIBLIOGRAPHY
1. Technical information available from Hewlett-Packard (online) at www.hp.com and from Anritsu.
2. Technical information available from Cascade Microtech Inc. (online) at www.cmicro.com and from Picoprobe, GGB Industries Inc. (online) at www.picoprobe.com.
3. S. M. J. Liu and G. G. Boll, A new probe for W-band on-wafer measurements, IEEE MTT-S Digest, 1993.
4. See application notes from Cascade Microtech Inc., HP, and Picoprobe.
5. P. Crozat, J. C. Henaux, and G. Vernet, Precise determination of the open circuit capacitance of coplanar probes for on-wafer automatic network analyzer measurements, Electron. Lett. 27:1476-1478 (1991).
6. A. Davidson, K. Jones, and E. Strid, LRM and LRRM calibrations with automatic determination of load inductance, 36th ARFTG Conf. Digest, 1990, pp. 57-63.
7. R. B. Marks, A multiline method of network analyzer calibration, IEEE Trans. Microwave Theory Tech. 39:1205-1215 (1991).
8. G. Dambrine et al., A new method for determining the F.E.T. small signal equivalent circuit, IEEE Trans. Microwave Theory Tech. 36:1151-1159 (1988).
9. U. Stumper, Experimental investigation of mm-wave six-port incorporating simple waveguide structure, IEEE Trans. Instrum. Meas. 469-472 (1991).
10. U. Stumper, A six-port reflectometer operating at submillimeter wavelengths, Proc. 15th European Conf., 1985.
11. N. C. Wolker and J. E. Carrol, Simultaneous phase and amplitude measurements on optical signals using a multiport junction, Electron. Lett. 20:981-983 (1984).
12. G. Hjipieris, R. J. Collier, and J. Griffin, A mm-wave six-port using dielectric waveguide, IEEE Trans. Microwave Theory Tech. 38:54-61 (1990).
13. S. A. Chahine et al., A six-port reflectometer calibration using Schottky diodes operating in AC detection mode, IEEE Trans. Instrum. Meas. 42:505-510 (1993).
14. R. J. Collier and I. M. Boese, Impedance measurements using a multistate reflectometer from 110 to 170 GHz, BEMC, pp. 3-1 to 3-4, 1996.
15. Focus Microwave Inc., An ultra wideband tuner system for load pull and noise characterization, Microwave J. 38(6):90-94 (1995).
16. F. M. Ghannouchi and R. G. Bosisio, Source-pull/load-pull oscillator measurements at microwave/mm-wave frequencies, IEEE Trans. Instrum. Meas. 41:32-35 (1992).
17. D. L. Le and F. M. Ghannouchi, Source-pull measurements using reverse six-port reflectometers with application to MESFET mixer design, IEEE Trans. Microwave Theory Tech. 42:1589-1595 (1994).
18. F. Blache et al., A novel computerized multiharmonic active load-pull system for the optimization of high-efficiency operating classes in power transistors, IEEE MTT Symp., Orlando, FL, 1995, pp. 1037-1040.
19. G. Berghoff et al., Automated characterization of HF power transistors by source pull and multiharmonic load pull, IEEE Trans. Microwave Theory Tech. (in press).
20. D. Barataud et al., A novel time-domain characterization technique of intermodulation in microwave transistors. Application to the visualization of the distortion of high-efficiency power amplifiers, IEEE MTT Symp., Denver, 1997, pp. 1687-1690.
21. J. Verspecht et al., Accurate on-wafer measurement of phase and amplitude of the spectral components on incident and scattered voltage waves at the signal ports of a nonlinear microwave device, IEEE MTT Symp., Orlando, 1995, pp. 1029-1032.
22. J. Verspecht and K. Rush, Individual characterization of broadband sampling scopes with a nose-to-nose calibration procedure, IEEE Trans. Instrum. Meas. 43:347-354 (1994).
23. J. C. McCleavy and K. Chang, Low-loss quasioptical open resonator filters, IEEE MTT Symp. Digest, 1991, pp. 313-316.
24. D. Steup, Quasioptical SMMW resonator with extremely high Q factor, Microwave Opt. Technol. Lett. 8(6):275-279 (1995).
25. D. Cros and P. Guillon, Whispering gallery dielectric resonator modes for W-band devices, IEEE Trans. Microwave Theory Tech. 38:1667-1674 (1990).
26. O. Di Monaco et al., Mode selection for a whispering gallery mode resonator, Electron. Lett. 32(7):669-670 (1996).
27. J. Krupka et al., Study of whispering gallery modes in anisotropic single-crystal dielectric resonators, IEEE Trans. Microwave Theory Tech. 42:56-61 (1994).
28. A. Parash, J. K. Vaid, and A. Mansinch, Measurement of dielectric parameters at microwave frequencies by cavity perturbation technique, IEEE Trans. Microwave Theory Tech. 27:791-795 (1979).
29. P. F. Goldsmith, Quasioptical techniques at millimeter and submillimeter wavelengths, in K. J. Button, ed., Infrared and Millimeter Waves, Vol. 6, Academic Press, New York, 1982, pp. 277-343.
30. J. C. G. Lesurf, Millimeter-Wave Optics, Devices & Systems, Adam Hilger, Bristol, UK, 1990.
31. A. D. Olver and C. G. Parini, Millimetre wave compact antenna test ranges, Proc. JINA 92, Nice, 1992, pp. 121-128.
32. R. K. Garg and M. M. Pradhan, Far-infrared characteristics of multielement interference filters using different grids, Infrared Phys. 18:292-298 (1978).
33. C. Prigent, P. Abba, and M. Cheudin, A quasioptical polarization rotator, Int. J. Infrared Millimeter Waves 9(5):447-490 (1988).
34. M. Ryle and A. Hewish, Monthly Notices Roy. Astron. Soc. 120:220 (1960).
MIMO SYSTEMS FOR WIRELESS
COMMUNICATIONS
AHMED IYANDA SULYMAN
MOHAMED IBNKAHLA
Queen's University
Kingston, Ontario, Canada
1. INTRODUCTION
Wireless data services have grown rapidly for both micro- and macrocellular systems¹ and for high- and low-mobility applications. This is due to the various technological breakthroughs recorded in wireless applications such as mobile computing, mobile and high-speed Internet access, mobile multimedia, and a host of personal communications (PC) services. The demand for ubiquitous access to these services is ever increasing, necessitating the continued addition of new techniques to provide more services.
Wireless channels suffer from severe distortions caused mainly by multipath² fading. The severity of these distortions often makes it impossible for the mobile receiver to detect the transmitted symbols correctly unless some less attenuated replicas of the transmitted signal are provided to the receiver. This is referred to as diversity. Diversity techniques are based on the notion that errors occur in reception when the channel is in a deep fade, a phenomenon that is more pronounced in mobile communications owing to the mobility of the transmitter, the receiver, or both. Therefore, if the receiver is supplied with several replicas, say L, of the same information signal transmitted over independently fading channels, the probability that all L independently fading replicas fade below a critical value is $P^L$ (where P is the probability that any one signal will fade below the critical value). The error rate performance of the system is thus improved without increasing the transmitted power. Most of these diversity solutions, however, have traditionally focused on receiver diversity. Examples of diversity techniques for wireless applications can be found in Refs. 1–4.
Another powerful fading combatant that has been used for wireless channels is channel coding. While a diversity system introduces redundancy in the received signals in the spatial sense, a coding technique introduces redundancy in the temporal sense. In the coding technique, an encoder takes as input n information bits at any time instant and adds error-correcting bits (redundancy) to produce at its output m bits (m > n) of information-bearing code, known as a codeword, and this codeword is transmitted over the wireless channel. The ratio of the input to output bit lengths of the encoder, n/m, is known as the code rate and is a measure of the amount of information contained in one data bit after the encoding operation. At the receiving end, an error caused by corruption in the wireless channel will be detected and corrected by the decoder, using its knowledge of the valid codewords of the coding technique employed, as long as the number of bits in error is not greater than the error-correcting capability of the code. This process is traditionally known as channel coding. Thus the temporal redundancy added to the transmitted bits by the encoder is used to achieve link quality improvement. Examples of coding techniques for wireless applications can be found in Refs. 5 and 6.

¹In mobile cellular systems, cell sizes (the range within which a user can be served by a nearby base station before being transferred to another base station) may vary from large macrocells to small microcells. Macrocells provide services to high-speed mobiles, while microcells provide services to low-mobility applications.
²Signals emanating from the transmitter arrive at the receiver via different paths, with different delays and phases. These are referred to as multipaths (see Section 1.2).
A combined coding–diversity scheme, known as space-time codes, promising dramatically high data rates as well as reliable communication over wireless channels, was proposed in [7]. This scheme employs coding techniques appropriate to multiple transmit antennas to achieve combined coding and diversity gains that enable higher data rates without prejudice to error rate performance. The age-old approach to achieving higher data rates is to expand the signal constellation and use powerful coding and modulation techniques. However, this approach falls short of the goal of achieving truly high-speed data services because of the substantial SNR penalty paid for increasing the signal constellation size. In addressing this problem, the deployment of wireless systems employing multiple transmit antennas (transmit diversity) was first proposed in the signal processing context in several publications [4,8–10] using the concept of delay transmit diversity. In the delay transmit diversity scheme, replicas of the information signal are transmitted through multiple antennas at different times, and appropriate signal processing techniques are employed at the receiver to retrieve the original information signal. The delay transmit diversity scheme was shown to provide a significant performance boost over the conventional system with a single transmit antenna.
Tarokh et al. [7] then adopted a coding perspective on this scheme and proposed space-time coding employing multiple transmit and multiple receive antennas. The multiple transmit antennas are used to send different encoded signals in parallel at the transmitter, and multiple antennas are employed at the receiver for signal detection. An appropriate code is employed such that the number of codewords at the output of the encoder matches the number of transmit antennas. Space-time codes achieve a much more significant performance improvement over the conventional wireless system than does delay diversity transmission [7]. This celebrated result has spurred a host of research efforts aimed at increasing the wireless channel capacity through space-time processing [11–18].
Alamouti [15] designed a simple but elegant MIMO system exploiting transmit diversity to obtain system performance similar to that of maximum ratio combining (MRC) receive diversity. In his scheme, a pair of symbols is transmitted from two antennas in the first time slot, and a transformed version of the same pair is transmitted in the next time slot, to obtain the MRC-like diversity gain. Space-time block codes, based on orthogonal structures, were later designed and shown to generalize the Alamouti scheme to various MIMO configurations [13]. Several variants of MIMO signal processing have since been exploited, including the spatial multiplexing system [19] and MIMO maximum ratio combining (MIMO-MRC) [14], among others. The system performance and information capacity of wireless communication systems employing these MIMO technologies have been demonstrated to increase dramatically over those of conventional wireless systems [16,19–23].
1.1. Basic Baseband MIMO Channel Model
This section illustrates a basic baseband model for the
MIMO wireless communication system. Throughout the
article, we assume a MIMO system with N transmitting
antennas at the transmitter, and L receiving antennas at
the receiver. We use the notation $h_{ij}(k)$ to denote the sampled complex channel gain from transmit antenna j to receive antenna i at discrete time k, where i = 1, 2, …, L and j = 1, 2, …, N. Therefore, we express the L × N complex MIMO channel matrix at time k as

$$
\mathbf{H}(k)=\begin{bmatrix}
h_{11}(k) & h_{12}(k) & \cdots & h_{1N}(k)\\
h_{21}(k) & h_{22}(k) & \cdots & h_{2N}(k)\\
\vdots & \vdots & \ddots & \vdots\\
h_{L1}(k) & h_{L2}(k) & \cdots & h_{LN}(k)
\end{bmatrix}
$$
Figure 1 illustrates a general block diagram for MIMO communication systems. The system equation describing the input–output behavior of the MIMO system can be expressed, for a flat-fading channel, as

$$\mathbf{y}(k)=\mathbf{H}(k)\,\mathbf{c}(k)+\mathbf{n}(k) \qquad (1)$$

where $\mathbf{y}(k)=[y_1(k),\,y_2(k),\ldots,y_L(k)]^T$ denotes the L × 1 complex received signal vector, $\mathbf{c}(k)=[c_1(k),\,c_2(k),\ldots,c_N(k)]^T$ denotes the N × 1 complex signal vector transmitted from the N transmit antennas, k is the time index, and $\mathbf{n}(k)=[n_1(k),\,n_2(k),\ldots,n_L(k)]^T$ is the L × 1 complex channel noise [additive white Gaussian noise (AWGN)] vector.
Figure 1. MIMO communication systems (data bits pass through channel coding and a multiple-antenna encoder, the encoded streams $c_1(k),\ldots,c_N(k)$ traverse the MIMO channel $h_{ij}$, and the received signals $y_1(k),\ldots,y_L(k)$ are processed by a multiple-antenna decoder feeding the channel decoder).
Here it is assumed that the AWGN is spatially and temporally white [i.e., n(k) is a zero-mean complex Gaussian vector with covariance matrix $\sigma_n^2\mathbf{I}$].
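As a concrete illustration of Eq. (1), the following minimal NumPy sketch generates one use of the flat-fading model, assuming a spatially white Rayleigh channel and unit-energy 8-PSK symbols; the function names and the 2 × 4 example dimensions are illustrative choices, not part of the original text.

    import numpy as np

    rng = np.random.default_rng(0)

    def flat_mimo_channel(N, L):
        """L x N spatially white Rayleigh-fading channel: h_ij ~ CN(0, 1)."""
        return (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

    def awgn(L, noise_var):
        """L x 1 complex AWGN vector with covariance noise_var * I."""
        return np.sqrt(noise_var / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

    # One use of Eq. (1): y(k) = H(k) c(k) + n(k), here with N = 2, L = 4.
    N, L, noise_var = 2, 4, 0.1
    H = flat_mimo_channel(N, L)
    c = np.exp(1j * np.pi / 4 * rng.integers(0, 8, size=N))   # unit-energy 8-PSK symbols
    y = H @ c + awgn(L, noise_var)
    print(y.shape)   # (4,): one received sample per receive antenna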
1.2. Channel Effect in Mobile Wireless Communications
Channel effects in mobile wireless communication systems arise from multipath propagation and user mobility, besides the regular propagation loss and fading (attenuation) on wireless links. The multipath effect is associated with the fact that the signal transmitted from a mobile unit undergoes scattering, reflections, or diffraction before reaching the base station, where it arrives from different paths, each with its own fading, propagation delay, and angle of arrival. Multipath scattering may arise from scatterers local to the mobile unit, remote scatterers, scatterers local to the base station, or all of them. The signal received at the base station is a summation of these multipath signals. Figure 2 displays a typical mobile wireless propagation environment.
The combined effect of these features leads to the char-
acterization of mobile wireless channels as time-varying
fading channels, as well as frequency-selective fading
channels.
The equation describing the input–output behavior of the MIMO system can be expressed, for a frequency-selective fading channel, as

$$\mathbf{y}(k)=\sum_{l=0}^{m-1}\mathbf{H}(k,l)\,\mathbf{c}(k-l)+\mathbf{n}(k) \qquad (2)$$

where $\mathbf{H}(k,l)$, $l=0,1,2,\ldots,m-1$, is the L × N MIMO channel matrix representing the lth tap of the mobile channel matrix response, with c(k) as the input and y(k) as the output at time instant k. The parameter m denotes the memory length of the impulse response of the mobile channel. To simplify the exposition hereafter, we drop the time index k in the system equations where necessary.
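A short sketch of Eq. (2) follows, assuming for simplicity that the tap matrices H(k, l) are constant over the block (so the k dependence is dropped, as the text itself does hereafter); the helper name and the example dimensions are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def freq_selective_output(H_taps, c_seq, noise_var):
        """Eq. (2): y(k) = sum_{l=0}^{m-1} H(l) c(k - l) + n(k), taps assumed static over the block.

        H_taps : array of shape (m, L, N), one L x N matrix per tap
        c_seq  : array of shape (K, N), transmitted vectors c(0), ..., c(K-1)
        returns: array of shape (K, L), received vectors y(0), ..., y(K-1)
        """
        m, L, N = H_taps.shape
        K = c_seq.shape[0]
        y = np.zeros((K, L), dtype=complex)
        for k in range(K):
            for l in range(m):
                if k - l >= 0:                      # c(k - l) = 0 before the block starts
                    y[k] += H_taps[l] @ c_seq[k - l]
            y[k] += np.sqrt(noise_var / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
        return y

    # Example: N = 2, L = 2, channel memory m = 3, QPSK symbols.
    m, L, N, K = 3, 2, 2, 100
    H_taps = (rng.standard_normal((m, L, N)) + 1j * rng.standard_normal((m, L, N))) / np.sqrt(2)
    c_seq = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, (K, N))))
    y_seq = freq_selective_output(H_taps, c_seq, noise_var=0.05)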
1.3. Capacity of MIMO Systems
For a given channel and a given transmitter input power $P_T$, the Shannon definition of capacity for the single-transmit, single-receive antenna, or single-input/single-output (SISO), system can be expressed as

$$C=\log_2\!\left(1+\frac{P_T}{\sigma_n^2}\right)\ \text{bps/Hz} \qquad (3)$$

where $\sigma_n^2$ is the noise variance and bps is bits per second.
For the case of MIMO transmission, assuming that the channel state information (CSI) is unknown at the transmitter and that the transmitted power is divided equally among the transmit antennas, the capacity formula can be written for a deterministic MIMO channel as [24–27]

$$C_{L<N}=\log_2\!\left[\det\!\left(\mathbf{I}_L+\frac{P_T}{N\sigma_n^2}\,\mathbf{H}\mathbf{H}^H\right)\right]\ \text{bps/Hz} \qquad (4)$$

and

$$C_{L\geq N}=\log_2\!\left[\det\!\left(\mathbf{I}_N+\frac{P_T}{N\sigma_n^2}\,\mathbf{H}^H\mathbf{H}\right)\right]\ \text{bps/Hz} \qquad (5)$$

where $C_{L<N}$ denotes the capacity for the case when the number of receive antennas L is less than the number of transmit antennas N, while $C_{L\geq N}$ denotes the capacity for the case when the number of receive antennas is greater than or equal to the number of transmit antennas.
The ergodic capacity of a fading MIMO channel is obtained by taking the expectation of the capacity expression above with respect to the random channel. Assuming that the MIMO channel is spatially white (i.e., uncorrelated) and considering N = L = K, then for an arbitrarily large number of transmit and receive antennas it can be shown, using the strong law of large numbers, that the MIMO channel capacity in the absence of channel knowledge at the transmitter approaches [28–31]

$$C \rightarrow K\log_2\!\left(1+\frac{P_T}{\sigma_n^2}\right)\ \text{bps/Hz} \qquad (6)$$
Figure 2. Mobile radio propagation environment (remote scatterers, scatterers local to the mobile, and the base station).

Comparing Eqs. (3) and (6), we observe that the capacity of the MIMO channel increases linearly with the number of antennas K when K is very large. Therefore, the bandwidth efficiency of MIMO transmission grows linearly with the number of antennas. Figure 3 illustrates the MIMO capacity presented by Paulraj et al. [28] for various MIMO configurations. In this figure, the number of transmit antennas is denoted $M_T$ and the number of receive antennas $M_R$. The capacity increase of the MIMO system over the SISO system (the case N = 1, L = 1) is clearly depicted in the figure.

Figure 3. Ergodic capacity for various MIMO antenna configurations (capacity in bps/Hz versus SNR in dB for $(M_T, M_R)$ = (1,1), (1,2), (2,1), (2,2), and (4,4)).
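The capacity expressions above are easy to evaluate numerically. The sketch below computes Eq. (4)/(5) for a deterministic channel (a single determinant form suffices, since the two expressions are equal by the identity det(I + AB) = det(I + BA)) and estimates the ergodic capacity of a spatially white Rayleigh channel by Monte Carlo averaging; the roughly linear growth with K = N = L predicted by Eq. (6) shows up in the printed values. Function names and parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    def mimo_capacity(H, snr):
        """Eq. (4)/(5): capacity in bps/Hz of a deterministic L x N channel H, with total SNR = P_T / sigma_n^2,
        equal power allocation, and no CSI at the transmitter."""
        L, N = H.shape
        M = np.eye(L) + (snr / N) * (H @ H.conj().T)
        return float(np.log2(np.linalg.det(M).real))

    def ergodic_capacity(N, L, snr_db, trials=2000):
        """Monte Carlo estimate of the ergodic capacity over a spatially white Rayleigh channel."""
        snr = 10 ** (snr_db / 10)
        caps = []
        for _ in range(trials):
            H = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
            caps.append(mimo_capacity(H, snr))
        return float(np.mean(caps))

    for (N, L) in [(1, 1), (2, 2), (4, 4)]:
        print(N, L, round(ergodic_capacity(N, L, snr_db=10), 2))
        # The capacity grows roughly linearly with K = N = L, consistent with Eq. (6).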
2. MIMO COMMUNICATION SYSTEM DESIGNS
MIMO communication system design can be broadly cat-
egorized into two groups:
1. Spatial multiplexing (SM) methods exploiting ca-
pacity increase from the multiple antenna system
2. Diversity methods exploiting link quality improve-
ments from the multiple antenna system
Figure 1 displays a general block diagram for both methods. In the former, incoming data bits are partitioned into multiple substreams and each substream is transmitted, simultaneously, on a different antenna, thereby increasing the link capacity [19]. The multiple antennas at the receiver are then expended in separating these substreams, and therefore diversity against fading is rarely provided in SM, at least in the initial design known as V-BLAST (vertical Bell Labs layered space-time codes).

In the diversity method, approaches include those exploiting both diversity and coding gains from the MIMO processing block shown in Fig. 1, known as space-time coding systems [7], and those exploiting only the diversity gain, known as MIMO-MRC systems [14]. In space-time codes, coding techniques appropriate to multiple transmit antennas are incorporated in the MIMO signal processing block, in addition to the external channel codes, thereby achieving combined coding and diversity gains from this block. In the MIMO-MRC system, only channel codes external to the MIMO signal processing block are employed. The transmit and receive multiple antennas are utilized purely for diversity gains, with each transmit antenna allocated a weighted fraction of the total transmitted power. The transmit weighting vector is usually matched to the channel in a way that maximizes the postprocessing SNR at the output of the channel [14]. MIMO-MRC systems have the specific advantage of implementation simplicity, because the scheme employs MRC-like detection at the receiver, which is typically less complex than the maximum-likelihood detection (MLD) used in space-time coding. A detailed comparison among the performances of these methods can be found in Ref. 21.
2.1. MIMO-MRC
Figure 4. MIMO-MRC model (the symbol c is weighted by the transmit weight vector $v_1,\ldots,v_N$ to form $s_1,\ldots,s_N$, passes through the channel $h_{ij}$ with additive noise $n_1,\ldots,n_L$, and the received signals $y_1,\ldots,y_L$ are combined with the receive weights $w_1,\ldots,w_L$ to form the decision variable c′; a transmit weight vector calculator is updated via feedback).

Figure 4 displays the model for a MIMO-MRC system consisting of N antennas at the transmitting station and L antennas at the receiving station. The symbol c to be transmitted is weighted with a transmit weighting vector $\mathbf{v}=[v_1 \cdots v_N]^T$ to obtain the transmitted signal vector

$$\mathbf{s}=[s_1 \cdots s_N]^T = c\,\sqrt{E_{av}}\,[v_1 \cdots v_N]^T \qquad (7)$$

where $E_{av}$ is the average signal energy at each antenna.
The transmit weight vector v is chosen as [14,32–34]

$$\mathbf{v}=[v_1 \cdots v_N]^T=\frac{\mathbf{H}^H\mathbf{w}}{\|\mathbf{H}^H\mathbf{w}\|} \qquad (8)$$

where $\mathbf{w}=[w_1 \cdots w_L]^T$ is the weight vector at the receiver. For i.i.d. (independent, identically distributed) channel coefficients, the condition on w to achieve maximum postprocessing SNR is $|w_1|=|w_2|=\cdots=|w_L|$ [14]. Without loss of generality, w can be a unit vector. The received signal vector is therefore characterized as

$$\mathbf{y}=\mathbf{H}\mathbf{s}+\mathbf{n} \qquad (9)$$

where H is the L × N MIMO channel matrix and n is the additive white Gaussian noise (AWGN) vector. The decision variable for detecting the transmitted symbol c is obtained in an MRC-like processing by taking the dot product of w and y, which can be expressed from Eqs. (7)–(9) as

$$\mathbf{w}^H\mathbf{y}=c\,\sqrt{E_{av}}\,\|\mathbf{H}^H\mathbf{w}\|+\mathbf{w}^H\mathbf{n} \qquad (10)$$

The output SNR from the MIMO-MRC receiver, $\gamma_{MIMO}$, is therefore given by

$$\gamma_{MIMO}=\frac{E_{av}}{N_0}\,\frac{\|\mathbf{H}^H\mathbf{w}\|^2}{\|\mathbf{w}\|^2} \qquad (11)$$

where $N_0$ is the noise power.
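The MIMO-MRC relations (7)–(11) can be exercised with a few lines of NumPy, as sketched below. The receive weight vector w is taken as a unit vector with equal-magnitude entries, as stated in the text; the final lines also evaluate the largest value Eq. (11) can attain over all unit-norm w, which equals $(E_{av}/N_0)\,\sigma_{\max}^2(\mathbf{H})$. Variable names and dimensions are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    N, L = 2, 2
    Eav, N0 = 1.0, 0.1
    H = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)

    # Receive weight vector w: unit vector with equal-magnitude entries, as in the text.
    w = np.ones(L, dtype=complex) / np.sqrt(L)

    # Eq. (8): transmit weight vector v = H^H w / ||H^H w||.
    Hw = H.conj().T @ w
    v = Hw / np.linalg.norm(Hw)

    # Eq. (7): transmitted vector for one symbol c, and Eq. (9): received vector.
    c = 1 + 0j                                    # example unit-energy symbol
    s = c * np.sqrt(Eav) * v
    n = np.sqrt(N0 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    y = H @ s + n

    # Eq. (10): MRC-like decision variable, and Eq. (11): postprocessing SNR.
    z = w.conj() @ y
    gamma = (Eav / N0) * np.linalg.norm(H.conj().T @ w) ** 2 / np.linalg.norm(w) ** 2

    # Largest attainable value of Eq. (11) over all unit-norm w:
    # (Eav / N0) * sigma_max(H)^2, with sigma_max the largest singular value of H.
    gamma_max = (Eav / N0) * np.linalg.svd(H, compute_uv=False)[0] ** 2
    print(round(gamma, 3), round(gamma_max, 3))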
2.1.1. Performance Results. In this section, we present some results to illustrate the performance of the MIMO-MRC system for several quadrature amplitude modulation (QAM) constellations, in both circular and rectangular formats. Four QAM constellations have been considered in this illustration: rectangular 16-QAM and the star-QAM (8,8), (4,12), and (5,11) constellations (Figs. 5 and 6). For the circular QAM formats [star-QAM, (4,12)-QAM, and (5,11)-QAM], note that the signals in the constellations are arranged on inner and outer circles. The ratio between the radii of the outer and inner circles of a constellation is known as the ring ratio, a = R/r.

Figure 7 compares the symbol error probability (SEP) of the rectangular 16-QAM, star-QAM, (4,12), and (5,11) QAM constellations in MIMO channels, for various fading scenarios and various MIMO configurations. All the results presented in this figure for the circular constellations have been computed using the respective asymptotic optimum values of the ring ratio [35]. It is observed from the results that rectangular 16-QAM has performance similar to that of the (5,11) format, with the latter having slightly better SEP performance at high MIMO order. Similarly, star-QAM and (4,12) have close SEP performance, with (4,12) performing somewhat better than star-QAM. The rectangular and (5,11) constellations both have better SEP performance than the (4,12) and star-QAM constellations in all the MIMO configurations considered. For all these constellations, however, dramatic improvements in the SEP performance of the MIMO system are observed as the MIMO dimension is increased (from N = 2, L = 2 to N = 4, L = 4 in this figure). This illustrates the link quality improvement achieved through multiple-antenna transmission.
Figure 5. 16-ary rectangular-QAM and 16-ary star-QAM (8,8) signal constellations (the star constellation places eight symbols each on an inner circle of radius r and an outer circle of radius R).
Figure 6. (4,12)-QAM and (5,11)-QAM signal constellations (inner-circle radius r and outer-circle radius R).
2.2. Space-Time Coding

Space-time codes make smart use of the multiple-antenna system by combining modulation, coding, and diversity transmission in one function, and then use the multiple transmit antennas to transmit different codewords simultaneously at each time instant. The system utilizes both coding and diversity gains to realize significant performance improvements (higher capacity) over the single-antenna system. The code construction is done in a way that ensures that both the coding and diversity gains at the receiver are maximized. A celebrated pioneering work on space-time codes [7] presents the details of the system design criteria and code constructions.
Consider a space-time coding system with N transmitting and L receiving antennas, operating over the wireless communication channel illustrated in Fig. 8. At any time instant k, let the information-bearing signals d(k) be encoded by the space-time encoder as the N × 1 code vector $\mathbf{c}(k)=[c_1(k)\ c_2(k)\ \cdots\ c_N(k)]^T$, and let each code symbol be transmitted simultaneously from a different antenna. All N transmitted signals have the same transmission period. At the receiver side, signals arriving at the different receive antennas undergo independent fading. The received signal is a linear combination of the transmitted signals and the MIMO channel coefficients $h_{ij}$ (i = 1, …, L; j = 1, …, N), corrupted by additive noise. The received signal vector at the kth transmission period is therefore given by Eq. (1).
Figure 7. SEP of rectangular and circular 16-QAM signals in MIMO channels (symbol error rate versus $E_b/N_0$ in dB for the 16 rectangular-QAM, 16 star-QAM, (4,12), and (5,11) constellations, with N = 2, L = 2 and N = 4, L = 4). (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
Figure 8. Space-time coding system (the data source d(k) feeds the space-time encoder, whose outputs $c_1(k),\ldots,c_N(k)$ are transmitted from N antennas and received as $y_1(k),\ldots,y_L(k)$ at the receiver).
An appropriate signal processing operation is used to extract an estimate of the information stream from the noisy superposition of the faded versions of the N transmitted signals received by each of the antennas, which is then passed on to the detector. The main techniques proposed for this purpose include minimum mean-square error (MMSE) detection, maximum-likelihood detection (MLD), and singular-value decomposition (SVD). While some of these techniques (e.g., MLD) perform MIMO signal processing only at the receiver, others (e.g., SVD) perform MIMO processing at both the transmitter and the receiver [36]. Among these detection techniques, MLD is optimum in terms of minimizing the overall error probability [37]. In the following, therefore, we focus only on MLD. Assume that maximum-likelihood decoding of the transmitted data c(k) from the received signal sequences is carried out at the receiver, and that l consecutive code vectors, $\{\mathbf{c}(k)\}_{k=1}^{l}$, have been transmitted. The maximum-likelihood (ML) decoder can be realized using the Viterbi algorithm with the ML metric given, in the form of a minimum Euclidean distance, as

$$\hat{\mathbf{c}}=\arg\min_{\tilde{\mathbf{c}}(1),\ldots,\tilde{\mathbf{c}}(l)}\big\|[\mathbf{y}(1),\ldots,\mathbf{y}(l)]-[\tilde{\mathbf{H}}(1)\tilde{\mathbf{c}}(1),\ldots,\tilde{\mathbf{H}}(l)\tilde{\mathbf{c}}(l)]\big\|^2
=\arg\min_{\tilde{\mathbf{c}}(1),\ldots,\tilde{\mathbf{c}}(l)}\sum_{k=1}^{l}\big\|\mathbf{y}(k)-\tilde{\mathbf{H}}(k)\tilde{\mathbf{c}}(k)\big\|^2 \qquad (12)$$

where $\tilde{\mathbf{H}}(k)$ is the MIMO channel estimate at the receiver at time instant k.
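For small constellations and few transmit antennas, the metric in Eq. (12) can be evaluated by brute force at each time instant, as in the sketch below; a perfect channel estimate (H̃(k) = H(k)) is assumed, and the exhaustive search over $Q^N$ hypotheses is shown only to make the metric concrete, since a practical decoder would use the Viterbi algorithm over the code trellis. Names and dimensions are illustrative.

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(5)

    def ml_detect(y, H_est, alphabet):
        """Exhaustive evaluation of the Eq. (12) metric ||y(k) - H_est(k) c~(k)||^2 for one time instant;
        returns the candidate vector c~ with the smallest metric."""
        N = H_est.shape[1]
        best, best_metric = None, np.inf
        for cand in product(alphabet, repeat=N):        # Q^N hypotheses
            c_try = np.array(cand)
            metric = np.linalg.norm(y - H_est @ c_try) ** 2
            if metric < best_metric:
                best, best_metric = c_try, metric
        return best

    # Example: N = 2, L = 2, QPSK, perfect channel estimate assumed (H_est = H).
    qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))
    N, L, N0 = 2, 2, 0.05
    H = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
    c = rng.choice(qpsk, N)
    y = H @ c + np.sqrt(N0 / 2) * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
    c_hat = ml_detect(y, H, qpsk)
    print(np.allclose(c_hat, c))   # usually True at this noise level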
2.2.1. Space-Time Trellis Codes. For the case when the underlying code is a trellis-coded modulation, an upper bound on the pairwise error probability (PWEP) of the resulting space-time trellis code (STTC) is given [7] as

$$P(\mathbf{c}\rightarrow\tilde{\mathbf{c}})\leq\left(\prod_{i=1}^{r}\lambda_i\right)^{-L}\left(\frac{E_s}{4N_0}\right)^{-rL} \qquad (13)$$

where r is the rank of the error matrix between the transmitted (true) codeword and the received (possibly erroneous) codeword [7], $\lambda_i$, i = 1, …, r, are the nonzero eigenvalues of this error matrix, and $E_s/N_0$ is the average signal-to-noise power ratio (SNR). The first term, $d_r=\prod_{i=1}^{r}\lambda_i$, represents the coding gain achieved by the space-time code, and the second term, $(E_s/4N_0)^{rL}$, represents a diversity gain of rL achieved from the use of multiple antennas. Hence, in designing a space-time trellis code, the rank r of the error matrix should be maximized (thereby maximizing the diversity gain) and, at the same time, $d_r$ should also be maximized (thereby maximizing the coding gain).
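The rank and determinant (coding-gain) criteria can be checked numerically for any pair of candidate codewords, as in the sketch below, which forms the difference matrix B between two N × l codewords and returns the rank and the product of the nonzero eigenvalues of $\mathbf{B}\mathbf{B}^H$, the quantities appearing in Eq. (13); the toy codewords are illustrative.

    import numpy as np

    def rank_and_coding_gain(C, C_tilde, tol=1e-10):
        """Diversity and coding-gain figures for a codeword pair, per the criteria around Eq. (13).

        C, C_tilde : N x l space-time codewords (rows = transmit antennas, columns = time slots)
        returns    : (r, d_r), where r is the rank of the difference matrix and d_r is the product
                     of the nonzero eigenvalues of B B^H, with B = C - C_tilde
        """
        B = C - C_tilde
        eig = np.linalg.eigvalsh(B @ B.conj().T)        # real, nonnegative eigenvalues
        nonzero = eig[eig > tol]
        r = nonzero.size
        d_r = float(np.prod(nonzero)) if r > 0 else 0.0
        return r, d_r

    # Two toy length-2 codewords for N = 2 transmit antennas.
    C       = np.array([[1,  1],
                        [1, -1]], dtype=complex)
    C_tilde = np.array([[-1, 1],
                        [1,  1]], dtype=complex)
    print(rank_and_coding_gain(C, C_tilde))   # (2, 16.0): full-rank difference, diversity order rL = 2L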
An example of a four-state STTC constructed for 4-PSK signals [7] is shown in Fig. 9. This code is designed for systems with two transmit antennas. The label ij refers to the transition between states i and j in the trellis, and each symbol pair in a given row labels a transition out of a given state.

Figure 9. Trellis diagram for the four-state space-time code for 4-PSK signals (4-PSK constellation points 0–3; branch labels 00 01 02 03, 10 11 12 13, 20 21 22 23, and 30 31 32 33). (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
2.2.2. Space-Time Block Codes. Alamouti [15] proposed an ingenious space-time block coding scheme for transmission with two antennas. In this scheme, input symbols are grouped in pairs and transmitted at time instant k, and a transformed version of the same symbols is transmitted at time k + 1. Let the symbols $c_1$ and $c_2$ be transmitted at time k from the first and second antennas, respectively. Then at time k + 1, symbol $-c_2^{*}$ is transmitted from the first antenna and symbol $c_1^{*}$ is transmitted from the second antenna, where $(\cdot)^{*}$ denotes the complex conjugate. The received signals at the jth receive antenna are therefore given by

$$y_j^{(1)}=h_{j1}c_1+h_{j2}c_2+n_j^{(1)}, \qquad j=1,\ldots,L$$
$$y_j^{(2)}=-h_{j1}c_2^{*}+h_{j2}c_1^{*}+n_j^{(2)}, \qquad j=1,\ldots,L \qquad (14)$$

where we have assumed that the channel is fixed over the two transmission periods.
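A minimal sketch of the Alamouti scheme follows, showing the transmission of Eq. (14) for a single receive antenna (L = 1, for brevity) together with the standard linear combining of [15] and nearest-symbol decisions; the QPSK alphabet and noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))   # unit-energy QPSK alphabet

    # Channel from the two transmit antennas to one receive antenna, fixed over both periods.
    h1, h2 = [(rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2) for _ in range(2)]
    c1, c2 = rng.choice(qpsk, 2)
    N0 = 0.05
    n1, n2 = np.sqrt(N0 / 2) * (rng.standard_normal(2) + 1j * rng.standard_normal(2))

    # Alamouti transmission: (c1, c2) at time k, (-c2*, c1*) at time k+1.
    r1 = h1 * c1 + h2 * c2 + n1                          # received at time k
    r2 = -h1 * np.conj(c2) + h2 * np.conj(c1) + n2       # received at time k+1

    # Linear combining [15]: the two symbols decouple, each with gain |h1|^2 + |h2|^2.
    g = np.abs(h1) ** 2 + np.abs(h2) ** 2
    c1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    c2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)

    # Nearest-symbol (ML) decisions on the decoupled statistics.
    d1 = qpsk[np.argmin(np.abs(c1_hat - g * qpsk))]
    d2 = qpsk[np.argmin(np.abs(c2_hat - g * qpsk))]
    print(d1 == c1, d2 == c2)   # True, True in most channel realizations at this SNR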
Alamouti's space-time block code has been adopted in several wireless standards, such as wideband code-division multiple access (W-CDMA) and CDMA 2000 [38]. The code has the following attractive features: (1) it achieves full diversity at full transmission rate for any real or complex signal constellation; (2) it does not require knowledge of the CSI at the transmitter; and (3) maximum-likelihood decoding of the code involves only linear processing at the receiver, which significantly reduces the decoding complexity. The Alamouti code has been extended to the case of more than two transmit antennas [13] using the theory of orthogonal designs.
2.3. Spatial Multiplexing

Figure 10 illustrates the principle of spatial multiplexing. As shown in the figure, the input (information-bearing) bitstream is first demultiplexed into p substreams, and each substream is mapped to a predetermined digital modulation [e.g., phase shift keying (PSK) or QAM]. The p substreams are then transmitted simultaneously over the channel using N (N ≥ p) independent transmit antennas [19]. The same modulation constellation of size Q is used for each substream; therefore, log₂(Q) information bits are mapped into one Q-ary symbol. At the receiving end, the signals received by L (L ≥ p) antennas are processed (using any of the techniques mentioned in Section 2.2) to recover the original bitstream. Spatial multiplexing achieves a high data rate through the transmission of parallel bitstreams.
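The following sketch illustrates the spatial multiplexing chain just described: the bitstream is demultiplexed into p = N substreams, mapped to QPSK, and transmitted in parallel; at the receiver, a simple zero-forcing (pseudoinverse) front end is used to separate the substreams. The zero-forcing step is only one of the possible receivers (V-BLAST, for example, adds ordered successive cancellation) and is chosen here for brevity; all names and parameters are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)

    def bits_to_qpsk(bits):
        """Gray-mapped QPSK: two bits per symbol, unit energy."""
        b = bits.reshape(-1, 2)
        return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

    N = L = 4                                   # p = N substreams, one per transmit antenna
    K = 50                                      # symbols per substream
    bits = rng.integers(0, 2, size=2 * N * K)

    # Demultiplex: consecutive bit pairs are assigned to the N antennas in round-robin fashion.
    symbols = bits_to_qpsk(bits).reshape(K, N)  # row k = c(k), the N symbols sent in parallel at time k

    H = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
    noise = np.sqrt(0.01 / 2) * (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L)))
    Y = symbols @ H.T + noise                   # y(k) = H c(k) + n(k), stacked over k

    # Zero-forcing separation of the substreams (one simple choice of receiver).
    H_pinv = np.linalg.pinv(H)
    est = Y @ H_pinv.T                          # estimate of c(k) for every k
    hard = (np.sign(est.real) + 1j * np.sign(est.imag)) / np.sqrt(2)
    print(np.mean(hard == symbols))             # symbol agreement rate, typically close to 1.0 here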
3. RECEIVER SIGNAL PROCESSING FOR MIMO
TRANSMISSIONS OVER MOBILE CHANNELS
The combined effect of the features of the mobile radio propagation environment (discussed in Section 1.2) leads to the characterization of mobile wireless channels as time-varying fading channels, as well as frequency-selective fading channels. For the time-varying fading problem, the channel strengths may vary significantly within a transmission block (rapid fading) or from one block to another (quasistatic fading). In either case, channel tracking can be employed [39,40] to estimate the amount of attenuation in the wireless link, and this information [the channel state information (CSI)] is then used in the detection of the transmitted signals. The frequency-selectivity problem, on the other hand, results in the introduction of intersymbol interference (ISI) among successive symbols transmitted over mobile radio channels, causing severe performance degradation unless a corrective measure known as equalization is employed.
3.1. MIMO Channel Equalization
For MIMO transmission over frequency-selective (mobile radio) channels, the channel output is given by the expression in Eq. (2) and has the z-transform

$$\mathbf{y}(z)=\hat{\mathbf{H}}(z)\,\mathbf{c}(z)+\mathbf{n}(z) \qquad (15)$$

where

$$\hat{\mathbf{H}}(z)=\sum_{l=0}^{m-1}\mathbf{H}(k,l)\,z^{-l}$$

The function of an adaptive equalizer employed at the MIMO receiver is to carry out the reverse operation of the frequency-selective MIMO channel action in Eq. (15), in order to recover the original information bits from the noisy observation y(z).
If we assume perfect knowledge of the MIMO channel coefficients at the receiver, then the optimum receiver is a maximum-likelihood sequence estimator (MLSE). For transmission over a frequency-selective MIMO channel, therefore, the best performance in terms of error rate can be achieved through trellis equalization of the space-time codes based on MLSE or on symbol-by-symbol maximum a posteriori probability (MAP) estimation [41]. However, it is well known that the complexity of these methods is proportional to the number of states of the trellis, which grows exponentially with the product of the channel memory and the number of transmit antennas. The complexity of the algorithm therefore becomes impractical when the channel memory becomes large and high-order constellations are used. In addressing this problem, some suboptimum, reduced-complexity equalization methods have been developed. In the next section, we review two families of such suboptimum equalizers that achieve a good performance–complexity tradeoff and that have been employed in MIMO channels. The first of these is the family of block linear and decision-feedback equalizers, and the second is the family of list-type equalizers.
3.2. Block Linear and Decision-Feedback Equalizers

Block linear and decision-feedback equalizers are by nature optimized for block transmission systems [42]; therefore these equalizers are easily adapted for MIMO systems.

3.2.1. Block Linear Equalizers. The expression for the signal estimate at the output of the zero-forcing block linear equalizer (ZF-BLE) can be written in the form

$$\hat{\mathbf{d}}_{ZF\text{-}BLE}=\mathbf{d}+\mathcal{R}\,\mathbf{n} \qquad (16)$$

where d is the Nl × 1 vector that stacks the symbols transmitted (from the N transmit antennas) during the transmission of a block of length l, the matrix $\mathcal{R}$ is an amplification factor that represents the noise enhancement due to the zero-forcing operation, and n is the noise vector.

A similar expression for the MMSE-BLE can be written as

$$\hat{\mathbf{d}}_{MMSE\text{-}BLE}=\mathbf{W}\,\hat{\mathbf{d}}_{ZF\text{-}BLE} \qquad (17)$$

where the elements of W can be seen as coefficients of a Wiener filter. The estimate from an MMSE-BLE can thus be interpreted as the output of the ZF-BLE followed by a Wiener filter. The Wiener filter reduces the performance degradation caused by noise enhancement in the ZF-BLE. Therefore, the SNR per symbol at the output of the MMSE-BLE is, in general, larger than that of the ZF-BLE.
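Equations (16) and (17) are stated in terms of the noise-amplification matrix; a common textbook realization of the two block linear equalizers, assumed here for illustration (it is not necessarily the exact construction of [41,42]), applies the pseudoinverse and the regularized (Wiener) inverse of a stacked block channel matrix A, with the block model y_stack = A d + n_stack.

    import numpy as np

    rng = np.random.default_rng(7)

    def block_channel_matrix(H_taps, l):
        """Stacked (block-Toeplitz) channel matrix A such that y_stack = A d + n_stack,
        with d = [c(1); ...; c(l)] and H_taps of shape (m, L, N)."""
        m, L, N = H_taps.shape
        A = np.zeros((l * L, l * N), dtype=complex)
        for k in range(l):
            for t in range(m):
                if k - t >= 0:
                    A[k * L:(k + 1) * L, (k - t) * N:(k - t + 1) * N] = H_taps[t]
        return A

    # Example: N = L = 2, m = 3 taps, block length l = 8, unit-energy QPSK symbols.
    m, L, N, l, sigma2 = 3, 2, 2, 8, 0.05
    H_taps = (rng.standard_normal((m, L, N)) + 1j * rng.standard_normal((m, L, N))) / np.sqrt(2)
    A = block_channel_matrix(H_taps, l)
    d = (np.sign(rng.standard_normal(l * N)) + 1j * np.sign(rng.standard_normal(l * N))) / np.sqrt(2)
    y = A @ d + np.sqrt(sigma2 / 2) * (rng.standard_normal(l * L) + 1j * rng.standard_normal(l * L))

    # One standard realization of the two block linear equalizers (assumed form, for illustration):
    AhA = A.conj().T @ A
    d_zf   = np.linalg.solve(AhA, A.conj().T @ y)                           # ZF-BLE, cf. Eq. (16)
    d_mmse = np.linalg.solve(AhA + sigma2 * np.eye(l * N), A.conj().T @ y)  # MMSE-BLE, cf. Eq. (17)
    print(np.mean(np.abs(d_zf - d) ** 2), np.mean(np.abs(d_mmse - d) ** 2))
    # The MMSE-BLE typically shows the smaller estimation error, consistent with the text.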
3.2.2. Block Decision Feedback Equalizers. Figure 11 shows the block diagram of a block decision-feedback equalizer employed in a MIMO setup. At any time instant k, the received signal vector y(k) is filtered by the equalizer's feedforward filter (FFF), with coefficients W(k), to obtain the filtered signal vector y′(k). Previously obtained data estimates are processed through a feedback filter (FBF), with coefficients B(k), and subtracted from y′(k). The resultant signals are then fed into threshold detectors, from which estimates of the transmitted data, $\hat{\mathbf{c}}(k-D)$, are obtained, where D is the delay in the equalizer and $\hat{\mathbf{c}}(k-D)$ corresponds to the input signals at time k − D.

Figure 11. MIMO DFE block diagram (the received vector y(k) passes through the feedforward filter W(k); the output of the feedback filter B(k), driven by previous decisions, is subtracted before the threshold detectors produce $\hat{\mathbf{c}}(k-D)$).
Figure 10. Spatial multiplexing (the encoded data source is demultiplexed into substreams $c_1(k),\ldots,c_N(k)$, each modulated and transmitted from its own antenna; the received signals $y_1(k),\ldots,y_L(k)$ are demodulated and multiplexed back into a single stream for the decoder).
A similar comparative analysis for the MMSE-BDFE and the ZF-BDFE [41] shows that the SNR at the output of the MMSE-BDFE is in general larger than the SNR at the output of the counterpart ZF-BDFE. Therefore, both block linear and block decision-feedback equalization of the MIMO channel based on the MMSE criterion will yield better performance than their counterpart zero-forcing schemes. This conclusion is consistent with what is known for the SISO channel case.
3.3. List-Type Equalizers
The list-type equalizer is another reduced-complexity suboptimum equalization method. It employs a state-reduction algorithm in the Viterbi or MAP equalizer, using the concept of per-survivor processing (PSP) [43], to achieve a reduced complexity. The equalizers consider a reduced number of taps of the channel to construct the trellis, leading to a reduced number of states, and an adaptive equalization of the channel is carried out on the basis of the reduced states. To ensure that the best suboptimum performance is achieved, a receiver filter that concentrates the channel energy on the first few taps chosen for the trellis construction is used. This ensures that the chosen taps have the strongest energy. In the MIMO channel case, this is achieved using a multidimensional whitened matched filter (WMF) as a prefilter for the equalizer.

Comparing the performances of the block equalizers and the prefiltered list-type MAP equalizers in MIMO channels [41], it is observed that the prefiltered list-type MAP equalizer achieves better performance than the block equalizer. However, the list-type MAP equalizer is much more complex to implement. Hence, the regular tradeoff between performance and complexity has to be part of the criteria for selecting any of these structures for MIMO applications.
4. CONCLUSION
This article has presented a survey of the most popular MIMO signal processing techniques used in wireless communications. We have discussed in particular spatial multiplexing, space-time coding, and MIMO-MRC systems. The capacity increase achieved through space-time MIMO transmission has also been illustrated and shown to improve dramatically over that of the conventional wireless communication system as the number of transmit and receive antennas increases. Equalization techniques employed in MIMO receivers for transmission over frequency-selective mobile communication channels were then reviewed.

Acknowledgments

This work has been supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), Communications and Information Technology Ontario (CITO), and the Ontario Premier's Research Excellence Award (PREA).
BIBLIOGRAPHY
1. N. Kong and L. B. Milstein, Combined average SNR of a generalized diversity selection combining scheme, Proc. IEEE Int. Conf. Commun., June 1998, Vol. 3, pp. 1556–1560.
2. A. I. Sulyman and M. Kousa, Bit error rate performance of a generalized selection diversity combining scheme in Nakagami fading channels, Proc. IEEE WCNC 2000, Sept. 2000.
3. M. K. Simon and M.-S. Alouini, Performance analysis of generalized selection combining with threshold test per branch (T-GSC), IEEE Trans. Vehic. Technol. 51(5):1018–1029 (Sept. 2002).
4. A. Wittneben, A new bandwidth efficient transmit antenna modulation diversity scheme for linear digital modulation, Proc. IEEE ICC '93, 1993, pp. 1630–1634.
5. S. Al-Semari and T. Fuja, Performance analysis of coherent TCM systems with diversity reception in slow Rayleigh fading, IEEE Trans. Vehic. Technol. (Jan. 1999).
6. G. Ungerboeck, Channel coding with multilevel/phase signals, IEEE Trans. Inform. Theory 28:55–67 (Jan. 1982).
7. V. Tarokh, N. Seshadri, and A. R. Calderbank, Space-time codes for high data rate wireless communication: Performance criterion and code construction, IEEE Trans. Inform. Theory 744–765 (March 1998).
8. A. Wittneben, Base station modulation diversity for digital SIMULCAST, Proc. IEEE VTC, May 1993, pp. 505–511.
9. N. Seshadri and J. Winters, Two signaling schemes for improving the error performance of FDD transmission systems using transmit antenna diversity, Proc. IEEE VTC, May 1993, pp. 508–511.
10. J. Winters, The diversity gain of transmit diversity in wireless systems in Rayleigh fading, Proc. ICC/Supercomm, New Orleans, LA, May 1994, Vol. 2, pp. 1121–1125.
11. V. Tarokh, A. Naguib, N. Seshadri, and A. R. Calderbank, Space-time codes for high data rate wireless communication: Performance criteria in the presence of channel estimation errors, mobility and multiple paths, IEEE Trans. Commun. (Feb. 1999).
12. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, Space-time block coding for wireless communications: Performance results, IEEE J. Select. Areas Commun. 17:451–459 (March 1999).
13. V. Tarokh, H. Jafarkhani, and A. R. Calderbank, Space-time block codes from orthogonal designs, IEEE Trans. Inform. Theory 45:1456–1467 (July 1999).
14. T. K. Y. Lo, Maximum ratio transmission, IEEE Trans. Commun. 47:1458–1461 (Oct. 1999).
15. S. M. Alamouti, A simple transmit diversity technique for wireless communications, IEEE J. Select. Areas Commun. 16(8):1451–1458 (Oct. 1998).
16. P. W. Wolniansky, G. J. Foschini, G. D. Golden, and R. A. Valenzuela, V-BLAST: An architecture for realizing very high data rates over rich scattering wireless channels, Proc. ISSSE-98, Sept. 1998, pp. 295–300.
17. A. Naguib, V. Tarokh, N. Seshadri, and A. R. Calderbank, A space-time coding for high-data-rate wireless communications, IEEE JSAC, Oct. 1998.
18. A. Naguib, N. Seshadri, and A. R. Calderbank, Increasing data rate over wireless channels, IEEE Signal Process. Mag. 77–92 (May 2000).
19. H. Sampath and A. J. Paulraj, Joint transmit and receive optimization for high data rate wireless communication using multiple antennas, Proc. 33rd IEEE Asilomar Conf. Signals, Systems, and Computers, Oct. 1999, Vol. 1, pp. 215–219.
20. C.-N. Chuah, D. N. C. Tse, J. M. Kahn, and R. A. Valenzuela, Capacity scaling in MIMO wireless systems under correlated fading, IEEE Trans. Inform. Theory 48(3):637–650 (March 2002).
21. S. Catreux, L. J. Greenstein, and V. Erceg, Some results and insights on the performance gains of MIMO systems, IEEE J. Select. Areas Commun. 21(5):839–847 (June 2003).
22. A. Jemmali and A. Kouki, Investigation of MIMO channel correlation and capacity based on partial embedded RF measurements, Proc. IEEE CCECE 2004, Niagara Falls, Canada, May 2004, pp. 531–534.
23. G. Levin and S. Loyka, Statistical analysis of a measured MIMO channel, Proc. IEEE CCECE 2004, Niagara Falls, Canada, May 2004, pp. 875–878.
24. S. Haykin and M. Moher, Modern Wireless Communications, Prentice-Hall, 2005.
25. G. D. Durgin, Space-Time Wireless Channels, Prentice-Hall, 2003.
26. A. J. Paulraj, R. U. Nabar, and D. Gore, Introduction to Space-Time Wireless Communications, Cambridge Univ. Press, 2003.
27. S. N. Diggavi, N. Al-Dhahir, A. Stamoulis, and A. R. Calderbank, Great expectations: The value of spatial diversity in wireless networks, Proc. IEEE 92(2):219–246 (Feb. 2004).
28. A. J. Paulraj, D. A. Gore, R. U. Nabar, and H. Bolcskei, An overview of MIMO systems: A key to gigabit wireless, Proc. IEEE (special issue on gigabit wireless communications: technologies and challenges) 198–218 (Feb. 2004).
29. O. Oyman, R. U. Nabar, H. Bolcskei, and A. J. Paulraj, Characterizing the statistical properties of mutual information in MIMO channels, IEEE Trans. Signal Process. 51(11):2784–2795 (Nov. 2003).
30. L. Zheng and D. N. C. Tse, Communication on the Grassmann manifold: A geometric approach to the noncoherent multiple antenna channels, IEEE Trans. Inform. Theory 48:359–383 (Feb. 2002).
31. I. E. Telatar, Capacity of multi-antenna Gaussian channels, Eur. Trans. Telecommun. 10(6):585–595 (Nov./Dec. 1999).
32. M. Kang and M.-S. Alouini, Performance analysis of MIMO MRC systems over Rician fading channels, Proc. IEEE VTC '02, 2002, Vol. 2, pp. 869–873.
33. V. Tarokh and T. K. Y. Lo, Principal ratio combining for fixed wireless applications when transmitter diversity is employed, IEEE Commun. Lett. 2(8):223–225 (Aug. 1998).
34. P. A. Dighe, R. K. Mallik, and S. S. Jamuar, Analysis of transmit-receive diversity in Rayleigh fading, IEEE Trans. Commun. 51(4):694–703 (April 2003).
35. A. I. Sulyman and M. Ibnkahla, Performance analysis of nonlinearly amplified M-QAM signals in MIMO channels, Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP '04), May 2004.
36. R. Choi and R. Murch, MIMO transmit optimization for wireless communication systems, Proc. IEEE Int. Workshop Electron. Design, Test, and Applications (DELTA '02), 2002.
37. X. Zhu and R. D. Murch, Performance analysis of maximum likelihood detection in a MIMO antenna system, IEEE Trans. Commun. 50(2) (Feb. 2002).
38. N. Al-Dhahir, Space-time coding and signal processing for broadband wireless communications, in M. Ibnkahla, ed., Signal Processing for Mobile Communications Handbook, CRC Press, 2004, Chap. 13.
39. M. Ibnkahla et al., Adaptive signal processing for mobile communication, in R. Dorf, ed., Electrical Engineering Handbook (in press).
40. S. Haykin, Adaptive tracking of linear time-variant systems by extended RLS algorithm, IEEE Trans. Signal Process. 45(5):1118–1128 (May 1997).
41. N. Sellami, I. Fijalkow, and M. Siala, Overview of equalization techniques for MIMO fading channels, in M. Ibnkahla, ed., Signal Processing for Mobile Communications Handbook, CRC Press, 2004, Chap. 18.
42. G. Kaleh, Channel equalization for block transmission systems, IEEE J. Select. Areas Commun. 13(1):110–121 (1995).
43. T. Hashimoto, A list-type reduced-constraint generalization of the Viterbi algorithm, IEEE Trans. Inform. Theory IT-33:866–876 (1987).
MINIATURIZED PACKAGED (EMBEDDED)
ANTENNAS FOR PORTABLE WIRELESS DEVICES
M. ALI
University of South Carolina
Columbia, South Carolina
1. INTRODUCTION
With the rapid growth of wireless communications there has been an ever-increasing demand for small, wideband/multiband packaged or embedded antennas for mobile phones, wireless PDAs, pagers, GPS receivers, implantable wireless devices, and RFID tags [1–3]. The application list is not exhaustive and may also include many other scenarios, such as man-pack devices for the land warrior and smart multifunctional wireless devices for law enforcement personnel. Where an embedded antenna is concerned, one thing is common: the antenna is packaged within the housing of the device. In some cases the antenna is printed directly on the device PCB, its housing, or an onboard chip.
This problem, however, is not very easy to solve. Depending on the specific application, there is always a set of requirements that must be fulfilled before a useful antenna can be designed. These requirements can vary widely from mobile phone applications to Bluetooth to GPS. Nevertheless, the key challenges that we need to confront are bandwidth, gain, radiation pattern, polarization, and SAR (specific absorption rate). When an antenna is packaged or embedded within a device, it suffers degradation in some or all of these performance characteristics. This happens because the antenna (1) needs to be miniaturized to be accommodated within the device and (2) operates in close proximity to other metallic and/or dielectric objects in its vicinity. Thus a careful evaluation of antenna performance is required. Unfortunately, since the antenna is very platform-dependent, any change in the platform or embedding medium requires full characterization and optimization. Although phenomenal progress has been made in electromagnetic analysis using the finite-element method, the method of moments (MoM), or the finite-difference time-domain method, the results obtained therefrom can serve as guidelines only. The actual performance predictor is an antenna prototype built and tested in the laboratory. Simulations must still be conducted to gain a broad knowledge and an overall understanding of the antenna design. In circumstances where the simulation model can replicate the exact CAD environment of the wireless device, very realistic results are obtainable. The efficacy of the simulation tools lies in their rapid prediction capabilities, which save a significant amount of time from the concept to the production phase of an embedded antenna. Thus modeling and measurement must proceed hand in hand to obtain a functioning antenna in an embedded environment.

Note that as the development phase of the product progresses, more and more variables start to add up. Thus a significant number of measurements need to be conducted to ascertain that performance criteria are met. As a simple example, consider a mobile phone antenna. The design starts with a simple model of the antenna on a blank printed-circuit board. Then the board gets populated with components, the mechanical parts are added, the radio starts functioning, the audio works, and so on. Thus the antenna needs to be measured and tuned accordingly in free space, in the presence of a phantom in various talk positions, and for SAR at every step of the way.
2. ANTENNAS FOR MOBILE PHONES
Before discussing antennas for mobile phones, it is worthwhile to mention the frequency bands of interest. For instance, for the AMPS (Advanced Mobile Phone System), the frequency bands are 824–859 MHz for transmit (Tx) and 869–894 MHz for receive (Rx) (see Table 1). From an antenna design perspective, the two frequency bands are fairly close to each other; thus a single antenna is usually designed to support the entire 824–894-MHz band. The respective transmit and receive frequency bands for the GSM, DCS, and PCS systems are listed in Table 1 [4].

Table 1. Frequency Bands of Different Mobile Telephone Systems

System          Transmit Band (Tx), MHz    Receive Band (Rx), MHz    Antenna Operational Band, MHz
AMPS            824–859                    869–894                   824–894
GSM             880–915                    925–960                   880–960
DCS             1,710–1,785                1,805–1,880               1,710–1,880
PCS/GSM 1900    1,850–1,910                1,930–1,990               1,850–1,990

Lately, almost all phones are at least dual-band (one low-frequency band and one high-frequency band). Interest is also growing in developing triple- or quad-band phones, which will enable a user to use the same phone in different geographic locations with different air interface standards. One example is a dual-mode AMPS/GSM phone, which can easily be triple-band. Clearly, it is greatly desirable to support all three bands with just one antenna; thus there is demand for triple- and quad-band antennas.
A mobile phone antenna has to satisfy various performance and regulatory requirements. Among these are bandwidth, gain, radiation pattern, and SAR (specific absorption rate). The antenna must have good VSWR bandwidth (usually within a maximum VSWR of 2.5:1). Even though a VSWR limit of 2:1 is preferred, it is difficult to achieve that with a small embedded antenna. The antenna must also have reasonable peak and average gain in free space and in the talk position. The gain in the talk position is extremely critical to ensure proper operation. The peak gain provides a good basis for determining the EIRP (effective isotropic radiated power). While there are near-field chambers to measure complete three-dimensional patterns and the antenna efficiency thereof, a generally acceptable representation is obtainable by measuring the principal plane cuts. If average gain data are required, they can be obtained from the total three-dimensional field distribution. The amount of head blockage (pattern shadowed by or energy absorbed by the user's head) varies from antenna to antenna and from phone to phone. Thus proper electromagnetic modeling or measurement is necessary to predict the antenna average gain for each design. Antenna gain in the talk position is generally measured with the aid of a phantom head containing brain-simulating fluid.
After the phone is manufactured, radiated live tests are
conducted and EIRP and receiver sensitivity are mea-
sured at different channels to obtain a complete picture of
the phone performance. This is done before the product
and test data are sent to the respective regulatory agen-
cies for compliance. In the United States the Federal Com-
munications Commission (FCC) has a set of requirements
that must be met before a mobile phone can be sold in the
market. These include the EIRP, out-of-band emission,
and most importantly, SAR (specific absorption rate).
2.1. External Antennas
Until around 2000 most mobile phone antennas were external. In the earlier days, end-fed sleeve dipoles were used
to achieve a figure-eight radiation pattern [1]. Such patterns are most common with half-wave dipole antennas. Although a dipole antenna works better than a monopole (it is ground-plane-independent and suffers from less head blockage), its larger size and the requirement for a balun (when a coaxial line feed is used) forced design engineers to explore and utilize the monopole geometry. In the latter case the cellular phone PCB (printed-circuit board), along with its housing (if metallic), acts as the monopole ground plane (counterpoise).

For example, at 900 MHz a resonant thin-wire monopole antenna should be about 78 mm long. This length will vary to some extent based on the wire radius and the position of the antenna with respect to the device housing. Since a conventional monopole operates on a large ground plane (several wavelengths in diameter for a circular ground plane), its radiation pattern is restricted to the upper hemisphere only, with the peak of the beam directed toward the horizon. The directivity in such a case is 5.1 dBi, which is 3 dB higher than the directivity of a half-wave dipole [5]. Such a large ground plane is completely impractical for a mobile phone. Because the mobile phone ground plane is much smaller, there is a significant amount of current flow on it, and the ground plane generally dominates the radiation pattern. If a phone measures 110 × 40 × 25 mm, the 78-mm-long antenna and the phone housing together represent an asymmetric dipole in which the longer and wider ground plane dominates over the small monopole antenna. Thus the radiation pattern has a butterfly shape and is directed toward the earth for a vertically oriented phone [6]. The maximum field strength is not directed toward the horizon.
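As a quick check of the 78-mm figure, a free-space quarter wavelength at 900 MHz is about 83 mm; thin-wire monopoles typically resonate a few percent shorter, and the sketch below applies an assumed shortening factor of 0.95 for illustration only.

    # Quarter-wave monopole length check at 900 MHz.
    c0 = 3e8                                  # free-space speed of light, m/s
    f = 900e6
    quarter_wave = c0 / f / 4                 # ~83.3 mm
    shortening = 0.95                         # assumed typical thin-wire/end-effect shortening factor
    print(round(quarter_wave * 1e3, 1), round(shortening * quarter_wave * 1e3, 1))   # ~83.3 mm -> ~79.2 mm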
External antennas have also evolved a great deal over the years, primarily due to the need for miniaturization. Engineers have focused on reducing the antenna size by inductive loading. This is achieved by employing a helical, meander, or zigzag geometry. Such an antenna can, in general, have a three-dimensional shape; in the case of a meander or zigzag configuration it can also be planar. Examples of small helical or meander stub antennas as small as 35–40 mm are everywhere. Some phones come with a retractable geometry in which ordinarily the antenna is a small stub which, when extended, can be much longer. As is apparent, a small stub will be more susceptible to performance degradation when placed close to a user's head than a 78-mm-long monopole antenna, simply because the smaller stub is shadowed much more than a longer antenna. However, experience has shown that a small stub still provides reasonable performance in most cases. For dual- or triple-band operation, branches are created to excite separate current flow paths. Usually the branch having the longer current flow path is responsible for the low-frequency band of operation, while the shorter current flow path is responsible for the high-frequency band.
2.2. Packaged (Embedded) Planar Inverted-F Antennas
2.2.1. Background. There has been a surge of interest in planar inverted-F antennas (PIFAs) for mobile phone applications [7–19]. Such antennas are smaller than resonant half-wavelength-long microstrip patches and can easily be placed internally within the housing of a mobile phone. For mobile phone applications the PIFA is usually placed under the back cover of the phone, right above the battery line.

The PIFA evolved from the shorted quarter-wave microstrip patch antenna. A conventional microstrip patch is a half-wavelength (0.5λ) long (guided wavelength), including the edge effects and dielectric loading (see Fig. 1). A quarter-wave patch has a short circuit along one of its edges (Fig. 2). The short circuit is positioned along one of the patch edges and has a width W and depth h, as indicated in Fig. 2. Thus the length L determines the operating frequency (0.25λ minus the effects of the dielectric and the edges).

In contrast, a PIFA (see Fig. 3) consists of a shorting pin instead of the large shorting plate of a quarter-wave patch. The shorting pin diameter can be the same as that of the probe feed or can be different. Since only one shorting pin is present, the antenna resonant length is approximately determined by L + W, which is about a quarter-wavelength (≈0.25λ) at the operating frequency. However, for mobile phone applications the PIFA is generally positioned at one of the edges of the PCB for convenience and better utilization of space. The PIFA performance is determined largely by the antenna parameters L, W, and h and by the spacing S between the feed and the shorting pin. The size of the PCB also plays a dominant role in antenna performance, particularly in the low (900-MHz) frequency band. As an example, if L = 50 mm, W = 23 mm, h = 6 mm, and the feed-to-shorting-pin spacing is 6 mm, the PIFA will operate at 890 MHz with a bandwidth of 100 MHz within 2.5:1 VSWR, for a ground-plane size of 110 × 50 mm.
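A back-of-envelope check of the quarter-wavelength relation quoted above (resonant length ≈ L + W ≈ 0.25λ) is sketched below; it ignores dielectric loading, fringing, the feed-to-short spacing, and ground-plane effects, so for the 50 × 23 mm example it overestimates the operating frequency (about 1.03 GHz versus the stated 890 MHz) and should be read only as a starting point for full-wave simulation and tuning.

    # Rough PIFA starting-point estimate from the quarter-wavelength relation L + W ~ lambda/4.
    c0 = 3e8                      # free-space speed of light, m/s

    def pifa_resonance_estimate(L_m, W_m):
        """First-cut resonant frequency (Hz) of a single-shorting-pin PIFA, ignoring dielectric
        loading, fringing, feed-to-short spacing, and ground-plane effects."""
        return c0 / (4.0 * (L_m + W_m))

    f_est = pifa_resonance_estimate(0.050, 0.023)     # the 50 mm x 23 mm example from the text
    print(round(f_est / 1e6), "MHz")                  # ~1030 MHz; the actual design operates near 890 MHz,
                                                      # so this estimate is only a starting point for tuning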
In designing a PIFA the primary challenge is to achieve the necessary operating bandwidth, which requires that the PIFA height be about 8–12 mm above the ground plane. This large antenna height makes the phone thicker even though the battery is very thin. It will be greatly advantageous if PIFAs with much smaller heights can be designed. However, the bandwidth becomes extremely narrow as the antenna height is reduced.

Figure 1. Half-wave microstrip patch (patch of length L and width W with a probe feed; the cross-sectional view shows the substrate of permittivity ε_r and height h).
Figure 2. Quarter-wave microstrip patch (patch of length L shorted along one edge, on a substrate of permittivity ε_r and height h).
It has been reported [20–22] that the bandwidth of a PIFA also depends on the size of the ground plane. For instance, for optimal bandwidth in the 900-MHz band, the combined length and width of the ground plane should be about 0.5λ [21]. Ground planes smaller than this provide much narrower bandwidth. As an example, a PIFA (h = 4 mm) on a 90 × 35 mm ground plane has 2.5% bandwidth, while the same antenna on a 130 × 35 mm ground plane has 9.5% bandwidth. Conversely, it can be inferred that, for a fixed bandwidth, a much thinner antenna can be designed if a larger ground plane is utilized. Based on this concept, a slotted meandered ground plane was proposed in [16] that can effectively reduce the height of a PIFA by more than 50%. The meander-line configuration proposed in [16] can be viewed as a slow-wave structure in which the phase velocity of the propagating wave is smaller than the velocity of light. This makes the slotted meandered ground plane appear electrically longer, even though its physical size is unchanged, and hence helps achieve a much thinner PIFA design.
2.2.2. Dual-Band PIFA on Conventional and Modified Ground Planes. An example of a dual-band PIFA is shown in Fig. 4. The antenna consists of two radiating elements joined near the feedpoint. The larger element has a longer current flow path from the feed and hence is responsible for the low band, while the smaller element, which is close to the feed, is responsible for the high band. However, the two elements are not completely independent of each other as far as the overall performance characteristics are concerned. Both elements are at a height h from the PCB, and the feed-to-shorting-pin (ground) spacing is s. All antenna parameters can be adjusted to vary the resonant frequency, bandwidth, and pattern. Typically, a design will start with the development of a full-wave three-dimensional electromagnetic model using the method of moments (MoM), the finite-element method (FEM), or the finite-difference time-domain (FDTD) method. Concurrently or afterward, antenna prototypes must be developed and tested. Prototype development and testing continue for each phase of the phone development, and each added degree of complexity and sophistication in the phone may require the engineer to evaluate and redesign the antenna over time.

As mentioned earlier, the typical dual-band PIFA shown in Fig. 4 depends heavily on the antenna height above the ground plane. Larger heights are usually required to satisfy the bandwidth requirements. To alleviate this problem, an alternative scheme, shown in Fig. 5, has been proposed [16]. Using this slotted meandered ground plane (the slow-wave structure described in Section 2.2.1), PIFA heights can be reduced by more than 50%.

Figure 6 shows the computed and measured VSWR data for the dual-band designs. As is apparent, a significant improvement in bandwidth can be achieved with the proposed new ground plane in both the low- and high-frequency bands. In the low-frequency band, the computed bandwidths for the antennas on the conventional and modified ground planes are 2.1% and 7.8%, respectively; the measured bandwidth for the PIFA on the modified ground plane is 7.6%. In the high-frequency band, the bandwidths of the antennas on the conventional and modified ground planes are 3.1% and 8.8%, respectively, whereas the measured bandwidth on the modified ground plane is 7.1%.
2.3. Packaged (Embedded) Monopole-Type Antennas
Apart from PIFAs, monopole-type radiators are also of in-
terest for embedded applications. In that case the antenna
can be considered as a volume lying at a height adjacent to
Figure 3. PIFA on a mobile phone PCB.
Figure 4. Dual-band PIFA on a conventional PCB. [© 2004 IEEE. Reprinted, with permission, from M. F. Abedin and M. Ali, Modifying the ground plane and its effect on planar inverted-F antennas (PIFAs) for mobile phone handsets, IEEE Anten. Wireless Propag. Lett. 2(15):226-229 (2003).]
Figure 5. Dual-band PIFA on a meandered PCB. [© 2004 IEEE. Reprinted, with permission, from M. F. Abedin and M. Ali, Modifying the ground plane and its effect on planar inverted-F antennas (PIFAs) for mobile phone handsets, IEEE Anten. Wireless Propag. Lett. 2(15):226-229 (2003).]
Figure 6. VSWR characteristics of dual-band PIFAs: (a) low band; (b) high band; conventional and modified ground planes, computed and measured. [© 2004 IEEE. Reprinted, with permission, from M. F. Abedin and M. Ali, Modifying the ground plane and its effect on planar inverted-F antennas (PIFAs) for mobile phone handsets, IEEE Anten. Wireless Propag. Lett. 2(15):226-229 (2003).] (This figure is available in full color at http://www.mrw.interscience.wiley.com/erfme.)
the ground plane. In general there cannot be any metal
below the antenna. In addition, a clearance area should be
kept between the antenna and the
ground plane (d in Fig. 7). The antenna should have a
specific height h from the ground plane.
By developing antenna geometries of various shapes
and sizes, operation in single or multiple frequency
bands can be achieved. Such an antenna has been de-
scribed in [23,24], consisting of a driven meander-line el-
ement and two parasitic coupled elements. The geometric
configuration, size, and proximity of the driven and par-
asitic elements help realize the desired multiband
operation. The complete antenna assembly is internal to
the handset. The antenna can be tuned to operate in ei-
ther (1) the 824-894-, 880-960-, and 1850-1990-MHz
bands or (2) the 824-894-, 880-960-, and 1710-1880-MHz
bands. The size of the antenna is 50 × 10 × 6 mm (3 cm³)
or less.
The geometry of the antenna and its associated print-
ed-circuit board (PCB) is shown in Fig. 8. As can be seen,
there are two metal layers. The bottom layer (layer 2)
consists of a PCB and two parasitic metallic strips. The
meander-line element is on the top layer at a height h from
the PCB. The antenna volume is 50 × 10 × h mm. The dis-
tance d is a small separation between one of the parasites
and the PCB that can be minimized when PCB space is
critical. The parasites are directly connected to the PCB
ground. The antenna is fed using a connector pin
from an RF signal pad on the PCB (not shown).
The double-meander geometry for the antenna has
been chosen for two reasons: (1) to shorten the length of
the antenna and make it the same size as the width of the
PCB (50 mm) and (2) to achieve wideband characteristics
[25]. The length of the antenna can be further reduced
(current length = 50 mm) by increasing the width (anten-
na width = 10 mm). Note that the length of a resonant
quarter-wave monopole operating at 900 MHz is about
78 mm. The double meandering reduces the antenna
length to 50 mm, so that it can be enclosed within the de-
vice housing.
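As a rough check of the 78-mm figure, the following sketch computes the ideal free-space quarter wavelength at 900 MHz and applies an assumed few-percent end-effect shortening (the 0.95 factor is an illustrative assumption, not a value from the cited design).

    # Sketch: approximate resonant quarter-wave monopole length at 900 MHz.
    c = 3.0e8                              # speed of light (m/s)
    f = 900e6                              # frequency (Hz)
    quarter_wave = c / (4 * f) * 1000      # ideal quarter wavelength, ~83.3 mm
    practical = 0.95 * quarter_wave        # assumed ~5% end-effect shortening
    print(f"lambda/4 = {quarter_wave:.1f} mm, practical length ~ {practical:.1f} mm")
    # ~83 mm ideal, ~79 mm with the assumed shortening, in line with the
    # ~78 mm quoted in the text before meandering reduces it to 50 mm.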
Computed VSWR as a function of antenna height h is
shown in Fig. 9, where l = 26.5 mm, s = 6 mm, d = 4 mm,
and w = 2 mm. It is apparent that the antenna has two
resonances, at around 900 and 1920 MHz. The first reso-
nance is due to the meander antenna, while the second is
due to the parasitics attached to the PCB [25,26]. The an-
tenna VSWR changes as h varies, which has two effects:
(1) a shift in the resonant frequencies (as h is reduced, the
resonant frequencies move higher, as expected) and (2) a
change in the overall level of the minimum VSWR. It is also
clear that for h = 6 mm the antenna is well suited for triple-band
operation. In the low band the bandwidth is 250 MHz
(27.8%) within a VSWR of 2.5:1. This is far greater than
the required bandwidth for AMPS 800 and GSM 900 com-
bined (15.25%). In the high band the antenna bandwidth
is 9.4%. The bandwidth required for TDMA/GSM 1900 is
1850-1990 MHz, or 7.3%. For practical purposes a VSWR of
2.5:1 as an upper limit has generally been found to be ac-
ceptable for mobile handsets, since it creates only about 0.4 dB
of additional loss as the VSWR changes from 2:1 to 2.5:1.
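The quoted 0.4-dB figure follows from the standard relation between VSWR, reflection coefficient magnitude, and mismatch loss. The sketch below is an illustrative calculation (not taken from the cited work) of the mismatch loss at 2:1 and 2.5:1 VSWR and their difference.

    import math

    def mismatch_loss_db(vswr):
        """Mismatch loss (dB) for a given VSWR: -10*log10(1 - |Gamma|^2)."""
        gamma = (vswr - 1.0) / (vswr + 1.0)   # reflection coefficient magnitude
        return -10.0 * math.log10(1.0 - gamma ** 2)

    loss_20 = mismatch_loss_db(2.0)    # ~0.51 dB
    loss_25 = mismatch_loss_db(2.5)    # ~0.88 dB
    print(f"2:1 -> {loss_20:.2f} dB, 2.5:1 -> {loss_25:.2f} dB, "
          f"difference ~ {loss_25 - loss_20:.2f} dB")
    # The difference is roughly 0.4 dB, matching the figure quoted above.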
Figure 7. A monopole-type embedded antenna.
Figure 8. Antenna and PCB geometry with associated parameters (dimensions in mm). [© 2004 IEEE. Reprinted, with permission, from M. Ali, G. J. Hayes, H.-S. Hwang, and R. A. Sadler, Design of a multi-band internal antenna for third generation mobile phone handsets, IEEE Trans. Anten. Propag. 51(7):1452-1461 (July 2003).]
A prototype antenna was fabricated and tested (see
Fig. 10). Computed and measured VSWR data are com-
pared in Fig. 11. The resonant frequencies of the antenna
for both the computed and measured cases are about the
same. The measured bandwidths in each band (low and
high) are also in good agreement with the computed band-
widths within 2.5:1 VSWR. It is clear from Fig. 11 that the
antenna operates in the AMPS 800 and GSM 900 bands
within 2:1 VSWR and in the GSM 1900 band within a VSWR
of 2.3:1. Antenna radiation patterns and gain were mea-
sured with reference to two standard-gain antennas (the
gain values of which were known from the manufacturer's
datasheet). A log-periodic dipole antenna and a rectangu-
lar horn antenna were used for the 900- and 1900-MHz
bands, respectively. Measured gains for the two principal
plane patterns are listed in Table 2. The free-space peak
gain at 900 MHz is between 0 and 0.5 dBi, while that at
1900 MHz is between 2.3 and 2.5 dBi. This is expected
since the antenna is more directional in the high band.
Measured normalized radiation patterns for the pro-
posed antenna are shown in Fig. 12. The azimuth
(xy-plane) patterns at 900 and 1900 MHz are shown in
Figs. 12a and 12b. At 900MHz the vertical component is
the dominant one and its variation is nearly uniform. The
front-to-back ratio is about 3 dB. At 1900 MHz the vertical
field component is fairly directional, with a front-to-back ratio
of about 8 dB. Although the vertical component is not
uniform, fairly good angular coverage can still be obtained
when both vertical and horizontal components are com-
bined.
The angular region where the coverage is between -8 and
-10 dB is limited to 60°-120°.
This region will be blocked by the operator's head. The
directionality in the high band can be considered an
advantage, since less energy is deposited in the
operator's head. Also worth noting is the significance of the
total field rather than just one component. In a mobile
environment polarization purity is absent. Thus, when both
components exist and are comparable, they need to be
combined to obtain the total field.
3. ANTENNAS FOR BLUETOOTH/WLAN APPLICATIONS
Bluetooth [27] was pioneered by Ericsson in
the late 1990s and later adopted by a large number of
companies. It is an air interface standard proposed to
support short-distance communication between devices,
Figure 9. Computed VSWR versus frequency with antenna height h (mm) as a parameter; d = 4 mm, l = 26.5 mm. [© 2004 IEEE. Reprinted, with permission, from M. Ali, G. J. Hayes, H.-S. Hwang, and R. A. Sadler, Design of a multi-band internal antenna for third generation mobile phone handsets, IEEE Trans. Anten. Propag. 51(7):1452-1461 (July 2003).]
Figure 10. Laboratory prototype of the proposed antenna. The antenna is placed on a thin transparency film to show all parameters in one picture. In the actual measurement a foam (εr ≈ 1.0) substrate 6 mm thick was used to support the antenna. [© 2004 IEEE. Reprinted, with permission, from M. Ali, G. J. Hayes, H.-S. Hwang, and R. A. Sadler, Design of a multi-band internal antenna for third generation mobile phone handsets, IEEE Trans. Anten. Propag. 51(7):1452-1461 (July 2003).]
such as mobile phones, laptops, desktops, and PDAs. The
frequency band of operation in the United States is
2.4-2.485 GHz. This band also coincides with that of the IEEE
802.11b standard. For more detailed information on Blue-
tooth, please see Ref. 27. Typically the transmitter has
0 dBm of output power, which can provide a link of up to
10 m; this can be extended to up to 100 m by increasing
the output power to 20 dBm. In addition to Bluetooth,
there are the WLAN protocols based on several IEEE
standards, such as IEEE 802.11a, b, e, and g. There is also
HiperLAN in Europe. For high-speed directional WLAN
links, one must resort to directional high-gain antennas,
which are not practical for embedding within the device.
Thus most if not all embedded Bluetooth and WLAN
antennas are essentially nondirectional.
3.1. Surface Mount PIFAs
The most popular among embedded antennas for these
types of applications is the surface mount PIFA as de-
picted in Fig. 13. This antenna is essentially a planar
inverted-F antenna (PIFA) fabricated on a dielectric
substrate, such as FR4. Typical antenna size can be about
25 × 4 × 4 mm (length, width, height). As indicated, the
antenna has a feed and a shorting pin that are connected to
the respective pads on the PCB when surface-mounted.
The antenna size can be further reduced by either modi-
fying the geometry or using higher-dielectric-constant
substrates (ceramics). Geometry modification usually takes
the form of a meander-line configuration, which can be
utilized either on the top surface alone or on the top as well
as the side surfaces. Whether achieved through geometry
modification or a high-dielectric substrate, antenna size re-
duction will result in bandwidth and gain degradation.
Thus one must clearly focus on the required bandwidth
(e.g., for Bluetooth it is 2.4-2.485 GHz). Once a surface
mount PIFA is fabricated, it must be embedded within the
device PCB and housing and its characteristics evaluated.
Since in most circumstances the PIFA has to reside on the
same PCB as the mobile phone antenna, adequate isolation
between them must be ensured (typically 10 dB or better).
Usually this is achieved by employing spatial separation
between the two antennas.
3.2. Integrated IFAs
3.2.1. Board-Mounted IFA. As indicated, surface mount
PIFAs are fabricated separately from the wireless device
PCB and hence they need to be assembled on the PCB la-
ter on. After the placement of the antenna its character-
istics is then evaluated and if performance deciency is
noted the design is changed accordingly. A superior and
alternative solution was proposed by Ali et al. [11,15]. In
their proposal the Bluetooth/WLAN antenna is an inte-
grated inverted-F antenna (IFA) rather than a PIFA. The
IFA is directly printed on the wireless device PCB, and
hence no assembly is required. Testing is conducted as
soon as the board is released. The antenna requires that
Figure 11. Computed and measured VSWR versus frequency; d = 4 mm, s = 6 mm, and l = 26.5 mm. [© 2004 IEEE. Reprinted, with permission, from M. Ali, G. J. Hayes, H.-S. Hwang, and R. A. Sadler, Design of a multi-band internal antenna for third generation mobile phone handsets, IEEE Trans. Anten. Propag. 51(7):1452-1461 (July 2003).]
Table 2. Measured Peak Gain Data for Proposed Antenna (Free-Space) at 5.25 and 5.78 GHz(a)

Frequency (GHz)    Peak Gain (dBi), yz Plane    Peak Gain (dBi), xy Plane
5.25               1.8 at 0°                    0.5 at 210°
5.78               0.8 at 140°                  0.6 at 190°

(a) For pattern characteristics and beam peak locations for the yz and xy planes, see Fig. 6.
Source: © 2004 IEEE. Reprinted, with permission, from M. Ali, T. Sittironnarit, H.-S. Hwang, R. A. Sadler, and G. J. Hayes, Wideband/dual-band packaged antenna for 5-6 GHz WLAN application, IEEE Trans. Anten. Propag. 52(2):610-615 (Feb. 2004).
there be an opening on both sides of the PCB so that it can
radiate (see Fig. 14). The antenna is directly printed on
the substrate material. In Fig. 14, dielectric material has
been shown as removed from the region to help visualize
the antenna. In reality, dielectric material will be present
and the antenna will be printed on it. Note that the an-
tenna consists of a trace, a feed (that brings the signal
through a transmission line, usually a microstrip or strip-
line), a via (through hole), and a shorting pin just adjacent
to the feed. The transmission line and via are not shown in
the figure. In the window where the antenna is located
there is no metal on the top or bottom part of the PCB
except the antenna, its feed, and the shorting pin. In ad-
dition, when the PCB is placed inside a device housing,
there cannot be any metal shadowing the antenna.
In Fig. 15 the measured VSWR of the proposed antenna is
plotted against frequency. Note that the
antenna works under 2:1 VSWR throughout the entire
Bluetooth band. Finally, we show measured elevation
Figure 12. Measured normalized azimuth plane patterns: (a) xy plane (900 MHz); (b) xy plane (1900 MHz). Solid line: vertical component; dashed line: horizontal component; d = 4 mm, s = 6 mm, and l = 26.5 mm. [© 2004 IEEE. Reprinted, with permission, from M. Ali, G. J. Hayes, H.-S. Hwang, and R. A. Sadler, Design of a multi-band internal antenna for third generation mobile phone handsets, IEEE Trans. Anten. Propag. 51(7):1452-1461 (July 2003).]
Figure 13. Surface-mount PIFA.
Figure 14. Integrated inverted-F antenna (IFA) [15].
plane pattern data for a typical Bluetooth PIFA on a PC
board (Fig. 15). For comparison the pattern of a half-wave
dipole on a PC board is also shown. The dipole is twice as
long as the PIFA. Note that the peak gain of the PIFA is
slightly smaller. The PIFA pattern is much broader and
does not have sharp nulls as the dipole does. Patterns in other
orthogonal planes also show comparable performance.
3.2.2. Flexible Film-Type Antenna. Ali et al. [14] pre-
sented a small internal inverted-F antenna printed on the
inside surface of the stylus holder of a PDA (Fig. 16). The
antenna can be printed on a flexible film substrate and
bonded to the plastic with an adhesive. The proposed an-
tenna operated with or without the stylus, since both the
stylus and the stylus holder were made of plastic.
Input impedance data for the proposed antenna with
and without a dielectric insert are shown in Fig. 17. It is
evident that the impedance locus for each case is very
close to the center of the Smith chart, especially within the
Figure 15. Measured VSWR and radiation patterns [15].
frequency range of 2.4-2.485 GHz. The center frequency is
2.45 GHz, and the bandwidth is 11%. The antenna oper-
ates within 1.5:1 VSWR throughout the entire Bluetooth
band (2.4-2.485 GHz).
The yz-plane pattern is shown in Fig. 18. This is the
most important pattern, as it provides a clear understanding
of the angular coverage that the antenna can provide.
3.3. Monopole-Type Radiators
In Fig. 19 a monopole-type embedded antenna is shown for
wideband WLAN application in the 5-6-GHz bands
[28,29]. This antenna can support the IEEE 802.11a
wireless local-area network bands (5.15-5.35 GHz and
5.725-5.825 GHz). The configuration is similar to the
ones presented in Refs. 30 and 31. In Ref. 30, only a sin-
gle-band folded design was presented for Bluetooth appli-
cation (2.4-2.485 GHz, 3.5% bandwidth). No method of
wideband/dual-band operation was described. The pro-
posed packaged design can either be used as a wideband
antenna, which can provide bandwidths in excess of 10%
within 2:1 VSWR, or it can be used for dual-band opera-
tion, where the bands are separated by 500-700 MHz in
the 5-6 GHz band. This latter property has been exploited
to present a design that satisfies the IEEE 802.11a WLAN
5.15-5.35-GHz and 5.725-5.825-GHz bands. The antenna
design presented here is packaged within the housing of a
personal digital assistant (PDA) and has maximum
dimensions of 28 × 9 × 3 mm. The wideband/dual-band op-
eration has been achieved through proximity parasitic
Figure 16. A uniquely packaged inverted-F antenna for Bluetooth or WLAN. [© 2004 IEEE. Reprinted, with permission, from M. Ali, R. A. Sadler, and G. J. Hayes, A uniquely packaged internal inverted-F antenna for Bluetooth or wireless LAN application, IEEE Anten. Wireless Propag. Lett. 1(1):5-7 (2002).]
Figure 17. Input impedance of the proposed antenna, with and without the dielectric insert (2.3-2.8 GHz). [© 2004 IEEE. Reprinted, with permission, from M. Ali, R. A. Sadler, and G. J. Hayes, A uniquely packaged internal inverted-F antenna for Bluetooth or wireless LAN application, IEEE Anten. Wireless Propag. Lett. 1(1):5-7 (2002).]
coupling between a folded radiator and an extended PCB
ground plane. The dimensions of the extended PCB
ground plane have been appropriately adjusted to ensure
the desired coupling.
The proposed antenna can also be manufactured to
operate in air. In such a case, manufacturing the antenna
should be much simpler and easier. Computed VSWR
data for the antenna on FR4 and in air are shown in Fig. 20.
The bandwidth obtained is 15.5% with a 3-mm antenna height
and 18.0% with a 4-mm antenna height (within a VSWR of
2:1). In contrast, the bandwidth obtained with FR4 was 10%
with a 3-mm antenna height (within a VSWR of 2:1). Thus
wider bandwidth can be obtained by replacing FR4 with
air. However, the dimensions of the antenna in air are
about 50 × 22 × 3 mm, while those on FR4 are 28 × 9 ×
3 mm. A laboratory prototype of the proposed antenna
Figure 19. Antenna and PCB geometry. [© 2004 IEEE. Reprinted, with permission, from M. Ali, T. Sittironnarit, H.-S. Hwang, R. A. Sadler, and G. J. Hayes, Wideband/dual-band packaged antenna for 5-6 GHz WLAN application, IEEE Trans. Anten. Propag. 52(2):610-615 (Feb. 2004).]
Figure 18. yz-plane pattern. [© 2004 IEEE. Reprinted, with permission, from M. Ali, R. A. Sadler, and G. J. Hayes, A uniquely packaged internal inverted-F antenna for Bluetooth or wireless LAN application, IEEE Anten. Wireless Propag. Lett. 1(1):5-7 (2002).]
(on FR4) was built and tested for VSWR. Measured and
computed VSWR data are compared in Fig. 20. The agree-
ment between the measured and computed data is quite
good. Computed and measured resonant frequencies are
about the same. The measured bandwidths in both bands
are also in good agreement with the computed bandwidths
within 2:1 VSWR. It is clear that the antenna satisfies
the bandwidth requirements for the IEEE 802.11a LAN
(5.15-5.35 GHz and 5.725-5.825 GHz). The midband
VSWR is only as high as 2.7:1. Antenna radiation pat-
terns and gain were measured inside an anechoic chamber.
Measured gains for the two principal plane patterns are
listed in Table 2. The free-space peak gain at 5.25 GHz is
1.8 dBi, while that at 5.78 GHz is 0.8 dBi.
4. ELECTRICALLY SMALL ANTENNAS, DIELECTRIC
LOADING, AND BANDWIDTH
As defined by Wheeler [32], an electrically small antenna
has its maximum dimension contained within a sphere
of radius λ/2π. Thus, clearly, if a linear antenna such as
a straight thin-wire dipole is constructed, it needs to be λ/π
or smaller to fall within this category. A comprehensive
study of the minimum achievable antenna quality factor
(Q) is available in the literature [32-36]. These references
are extremely useful if one intends to explore the funda-
mental limits on small antennas. Since most embedded or
packaged antennas for mobile phones utilize these plat-
forms, it is unlikely that these antennas will fall within
the category of electrically small antennas. However, an-
tennas for wireless radios or small antennas for VHF and
other UHF applications may belong to the class of electri-
cally small antennas. Primarily, the antenna bandwidth
diminishes with extreme miniaturization, and specifically
for an electrically small antenna, the spherical volume
needs to be utilized properly to achieve the lowest possible
Q. A thin-wire dipole represents a rather poor utilization
of the radian sphere; a normal-mode helical antenna, most
commonly used in police radios (scanners) and mobile
phones, represents a better utilization of the same axial
dimension; while the disk-loaded monopole or the Goubau
antenna represents even better utilization of the small
antenna volume. There has been some effort in terms of
genetically optimizing electrically small antennas [37].
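To put numbers on the fundamental limit discussed in these references, the sketch below evaluates the McLean lower bound Q ≥ 1/(ka)³ + 1/(ka) for a lossless, linearly polarized antenna enclosed by a sphere of radius a [35]; the 900-MHz frequency and the particular radii are illustrative assumptions only.

    import math

    def min_radiation_q(ka):
        """McLean lower bound on radiation Q for a lossless, linearly
        polarized antenna with electrical size ka (Ref. 35)."""
        return 1.0 / ka ** 3 + 1.0 / ka

    f = 900e6                              # assumed frequency (Hz)
    k = 2.0 * math.pi * f / 3.0e8          # free-space wavenumber (1/m)
    for a_mm in (10.0, 20.0, 53.0):        # assumed enclosing-sphere radii (mm)
        ka = k * a_mm / 1000.0
        print(f"a = {a_mm:4.0f} mm -> ka = {ka:.2f}, Q_min ~ {min_radiation_q(ka):.1f}")
    # Smaller ka forces a larger minimum Q, i.e., an inherently narrower
    # fractional bandwidth; ka = 1 (a ~ 53 mm at 900 MHz) marks the
    # radius lambda/(2*pi) boundary used in Wheeler's definition.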
Apart from geometric modification, electrically small
antennas can also be developed using dielectric loading.
This invariably results in narrower antenna bandwidth
and lower antenna gain. For instance, a small GPS patch
using a ceramic dielectric can be easily fabricated. Also,
since the bandwidth required for GPS is extremely small
(only enough to satisfy tolerance), a small antenna size can be
readily achieved. However, comparing a 25-mm² patch
with a 12-mm² patch shows that the peak right-hand cir-
cularly polarized gain can fall from 6.5 to 2 dBi. A good
discussion of dielectric loading of antennas can be found
in Ref. 38. A more recent example of a dielectric-loaded
antenna can be found in Ref. 39.
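A rough way to see how dielectric loading shrinks a patch is the basic half-wave resonance relation L ≈ c/(2f√εr). The sketch below ignores fringing fields, and the GPS L1 frequency and the two patch lengths are used purely for illustration; it is not a reconstruction of the designs cited above.

    # Sketch: approximate relative permittivity needed for a given patch length,
    # using the simple half-wave resonance L ~ c / (2 * f * sqrt(eps_r));
    # fringing effects are ignored.
    c = 3.0e8
    f = 1.57542e9                      # GPS L1 frequency (Hz), for illustration
    for L_mm in (25.0, 12.0):          # illustrative patch lengths (mm)
        eps_r = (c / (2.0 * f * L_mm / 1000.0)) ** 2
        print(f"L = {L_mm:.0f} mm -> eps_r ~ {eps_r:.0f}")
    # Roughly eps_r ~ 15 for a 25-mm patch and eps_r ~ 63 for a 12-mm patch:
    # halving the size requires roughly four times the permittivity, which is
    # what drives the bandwidth and gain penalties noted above.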
5. DISCUSSION AND FUTURE TRENDS
Research will continue on antenna miniaturization and
broadbanding. Miniaturization will be achieved primarily
by utilizing the available antenna volume rather than a
planar surface. Thus designers will increasingly utilize a
three-dimensional space. Antenna geometric modification
will also occur in the form of employing meander, zigzag,
and fractal-type elements.
While electromagnetic bandgap (EBG) materials have
shown tremendous progress in improving the gain of
microstrip patch antennas by reducing the surface waves,
Figure 20. VSWR characteristics: air (h = 3 mm, computed), air (h = 4 mm, computed), FR4 (h = 3 mm, computed), and FR4 (h = 3 mm, measured). [© 2004 IEEE. Reprinted, with permission, from M. Ali, T. Sittironnarit, H.-S. Hwang, R. A. Sadler, and G. J. Hayes, Wideband/dual-band packaged antenna for 5-6 GHz WLAN application, IEEE Trans. Anten. Propag. 52(2):610-615 (Feb. 2004).]
such structures have not been very useful for small mobile
antenna applications. This is due partly to the fact that
mobile antennas are so dependent on their counterpoise.
There have already been some research activities in terms
of small, low-profile antenna development using novel ma-
terials [40-46]. Nevertheless, in the future, negative-reflection-
coefficient materials, negative transmission-line materials,
and EBG materials may find some useful role in packaged
(embedded) antenna development.
Fractal geometries have been considered for designing
small resonant antennas because of their unique space-
filling properties. The usefulness of a Hilbert wire anten-
na in lowering the antenna resonant frequency has
been studied [47-49]. Zhu et al. [48] conducted a paramet-
ric study on a matched Hilbert antenna to understand its
bandwidth and cross-polarization level. A printed Hilbert
antenna was proposed [49] for operation in the UHF band.
More recently we have proposed a miniaturized Hilbert-
shaped PIFA for dual-band applications (900 and
1900 MHz) [50]. The proposed antenna occupied a volume
of only 40 × 10.65 × 10 mm (4.3 cm³). In contrast, a con-
ventional dual-band PIFA made of polygonal plates occu-
pies a volume of 40 × 22.6 × 10 mm (8.4 cm³). Thus, a 50%
saving in antenna volume was readily achieved with our
proposed design.
Apart from wire or printed dipole/monopole-type an-
tennas, miniaturized slot antennas are also drawing in-
terest. Some examples can be found in Refs. 51-54. As
wireless devices shrink in size and more and more func-
tionalities are added (phone, WLAN, GPS for E-911), the
need for reconfigurable/multifunctional antennas will also
grow. Such antennas must be researched and developed
within the embedding platform. MEMS switches can play
a dominant role in that area.
As evidenced by numerous publications, smart an-
tennas are becoming increasingly popular as a means of
increasing system capacity in mobile communication systems.
Currently researchers are focusing mostly on arrays for
base-station applications that can scan the beam in space
in an adaptive fashion and track the mobile user. This way
a much larger antenna gain can be achieved with little or
no interference in any other angular direction. Introduc-
ing smart antennas in mobile portable or handheld ter-
minals will require significant progress since array
elements have to be placed in very close proximity to
each other.
Acknowledgment
This work was supported in part by the National Science
Foundation (NSF) Career Award ECS-0237783.
BIBLIOGRAPHY
1. K. Fujimoto and J. R. James, eds., Mobile Antenna Systems Handbook, Artech House, 1994.
2. K.-L. Wong, Design of Non-Planar Microstrip Antennas and Transmission Lines, Wiley, 1999.
3. K.-L. Wong, Compact and Broadband Microstrip Antennas, Wiley, 2002.
4. T. S. Rappaport, Wireless Communications, Principles and Practice, Prentice-Hall, 1996.
5. C. A. Balanis, Antenna Theory Analysis and Design, 2nd ed., Wiley, 1997.
6. M. Ali, Antenna design for mobile hand held devices, Recent Research Development in Microwave Theory and Techniques, Vol. 2, Transworld Research Network, 2002, pp. 261-278.
7. M. A. Jensen and Y. Rahmat-Samii, Performance analysis of antennas for hand-held transceivers using FDTD, IEEE Trans. Anten. Propag. 42:1106-1113 (Aug. 1994).
8. K. L. Virga and Y. Rahmat-Samii, Low-profile enhanced-bandwidth PIFA antennas for wireless communications packaging, IEEE Trans. Microwave Theory Tech. 45:1879-1888 (Oct. 1997).
9. L. Z. Dong, P. S. Hall, and D. Wake, Dual-frequency planar inverted-F antennas, IEEE Trans. Anten. Propag. 45:1451-1458 (Oct. 1997).
10. C. R. Rowel and R. D. Murch, A capacitively loaded PIFA for compact mobile telephone handsets, IEEE Trans. Anten. Propag. 45:837-884 (May 1997).
11. M. Ali and G. J. Hayes, Analysis of integrated inverted-F antennas for Bluetooth applications, IEEE Antennas and Propagation Conf. Wireless Communication Digest, Waltham, MA, Nov. 2000, pp. 21-24.
12. M.-S. Tong, M. Yang, Y. Chen, and R. Mittra, Finite difference time domain analysis of a stacked dual-frequency microstrip planar inverted-F antenna for mobile telephone handsets, IEEE Trans. Anten. Propag. 49:367-376 (March 2001).
13. G. K. H. Lui and R. D. Murch, Compact dual-frequency PIFA designs using LC resonators, IEEE Trans. Anten. Propag. 49(7):1016-1019 (July 2001).
14. M. Ali, R. A. Sadler, and G. J. Hayes, A uniquely packaged internal inverted-F antenna for Bluetooth or wireless LAN application, IEEE Anten. Wireless Propag. Lett. 1(1):5-7 (2002).
15. M. Ali and G. J. Hayes, A small printed integrated inverted-F antenna for Bluetooth application, Microwave Opt. Technol. Lett. 33(5):347-349 (June 5, 2002).
16. M. F. Abedin and M. Ali, Modifying the ground plane and its effect on planar inverted-F antennas (PIFAs) for mobile phone handsets, IEEE Anten. Wireless Propag. Lett. 2(15):226-229 (2003).
17. M. Ali, G. Yang, H. S. Hwang, and T. Sittironnarit, Design and analysis of an R-shaped dual-band planar inverted-F antenna for vehicular applications, IEEE Trans. Vehic. Technol. 53(1):29-37 (Jan. 2004).
18. R. Sadler, G. Hayes, and M. Ali, Compact, Broadband Inverted-F Antennas with Conductive Elements and Wireless Communicators Incorporating Same, U.S. Patent 6,218,992 (April 17, 2001).
19. R. Sadler, M. Ali, and G. J. Hayes, Multi-Frequency Band Inverted-F Antennas with Coupled Branches and Wireless Communicators Incorporating Same, U.S. Patent 6,563,466 (May 13, 2003).
20. M. C. Huynh and W. L. Stutzman, Ground plane effects on PIFA antennas, USNC/URSI Radio Science Meeting Digest, 2000, p. 223.
21. M. C. Huynh, A Numerical and Experimental Investigation of Planar Inverted-F Antennas for Wireless Communication Applications, M.S. thesis, Virginia Tech, 2000.
22. http://www.nokia.com/downloads/aboutnokia/research/library/communication_systems/CS20.pdf.
23. M. Ali, G. J. Hayes, H.-S. Hwang, and R. A. Sadler, Design of a multi-band internal antenna for third generation mobile phone handsets, IEEE Trans. Anten. Propag. 51(7):1452-1461 (July 2003).
24. M. Ali, Dual-Band Antenna Having Mirror Image Meandering Segments and Wireless Communicators Incorporating Same, U.S. Patent 6,184,836 (Feb. 6, 2001).
25. M. Ali, S. S. Stuchly, and K. Caputa, A wide-band dual meander-sleeve antenna, J. Electromagn. Waves Appl. 10(9):1223-1236 (1996).
26. M. Ali, M. Okoniewski, M. A. Stuchly, and S. S. Stuchly, Dual-frequency strip-sleeve monopole for laptop computers, IEEE Trans. Anten. Propag. 47(2):317-323 (Feb. 1999).
27. www.bluetooth.com.
28. M. Ali, T. Sittironnarit, H.-S. Hwang, R. A. Sadler, and G. J. Hayes, Wideband/dual-band packaged antenna for 5-6 GHz WLAN application, IEEE Trans. Anten. Propag. 52(2):610-615 (Feb. 2004).
29. K. V. Kumar, M. Ali, H. S. Hwang, and T. Sittironnarit, Study of a dual-band packaged patch antenna on a PC card for 5-6 GHz wireless LAN applications, Microwave Opt. Technol. Lett. 37:423-428 (June 2003).
30. A. Faraone and D. O. McCoy, The folded patch omnidirectional antenna, IEEE AP-S Int. Symp. Digest, 2001, Vol. 2, pp. 712-715.
31. G. Christodoulou, P. F. Wahid, M. R. Mahbub, and M. C. Bailey, Design of a minimum-loss series-fed foldable microstrip, IEEE Trans. Anten. Propag. 1264-1267 (Aug. 2000).
32. H. A. Wheeler, Small antennas, IEEE Trans. Anten. Propag. AP-23:462-469 (July 1975).
33. L. J. Chu, Physical limitations on omni-directional antennas, J. Appl. Phys. 19:1163-1175 (Dec. 1948).
34. R. C. Hansen, Fundamental limitations in antennas, Proc. IEEE 69:170-182 (Feb. 1981).
35. J. S. McLean, A re-examination of the fundamental limits on the radiation Q of electrically small antennas, IEEE Trans. Anten. Propag. 44:672-676 (May 1996).
36. G. A. Thiele, P. L. Detweiler, and R. P. Peno, On the lower bound of the radiation Q for electrically small antennas, IEEE Trans. Anten. Propag. 51:1263-1269 (June 2003).
37. E. E. Altshuler, Electrically small self-resonant wire antennas optimized using a genetic algorithm, IEEE Trans. Anten. Propag. 50:297-300 (March 2002).
38. K. Fujimoto et al., Small Antennas, Wiley, New York, 1987.
39. J.-I. Moon and S.-O. Park, Small chip antenna for 2.4/5.8-GHz dual ISM-band applications, IEEE Anten. Wireless Propag. Lett. 2(21):313-315 (2002).
40. R. F. J. Broas, D. F. Sievenpiper, and E. Yablonovitch, A high-impedance ground plane applied to a cell-phone handset geometry, IEEE Trans. Microwave Theory Tech. 49(7):1262-1265 (July 2001).
41. Z. Du, K. Gong, J. S. Fu, B. Gao, and Z. Feng, A compact planar inverted-F antenna with a PBG-type ground plane for mobile communications, IEEE Trans. Vehic. Technol. 52(3):483-489 (May 2003).
42. D. Pavlickovski and R. B. Waterhouse, Shorted microstrip antenna on a photonic bandgap substrate, IEEE Trans. Anten. Propag. 51(9):2472-2475 (Sept. 2003).
43. S. Clavijo, R. E. Diaz, and W. E. McKinzie III, Design methodology for Sievenpiper high-impedance surfaces: An artificial magnetic conductor for positive gain electrically small antennas, IEEE Trans. Anten. Propag. 51:2678-2690 (Oct. 2003).
44. M. F. Abedin and M. Ali, Application of EBG substrates to design ultra-thin wideband directional dipoles, IEEE Antennas and Propagation Society Int. Symp. and URSI/USNC Meeting, Monterey, CA, June 2004.
45. F. Auzanneau and R. W. Ziolkowski, Artificial composite materials consisting of nonlinearly loaded electrically small antennas: Operational-amplifier-based circuits with applications to smart skins, IEEE Trans. Anten. Propag. 47:1330-1339 (Aug. 1999).
46. R. W. Ziolkowski and A. D. Kipple, Application of double negative materials to increase the power radiated by electrically small antennas, IEEE Trans. Anten. Propag. 51:2626-2640 (Oct. 2003).
47. S. R. Best and J. D. Morrow, The effectiveness of space-filling fractal geometry in lowering resonant frequency, IEEE Anten. Wireless Propag. Lett. 1(5):112-115 (2002).
48. J. Zhu, A. J. Hoorfar, and N. Engheta, Bandwidth, cross-polarization and feed point characteristics of matched Hilbert antennas, IEEE Anten. Wireless Propag. Lett. 2(1):2-5 (2003).
49. X. Chen, S. S. Naeini, and Y. Liu, A downsized printed Hilbert antenna for UHF band, Proc. IEEE Antennas and Propagation Society Int. Symp., Columbus, OH, June 2003, Vol. 2, pp. 581-584.
50. M. Z. Azad and M. Ali, A miniature Hilbert planar inverted-F antenna (PIFA) for dual-band mobile phone applications, IEEE Antennas and Propagation Society Int. Symp. and URSI/USNC Meeting, Monterey, CA, June 2004.
51. R. Azadegan and K. Sarabandi, A novel approach for miniaturization of slot antennas, IEEE Trans. Anten. Propag. 51(3):421-429 (March 2003).
52. J. M. Kim, J. G. Yook, W. Y. Song, Y. J. Yoon, and J. Y. Park, Compact meander type slot antennas, IEEE AP-S Symp. Digest, 2001.
53. H. Y. Wang, J. Simkin, C. Emason, and M. J. Lancaster, Compact meander slot antennas, Microwave Opt. Technol. Lett. 24:377-380 (2000).
54. A. T. M. Sayem, M. Ali, and H. S. Hwang, A miniature Hilbert slot antenna for dual-band wireless application, IEEE Antennas and Propagation Society Int. Symp. and URSI/USNC Meeting, Monterey, CA, June 2004.
MISSILE GUIDANCE
ARMANDO A. RODRIGUEZ
Arizona State University
Tempe, Arizona
1. A BRIEF HISTORY: FROM 1944 TO THE PRESENT
1.1. The Missile Age
Even prior to World War I, when powered flight was in
its first decade, forward-thinking individuals from seve-
ral countries advocated the use of unmanned vehicles to
deliver high-explosive weapons from afar. Although the
earliest efforts to develop a practical flying bomb were un-
dertaken in the United States and Great Britain, it was in
Germany that a workable concept finally emerged. After
14 years of intense research, the Germans ushered in the
missile age during World War II with their Vengeance
weapons: the Luftwaffe-developed V-1 buzz bomb and the
Army-developed V-2 rocket [1].
1.1.1. Lark Guided Missile. Because of the lack of suc-
cess of antiaircraft artillery in stopping Kamikaze aircraft
attacks against naval vessels, the U.S. Navy initiated the
development of the Lark guided missile in 1944. The first
successful intercept of an unmanned aircraft occurred 6
years later, on December 2, 1950. An account of this, as
well as the development of other missiles (e.g., Sparrow,
Hawk), is provided in Ref. 2.
1.1.2. The First Ballistic Missiles. The first long-range
ballistic missile, deployed in 1944, was the German V-2.
After World War II, significant improvements in inertial
guidance system technology led to the Redstone
missile, the first short-range U.S. ballistic missile with
a highly accurate inertial guidance system. Additional
progress was made with the medium-range U.S. Jupiter
missile [3].
1.1.3. Intercontinental Ballistic Missiles (ICBMs). Fur-
ther advancements in the area of nuclear warhead design,
inertial guidance system, and booster engine technology
led to the development of the intercontinental ballistic
missile (ICBM). The rst U.S. ICBMthe Atlaswas test-
ed in 1959. The Atlas would be used to launch satellites
into orbit, launch probes to the (Earths) Moon and other
planets, and to launch the Mercury spacecraft into orbit
around Earth. The Atlas was followed by the Titan one
year later. Both Atlas and Titan were liquid-fuelled mul-
tistage rockets that required fueling immediately prior to
launch. In 1961, the Minuteman ICBM was put into ser-
vice. Located within dispersed hardened silos, the Min-
uteman used a solid propellant stored within the missile.
The LGM-30 Minuteman III was deployed in 1970. This
system was designed such that specially configured EC-
135 airborne launch control aircraft could automatically
assume command and control of an isolated missile or
missiles in the event that command capability is lost be-
tween the launch control center and the remote missile
launch facilities. In 1986, the LGM-118A Peacekeeper was
deployed. This three-stage solid propellant system permits
10 warheads to be carried via multiple independent (in-
dependently targeted) reentry vehicles (MIRVs). At the
peak of the cold war, the Soviet Union possessed nearly
8000 nuclear warheads on ICBMs. During the cold war,
the United States built up its strategic defense arsenal,
focusing on a nuclear triad consisting of (1) long-range
bombers (B-52 bombers and KC-135 tankers) with nuclear
air-to-surface missiles, (2) USA-based ICBMs, and (3) sub-
marine-launched ballistic missiles (SLBMs) launched
from nuclear-powered submarines (http://www.chinfo.navy.mil/navpalib/factfile/ships/ship-ssbn.html). To
complement the ground-based leg of the triad, the U.S.
Navy would develop the submarine-launched Polaris, Po-
seidon, and Trident ICBMs (http://www.chinfo.navy.mil/navpalib/factfile/missiles/wep-d5.html). Trident I and II
were deployed in 1979 and 1990, respectively. Both ac-
commodate nuclear MIRVs and are deployed in Ohio-class
(Trident) submarines, each carrying 24 missiles (eight 100
kiloton warheads per missile).
1.2. Treaties and Programs
1.2.1. Nuclear Proliferation Treaties: SALT and MAD. Be-
cause of the large number of Soviet nuclear warheads
during the cold war, some felt that U.S. ICBM fields were
threatened. On March 14, 1969, President Nixon an-
nounced his decision to deploy a missile defense system
(called Safeguard) to protect U.S. ICBM fields from attack
by Soviet missiles. This initiated intense strategic arms
negotiations between the United States and the Soviet
Union. The Strategic Arms Limitation Talks (SALT) be-
tween the United States and the Soviet Union led to a
1971 agreement fixing the number of ICBMs that could be
deployed by the two nations. The Anti-Ballistic Missile
(ABM) Treaty, signed by the United States and the Soviet
Union on May 26, 1972, was designed to implement the
doctrine of mutually assured destruction (MAD). MAD
was intended to discourage the launching of a first strike
by the certainty of being destroyed by retaliation. The
treaty prohibits deployment of sea-, air-, and space-based
missiles and limits deployment of sea-, air-, and space-
based sensors. The impetus behind these agreements was
to perpetuate the existing balance of power and avoid the
economic chaos that would result from a full-scale arms
race. In 1976, the U.S. Congress ordered the closing of
Safeguard, only 4 months after it became operational. In
2001, the ABM treaty came under attack in the U.S. Con-
gress as the United States and Russia (the former Soviet
Union) discussed how to differentiate between theater and
strategic missile defenses.
1.2.2. BMD and SDI. In 1983, President Reagan initi-
ated the Ballistic Missile Defense (BMD) program under
the Strategic Defense Initiative (SDI). SDI would focus on
space-based defense research. Because SDI deployment
would contravene the ABM treaty, many critics felt SDI,
with its potential offensive use, would escalate the arms
race. In 1984, the Strategic Defense Initiative Organiza-
tion (SDIO) was formed. In 1987, Judge Abraham D. So-
faer, State Department Legal Advisor, concluded that the
ABM treaty did not preclude testing of space-based missile
defense systems, including directed energy weapons; SDI
research would continue. With the breakup of the Soviet
Union in 1991, the need for great nuclear arsenals came
into question. In 1993, the Ballistic Missile Defense Orga-
nization (BMDO) was formed, replacing the SDIO, and
SDI was abandoned for ground-based anti-missile systems.
1.2.3. Strategic Arms Reduction Treaties. In November
1994, the Strategic Arms Reduction Treaty I (START I)
became effective, with the United States, Russia, Belarus,
Kazakhstan, and Ukraine agreeing to reduce nuclear war-
heads by 25%. In appreciation for the ratification, the
United States appropriated $1.5 billion for assistance in
dismantling nuclear weapons, properly storing weapons-
grade materials, and turning military factories into civil-
ian buildings. The 2004 Treaty of Moscow promises to re-
duce the number of warheads from 6000 to 2200 by 2012.
1.3. Missile Warning Systems
Although the United States has no active ABM defense
system in place, an extensive warning system has been in
place for many years. Air and space defense is delegated to
the North American Aerospace Defense Command (NORAD),
a joint U.S.-Canadian organization. A Ballistic
Missile Early Warning System (BMEWS), consisting of
warning and tracking radars in Alaska, Greenland, and
the United Kingdom, can detect missiles 4800 km
(about 3000 mi) away and provides a 15-min warning of an
attack on North America. The Perimeter Acquisition Ra-
dar Characterization System (PARCS), operating within
the U.S. interior, tracks incoming warheads and
determines impact areas. Phased-array radar antennas
along the U.S. Atlantic, Pacific, Alaskan, and Gulf coasts
provide warning of SLBM launches.
1.4. Persian Gulf War
In January 1991, the role of air power in modern warfare
was dramatically demonstrated during the Persian Gulf
War. Initial attacks by the United Statesled multination-
al coalition were designed to suppress Iraqi air defenses.
These attacks included Tomahawk cruise missiles
launched from warships in the Persian Gulf, F-117A
Stealth ghter-bombers armed with laser-guided smart
bombs, and F-4G Wild Weasel aircraft carrying high-speed
antiradiation missiles (HARMs). These attacks permitted
F-14, F-15, F-16, and F/A-18 ghter bombers to achieve air
superiority and to drop TV- and laser-guided precision
bombs. During the ground war, A-10 Thunderbolts with
armor-piercing heat-seeking or optically guided AGM-65
Maverick missiles, provided support for ground units. The
AH-64 Apache and AH-1 Cobra helicopters red laser-
guided Hellre missiles, guided to tanks by ground ob-
servers or scout helicopters. The E-3A airborne warning
and control system (AWACS), a ying radar system, pro-
vided targeting information to coalition members.
1.5. Missile Defense
While most weapon systems performed superbly during
the Gulf War, little could be done to stop the Iraqi Scuds
launched against Saudi Arabia and Israel. However, a Pa-
triot surface-to-air missile (SAM) system was brought in
to repel Scud attacks. Although the Patriot system had
been used in 1987 to destroy another Patriot during a
demonstration flight, the system was originally designed
as an anti-aircraft defense system. Thus, its effectiveness
against the Scuds was limited, because intercepts often
did not take place at sufficiently high altitudes. Part of the
problem was attributed to the fact that the Patriot relied
on proximity detonation rather than a hit to kill. This
would often cause the incoming Scud to break up, leaving
a free-falling warhead to detonate on the civilian popula-
tion below. The many Patriot-Scud engagements were
televised to a world audience and demonstrated the need
for a high-altitude air defense system that could intercept
(tactical) ballistic missiles far from critical military assets
and civilian population centers. For this reason much re-
search shifted toward the development of hit-to-kill thea-
ter high-altitude air defense (THAAD) systems. In his
January 1991 State of the Union address, President
George H.W. Bush formally announced a shift in SDI to
a concept of global protection against limited strikes
(GPALS), and by December, he signed into law the Mis-
sile Defense Act of 1991. On January 24, 1997, a Standard
Missile 2 (SM-2) Block IVA successfully intercepted and
destroyed a Lance missile at the White Sands Missile
Range in New Mexico. During the test, the SM-2 success-
fully transitioned from radar midcourse guidance to its
heat-seeking endgame/terminal guidance system prior to
destroying the target with its blast fragmentation war-
head. On February 7, 1997, BMDO carried out a test in
which a Patriot Advanced Capability-2 (PAC-2) missile
successfully intercepted a theater ballistic target missile
over the Pacific Ocean. In April 1997, BMDO established
the Joint Program Office (JPO) for the National Missile
Defense (NMD). On June 24, 1997, the first NMD flight
test was successfully completed. During this test an exo-
atmospheric kill vehicle (EKV) sensor was used to identify
and track objects in space. To appreciate the formidable
problems associated with developing a THAAD system, it
is necessary to understand issues associated with the de-
sign of missile guidance systems.
2. MISSILE GUIDANCE, NAVIGATION, AND CONTROL
SUBSYSTEMS
We begin our technical discussion by describing the sub-
systems that make up a missile system. In addition to a
warhead, a missile contains several key supporting sub-
systems. These subsystems may include a target-sensing
system, a missile-navigation system, a guidance system,
an autopilot or control system, and the physical missile
(including airframe and actuation subsystem) (see Fig. 1).
2.1. Target Sensing System
The target sensing system provides target information
to the missile guidance system, including relative position,
velocity, line-of-sight (LOS) angle and rate. Target sensing
systems may be based on a number of sensors, such as
radar, laser, heat, acoustic, or optical sensors. Optical sen-
sors, for example, may be as simple as a camera for a
weapons systems officer (WSO) to visualize the target
from a remote location, or they may be a sophisticated
imaging system (see text below). For some applications,
target coordinates are known a priori (e.g., via satellite
intelligence) and a target sensor becomes irrelevant. In
Figure 1. Information flow for missile-target engagements.
such a case, the navigation system provides the required
information.
2.2. Navigation System
A navigation system provides information to the missile
guidance system about the missile position in space rela-
tive to some inertial frame of reference, such as a flat-
Earth constant-gravity model for short-range flights and a
rotating-Earth variable-gravity model for long-range
flights. To do so, it may use information obtained from a
variety of sensors. These may include simple sensors such
as accelerometers or a radar altimeter. It may also include
more sophisticated sensors such as a global positioning
system (GPS) receiver or an optical terrain sensor that
compares an image of the terrain below with a stored
image and a stored desired trajectory. Optical stellar
sensors similarly compare an image of the stars above
with a stored image and a stored desired trajectory.
2.3. Guidance System
Target and missile information is used by the guidance
system to compute updated guidance commands, which,
when issued to the missile autopilot, should ideally guide
(or steer) the missile toward the target [4,5]. When target
coordinates are known a priori, missile coordinates pro-
vided by the navigation system (e.g., GPS-based) are pe-
riodically compared with the (preprogrammed) target
coordinates to compute appropriate guidance corrections.
In general, the quality of the computed guidance com-
mands depends on the quality of the sensor data gathered
and the fidelity of the models used for the missile and tar-
get. Targets may be stationary, mobile, or highly maneu-
verable (e.g., silo, ship, fighter aircraft). Physically,
guidance commands may represent quantities such as de-
sired thrust, desired (pitch/yaw) acceleration, desired
speed, desired flight path or roll angle, and desired alti-
tude. Guidance commands issued by the guidance system
to the missile autopilot are analogous to the speed com-
mands issued by automobile drivers to the cruise control
systems in their cars. In this sense, the missile guidance
system is like the automobile driver and the missile auto-
pilot is like the automobile cruise control system. Missile
guidance commands are computed in accordance with
a guidance algorithm. Guidance algorithms and naviga-
tional aids will be discussed below.
2.4. Autopilot
The primary function of the autopilot, sometimes re-
ferred to as the flight control system (FCS) or attitude
control system (ACS), is to ensure (1) missile attitude
stability and (2) that commands issued by the guidance
system are followed as closely as possible [4]. The autopi-
lot accomplishes this command-following objective by com-
puting and issuing appropriate control commands to the
missile's actuators. These actuators may include, for ex-
ample, rocket thrusters, ramjets, or servomotors that
move aerodynamic control surfaces. More specifically,
the autopilot compares commands issued by the guidance
system with real-time measurements (e.g., acceleration,
attitude and attitude rate, altitude) obtained from on-
board sensors (e.g., accelerometers, gyroscopes, radar al-
timeters) and/or external tracking systems. This
comparison, essentially a subtraction of signals, produces
a feedback error signal, which is then used to compute
control commands for the missile actuators. This compu-
tation may be based on a very complex mathematical
model that captures the following: missile airframe, aero-
dynamics (depending on speed, dynamic pressure, angle of
attack, sideslip angle, etc.), actuators, sensors, flexible
modes, and uncertainty descriptions (e.g., dynamic uncer-
tainty, parametric uncertainty [6,7], disturbance/noise
bounds). It should be noted that commands that are is-
sued by the guidance system to the autopilot cannot al-
ways be followed exactly because of the presence of
multiple sources of uncertainty. Sources of uncertainty
may include disturbances acting on the missile, sensor
noise, unmodeled or uncertain missile airframe, actuator,
and sensor dynamics.
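As a minimal sketch of the command-following computation described above (a generic proportional-plus-integral error loop, not the control law of any particular missile autopilot; all gains, signals, and the sample time are illustrative assumptions):

    # Sketch: one step of a generic autopilot command-following loop.
    # The guidance command, sensor measurement, and PI gains are all
    # illustrative assumptions, not values from any specific system.
    class SimpleAutopilotLoop:
        def __init__(self, kp=2.0, ki=0.5, dt=0.01):
            self.kp, self.ki, self.dt = kp, ki, dt   # assumed PI gains, sample time (s)
            self.integral = 0.0

        def update(self, commanded_accel, measured_accel):
            error = commanded_accel - measured_accel  # feedback error signal
            self.integral += error * self.dt
            # Control command issued to the actuators (e.g., a fin deflection command)
            return self.kp * error + self.ki * self.integral

    loop = SimpleAutopilotLoop()
    u = loop.update(commanded_accel=20.0, measured_accel=15.0)  # m/s^2, illustrative
    print(f"actuator command: {u:.2f}")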
2.5. Flight Phases
The flight of a missile can be broken into three phases:
(1) a launch, separation, or boost phase; (2) a midcourse
or cruise phase; and (3) an endgame or terminal phase.
During each phase, a missile may use distinct guidance,
navigation, and control systems, specifically designed
to accommodate the requirements during that phase of
the flight. During each phase, the missile may very well
use different sets of sensors, actuators, and power sources.
2.6. Guidance System Performance Terminology
To describe the function and performance of a guidance
system, some terminology is essential. The imaginary line
that connects a missile's center of gravity (c.g.) to the target
c.g. is referred to as the line of sight (LOS) [8]. The length
of this line is called the range. The associated vector from
missile to target is referred to as the range vector. The
time derivative of the range vector is called the closing
velocity. The most important measure of performance for
any missile guidance system is the so-called miss distance.
Miss distance is defined to be the missile-target range
at that instant when the two are closest to one another
[8, p. 27]. The objective of most guidance systems is to
minimize the miss distance within an allotted time period.
For some applications (e.g., hit to kill), zero miss distance
is essential. For some applications (e.g., to minimize col-
lateral damage), it is essential to impact the target at a
specific angle. Because miss distance is sensitive to many
variables and small variations from missile to missile,
other quantities are used to measure performance. One of
the most common measures used is circular error proba-
bility (CEP). The CEP for a missile attempts to provide an
average miss distance for a class of missile-target engage-
ments. If a missile has a CEP of 10 m, then most of the
time, say, 68% of the time, it will detonate within 10 m of
the target.
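The relationship between miss-distance samples and CEP can be illustrated with a small Monte Carlo sketch. The 50% containment radius is the conventional CEP definition, while the 68% radius corresponds to the figure quoted in the text; the simulated miss-distance spread is an arbitrary assumption for demonstration.

    import math, random

    def containment_radius(miss_distances, fraction=0.5):
        """Radius within which the given fraction of miss distances fall.
        fraction=0.5 is the conventional CEP definition; fraction=0.68
        gives the 68% figure quoted in the text above."""
        ordered = sorted(miss_distances)
        return ordered[int(fraction * len(ordered)) - 1]

    # Illustrative Monte Carlo: 2D miss with an assumed 8-m standard
    # deviation per axis (purely made-up numbers for demonstration).
    random.seed(0)
    misses = [math.hypot(random.gauss(0, 8.0), random.gauss(0, 8.0))
              for _ in range(10000)]
    print(f"CEP (50%):  {containment_radius(misses, 0.5):.1f} m")
    print(f"68% radius: {containment_radius(misses, 0.68):.1f} m")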
3. CLASSIFICATION OF MISSILES, TARGETS, GUIDANCE
SYSTEMS, NAVIGATION METHODS, AND TARGET
SENSING METHODS
The guidance system used by a missile depends on the in-
tended use of the missile. Missiles are classified accord-
ing to many categories. The most commonly used classifica-
tions are as follows: strategic, tactical, exoatmospheric,
endoatmospheric, aerodynamic, ballistic, surface-to-sur-
face, surface-to-air, air-to-surface, air-to-air, inertially
guided, terrain guided, stellar guided, satellite guided,
passive, active, homing, command-guided, radar-guided,
laser-guided, heat seeking, fire-and-forget, line-of-sight
guided, radar terrain-guided, TV guided, cruise, skid-to-
turn (STT), and bank-to-turn (BTT). Each category is now
briefly discussed.
3.1. Missile Types
3.1.1. Strategic Missiles. Strategic missiles are used pri-
marily against strategic targets, that is, resources that
permit an enemy to conduct large-scale military opera-
tions (e.g., battle management/command, control, and
communication centers, industrial/weapons manufactur-
ing centers). Such targets are usually located far behind
the battle line. As such, strategic missiles are typically
designed for long-range missions. While such missiles are
usually launched from naval vessels or from missile silos
situated below ground, they are sometimes launched from
aircraft (e.g., strategic bombers). Because such missiles
are intended to eliminate the most significant military
targets, they typically carry nuclear warheads rather than
conventional warheads. Strategic missiles typically oper-
ate at orbital speeds (about 5 mi/s), outside the atmosphere,
over intercontinental distances. They use rockets/thrust-
ers/fuel and require very precise instrumentation for crit-
ical midcourse guidance. GPS has made such systems very
accurate.
3.1.2. Tactical Missiles. Tactical missiles are used pri-
marily against tactical targets, that is, resources that per-
mit an enemy to conduct small-scale military operations (a
ship, an aireld, a munitions bunker, etc.). Such targets
are usually located near the battle line. As such, tactical
missiles are typically designed for short- or medium-range
missions. Such missiles carry conventional explosive war-
heads, the size of which depends on the designated target.
Tactical missiles sometimes carry nuclear warheads in an
effort to deter the use of tactical nuclear/chemical/biolog-
ical weapons and to engage the most hardened targets
(e.g., enemy nuclear strategic missile silos). Tactical mis-
siles typically operate at lower speeds (<1 mi/s), inside the
atmosphere, and over short to medium distances (e.g.,
150 mi). They typically use aerodynamic control surfaces
(discussed below) and require adequate instrumentation
for midcourse and terminal guidance. A target sensor (e.g.,
radar seeker) permits such missiles to engage mobile and
highly maneuverable targets.
3.1.3. Exoatmospheric Missiles. Exoatmospheric mis-
siles fly their missions mostly outside Earth's atmosphere.
Such missiles are used against long-range strategic tar-
gets. Because they fly outside the atmosphere, thrusters
are required to change direction. Such thrusters use on-
board fuel. In order to maximize warhead size, and be-
cause missile weight grows exponentially with fuel
weight, it is important that guidance and control systems
for long-range missiles (e.g., strategic, exoatmospheric)
provide for minimum fuel consumption.
3.1.4. Endoatmospheric Missiles. Endoatmospheric mis-
siles fly their missions inside Earth's atmosphere. Such
missiles are used against strategic and tactical targets. In
contrast to exoatmospheric missiles, endoatmospheric
missiles may use movable control surfaces such as fins
(called aerodynamic control surfaces), which deflect air-
flow in order to alter the missile flight path. In such a case,
the missile is called an aerodynamic missile. Endoatmo-
spheric missiles may, in some cases, rely entirely on rocket
power. In such a case, they are not aerodynamic. Exoat-
mospheric missiles that fly outside Earth's atmosphere
rely on rocket power and thrusters. These are not aerody-
namic. Examples of aerodynamic missiles are the Side-
winder and Patriot.
3.1.5. Ballistic Missiles. Ballistic missiles assume a
free-falling (unpowered) trajectory after an internally
guided, self-powered (boost and midcourse) ascent. Such
missiles are usually used against long-range strategic tar-
gets. ICBMs, for example, are usually exoatmospheric
strategic missiles that were developed for use against
strategic targets, and are typically launched from under-
ground missile silos and submarines. Modern ICBMs con-
tain multiple independently targeted nuclear warheads
deployed via MIRVs. Examples of ICBMs are the Atlas,
Titan, Minuteman, and Polaris. The Iraqi Scud, used in
the Persian Gulf War, is another ballistic missile.
3.1.6. Surface-to-Surface Missiles (SSMs). SSMs are typ-
ically launched from the ground, beneath the ground (e.g.,
from a missile silo), or from naval platforms against
ground targets (e.g., tank, munitions depot, missile silo)
or naval targets (e.g., battleship, submarine). ICBMs are
typically SSMs. SSMs may carry nuclear, biological, chem-
ical, or conventional warheads. Examples of SSMs are the
antiship Silkworm and the Tomahawk.
3.1.7. Surface-to-Air Missiles (SAMs). SAMs are typi-
cally launched from the ground, beneath the ground
(e.g., from a missile silo), or from naval platforms against
aircraft and missiles. SAMs were developed to defend sur-
face targets from air attacks, especially from high-altitude
bombers flying well above the range of conventional anti-
aircraft artillery (AAA). Most air defense SAMs employ
separate radars to acquire (detect) and track enemy air
threats. The separate radar is also used to guide the SAM
toward the hostile target; endgame guidance may be ac-
complished by the missile's onboard guidance system.
SSMs are typically heavier and carry larger warheads
than SAMs because they are usually intended to penetrate
hardened targets. Shoulder-launched SAMs (e.g., Stinger)
3086 MISSILE GUIDANCE
have become a major concern given increased terrorist
activities.
3.1.8. Air-to-Surface Missiles (ASMs). ASMs are
launched from aircraft against ground targets (e.g., a
bridge, airfield) or naval targets. While ASMs are typical-
ly intended for tactical targets, they are used by both stra-
tegic and tactical bombers. Equipping strategic bombers
with long-range ASMs extends their range, significantly
reducing the range that they need to travel toward the
intended target. Examples of ASMs are the antitank
Hawk and Hellfire, the antiradar AGM-88 HARM, the
antiship Exocet and AGM-84D Harpoon, and the antiar-
mored vehicle AGM-65 Maverick (http://www.af.mil/
factsheets/). Other air-launched systems include the advanced
medium-range air-to-air missile (AIM-120 AMRAAM)
and the airborne laser (ABL) system being developed by
several defense contractors. The ABL system has been
considered for boost-phase intercepts during which the
launched missile has the largest signature and is travel-
ing at its slowest speed.
3.1.9. Air-to-Air Missiles (AAMs). AAMs are launched
from aircraft against aircraft, ballistic missiles, and most
recently against tactical missiles. Such missiles are typi-
cally light, highly maneuverable, tactical weapons. AAMs
are generally smaller, lighter, and faster than ASMs since
ASMs are typically directed at hardened, less mobile, tar-
gets. Some SAMs and ASMs are used as AAMs and vice
versa. Examples of AAMs are the AIM-7 Sparrow, AIM-9
Sidewinder, AIM-54 Phoenix, and the AIM-120A AM-
RAAM.
3.2. Guidance Methods: Fixed Targets with Known Fixed
Positions
A missile may be guided toward a target, having a known
fixed position, using a variety of guidance methods and/or
navigational aids, such as inertial, terrain, stellar, and
satellite guidance and navigation.
3.2.1. Inertially Guided Missiles. Inertially guided mis-
siles use missile spatial navigation information relative to
some inertial frame of reference to guide a missile to its
designated target. For short-range missions, one may use
a flat-Earth constant-gravity inertial frame of reference.
This is not appropriate for long-range missions, approach-
ing intercontinental distances, for which Earth may not be
treated as flat. For such missions, the sun or stars provide
an inertial frame of reference. One can also use an Earth-
centered variable-gravity frame. Position information is
typically obtained by integrating acceleration information
obtained from accelerometers or by pattern matching al-
gorithms exploiting imaging systems. Because accelerom-
eters are sensitive to gravity, they must be mounted in a
fixed position with respect to gravity. Typically, acceler-
ometers are mounted on platforms that are stabilized
by gyroscopes or star tracking telescopes. Terrain and
stellar navigation systems are examples of imaging sys-
tems. Satellite navigated missiles use satellites for navi-
gation. Some satellite guided missiles use the Navstar
Global Positioning System (GPS), a constellation of
orbiting navigation satellites, to navigate and guide
the missile to its target. GPS has increased precision sig-
nificantly.
3.3. Guidance Methods: Mobile Targets with Unknown
Positions
If the target position is not known a priori, the aforemen-
tioned methods and aids may be used in part, but other
real-time target acquisition, tracking, navigation, and
guidance mechanisms are required. The most commonly
used classifications for the guidance system in such cases
are as follows: passive, active, and semiactive. These are
now discussed.
3.3.1. Passive Missiles. Passive missiles are missiles
that have a target sensor sensitive to target energy emis-
sions (e.g., radar and thermal energy) and a guidance sys-
tem that uses received target emission signals to guide the
missile toward the target. Such missiles are said to have a
passive guidance system. While such systems are, in prin-
ciple, simple to implement, it should be noted that they
rely on a cooperative target: targets that radiate energy
at appreciable (detectable) power levels. Such systems are
also susceptible to decoys.
3.3.2. Active Missiles. Active missiles use an energy-
emitting transmitter combined with a reflection-detection
receiver (e.g., an active seeker) to acquire targets and
guide the missile toward the target. Such missiles are said
to have an active guidance system. For such systems,
great care is taken to ensure that transmitted and re-
ceived signals are isolated from one another. Stealthy tar-
gets are those that absorb or scatter (misdirect) the
transmitted energy. Receivers can consist of a gimballed
(movable) seeker antenna. Such mechanically directed an-
tennas are slow and have a limited eld of view. Fixed
phase array antennasoperating on interferometric prin-
ciplesoffer rapid electronic scanning capability as well
as a broad eld of view.
3.3.3. Semiactive Missiles. Semiactive missiles use a re-
flection-sensitive receiver to guide the missile to the tar-
get. The reflected energy may be provided by a ground-
based, ship-based, or aircraft-based energy emission (e.g.,
radar or laser) system or by such a system aboard the
launching platform. In either case, a human operator (e.g.,
WSO) illuminates the target with a radar or laser beacon
and the missile automatically steers toward the source of
the reflected energy. Such missiles are said to possess
semiactive guidance systems. For such implementations,
the illuminating power can be large.
Passive systems, of course, are stealthier than semiac-
tive or active systems as they do not intentionally emit
energy toward the target. Anti-radar missiles typically
use passive guidance systems since radars are constantly
emitting energy. As an antiradar missile approaches the
intended radar, radar operators typically shut down the
radar. This causes the missile to lose its critical guidance
signal. In such a case, an active guidance system must
MISSILE GUIDANCE 3087
take over. Active systems require more instrumentation
than passive systems and hence are heavier and more ex-
pensive.
Guidance system performance is limited by various
noise sources. For active systems, there is range depen-
dent noise that is proportional to the square of the dis-
tance from the missile to the target. For semiactive
systems, there is range dependent noise that is propor-
tional to the distance from the missile to the target. For
either system, the noise is wideband, may be modeled as
white, and goes to zero at intercept. For either system,
there are random fluctuations due to the target radar re-
turn. This source of noise is referred to as glint noise. It
depends directly on the physical dimensions of the target
and is typically highly correlated [8].
3.4. Other Guidance Methods and Missile Types
3.4.1. Homing Missiles. Homing missiles, like homing
pigeons, home in on a target by steering toward energy
emitted by or reflected from the target. If the missile
homes in on energy emitted by the target, then it uses
a passive guidance system. If the missile transmits a
signal and homes in on the reflected energy, its guid-
ance system is active. In principle, sensor information
and homing improve as the missile gets closer to the
target.
3.4.2. Command-Guided Missiles. A command guided
missile is a remotely controlled missile. A cooperating
(ground-, ship-, or aircraft-based) control station uses a
radar (or two) to acquire the target, track the target, and
track the missile. Available computers are used to com-
pute guidance commands (on the basis of ranges, eleva-
tions, and bearings) that are transmitted via radio uplink
to the missile autopilot. Powerful computers, capable of
exploiting complex target models and performance crite-
ria, can provide precision guidance updates in real time.
Such systems are limited by the distance from the track-
ing station to the missile and target. Noise increases, and
guidance degrades, as the engagement moves further from
the tracking station. Such systems are also more suscep-
tible to electronic countermeasures (ECMs). While com-
mand-guided missiles do not require a seeker, one can be
included for terminal guidance to maximize the probabil-
ity of interception at long distances from the tracking sta-
tion. The Patriot is a command-guided SAM. To
significantly increase ECM immunity, some short-range
command guided missiles have a wire that unspools at
launch, keeping the missile connected to the command
station, e.g., the all-weather optically guided antitank TOW
missile.
3.4.3. Beam Rider Guidance (BRG). BRG is a specific
form of command guidance in which the missile flies along
a beam (e.g., radar or laser), which, in principle, points
continuously toward the target. If the missile stays within
the beam, an intercept will occur. Guidance commands
steer the missile back into the beam when it deviates.
BRG causes problems at large ranges because of beam
spreading issues.
3.4.4. Command-to-LOS Guidance. Command-to-LOS
guidance, used by the TOW missile, is another command
guidance method that improves on beam rider guidance by
taking beam motion into account.
3.4.5. Energy-Guided Missiles. Radar-guided missiles
are guided to the target on the basis of radar energy. La-
ser-guided missiles are guided on the basis of laser energy.
The Hellfire is a laser-guided antitank missile. Heat-seek-
ing missiles are guided on the basis of infrared (IR, heat,
or thermal) energy. The AIM-9 Sidewinder is a heat-
seeking AAM. Most AAMs employ radar homing or heat-
seeking devices and have replaced automatic gunfire as
the main armament for fighter aircraft. The shoulder-
operated Stinger is a heat-guided fire-and-forget SAM.
Such a missile is called a fire-and-forget missile because
it allows the user to fire, take evasive action, forget, and
engage other hostile targets.
3.4.6. Degradation of Electromagnetic Energy-Based Sen-
sors. The performance of many electromagnetic energy-
based sensors (e.g., millimeter-wave radars, electrooptical
thermal imagers, and laser radar) degrades under adverse
weather conditions such as rain, fog, dust, or smoke. This
occurs when the size of the weather particles is on the
same order as the wavelength of the energy return from
the target. Under adverse conditions, microwave radars
with wavelengths in centimeters (10 GHz) are not degrad-
ed, millimeter radars with millimeter wavelengths
(100 GHz) are slightly degraded, and electrooptical sys-
tems with micrometer wavelengths (10^5 GHz) are severely
degraded. The AIM-120A AMRAAM is a fighter-launched
fire-and-forget AAM that uses infrared (IR) sensors to ac-
quire (detect) targets at long range. It uses inertial mid-
course guidance without the need for the fighter to
illuminate the target. A small active seeker is used for
endgame homing.
3.4.7. LOS Guidance. When a missile is near the
target, the guidance system may use line-of-sight (LOS)
guidance. The guidance system of a LOS guided missile
uses target range and LOS information obtained from
the target sensor (e.g., a seeker) to generate guidance
commands to the missile autopilot.
3.4.8. Radar Terrain Guidance. A radar terrain guided
missile uses a radar altimeter, an a priori stored path,
and terrain profile to navigate and guide the missile over
the terrain during the midcourse phase of a flight (typi-
cally). The stored path represents a desired path over
the terrain. The down-looking radar altimeter is used to
measure the altitude with respect to the terrain below.
This is used to determine where the missile is with
respect to the desired path. Deviations from the path are
corrected by adjusting guidance commands to the auto-
pilot. The Tomahawk is an all-weather cruise missile
that uses radar terrain guidance called terrain contour
matching (TERCOM) [9]. TERCOM terrain profiles, ob-
tained by reconnaissance satellites and other intelligence
sources, become finer as the missile approaches the tar-
get. Such navigational/guidance systems permit terrain
hugging. Terrain echoes (referred to as clutter) then con-
fuse observing radars.
3.4.9. TV Guidance. TV guided missiles use imaging
systems that permit a WSO to see the target and remotely
guide the missile to the target.
3.4.10. Cruise Missiles. Cruise missiles are typically
SSMs that use inertial and terrain following navigation/
guidance systems while cruising toward the target. When
near the target, endgame guidance is accomplished by ei-
ther (1) homing in on target emitted/reected energy, (2)
focusing on a target feature by exploiting a forward-look-
ing imaging system and an onboard stored image, or (3)
using a more detailed terrain contour with a more accu-
rate downward-looking sensor. Cruise missiles offer the
ability to destroy heavily defended targets without risking
air crew. Because they are small, they are difficult to de-
tect on radar, particularly when they hug the terrain. Ex-
amples of cruise missiles are the AGM-86, Tomahawk [9],
and Harpoon. The Tomahawk uses a TERCOM guidance
during the cruise phase. For terminal guidance, a conven-
tionally armed Tomahawk uses an electrooptical digital
scene matching area correlator (DSMAC) guidance sys-
tem, which compares measured images with stored imag-
es. This technique is often referred to as an offset
navigation or guidance technique. At no time during the
terminal scene matching process does the missile look at
the target. Its sensor always looks down. DSMAC makes
Tomahawk one of the most accurate weapon systems in
service around the world.
3.4.11. Skid-to-Turn and Bank-to-Turn Missiles. Skid-to-
turn (STT) missiles, like speedboats, skid to turn. Bank-to-
turn (BTT) missiles, like airplanes, bank to turn [5,1016].
BTT airframe designs offer higher maneuverability than
conventional STT designs by use of an asymmetric shape
and/or the addition of a wing. BTT missile autopilots are
more difficult to design than STT autopilots because of
cross-coupling issues. STT missiles achieve velocity vector
control by permitting the missile to develop angle-of-at-
tack and sideslip angles [5]. The presence of sideslip
imparts a skidding motion to the missile. BTT missiles
ideally should have no sideslip. To achieve the desired
orientation, a BTT missile is rolled (banked) so that the
plane of maximum aerodynamic normal force is oriented
to the desired direction. The magnitude of the force is
controlled by adjusting the attitude (i.e., angle of attack)
in that plane. BTT missile control is made more difficult by
the high roll rates required for high performance (i.e.,
short response time) [4, p. 285]. STT missiles typically
require pitch-yaw acceleration guidance commands,
whereas BTT missiles require pitch-roll acceleration
commands. An overview of tactical missile control design
issues and approaches is provided in Ref. 44.
4. GUIDANCE ALGORITHMS
In practice, many guidance algorithms are used [4,8,
1719]. The purpose of a guidance algorithm is to update
missile guidance commands that will be issued to the
autopilot. This update is to be performed on the basis
of missile and target information. The goal of any guid-
ance algorithm is to steer the missile toward the target,
resulting in an intercept within an allotted time period
(i.e., until the fuel runs out or the target is out of range).
The most common algorithms are characterized by the
following terms: proportional navigation, augmented
proportional navigation, and optimal [8,19]. To simplify
the mathematical details of the exposition to follow, sup-
pose that the missile-target engagement is restricted to
the two-dimensional pitch plane of the missile. Given this,
the engagement dynamics take the following simplified
form [20]:

$$\dot{R}(t) = V_t \cos[\lambda(t) - \gamma_t(t)] - V_m \cos[\lambda(t) - \gamma_m(t)] \qquad (1)$$

$$\dot{\lambda}(t) = \frac{1}{R(t)} \left\{ V_t \sin[\lambda(t) - \gamma_t(t)] - V_m \sin[\lambda(t) - \gamma_m(t)] \right\} \qquad (2)$$

where $(V_m, V_t)$ and $(\gamma_m, \gamma_t)$ denote the missile and target speeds (as-
sumed constant) and flight path angles, respectively.
4.1. Proportional Navigation Guidance (PNG)
For proportional navigation guidance (PNG) [8,19], the
missile is commanded to turn at a rate proportional to the
closing velocity $V_c$ (i.e., range rate) and to the angular
velocity of the LOS, $\dot{\lambda}$. The constant of proportionality N is
referred to as the PNG gain or constant. For a PNG law,
the pitch plane acceleration command $a_{c,PNG}(t)$ takes the
form

$$a_{c,PNG}(t) = N V_c(t)\,\dot{\lambda}(t) \qquad (3)$$
Typically, N takes on values in the range [3,5].
PNG is relatively easy to implement. For tactical radar
homing missiles using PNG, an active seeker provides
LOS rate while a Doppler radar provides closing velocity.
Traditionally, LOS rate has been obtained by filtering the
output of a 2-degree-of-freedom rate gyro mounted to the
inner gimbal of the seeker [21]. More recently, ring laser
gyros (RLGs) have been used. Unlike conventional spin-
ning gyros, the RLG has no moving parts, no friction, and
hence negligible drift. For IR missiles using PNG, the IR
system provides LOS rate information, but $V_c$ must be es-
timated. The Lark was the first missile to use PNG [8].
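As an illustration of Eqs. (1)-(3), the following minimal Python sketch numerically integrates a planar engagement under a PNG law. The command of Eq. (3) is applied perpendicular to the missile velocity (the pure-PN variant discussed in Section 4.1.6), and the geometry, speeds, and gain are illustrative assumptions, not values from this article.

```python
# Minimal planar PNG engagement sketch.  The command a_c = N*Vc*lambda_dot
# of Eq. (3) is applied perpendicular to the missile velocity vector (the
# pure-PN variant discussed in Section 4.1.6).  The geometry, speeds, and
# gain below are illustrative assumptions, not values from the text.
import numpy as np

N = 4.0                                  # PNG gain
dt = 0.001                               # integration step, s
rm = np.array([0.0, 0.0])                # missile position, m
rt = np.array([9000.0, 3000.0])          # target position, m
vm = 900.0 * np.array([np.cos(0.3), np.sin(0.3)])   # missile velocity, m/s
vt = np.array([-300.0, 0.0])             # nonmaneuvering target, m/s

miss = np.inf
for step in range(int(60.0 / dt)):
    r_rel = rt - rm
    v_rel = vt - vm
    R = np.linalg.norm(r_rel)
    miss = min(miss, R)
    if np.dot(r_rel, v_rel) > 0:         # range opening: closest approach passed
        break
    lam_dot = float(np.cross(r_rel, v_rel)) / R**2   # LOS rate, rad/s
    Vc = -np.dot(r_rel, v_rel) / R                   # closing velocity, m/s
    a_c = N * Vc * lam_dot                           # Eq. (3)
    u = vm / np.linalg.norm(vm)                      # unit missile velocity
    a_vec = a_c * np.array([-u[1], u[0]])            # lateral acceleration
    vm = vm + a_vec * dt
    rm = rm + vm * dt
    rt = rt + vt * dt

print(f"closest approach (miss distance) ~ {miss:.2f} m at t ~ {step*dt:.2f} s")
```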
4.1.1. PNG Optimality and Performance Issues. It can be
shown that PNG minimizes the square integral criterion
$\int_0^{t_f} a_c^2(t)\,dt$ subject to a zero-miss distance at $t_f$, linearized
(small angle) missile-target dynamics, and constant mis-
sile-target speeds [22], where $t_f$ denotes the flight time. A
missile using PNG is fired not at the target, but at the
expected intercept point if the target were to move at con-
stant velocity in a straight line; thus, the missile is fired so
that, at least initially, it is on a collision triangle with the
target. The initial angle between the missile velocity vec-
tor and the LOS is the missile lead angle. If the missile is
not on a collision triangle with the target, then there ex-
ists a heading error (HE). It is instructive to understand
how PNG missile acceleration requirements vary with (1)
initial heading error when the target is not maneuvering,
and (2) a constant acceleration target maneuver. These
cases are now briefly discussed assuming linearized
(small-angle) 2D dynamics with constant missile and tar-
get speeds (V
m
, V
t
), missile autopilot responds instanta-
neously to guidance acceleration commands (i.e., no lag),
and ideal sensor dynamics [8]. We note that the Stinger is
an example of a re-and-forget supersonic SAM that uses
PNG with passive IR/UV homing.
4.1.2. PNG Performance: Nonmaneuvering Target, Head-
ing Error. First, consider the impact of a heading error on
PNG missile acceleration requirements when the target
moves at a constant speed in a straight line. Under the
simplifying assumptions given above, the resulting com-
manded acceleration is as follows:
$$a_{c,PNG}(t) = \frac{V_m N\,\mathrm{HE}}{t_f}\left(1 - \frac{t}{t_f}\right)^{N-2} \qquad (4)$$
This expression shows that PNG immediately begins
removing any heading error (HE) and continues doing so
throughout the engagement. The acceleration require-
ment decreases monotonically to zero as the flight pro-
gresses. A larger N results in a larger initial missile
acceleration requirement but a smaller final missile accel-
eration requirement. The larger N is, the faster the
heading error is removed.
4.1.3. PNG Performance: Target Undergoing Constant
Acceleration. Now, consider the impact of a constant tar-
get acceleration a
t
on PNG missile acceleration require-
ments. Under the simplifying assumptions given above,
the resulting commanded acceleration is as follows:
$$a_{c,PNG}(t) = \frac{N}{N-2}\left[1 - \left(1 - \frac{t}{t_f}\right)^{N-2}\right] a_t \qquad (5)$$
In sharp contrast to the heading error case examined
above, this expression shows that the PNG missile accel-
eration requirement for a constant target maneuver in-
creases monotonically throughout the flight. As in the
heading error case, a higher N results in a greater initial
acceleration requirement and a relaxed acceleration re-
quirement near the end of the flight
($a_{c,PNG}^{\max} = a_{c,PNG}(t_f) = [N/(N-2)]\,a_t \ge a_t$).
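A quick way to see these trends is to evaluate the closed-form profiles of Eqs. (4) and (5) directly. The sketch below uses illustrative values for $V_m$, HE, $a_t$, and $t_f$ that are not taken from the text.

```python
# Closed-form PNG acceleration profiles of Eqs. (4) and (5),
# evaluated for a few gains N.  V_m, HE, a_t, and t_f are made-up
# illustrative values.
import numpy as np

Vm, HE, a_t, t_f = 900.0, np.deg2rad(5.0), 30.0, 10.0   # m/s, rad, m/s^2, s
t = np.linspace(0.0, t_f, 6)

for N in (3, 4, 5):
    a_he = (Vm * N * HE / t_f) * (1.0 - t / t_f) ** (N - 2)           # Eq. (4)
    a_man = (N / (N - 2)) * (1.0 - (1.0 - t / t_f) ** (N - 2)) * a_t  # Eq. (5)
    print(f"N={N}: heading-error profile (m/s^2):", np.round(a_he, 1))
    print(f"N={N}: target-maneuver profile (m/s^2):", np.round(a_man, 1))
```

The printed profiles reproduce the stated behavior: the heading-error requirement decays to zero, while the constant-maneuver requirement grows toward $[N/(N-2)]a_t$.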
4.1.4. Zero-Effort Miss (ZEM) Distance. An important
concept in guidance law design is that of zero-effort miss
distance, denoted ZEM(t) and defined as the miss distance
that would result if the target were to continue at a con-
stant speed in a straight line and the missile made no
further corrective maneuvers. Given this, if one defines
the time to go as $t_{go} \stackrel{\mathrm{def}}{=} t_f - t$ and the ZEM distance per-
pendicular to the LOS as $\mathrm{ZEM}_{PLOS}(t)$, then for PNG it can
be shown that

$$a_{c,PNG}(t) = N\left[\frac{\mathrm{ZEM}_{PLOS}(t)}{t_{go}^2}\right] \qquad (6)$$

where $\mathrm{ZEM}_{PLOS}(t) = y + \dot{y}\,t_{go}$, $y \approx R\lambda$ denotes the relative
(small angle) vertical displacement between the missile
and target, and $R \approx V_c t_{go}$. The concept of ZEM distance is
used to derive more advanced guidance laws [8]. The con-
cept is very powerful since ZEM can be approximated in so
many different ways.
4.1.5. PNG Miss Distance Performance: Impact of System
Dynamics. For the two cases considered above, the asso-
ciated relative displacement $y \approx R\lambda$ satisfies

$$\ddot{y} + \frac{N}{t_f - t}\,\dot{y} + \frac{N}{(t_f - t)^2}\,y = a_t, \qquad y(t_f) = 0
$$

and we have zero-miss distance. The preceding discussion
on PNG assumes that guidance-control-seeker dynamics
are negligible. In practice, this assumption is not satisfied
and the inherent lag degrades miss distance performance.
When a first-order lag with time constant $\tau$ is assumed for
the combined guidance-control-seeker dynamics, one ob-
tains small miss distances so long as $\tau$ is much smaller
than $t_f$ (e.g., $t_f > 10\tau$). In practice, of course, high-frequency
dynamics impose bandwidth constraints that limit how
small $\tau$ can be. Despite the above (general) rule of thumb,
it is essential that high-frequency system dynamics be
carefully modeled or analyzed to obtain reliable perfor-
mance predictions. Such dynamics include those associat-
ed with the control system, computational delays, A/D and D/
A conversion, actuators (e.g., thrusters, canards, tail fins),
missile structure (e.g., flexible modes), guidance system
(e.g., lead-lag compensation), and sensors (e.g., seeker ra-
dome, accelerometers, gyros). As one might expect, noise
and parasitic effects place a practical upper bound on the
achievable guidance system bandwidth. In practice, sta-
tistical Monte Carlo simulations (exploiting adjoint meth-
ods [8]) are used to evaluate performance prior to flight
testing. Such simulations consider the above as well as
acceleration/control saturation effects [14,15], typical tar-
get maneuvers, and worst-case target maneuvers.
4.1.6. TPNG and PPNG. Shukla and Mahapatra [23]
distinguish between true PNG (TPNG) and pure PNG
(PPNG). For missiles using TPNG, acceleration com-
mands are issued perpendicular to the LOS (as above).
For PPNG, acceleration commands are issued perpendic-
ular to the missile velocity vector. The advantages of
PPNG over traditional TPNG are highlighted [23]. In con-
trast to PPNG, TPNG requires (1) a forward acceleration
and deceleration capability (because the acceleration com-
mand is perpendicular to the LOS, not the missile velocity), (2)
unnecessarily large acceleration requirements, and (3)
restrictions on the initial conditions to ensure intercept.
4.2. Tactical Missile Maneuverability
Tactical radar guided missiles use a seeker with a radome.
The radome causes a refraction or bending of the incoming
radar wave, which, in turn, gives a false indication of tar-
get location. This phenomenon can cause problems if the
missile is highly maneuverable. One parameter that mea-
sures maneuverability is the so-called missile (pitch) turn-
ing rate frequency (or bandwidth) defined by [2]

$$\omega_\alpha \stackrel{\mathrm{def}}{=} \frac{\dot{\gamma}}{\alpha} \qquad (7)$$

where $\dot{\gamma}$ denotes the time rate of change of flight path angle
and $\alpha$ denotes angle of attack (AOA). $\omega_\alpha$ measures the rate
at which the missile rotates (changes flight path) by an
equivalent AOA. Assuming that the missile is modeled as
a flying cylinder [8] with length L and diameter D, it has
a lift coefficient

$$C_L = 2\alpha\left(1 + 0.75\,\frac{S_{plan}}{S_{ref}}\,\alpha\right) \qquad (8)$$
where $S_{plan} \approx LD$ and $S_{ref} = \pi D^2/4$. Noting that $a_m = V_m\dot{\gamma}$ is
the missile acceleration, $Q = \tfrac{1}{2}\rho V_m^2$ the dynamic pressure,
$W = mg$ the missile weight, and $\rho$ the density of air, it fol-
lows that

$$\omega_\alpha \stackrel{\mathrm{def}}{=} \frac{\dot{\gamma}}{\alpha} = \frac{a_m}{V_m \alpha} = \frac{g Q S_{ref} C_L}{W \alpha V_m} = \frac{\rho g V_m S_{ref}\left(1 + 0.75\,\dfrac{S_{plan}}{S_{ref}}\,\alpha\right)}{W} \qquad (9)$$

From this, it follows that $\omega_\alpha$ decreases with increasing
missile altitude and with decreasing missile speed, i.e.,
when aerodynamic effectiveness is low.
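The sketch below evaluates Eq. (9) for a notional flying-cylinder airframe at two flight conditions; the dimensions, weight, speed, angle of attack, and air densities are illustrative assumptions, not values from the text.

```python
# Turning-rate frequency of Eq. (9) for a notional flying cylinder.
# All numbers (L, D, W, Vm, alpha, air densities) are illustrative
# assumptions used only to show the altitude/speed trend.
import math

def omega_alpha(rho, Vm, L, D, W, alpha, g=9.81):
    S_ref = math.pi * D**2 / 4.0          # reference (cross-sectional) area
    S_plan = L * D                        # planform area of the cylinder
    return rho * g * Vm * S_ref * (1.0 + 0.75 * (S_plan / S_ref) * alpha) / W

L, D, W = 3.0, 0.2, 1500.0                # length (m), diameter (m), weight (N)
alpha = math.radians(10.0)                # angle of attack
for label, rho, Vm in [("sea level, fast", 1.225, 900.0),
                       ("high altitude, slow", 0.30, 500.0)]:
    print(f"{label}: omega_alpha = "
          f"{omega_alpha(rho, Vm, L, D, W, alpha):.2f} rad/s")
```

The high-altitude, low-speed case yields a markedly smaller $\omega_\alpha$, consistent with the statement above.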
4.3. Radome Effects: Homing-Robustness Tradeoffs
Let $\omega$ denote the guidance-control-seeker bandwidth. If $\omega$
is too small, homing is poor and large miss distances re-
sult. Typically, we desire $\omega_\alpha < \omega$ so that the closed-loop
system accommodates the capabilities of the missile. As
expected, problems can occur if $\omega$ is too large. This, in part,
is because of radome-aerodynamic feedback of the missile
acceleration $a_m$ into $\dot{\lambda}$. Assuming n-pole dynamics, it can
be shown that the missile acceleration $a_m$ takes the form

$$a_m = FG(\dot{\lambda} + R\dot{\theta}) = FG(\dot{\lambda} + RAa_m) = \frac{FG}{1 - FGRA}\,\dot{\lambda} \qquad (10)$$
where $G = NV_c$ represents the guidance system, $F =
[\omega/(s+\omega)]^n$ represents the flight control system, R is the
radome slope (can be positive or negative), and $A =
(s+\omega_\alpha)/(\omega_\alpha V_m)$ denotes the missile transfer function from
$a_m$ to pitch rate $\dot{\theta}$. For stability robustness, we require the
associated open-loop transfer function

$$L \stackrel{\mathrm{def}}{=} FGRA = NV_c \left(\frac{\omega}{s+\omega}\right)^{\!n} R\,\frac{s+\omega_\alpha}{\omega_\alpha V_m}$$

to satisfy an attenuation specification such as $|L(j\omega)| =
NV_c |R| [\omega/(\omega_\alpha V_m)] < \epsilon$ for some sufficiently small constant
$\epsilon > 0$. This, however, requires

$$\omega < \epsilon \left(\frac{V_m}{|R| N V_c}\right) \omega_\alpha \qquad (11)$$
for stability robustness. This implies that the bandwidth $\omega$
must be small when $V_m$ is small, $(|R|, N, V_c)$ are large, or
$\omega_\alpha$ is small (high altitude and low missile speed). In gen-
eral, therefore, designers must trade off homing perfor-
mance (bandwidth) and stability robustness properties.
Missiles using thrust vectoring (e.g., exoatmospheric mis-
siles) experience similar performance-stability robustness
tradeoffs.
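To get a feel for Eq. (11), the sketch below evaluates the allowable guidance bandwidth for a few radome slopes; every numerical value ($\epsilon$, $V_m$, $V_c$, N, $\omega_\alpha$) is an illustrative assumption.

```python
# Bandwidth upper bound of Eq. (11) for a range of radome slopes.
# epsilon, Vm, Vc, N, and omega_alpha are illustrative assumptions.
eps = 0.1                            # attenuation level required of |L(j*omega)|
Vm, Vc, N = 900.0, 1200.0, 4.0       # m/s, m/s, PNG gain
omega_alpha = 2.0                    # missile turning-rate frequency, rad/s

for R in (0.01, 0.02, 0.05):         # absolute radome slope |R|
    omega_max = eps * (Vm / (abs(R) * N * Vc)) * omega_alpha
    print(f"|R| = {R:.2f}: guidance bandwidth must satisfy "
          f"omega < {omega_max:.1f} rad/s")
```

Larger radome slopes force a smaller guidance bandwidth, which is the homing-robustness tradeoff described above.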
4.4. Augmented Proportional Navigation Guidance (APNG)
Advanced guidance laws reduce acceleration require-
ments and miss distance but require more information
(time to go, missile-target range, etc.) [18]. In an attempt
to take into account a constant target acceleration ma-
neuver $a_t$, guidance engineers developed augmented pro-
portional navigation guidance (APNG). For APNG, the commanded
acceleration is given by
$$a_{c,APNG}(t) = NV_c\dot{\lambda}(t) + \tfrac{1}{2}Na_t = a_{c,PNG}(t) + \tfrac{1}{2}Na_t \qquad (12)$$
or $a_{c,APNG}(t) = N\,\mathrm{ZEM}/t_{go}^2$, where $\mathrm{ZEM} = y + \dot{y}\,t_{go} + \tfrac{1}{2}a_t t_{go}^2$
is the associated zero-effort miss distance. Equation (12)
shows that APNG is essentially PNG with an extra term
to account for the maneuvering target. For this guidance
law, it can be shown (under the simplifying assumptions
given earlier) that
$$a_{c,APNG}(t) = \tfrac{1}{2}N\left(1 - \frac{t}{t_f}\right)^{N-2} a_t \qquad (13)$$
In contrast with PNG, this expression shows that the
resulting APNG acceleration requirements decrease rath-
er than increase with time. From the expression, it follows
that increasing N increases the initial acceleration re-
quirement but also reduces the time required for the ac-
celeration requirements to decrease to negligible levels.
For N = 4, the maximum acceleration requirement for
APNG, $a_{c,APNG}^{\max} = \tfrac{1}{2}Na_t$, is equal to that for PNG,
$a_{c,PNG}^{\max} = [N/(N-2)]\,a_t$. For larger N (e.g., N = 5), APNG requires
a larger maximum acceleration but less acceleration than
PNG for $t \ge 0.2632\,t_f$. Therefore, APNG is more fuel-effi-
cient for exoatmospheric applications than PNG. Finally,
it should be noted that APNG minimizes $\int_0^{t_f} a_c^2(t)\,dt$ sub-
ject to zero-miss distance, linear dynamics, and constant
target acceleration [8].
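The crossover claims above can be checked numerically from Eqs. (5) and (13). The sketch below does so for N = 5 with an arbitrary target acceleration; the crossover near 0.2632 t_f is a property of the formulas, not of the chosen $a_t$.

```python
# Compare the PNG (Eq. 5) and APNG (Eq. 13) acceleration profiles for a
# constant target maneuver and locate where APNG drops below PNG.
# N = 5 and a_t = 1.0 are illustrative choices (a_t cancels in the crossover).
import numpy as np

N, a_t = 5, 1.0
x = np.linspace(0.0, 1.0, 100001)                            # x = t / t_f
png = (N / (N - 2)) * (1.0 - (1.0 - x) ** (N - 2)) * a_t     # Eq. (5)
apng = 0.5 * N * (1.0 - x) ** (N - 2) * a_t                  # Eq. (13)

print("max PNG accel :", png.max())                  # N/(N-2) * a_t ~ 1.667
print("max APNG accel:", apng.max())                 # 0.5*N   * a_t = 2.5
crossover = x[np.argmax(apng <= png)]                # first index where APNG <= PNG
print(f"APNG <= PNG for t >= {crossover:.4f} * t_f") # about 0.2632
```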
4.5. PNG Command Guidance Implementation
To implement PNG in a command guidance setting (i.e.,
no seeker), a differentiating filter must be used to estimate
the LOS rate. As a result, command guidance is more sus-
ceptible to noise than homing guidance. This issue is ex-
acerbated as the engagement takes place further from the
tracking station, noise increases, and guidance degrades.
Within [24], the authors address command guided SAMs
by spreading the acceleration requirements over $t_{go}$. The
method requires estimates for target position, velocity, ac-
celeration, and $t_{go}$, but takes into account nonlinear en-
gagement geometry.
4.6. Advanced Guidance Algorithms
Classical PNG and APNG were initially based on intu-
ition. Modern or advanced guidance algorithms exploit
optimal control theory: optimizing a performance measure
subject to dynamic constraints. Even simple optimal con-
trol formulations of a missile-target engagement (e.g.,
quadratic acceleration measures) lead to a nonlinear
two-point boundary value problem requiring creative so-
lution techniques, such as approximate solutions to the
associated Hamilton-Jacobi-Bellman equation, a formi-
dable nonlinear partial differential equation [22]. Such a
formulation remains somewhat intractable given today's
computing power, even for command guidance imple-
mentations that can exploit powerful remotely situated
computers. Given this, researchers have sought alterna-
tive approaches to design advanced (near-optimal) guid-
ance laws. Within [19], Nesline and Zarchan present a
PNG-like control law that optimizes square-integral ac-
celeration subject to zero-miss distance in the presence of
a one-pole guidance-control-seeker system.
Even for advanced guidance algorithms (e.g., optimal
guidance methods), the effects of guidance and control
system parasitics must be carefully evaluated to ensure
nominal performance and robustness [19]. Advanced (op-
timal) guidance methods typically require additional in-
formation such as time-to-go, target acceleration, and
target model parameters (e.g., ballistic coefficient). Given
this, Kalman filter and extended Kalman filter (EKF)
techniques are often used to estimate the required infor-
mation. For optimal guidance (OG) algorithms to work
well, the estimates must be reliable [19]. Cloutier et al.
[17] give an overview of guidance and control techniques,
including a comprehensive set of references. Other ap-
proaches to guidance law design are discussed below.
4.7. Variants of PNG
Nesline and Zarchan [19] compare PNG, APNG, and op-
timal guidance (OG). The zero-miss distance (stability)
properties of PPNG are discussed by Oh [25]. A nonlinear
PPNG formulation for maneuvering targets is provided by
Yang and Yang [26]. Closed-form expressions for PPNG
are presented by Becker [27]. A more complex version of
PNG that is quasioptimal for large maneuvers (but re-
quires $t_{go}$ estimates) is discussed by Axelband and Hardy
[28]. Park and Kabamba [20] conducted 2D miss distance
analysis for a guidance law that combines PNG and
pursuit guidance. White et al. extend PNG by using an
outer LOS rate loop to control the terminal geometry of
the engagement (e.g., approach angle) [29]. Generalized
PNG, in which acceleration commands are issued normal
to the LOS with a bias angle, is addressed by Yuan and
Hsu [30]. Yang and Yang address 3D generalized PNG [31]
using a spherical coordinate system fixed to the missile to
better accommodate the spherical nature of seeker mea-
surements. Analytical solutions are presented without lin-
earization. Yang et al. present generalized guidance
schemes [32] that result in missile acceleration commands
rotating the missile perpendicular to a chosen (general-
ized) direction. When this direction is appropriately se-
lected, standard laws result. Time-energy performance
criteria are also examined. Capturability issues for vari-
ants of PNG are addressed in Ref. 33 and the references
cited therein. Yang and Yang [34] present a 2D framework
showing that many developed guidance laws are special
cases of a general law. The 3D case, utilizing polar coor-
dinates, is considered by Tyan [35].
4.8. Optimal Guidance (OG) Laws
Kalman filtering techniques are often combined with OG
laws. Such is the case when weaving targets are under
consideration. Weaving targets can cause large miss dis-
tances when classical and standard OG laws are used.
Tactical ballistic missiles, for example, can spiral or weave
into resonances as they enter the atmosphere as a result of
mass or configurational asymmetries. An OG law, based
on weaving (variable amplitude) sinusoidal target maneu-
vers, is developed by Aggarwal [36]. An EKF is used to
estimate the target maneuver weave frequency. Methods
for intercepting spiraling weaving tactical ballistic targets
are also presented in [37]. This includes an optimal weave
guidance law incorporating an EKF to estimate relative
position, relative velocity, target acceleration, target jerk
information, and weave frequency information.
4.9. Differential Game Guidance
Differential game-theoretic concepts have been addressed
[22]. In such formulations, a disturbance (e.g., target ma-
neuver) competes with a control (e.g., missile accelera-
tion command). The disturbance attempts to maximize a
performance index (e.g., miss distance) while the control
attempts to minimize the index. Shima and Golan [38]
provide an analytical study using a zero-sum pursuit-eva-
sion differential game formulation to develop endgame
guidance laws assuming that the interceptor has two con-
trols. Linear biproper transfer functions are used to rep-
resent the missile's control systems: a minimum-phase
transfer function for the canard system and a non-mini-
mum-phase (NMP) transfer function for the tail control
system. A first-order strictly proper transfer function is
used for the target dynamics. Bounds are assumed for
each of the abovementioned transfer function inputs
(i.e., reference commands). The optimal strategy is bang-
bang in portions of the game space. A switching time
exists prior to interception because of the NMP nature of
the tail control system. This feature requires good esti-
mates of $t_{go}$. $H_\infty$ theory [7] provides a natural differential
game-theoretic framework for developing guidance laws
as well as control laws.
4.10. Lyapunov-Based and Other Guidance Laws
Lyapunov methods have been very useful for deriving sta-
bilizing control laws for nonlinear systems [39]. Such
methods have been used to obtain guidance laws that re-
quire target aspect angle (relative to LOS) rather than
LOS rate [40] and that address maneuvering targets in 3D
A new guidance law, referred to as circular naviga-
tion guidance (CNG), steers the missile along a circular
arc toward the target [42]. Traditionally, the guidance and
control systems are designed separately. While this ap-
proach has worked well for years, increasing performance
requirements affirm the value of an integrated guidance
and control system design methodology. Integrated guid-
ance and control issues have been addressed within a po-
lar coordinate framework [43]. New advanced guidance
laws may benefit from linear parameter varying (LPV)
[44] and state-dependent Riccati equation (SDRE) [45]
concepts.
4.11. Nonlinear State Estimation: Extended Kalman Filter
As discussed earlier, OG laws often require missile-target
model state/parameter estimates, such as relative posi-
tion, velocity of target, acceleration of target, and $t_{go}$. An
extended Kalman filter (EKF) is often used to obtain the
required estimates. This involves using quasilinearized
dynamics to solve the associated matrix Riccati differen-
tial equation for a covariance matrix that is used with a
model-based estimator, mimicking the original nonlinear
dynamics, to generate quasioptimal estimates. It is well
known that poor estimates for $t_{go}$, for example, can result
in large miss distances and significant capture region re-
duction [19]. Estimating $t_{go}$ as $R/V_c$ is valid only if $V_c$ is
nearly constant. A recursive (noniterative) algorithm for
$t_{go}$ estimates, which can be used with OG laws, is provided
by Tahk et al. [46].
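As a concrete, deliberately simplified illustration of the kind of estimator described above, the sketch below runs a linear Kalman filter with a constant-acceleration target model on noisy relative-position measurements along one axis. The model, noise levels, and trajectory are assumptions; a real implementation would be an EKF on the full nonlinear engagement dynamics.

```python
# One-axis linear Kalman filter estimating relative position, relative
# velocity, and target acceleration from noisy position measurements.
# A simplified stand-in for the EKF described in the text; the noise
# levels and the simulated trajectory are illustrative assumptions.
import numpy as np

dt = 0.01
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1, dt],
              [0, 0, 1]])               # constant-acceleration model
H = np.array([[1.0, 0.0, 0.0]])          # only position is measured
Q = np.diag([0.0, 0.0, 5.0]) * dt        # process noise (accel random walk)
Rm = np.array([[25.0]])                  # measurement noise variance (m^2)

x_hat = np.zeros(3)                      # state estimate [y, y_dot, a_t]
P = np.diag([100.0, 100.0, 100.0])

rng = np.random.default_rng(1)
y, y_dot, a_t = 500.0, -40.0, 9.0        # "true" values (made up)
for k in range(500):
    # Simulate truth and a noisy measurement.
    y += y_dot * dt + 0.5 * a_t * dt**2
    y_dot += a_t * dt
    z = y + rng.normal(0.0, 5.0)

    # Kalman predict/update.
    x_hat = F @ x_hat
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + Rm
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + (K @ (z - H @ x_hat)).ravel()
    P = (np.eye(3) - K @ H) @ P

print("estimated [y, y_dot, a_t]:", np.round(x_hat, 1))
print("true      [y, y_dot, a_t]:", [round(y, 1), round(y_dot, 1), a_t])
```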
To develop useful estimation techniques, much atten-
tion has been placed on modeling the target. Initially, re-
searchers used simple uncorrelated target acceleration
models that yielded misleading results. This led to the
use of simple dynamical models: point mass and more
complex. Both Cartesian and spherical coordinate formu-
lations have been investigated [47]; the latter better re-
flect the radial nature of an engagement. Single- and
multiple-modeled EKFs have been used [48] to address
the fact that no single model captures the dynamics that
may arise. Low-observability LOS measurements make
the problem particularly challenging [48]. Target observ-
ability is explored [49] under PNG and noise-free angle-
only measurements in 2D. Williams and Friedland pre-
sent a method for obtaining required estimates for APNG
(e.g., $y$, $\dot{y}$, $a_t$, $t_{go}$) [50]. Since no single (tractable) model and
statistics can be used to accurately capture the large set of
possible maneuvers by today's modern tactical fighters,
adaptive filtering techniques have been employed. Such
filters attempt to adjust the filter bandwidth to reflect the
target maneuver. Some researchers have used classical
Neyman-Pearson hypothesis testing to detect bias in the
innovations to appropriately reinitialize the filter. Thresh-
old levels must be judiciously selected to avoid false de-
tections that result in switching to an inappropriate
estimator.
4.12. Long-Range Exoatmospheric Missions: Weight
Considerations
For long-range exoatmospheric missions approaching in-
tercontinental ranges, orbital speeds are required (e.g.,
approximately 20,000 ft/s, or 13,600 mi/h, or 4 mi/s). To study such inter-
ceptors, two new concepts are essential. Fuel-specific im-
pulse, denoted $I_{sp}$, is defined as the ratio of thrust to the
time rate of change of total missile weight. It corresponds
to the time required to generate a weight-equivalent
amount of thrust. Fuel-efficient missiles have higher
fuel-specific impulses. Typical tactical missile fuel-specif-
ic impulses lie in the range of 200-300 s. Fuel mass frac-
tion, denoted mf, is defined as the ratio of propellant
weight $W_{prop}$ to total weight $W_T = W_{prop} + W_{structure} +
W_{payload}$. SAMs, for example, have a larger fuel mass frac-
tion than do AAMs because SAMs must travel through the
denser air at lower altitudes. For fuel-specific impulses
less than 300 s, large fuel mass fractions (approaching 0.9)
are required for exoatmospheric applications. A conse-
quence of this is that it takes considerable total booster
weight to propel even small payloads to near-orbital
speeds. More precisely, it can be shown [8, pp. 265-267]
that the weight of the propellant required for a single-
stage booster to impart a speed change $\Delta V$ to a payload
weighing $W_{payload}$ is given by
$$W_{prop} = W_{payload}\,\frac{\overline{mf}\left[\exp\!\left(\dfrac{\Delta V}{g I_{sp}}\right) - 1\right]}{1 - \left(1 - \overline{mf}\right)\exp\!\left(\dfrac{\Delta V}{g I_{sp}}\right)} \qquad (14)$$

where g denotes the acceleration due to gravity near
Earth's surface and $\overline{mf} \stackrel{\mathrm{def}}{=} W_{prop}/(W_{prop} + W_{structure})$ de-
notes an (approximate) fuel mass fraction that neglects
the weight of the payload $W_{payload}$. Staging can be used to
reduce total booster weight for a given fuel-specific im-
pulse $I_{sp}$ and (approximate) fuel mass fraction $\overline{mf}$. Effi-
cient propellant expenditure for exoatmospheric
intercepts has been addressed [51]. 3D midcourse guid-
ance for SAMs intercepting nonmaneuvering high-alti-
tude ballistic targets has also been addressed [52].
Neural networks are used to approximate (store) optimal
vertical guidance commands and estimate $t_{go}$. Feedback
linearization [39] is used for lateral guidance commands.
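Equation (14) can be exercised directly. The sketch below computes the propellant weight needed to give a 1000-lb payload a near-orbital speed change for two specific impulses; the payload weight, $\Delta V$, and $\overline{mf}$ are illustrative values chosen to be consistent with the ranges quoted above.

```python
# Single-stage propellant weight from Eq. (14).  Payload weight, delta-V,
# and fuel mass fraction are illustrative values in English units
# (g = 32.2 ft/s^2, delta-V in ft/s, weights in lb).
import math

def prop_weight(W_payload, dV, Isp, mf_bar, g=32.2):
    E = math.exp(dV / (g * Isp))
    denom = 1.0 - (1.0 - mf_bar) * E
    if denom <= 0.0:
        return None          # a single stage cannot reach this delta-V
    return W_payload * mf_bar * (E - 1.0) / denom

W_payload = 1000.0        # lb
dV = 20000.0              # ft/s (near-orbital speed change)
mf_bar = 0.90             # approximate fuel mass fraction

for Isp in (250.0, 300.0):
    Wp = prop_weight(W_payload, dV, Isp, mf_bar)
    if Wp is None:
        print(f"Isp = {Isp:.0f} s: single stage infeasible; staging required")
    else:
        print(f"Isp = {Isp:.0f} s: W_prop ~ {Wp:,.0f} lb")
```

The lower specific impulse case returns no single-stage solution at all, which illustrates numerically why staging is attractive for these missions.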
4.13. Acceleration Limitations
Endoatmospheric missile acceleration is limited by alti-
tude, speed, structural, stall AOA, and drag con-
straints: stall AOA at high altitudes and structural
limitations at low altitudes [see Eq. (8)]. Exoatmospheric
interceptor acceleration is limited by thrust-to-weight ra-
tios and flight time; the latter is due to the fact that when
the fuel is exhausted, exoatmospheric missiles cannot
maneuver.
4.14. THAAD Systems
More recent research efforts have focused on the develop-
ment of THAAD systems. Calculations show that high-al-
titude ballistic intercepts are best made head-on so that
there is little target deceleration perpendicular to the LOS
[8]. This is because such decelerations appears as a target
maneuver to the interceptor. EKF methods have been
suggested for estimating target ballistic coefcients and
state information to be used in OG laws. Estimating bal-
listic coefcients
b
def
W=S
ref
C
D;0
; 15
where C
D,0
is the zero-lift drag coefcient] is particularly
difcult at high altitudes where there is little drag
a
drag
1=2brgV
2
m
. Also, the high closing velocity of a
ballistic target engagement significantly decreases the
maximum permitted guidance system bandwidth for ra-
dome slope stability. Noise issues significantly exacerbate
the ballistic intercept problem.
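The sensitivity noted above can be seen by evaluating the drag deceleration $a_{drag} = \rho g V_m^2/(2\beta)$ at two altitudes; the ballistic coefficient, speed, and air densities below are illustrative assumptions, not values from the text.

```python
# Drag deceleration a_drag = rho * g * Vm^2 / (2 * beta) for a reentering
# ballistic target.  beta, Vm, and the air densities are illustrative.
g = 9.81                     # m/s^2
beta = 10000.0               # ballistic coefficient, N/m^2 (assumed)
Vm = 3000.0                  # target speed, m/s (assumed)

for label, rho in [("high altitude", 1.0e-3), ("lower altitude", 8.9e-2)]:
    a_drag = rho * g * Vm**2 / (2.0 * beta)
    print(f"{label}: a_drag ~ {a_drag:.2f} m/s^2 ({a_drag / g:.2f} g)")
```

With almost no drag at high altitude, there is very little information in the trajectory from which to infer $\beta$, which is why the estimation problem is hard there.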
5. FUTURE DEVELOPMENTS
Future developments will focus on theater-class ballis-
tic missiles, guided projectiles, miniature kill vehicles,
space-based sensors for missile defense, and boost-phase
interceptors. The future of missile guidance depends to
a large extent on the ongoing reinterpretation of the
ABM treaty between the United States and the former
Soviet Union. After September 11, 2001, work was ini-
tiated on the development of mininukes for underground
bunkers. The need for guided missiles that permit
precision strikes with minimal collateral damage was
also reaffirmed.
Acknowledgment
This research has been supported, in part, by a 1998 White
House Presidential Excellence Award from President Clin-
ton, by National Science Foundation (NSF) Grants
0231440 and 9851422, by the Western Alliance to Expand
Student Opportunities (WAESO), Center for Research on
Education in Science, Mathematics, Engineering and
Technology (CRESMET), Boeing A.D. Welliver Faculty
Fellowship, Intel, and Microsoft. For additional informa-
tion, please contact aar@asu.edu.
BIBLIOGRAPHY
1. M. J. Neufeld, The Rocket and the Reich, Harvard Univ. Press,
Cambridge, MA, 1995.
2. M. W. Fossier, The development of radar homing missiles, J.
Guid. Control Dynam. 7:641-651 (Nov.-Dec. 1984).
3. W. Haeussermann, Developments in the field of automatic
guidance and control of rockets, J. Guid. Control Dynam.
4(3):225-239 (May-June 1981).
4. J. H. Blakelock, Automatic Control of Aircraft and Missiles,
McGraw-Hill, New York, 1992, p. 229.
5. D. E. Williams, B. Friedland, and A. N. Madiwale, Modern
control theory for design of autopilots for bank-to-turn mis-
siles, J. Guid. Control 10(4):378-386 (July-Aug. 1987).
6. C. F. Lin, Modern Navigation, Guidance, and Control Process-
ing, Prentice-Hall, Englewood Cliffs, NJ, 1991, pp. 14, 184.
7. K. Zhou and J. C. Doyle, Essentials of Robust Control, Pren-
tice-Hall, Upper Saddle River, NJ, 1998.
8. P. Zarchan, Tactical and Strategic Missile Guidance, AIAA
Inc., 1990.
9. N. Macknight, Tomahawk Cruise Missile, Motorbooks Inter-
national, 1995.
10. A. Arrow, An Analysis of Aerodynamic Requirements for
Coordinated Bank-to-Turn Missiles, NASA CR 3544, 1982.
11. J. J. Feeley and M. E. Wallis, Bank-to-Turn Missile/Target
Simulation on a Desk Top Computer, The Society for Com-
puter Simulation International, 1989, pp. 79-84.
12. F. W. Reidel, Bank-to-Turn Control Technology Survey for
Homing Missiles, NASA R 3325, 1980.
13. M. J. Kovach, T. R. Stevens, and A. Arrow, A bank-to-turn
autopilot design for an advanced air-to-air interceptor, Proc.
AIAA GNC Conf., Monterey, CA, Aug. 1987, pp. 1346-1353.
14. A. A. Rodriguez and J. R. Cloutier, Performance enhancement
for a missile in the presence of saturating actuators, AIAA J.
Guid. Control Dynam. 19:38-46 (Jan.-Feb. 1996).
15. A. A. Rodriguez and Y. Wang, Performance enhancement for
unstable bank-to-turn (BTT) missiles with saturating actua-
tors, Int. J. Control 63(4):641-678 (1996).
16. A. A. Rodriguez and M. Sonne, Evaluation of missile guidance
and control systems on a personal computer, SIMULATION,
J. Soc. Comput. Simul. 68(6):363-376 (1997).
17. J. R. Cloutier, J. H. Evers, and J. J. Feeley, Assessment of air-
to-air missile guidance and control technology, IEEE Control
Syst. Mag. 27-34 (Oct. 1989).
18. T. L. Riggs and P. L. Vergaz, Advanced Air-to-Air Missile
Guidance Using Optimal Control and Estimation, AFATL-TR-
81-56, Air Force Armament Laboratory, Eglin AFB, Florida.
19. F. W. Nesline and P. Zarchan, A new look at classical versus
modern homing guidance, J. Guid. Control 4(1):78-85
(Jan.-Feb. 1981).
20. J. Park and P. T. Kabamba, Miss distance analysis in a new
guidance law, Proc. 1999 American Control Conf., June 2-4,
1999, Vol. 4, pp. 2945-2949.
21. J. Waldmann, Line-of-sight rate estimation and linearizing
control of an imaging seeker in a tactical missile guided by
proportional navigation, IEEE Trans. Control Syst. Technol.
10(4):556-567 (July 2002).
22. A. E. Bryson and Y. C. Ho, Applied Optimal Control: Optimi-
zation, Estimation, and Control, Hemisphere Publishing Co.,
1975.
23. U. S. Shukla and P. R. Mahapatra, The proportional naviga-
tion dilemma: pure or true? IEEE Trans. Aerospace Electron.
Syst. 26(2):382-392 (March 1990).
24. D. Ghose, B. Dam, and U. R. Prasad, A spreader acceleration
guidance scheme for command guided surface-to-air missiles,
Proc. IEEE 1989 Nat. Aerospace and Electronics Conf., NAE-
CON 1989, May 22-26, 1989, Vol. 1, pp. 202-208.
25. J. H. Oh, Solving a nonlinear output regulation problem: Zero
miss distance of pure PNG, IEEE Trans. Automatic Control
47(1):169-173 (Jan. 2002).
26. C. D. Yang and C. C. Yang, Optimal pure proportional
navigation for maneuvering targets, IEEE Trans. Aerospace
Electron. Syst. 33(3):949-957 (July 1997).
27. K. Becker, Closed-form solution of pure proportional navigation,
IEEE Trans. Aerospace Electron. Syst. 26(3):526-533 (1990).
28. E. Axelband and F. Hardy, Quasi-optimum proportional nav-
igation, IEEE Trans. Automatic Control 15(6):620-626 (Dec.
1970).
29. B. A. White, R. Zbikowski, and A. Tsourdos, Aim point guid-
ance: An extension of proportional navigation to the control of
terminal guidance, Proc. 2003 American Control Conf., June
4-6, 2003, Vol. 1, pp. 384-389.
30. P. J. Yuan and S. C. Hsu, Solutions of generalized proportional
navigation with maneuvering and nonmaneuvering targets,
IEEE Trans. Aerospace Electron. Syst. 31(1):469-474 (Jan.
1995).
31. C. D. Yang and C. C. Yang, Analytical solution of generalized
3D proportional navigation, Proc. 34th IEEE Conf. Decision
and Control, Dec. 13-15, 1995, Vol. 4, pp. 3974-3979.
32. C. D. Yang, F. B. Hsiao, and F. B. Yeh, Generalized guidance
law for homing missiles, IEEE Trans. Aerospace Electron.
Syst. 25(2):197-212 (March 1989).
33. A. Chakravarthy and D. Ghose, Capturability of realistic gen-
eralized true proportional navigation, IEEE Trans. Aerospace
Electron. Syst. 32(1):407-418 (Jan. 1996).
34. C. D. Yang and C. C. Yang, A unified approach to proportional
navigation, IEEE Trans. Aerospace Electron. Syst. 33(2):557-
567 (April 1997).
35. F. Tyan, An unified approach to missile guidance laws: A 3D
extension, Proc. 2002 American Control Conf., May 8-10,
2002, Vol. 2, pp. 1711-1716.
36. R. K. Aggarwal, Optimal missile guidance for weaving tar-
gets, Proc. 35th IEEE Conf. Decision and Control, Dec. 11-13,
1996, Vol. 3, pp. 2775-2779.
37. P. Zarchan, Tracking and intercepting spiraling ballistic mis-
siles, Proc. IEEE Position Location and Navigation Symp.,
March 13-16, 2000, pp. 277-284.
38. T. Shima and O. M. Golan, Bounded differential games guid-
ance law for a dual controlled missile, Proc. 2003 American
Control Conf., June 4-6, 2003, Vol. 1, pp. 390-395.
39. H. Khalil, Nonlinear Systems, 2nd ed., Prentice-Hall,
Englewood Cliffs, NJ, 1996.
40. T. L. Vincent and R. W. Morgan, Guidance against maneu-
vering targets using Lyapunov optimization feedback control,
Proc. American Control Conf., May 8-10, 2002, pp. 215-220.
41. Z. Youan, H. Yunan, and G. Wenjin, Lyapunov stability based
three-dimensional guidance for missiles against maneuvering
targets, Proc. 4th World Congress on Intelligent Control and
Automation, June 10-14, 2002, Vol. 4, pp. 2836-2840.
42. I. R. Manchester and A. V. Savkin, Circular navigation guid-
ance law for precision missile/target engagements, Proc. 41st
IEEE Conf. Decision and Control, Dec. 10-13, 2002, Vol. 2, pp.
1287-1292.
43. S. N. Balakrishnan, D. T. Stansbery, J. H. Evers, and J. R.
Cloutier, Analytical guidance laws and integrated guidance/
autopilot for homing missiles, Proc. 2nd IEEE Conf. Control
Applications, Sept. 13-16, 1993, Vol. 1, pp. 27-32.
44. D. B. Ridgely and M. B. McFarland, Tailoring theory to prac-
tice in tactical missile control, IEEE Control Syst. Mag.
19(6):49-55 (Dec. 1999).
45. J. S. Shamma and J. R. Cloutier, Existence of SDRE stabiliz-
ing feedback, IEEE Trans. Automatic Control 48(3):513-517
(March 2003).
46. M. J. Tahk, C. K. Ryoo, and H. Cho, Recursive time-to-go
estimation for homing guidance missiles, IEEE Trans. Aero-
space Electron. Syst. 38(1):13-24 (Jan. 2002).
47. C. N. D'Souza, M. A. McClure, and J. R. Cloutier, Spherical
target state estimators, Proc. American Control Conf., June
29-July 1, 1994, Vol. 2, pp. 1675-1679.
48. C. Rago and R. K. Mehra, Robust adaptive target state esti-
mation for missile guidance using the interacting multiple
model Kalman filter, Proc. IEEE 2000 Position Location and
Navigation Symp., March 13-16, 2000, pp. 355-362.
49. M. J. Tahk, H. Ryu, and E. J. Song, Observability character-
istics of angle-only measurement under proportional naviga-
tion, Proc. 34th SICE Annual Conf. Int. Session Papers, July
26-28, 1995, pp. 1509-1514.
50. D. E. Williams and B. Friedland, Target maneuver detection
and estimation [missile guidance], Proc. 27th IEEE Conf.
Decision and Control, Dec. 7-9, 1988, Vol. 1, pp. 851-855.
51. S. Brainin and R. McGhee, Optimal biased proportional
navigation, IEEE Trans. Automatic Control 13(4):440-442
(Aug. 1968).
52. E. J. Song and M. J. Tahk, Three-dimensional midcourse
guidance using neural networks for interception of ballistic
targets, IEEE Trans. Aerospace Electron. Syst. 38(2):404-414
(April 2002).
MIXED-SIGNAL CMOS RF INTEGRATED
CIRCUITS
MICHIEL STEYAERT
PATRICK REYNAERT
KULeuven ESAT-MICAS
Leuven, Belgium
1. INTRODUCTION
The world of wireless communication and its applications
have begun to grow rapidly. The driving force behind this
lies in the introduction of digital coding and digital signal
processing in wireless communications. This digital revo-
lution is driven by the development of high-performance,
low-cost CMOS technologies that allow for the integration
of an enormous amount of digital functions on a single die.
As CMOS is mainly a digital technology, placing all digital
functions on a single die is merely a matter of handling the
system complexity. To achieve a truly single-chip solution,
the analog part also has to be integrated in CMOS.
The telecommunication market is generally considered
to be a business where a single-chip solution in a cheap
(CMOS) technology results in a huge cost benefit; the
main reason is the high volume of user equipment. Fur-
thermore, in these systems, small size, low board area, low
power consumption, and high talktime are crucial, and
therefore it is of utmost importance to achieve a high level
of integration. This trend toward single-chip, fully inte-
grated systems can clearly be seen in the development of
RF systems such as GSM, EDGE, Bluetooth, and wireless
LAN. In all these systems, the analog part mainly consists
of an RF front end.
Deep submicrometer technologies allow for the opera-
tion frequency of CMOS circuits above 1 GHz, which opens
the way to fully integrated RF systems. Several research
groups have developed high-performance downconverters,
low-phase-noise voltage-controlled oscillators, and dual-
modulus prescalers in standard CMOS technologies. The
research has already demonstrated fully integrated re-
ceivers and synthesizers with no external components, nor
tuning or trimming. Further research on low-noise ampli-
fiers, power amplifiers, and synthesizers has resulted in
fully integrated CMOS RF transceivers for DCS1800,
Bluetooth, and wireless LAN [13].
In this article, we will focus on the evolution from the
well-known heterodyne receiver topology to the zero- and
low-IF topology used in modern receivers. We will also
discuss the interaction between the analog part and the
digital part regarding substrate noise and decoupling.
2. TECHNOLOGICAL ASPECTS OF MIXED-SIGNAL DESIGN
2.1. Deep Submicrometer MOS Transistors
Because of the never-ending progress in technology down-
scaling and the requirement to achieve a higher degree of
integration for DSP circuits, deep submicrometer technol-
ogies are now considered as standard CMOS technologies.
Transistors with $f_T$ values near 100 GHz have been dem-
onstrated in 0.1-µm technologies [4,5]. However, the speed
increase of deep submicrometer technologies is reduced by
the parasitic capacitance of the transistor, meaning the
gate-drain overlap capacitance and the drain-bulk junc-
tion capacitance. This can clearly be seen in Fig. 1 in the
comparison for different technologies between $f_T$ and
$f_{3dB}$, the latter being defined as the 3-dB point of a diode-con-
nected transistor [6]. The $f_{3dB}$ is more important for an-
alog design because it reflects the speed limitation of a
transistor in a practical configuration, namely, a simple
two-stage common-source amplifier. As can be seen in Fig.
1, the $f_T$ rapidly increases, but for real circuit designs
($f_{3dB}$) the speed improvement is only moderate.
2.2. Integration of Passive Components
In integrated CMOS RF circuits [7,8] it becomes clear that
the transistor will not be the limiting factor but rather the
passive components and packaging will be. Since the RF
signals have to come off the chip sooner or later, and since
the RF antenna signal has to get into the chip, any PCB, packaging, or bondwire parasitic, in combination with the ESD (electrostatic discharge) protection network and package pin capacitances, will strongly affect and
degrade the RF signal. Another important aspect in
mixed-signal design is the quality factor of the passive el-
ements. High-quality metalinsulatormetal capacitors
and low-resistance top-metal layers for inductors are pro-
cess options that add to the total mask count, and a high
amount of passive elements will increase the total die
area. An alternative is to separate the processing of the
active devices from the processing of the passive compo-
nents. An example is given in Ref. 9, where a low-cost
passive integration and packaging technology is combined
with a high-performance CMOS technology. These solu-
tions are feasible as long as the parasitics due to the
interconnection of the two dies are small.
3. ARCHITECTURAL ASPECTS
In mixed-signal design, the aim is to integrate both the
digital and the analog parts on one single die. However,
many analog front-end architectures are not well suited
for integration. Therefore, new architectures have been
(re)invented that allow a fully integrated solution. The
heterodyne receiver, for example, is the best-known and
most frequently used receiver topology. In this receiver the
desired signal is downconverted to a relatively high inter-
mediate frequency. Very high performances can be
achieved with the heterodyne topology. However, the
main problem is the poor degree of integration that can
be achieved as every stage requires going off chip and
requires the use of a discrete bandpass filter.
The zero-IF receiver (see Fig. 2) has been introduced as
an alternative that can achieve a much higher degree of
integration because this topology uses a direct quadrature
downconversion of the desired signal. Theoretically, no
discrete high-frequency bandpass filter is required, allowing for the realization of a fully integrated receiver [10]. However, the zero-IF receiver is intrinsically very sensi-
tive to parasitic baseband signals such as DC-offset volt-
ages and self-mixing. These drawbacks have kept the
zero-IF receiver from being used on a large scale in new
wireless applications. It has, however, been shown that
with the use of dynamic nonlinear correction algorithms,
implemented in the DSP, the zero-IF topology can be used for high-performance applications such as GSM and DECT [11,12].

Figure 2. Zero-IF receiver principle: quadrature mixing of the wanted signal at f_LO down to baseband, followed by a lowpass filter; LO feedthrough and DC offset appear in the wanted band.

Figure 1. Comparison between f_T and f_3dB (GHz) as a function of effective gate length (µm).

In such a system, the performance of the
analog architecture is improved by the use of digital al-
gorithms, which clearly demonstrates another benet of a
single-chip mixed-signal approach.
New receiver topologies such as the low-IF receiver [3,13,28] have been introduced. The low-IF receiver performs a downconversion from the antenna frequency directly to a low IF, as the term already indicates (i.e., in the range of a few hundred kHz; see Fig. 3). Downconversion is done in quadrature, and the mirror-signal suppression is performed at low frequency, after downconversion, by a polyphase filter. This polyphase filter is a complex filter consisting of two signal paths, an in-phase path and a quadrature path, which allows for an asymmetric filter characteristic: a passband for positive frequencies and suppression at the corresponding negative frequencies. The polyphase filter can be implemented either as an analog filter [13] or in the DSP, together with the other digital functions. This again demonstrates how the interaction between the analog architecture and the digital part improves the performance of the system. The low-IF receiver is closely related to the zero-IF receiver, since it can also be fully integrated and uses a single-stage direct downconversion. The difference is that the low-IF receiver does not use baseband operation, resulting in a total immunity to parasitic baseband signals and thereby resolving the main disadvantage of the zero-IF receiver. By the use of a double-quadrature structure, converters requiring neither external components nor any tuning or trimming have been demonstrated [28].
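To make the asymmetry of such a complex filter concrete, the following minimal Python sketch implements a one-pole complex filter whose single pole sits at +f_IF only, and measures how much a wanted channel at +f_IF is favored over its mirror at −f_IF. It is not taken from the cited designs; the sample rate, IF, and pole radius are illustrative assumptions.

import numpy as np

fs = 4e6          # assumed sample rate after the quadrature mixer
f_if = 200e3      # assumed low IF of a few hundred kilohertz
r = 0.98          # pole radius, sets the bandwidth of the complex bandpass

n = np.arange(8000)
wanted = np.exp(2j * np.pi * f_if * n / fs)     # desired channel at +f_IF
mirror = np.exp(-2j * np.pi * f_if * n / fs)    # mirror signal at -f_IF
x = wanted + mirror                             # complex I/Q signal into the filter

pole = r * np.exp(2j * np.pi * f_if / fs)       # single complex pole at +f_IF
y = np.zeros_like(x)
for k in range(1, len(x)):
    y[k] = pole * y[k - 1] + (1 - r) * x[k]     # complex one-pole IIR section

spec = np.abs(np.fft.fft(y[2000:]))             # skip the start-up transient
freqs = np.fft.fftfreq(len(y) - 2000, 1 / fs)
p_pos = spec[np.argmin(np.abs(freqs - f_if))] ** 2
p_neg = spec[np.argmin(np.abs(freqs + f_if))] ** 2
print(f"mirror suppression: {10 * np.log10(p_pos / p_neg):.1f} dB")

With these values the mirror ends up roughly 30 dB below the wanted channel; a practical polyphase filter cascades several such complex sections, in the analog domain or in the DSP, to obtain a sharper asymmetric characteristic.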
4. ANALOG CIRCUIT DESIGN IN A DIGITAL CMOS
TECHNOLOGY
The general transceiver architecture, depicted in Fig. 4,
requires analog functions implemented in a digital tech-
nology. As mentioned before, due to the high f_T and f_3dB of current technologies, operating frequencies of 5 GHz and above become possible. The low-noise amplifier, the power amplifier, and the synthesizer are the most critical analog
functions since the overall performance of the transceiver
will depend mainly on the performance of these analog
building blocks.
4.1. The Low-Noise Amplifier
The low-noise amplifier is a very critical building block, since mainly this block will determine the overall noise figure and linearity of the receiver. Furthermore, in deep submicrometer technologies, ESD issues are becoming very important, and this will have a great influence on the LNA design. Most RF CMOS LNA topologies use single-stage inductive degeneration techniques [14,15] to provide a resistive input impedance to the antenna. The input of the LNA is usually protected against ESD by two reverse-biased diodes. Care has to be taken, since those protection networks increase the noise, the capacitive input load, and as such the power drain of the circuit. In Fig. 5, an example of a 0.8-dB noise figure LNA is presented [15]. The LNA has been measured in its nominal 9-mW regime, drawing 6 mA from a 1.5-V power supply. The forward
gain (S21) reaches more than 20 dB at 1.23 GHz. At the same time, the reverse isolation is better than 31 dB. The noise figure of the LNA, in nominal operation, reaches a minimum of 0.79 dB at 1.24 GHz. An HBM (human body model) test has shown that the LNA is able to withstand positive ESD pulses up to 0.6 kV and negative ESD pulses up to 1.4 kV, surpassing the 0.5-kV specification.

Figure 3. Low-IF receiver principle: quadrature mixing of the wanted signal near f_LO to a low IF, followed by a polyphase filter; LO feedthrough and DC offset fall outside the wanted channel.

Figure 4. General transceiver architecture: antenna switch, LNA, quadrature downconversion mixers, polyphase filter, A/D converters, DSP, D/A converters, quadrature upconversion mixers, PA, and PLL with quadrature generator.
4.2. The Voltage-Controlled Oscillator
The local oscillator is responsible for the correct frequency
selection in up- and downconverters. The signal level of
the desired receive channel can be very small, whereas
adjacent channels can have very large power levels.
Therefore, the phase noise specifications for the local-oscillator signal are very critical. Usually, the local oscillator is realized as a phase-locked loop. The very rigid specifications are reflected in the design of the voltage-controlled oscillator (VCO). For the realization of a gigahertz VCO in a submicrometer CMOS technology, two options exist: either ring oscillators or oscillators based on the resonance frequency of an LC tank. The inductor in this LC tank can be implemented as an active inductor or a passive one. It has been shown that for ring oscillators as well as active LC oscillators [16], the phase noise is inversely related to the power consumption. Therefore, the only viable solution to a low-power, low-phase-noise VCO is an LC oscillator with a passive inductor. As could be expected, the limitation in this oscillator is the integrated passive inductor. For extremely low phase noise requirements, the concept of bondwire inductors has been investigated [16,17]. Since a bondwire has a parasitic inductance of approximately 1 nH/mm and a very low series resistance, very-high-Q inductors can be created. The most elegant solution is the use of a spiral coil on a standard silicon substrate, without any modifications. In combination with fractional-N techniques, low-phase-noise PLL circuits can be obtained. For example, in Fig. 6 a fully integrated synthesizer in a 0.25-µm CMOS technology is presented [18]. The measured phase noise is less than −120 dBc/Hz at 600-kHz offset, while the reference and fractional spurious signals are respectively 70 dB and 100 dB below the carrier signal.
4.3. A/D Conversion
After downconversion, an analog-to-digital converter is
the interface to the DSP. Once the signal is downconverted
to DC or low IF, there is the desired signal, along with
unwanted blockers that are significantly higher than the
signal itself. Digitizing that combination of signals re-
quires a high-dynamic range analog-to-digital converter
with excellent noise and spurious-free dynamic range per-
formance. As an example, in the case of the Global System for Mobile Communications (GSM), the blocker at 3-MHz offset from the carrier can be 76 dB above the signal, while the blocker at 600-kHz offset is 56 dB above the signal. This sets the upper limit of the A/D converter. Furthermore, at the reference sensitivity level, the desired signal at the A/D input would be 1 mV (−60 dBV). Since the quantization noise floor must be low enough not to degrade the noise figure performance, the required noise floor would be −80 dBV. On the other hand, CDMA and wideband code-division multiple access have much lower signal-to-noise requirements, so the tolerable quantization noise floor is relaxed for these applications. A trend can be observed to shift the A/D conversion more toward the antenna. This clearly enables the use of multistandard radios and reduces the chip area of the analog interface. The cost, however, is a higher power consumption of the A/D converter.
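As a rough illustration of these numbers, the sketch below estimates the dynamic range and ideal resolution such an A/D converter would need. It is my own back-of-the-envelope calculation, using the figures quoted above together with the standard 6.02N + 1.76-dB rule for an ideal N-bit quantizer (no oversampling or noise-shaping gain assumed).

import math

signal_dbv = -60.0       # desired signal at reference sensitivity (1 mV)
blocker_rel_db = 76.0    # worst-case blocker at 3-MHz offset, relative to the signal
noise_floor_dbv = -80.0  # allowed quantization noise floor

blocker_dbv = signal_dbv + blocker_rel_db          # +16 dBV
dynamic_range_db = blocker_dbv - noise_floor_dbv   # 96 dB

bits = math.ceil((dynamic_range_db - 1.76) / 6.02) # ideal-quantizer rule of thumb
print(f"required dynamic range: {dynamic_range_db:.0f} dB, about {bits} ideal bits")

This is exactly why shifting the A/D conversion toward the antenna, as mentioned above, is paid for with a substantially higher converter power consumption.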
4.4. The Power Amplifier
In the transmitter path, the power amplifier is the most challenging block for integration in a CMOS technology. Most of the CMOS transceivers reported in open literature deliver power levels in the range of 0 dBm. To achieve a fully integrated CMOS system, the power amplifier also has to be realized in a CMOS technology, on the same die. An alternative is to place the 0-dBm transceiver and the power amplifier in the same package. In this case, one can still benefit from implementing the PA in CMOS, because of the cost reduction and the advantage of a single-technology solution. Another alternative is to use a low-cost technology to integrate the passives [9] and to combine this substrate with the CMOS die.

Figure 5. A 0.8-dB noise figure LNA in 0.25-µm CMOS.
In a digital CMOS technology, switching-mode power amplifiers are the favored candidates for wireless communications, due to their excellent efficiency. A simplified presentation of a switching amplifier is an NMOS transistor loaded with a parallel tank consisting of a power-supply inductor, a shunt capacitor, and a load resistor (Fig. 7). Because of the switching LC structure, the maximum voltage at the drain will be higher than the power supply, hence making the transistor sensitive to several failure mechanisms, such as oxide breakdown and hot electrons. The major advantage of a switching topology is that the maximum voltage is present only when the NMOS transistor is switched off. As a consequence, the hot-electron issue is simplified and the main cause of destruction is oxide breakdown, a failure mechanism that is easy to characterize. An important aspect is the integration of the passive components in a CMOS technology. It can be shown that a fixed relationship exists between the required inductor L_DC, the output power, the operating frequency, and the breakdown voltage [19]. When moving toward gigahertz frequencies and deep-submicrometer technologies, the value of the inductor L_DC becomes very small, as
can be seen in Fig. 8. The only way to realize such a small
inductance is to integrate it on the same die as the CMOS
transistors, since the parasitic inductance and capacitance
of bondwires will be too large relative to the required in-
ductance. The decoupling capacitance required for the PA
also needs to be integrated on the same die as the power
amplifier itself. The resonance frequency of an off-chip decoupling capacitor typically lies around 1 GHz for 10 pF of decoupling. In order to achieve either more decoupling or a higher frequency, the decoupling has to be integrated on the same die as the power amplifier, hence justifying the trend toward full integration. Finally, as can be seen in Fig. 4, the power amplifier needs to be driven by the upconversion mixer. Therefore, it is of utmost importance to achieve sufficient power gain in the amplifier. On the other hand, a high power gain will require more driver stages, hence lowering the overall efficiency of the transceiver. Therefore, many tradeoffs are involved in the design of a power amplifier and overall optimization is required [20,21]. In Fig. 9 a fully integrated power amplifier in a 0.25-µm CMOS technology is presented [8]. It can deliver up to 21 dBm, the required input power is only 10 dBm, and the measured power-added efficiency (PAE) is 25.8%.

Figure 7. Basic CMOS switching amplifier: an NMOS switch driven at 50% duty cycle, with supply inductor L_DC from V_DD, shunt capacitor C_SH, load resistor R_L, and a decoupling capacitor.

Figure 6. A fully integrated 0.25-µm CMOS phase-locked loop.

Figure 8. Value of L_DC (H) versus breakdown voltage (V) for a lossless Bluetooth power amplifier.
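The 1-GHz figure quoted above for a 10-pF off-chip decoupling capacitor can be checked with a short calculation. The Python sketch below is my own illustration: the series inductance of roughly 2.5 nH is an assumed value corresponding to a few millimeters of bondwire at the 1 nH/mm quoted in Section 4.2.

import math

C = 10e-12   # off-chip decoupling capacitance (F), as quoted above
L = 2.5e-9   # assumed series bondwire/package inductance (H)

f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"series resonance: {f_res / 1e9:.2f} GHz")   # roughly 1 GHz

Any larger capacitance or longer interconnect pushes this resonance even lower, which is why the decoupling for a gigahertz PA has to move onto the same die.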
5. SUBSTRATE COUPLING BETWEEN THE DIGITAL AND
ANALOG PARTS
In mixed-signal design, substrate coupling between the
analog and the digital parts is an important aspect that
needs special attention during design and layout. The is-
sue of substrate coupling is still under research, since a
simple and straightforward method is lacking. However,
the designer can take some precautions to minimize the
influence of the digital switching noise on the sensitive
analog blocks [29].
It is a common practice to define the ultimate voltage
reference, the ground, off chip. In high-speed applications,
various techniques such as separated digital and analog
powerlines are used to make an on-chip ground close to
the external reference. In this approach all voltages are
artificially referred to the external ground, with the on
chip ground tied as close to this external reference as pos-
sible. As integrated circuits are on chip, however, they are
naturally related to the on-chip ground, which will never
be exactly equal to the off-chip reference. It is therefore
more suitable to define the reference for a circuit on chip.
An adequate decoupling will keep the local circuit power
constant relative to this local reference. Signal distortion
due to limited power supply rejection ratio (PSRR) is thus
minimized. As a result of fast current variations in power
pads, two analog subcircuits may have a different local
reference, even when they share the same analog ground
pad [22,23]. Eventually, several local references may be defined for various subsystems and decoupled locally. The transmission of the signal to another subcircuit or to the outside world can be regarded as a separate problem when using a differential approach, even for voltages that are single-ended at first sight. A voltage is not an absolute entity, but a difference in electric potential relative to a predetermined reference. For example, the input of a transistor is the gate-to-source voltage difference plus the unwanted bulk-to-source voltage (Fig. 10a). To avoid signal disturbance due to reference variations from one place to another, voltages should be transferred concurrently with their reference through a dedicated path (Fig. 10b) rather than rely on a common ground. Similarly, to get the signal off chip, dedicated pins are used for the references.

Figure 9. Fully integrated 0.25-µm CMOS Bluetooth power amplifier.

Figure 10. The use of differential circuits in mixed-mode ICs: (a) two analog stages in which the transistor current depends on Vgs and the unwanted Vbs, with bondwire and on-chip path inductances (Lbond, Lpath) in the supply and ground connections; (b) signal, power, and reference transferred between stages over dedicated paths.
The relative placement of the bondwires requires some attention too. Although coupling between the wires is relatively small [24], it can be sufficient to transfer noise from a noisy path to an adjacent node. Bondwires of sensitive inputs should therefore never be close to noisy wires. Eventually they can be shielded by enclosing them with extra bondwires connected to a quiet ground. Even an optimally
decoupled analog circuit can be disturbed by substrate
noise injected at some other place on the chip [25]. Guard
rings can limit this effect when used correctly. Figure 11
summarizes the correct placement and biasing of the
rings. The first step in reducing substrate coupling is the
limitation of the injected noise. A guard ring close to the
digital transistors and biased with a dedicated pin will
provide a return path for injected currents (Fig. 11A). This
ring may not be biased with either of the on-chip grounds.
Biasing with the digital ground would inject extra noise
into the substrate, while using the analog ground would
couple substrate noise directly into it. To reduce the effect
of the current that reaches the bulk, a low-impedance re-
turn path is of utmost importance [26,27]. For heavily
doped substrates, the best result is obtained by mounting
the die with conductive epoxy to the leadframe using
several bondwires to connect it to the external ground
(Fig. 11B). Eventually, large substrate contacts with a
dedicated pin filling spare places on the chip can be an
alternative (Fig. 11C). In lightly doped substrates, where
most currents flow just underneath the chip surface, a
guard ring with dedicated pin surrounding the digital
block is an effective return path. In these substrates, phys-
ical separation of noise source and sensitive circuit is also
very effective as the resistance in the noise path continu-
ously increases with the distance. For heavily doped ma-
terial, a separation of more than 4 times the epilayer
thickness is useless as most of the disturbing current then
just passes through the low-ohmic bulk [26]. Substrate noise disturbs the analog circuits through their bulk-to-source voltage. To reduce this bulk effect, the bulk-to-source voltage variations of analog MOS transistors should be
minimized. The bulk must thus be tied locally to the an-
alog reference rather than to the (slightly different) exter-
nal one. This is achieved with bulk contacts close to the
analog transistors and biased with the local analog ground
(Fig. 11D), which results in an optimal output voltage rel-
ative to the local on-chip analog reference. A guard ring
with dedicated pin around the analog circuits eventually
enhances the noise immunity even further (Fig. 11E) [26],
but does not eliminate the need for good bulk contacts to the local analog ground.
6. CONCLUSIONS
The trend toward deep-submicrometer technologies has
enabled the use of CMOS for the integration of high-per-
formance analog functions. First, the analog architecture
needs to be adjusted to allow a fully integrated solution.
The next step is to place both the digital and the analog functions on a single die, which allows for an interaction between the analog architecture and the digital DSP, enabling one to achieve a higher performance of the analog part. This results in highly reconfigurable
mixed-circuit systems in the cheapest technology avail-
able. The trend toward deep-submicrometer technologies will allow achieving those goals as long as short-channel effects do not limit the performance in terms of linearity and intermodulation. Furthermore,
substrate noise coupling between the digital part of the
system and the sensitive analog blocks can degrade the
analog performance, hence demanding some precautions
regarding bulk contacts, guard rings, and packaging.
BIBLIOGRAPHY
1. A. Rofougaran et al., A 5-GHz direct-conversion CMOS transceiver utilizing automatic frequency control for the IEEE 802.11a Wireless LAN Standard, IEEE J. Solid-State Circ. 38(12):2209–2220 (Dec. 2003).
2. J. Rudell et al., A single-chip digitally calibrated 5.15–5.825-GHz 0.18-µm CMOS transceiver for 802.11a Wireless LAN, IEEE J. Solid-State Circ. 38(12):2221–2231 (Dec. 2003).
Figure 11. Placement and biasing of guard rings: (A) digital guard ring with a dedicated pin; (B) die attached with conductive epoxy and a substrate contact to the package; (C) large substrate contacts with a dedicated pin; (D) analog ground and bulk contacts; (E) analog guard ring.
3. M. Steyaert et al., A single chip CMOS transceiver for DCS1800 wireless communications, Proc. IEEE-ISSCC, Feb. 1998.
4. R. Yan et al., High performance 0.1 micron room temperature Si MOSFETs, Digest of Technical Papers, 1992 Symp. VLSI Technology, June 2–4, 1992.
5. J. Chen et al., A high speed SOI technology with 12 ps/18 ps gate delay operation at 1.5 V, Proc. IEEE Int. Electron Devices Meeting, San Francisco, CA, Dec. 13–16, 1992.
6. M. Steyaert and W. Sansen, Opamp design towards maximum gain-bandwidth, Proc. AACD Workshop, Delft, The Netherlands, March 1993, pp. 63–85.
7. I. Aoki et al., Fully integrated CMOS power amplifier design using the distributed active transformer architecture, IEEE J. Solid-State Circ. 37(3):371–383 (March 2002).
8. K. Mertens and M. Steyaert, A fully integrated class 1 Bluetooth 0.25-µm CMOS PA, Proc. ESSCIRC, Sept. 2002, pp. 219–222.
9. P. Lok, RF power amplifiers, Proc. GiRaFe Workshop, ISSCC, Feb. 2004.
10. C. H. Hull, R. R. Chu, and J. L. Tham, A direct-conversion receiver for 900-MHz (ISM band) spread-spectrum digital cordless telephone, Proc. ISSCC, San Francisco, Feb. 1996, pp. 344–345.
11. J. Sevenhans, A. Vanwelsenaers, J. Wenin, and J. Baro, An integrated Si bipolar transceiver for a zero-IF 900-MHz GSM digital mobile radio front-end of a hand portable phone, Proc. CICC, May 1991, pp. 771–774.
12. J. Sevenhans et al., An analog radio front-end chip set for a 1.9-GHz mobile radio telephone application, Proc. ISSCC, San Francisco, Feb. 1994, pp. 44–45.
13. J. Crols and M. Steyaert, A single-chip 900-MHz CMOS receiver front-end with a high performance low-IF topology, IEEE J. Solid-State Circ. 30(12):1483–1492 (Dec. 1995).
14. J. Janssens and M. Steyaert, CMOS noise performance under impedance matching constraints, Electron. Lett. 35(15):1278–1280 (July 1999).
15. P. Leroux, J. Janssens, and M. Steyaert, A 0.8-dB NF ESD-protected 9-mW CMOS LNA, Proc. ISSCC 2001, San Francisco, 2001, pp. 410–411.
16. J. Craninckx and M. Steyaert, Low-noise voltage controlled oscillators using enhanced LC-tanks, IEEE Trans. Circ. Syst. II: Analog Digital Signal Process. 42(12):794–804 (Dec. 1995).
17. A. Rofougaran, J. Rael, M. Rofougaran, and A. Abidi, A 900-MHz CMOS LC-oscillator with quadrature outputs, Proc. ISSCC, Feb. 1996, pp. 392–393.
18. B. De Muer and M. Steyaert, A 1.8-GHz CMOS sigma-delta fractional-N synthesizer, Proc. European Solid-State Circuits Conf., Villach, Austria, Sept. 17–21, 2001, pp. 44–47.
19. P. Reynaert, K. Mertens, and M. Steyaert, A state-space behavioral model for CMOS class E power amplifiers, IEEE Trans. Comput. Aided Design Integr. Circ. Syst. 22(2):132–138 (Feb. 2003).
20. K. Mertens, P. Reynaert, and M. Steyaert, Performance study of CMOS power amplifiers, Proc. European Solid-State Circuits Conf., Villach, Austria, Sept. 17–21, 2001, pp. 440–443.
21. P. Reynaert, K. Mertens, and M. Steyaert, Optimizing the dimensions of driver and power transistor in switching CMOS RF amplifiers, Analog Integr. Circ. Signal Process. 32(2):171–182 (Aug. 2002).
22. A. J. Rainal, Eliminating inductive noise of external chip interconnections, IEEE J. Solid-State Circ. 29:126–129 (Feb. 1994).
23. Y.-I. S. Shin, Maintain signal integrity at high digital speeds, Electron. Design, 77–90 (May 14, 1992).
24. L. J. Giacoletto, Electronics Designers' Handbook, McGraw-Hill, New York, 1977, pp. 3.42–3.49.
25. L. Gal, On-chip cross talk: the new signal integrity challenge, Proc. CICC, 1995, pp. 251–254.
26. D. K. Su, M. J. Loinaz, S. Masui, and B. A. Wooley, Experimental results and modeling techniques for substrate noise in mixed-signal integrated circuits, IEEE J. Solid-State Circ. 28:420–428 (April 1993).
27. R. Gharpurey and R. G. Meyer, Modeling and analysis of substrate coupling in integrated circuits, IEEE J. Solid-State Circ. 31:344–353 (March 1996).
28. M. Steyaert, J. Janssens, B. De Muer, M. Borremans, and N. Itoh, A 2-V CMOS cellular transceiver front-end, IEEE J. Solid-State Circ. 35(12):1895–1907 (Dec. 2000).
29. M. Ingels and M. Steyaert, Design strategies and decoupling techniques for reducing the effects of electrical interference in mixed-mode ICs, IEEE J. Solid-State Circ. 32(7):1136–1141 (July 1997).
MIXER CIRCUITS
KATSUJI KIMURA
NEC Corporation
A frequency mixer takes two input frequencies, a radiofrequency (RF) and a local-oscillator (LO) frequency, mixes them, and produces their difference frequency and sum frequency. The output signal is tuned by a filter, and one of
the two output frequencies is selected: the difference or the
sum. When the output difference frequency is an interme-
diate frequency (IF), the mixer is usually called a down-
conversion frequency mixer, and when the output sum
frequency is a high frequency, it is usually called an up-
conversion frequency mixer.
A frequency mixer is fundamentally a multiplier, be-
cause the analog multiplier outputs a signal proportional
to the product of the two input signals. Therefore, a fre-
quency mixer is represented by the symbol for the multi-
plier, as shown in Fig. 1.
The transfer function of a nonlinear element is ex-
pressed as
f(u) = a_0 + a_1 u + a_2 u^2 + a_3 u^3 + ... + a_n u^n    (1)

The product xy of the two input signals x and y can be derived from only the second-order term a_2 u^2, where u = x + y, and x and y are the two input signals. The product of the two input signals is produced by a nonlinear element, such as a diode or transistor. For example, single-diode mixers, singly balanced diode mixers, doubly balanced diode mixers, single-transistor mixers, singly balanced transistor mixers, and doubly balanced transistor mixers are usually used as frequency mixers.

Figure 1. A symbol for a frequency mixer, with RF and LO inputs and IF output. The symbol for a multiplier is used.
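The role of the second-order term can be verified numerically. The short Python sketch below is my own illustration with arbitrarily chosen frequencies: it applies a square-law characteristic to the sum of two sinusoids and lists the resulting spectral lines, which include the difference and sum frequencies.

import numpy as np

fs, f_rf, f_lo = 1000.0, 90.0, 70.0            # arbitrary illustrative values (Hz)
t = np.arange(0, 1.0, 1 / fs)
u = np.cos(2 * np.pi * f_rf * t) + np.cos(2 * np.pi * f_lo * t)   # u = x + y
out = 0.5 * u**2                               # second-order term a_2*u^2, a_2 = 0.5

spec = np.abs(np.fft.rfft(out)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
print(sorted(freqs[spec > 0.1]))               # 0, 20 (=90-70), 140, 160 (=90+70), 180

Besides the wanted difference and sum frequencies, the output contains a DC term and the second harmonics of both inputs, which is why the desired component must be selected by a filter.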
1. APPLICATION TO RECEIVERS
Mixers are used to shift the received signal to an intermediate frequency, where it can be amplified with good selectivity, high gain, and low noise, and finally demodulated in a receiver. Mixers have important applications in ordinary low-frequency and microwave receivers, where they are used to shift signals to frequencies where they can be amplified and demodulated most efficiently. Mixers can also be used as phase detectors and in demodulators, and must perform these functions while adding minimal noise and distortion.
Figure 2 shows, for example, the block diagram of a
VHF or UHF communication receiver. The receiver has a single-stage input amplifier; this preamp, which is usually called an RF amplifier, increases the strength of the received signal so that it exceeds the noise level of the following stage; therefore, this preamp is also called a low-noise amplifier (LNA). The first IF is relatively high (in a VHF or UHF receiver, the widely accepted standard has been 10.7 MHz); this high IF moves the image frequency well away from the RF, thus allowing the image to be rejected effectively by the input filter. The second conversion occurs after considerable amplification, and is used to select some particular signal within the input band and to shift it to the second IF. Because narrow bandwidths are generally easier to achieve at this lower frequency, the selectivity of the filter used before the detector is much better than that of the first IF. The frequency synthesizer generates the variable-frequency LO signal for the first mixer, and the fixed-frequency LO for the second mixer.
Figure 3 illustrates an ideal analog multiplier with two
sinusoids applied to it. The signal applied to the RF port
has a carrier frequency ω_s and a modulation waveform A(t). The other, the LO, is a pure, unmodulated sinusoid at frequency ω_p.
Applying some basic trigonometry, the output is found to consist of modulated components at the sum and difference frequencies. The sum frequency is rejected by the IF filter, leaving only the difference.
Fortunately, an ideal multiplier is not the only device that can realize a mixer. Any nonlinear device can perform the multiplying function. The use of a nonideal multiplier results in the generation of LO harmonics and in mixing products other than the desired one. The desired output frequency component must be filtered from the resulting chaos.
Another way to view the operation of a mixer is as a
switch. Indeed, in the past, diodes used in mixers have
been idealized as switches operated at the LO frequency.
Figure 4a shows a mixer modeled as a switch; the switch
interrupts the RF voltage waveform periodically at the LO
frequency. The IF voltage is the product of the RF voltage
and the switching waveform.
Another switching mixer is shown in Fig. 4b. Instead of
simply interrupting the current between the RF and IF
ports, the switch changes the polarity of the RF voltage
periodically. The advantage of this mixer over the one in
Fig. 4a is that the LO waveform has no DC component, so
the product of the RF voltage and switching waveform
does not include any voltage at the RF frequency. Thus,
even though no filters are used, the RF and LO ports of this mixer are inherently isolated. Doubly balanced mixers are realizations of the polarity-switching mixer.

Figure 2. Double-superheterodyne VHF or UHF communication receiver: input filter, first mixer with a variable first LO from the frequency synthesizer, first-IF filter, second mixer with a fixed second LO, second-IF filter, and demodulator.

Figure 3. A mixer is fundamentally a multiplier. The difference frequency in the IF results from the product of sinusoids: A(t) cos(ω_s t) · cos(ω_p t) = (A(t)/2){cos[(ω_s − ω_p)t] + cos[(ω_s + ω_p)t]}, and the filter keeps A(t) cos[(ω_s − ω_p)t].

Figure 4. Two switching mixers: (a) a simple switching mixer; (b) a polarity-switching mixer. The IF is the product of the switching waveform s(t) and the RF input, making these mixers a type of multiplier.
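The benefit of the polarity switch over the simple on/off switch can also be seen numerically. The Python sketch below is my own illustration with arbitrary frequencies: it multiplies an RF tone by the two switching waveforms of Fig. 4 and compares the RF feedthrough with the difference-frequency IF component.

import numpy as np

fs, f_rf, f_lo = 10000.0, 1100.0, 1000.0         # arbitrary illustrative values (Hz)
t = np.arange(0, 1.0, 1 / fs)
rf = np.cos(2 * np.pi * f_rf * t)
square = np.sign(np.cos(2 * np.pi * f_lo * t))   # +/-1 LO switching waveform

def level_at(x, f):
    spec = np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - f))]

simple = rf * (square > 0)     # Fig. 4a: s(t) takes the values 0 and 1
polarity = rf * square         # Fig. 4b: s(t) takes the values -1 and +1

for name, y in (("simple", simple), ("polarity", polarity)):
    print(name,
          " RF feedthrough:", round(level_at(y, f_rf), 3),
          " IF at f_RF - f_LO:", round(level_at(y, f_rf - f_lo), 3))

Because the ±1 waveform has no DC component, the polarity-switching mixer shows essentially zero output at the RF frequency itself, which is the inherent isolation described above.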
2. SEMICONDUCTOR DEVICES FOR MIXERS
Only a few devices satisfy the practical requirements of
mixer operation. Any device used as a mixer must have
strong nonlinearity, electrical properties that are uniform
between individual devices, low noise, low distortion, and
adequate frequency response. The primary devices used
for mixers are Schottky barrier diodes and eld-effect
transistors (FETs). Bipolar junction transistors (BJT) are
also used occasionally, primarily in Gilbert cell multiplier
circuits (see Fig. 6d), but because of their superior large-
signal-handling ability, higher frequency range, and low
noise, FET devices such as metaloxidesemiconductor
FETs (MOSFET), gallium arsenide (GaAs) metalsemi-
conductor FETs (MESFET), and high-electron-mobility
transistors (HEMTs) have been usually preferred.
The Schottky barrier diode is the dominant device used
in mixers. Because Schottky barrier diodes are inherently
capable of fast switching, have very small reactive para-
sitics, and do not need DC bias, they can be used in very
broadband mixers. Schottky barrier diode mixers usually
do not require matching circuits, so no tuning or adjust-
ment is needed.
Although mixers using Schottky barrier diodes always
exhibit conversion loss, transistor mixers are capable of
conversion gain. This helps simplify the architecture of a
system, often allowing the use of fewer amplifier stages
than necessary in diode mixer receivers.
Since the 1950s, bipolar transistors have dominated
mixer applications as single-transistor mixers in AM radio
and communication receivers. In particular, an analog
multiplier consisting of a doubly balanced differential amplifier, called the Gilbert cell, was invented in the 1960s.
Since then, the Gilbert cell mixer has been used as a
monolithic integrated circuit (IC) for AM radio receivers
and communication equipment. Silicon BJTs are used in
mixers because of their low cost and ease of implementa-
tion with monolithic ICs. These bipolar devices are used as
mixers when necessary for process compatibility, although
FETs generally provide better overall performance. Sili-
con BJTs are usually used in conventional single-device or
singly and doubly balanced mixers. Progress in the devel-
opment of heterojunction bipolar transistors (HBT), which
use a heterojunction for the emitter-to-base junction, may
bring about a resurgence in the use of bipolar devices as
mixers. HBTs are often used as analog multipliers oper-
ating at frequencies approaching the microwave range;
the most common form is a Gilbert cell. Silicon–germanium (SiGe) HBTs are a new technology that offers high
performance at costs close to that of silicon BJTs.
A variety of types of FETs are used in mixers. Since the
1960s, silicon MOSFETs (often dual-gate devices) have
dominated mixer applications in communication receivers
up to approximately 1GHz. At higher frequency, GaAs
MESFETs are often used. The LO and RF signals can be
applied to separate gates of dual-gate FETs, allowing good
RF-to-LO isolation to be achieved in a single-device mixer.
Dual-gate devices can be used to realize self-oscillating
mixers, in which a single device provides both the LO and
mixer functions.
Although silicon devices have distinctly lower trans-
conductance than GaAs, they are useful up to at least the
lower microwave frequencies. In spite of the inherent in-
feriority of silicon to GaAs, silicon MOSFETs do have some
advantages. The primary one is low cost, and the perfor-
mance of silicon MOSFET mixers is not significantly
worse than GaAs in the VHF and UHF range. The high
drain-to-source resistance of silicon MOSFETs gives them
higher voltage gain than GaAs devices; in many applica-
tions this is a distinct advantage. Additionally, the positive
threshold voltage (in an n-channel enhancement MOS-
FET), in comparison with the negative threshold voltage
of a GaAs FET, is very helpful in realizing low-voltage
circuits and circuits requiring only a single DC supply.
Mixers using enhancement-mode silicon MOSFETs often
do not require gate bias, and dual-gate MOSFETs offer
convenient LO-to-RF isolation when the LO and RF are
applied to different gates.
A MESFET is a junction FET having a Schottky barrier
gate. Although silicon MESFETs have been made, they
are now obsolete, and all modern MESFETs are fabricated
on GaAs. GaAs is decidedly superior to silicon for high-
frequency mixers because of its higher electron mobility
and saturation velocity. The gate length is usually less than 0.5 µm, and may be as short as 0.1 µm; this short gate length, in conjunction with the high electron mobility and
saturation velocity of GaAs, results in a high-frequency,
low-noise device.
HEMTs are used for mixers in the same way as con-
ventional GaAs FETs. Because the gate I–V characteristic
of a HEMT is generally more strongly nonlinear than that
of a MESFET, HEMT mixers usually have greater inter-
modulation (IM) distortion than FETs. However, the noise figure (NF) of an HEMT mixer usually is not significantly
lower than that of a GaAs FET. An HEMT is a junction
FET that uses a heterojunction (a junction between two
dissimilar semiconductors), instead of a simple epitaxial
layer, for the channel. The discontinuity of the bandgaps
of the materials used for the heterojunction creates a layer
of charge at the surface of the junction; the charge density
can be controlled by the gate voltage. Because the charge
in this layer has very high mobility, high-frequency oper-
ation and very low noise are possible. It is not unusual for
HEMTs to operate successfully as low-noise amplifiers above 100 GHz. HEMTs require specialized fabrication
techniques, such as molecular beam epitaxy, and thus
are very expensive to manufacture. HEMT heterojunc-
tions are invariably realized with III–V semiconductors;
AlGaAs and InGaAs are common.
2.1. Passive Diode Mixers
Figure 5 shows the most common form of the three diode
mixer types: a single-device diode mixer, a singly balanced
diode mixer, and a doubly balanced diode mixer. Conver-
sion loss of 68 dB is usually accepted in these passive
mixers.
2.2. Active Transistor Mixers
Active transistor mixers have several advantages, and
some disadvantages, in comparison with diode mixers.
Most significantly, an active mixer can achieve conversion
gain, while diode and other passive mixers always exhibit
loss. This allows a system using an active mixer to have
one or two fewer stages of amplification; the resulting simplification is especially valuable in circuits where small size and low cost are vital. A precise comparison of distortion in diode and active transistor mixers is difficult
to make because the comparison depends on the details of
the system. Generally, however, it is fair to say that dis-
tortion levels of well-designed active mixers are usually
comparable to those of diode mixers.
It is usually easy to achieve good conversion efficiency in active mixers; even so, active transistor mixers have gained a reputation for low performance. Nevertheless, achieving good overall performance in active transistor mixers is not difficult.
Because transistors cannot be reversed, as can diodes,
balanced transistor mixers invariably require an extra
hybrid at the IF. This can be avoided only by using a p-channel device instead of an n-channel device, or vice versa; however, this is possible only in silicon circuits, and
even then the characteristics of p- and n-channel devices
are likely to be significantly different.
2.2.1. Bipolar Junction Transistor Mixers. Figure 6 shows BJT mixers: a single-device BJT mixer, a singly balanced BJT mixer, a differential BJT mixer, and a doubly balanced BJT mixer.
In a single-device BJT mixer (Fig. 6a), the input signals are introduced into the device through the RF and LO diplexer, which consists of an RF bandpass filter, an LO bandpass filter, and two strips, λ/4 long at the center of the RF and LO frequency ranges; the square-law term of the device's characteristic provides the multiplication action. A single-device BJT mixer achieves a conversion gain of typically 20–24 dB, a noise figure of typically 4–5 dB (which is about 3 dB more than that of the device in an amplifier at the RF), and a third-order intercept point near 0 dBm. The IM product from this type of single-device BJT mixer usually depends on its collector current, but when the supplied collector-to-emitter voltage V_CE is not sufficient (typically below 1.2 V), the IM product increases as V_CE decreases.
A singly balanced BJT upconversion mixer (Fig. 6b) consists of two BJTs interconnected by a balun or hybrid. The two collectors are connected through a strip, λ/2 long at the center of the LO frequency range, for reducing the LO leakage. This upconversion mixer exhibits 16-dB conversion gain and 12-dB LO leakage suppression versus the wanted RF output level at 900 MHz.
A singly balanced BJT differential mixer (Fig. 6c) consists of an emitter-coupled differential pair. The RF is superposed on the tail current by AC coupling through capacitor C2, and the LO is applied to the upper transistor pair, where capacitive degeneration and AC coupling substantially reduce the gain at low frequencies. Note that the circuit following C2 is differential and hence much less susceptible to even-order distortion.
A multiplier circuit (Fig. 6d) conceived in 1967 by Barrie Gilbert and widely known as the Gilbert cell (although Gilbert himself was not responsible for his eponymy; indeed, he has noted that a prior-art search at the time found that essentially the same idea, used as a synchronous detector and not as a true mixer, had already been patented by H. Jones) is usually used as an RF mixer and sometimes as a microwave mixer.
Figure 5. The three most common diode mixer types: (a) single-device; (b) singly balanced; (c) doubly balanced (diode ring D1–D4).

Ignoring the basewidth modulation, the relationship between the collector current I_C and the base-to-emitter voltage V_BE for a BJT is

I_C = I_S exp(V_BE / V_T)    (2)

where V_T = kT/q is the thermal voltage, k is Boltzmann's constant, T is the absolute temperature in kelvin, and q is the charge of an electron. I_S is the saturation current for a graded-base transistor.
Assuming matched devices, the differential output voltage of the Gilbert cell is

V_IF = R_L I_EE tanh(V_RF / 2V_T) tanh(V_LO / 2V_T)    (3)

For small inputs,

V_IF ≈ (R_L I_EE / 4V_T^2) V_RF V_LO    (4)

The product V_RF V_LO is obtained by the Gilbert cell at small signals.
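Equations (3) and (4) can be checked numerically. The Python sketch below is my own illustration with assumed values for R_L, I_EE, and V_T; it compares the tanh–tanh characteristic with the ideal small-signal product for increasing drive levels.

import numpy as np

VT = 0.02585            # thermal voltage kT/q near room temperature (V)
RL, IEE = 500.0, 2e-3   # assumed load resistance (ohm) and tail current (A)

def vif_exact(v_rf, v_lo):                      # Eq. (3)
    return RL * IEE * np.tanh(v_rf / (2 * VT)) * np.tanh(v_lo / (2 * VT))

def vif_small(v_rf, v_lo):                      # Eq. (4)
    return RL * IEE / (4 * VT**2) * v_rf * v_lo

for v in (1e-3, 5e-3, 20e-3, 50e-3):            # drive both ports equally
    exact, approx = vif_exact(v, v), vif_small(v, v)
    print(f"{1e3 * v:5.1f} mV  exact {1e3 * exact:8.3f} mV  "
          f"approx {1e3 * approx:8.3f} mV  error {100 * (approx - exact) / exact:5.1f}%")

For millivolt-level drives the two expressions agree to better than a percent, while at tens of millivolts the tanh compression makes the ideal-multiplier approximation of Eq. (4) noticeably optimistic.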
2.2.2. FET Mixers. Figure 7 shows FET mixers: a sin-
gle-device FET mixer, a dual-gate FET mixer, a singly
balanced FET mixer, a differential FET mixer, and a dou-
bly balanced FET mixer.
In a single-device FET mixer (Fig. 7a), the RF–LO diplexer must combine the RF and LO and also provide matching between the FET's gate and both ports. The IF filter must provide an appropriate impedance to the drain of the FET at the IF and must short-circuit the drain at the RF and especially at the LO frequency and its harmonics.
The configuration of a dual-gate mixer (Fig. 7b) provides the best performance in most receiver applications. In this circuit, the LO is connected to the gate closest to the drain (gate 2), while the RF is connected to the gate closest to the source (gate 1). An IF bypass filter is used at gate 2, and an LO–RF filter is used at the drain. A dual-gate mixer is usually realized as two single-gate FETs in a cascode connection.
A singly balanced FET mixer (Fig. 7c) uses a transformer hybrid for the LO and RF; any appropriate type of hybrid can be used. A matching circuit is needed at the gates of both FETs. The IF filters provide the requisite short circuits to the drains at the LO and RF frequencies, and additionally provide IF load impedance transformations.
Figure 6. BJT mixers: (a) a single-device BJT mixer; (b) a singly balanced BJT upconversion mixer; (c) a singly balanced BJT differential mixer; (d) a doubly balanced BJT mixer consisting of a Gilbert cell.
The singly balanced mixer of Fig. 7c is effectively two sin-
gle-device mixers interconnected by hybrids.
In a differential FET mixer (Fig. 7d), the RF is applied to the lower FET, and the LO is applied through a balun or hybrid to the upper FETs. This mixer operates as an alternating switch, connecting the drain of the lower FET alternately to the inputs of the IF balun. An LO matching circuit may be needed. Because the RF and LO circuits are separate, the gates of the upper FETs can be matched at the LO frequency, and there is no tradeoff between effective LO and RF matching. Similarly, the lower FET can be matched effectively at the RF. An IF filter is necessary to reject LO current.
A doubly balanced FET mixer (Fig. 7e) is frequently
used as an RF or microwave mixer. Like many doubly
balanced mixers, this mixer consists of two of the singly
balanced mixers shown in Fig. 7d. Each half of the mixer
operates in the same manner as that of Fig. 7d. The in-
terconnection of the outputs, however, causes the drains of
the upper four FETs to be virtual grounds for both LO and
RF, as well as for even-order spurious responses and IM
products.
3. IMAGE-REJECTION MIXERS
The image-rejection mixer (Fig. 8) is realized as the inter-
connection of a pair of balanced mixers. It is especially
useful for applications where the image and RF bands
overlap, or the image is too close to the RF to be rejected by
a filter. The LO ports of the balanced mixers are driven in phase, but the signals applied to the RF ports have a 90° phase difference. A 90° IF hybrid is used to separate the RF and image bands. A full discussion of the operation of such mixers is a little complicated.
The most difficult part of the design of an image-rejection mixer is the IF hybrid. If the IF is fairly high, a conventional RF or microwave hybrid can be used. However, if the mixer requires a baseband IF, the designer is placed in the problematical position of trying to create a Hilbert-transforming filter, a theoretical impossibility. Fortunately, it is possible to approximate the operation of such a filter over a limited bandwidth.

Figure 7. FET mixers: (a) a single-device FET mixer; (b) a dual-gate FET mixer; (c) a singly balanced FET mixer; (d) a differential mixer; (e) a doubly balanced mixer.
4. MIXING
A mixer is fundamentally a multiplier. An ideal mixer multiplies a signal by a sinusoid, shifting it to both a higher and a lower frequency, and selects one of the resulting sidebands. A modulated narrowband signal, usually called the RF signal, represented by

S_RF(t) = a(t) sin(ω_s t) + b(t) cos(ω_s t)    (5)

is multiplied by the LO signal function

f_LO(t) = cos(ω_p t)    (6)

to obtain the IF signal

S_IF(t) = (1/2) a(t) {sin[(ω_s + ω_p)t] + sin[(ω_s − ω_p)t]} + (1/2) b(t) {cos[(ω_s + ω_p)t] + cos[(ω_s − ω_p)t]}    (7)

In the ideal mixer, two sinusoidal IF components, called mixing products, result from each sinusoid in s(t). In receivers, the difference-frequency component is usually desired, and the sum-frequency component is rejected by filters.
Even if the LO voltage applied to the mixer's LO port is
a clean sinusoid, the nonlinearities of the mixing device
distort it, causing the LO function to have harmonics.
Those nonlinearities can also distort the RF signal,
resulting in RF harmonics. The IF is, in general, the
combination of all possible mixing products of the RF
and LO harmonics. Filters are usually used to select the
appropriate response and eliminate the other (so-called
spurious) responses.
Every mixer, even an ideal one, has a second RF that can create a response at the IF. This is a type of spurious response, and is called the image; it occurs at the frequency 2f_LO − f_RF. For example, if a mixer is designed to convert 10 GHz to 1 GHz with a 9-GHz LO, the mixer will also convert 8 GHz to 1 GHz at the same LO frequency.
Although none of the types of mixers we shall examine
inherently reject images, it is possible to create combina-
tions of mixers and hybrids that do reject the image
response.
It is important to note that the process of frequency
shifting, which is the fundamental purpose of a mixer, is a
linear phenomenon. Although nonlinear devices are in-
variably used for realizing mixers, there is nothing in the
process of frequency shifting that requires nonlinearity.
Distortion and spurious response other than the sum and
difference frequency, though often severe in mixers, are
not fundamentally required by the frequency-shifting
operation that a mixer performs.
4.1. Conversion Efficiency
Mixers using Schottky barrier diodes are passive components and consequently exhibit conversion loss. This loss has a number of consequences: the greater the loss, the higher the noise of the system and the more amplification is needed. High loss contributes indirectly to distortion because of the high signal levels that result from the additional preamplifier gain required to compensate for this loss. It also contributes to the cost of the system, since the necessary low-noise amplifier stages are usually expensive.
Mixers using active devices often (but not always) exhibit conversion gain. The conversion gain (CG) is defined as

CG = (IF power available at the mixer output) / (RF power available to the mixer input)    (8)
High mixer gain is not necessarily desirable, because it
reduces stability margins and can increase distortion.
Usually, a mixer gain of unity, or at most a few decibels,
is best.
4.2. Noise
In a passive mixer whose image response has been eliminated by filters, the noise figure is usually equal to, or only a few tenths of a decibel above, the conversion loss. In
this sense, the mixer behaves as if it were an attenuator
having a temperature equal to or slightly above the
ambient.
In active mixers, the noise figure cannot be related easily to the conversion efficiency; in general, it cannot even be related qualitatively to the device's noise figure when used as an amplifier. The noise figure (NF) is defined by the equation

NF = (input signal-to-noise power ratio) / (output signal-to-noise power ratio)    (9)
Figure 8. Image-rejection mixer: two balanced mixers with the LO applied in phase, the RF applied through a 90° hybrid, and the IF outputs combined in a 90° hybrid to separate the USB and LSB responses.

The sensitivity of a receiver is usually limited by its internally generated noise. However, other phenomena sometimes affect the performance of a mixer front end more severely than does noise. One of these is the AM
noise, or amplitude noise, from the LO source, which is
injected into the mixer along with the LO signal. This
noise may be especially severe in a single-ended mixer
(balanced mixers reject AM LO noise to some degree)
or when the LO signal is generated at a low level and
amplied.
Phase noise is also a concern in systems using mixers.
LO sources always have a certain amount of phase jitter,
or phase noise, which is transferred degree for degree via
the mixer to the received signal. This noise may be very
serious in communications systems using either digital or
analog phase modulation. Spurious signals may also be
present, along with the desired LO signal, especially if a
phase-locked-loop frequency synthesizer is used in the LO
source. Spurious signals are usually phase modulation
sidebands of the LO signal, and, like phase noise, are
transferred to the received signal. Finally, the mixer may
generate a wide variety of intermodulation products, which allow input signals, even if they are not within the input passband, to generate spurious output at the IF. These problems must be circumvented if a successful
receiver design is to be achieved.
An ideal amplifier would amplify the incoming signal and incoming noise equally and would introduce no additional noise. From Eq. (9), such an amplifier would have a noise figure equal to unity (0 dB).
The noise figure of several cascaded amplifier stages is

NF = NF_1 + (NF_2 − 1)/G_1 + (NF_3 − 1)/(G_1 G_2) + ... + (NF_n − 1)/(G_1 G_2 ... G_{n−1})    (10)

where NF is the total noise figure, NF_n is the noise figure of the nth stage, and G_n is the available gain of the nth stage.
From Eq. (10), the gain and noise figure of the first stage of a cascaded chain will largely determine the total noise figure. For example, the system noise figure (on a linear scale) for the downconverter shown in Fig. 9 is

NF = 1/L_RF + (NF_LNA − 1)/L_RF + (1/L_I − 1)/(L_RF G_LNA) + (NF_M − 1)/(L_RF G_LNA L_I)
   = (1/L_RF) [ NF_LNA + (NF_M − L_I)/(G_LNA L_I) ]    (11)

where L_RF and L_I are the insertion losses of the RF filter and the image-rejection filter, respectively, NF_LNA and NF_M are the noise figures of the LNA and the mixer, respectively, and G_LNA is the power gain of the LNA. This equation assumes that the noise figures of the filters are the same as their insertion losses.
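As a worked example of Eqs. (10) and (11), the sketch below cascades the front end of Fig. 9; the stage noise figures, gains, and losses are assumed values chosen only for illustration, and each filter is modeled as an attenuator whose noise figure equals its insertion loss and whose gain is the reciprocal of that loss.

import math

def db_to_lin(db):
    return 10 ** (db / 10)

# (noise figure dB, gain dB) per stage: RF filter, LNA, image filter, mixer
stages = [(1.0, -1.0),    # RF filter: 1-dB insertion loss
          (1.5, 15.0),    # LNA
          (2.0, -2.0),    # image-rejection filter: 2-dB insertion loss
          (8.0, -6.0)]    # mixer: 8-dB NF, 6-dB conversion loss

nf_total, gain_so_far = 0.0, 1.0
for i, (nf_db, g_db) in enumerate(stages):
    nf, g = db_to_lin(nf_db), db_to_lin(g_db)
    nf_total += nf if i == 0 else (nf - 1) / gain_so_far   # Friis formula, Eq. (10)
    gain_so_far *= g

print(f"system noise figure: {10 * math.log10(nf_total):.2f} dB")   # about 3.3 dB

With these assumed numbers the cascade noise figure is dominated by the RF filter loss and the LNA, in line with the remark above that the first stages largely determine the total.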
4.3. Bandwidth
The bandwidth of a diode mixer is limited by the external
circuit, especially by the hybrids or baluns used to couple
the RF and LO signals to the diodes. In active mixers,
bandwidth can be limited either by the device or by hy-
brids or matching circuits that constitute the external cir-
cuit; much the same factors are involved in establishing active mixers' bandwidths as amplifiers' bandwidths.
4.4. Distortion
It is a truism that everything is nonlinear to some degree
and generates distortion. Unlike ampliers or passive
components, however, mixers often employ strongly non-
linear devices to provide mixing. Because of these strong
nonlinearities, mixers generate high levels of distortion.
A mixer is usually the dominant distortion-generating
component in a receiver.
Distortion in mixers, as with other components, is man-
ifested as IM distortion (IMD), which involves mixing be-
tween multiple RF tones and harmonics of those tones. If
two RF excitations f_1 and f_2 are applied to a mixer, the nonlinearities in the mixer will generate a number of new frequencies, resulting in the IF spectrum shown in Fig. 10.
Figure 10 shows all intermodulation products up to third
order; by nth order, we mean all n-fold combinations of the
excitation tones (not including the LO frequency). In gen-
eral, an nth-order nonlinearity gives rise to distortion
products of nth (and lower) order.
Figure 10. IF spectrum of intermodulation products up to third order. The frequencies f_1 and f_2 are the excitation.

Figure 9. RF front end: RF filter, LNA, image-rejection filter, mixer driven by the LO, IF filter, and IF stage.

An important property of IMD is that the level of the nth-order IM product changes by n decibels for every decibel of change in the levels of the RF excitations. The extrapolated point at which the excitation and IMD levels are equal is called the nth-order IM intercept point, abbreviated IP_n. This dependence is illustrated in Fig. 11. In most components, the intercept point is defined as an output power; in mixers it is traditionally an input power. Given the intercept point IP_n and the input power level in decibels, the IM input level P_I in decibels can be found from

P_I = (1/n) P_1 + (1 − 1/n) IP_n    (12)

where P_1 is the input level of each of the linear RF tones (which are assumed to be equal) in decibels. By convention, P_1 and P_I are the input powers of a single frequency component where the linear output level and the level of the nth-order IM product are equal; they are not the total power of all components. For example, P_1 is the threshold level for the receiver. The fluctuation of the IMD level is rather small in spite of the fluctuations of P_1 and IP_n.
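A short worked example of Eq. (12) as reconstructed above may help; the intercept point and reference level below are assumed illustrative values, not data from the text. The function returns the input level at which the nth-order IM products reach the reference level, and the n-dB-per-dB slope stated earlier serves as a consistency check.

def im_input_level(p1_dbm, ipn_dbm, n):
    """Eq. (12): P_I = (1/n)*P_1 + (1 - 1/n)*IP_n (all levels in dBm)."""
    return p1_dbm / n + (1 - 1 / n) * ipn_dbm

# Third-order example: input intercept IP3 = +10 dBm, reference level -100 dBm
p = im_input_level(-100.0, 10.0, 3)
print(p)                      # about -26.7 dBm

# Consistency check with the n-dB-per-dB rule: IM level = n*P - (n - 1)*IP_n
print(3 * p - 2 * 10.0)       # -100.0 dBm, i.e., exactly the reference level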
4.5. Spurious Responses
A mixer converts an RF signal to an IF signal. The most
common transformation is
f_IF = f_RF − f_LO    (13)

although others are frequently used. The discussion of frequency mixing indicated that harmonics of both the RF and LO could mix. The resulting set of frequencies is

f_IF = m f_RF + n f_LO    (14)

where m and n are integers. If an RF signal creates an in-band IF response other than the desired one, it is called a spurious response. Usually the RF, IF, and LO frequency ranges are selected carefully to avoid spurious responses, and filters are used to reject out-of-band RF signals that may cause in-band IF responses. IF filters are used to select only the desired response.
Many types of balanced mixers reject certain spurious
responses where m or n is even. Most singly balanced
mixers reject some, but not all, products where m or n (or
both) are even.
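Equation (14) lends itself to a simple spur search. The sketch below is my own illustration with a hypothetical frequency plan (all values in MHz are assumptions); it enumerates low-order (m, n) combinations and reports which products of a given input frequency fall inside the IF passband.

f_lo = 1900.0                        # assumed LO frequency (MHz)
f_if_center, f_if_bw = 200.0, 10.0   # assumed IF passband: 195..205 MHz
order_max = 4                        # search |m| + |n| up to this order

def spurs(f_rf):
    hits = []
    for m in range(-order_max, order_max + 1):
        for n in range(-order_max, order_max + 1):
            if (m == 0 and n == 0) or abs(m) + abs(n) > order_max:
                continue
            f = m * f_rf + n * f_lo                      # Eq. (14)
            if abs(f - f_if_center) <= f_if_bw / 2:
                hits.append((m, n, f))
    return hits

# Desired RF plus two interferers that reach the IF through higher-order products
for f_rf in (2100.0, 1050.0, 700.0):
    print(f_rf, spurs(f_rf))

For the desired signal at 2100 MHz only the (1, −1) product lands at the IF, whereas 1050 MHz and 700 MHz reach it through the (2, −1) and (3, −1) products, which is exactly the kind of response the input filters are meant to reject.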
4.6. Harmonic Mixer
A mixer is sensitive to many frequencies besides those at
which it is designed to operate. The best known of these is
the image frequency, which is found at the LO sideband opposite the input (RF) frequency. The mixer is also sensitive to similar sidebands on either side of each LO harmonic. These responses are usually undesired; the exception is the harmonic mixer, which is designed to operate at one or more of these sidebands.
When a small-signal voltage is applied to the pumped diode at any one of these frequencies, currents and voltages are generated in the junction at all other sideband frequencies. These frequencies are called the small-signal mixing frequencies ω_n and are given by the relation

ω_n = ω_0 + n ω_p    (15)

where ω_p is the LO frequency and

n = ..., −3, −2, −1, 0, 1, 2, 3, ...    (16)

These frequencies are shown in Fig. 12. The frequencies are separated from each LO harmonic by ω_0, the difference between the LO frequency and the RF.
5. MODULATION AND FREQUENCY TRANSLATION
5.1. Modulation
Modulation is the process by which the information con-
tent of an audio, video, or data signal is transferred to an
RF carrier before transmission. Commonly, the signal be-
ing modulated is a sine wave of constant amplitude and is
referred to as the carrier. The signal that varies some pa-
rameter of the carrier is known as the modulation signal.
The parameters of a sine wave that may be varied are the
amplitude, the frequency, and the phase. Other types of
modulation may be applied to special signals, such as
pulsewidth and pulse position modulation of recurrent
pulses. The inverse process, recovering the information from an RF signal, is called demodulation or detection. In
its simpler forms a modulator may cause some character-
istic of an RF signal to vary in direct proportion to the
modulating waveform: this is termed analog modulation.
More complex modulators digitize and encode the modu-
lating signal before modulation. For many applications
digital modulation is preferred to analog modulation.
A complete communication system (Fig. 13) consists of
an information source, an RF source, a modulator, an RF
channel (including both transmitter and receiver RF stag-
es, the antennas, the transmission path, etc.), a demodu-
lator, and an information user. The system works if
the information user receives the source information
with acceptable reliability. The designer's goal is to cre-
ate a low-cost working system that complies with the legal
restrictions on such things as transmitter power, antenna
height, and signal bandwidth. Since modulation/demodulation schemes differ in cost, bandwidth, interference rejection, power consumption, and so forth, the choice of the modulation type is an important part of communication system design.

Figure 11. The output level of each nth-order IM product varies n decibels for every decibel change in input level. The intercept point is the extrapolated point at which the curves intersect.
Figure 12. Small-signal mixing frequencies ω_n and LO harmonics nω_p. Voltage and current components exist in the diode at these frequencies.
Modulation, demodulation (detection), and heterodyne action are very closely related processes. Each process involves generating the sum and/or difference frequencies of two or more sinusoids by causing one signal to vary as a direct function (product) of the other signal or signals. The multiplication of one signal by another can only be accomplished in a nonlinear device. This is readily seen by considering any network where the output signal is some function of the input signal e_1, for example

e_0 = f(e_1)    (17)

In any perfectly linear network, this requires that

e_0 = k e_1    (18)

and, assuming two different input signals,

e_0 = k(E_a cos ω_a t + E_b cos ω_b t)    (19)

where k is a constant. In this case the output signal contains only the two input-signal frequencies. However, if the output is a nonlinear function of the input, it can, in general, be represented by a series expansion of the input signal. For example, let

e_0 = k_1 e_1 + k_2 e_1^2 + k_3 e_1^3 + ... + k_n e_1^n + ...    (20)

When e_1 contains two frequencies, e_0 will contain the input frequencies and their harmonics plus the products of these frequencies. These frequency products can be expressed as sum and difference frequencies. Thus, all modulators, detectors, and mixers are of necessity nonlinear devices. The principal distinction between these devices is the frequency differences between the input signals and the desired output signal or signals. For example, amplitude modulation in general involves the multiplication of a high-frequency carrier by low-frequency modulation signals to produce sideband signals near the carrier frequency. In a mixer, two high-frequency signals are multiplied to produce an output signal at a frequency that is the difference between the input-signal frequencies. In a detector for amplitude modulation, the carrier is multiplied by the sideband signals to produce their difference frequencies at the output.
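A quick numerical illustration of this point is sketched below, with assumed tone frequencies and a truncated series of the form of Eq. (20): the square-law term produces the sum and difference products that a perfectly linear network cannot create.

import numpy as np

fs = 10_000.0                        # assumed sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)
fa, fb = 900.0, 700.0                # assumed input frequencies, Hz
e1 = np.cos(2 * np.pi * fa * t) + np.cos(2 * np.pi * fb * t)

e0 = 1.0 * e1 + 0.5 * e1**2          # truncated Eq. (20): k1*e1 + k2*e1^2

spectrum = np.abs(np.fft.rfft(e0)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
print(freqs[spectrum > 0.05])        # dc, 200 (difference), 700, 900, 1400, 1600 (sum), 1800 Hz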
To understand the modulation process, it is helpful to visualize a modulator as a black box (Fig. 14) with two inputs and one output, connected to a carrier oscillator producing a sinusoidal voltage with constant amplitude and frequency f_RF. The output is a modulated waveform

F(t) = A(t) cos[ω_s t + Θ(t)] = A(t) cos Φ(t)    (21)

whose amplitude A(t) or angle Φ(t), or both, are controlled by v_m(t). In amplitude modulation (AM) the carrier envelope A(t) is varied while Θ(t) remains constant; in angle modulation A(t) is fixed and the modulating signal controls Φ(t). Angle modulation may be either frequency modulation (FM) or phase modulation (PM), depending upon the relationship between the angle Φ(t) and the modulation signal.
Although the waveform of Eq. (21) might be called a modulated cosine wave, it is not a single-frequency sinusoid when modulation is present. If either A(t) or Θ(t) varies with time, the spectrum of F(t) will occupy a bandwidth determined by both the modulating signal and the type of modulation used.
5.1.1. Amplitude Modulation. Amplitude modulation in the form of ON-OFF keying of radiotelegraph transmitters is the oldest type of modulation. Today, amplitude modulation is widely used for those analog voice applications that require simple receivers (e.g., commercial broadcasting) and require narrow bandwidths.
In amplitude modulation the instantaneous amplitude of the carrier is varied in proportion to the modulating signal. The modulating signal may be a single frequency, or, more often, it may consist of many frequencies of various amplitudes and phases, e.g., the signals constituting speech. For a carrier modulated by a single-frequency sine wave of constant amplitude, the instantaneous signal e(t) is given by

e(t) = E(1 + m cos ω_m t) cos(ω_c t + φ)    (22)

where E is the peak amplitude of the unmodulated carrier, m is the modulation factor as defined below, ω_m is the frequency of the modulating voltage (radians per second), ω_c is the carrier frequency (radians per second), and φ is the phase angle of the carrier (radians).

Figure 13. Conceptual diagram of a communication system.
Figure 14. Black-box view of a modulator.
The instantaneous carrier amplitude is plotted as a function of time in Fig. 15. The modulation factor m is defined for asymmetrical modulation in the following manner:

m = (E_max - E)/E    (upward or positive modulation)    (23)

m = (E - E_min)/E    (downward or negative modulation)    (24)

The maximum downward modulation factor, 1.0, is reached when the modulation peak reduces the instantaneous carrier envelope to zero. The upward modulation factor is unlimited.
The modulated carrier described by Eq. (22) can be rewritten as follows:

e(t) = E(1 + m cos ω_m t) cos(ω_c t + φ)
     = E cos(ω_c t + φ) + (mE/2) cos[(ω_c + ω_m)t + φ] + (mE/2) cos[(ω_c - ω_m)t + φ]    (25)

Thus, the amplitude modulation of a carrier by a cosine wave has the effect of adding two new sinusoidal signals displaced in frequency from the carrier by the modulating frequency. The spectrum of the modulated carrier is shown in Fig. 16.
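The sideband bookkeeping of Eq. (25) can be checked in a few lines of Python; the carrier amplitude, modulation factor, and frequencies below are assumed example values.

import numpy as np

E, m = 1.0, 0.5                       # assumed carrier amplitude and modulation factor
f_c, f_m = 1000.0, 100.0              # assumed carrier and modulating frequencies, Hz
fs = 10_000.0
t = np.arange(0, 1.0, 1.0 / fs)

e = E * (1 + m * np.cos(2 * np.pi * f_m * t)) * np.cos(2 * np.pi * f_c * t)   # Eq. (22)

spec = np.abs(np.fft.rfft(e)) * 2.0 / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
for f in (f_c - f_m, f_c, f_c + f_m):
    print(f"{f:6.0f} Hz : {spec[int(f)]:.3f}")   # expect mE/2 = 0.25, E = 1.0, mE/2 = 0.25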
5.1.2. Angle Modulation. Information can be transmitted on a carrier by varying any of the parameters of the sinusoid in accordance with the modulating voltage. Thus, a carrier is described by

e(t) = E_c cos θ    (26)

where θ = ω_c t + φ.
This carrier can be made to convey information by modulating the peak amplitude E_c or by varying the instantaneous phase angle θ of the carrier; the latter type of modulation is known as angle modulation. The two types of angle modulation that have practical application are phase modulation (PM) and frequency modulation (FM).
In phase modulation, the instantaneous phase angle θ of the carrier is varied by the amplitude of the modulating signal. The principal application of phase modulation is in the utilization of modified phase modulators in systems that transmit frequency modulation. The expression for a carrier phase-modulated by a single sinusoid is given by

e(t) = E_c cos(ω_c t + φ + Δφ cos ω_m t)    (27)

where Δφ is the peak value of phase variation introduced by modulation and is called the phase deviation, and ω_m is the modulation frequency (radians per second).
In frequency modulation, the instantaneous frequency of the carrier, that is, the time derivative of the phase angle θ, is made to vary in accordance with the amplitude of the modulating signal. Thus

f = (1/2π) dθ/dt    (28)

When the carrier is frequency-modulated by a single sinusoid,

f = f_RF + Δf cos ω_m t    (29)

where Δf is the peak frequency deviation introduced by modulation. The instantaneous total phase angle θ is given by

θ = 2π ∫ f dt + θ_0    (30)

θ = 2π f_RF t + (Δf/f_m) sin 2π f_m t + θ_0    (31)

Figure 15. Amplitude-modulated carrier.
Figure 16. Frequency spectrum of an amplitude-modulated carrier: (a) carrier modulated by a sinusoid of frequency ω_m; (b) carrier modulated by a complex signal composed of several sinusoids.
The complete expression for a carrier that is frequency-modulated by a single sinusoid is

e(t) = E_c cos[ω_c t + (Δf/f_m) sin 2π f_m t + θ_0]    (32)

The maximum frequency difference between the modulated carrier and the unmodulated carrier is the frequency deviation Δf. The ratio of Δf to the modulation frequency f_m is known as the modulation index or the deviation ratio. The degree of modulation in an FM system is usually defined as the ratio of Δf to the maximum frequency deviation of which the system is capable. The degree of modulation in an FM system is therefore not a property of the signal itself.
In digital wireless communication systems, Gaussian-filtered minimum-shift keying (GMSK) is the most popular, and four-level frequency-shift keying (4-FSK) and π/4-shifted differentially encoded quadriphase (or quadrature) phase-shift keying (π/4-DQPSK) are also used. GMSK and 4-FSK are both frequency modulation, but π/4-DQPSK is phase modulation.
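As an illustration of Eqs. (29)-(32), the sketch below synthesizes a single-tone FM carrier and recovers its instantaneous frequency numerically; all parameter values are assumed for the example.

import numpy as np

fs = 100_000.0
t = np.arange(0, 0.1, 1.0 / fs)
f_rf, f_m, delta_f = 5000.0, 100.0, 400.0   # assumed carrier, modulating frequency, peak deviation

theta = 2 * np.pi * f_rf * t + (delta_f / f_m) * np.sin(2 * np.pi * f_m * t)  # Eq. (31), theta_0 = 0
e = np.cos(theta)                                                             # Eq. (32) with E_c = 1

f_inst = np.gradient(theta, t) / (2 * np.pi)   # instantaneous frequency, Eq. (28)
print(f_inst.min(), f_inst.max())              # roughly f_rf - delta_f and f_rf + delta_f
print("modulation index =", delta_f / f_m)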
5.1.3. Pulse Modulation. In pulse-modulated systems,
one or more parameters of the pulse are varied in accor-
dance with a modulating signal to transmit the desired
information. The modulated pulse train may in turn be
used to modulate a carrier in either angle or amplitude.
Pulse modulation provides a method of time duplexing,
since the entire modulation information of a signal chan-
nel can be contained in a single pulsetrain having a low
duty cycle, i.e., ratio of pulse width to interpulse period,
and therefore the time interval between successive pulses
of a particular channel can be used to transmit pulse in-
formation from other channels.
Pulse modulation systems can be divided into two basic
types: pulse modulation proper, where the pulse parame-
ter which is varied in accordance with the modulating
signal is a continuous function of the modulating signal;
and quantized pulse modulation, where the continuous
information to be transmitted is approximated by a finite
number of discrete values, one of which is transmitted by
each single pulse or group of pulses. The two methods are
illustrated in Fig. 17. In quantized pulse modulation sys-
tems, the input function can be approximated with arbi-
trary accuracy by increasing the number of discrete
values available to describe the input function. An exam-
ple of a quantized pulse modulation system is shown in
Fig. 18; the information is transmitted in pulse code
groups, the sequence of pulses sent each period indicat-
ing a discrete value of the modulating signal at that in-
stant. Typically, the pulse group might employ a binary
number code, the presence of each pulse in the group
indicating a 1 or 0 in the binary representation of the
modulating signal.
The principal methods for transmitting information by
means of unquantized pulse modulation are pulse-ampli-
tude modulation (PAM; see Fig. 19), pulsewidth modula-
tion (PWM), and pulse position modulation (PPM).
Figure 17. Input-output relationships of quantized and unquantized pulse modulation systems: (a) unquantized modulation system; (b) quantized modulation system.
Figure 18. Example of a quantized pulse modulation system.

5.2. Frequency Translation
The most common form of radio receiver is the superheterodyne configuration shown in Fig. 20a. The signal input, with a frequency ω_s, is usually first amplified in a tunable bandpass amplifier, called the RF amplifier, and is then fed into a circuit called the mixer along with an oscillator signal, which is local to the receiver, having a frequency ω_p. The LO is also tunable and is ganged with the input bandpass amplifier so that the difference between the input signal frequency and that of the LO is constant.
In operation, the mixer must achieve analog multiplication. With multiplication, sum and difference frequency components at ω_s ± ω_p are produced at the output of the mixer. Usually, the sum frequency is rejected by sharply tuned circuits and the difference frequency component is subsequently amplified in a fixed-tuned bandpass amplifier. The difference frequency is called the intermediate frequency (IF), and the fixed-tuned amplifier is called the IF amplifier. The advantage of this superheterodyne configuration is that most amplification and outband rejection occurs with fixed-tuned circuits, which can be optimized for gain level and rejection. Another advantage is that the fixed-tuned amplifier can provide a voltage-controlled gain to achieve automatic gain control (AGC) with input signal level. In high-performance and/or small-size receivers, the filtering in the IF amplifier is obtained with electromechanical crystal filters.
To formalize the mixer operation, assume that both the input signal and the local oscillator output are unmodulated, single-tone sinusoids:

V_s = E_s cos ω_s t    (33)

V_p = E_p cos ω_p t    (34)

If the multiplier (mixer) has a gain constant K, the output is

V_0 = (K/2) E_s E_p [cos(ω_s - ω_p)t + cos(ω_s + ω_p)t]    (35)

The difference frequency, ω_s - ω_p, is denoted by ω_if.
If the input is a modulated signal, the modulation also is translated to a band about the new carrier frequency, ω_if. For example, if the input is amplitude-modulated,

V_s = E_s(1 + m cos ω_m t) cos ω_s t
    = E_s cos ω_s t + (m/2) E_s cos(ω_s + ω_m)t + (m/2) E_s cos(ω_s - ω_m)t    (36)

The input can be represented as in Fig. 20b, with the carrier frequency term and an upper sideband and a lower sideband, each containing the modulation information. For a linear multiplier, each of the input components is multiplied by the LO input, and the output of the multiplier contains six terms, as shown in Fig. 20c: the difference-frequency carrier with two sidebands and the sum-frequency carrier with two sidebands. The latter combination is usually rejected by the bandpass of the IF amplifier.
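A minimal numerical check of Eq. (35), with assumed signal and LO frequencies, unit amplitudes, and an ideal multiplier of gain K, is sketched below; only the sum and difference frequencies appear at the output.

import numpy as np

fs = 100_000.0
t = np.arange(0, 0.1, 1.0 / fs)
f_s, f_p, K = 10_000.0, 9_000.0, 2.0        # assumed RF and LO frequencies, mixer gain constant

v0 = K * np.cos(2 * np.pi * f_s * t) * np.cos(2 * np.pi * f_p * t)

spec = np.abs(np.fft.rfft(v0)) * 2.0 / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
print(freqs[spec > 0.1])   # expect [1000., 19000.]: the IF (difference) and the sum frequency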
Figure 19. Pulse-amplitude modulation: (a) amplitude-modulated pulsetrain; (b) frequency spectrum of the modulated pulsetrain.
Figure 20. (a) The superheterodyne configuration; frequency spectra of (b) the input and (c) the multiplier output.
6. ANALOG MULTIPLICATION
An analog multiplier can be used as a mixer. A multiplier
inputs two electrical quantities, usually voltages but
sometimes currents, and outputs the product of the two
inputs, usually currents but sometimes voltages. The
product of two quantities is derived from only the sec-
ond-order term of the transfer characteristic of the ele-
ment, because the product xy can be derived from only the
second term of (x + y)^2. The second-order term is, for ex-
ample, obtained from the inherent exponential law for a
bipolar transistor or the inherent square law for a MOS
transistor.
There are three methods of realizing analog multi-
pliers: the first is by cross-coupling two variable-gain
cells, the second is by cross-coupling two squaring cir-
cuits, and the third is by using a multiplier core. Block
diagrams of these three multiplication methods are shown
in Fig. 21a-c. For example, the bipolar doubly balanced differential amplifier, the so-called Gilbert cell, is the first
case, and utilizes two-quadrant analog multipliers as vari-
able-gain cells. The second method has been known for a
long time and is called the quarter-square technique. The
third method is also based on the quarter-square tech-
nique, because a multiplier core is a cell consisting of the
four properly combined squaring circuits.
6.1. Multipliers Consisting of Two Cross-Coupled
Variable-Gain Cells
6.1.1. The Gilbert Cell. The Gilbert cell, shown in
Fig. 22, is the most popular analog multiplier, and consists
of two cross-coupled, emitter-coupled pairs together with a
third emitter-coupled pair. The two cross-coupled, emitter-
coupled pairs form a multiplier cell. The Gilbert cell con-
sists of two cross-coupled variable-gain cells, because the
lower emitter-coupled pair varies the transconductance of
the upper cross-coupled, emitter-coupled pairs.
Assuming matched devices, the differential output current of the Gilbert cell is expressed as

ΔI = I^+ - I^- = (I_C3 + I_C5) - (I_C4 + I_C6) = α_F^2 I_0 tanh(V_x/2V_T) tanh(V_y/2V_T)    (37)

where α_F is the DC common-base current gain factor.
The differential output current of the Gilbert cell is ex-
pressed as a product of two hyperbolic tangent functions.
Therefore, the operating input voltage ranges of the Gil-
bert cell are both very narrow. Many circuit design tech-
niques for linearizing the input voltage range of the
Gilbert cell have been discussed to achieve wider input
voltage ranges.
In addition, the Gilbert cell has been applied to ultra-
high-frequency (UHF) bands of some tens of gigahertz us-
ing GaAs heterojunction bipolar transistor (HBT) and InP
HBT technologies. The operating frequency of the Gilbert
cell was 500MHz at most in the 1960s.
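The tanh-times-tanh behavior of Eq. (37) is easy to probe numerically; the sketch below uses assumed bias values and shows how the transfer departs from an ideal product once the drive exceeds a few V_T.

import numpy as np

V_T = 0.02585      # thermal voltage at room temperature, volts
ALPHA_F = 0.99     # assumed DC common-base current gain
I_0 = 1e-3         # assumed tail current, amperes

def gilbert_delta_i(v_x, v_y):
    """Differential output current of the Gilbert cell, Eq. (37)."""
    return ALPHA_F**2 * I_0 * np.tanh(v_x / (2 * V_T)) * np.tanh(v_y / (2 * V_T))

for v in (0.005, 0.025, 0.100):                      # small to large drive, volts
    ideal = ALPHA_F**2 * I_0 * v * v / (2 * V_T)**2  # linearized (ideal product) approximation
    print(v, gilbert_delta_i(v, v), ideal)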
Figure 21. Multiplier block diagrams: (a) built from two cross-coupled variable-gain cells; (b) built from two cross-coupled squaring circuits; (c) built from a multiplier core and an input system.
Figure 22. Gilbert cell.
The series connection of the two cross-coupled, emitter-coupled pairs with a third emitter-coupled pair requires a high supply voltage, more than 2.0 V. Therefore, many circuit design techniques for realizing low-voltage Gilbert cells have also been discussed.
6.1.2. Modified Gilbert Cell with a Linear Transconductance Amplifier. The modified Gilbert cell with a linear transconductance amplifier in Fig. 23 possesses a linear transconductance characteristic only with regard to the second input voltage V_y, because it utilizes a linear transconductance amplifier for the lower stage. Low-voltage operation is also achieved using the differential current source output system of two emitter-follower-augmented current mirrors. The general structure of the mixer is a Gilbert cell with a linear transconductance amplifier, since the cross-coupled emitter-coupled pairs that input the LO signal possess a limiting characteristic. To achieve the desired low distortion, the differential pair normally used as the lower stage of the cell is replaced with a superlinear transconductance amplifier. In practice, the linear input voltage range of the superlinear transconductance amplifier at a 1.9-V supply voltage is 0.9 V peak to peak for less than 1% total harmonic distortion (THD) or 0.8 V for less than 0.1% THD.
The differential output current of the modified Gilbert cell with a linear transconductance amplifier is

ΔI = I^+ - I^- = (I_C1 + I_C3) - (I_C2 + I_C4) = 2 G_y V_y tanh(V_x/2V_T)    (38)

where G_y = 1/R_y and the DC common-base current gain factor α_F is taken as equal to one for simplification, since its value is 0.98 or 0.99 in current popular bipolar technology.
The product of the hyperbolic tangent function of the first input voltage and the second input voltage of the linear transconductance amplifier is obtained.
6.2. Quarter-Square Multipliers Consisting of Two Cross-Coupled Squaring Circuits
To realize a multiplier using squaring circuits, the basic idea is based on the identity (x + y)^2 - (x - y)^2 = 4xy or (x + y)^2 - x^2 - y^2 = 2xy. The former identity is usually expressed as

(1/4)[(x + y)^2 - (x - y)^2] = xy    (39)

The quarter-square technique based on the above identity has been well known for a long time.
The two input voltage ranges and the linearity of the transconductances of the quarter-square multiplier usually depend on the square-law characteristics of the squaring circuits and sometimes depend on the linearities of the adder and subtractor in the input stage. A quarter-square multiplier does not usually possess limiting characteristics with regard to both inputs.
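The quarter-square identity of Eq. (39) is trivially checked numerically, as in the sketch below with arbitrary test values.

def quarter_square_product(x, y):
    """xy obtained from two squaring operations, Eq. (39)."""
    return 0.25 * ((x + y) ** 2 - (x - y) ** 2)

for x, y in [(0.3, -0.7), (1.5, 2.0), (-4.0, -0.25)]:
    assert abs(quarter_square_product(x, y) - x * y) < 1e-12
print("Eq. (39) holds for the test values")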
6.3. Four-Quadrant Analog Multipliers with a Multiplier Core
The multiplier core can be considered as four properly combined squaring circuits. The multiplication is based on the identity

(ax + by)^2 + [(a - c)x + (b - 1/c)y]^2 - [(a - c)x + by]^2 - [ax + (b - 1/c)y]^2 = 2xy    (40)

where a, b, and c are constants.
Figure 23. Modified Gilbert cell with a linear transconductance amplifier.
If each squaring circuit is a square-law element with another parameter z, the identity becomes

(ax + by + z)^2 + [(a - c)x + (b - 1/c)y + z]^2 - [(a - c)x + by + z]^2 - [ax + (b - 1/c)y + z]^2 = 2xy    (41)

In Eqs. (40) and (41), the parameters a, b, c, and z can be canceled out.
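A numerical spot check of the core identity, Eq. (40), under randomly chosen a, b, c, x, y is sketched below (using the sign convention adopted above); the left-hand side reduces to 2xy for any a and b and any nonzero c.

import random

def core_identity(x, y, a, b, c):
    """Left-hand side of Eq. (40)."""
    return ((a * x + b * y) ** 2
            + ((a - c) * x + (b - 1 / c) * y) ** 2
            - ((a - c) * x + b * y) ** 2
            - (a * x + (b - 1 / c) * y) ** 2)

random.seed(1)
for _ in range(5):
    x, y, a, b, c = (random.uniform(-2, 2) for _ in range(5))
    c = c if abs(c) > 0.1 else 1.0        # keep 1/c well conditioned for the check
    assert abs(core_identity(x, y, a, b, c) - 2 * x * y) < 1e-6
print("Eq. (40) reduces to 2xy independently of a, b, c")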
MOS transistors operating in the saturation region can be used as square-law elements. Four properly arranged MOS transistors with two properly combined inputs produce the product of the two inputs. A cell consisting of four emitter-common or source-common transistors biased by a single tail current can be used as a multiplier core.
6.3.1. Bipolar Multiplier Core. Figure 24a shows a bipolar multiplier core. The individual input voltages applied to the bases of the four transistors in the core can be expressed as V_1 = aV_x + bV_y + V_R, V_2 = (a - 1)V_x + (b - 1)V_y + V_R, V_3 = (a - 1)V_x + bV_y + V_R, and V_4 = aV_x + (b - 1)V_y + V_R.

Figure 24. Bipolar multiplier: (a) general circuit diagram of core; (b) the core with the simplest combination of the two input voltages; (c) the bipolar multiplier consisting of a multiplier core and resistive dividers.
Figure 25. MOS multiplier: (a) general circuit diagram of core; (b) the core with the simplest combination of the two input voltages; (c) MOS multiplier consisting of the multiplier core and an active voltage adder.
The differential output current is expressed as

ΔI = I^+ - I^- = (I_C1 + I_C2) - (I_C3 + I_C4) = α_F I_0 tanh(V_x/2V_T) tanh(V_y/2V_T)    (42)

The parameters a and b are canceled out. The transfer function of the bipolar multiplier core is expressed as the product of the two transfer functions of the emitter-coupled pairs. The difference between Eqs. (42) and (37) is only in whether the tail current value is multiplied by the parameter α_F or by its square. Therefore, a bipolar multiplier core consisting of a quadritail cell is a low-voltage version of the Gilbert cell.
Simple combinations of two inputs are obtained when a = b = 1/2; when a = 1/2 and b = 1; and when a = b = 1, as shown in Fig. 24b. In particular, when a = b = 1, resistive voltage adders are applicable because no inversion of the signals V_x and V_y is needed (Fig. 24c).
6.3.2. MOS Multiplier Core. Figure 25a shows the MOS four-quadrant analog multiplier consisting of a multiplier core. The individual input voltages applied to the gates of the four MOS transistors in the core are expressed as V_1 = aV_x + bV_y + V_R, V_2 = (a - c)V_x + (b - 1/c)V_y + V_R, V_3 = (a - c)V_x + bV_y + V_R, and V_4 = aV_x + (b - 1/c)V_y + V_R. The multiplication is based on the identity of Eq. (41).
Ignoring the body effect and channel-length modulation, the equations for drain current versus drain-to-source voltage can be expressed in terms of three regions of operation as

I_D = 0    (43a)

for V_GS ≤ V_T, the OFF region,

I_D = 2β(V_GS - V_T - V_DS/2) V_DS    (43b)

for V_DS ≤ V_GS - V_T, the triode region, and

I_D = β(V_GS - V_T)^2    (43c)

for V_GS ≥ V_T and V_DS ≥ V_GS - V_T, the saturation region, where β = μ(C_o/2)(W/L) is the transconductance parameter, μ is the effective surface carrier mobility, C_o is the gate oxide capacitance per unit area, W and L are the channel width and length, and V_T is the threshold voltage.
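The three-region drain-current description of Eqs. (43a)-(43c) translates directly into code; the sketch below is a simplified model (body effect and channel-length modulation ignored, as in the text) with an assumed transconductance parameter and threshold voltage.

def mos_drain_current(v_gs, v_ds, beta=0.5e-3, v_t=0.7):
    """Square-law MOS drain current per Eqs. (43a)-(43c); beta and v_t are assumed values."""
    if v_gs <= v_t:                      # OFF region, Eq. (43a)
        return 0.0
    if v_ds <= v_gs - v_t:               # triode region, Eq. (43b)
        return 2 * beta * (v_gs - v_t - v_ds / 2) * v_ds
    return beta * (v_gs - v_t) ** 2      # saturation region, Eq. (43c)

print(mos_drain_current(1.5, 0.1))   # triode
print(mos_drain_current(1.5, 2.0))   # saturation: beta*(0.8)^2 = 0.32 mA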
The differential output current is expressed as

ΔI = I^+ - I^- = (I_D1 + I_D2) - (I_D3 + I_D4) = 2β V_x V_y,   (V_x^2 + V_y^2 + V_x V_y ≤ I_0/2β)    (44)

The parameters a, b, and c are canceled out. Four properly arranged MOS transistors with two properly combined inputs produce the product of the two input voltages. Simple combinations of two inputs are obtained when a = b = 1/2 and c = 1; when a = 1/2 and b = c = 1; and when a = b = c = 1, as shown in Fig. 25b.
Figure 25c shows a CMOS four-quadrant analog multiplier consisting of only a multiplier core and an active voltage adder.
In addition, a multiplier consisting of the multiplier core in Fig. 25a and a voltage adder and subtractor has been implemented with a GaAs MESFET IC, and a useful frequency range from dc to UHF bands of 3 GHz was obtained for a frequency mixer operating on a supply voltage of 2 or 3 V.
7. RADIOFREQUENCY SIGNAL AND LOCAL OSCILLATOR
Figure 26 shows a block diagram of a communication sys-
tem, showing modulation and demodulation. A wireless
communication system will usually consists of an infor-
mation source, which is modulated up to RF or microwave
frequencies and then transmitted. A receiver will take the
modulated signal from the antenna, demodulate it, and
send it to an information sink, as illustrated in Fig. 26.
The rate at which information can be sent over the chan-
nel is determined by the available bandwidth, the modu-
lation scheme, and the integrity of the modulation/demodulation process.
Frequency synthesizers are ubiquitous building blocks
in wireless communication systems, since they produce
the precise reference frequencies for modulation and de-
modulation of baseband signals up to the transmit and/or
receive frequencies.
Figure 26. Block diagram of a communication system, showing modulation and demodulation.
A simple frequency synthesizer might consist of a tran-
sistor oscillator operating at a single frequency deter-
mined by a precise crystal circuit. Tunable transistor
frequency sources rely on variations in the characteristics
of a resonant circuit to set the frequency. These circuits
can then be embedded in phase-locked loops (PLLs) to
broaden their range of operation and further enhance
their performance.
A representative view of a frequency synthesizer is giv-
en in Fig. 27 which shows a generic synthesizer producing
a single tone of a given amplitude that has a delta-func-
tion-like characteristic in the frequency domain.
Indirect frequency synthesizers rely on feedback, usually in the form of the PLL, to synthesize the frequency. A block diagram of a representative PLL frequency synthesizer is shown in Fig. 28. Most PLLs contain three basic building blocks: a phase detector, an amplifier/loop filter, and a voltage-controlled oscillator (VCO). During operation, the loop will acquire (or lock onto) an input signal, track it, and exhibit a fixed phase relationship with respect to the input. The output frequency of the loop can be varied by altering the division ratio (N) within the loop, or by tuning the input frequency with an input frequency divider (Q). Thus, the PLL can act as a broadband frequency synthesizer.
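For the loop of Fig. 28, with the reference divided by Q and the VCO output divided by N before the phase detector, the locked output frequency follows from equating the two phase-detector input frequencies; the helper below is a sketch under that assumption of simple integer dividers.

def pll_output_frequency(f_ref_hz, n, q):
    """Locked VCO frequency of the Fig. 28 loop: f_out/N = f_ref/Q in lock."""
    return f_ref_hz * n / q

# Assumed example: 10 MHz reference, Q = 10 (1 MHz comparison frequency), N stepped for channels.
for n in (900, 901, 902):
    print(n, pll_output_frequency(10e6, n, 10) / 1e6, "MHz")   # 1 MHz channel steps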
8. FREQUENCY SYNTHESIZER FIGURES OF MERIT
An ideal frequency synthesizer would produce a perfectly pure sinusoidal signal, which would be tunable over some specified bandwidth. The amplitude, phase, and frequency of the source would not change under varying loading, bias, or temperature conditions. Of course, such an ideal circuit is impossible to realize in practice, and a variety of performance measures have been defined over the years to characterize the deviation from the ideal.
8.1. Noise
The output power of the synthesizer is not concentrated exclusively at the carrier frequency. Instead, it is distributed around it, and the spectral distribution on either side of the carrier is known as the spectral sideband. This is illustrated schematically in Fig. 29. This noise can be represented as modulation of the carrier signal, and resolved into AM and FM components. The AM portion of the signal is typically smaller than the FM portion.
FM noise power is represented as a ratio of the power in some specified bandwidth (usually 1 Hz) in one sideband to the power in the carrier signal itself. These ratios are usually specified in dBc/Hz at some frequency offset from the carrier. The entire noise power can be integrated over a specified bandwidth to realize a total angular error in the output of the oscillator, and oscillators are often specified this way.
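As a sketch of that integration (assuming a tabulated single-sideband noise ratio L(f) in dBc/Hz, counting both sidebands, and using the usual small-angle approximation), the fragment below converts the sideband noise to an RMS phase error over a chosen bandwidth; the offset/level table is purely illustrative.

import numpy as np

# Assumed SSB phase noise profile: (offset in Hz, L(f) in dBc/Hz)
offsets = np.array([1e3, 1e4, 1e5, 1e6])
l_dbc_hz = np.array([-80.0, -95.0, -110.0, -125.0])

l_lin = 10.0 ** (l_dbc_hz / 10.0)             # dBc/Hz -> linear ratio per Hz
phase_var = 2.0 * np.trapz(l_lin, offsets)    # rad^2 over 1 kHz-1 MHz, both sidebands counted
print("RMS phase error:", np.degrees(np.sqrt(phase_var)), "deg")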
8.2. Tuning Range
The tuning range of an oscillator specifies the variation in
output frequency with input voltage or current (usually
voltage). The slope of this variation is usually expressed in
megahertz per volt. In particular, the key requirements of
oscillator or synthesizer tuning are that the slope of the
frequency variation remain relatively consistent over the
entire range of tuning and that the total frequency vari-
ation achieve some minimum specified value.
8.3. Frequency Stability
Frequency stability of an oscillator is typically specified in parts per million per degree centigrade (ppm/°C). This pa-
rameter is related to the Q of the resonator and the fre-
quency variation of the resonator with temperature. In a
free-running system this parameter is particularly impor-
tant, whereas in a PLL it is less so, since an oscillator that
drifts may be locked to a more stable oscillator source.
Figure 28. Indirect frequency synthesizer using a phase-locked loop.
Figure 29. Phase noise specification of a frequency source. The noise is contained in the sidebands around the signal frequency at f_0.
Figure 27. Block diagram of a frequency synthesizer producing a single-tone sinusoidal output.
8.4. Harmonics
Harmonics are outputs from the oscillator or synthesizer that occur at integral multiples of the fundamental frequencies. They are typically caused by nonlinearities in the transistor or other active devices used to produce the signal. They can be minimized by proper biasing of the active device and design of the output matching network to filter out the harmonics. Harmonics are typically specified in dBc below the carrier.
8.5. Spurious Outputs
Spurious outputs are outputs of the oscillator or synthesizer
that are not necessarily harmonically related to the
fundamental output signal. As with harmonics, they are
typically specied in dBc below the carrier.
MOBILE COMMUNICATION
TADEUSZ WYSOCKI
Curtin University of Technology
HANS-JÜRGEN ZEPERNICK
Cooperative Research Center for
Broadband
Telecommunications and
Networking
RALF WEBER
Ericsson Eurolab Deutschland
GmbH
The desire for mobility and for communication with others
is deeply ingrained in human nature. The need to develop
an efficient public mobile communication system has been
driving a lot of researchers since the late nineteenth cen-
tury. It is generally accepted that mobile communication
(precisely speaking, mobile radiocommunication) was
born in 1897, when Guglielmo Marconi gained a patent
for his wireless telegraph. Since then, mobile communica-
tions have gone from the early stages at the beginning
of the twentieth century, when mobile communication
was widely used in navigation and in maintaining con-
tacts with remotely traveling ships and airplanes, through
infancy of the 1950s and the 1960s, to maturity at the end
of the twentieth century, when public mobile telephony,
paging, and other mobile services are commonplace.
There are several comprehensive readings available on
each of those specific services [1-4], and this article does
not pretend to cover all topics related to mobile commu-
nication. Rather, we concentrate on some specific issues
that, in our opinion, allow the reader to understand major
differences between mobile and stationary or fixed com-
munications.
1. HISTORY
Mobile communication has always been used by people,
particularly during military struggles, when commanders
needed to pass their orders to remote troops in the middle
of a battle. Several methods have been used, with horns
and drums among the most popular. Different optical
mobile communication systems based on so-called
signal flags have also been used for maritime applications.
All of those methods have been, however, of a limited
range and small capacity. There were no major improve-
ments in mobile communication until the birth of electro-
magnetic theory.
In the late nineteenth century, after the theoretical
predictions made by Maxwell in his treatise on electricity
and magnetism and Heinrich Hertz's experimental work on
the transmission and reception of electromagnetic waves,
Guglielmo Marconi and some other researchers started to
look into possible applications of electromagnetic radia-
tion for communication purposes. Since then, radio com-
munications have been used to save lives, win battles,
generate new businesses, maximize opportunities, and so
forth. From the introduction of public mobile cellular
telephony in the 1980s, mobile communications have
become elements of mass communication with the objec-
tive of providing a broad range of services similar to, and
in some instances exceeding, those offered by the public
switched telephone network (PSTN). The level of penetra-
tion for mobile phones varies among different countries,
but in some countries it is already high.
Before the 1970s, numerous private mobile radio net-
works, such as citizen band (CB) radio, ham operator mobile radio, and portable home radio telephones, used different
spectrum located in the frequency band from about
30 MHz to 3GHz. Standardization started to take place
in the 1970s with the development of the Nordic Mobile
Telephone (NMT) system by Ericsson and the Advanced
Mobile Phone Services (AMPS) by AT&T. Both systems
have become de facto and de jure the technical standards
for analog mobile telephony as the so-called first-generation systems. Soon after the deployment of those first-generation systems, second-generation, fully digital
mobile cellular systems appeared on drawing boards
throughout the world. The Groupe Special Mobile
(GSM), pan-European, fully digital cellular telephony
standard was developed in late 1980s. After its successful
deployment, it began to be accepted as a standard for the
second-generation mobile radio not only in Europe but
also in other parts of the world. The main competition
for the GSM monopoly in digital mobile telephony has
come from code-division multiple-access (CDMA) spread-
spectrum technology, developed primarily for military
applications. The CDMA-based IS-95 systems and the time-division multiple-access (TDMA)-based GSM systems
have their powerful supporters and vigorous opponents,
and it is not clear yet which approach is going to be adopt-
ed in the development of the third generation of mobile
telephony or the future public land mobile telecommuni-
cation system (FPLMTS), which recently has been re-
named International Mobile Telecommunications-2000
(IMT-2000).
2. OVERVIEW OF CONCEPTS
A typical mobile communication system consists of mobile
terminals, base stations, mobile switching centers, and
telecommunication channels. Those telecommunication
channels are either of a fixed nature (cables or dedicated
radiolinks), to provide connections between base stations
and the mobile switching center, or mobile radio channels
between mobile terminals and base stations servicing
those terminals.
Unlike fixed telecommunication channels, the mobile
radio channels are nonstationary and exhibit a high level
of unpredictability with regard to channel characteristics.
In addition, due to the nature of radiocommunication and
low directivity of antennas used, there is always a possi-
bility of strong interference from other users of the radio
spectrum. All these factors need to be carefully taken into
account while calculating a power budget for a mobile ra-
dio channel.
One of the specific features of mobile communication
systems is the need to assign a free radio channel to the
user requiring the connection. This is done during setup of
the connection. This setup, combined with the limited fre-
quency spectrum available for mobile services, means that
the number of simultaneous calls within the coverage area
of a single base station is highly limited. Therefore, before
introduction of the cellular concept (which is explained
later in the section on spectrum management), the
number of simultaneous calls within a system covering
sometimes a huge area was very low. For example, the
single-base-station mobile system in New York City in the
1970s could only support a maximum of 12 simultaneous
calls over one thousand square miles [5]. The concept of
such a cellular system is illustrated in Fig. 1.
There are generally four different types of channels
that are used for communication between the base station
and mobiles: (1) forward voice channels (FVCs), used
for voice transmission from the base station to mobiles;
(2) reverse voice channels (RVCs), used for voice trans-
mission from mobiles to the base station; (3) forward con-
trol channels (FCC), used for transmission of signaling
data from the base station to mobiles; and (4) reverse con-
trol channels (RCC), used for transmission of signaling
data from mobiles to the base station. The control chan-
nels transmit and receive data necessary to set up a call,
to move it to an unused voice channel, and to manage the
handovers between base stations. They are also used for
constant monitoring of the system and for synchronization
purposes.
The base station serves as a bridge between all mobile
users in its coverage area and the mobile switching center
(MSC). The MSC acts as a central switching point, routing the traffic among the connected base stations, and serves
as a gateway to the PSTN. It also coordinates all activities
of the base stations and accommodates all billing and
maintenance functions. A typical MSC handles 100,000
cellular subscribers and 5000 simultaneous conversa-
tions [6].
3. CLASSIFICATION
Mobile communication systems are classified into genera-
tions in accordance with evolution of systems in time. In-
dicators for a generation are the transmission techniques involved, the services supported, and the status of unification.
First-generation systems were introduced in the early
1980s and used analog techniques basically for speech
services. Second-generation systems evolved in the late
1980s and are now in a mature form. They utilize digital
techniques, and, apart from speech, they support some
low-rate data services. Second-generation systems may be
further classified into cellular, cordless, and professional
radio systems. Due to the wide range of second-generation
systems and their immense complexity, we only summa-
rize some air interface parameters of selected digital cel-
lular systems in Table 1. Standards for third-generation
systems are currently being developed to provide mobile
multimedia telecommunications and universal coverage.
In the following subsections, we will give a concise over-
view of mobile communication systems and refer to the
literature for details.
3.1. First-Generation Systems
The first-generation cellular systems use analog frequency modulation (FM) for traffic channels, digital frequency
shift keying (FSK) for signaling channels, and a frequency
division duplex (FDD) method. In addition, frequency-di-
vision multiple access (FDMA) is employed to share the
transmission medium. In the beginning, businesspeople
were the main customers, but later acceptance in resi-
dential markets started to increase immensely. In 1981,
the Scandinavian countries introduced the Nordic
Mobile Telephone standard NMT-450 [7] and in 1986 the
NMT-900 standard, where the numbers in the acronyms
indicate the utilized frequency band in MHz. The AMPS
system [8] was developed in the United States, and service
opened in 1983. AMPS has been adapted by many coun-
tries, such as Canada and Australia. A variant of AMPS is
the Total Access Communication system (TACS) deployed
in 1985 in the United Kingdom, which basically uses a
smaller channel spacing than AMPS. In 1986, the C-450
system [9] opened its service in Germany.
3.2. Second-Generation Systems
3.2.1. Cellular Systems
3.2.1.1. Global System for Mobile Communication.
Although mobile communication in most European coun-
tries was well covered by their individual analog cellular
systems, incompatible standards made it impossible to
Figure 1. Configuration of a cellular system: mobile terminals, base stations (BS), and a mobile switching center (MSC) connected to the public switched telephone network.
Table 1. Characteristics of Selected Second-Generation Systems

                      GSM                DCS-1800           DECT              IS-95
Frequency band
  MS-BS               890-915 MHz        1710-1785 MHz      1880-1900 MHz     824-849 MHz
  BS-MS               935-960 MHz        1805-1880 MHz      1880-1900 MHz     869-894 MHz
Carrier spacing       200 kHz            200 kHz            1728 kHz          1250 kHz
Duplex spacing        45 MHz             95 MHz             0 Hz              45 MHz
No. of carriers       125                375                10                20
System bandwidth      2 x 25 MHz         2 x 75 MHz         20 MHz            2 x 25 MHz
Speech coder
  Full rate           13 kb/s RPE-LTP    13 kb/s RPE-LTP    32 kb/s ADPCM     8, 4, 2, 1 kb/s QCELP
  Half rate           5.6 kb/s VSELP     4.5 kb/s VSELP     -                 -
Multiple access       TDMA               TDMA               TDMA              CDMA
Duplexing method      FDD                FDD                TDD               FDD
Modulation            GMSK (BT(a) 0.3)   GMSK (BT(a) 0.3)   GMSK (BT(a) 0.5)  QPSK/BPSK
Frame bit rate        271 kbps           271 kbps           1.152 Mbps        1.288 Mbps
Frame length          4.615 ms           4.615 ms           10 ms             20 ms

(a) BT: 3-dB bandwidth and bit duration product of the Gaussian filter.
interwork among systems or share equipment. To over-
come this deficiency, the Conference Europeenne des Post-
es et Telecommunications (CEPT) established in 1982 the
Groupe Special Mobile to develop a pan-European stan-
dard. The outcome was the GSM system [10], which now
stands for global system for mobile communication. The
standard specifies a digital cellular system on the basis of
a dedicated pan-European frequency band allocated at
900MHz. GSM supports a variety of speech and low-rate
data services. In 1991, the first GSM system opened and
since then it has experienced tremendous popularity
worldwide, as indicated by the more than 65 countries
that have already adopted GSM. An extension of GSM is
the Digital Cellular System-1800 (DCS-1800) standard
allocated in the 1.8-GHz band. DCS-1800 has been de-
signed to meet the requirements of personal communica-
tion networks (PCN).
3.2.1.2. Interim Standard 54 (IS-54). During the 1980s,
increasing demand on cellular services was observed in
the United States, approaching traffic capacity limits of
analog AMPS. To satisfy the enormous capacity require-
ments, the Cellular Telecommunication Industry Associa-
tion asked for a digital standard. As a result, IS-54 [11]
has been developed and is also known as United States
Digital Cellular (USDC). IS-54 is designed to coexist with
analog AMPS in the same frequency band but to replace
the analog system step by step. For that reason, IS-54 has
to be upward compatible to AMPS and hence is sometimes
referred to as digital AMPS (D-AMPS). With the used dig-
ital techniques and planned employment of a half-rate co-
dec, IS-54 is expected to provide six times the traffic capacity of AMPS.
3.2.1.3. Interim Standard 95 (IS-95). Development of IS-
95 was launched in 1991 after Qualcomm successfully
demonstrated a CDMA digital cellular validation system.
The IS-95 standard [12] specifies a direct-sequence CDMA
digital cellular system. This wideband digital cellular
standard employs a set of spreading sequences that are
assigned to users. All users in the cellular system transmit
in the same radio channel but using different sequences.
Therefore, frequency planning is not required and can be
thought of as replaced by planning how to allocate spread-
ing sequences in different cells.
3.2.1.4. Personal Digital Cellular (PDC). The Japanese
effort to increase capacity over analog systems is docu-
mented in a PDC air interface standard [13], which was
issued in 1991. This digital cellular system has been allo-
cated a different frequency band than the analog system.
It supports speech, data, and short message services.
3.2.2. Cordless Systems
3.2.2.1. Cordless Telephony (CT2). The initial goal of
cordless telephony was to provide wireless pay phone ser-
vices with low-cost equipment but no support of incoming
calls. These systems cover only a single cell with a radius
of about 300m outdoor and 50 m indoor. Most manufac-
turers in Europe agreed on a common air interface (CAI), which became the CT2/CAI standard [14]. It uses digital
techniques and replaces analog cordless telephony, which
offered only a small number of channels. In Canada, CT2
was developed to support incoming calls as well. By
dedicating more carriers for signaling purposes, location
management was practicable.
3.2.2.2. Digital European Cordless Telecommunications
(DECT). The DECT standard [14] was developed by the
European Telecommunications Standards Institute
(ETSI) and has been allocated a guaranteed pan-Europe-
an frequency. It is designed as a exible interface based on
open system interconnection (OSI) and was nalized in
1992. The system provides mobility in picocells with very
high capacity. Speech and data services are supported
where incoming and outgoing calls can be managed. Ini-
tially, DECT was intended for interworking with private
automatic branch exchange (PABX) to provide mobility
within the area of a PABX. Its application-independent interface also allows interworking with PSTN, integrated
services digital network (ISDN), or even GSM. For oper-
ators of public networks, DECT can be employed to span
the last mile to subscribers by radio local loop (RLL).
3.2.2.3. Personal Handy Phone Systems (PHS). In 1989,
the Japanese Ministry of Posts and Telecommunications
initiated the standardization process for another cordless
system, which became the PHS standard [15]. Among other things, the standard defines the air interface, voice ser-
vices, and data services. The system is designed for small
cells, and it maintains incoming as well as outgoing calls.
A special feature of PHS is that mobiles that are close
enough may bypass the base station and communicate di-
rectly with each other.
3.2.3. Professional Mobile Radio. Besides cellular and
cordless systems, a variety of professional mobile radio
(PMR) systems have been designed for professional and
private users. In 1988, the European Commission and
ETSI initiated standardization of a PMR system known as
trans-European trunked radio (TETRA). Applications in-
clude group calls within a fleet of users and fleet manage-
ment as required by police, safety organizations, or taxi
companies. Similarly, in the United States the so-called
Associated Public Safety Communications Officers Project
25 (APCO 25) is specifically concerned with public safety
radio services. There are a number of other PMR systems,
such as European radio message (ERMES), digital short-
range radio (DSSR), and terrestrial flight telephone sys-
tem (TFTS), to mention only a few.
3.3. Third-Generation Systems
At present, mobile communications is realized by many
kinds of competitive and incompatible standards, systems,
and services. On the other hand, unification of cellular,
paging, cordless, and professional mobile radio is desirable
to manage limited physical resources, improve system
quality, and keep up with the great demand for mobile
services. Third-generation systems aim to provide unification and worldwide coverage.
3.3.1. Universal Mobile Telecommunication System. The
European effort to support the same type of services any-
where and anytime is known as the Universal Mobile
Telecommunication System (UMTS) and is described in
Ref. 16. It is based on GSM, DCS-1800, and DECT. Stan-
dardization is concerned with air interface and protocol
issues aiming for global coverage for speech, low-to-medi-
um bit rate services, and multimedia capabilities. A major
challenge is to achieve higher data rates, up to 2 mbps. In
1987, the European Union launched a program called Re-
search and Development in Advanced Communications
Technologies in Europe (RACE). RACE was supposed to
investigate advanced options for mobile communications
and in that way assist ETSI in standardization of UMTS.
Several subprojects within RACE were concerned with
advanced topics, as the following examples indicate. The
advanced TDMA (ATDMA) project investigated antenna
systems and equalization issues. An approach to increase
data rate was undertaken in the Code Division Testbed
(CODIT) project. Recently, an agreement was reached
among European companies on the radio interface for
UMTS based on wideband CDMA (W-CDMA) and time
division CDMA (TD-CDMA) technologies [17]. Operation
of UMTS is expected to commence in the beginning of the
twenty-first century.
3.3.2. International Mobile Telecommunications2000.
In the mid-1980s, the International Telecommunications
Union (ITU) began its studies of future public land mobile
telecommunication systems, the goal of which is to sup-
port mobile communication anywhere, anytime. The goals
are similar to those intended to be reached by UMTS, but
under a worldwide perspective. The ITU approach is now
called International Mobile Telecommunications-2000
(IMT-2000) [18], the former FPLMTS (the attached 2000
indicates the frequency band of operation in MHz as well
as the year in which service is planned to open). At the
World Administrative Radio Conference in 1992 (WARC-
92), a bandwidth of 230MHz in the 2 GHz frequency band
was allocated worldwide to IMT-2000. A major objective of
IMT-2000 is to offer users a small, inexpensive pocket
communicator and provide for seamless roaming of a
mobile terminal across various networks. Services range
from voice to data and mobile multimedia applications or
even Internet access. These services will be offered for a
wide range of operating environments, such as indoor,
outdoor, terrestrial, and satellite networks. IMT-2000 will
utilize technologies like the asynchronous transfer mode
(ATM) to provide broadband transport services.
3.3.3. Other Systems
3.3.3.1. Wireless Local Area Networks (WLANs). Mobile
data of second-generation systems offers only low bit rate
wireless data transmission but with wide-area mobility and
roaming. UMTS and IMT-2000 are supposed to increase the
bit rate to some 2Mbps. On the other hand, for in-house
and on-premises networking there is substantial demand
for higher bit rates in the range of 20Mbps, whereas mo-
bility is required only in a restricted area. WLAN systems
for such local networking are planned to complement third-
generation systems and are considered a flexible and cost-effective alternative to cable-based LANs. Since proprietary
solutions are typically customized and rely on products of a
particular equipment supplier, standardization of WLAN
systems has been undertaken as follows. The high-perfor-
mance radio local-area network (HIPERLAN) standard
[19] was developed by ETSI. HIPERLAN achieves a bit
rate of 20Mbps and can be used to extend wired LANs such
as Ethernet. Spectrum has been recommended in the
5- and 17-GHz bands. Access to the shared transmission
channel is gained by a contention-based and collision avoid-
ance strategy. All time-critical services are supported by
best effort and, in principle, may achieve the same access
priority. The Institute of Electrical and Electronics Engi-
neers (IEEE) 802.11 standard [20] represents the rst ap-
proach for WLAN products from an internationally
recognized, independent organization. It defines the protocol for two types of networks, namely, ad hoc and client/
server networks. IEEE 802.11 operates in the industrial,
scientific, and medical (ISM) band at 2.4GHz using spread-
spectrum modulation. The medium access control (MAC)
uses a collision avoidance mechanism. The third major ef-
fort in the area of WLANs is known as wireless ATM
(WATM) [21,22]. With ATM being a widely accepted stan-
dard for broadband networking, it is natural to extend this
technology into the wireless domain. Accordingly, the ATM
Forum is working toward a WATM standard. A WATM
system will provide bandwidth on demand for low- and
high-priority services, hence providing real support of time
critical multimedia services. Furthermore, a WATM system
will interwork seamlessly with wired ATM networks and
can be expected to cooperate very well with IMT-2000.
3.3.3.2. Satellite Systems. Further improvement of cov-
erage can be provided by satellite systems [23], which are
a component of future-generation mobile systems. Low-
Earth-orbit (LEO) satellite systems typically operate on
orbits at an altitude of 700-1400 km, whereas medium-
Earth-orbit (MEO) satellite systems have their orbits at
about 10,000km. Compared to geostationary satellite sys-
tems, LEO and MEO systems allow low-power handhelds
and smaller antennas and offer a smaller round-trip delay
between Earth and satellite. Both LEO and MEO systems
employ a number of satellites that form a satellite network
and offer worldwide coverage. Satellite systems can be
used to cover remote areas that are out of range of a ter-
restrial cellular system or where a telephony system does
not exist. The rst generation of satellite personal com-
munications networks (S-PCN) will basically support
voice, data, facsimile, and paging. Promising candidates
of LEO/MEO systems are Intermediate Circular Orbit
(ICO) satellite cellular telephone systems, formerly known
as Inmarsat-P, Odyssey, Globalstar, and Iridium. The sec-
ond generation of S-PCN systems, such as the Teledesic
approach, will evolve toward multimedia services and
employ ATM technology as well.
4. MOBILE RADIO CHANNELS
Mobile radio channels are considered to be a complex and severe transmission medium. Path loss often exceeds that of free space by several tens of decibels. Reflection, diffraction, scattering, and shadowing lead to fading and multipath reception (Fig. 2). Mobile radio channels are time variant, where signals fluctuate randomly as the receiver moves over irregular terrain and among buildings. Understanding channel behavior and developing channel models for a specific band are always vital for efficient system design.
4.1. Large-Scale Propagation Models
Propagation models that predict the average signal
strength of a received signal at a given distance from the
transmitter are called large-scale propagation models.
These models are concerned with loss along the wave
propagation path between the mobile and base stations.
Extensive measurement campaigns have been undertak-
en to develop models for some typical environments. We
distinguish roughly between rural, suburban, and urban
environments.
4.1.1. Free-Space Model. The ideal large-scale model is
free-space propagation, assuming that no objects or obsta-
cles inuence propagation. Free-space path loss is given
by [24]
$$L_F = -10 \log G_t - 10 \log G_r + 20 \log f_c + 20 \log d + 32.44~\mathrm{dB} \qquad (1)$$

where $G_t$ and $G_r$ are the transmitter and receiver antenna gain compared to an isotropic antenna, $f_c$ is the carrier frequency in MHz, and $d$ is the distance between the transmitter and receiver in km.
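As a quick numerical check of Eq. (1), the short Python sketch below evaluates the free-space loss; the function name and the example values are illustrative and are not taken from the source.

```python
import math

def free_space_loss_db(d_km, f_mhz, g_t_db=0.0, g_r_db=0.0):
    """Free-space path loss of Eq. (1): d in km, f_c in MHz, antenna gains in dBi."""
    return 32.44 + 20 * math.log10(f_mhz) + 20 * math.log10(d_km) - g_t_db - g_r_db

# Isotropic antennas, 900 MHz carrier, 5 km separation: about 105.5 dB
print(round(free_space_loss_db(5.0, 900.0), 1))
```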
4.1.2. Okumura-Hata Model. Okumura [25] presented
a graphical method to predict the median attenuation rel-
ative to free space for a quasismooth terrain. The model
consists of a set of curves developed from measurements
and is valid for a particular set of system parameters in
terms of carrier frequency, antenna height, and so forth.
After the free-space path loss has been computed, the median attenuation, as given by Okumura's curves, has to be added. Additional correction factors apply for different terrains. Later, Hata [26] transformed Okumura's graph-
ical method into an analytical framework. The Hata model
for urban areas is given by the empirical formula
$$L_{50,\mathrm{urban}} = 69.55~\mathrm{dB} + 26.16 \log f_c - 13.82 \log h_t - a(h_r) + (44.9 - 6.55 \log h_t) \log d \qquad (2)$$
where $L_{50,\mathrm{urban}}$ is the median path loss in dB. Equation (2) is valid for carrier frequencies $f_c$ in the range from 150 to 1500 MHz, mobile antenna height $h_r$ from 1 to 10 m, and base station antenna height $h_t$ ranging from 30 to 200 m. The distance $d$ between mobile and base is supposed to be within 1-20 km. The correction factor $a(h_r)$ for mobile antenna height $h_r$ for a small- or medium-sized city is given by
$$a(h_r) = (1.1 \log f_c - 0.7)\, h_r - (1.56 \log f_c - 0.8)~\mathrm{dB} \qquad (3)$$
and for a large city it is given by
$$a(h_r) = \begin{cases} 8.29\,(\log 1.54 h_r)^2 - 1.1~\mathrm{dB} & \text{for } f_c \le 300~\mathrm{MHz} \\ 3.2\,(\log 11.75 h_r)^2 - 4.97~\mathrm{dB} & \text{for } f_c \ge 300~\mathrm{MHz} \end{cases} \qquad (4)$$
Equation (2) serves as the standard formula in urban
areas and has to be modified for suburban areas:

$$L_{50,\mathrm{suburban}} = L_{50,\mathrm{urban}} - 2\,[\log(f_c/28)]^2 - 5.4~\mathrm{dB} \qquad (5)$$
For rural areas, we have to use
$$L_{50,\mathrm{rural}} = L_{50,\mathrm{urban}} - 4.78\,(\log f_c)^2 + 18.33 \log f_c - 40.94~\mathrm{dB} \qquad (6)$$
A uniform extension of these formulas for the carrier frequency range 1500 MHz $\le f_c \le$ 2000 MHz and small cells, such as those of personal communications systems, is specified by the European Co-operative for Scientific and Technical Research (COST-231) recommendation [27].
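The Okumura-Hata formulas above translate directly into a small path-loss routine. The following sketch is a minimal implementation of Eqs. (2)-(6); the function name, parameter names, and example values are illustrative, and the caller is responsible for staying within the stated validity ranges.

```python
import math

def hata_loss_db(f_mhz, h_t_m, h_r_m, d_km, environment="urban", large_city=False):
    """Median path loss of Eqs. (2)-(6); valid for 150-1500 MHz, h_t = 30-200 m,
    h_r = 1-10 m, d = 1-20 km. Names and structure are illustrative."""
    if large_city:                                   # Eq. (4)
        if f_mhz <= 300.0:
            a_hr = 8.29 * math.log10(1.54 * h_r_m) ** 2 - 1.1
        else:
            a_hr = 3.2 * math.log10(11.75 * h_r_m) ** 2 - 4.97
    else:                                            # Eq. (3)
        a_hr = (1.1 * math.log10(f_mhz) - 0.7) * h_r_m - (1.56 * math.log10(f_mhz) - 0.8)

    l_urban = (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_t_m)
               - a_hr + (44.9 - 6.55 * math.log10(h_t_m)) * math.log10(d_km))  # Eq. (2)

    if environment == "suburban":                    # Eq. (5)
        return l_urban - 2 * math.log10(f_mhz / 28.0) ** 2 - 5.4
    if environment == "rural":                       # Eq. (6)
        return l_urban - 4.78 * math.log10(f_mhz) ** 2 + 18.33 * math.log10(f_mhz) - 40.94
    return l_urban

# 900 MHz carrier, 50 m base antenna, 1.5 m mobile antenna, 5 km range
for env in ("urban", "suburban", "rural"):
    print(env, round(hata_loss_db(900.0, 50.0, 1.5, 5.0, env), 1))
```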
4.2. Wideband Characterization
To model the time and frequency dispersion of a mobile
radio channel with respect to wideband transmission, a
system theory approach can be utilized [28]. This
approach is concerned with fading and multipath in the
vicinity of a small area. A set of system functions provides
a description in either the time or frequency domain. Due
to the random nature of the radio channel, system func-
tions describe stochastic processes.
4.2.1. Time-Variant Impulse Response and Autocorrela-
tion Function. Bello [28] showed that a radio channel may
be regarded as a linear time-variant system. He intro-
duced a set of continuous-time and continuous-frequency
Figure 2. Illustration of multipath propagation typically experi-
enced in a mobile radio environment. Signals between transmit-
ter and receiver propagate along a line-of-sight (LOS) path and
are scattered through several echo paths.
system functions, each of which completely describes the
channel and can be transformed into any of the remaining
functions. For the sake of clarity, we assume hereinafter
that signal spectra are narrow compared with carrier fre-
quency and channel bandwidth. Thus, we can use a com-
plex lowpass equivalent to represent a bandpass system
[29]. In doing so, let us focus on the time domain and con-
sider the input delay spread function $h(t, \tau)$, defined by

$$y(t) = \int_{-\infty}^{\infty} x(t - \tau)\, h(t, \tau)\, d\tau \qquad (7)$$

where $x(t)$ and $y(t)$ denote the complex envelopes of the transmitted and received signals, respectively. The input delay spread function $h(t, \tau)$ can be thought of as the time-variant impulse response of the lowpass equivalent channel at time $t$ due to a unit input impulse applied in the past at time $t - \tau$.
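In discrete time, Eq. (7) amounts to a tapped delay line whose tap gains vary with time. The sketch below is a minimal illustration of that reading; the tap statistics and array sizes are arbitrary and chosen only for demonstration.

```python
import numpy as np

def time_variant_filter(x, h):
    """Discrete-time reading of Eq. (7): y[n] = sum_k h[n, k] * x[n - k].
    x: complex baseband input, shape (N,); h: time-varying tap gains, shape (N, K)."""
    n_samples, n_taps = h.shape
    y = np.zeros(n_samples, dtype=complex)
    for n in range(n_samples):
        for k in range(n_taps):
            if n - k >= 0:
                y[n] += h[n, k] * x[n - k]
    return y

rng = np.random.default_rng(0)
x = np.exp(1j * 2 * np.pi * rng.random(64))           # unit-amplitude input samples
h = (rng.standard_normal((64, 3)) + 1j * rng.standard_normal((64, 3))) / np.sqrt(6)
y = time_variant_filter(x, h)                          # received complex envelope
```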
Correlation functions provide significant insight into
stochastic processes and are often used to avoid specica-
tion of multidimensional probability density functions. In
this context, the autocorrelation function of the channel
impulse response is given by [28]
$$R_h(t_1, t_2; \tau_1, \tau_2) = \overline{h(t_1, \tau_1)\, h^{*}(t_2, \tau_2)} \qquad (8)$$

where $h(t, \tau)$ is assumed to be a random process without deterministic component, the overbar denotes the ensemble average, and $*$ indicates the complex conjugate. Variables $t_1$ and $t_2$ denote time instants, whereas $\tau_1$ and $\tau_2$ denote
delays. For many mobile radio channels Eq. (8) does not
depend on absolute time but on time difference. In addi-
tion, scatterers may be regarded as uncorrelated. Such a
channel is called a wide-sense stationary uncorrelated
scattering (WSSUS) channel, and its autocorrelation func-
tion simplifies to [28]

$$R_h(t_1, t_2; \tau_1, \tau_2) = P_h(\Delta t; \tau_2)\, \delta(\tau_2 - \tau_1) \qquad (9)$$

where $P_h(\Delta t; \tau_2)$ is a cross-power spectral density, $\Delta t = t_2 - t_1$ indicates the time difference, and $\delta(\tau)$ denotes a unit impulse at $\tau = \tau_2 - \tau_1$.
4.2.2. Time and Frequency Dispersion Parameters. In
practice, a set of typical channel parameters is used to
characterize the dispersive behavior of a mobile radio
channel in the time and frequency domain. Similar to
Bellos system approach of corresponding functions, time
and frequency dispersion parameters possess dual repre-
sentations in the frequency and time domain, respectively.
These parameters can be obtained from measurements
and employed for channel classication.
4.2.2.1. Delay Spread and Coherence Bandwidth. Be-
cause of multipath propagation, the impulse response of
a mobile radio channel appears as a series of pulses rather
than a single delayed pulse. A received signal suffers
spreading in time compared to the transmitted signal. De-
lay spread can range from a few hundred nanoseconds in-
side buildings up to some microseconds in urban areas.
Delay-related parameters can be obtained from the power delay profile $P_h(\tau)$ [29], which is defined as the power spectral density for $\Delta t = 0$:

$$P_h(\tau) = P_h(\Delta t; \tau)\big|_{\Delta t = 0} \qquad (10)$$
Maximum excess delay is defined as the period between the time of the first arriving signal and the maximum time at which a multipath signal exceeds a given threshold. The first moment of a power delay profile is called the mean excess delay $\mu_\tau$ and is defined by

$$\mu_\tau = \frac{\int_0^{\infty} \tau\, P_h(\tau)\, d\tau}{\int_0^{\infty} P_h(\tau)\, d\tau} \qquad (11)$$
The square root of the second central moment of the power delay profile is referred to as the root-mean-square (RMS) delay spread, $\sigma_\tau$, and is defined by

$$\sigma_\tau = \sqrt{\frac{\int_0^{\infty} (\tau - \mu_\tau)^2\, P_h(\tau)\, d\tau}{\int_0^{\infty} P_h(\tau)\, d\tau}} \qquad (12)$$
The coherence bandwidth $B_c$ translates time dispersion into the language of the frequency domain. It specifies the frequency range over which a channel affects the signal spectrum nearly in the same way, causing an approximately constant attenuation and linear change in phase. Coherence bandwidth is inversely proportional to RMS delay spread:

$$B_c \propto \frac{1}{\sigma_\tau} \qquad (13)$$
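A minimal sketch of how Eqs. (11)-(13) are applied to a measured or assumed power delay profile is given below. The profile values are illustrative, and the factor of 1/5 used to turn the proportionality of Eq. (13) into a number is a common rule of thumb rather than something specified in the text.

```python
import numpy as np

# Illustrative power delay profile sampled at the given excess delays
tau = np.array([0.0, 0.2e-6, 0.5e-6, 1.6e-6, 2.3e-6, 5.0e-6])   # delay in s
p   = np.array([1.0, 0.6, 0.3, 0.1, 0.05, 0.01])                 # linear power

mean_excess = np.sum(tau * p) / np.sum(p)                                # Eq. (11)
rms_spread  = np.sqrt(np.sum((tau - mean_excess) ** 2 * p) / np.sum(p))  # Eq. (12)
coherence_bw = 1.0 / (5.0 * rms_spread)    # common rule of thumb for Eq. (13)

print(f"mean excess delay = {mean_excess*1e6:.2f} us")
print(f"rms delay spread  = {rms_spread*1e6:.2f} us")
print(f"coherence bandwidth ~ {coherence_bw/1e3:.0f} kHz")
```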
4.2.2.2. Doppler Spread and Coherence Time. Move-
ment of a mobile station relative to a base station or move-
ment of objects within the channel causes the received
frequency at the mobile station to differ from the transmit-
ted frequency due to Doppler shift. In a multipath envi-
ronment, a mobile station receives signals from different
paths. Its relative movement with respect to each path dif-
fers, which results in a range of Doppler shifts. The band-
width over which dispersion of the transmitted frequency
occurs is referred to as the Doppler spread, $B_d$. The time-domain equivalent to the Doppler spread $B_d$ is called the coherence time, $T_c$. It specifies a period over which the channel impulse response $h(t, \tau)$ is nearly time invariant. Coherence time is inversely proportional to Doppler spread:

$$T_c \propto \frac{1}{B_d} \qquad (14)$$
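For orientation, the following lines estimate the maximum Doppler shift and a coherence time for an assumed speed and carrier frequency. Only the inverse proportionality of Eq. (14) is fixed by the text; the factor of 1/2 below is one common convention, and the numbers are purely illustrative.

```python
# Doppler spread and coherence time for an assumed speed and carrier frequency
c = 3e8                      # speed of light, m/s
v = 30.0 / 3.6               # 30 km/h expressed in m/s
f_c = 900e6                  # carrier frequency, Hz

f_d_max = v * f_c / c        # maximum Doppler shift; Doppler spread B_d ~ 2 * f_d_max
t_c = 1.0 / (2.0 * f_d_max)  # one common reading of Eq. (14)

print(f"max Doppler shift ~ {f_d_max:.0f} Hz, coherence time ~ {t_c*1e3:.1f} ms")
```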
4.2.3. Classication of Multipath Channels
4.2.3.1. Flat Fading. This type of fading is related to
delay spread. It occurs when the signal symbol period is
much larger than rms delay spread. As a result, inter-
symbol interference (ISI) almost vanishes. In the dual
domain, the signal bandwidth is narrow compared to the
coherence bandwidth. The channel has a flat transfer
function with almost linear phase, thus affecting all
spectral components of the signal similarly.
4.2.3.2. Frequency Selective Fading. If the signal symbol period is much shorter than the rms delay spread, the receiver is able to resolve multipath components, and ISI impairs
transmission. In that case, the bandwidth of the signal
exceeds the coherence bandwidth, and various spectrum
components may be affected differently. Frequency selec-
tive fading is also caused by delay spread.
4.2.3.3. Fast Fading. This type of fading is caused by
motion in a mobile environment and hence relates to
Doppler spread. Fast fading can be observed when signi-
ficant changes in the channel impulse response occur
within the signal symbol period. In other words, the band-
width of the Doppler spectrum is wide compared with the
signal bandwidth, which then causes significant signal
distortion.
4.2.3.4. Slow Fading. In the case when the channel im-
pulse response is almost time invariant for the duration of
a signal symbol period, we observe a slow fading and only
a minor signal distortion. The Doppler spread is then nar-
row compared to the signal bandwidth.
4.3. Narrowband Characteristics
We consider an unmodulated sinusoidal waveform being transmitted at carrier frequency $f_c$ and described in complex notation by $x(t) = \exp(j 2\pi f_c t)$, where $j = \sqrt{-1}$. The equivalent lowpass signal at the receiver can be written as [29]

$$y(t) = y_I(t) + j y_Q(t) = \sum_{i=1}^{\infty} a_i(t) \exp[\,j 2\pi f_{d,i} t - j 2\pi f_c \tau_i(t)\,] \qquad (15)$$

where $y_I(t)$ and $y_Q(t)$ denote, respectively, the in-phase and quadrature components of the complex-valued signal $y(t)$, $a_i(t)$ is the complex amplitude, $f_{d,i}$ is the Doppler frequency, and $\tau_i(t)$ is the delay of the $i$th multipath component.
4.3.1. Fading Distributions
4.3.1.1. Rayleigh Fading. Suppose there is no dominant path between transmitter and receiver, and all multipath components are multiply reflected to build a diffuse received signal. In that case, the complex amplitude $a_i(t)$, Doppler frequency $f_{d,i}$, and delay $\tau_i(t)$ can be considered statistically independent of each other. The probability density function of the received signal envelope $r(t) = |y(t)| = \sqrt{y_I^2(t) + y_Q^2(t)}$ leads to a Rayleigh distribution given by [24]

$$p_{\mathrm{Rayleigh}}(r) = \frac{r}{\sigma_y^2} \exp\!\left(-\frac{r^2}{2\sigma_y^2}\right) \qquad (16)$$

where $r \ge 0$ is the received signal envelope and $\sigma_y^2 = E\{y_I^2(t)\} = E\{y_Q^2(t)\}$ is the variance of the zero-mean, normally distributed processes describing $y_I(t)$ and $y_Q(t)$. The mean $\mu_r$ and variance $\sigma_r^2$ of $r(t)$ are given by $\mu_r = E\{r(t)\} = \sqrt{\pi/2}\,\sigma_y$ and $\sigma_r^2 = (2 - \pi/2)\,\sigma_y^2$, respectively.
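A minimal simulation sketch of Eqs. (15) and (16) follows: a diffuse field is built from a number of equal-amplitude paths with random Doppler shifts and phases and no dominant component, and the time-averaged envelope is compared with the Rayleigh mean quoted above. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_samples = 40, 20000
t = np.linspace(0.0, 1.0, n_samples)

# Eq. (15) with constant amplitudes, random Doppler shifts and phases, no dominant path
a   = np.ones(n_paths) / np.sqrt(n_paths)
f_d = 50.0 * np.cos(2 * np.pi * rng.random(n_paths))     # Doppler shifts up to 50 Hz
phi = 2 * np.pi * rng.random(n_paths)
y = (a[:, None] * np.exp(1j * (2 * np.pi * f_d[:, None] * t + phi[:, None]))).sum(axis=0)

r = np.abs(y)                                            # received envelope
sigma2 = 0.5 * np.mean(np.abs(y) ** 2)                   # estimate of sigma_y^2
# compare the time-averaged envelope with the Rayleigh mean sqrt(pi/2)*sigma_y
print(np.mean(r), np.sqrt(np.pi / 2) * np.sqrt(sigma2))
```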
4.3.1.2. Rice Fading. Let the received signal contain a dominant component that might be caused by a line-of-sight (LOS) path or a single reflected multipath component. In terms of an equivalent lowpass signal, we have to add a constant $r_0$ to the real part of $y(t)$ and thus $y(t) = [r_0 + y_I(t)] + j y_Q(t)$. The corresponding probability density function of the signal envelope $r(t) = \sqrt{[r_0 + y_I(t)]^2 + y_Q^2(t)}$ leads to a Rician distribution [24]

$$p_{\mathrm{Rice}}(r) = \frac{r}{\sigma_y^2} \exp\!\left(-\frac{r^2 + r_0^2}{2\sigma_y^2}\right) I_0\!\left(\frac{r\, r_0}{\sigma_y^2}\right) \qquad (17)$$

where $r_0 \ge 0$ is the peak amplitude of the dominant component and $I_0(\cdot)$ is the zero-order modified Bessel function of the first kind.
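To make Eqs. (16) and (17) concrete, the sketch below evaluates both densities on a grid and checks numerically that each integrates to one; the values of sigma_y and r_0 are arbitrary illustrative choices.

```python
import numpy as np
from scipy.special import i0   # zero-order modified Bessel function of the first kind

sigma_y, r0 = 1.0, 2.0
r = np.linspace(0.0, 12.0, 4000)

p_rayleigh = r / sigma_y**2 * np.exp(-r**2 / (2 * sigma_y**2))                    # Eq. (16)
p_rice = (r / sigma_y**2 * np.exp(-(r**2 + r0**2) / (2 * sigma_y**2))
          * i0(r * r0 / sigma_y**2))                                              # Eq. (17)

# both densities should integrate (numerically) to approximately one
print(np.trapz(p_rayleigh, r), np.trapz(p_rice, r))
```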
4.3.1.3. Lognormal Fading. When a mobile station moves within an area about the base station, the local-mean power $P_r$ of the received signal varies about the area-mean power $P_s$ due to shadowing effects. Measurements have shown that the logarithmic value $L_r = 10 \log(P_r)$ in dB of the local-mean power $P_r$ is normally distributed about the logarithmic value $L_s = 10 \log(P_s)$ in dB of the area-mean power $P_s$. This gives rise to $P_r$ being called lognormally distributed, and the corresponding probability density function is given by [30]

$$p_{\mathrm{lognormal}}(P_r) = \frac{10 \log e}{P_r} \cdot \frac{1}{\sqrt{2\pi}\,\sigma_s} \exp\!\left(-\frac{[10 \log(P_r) - L_s]^2}{2\sigma_s^2}\right) \qquad (18)$$

where the standard deviation $\sigma_s$ in dB characterizes the shadowing effect.
5. SPECTRUM MANAGEMENT
The radiofrequency spectrum is a limited resource that
has to be shared in some way among the communication
community. With the increasing demand for mobile radio
communication, regulations and methods for efcient
spectrum usage are required. Frequency authorities
such as the ITU and the Federal Communications Com-
mission (FCC) play a vital part in allocating frequency
bands to new systems and in worldwide coordination
of the radio spectrum. Once a frequency band has been
licensed, the cellular concept enables spectrum efficiency by reusing frequencies in spatially distant areas. Band-
width-efcient modulation schemes can be used to in-
crease further the total number of available channels in
a system.
5.1. Frequency Licensing
Spectrum planning is a hierarchical process that starts at
the highest international level at the ITU and is covered
by the WARC. This international framework is a base for
national frequency planning and allocation. From the
start of a licensing process to actual frequency allocation
with specified services, often several years elapse. On the other hand, today's main requirements of licensing proce-
dures are speed, transparency, fairness, efciency, and
conformity with government objectives. As a response,
several national regulatory methods for licensing of
frequency bands have been applied and are still used,
depending on the local situation [31].
5.1.1. Over-the-Counter Allocation. This is the tradi-
tional method, which can be regarded as a rst come, rst
served principle. Obviously, with this method there will be
unsatised demand. In addition, there is no guarantee
that those who apply rst will be those who value the
spectrum most highly or whose service is of greatest im-
portance to the community. In cases where demand does
not exceed supply (e.g., for many military applications),
this approach is still favored.
5.1.2. Comparative Assessment. In this approach, governments and regulators assess the relative merits of dif-
ferent applicants for a frequency band. The criteria are varied: for example, anticipated consumer benefits, the technology applied, or the perceived overall social worth of the service to be supplied. A regulator must be able to make judgments about the most valued use of a frequency band. Apparently, comparative assessment is neither transparent, fast, nor even fair, and should only be
used when a decision can be easily reached (e.g., if there
is no competition between providers due to a monopolistic
market).
5.1.3. Lotteries. Lotteries involve a random distribu-
tion of licenses to different applicants of the same fre-
quency band. Lotteries are quick and fair as long as all
participants have the same weight and do not circumvent
this by submitting multiple entries under different names.
Even then lotteries might not be effective because the val-
ue of the service cannot be accounted for.
5.1.4. Tenders. Applicants provide sealed bid tenders
for a desired frequency band. The advantages of this
method are its fairness and transparency. The problem is that it is not effective, in the sense that the band will be gained by the company with the greatest financial resources rather than necessarily by the most valued service. This may lead to proprietary services if the final service in that band is not completely defined in advance. Otherwise, if the service is already defined, this
approach is better than those previously mentioned. One
major disadvantage still remains: the high possibility of
overvaluing the asset because the bids of others are not
known.
5.1.5. Auctions. Auctioning of frequency bands com-
bines the advantages of the tender approach with infor-
mation about the bids of the applicants. In addition,
government interests concerning enhanced competition
can be taken into account by restricting the occupied
amount of frequency bands for an applicant to a certain
percentage. Sometimes the fairness of this approach is
criticized because similar lots could be sold for very different prices. Furthermore, collusion may happen at auctions. To offer multiple frequency bands at the same time,
the simultaneous multiple-round auction was developed.
It seems that the latter method is one of the best for ef-
cient and fair spectrum management. It fullls most of
the requirements mentioned previously and returns the
pressure from the regulators back to the applicants. Thus,
it is going to be applied in future licensing procedures in
which an excess of demand over supply can be forecast
(e.g., for third-generation mobile systems).
5.2. Frequency Reuse
The concept of frequency reuse is a core element of all cel-
lular mobile radio systems [1,6]. It significantly increases
system capacity, which may be indicated by the total num-
ber of available duplex channels. The frequency band al-
located to a cellular system is organized into a nite
number of frequency channels, each of which can be si-
multaneously reused in different geographic locations (so-
called cells). In that way, a cellular system can serve more
customers compared with the case when the whole system
area is covered by just a single base station. On the other
hand, reuse of frequency channels causes interference be-
tween those cells that use the same channel (this is called
cochannel interference). It is a major task of cellular sys-
tem design to maximize capacity and to minimize inter-
ference. Since capacity can be increased by using smaller
cells and interference decreases with larger cells, a com-
promise is required.
In cellular system design, some idealized assumptions
are made that ease the complex task of capacity and in-
terference analysis. First, a hexagonal cell shape is nor-
mally proposed as a model to approximate the actual
footprint of a base station. Thus, the whole coverage
area of a cellular system can be represented by a homo-
geneous grid of hexagons. Because of this geometry, the
number of cells in a cluster or cluster size can only take
certain values and is given by
$$N = I^2 + IJ + J^2 \qquad (19)$$
where the shift parameters I and J are nonnegative inte-
gers. Figure 3 shows the frequency reuse concept for a
seven-cell cluster in which each letter denotes a set of fre-
quencies. A certain cell is surrounded by adjacent channel
neighbors. The nearest co-channel neighbor (say, to cell G) can be found by moving along a chain of $I = 2$ hexagons, turning 60° counterclockwise, and finally moving in that new direction along $J = 1$ cells.
Here, we consider a homogeneous hexagonal cellular
system in which cells are roughly of equal size. It turns out
that cochannel interference does not depend on transmis-
sion power but on the ratio between distance D of a cell to
the center of the nearest cochannel cell and radius R of a
cell (Fig. 4). This parameter is called the cochannel reuse
ratio and is given by
$$Q = D/R = \sqrt{3N} \qquad (20)$$
A large cochannel reuse ratio means that transmission
quality is high, since cochannel interference is kept low
because of a reasonable spatial separation of cochannel
neighbors. Large capacity can be obtained when the clus-
ter size and correspondingly the cochannel reuse ratio is
small.
Usually, cochannel interference can be quantified by computing the signal-to-interference ratio (SIR). For that purpose, let $\gamma$ denote the path loss exponent and assume $\gamma$ to be constant over the whole coverage area of the cellular system. In a mobile radio environment, $\gamma$ typically ranges between two and four. In addition, assume that all base stations transmit the same power. Then the SIR at a mobile station can be estimated by

$$\frac{S}{I} = \frac{R^{-\gamma}}{\sum_{i=1}^{i_0} D_i^{-\gamma}} \qquad (21)$$

where $i_0$ is the number of cochannel interfering cells. Let
us now consider cochannel interfering cells from the first tier only and assume that all those base stations are at a distance $D$ from the desired base station. Then the SIR can be approximated as

$$\frac{S}{I} \approx \frac{Q^{\gamma}}{i_0} = \frac{(D/R)^{\gamma}}{i_0} = \frac{(\sqrt{3N})^{\gamma}}{i_0} \qquad (22)$$
Finally, in the worst-case scenario, in which the mobile
station is located at the cell boundaries, the SIR can be
approximated as
$$\frac{S}{I} \approx \frac{1}{2(Q - 1)^{-\gamma} + 2Q^{-\gamma} + 2(Q + 1)^{-\gamma}} \qquad (23)$$
Table 2 shows SIR values for some typical cluster sizes. The path loss exponent is taken as $\gamma = 4$, and the number $i_0$ of cochannel interfering cells in a fully developed system is about six. Subjective tests undertaken for voice services have shown that an $S/I$ of 18 dB gives satisfactory quality.
In a homogeneous hexagonal system, this requires a clus-
ter size of at least seven.
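The cluster-size and SIR relations of Eqs. (19), (20), (22), and (23) can be checked with a few lines of code; the sketch below (function names are illustrative) reproduces the entries of Table 2 for gamma = 4 and i_0 = 6.

```python
import math

def cluster_size(i, j):
    return i * i + i * j + j * j                     # Eq. (19)

def sir_db(i, j, gamma=4.0, i0=6):
    n = cluster_size(i, j)
    q = math.sqrt(3.0 * n)                           # Eq. (20)
    sir_first_tier = q ** gamma / i0                 # Eq. (22)
    sir_worst_case = 1.0 / (2 * (q - 1) ** -gamma
                            + 2 * q ** -gamma
                            + 2 * (q + 1) ** -gamma)  # Eq. (23)
    return n, q, 10 * math.log10(sir_first_tier), 10 * math.log10(sir_worst_case)

for i, j in [(1, 1), (2, 1), (3, 0), (2, 2)]:
    n, q, s22, s23 = sir_db(i, j)
    print(f"I={i} J={j} N={n:2d} Q={q:.2f}  S/I(22)={s22:5.2f} dB  S/I(23)={s23:5.2f} dB")
```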
Apart from frequency reuse, there are some other tech-
niques to improve the capacity of a cellular system. Im-
proved capacity is often required to adapt the initial
system to an increasing demand on services or to better
cover congested areas. Sectoring is such a technique and
replaces omnidirectional antennas at a base station by
several sector antennas. Most common is an arrangement of three or six sectors whereby a 120° or 60° directional antenna radiates within a certain sector. Because there are fewer cochannel interferers in the first tier of a sectorized system, this approach increases the signal-to-interference ratio and thus allows for a smaller cluster size (i.e., higher ca-
pacity). A second technique is called cell splitting, which
basically reduces the cell sizes. In that way, more cells t
within an area, resulting in more available channels per
area.
5.3. Digital Modulation Techniques for
Mobile Communication
A modulation scheme for a mobile communication system
should utilize the allocated frequency band and power as
efciently as possible. With regard to second-generation
systems and the intended application of data services in
future wireless systems, digital modulation techniques
are a natural choice. Selection of an appropriate digital
modulation scheme can be made on the basis of the fol-
lowing characteristics. The power density spectrum is de-
ned as the relative power in a modulated signal versus
Figure 3. Illustration of the cellular concept by means of a seven-cell cluster. The seven frequencies reused in the system are labeled A through G. Shift parameters are $I = 2$ and $J = 1$.
Figure 4. Interference geometry between two co-channel cells
used to compute the signal-to-interference ratio assuming an ide-
alized hexagonal cell shape. Cells are of radius R and have dis-
tance D from each other.
Table 2. Signal-to-Interference Ratio for Various Cluster Sizes

  I   J    N     Q     S/I in dB, Eq. (22)   S/I in dB, Eq. (23)
  1   1    3    3.00         11.30                  8.03
  2   1    7    4.58         18.65                 17.26
  3   0    9    5.19         20.84                 19.74
  2   2   12    6.00         23.34                 22.57
frequency. It consists of a main lobe and several sidelobes,
which indicate interference into adjacent channels. A
modulation scheme can be further assessed by its robust-
ness against interference and channel impairments,
which is indicated by a low bit error rate. Bandwidth ef-
ciency measures the bit rate that can be transmitted per
unit of frequency bandwidth and is expressed as the num-
ber of bits per second per hertz (bps/Hz). Apart from that,
implementation complexity and costs have to be consid-
ered as well. A desirable modulation scheme should
achieve high bandwidth efciency at a given bit error
rate and simultaneously offer a narrow power density
spectrum. Digital modulation schemes currently being
used in second generation systems can be classied into
phase shift keying (PSK) and continuous phase modula-
tion (CPM).
The family of PSK schemes belongs to the class of lin-
ear modulation techniques [29]; that is, a modulating dig-
ital signal is used to vary linearly the amplitude of a
transmitted signal. The most popular schemes within this
family include quaternary phase shift keying (QPSK), off-
set QPSK (OQPSK), π/4 phase-shifted QPSK (π/4-QPSK), and differentially encoded π/4-QPSK, called π/4-DQPSK.
QPSK splits the baseband data signal into two pulse-
streams (namely, in-phase and quadrature components),
which reduces the data rate to half that of the baseband
signal. The phase of the carrier takes one of four values (say, 0°, 90°, 180°, or 270°, or an equally spaced but rotated
constellation of these). Side lobes in the power density
spectrum of a QPSK-modulated signal are usually sup-
pressed by passing the signal through a front-end filter, but then the signal will no longer have a constant envelope. When the filtered signal experiences a 180° shift in carrier phase, the envelope fluctuates significantly and even goes through zero. Such an envelope fluctuation will cause reappearance of sidelobes every time the filtered QPSK-modulated signal passes a nonlinearity (e.g., a nonlinear amplifier). An improvement over QPSK is offered by the OQPSK modulation scheme, in which a delay of one bit is introduced to the quadrature component. As a result, in-phase and quadrature components have signal transitions at separate time instants, and thus shifts in carrier phase are limited to a maximum of ±90°. Because 180° phase shifts have been removed, the envelope cannot go through zero any longer. Envelope fluctuations are less severe, and nonlinear amplification can be applied. Finally, π/4-QPSK can be regarded as a compromise between QPSK and OQPSK, which limits the maximum phase shift to ±135°. π/4-QPSK can be differentially encoded and is then called π/4-DQPSK. The π/4-DQPSK technique uses phase shifts of the carrier, rather than the absolute phase, to transmit information. It can be noncoherently detected, which is one of the reasons that π/4-DQPSK has been employed in many digital mobile communication systems, such as IS-54, PDC, and PHS.
Improvement of OQPSK in terms of out-of-band radi-
ation can be obtained from the CPM technique [32], which
continuously varies the carrier phase and thus completely
avoids discontinuous phase transitions. Further, all CPM
schemes have constant carrier envelopes and hence allow
for nonlinear amplication. Among the most popular CPM
schemes are minimum shift keying (MSK) and Gaussian
minimum shift keying (GMSK). MSK can be regarded as a
special case of OQPSK in which the baseband signal uses
half-sinusoidal pulses instead of a rectangular pulseshape
(this produces the desired smooth phase transitions). Un-
fortunately, MSK does not have a power density spectrum
as compact as desirable for mobile radio. A tighter spec-
trum can be achieved by passing the modulating signal
through a premodulating pulseshaping lter. Following
this concept, GMSK uses a lter with a Gaussian or bell-
shaped transfer function. The compact power density
spectrum of GMSK is gained at the expense of an
increased irreducible bit error rate due to intersymbol
interference. GMSK has been adopted for GSM, DECT,
and CT-2.
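The role of the BT product can be illustrated by building the Gaussian-filtered frequency pulse directly. The sketch below uses a standard textbook construction (a rectangular one-bit pulse convolved with a Gaussian filter whose 3-dB bandwidth is B); the sampling rate, truncation span, and the printed confinement measure are illustrative choices and are not taken from the source.

```python
import numpy as np

def gaussian_frequency_pulse(bt, sps=16, span=4):
    """One-bit rectangular NRZ pulse filtered by a Gaussian filter with 3-dB
    bandwidth-bit duration product BT (time is normalized to the bit duration)."""
    t = np.arange(-span * sps, span * sps + 1) / sps          # time in bit durations
    sigma = np.sqrt(np.log(2)) / (2 * np.pi * bt)             # 3-dB point at f = B
    g = np.exp(-t ** 2 / (2 * sigma ** 2))
    g /= g.sum()                                              # unit-area Gaussian
    rect = np.ones(sps)                                       # one-bit rectangular pulse
    pulse = np.convolve(g, rect)
    return pulse / pulse.sum()

for bt in (0.3, 0.5):
    p = gaussian_frequency_pulse(bt)
    mid = len(p) // 2
    # fraction of the pulse confined to the central bit interval (smaller BT -> more ISI)
    print(bt, round(p[mid - 8:mid + 8].sum(), 3))
```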
Table 3 summarizes modulation techniques used in
second-generation systems along with the achieved
bandwidth efficiency. The most popular schemes are π/4-DQPSK and GMSK. Note that the employed GMSK schemes differ in the 3-dB bandwidth and bit duration product BT of the Gaussian filter.
6. CONCLUSION
Mobile communication has already become an integral
part of modern life and is one of the driving forces of tele-
communications. Wireless access to global telecommuni-
cation can be expected to change the face of
telecommunication even further. Apart from traditional
voice services and low-rate data, future mobile communi-
cation will provide for multimedia services that are a mix-
ture of voice, data, text, graphic, and video. Land mobile,
Table 3. Spectral Efficiency of Second-Generation Systems

  System   Modulation Technique   Data Rate (kbps)   Channel Spacing (kHz)   Efficiency (bps/Hz)
  GSM      GMSK (BT^a = 0.3)            270.8                200                    1.35
  IS-54    π/4-DQPSK                     48.6                 30                    1.62
  IS-95    QPSK/BPSK                     1288               1250                    1.03
  PDC      π/4-DQPSK                       42                 25                    1.68
  CT-2     GMSK (BT^a = 0.5)               72                100                    0.72
  DECT     GMSK (BT^a = 0.5)             1152               1728                    0.67
  PHS      π/4-DQPSK                      384                300                    1.28
  TETRA    π/4-DQPSK                       36                 25                    1.44

  ^a BT = 3-dB bandwidth and bit duration product of the Gaussian filter.
maritime, aeronautic, and satellite systems will not just
coexist but will establish a global mobile system allowing
users to communicate at any time and from anywhere in
the world. Before this vision of a global mobile system can
become reality, several technical challenges have to be
resolved. Such resolution will require support from all
areas of communication engineering, such as source and
channel coding, bandwidth-efcient modulation, multiple
access control, and protocol and security issues.
BIBLIOGRAPHY
1. W. C. Y. Lee, Mobile Cellular Telecommunications, McGraw-
Hill, New York, 1995.
2. J. Tisal, GSM Cellular Radio Telephony, Wiley, New York,
1997.
3. A. D. Hadden, Personal Communications Networks: Practical
Implementation, Artech House, Boston, 1995.
4. G. Calhoun, Wireless Access and the Local Telephone Network,
Artech House, Boston, 1992.
5. G. Calhoun, Digital Cellular Radio, Artech House, Boston,
1988.
6. T. S. Rappaport, Wireless CommunicationsPrinciples and
Practice, IEEE Press, Piscataway, NJ, 1996.
7. Nordic Mobile Telephone Group, Nordic Mobile Telephone,
System Description, NMT Doc. 1. 1997, February 1978.
8. W. R. Young, Advanced mobile phone services: Introduction,
background, and objectives, Bell Syst. Tech. J. 58:1-14 (1979).
9. FTZ der Deutschen Bundespost, Funkfernsprechdienst Netz
C, Technische Vorschriften, FTZ 171R60, Darmstadt, 1982.
10. European Telecommunications Standards Institute, GSM
Recommendations Series 01-12, ETSI Secretariat, Sophia
Antipolis Cedex, France, 1990.
11. Electronic Industries Association/Telecommunications Industry Association, Cellular System, Dual-Mode Mobile
Station-Base Station Compatibility Standard, EIA/TIA
Interim Standard 54, 1991.
12. Electronic Industries Association/Telecommunications Industry Association, Mobile Station-Base Station Compati-
bility Standard for Dual-Mode Wideband Spread Spectrum
Cellular System, EIA/TIA Interim Standard 95, 1993.
13. Research and Development Center for Radio Systems, Per-
sonal Digital Cellular System Common Air Interface, RCR-
STD 27B, Tokyo, Japan, 1991.
14. W. H. W. Tuttlebee, ed., Cordless Telecommunications in
Europe, Springer-Verlag, Berlin, 1990.
15. Research and Development Center for Radio Systems, Perso-
nal Handy Phone System: Second Generation Cordless Tele-
phone System Standard, RCR-STD 28, Tokyo, Japan, 1993.
16. S. Chia, The universal mobile telecommunication system,
IEEE Commun. Mag. 30(2):54-62 (1992).
17. European Telecommunications Standards Institute, Agree-
ment Reached on Radio Interface for Third Generation
Mobile System, UMTS (Universal Mobile Telecommunications
System), press release, Tdoc 40/98, ETSI Secretariat, Sophia
Antipolis Cedex, France, Jan. 29, 1998.
18. M. Callendar and T. F. La Porta eds., IMT-2000: Standards
efforts of the ITU, IEEE Personal Commun. (special issue),
4(4) (1997).
19. European Telecommunications Standards Institute, Radio
Equipment and Systems (RES)High Performance Radio
Local Area Network (HIPERLAN), technical report, DTR/
RES-1003, ETSI Secretariat, Sophia Antipolis Cedex, France,
1993.
20. The Institute of Electrical and Electronics Engineers, Wire-
less LAN Medium Access Control (MAC) and Physical Layer
(PHY) Specication, IEEE Draft Standard P802.11/D2.1-95/
12, IEEE Press, Piscataway, NJ, 1995.
21. M. Naghshinen, ed., Wireless ATM, IEEE Personal Commun.
(special issue) 3(4) (1996).
22. T. R. Hsing et al. (eds.), Wireless ATM, IEEE J. Select. Areas
Commun. (special issue) 15(1) (1997).
23. E. Del Re et al. (eds.), Mobile satellite communications for
seamless PCS, IEEE J. Select. Areas Commun. (special issue)
13(2) (1995).
24. J. D. Parsons, The Mobile Radio Propagation Channel,
Pentech Press, London, 1992.
25. T. Okumura, E. Ohmori, and K. Fukuda, Field strength and
its variability in VHF and UHF land mobile services, Rev.
Electron. Commun. Lab. 16(9-10):825-873 (1968).
26. M. Hata, Empirical formula for propagation loss in land
mobile radio services, IEEE Trans. Vehic. Technol. VT-29:317-325 (1980).
27. European Cooperation in the Field of Scientific and Technical
Research EURO-COST 231, Urban Transmission Loss Mod-
els for Mobile Radio in the 900 and 1800MHz Bands, Revision
2, The Hague, Sept. 1991.
28. P. A. Bello, Characterization of randomly time-variant linear
channels, IEEE Trans. Commun. COM-11:360-393 (1963).
29. J. G. Proakis, Digital Communications, McGraw-Hill, New
York, 1994.
30. R. W. Lorenz, Field strength prediction method for a mobile
telephone system using a topographical data bank, IEE Conf.
Proc. 188:6-11 (1980).
31. A. J. Shaw, Spectrum auctions: Are they the best approach?
Proc. 3rd Asia-Pacific Conf. Commun., Sydney, Australia,
pp. 551-557.
32. J. B. Anderson, T. Aulin, and C.-E. Sundberg, Digital Phase
Modulation, Plenum Press, New York, 1986.
MOBILE RADIO CHANNELS
RODNEY G. VAUGHAN
Simon Fraser University
Burnaby, British Columbia
Canada
1. INTRODUCTION
The term mobile channel refers to the transfer function of
a radio link when one or both of the terminals are moving.
The moving terminal is typically in a vehicle such as a car,
or a personal communications terminal such as a cell-
phone. Normally one end of the radio link is fixed, and this
is referred to as the base station. In the link, there is usu-
ally multipath radiowave propagation, which is changing
with time, or as a function of position of the moving ter-
minal. The effects of this multipath propagation dominate
the behavior and characterization of the mobile channel.
The radiofrequency of the link ranges from hundreds of
kilohertz, as in broadcast AM radio, to microwave fre-
quencies, as in cellphone communications. Indeed, even
optical frequencies are used, as in an infrared link used for
indoor computer communications. The kind of channel
most often referred to as mobile, however, is that using
microwave frequencies, and this article concentrates on
the characteristics of a mobile microwave radio link. Much
of the dynamic channel behavior can be scaled by the car-
rier frequency and by the speed of the mobile terminal.
Current spectral usage is a result of many different
historical developments, so the bands used by mobile radio
channels have evolved to be at many frequencies. For ex-
ample, current vehicular and personal communications
terminals mostly use frequencies around 900MHz and
1.8 GHz. In the future, higher frequencies will be used.
The frequency has a denitive bearing on the rate at
which the channel changes.
Some examples of mobile channels include domestic
cordless telephones; cellular telephones and radiotele-
phones; pagers; satellite communication terminals, includ-
ing navigational services such as Global Positioning System
(GPS) reception; and radio networks for local data commu-
nications. Finally, the reception by portable receivers of broadcast radio, at frequencies of a few hundred kilohertz (AM radio), is a common form of the mobile radio channel.
The use of mobile channels has grown very quickly
since the early 1990s. This growth will continue. It is
driven by a combination of consumer demand for mobile
voice and data services and advances in electronic tech-
nology. A limiting factor to the growth is that many users
must share the radio spectrum, which is a nite resource.
The spectral sharing is not only local; it is also interna-
tional, and so spectral regulatory issues have also become
formidable. The increasing pressure to use the spectrum
more efciently is also a driving force in regulatory and
technical developments.
To a user, a mobile or personal communications system is
simple: it is a terminal, such as a telephone, that uses a ra-
dio link instead of a wire link. The conspicuous result is that
the terminal is compact for portability, and it has an anten-
na, although for personal communications the antenna is
often no longer visible. To the communication engineer,
however, the mobile terminal can be viewed as a compo-
nent in an electrical circuit. The mobile channel is one link
in the circuit, but this link is the most complex, owing to its
use of radiowaves in complicated propagation environments
and of radio signal-processing technology needed to facili-
tate wireless transmission among multiple users.
In mobile channels, efcient spectral utilization is a
function of the basic limitations on controlling radiowave
behavior in complicated physical environments, including
the launching and gathering of the waves. Thus antennas
and propagation are key topics, and their roles character-
ize the channel behavior.
2. THE MOBILE CHANNEL
The mobile channel covers many different transfer func-
tions that have different properties. Figure 1 illustrates
individual channels [1]. The gure shows half of a link,
Figure 1. Various channels in a mobile communications link: the (electromagnetic) propagation channel, the (electromagnetic) signal channel, the radio channel, the baseband equivalent (complex envelope) radio channel, the digital channel, the raw data channel, and the raw information channel. The term mobile channel refers to the analog aspects of the channel, excluding modulation and coding.
with the other half essentially an inverse process. The
multipath propagation environment represents the phys-
ical environment of the radio waves in the mobile channel.
The ow of information is described here for transmission,
but the description adapts readily to reception. A raw in-
formation channel refers to the transfer function that sep-
arates the transmitted and received raw information. For
speech, for example, degradations of the immediate acous-
tical environment from reverberation and acoustic noise
form part of the channel seen by the user at the receiv-
ing end. The quality of the information channel may be
subjective, although standard metrics of distortion and
signal-to-noise ratio can be applied for characterization.
The electrical signal is often digitized for efcient trans-
mission, and the digital channel is nonlinear, but its chan-
nel quality can be measured directly as a bit error ratio
(BER). This digital form is sometimes rearranged by en-
coding techniques for more robust transmission of the in-
formation. The digital information is coded into analog
waveforms and then mixed, or heterodyned, to the radio
carrier frequency and transmitted via the antenna.
The distinguishing feature of the mobile channel is the
changing multipath propagation between transmitter and
receiver. The receiving antenna gathers the many incident
electromagnetic waves from the multipath environment.
These multipath contributions mutually interfere in a
random, time-varying manner, and so statistical tech-
niques are needed to characterize the channel. In the
physical transmission media, the waves that bear the in-
formation are the signals of the electromagnetic propaga-
tion channel. The antenna reduces the signals from a
vector form of orthogonal polarizations to a scalar volt-
age. The signal at the open-circuited receiving antenna
terminal is the output of the electromagnetic signal chan-
nel. The antenna needs to be terminated in order to max-
imize the power received by the front end. The signal-to-
noise ratio (SNR) is established at this point in the link,
and the resulting signal is the output of the radio channel.
The antenna is a critical part of the mobile channel, and it
can control much of the channel behavior. The baseband
equivalent form of the radio channel, which is the radio
channel shifted in frequency to a lowpass spectral posi-
tion, is the signal that engineers use for mathematical
characterization and most electronic (including digital)
signal processing. The analog form of the radio channel
is what will be referred to from here on as the mobile
channel.
2.1. Multiple Access for Mobile Channels
Most mobile communications systems are for multiple us-
ers, and a multiple access technique is required to allow
the spectrum to be shared. In cellular systems, for exam-
ple, the frequencies are reused at geographically spaced
locations. For indoor systems, the frequency reuse spacing
may be between oors. In a system design, the multiple
access technique interacts with the choice of channel mod-
ulation and signal coding. The three basic techniques are
frequency-division multiple access (FDMA), which has
channels occupying different narrow bandwidths simulta-
neously; code-division multiple access (CDMA), in which
multiple users share wider bandwidths simultaneously
by using differently coded waveforms; and time-division
multiple access (TDMA), in which users share a band-
width by occupying it at multiplexed times. Some systems
employ a combination of these techniques.
Multiple access is not a part of the mobile channel as
such. However, the reader should remain aware that mul-
tiple access is part of the communications system and the
choice of technique has an inuence on the mobile channel
bandwidth, its usage, and the type of signaling employed.
Multiple access also brings in co- and adjacent-channel
interference, in which the unwanted signals at a receiver
may not be noiselike, but in fact be signals with very sim-
ilar characteristics to the wanted signal. In systems with
densely packed users, the system capacity is interference-
limited.
3. MULTIPATH PROPAGATION EFFECTS
Multipath radiowave propagation is the dominant feature
of the mobile channel. More often than not, the transmit-
ted signal has no line-of-sight path to the receiver, so that
only indirect radiowave paths reach the receiving anten-
na. For microwave frequencies, the propagation mecha-
nisms are a mixture of specular (i.e., mirrorlike) reection
from electrically smooth surfaces such as the ground,
walls of buildings, and sides of vehicles; diffraction from
edges of buildings, hills, and other structures or forma-
tions; scattering from posts, cables, furniture, and other
components; and diffuse scattering from electrically rough
surfaces such as some walls, trees, and grounds.
Some multipath propagation occurs in nearly all com-
munications links. The basic phenomenon is that several
replicas of the signal are received, instead of one clean
version. The result can be seen as television ghosts, for
example. On transmission lines, reections from mis-
matches on the line give the same effect, for example, ech-
oes on telephone lines. On a long distance point-to-point
radio link, a direct line-of-sight wave, a single ground
bounce, and atmospherically refracted waves can all con-
tribute to the received signal. When signal replicas are too
close together to be discriminated and processed as dis-
crete contributions, the received signal becomes distorted.
This distortion limits the capacity of the channel. The
phenomenon is akin to the severe acoustic distortion known
as the railway station effect, where increasing power out-
put (volume) does not increase the intelligibility of the
message. In digital communications, the distortion caused
by multipath propagation creates an analogous effect; an
increase in transmitted power does not decrease the BER
as a simplistic implication of Shannon's theorem might
suggest. The amount and nature of the multipath propa-
gation sets the level of power at which the BER becomes
essentially independent of the SNR. The effect has often
been referred to as the irreducible BER, but the use of
signal processing, in particular equalization, can in fact
reduce the BER further. Experimental examples of the ir-
reducible BER in the digital channel are given below, but
this article otherwise concerns the analog mechanisms
and the statistical nature of the mobile channel.
3.1. Fading in the Mobile Channel
3.1.1. Fast Fading. The interference, or phase mixing, of
the multipath contributions causes time- and frequency-
dependent fading in the gain of the channel. The time de-
pendence is normally from the changing position of the
mobile terminal, and so is also referred to as space depen-
dence. At a given frequency, the power of the received sig-
nal, and thus the gain of the mobile channel, changes with
time. This changing SNR is called signal fading and is of-
ten experienced as audible swooshing or picket fencing
when an FM station (with a radio frequency of about
100MHz) is received by the antenna on a moving car. If
the mobile terminal is stationary, the signal may continue
to experience some fading, and this is caused by changes
in the multipath environment, which may include moving
vehicles and other objects.
In nearly all situations the changing mobile position
dominates the time variation of the mobile channel. Usu-
ally, the multipath environment is taken, or at least mod-
eled, as unchanging. This is called the static multipath
assumption. In this case, a static mobile experiences an
unchanging channel. If now the radiofrequency is swept,
then the gain of the transfer function experiences fading
similar to that due to changes in position, because the
electrical path distances of the multipath components are
frequency-dependent. For a continuous-wave (CW) signal,
the time- and frequency-dependent fades can be some
40 dB below the mean power level, and up to 10 dB above
the mean. This indicates the large dynamic range re-
quired of the receiver just to handle the multipath inter-
ference. This fading is variously called the fast fading,
short-term fading, or Rayleigh fading after the Rayleigh
distribution of the signal magnitude. The maximum den-
sity of fading is a fade about every half-wavelength on av-
erage, and this occurs typically in urban outdoor and
indoor environments. The fast fading dominates the mo-
bile channel characteristics and usage. For example, tra-
ditional amplitude modulation at microwave frequencies
is not feasible, because for a fast-moving mobile terminal
the fading interferes directly with the modulation.
3.1.2. Slow Fading. The dynamic range of the received
signal is also affected by slow fading, also called long-term
fading or shadow fading. This is superimposed on the fast
fading. It is caused by shadowing of the radio signal to the
scatterers as the mobile terminal moves behind large ob-
stacles such as hills and buildings. The rate of the slow
fading therefore depends on the large-scale nature of the
physical environment. The basic short-term multipath
mechanism remains unchanged. The dynamic range of
the slow fading is typically less than that of the fast fad-
ing, being confined to about ±10 dB for most of the time in
urban and suburban environments. The total dynamic
range for the fading therefore becomes about 70 dB. The
distance-based path loss, as a mobile terminal roams near
to and far from a base station, adds to this range.
3.2. Narrowband and Wideband
In a typical mobile microwave signal link, the relative
bandwidth is small. This means that the spectral extent of
the signal is less than a few percent of the nominal carrier
frequency. The fading within the frequency response of the
transfer function is referred to as frequency-selective fad-
ing. If the bandwidth is sufciently small so that all the
frequency components fade together, then this is called a
flat fading channel.
In the mobile channel context, a narrowband channel has flat fading and a wideband channel has frequency-
selective fading. The use of a single frequency, or CW, for
channel characterization is the limiting case of the
narrowband channel. Historically, fading has been the
principal observed characteristic of the mobile channel.
Fast fading is merely one manifestation of the reception of
several replica signals.
3.3. The Effect of Fading on the Digital Channel: Irreducible
Bit Error Ratio
3.3.1. Timing Errors from Random Frequency Modula-
tion. The digital channel in Fig. 1 is in principle the sim-
plest channel to characterize experimentally, since it
concerns a BER measure. The fading in the mobile chan-
nel has a particular effect on the BER curves, namely, the
irreducible BER mentioned above. The example in Fig. 2
[2] shows curves of BER against carrier-to-noise ratio
(CNR) from simulations of the narrowband mobile chan-
nel with carrier frequency 920MHz. The static (no fading)
[Figure 2 plots average BER versus average CNR (dB) for 16-kbit/s Gaussian-filtered minimum shift keying (B_bT = 0.25, differentially detected), showing the static channel and dynamic channels with fading rates f_D = 4, 40, 100, and 400 Hz.]
Figure 2. The irreducible BER for a digital mobile channel is
attained when an increase of SNR does not improve the BER. The
static (no fading) channel shows the classical waterfall shape of
the Gaussian noise-limited channel, but as the fading rate in-
creases, the form of the curve alters drastically. (From Ref. 2.)
curve shows the classical waterfall shape of the Gaussian
channel. But the fading channel curves, shown with fading rate $f_D$, feature irreducible BERs, which occur at lower
CNRs with increasing fading rate. The fading rate of
40 Hz corresponds to a mobile speed of about 40 km/h
and a carrier frequency of 900MHz. This corresponds ap-
proximately with using a cellphone from a moving car. The
curves hold their basic form independently of the type of
angle modulation used. The mechanism for the bit errors
is timing error caused by the random FM, discussed below,
imposed on the signal by the fading channel. The random
FM causes jitter on the symbols after they have passed
through the mobile channel.
3.3.2. Intersymbol Interference from Multiple Time De-
lays. As the signaling rate increases, an analogous irre-
ducible BER effect occurs as a result of the several signal
replicas arriving at different times. This spread of delays
causes intersymbol interference when one dispersed sym-
bol overlaps with other, similarly dispersed symbols. In
analog parlance, this is called dispersive distortion. In the
mobile channel the situation is complicated by the disper-
sion changing with time. The effect is depicted in the ex-
perimental example of Fig. 3 [3], where for a fixed fading rate of $f_D = 40$ Hz the increasing digital transmission rate
experiences an increasing irreducible BER. As in Fig. 2,
the effect is that the capacity of a given link cannot be
increased by simply increasing the CNR, for example,
by increasing the transmitted power. Signal processing
is required.
3.4. Signal Processing for Mitigation of the Multipath Effect
Several signal-processing techniques can be applied to the
mobile channel to reduce distortion and recover the ca-
pacity relative to the static channel. Equalization and
rake systems basically attempt to gather the delayed sig-
nal replicas and recombine them into a single signal,
which, ideally, is no longer distorted or faded. Antenna
diversity uses multiple antenna elements to receive the
same signal but with different multipath degradations,
and combines the signals so that the resultant channel
has better capacity than any of the channels from the in-
dividual antenna elements. A combination of the equal-
ization, or rake, and antenna diversity methods is called
space-time processing. All these techniques can be effec-
tive in improving the mobile channel. In fact, the use of
antenna diversity offers very large potential capacities by
effectively reusing the frequency at different positions in
space.
3.5. The Mobile Channel as a Transfer Function
Figure 4 depicts a static mobile channel, which is taken as
the baseband equivalent radio channel of Fig. 1. Recall
that the effect of the antennas is included in the transfer
function. The impulse response $h(\tau)$ and the transfer function $H(\omega)$ are related by Fourier transformation in the usual way, denoted $h(\tau) \leftrightarrow H(\omega)$. Here $\tau$ is the delay time and $\omega$ is the angular baseband equivalent frequency. The impulse response indicates the dispersive nature of the
[Figure 3 plots average BER versus average CNR (dB) for minimum shift keying, 2-bit differentially detected, at transmission bit rates f_b = 16, 32, 64, 128, and 256 kbit/s, with fading rate f_D = 40 Hz.]
Figure 3. The irreducible BER caused by intersymbol inter-
ference. As the signaling rate increases relative to the spread
of multipath propagation delay times, the irreducible BER
increases. (From Ref. 3.)
[Figure 4 depicts the multipath environment between x(t) and y(t), the impulse response magnitude |h(τ)| versus τ, and the transfer function magnitude |H(ω)| versus ω, related by the Fourier transform.]
Figure 4. The static mobile channel transfer function. x(t) and
y(t) are electronic signals before the transmitting antenna and
after the receiving antenna, respectively. The impulse response
can be found by Fourier transformation of a swept frequency
measurement, for example.
channel, which causes distortion of the signals which are
transmitted through it. This impulse response is modeled
as a series of discrete delta functions below.
The example of Fig. 4 is for an instant in time t. As the
mobile terminal moves, the delays and phases of the indi-
vidual multipath contributions become functions of time.
The impulse response and transfer function therefore be-
come expressed mathematically as functions of time, that is, $h(\tau, t)$ and $H(\omega, t)$. If the scatterers in the multipath environment can be considered to be essentially stationary, then the time $t$ and position $z$ are related by the velocity $V$ of the mobile: $z = Vt$. From now on the spatial variable $z$ will be mostly used.
The following sections will develop, through the use of
several assumptions about the channel, a double Fourier
transform relation between the impulse response as a
function of delay time and time (i.e., position) and the
transfer function as a function of baseband angular fre-
quency and Doppler frequency. Because of the variation of
the transfer functions, the statistical parameters of the
channel are relevant, and these also can be couched in
terms of Fourier relations.
3.6. The Receiving Antenna in Multipath Transmission
The moving antenna combines the radiowave contribu-
tions, which have continuously changing delays, ampli-
tudes, and polarizations. Deterministic analysis is not
feasible except in simplistic situations, and to be able to
interpret the statistical description requires an apprecia-
tion of multipath phenomena.
A base station transmitter is taken to emit power in a fixed radiation pattern. After multiple scattering, for example from many reflections, the polarization is changed in a random way and the electric (and magnetic) field has all three Cartesian components, independent of the transmitted polarization. These components can be independent functions of frequency and position. So the total incident electric field, at a point in space, can be written in baseband equivalent form [i.e., with a complex envelope, in which a factor of exp(jω_C t) is suppressed, where ω_C is the carrier frequency] as the complex vector

\[ \mathbf{E}_I(\omega; x, y, z) = E_x(\omega; x, y, z)\,\hat{\mathbf{x}} + E_y(\omega; x, y, z)\,\hat{\mathbf{y}} + E_z(\omega; x, y, z)\,\hat{\mathbf{z}} \tag{1} \]

in which the components, such as E_x, are complex scalars.
The introduction of an antenna promotes a change to spherical coordinates referred to the antenna orientation and position. The position is denoted with the single spatial variable z. The incident fields are now written as

\[ \mathbf{E}_I(\omega; z; \theta, \phi) = E_\theta(\omega; z; \theta, \phi)\,\hat{\boldsymbol{\theta}} + E_\phi(\omega; z; \theta, \phi)\,\hat{\boldsymbol{\phi}} \tag{2} \]
The open-circuit voltage of an antenna depends on both the incident field and the receiving pattern, h_a(ω; θ, φ) = h_θ(ω; θ, φ) θ̂ + h_φ(ω; θ, φ) φ̂. This notation for the receiving pattern should not be confused with the symbol for the impulse response, h(τ, z). The open-circuit voltage is defined by

\[ V_O(\omega, z) = \int_0^{2\pi}\!\!\int_0^{\pi} \mathbf{E}_I(\omega; z; \theta, \phi) \cdot \mathbf{h}_a(\omega; \theta, \phi)\, \sin\theta \, d\theta \, d\phi \tag{3} \]

and represents the transfer function of the electromagnetic signal channel.
By expanding the dot product, this transfer function is written in terms of the incident field components, which are now collectively detected as standing waves, and the receiving pattern components, as

\[ H(\omega, z) = \int_0^{2\pi}\!\!\int_0^{\pi} \left[ E_\theta(\omega; z; \theta, \phi)\, h_\theta(\omega; \theta, \phi) + E_\phi(\omega; z; \theta, \phi)\, h_\phi(\omega; \theta, \phi) \right] \sin\theta \, d\theta \, d\phi \tag{4} \]

This formula shows the inseparability of the antenna pattern and the incident fields in the definition of the mobile channel.
The antenna pattern is recognized as a filter in the spatial (including polarization) domain. The frequency dependence of the antenna pattern also represents a filter in the more familiar frequency domain. The space–frequency filter of the antenna is the difference between the vector electromagnetic propagation channel and the scalar electromagnetic signal channel of Fig. 1. If terminating (i.e., matching) the antenna has a negligible effect over the band of interest, then Eq. (4) represents the mobile channel.
4. CHANNEL MODEL USING DISCRETE
EFFECTIVE SCATTERERS
Modeling the incident waves as emanating from discrete directions allows the convenience of using effective point sources. These are referred to as effective scatterers, because their scalar contribution is the physical incident wave weighted by the receiving pattern. The transfer function is written as the sum of effective scatterers, which have an amplitude a, a phase ψ, and a delay time τ for the information carried:

\[ H(\omega, z) = \sum_i a_i \exp(j\psi_i) \exp(-j\omega_R \tau_i) \tag{5} \]
Here the radiofrequency is the sum of the carrier frequency (the center frequency of the radio band) and the baseband equivalent frequency:

\[ \omega_R = \omega_C + \omega \tag{6} \]

In the static situation, the terms in the transfer function containing the delays are constant and can be incorporated into the phases of the effective scatterers.
The effect of the moving terminal on the transfer function can be seen by considering an effective scatterer at a relatively large distance r_0 from it. The geometry is shown in Fig. 5. The mobile terminal moves a distance z along the spatial axis in the positive direction. The electrical distance to the ith effective scatterer changes from k_R r_{0i}, where k_R is the radiofrequency wavenumber, to

\[ k_R r_i \approx k_R r_{0i} - k_R z \cos\theta_i = \omega_R \tau_i - \frac{\omega_R}{c} \cos\theta_i \, z = \omega_R \tau_i - u_i z \tag{7} \]
where

\[ u_i = k_R \cos\theta_i \tag{8} \]

is the spatial Doppler frequency in radians per meter. The Doppler frequency in radians per second is

\[ \omega_{Di} = u_i V = k_R V \cos\theta_i \tag{9} \]

Here u_i is a scaled directional cosine to the ith effective scatterer, and a receiver movement z produces a phase shift u_i z in the signal from the scatterer.
The changing phase term of an effective scatterer at position z in Eq. (5) is

\[ \omega_R \tau_i(z) = \omega_C \tau_i + \omega \tau_i - \frac{\omega_C}{c} \cos\theta_i\, z - \frac{\omega}{c} \cos\theta_i \, z \tag{10} \]

The first term is independent of the position and baseband frequency, and can be incorporated in the phase of the scatterer. The last term is negligible, because in microwave communications we normally have a small relative bandwidth (i.e., ω/ω_C ≪ 1). So within the approximations above, the transfer function is

\[ H(\omega, z) = \sum_i a_i \exp(j\psi_i) \exp\!\left[ -j(\omega \tau_i - z u_i) \right] \tag{11} \]
Fourier transformation with respect to the baseband frequency ω gives the position-dependent impulse response as a function of the delay time and position,

\[ h(\tau, z) = \sum_i a_i \exp(j\psi_i)\, \delta(\tau - \tau_i) \exp(j u_i z) \tag{12} \]

A further Fourier transformation, this time with respect to the position z, gives a function of delay time and spatial Doppler frequency, denoted

\[ a(\tau, u) = \sum_i a_i \exp(j\psi_i)\, \delta(\tau - \tau_i)\, 2\pi\, \delta(u - u_i) \tag{13} \]
4.1. Fourier Transform Relations with Continuous Transfer Functions

The Fourier pair a(τ, u) ↔ H(ω, z) have the continuous form

\[ H(\omega, z) = \frac{1}{2\pi} \int_0^{\infty}\!\!\int_{-k_C}^{k_C} a(\tau, u) \exp\!\left[ -j(\omega\tau - z u) \right] du \, d\tau \tag{14} \]

\[ a(\tau, u) = \frac{1}{2\pi} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} H(\omega, z) \exp\!\left[ j(\omega\tau - z u) \right] d\omega \, dz \tag{15} \]

Note the mixed signs of the exponents. Moving in the negative z direction instead of the positive z direction, for example, changes the sign of the exponent zu in Eqs. (14) and (15).
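As a concrete illustration of Eqs. (11), (12), and (14), the following minimal Python sketch synthesizes a channel from a handful of effective scatterers, evaluates H(ω, z) on a frequency–position grid, and recovers the delay structure by an inverse transform over frequency. The carrier frequency, scatterer parameters, and grid sizes are arbitrary illustration values and are not taken from the text.

```python
import numpy as np

# Sketch of the discrete effective-scatterer model of Eq. (11):
# H(omega, z) = sum_i a_i * exp(j*psi_i) * exp(-j*(omega*tau_i - u_i*z)),
# with u_i = k_C*cos(theta_i) the spatial Doppler frequency of scatterer i.
c = 3e8                      # speed of light (m/s)
fc = 2e9                     # assumed carrier frequency (Hz)
k_C = 2 * np.pi * fc / c     # carrier wavenumber (rad/m)

# Effective scatterers: amplitude, phase, delay (s), arrival angle (rad)
a   = np.array([1.0, 0.7, 0.4])
psi = np.array([0.3, 2.1, -1.0])
tau = np.array([0.0, 0.5e-6, 1.2e-6])
th  = np.array([0.0, 2.0, 3.0])
u   = k_C * np.cos(th)       # spatial Doppler frequencies (rad/m)

# Baseband frequency and position grids
Nf, Nz = 256, 200
bw = 10e6                                               # observation bandwidth (Hz)
omega = 2 * np.pi * np.linspace(-bw/2, bw/2, Nf, endpoint=False)
z = np.linspace(0, 2.0, Nz)                             # positions along the route (m)

# Transfer function H(omega, z), Eq. (11)
H = np.zeros((Nf, Nz), dtype=complex)
for ai, pi_, ti, ui in zip(a, psi, tau, u):
    H += ai * np.exp(1j * pi_) * np.exp(-1j * (np.outer(omega, np.full(Nz, ti))
                                               - ui * z[None, :]))

# Delay-space response h(tau, z): inverse transform over omega (cf. Eq. 17)
h = np.fft.ifft(np.fft.ifftshift(H, axes=0), axis=0)
delay_axis = np.fft.fftfreq(Nf, d=bw / Nf)              # delay bins (s)

# |h(tau, z)| peaks near the scatterer delays tau_i
peak_bins = np.argsort(np.abs(h[:, 0]))[-3:]
print("recovered delays (us):", np.sort(delay_axis[peak_bins]) * 1e6)
```

Running the sketch should report delays close to 0, 0.5, and 1.2 μs, the values assigned to the illustrative scatterers.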
From the double Fourier transform relation, there can be four complex functions that carry the same information for characterization of the mobile channel. These are denoted:

a(τ, u), the scattering function in the time delay–spatial Doppler domain (referred to as the effective scattering distribution)

h(τ, z), the impulse response in the delay–space domain (spatial spectrum)

A(ω, u), the transfer function in the baseband frequency–spatial Doppler domain (frequency spectrum)

H(ω, z), the transfer function in the baseband frequency–space domain (space–frequency spectrum)
The functions are related by the following single-dimensional Fourier transforms of the mobile channel:

\[ a(\tau, u) = \frac{1}{2\pi} \int A(\omega, u)\, e^{j\omega\tau}\, d\omega, \qquad A(\omega, u) = \int a(\tau, u)\, e^{-j\omega\tau}\, d\tau \tag{16} \]
Figure 5. Point source with moving receiver: the ith effective source contributes a_i exp(jψ_i) exp(−jω_R τ_i), arriving at angle θ_i to the direction of motion of the mobile receiving antenna, so that a movement z shortens the path by z cos θ_i. In a model of the channel, Eq. (5), the point source is not necessarily a physical scatterer, but can rather be considered as a point representation (an effective scatterer) that produces the waves received from a given angular direction.
\[ h(\tau, z) = \frac{1}{2\pi} \int H(\omega, z)\, e^{j\omega\tau}\, d\omega, \qquad H(\omega, z) = \int h(\tau, z)\, e^{-j\omega\tau}\, d\tau \tag{17} \]

\[ a(\tau, u) = \int h(\tau, z)\, e^{-jzu}\, dz, \qquad h(\tau, z) = \frac{1}{2\pi} \int a(\tau, u)\, e^{jzu}\, du \tag{18} \]

\[ A(\omega, u) = \int H(\omega, z)\, e^{-jzu}\, dz, \qquad H(\omega, z) = \frac{1}{2\pi} \int A(\omega, u)\, e^{jzu}\, du \tag{19} \]
The amplitudes, phases, delays, and directions of the effective sources are randomly distributed, and the transfer function consequently behaves randomly, so a statistical approach to their characterization is called for.
4.2. Averaging across a Transfer Function for Channel Gain
In terms of an individual channel transfer function, the total power, or channel gain, is given by

\[ P_i = \frac{1}{L\,\omega_B} \int_L \int_{\omega_B} |H(\omega, z)|^2 \, d\omega \, dz \tag{20} \]

where L is an averaging distance or locus covering the positional averaging, and ω_B is an averaging bandwidth. Any of the abovementioned channel functions can be used to get the power in this way (Parseval's theorem). Integrating single variables gives the frequency-dependent power transfer function averaged over position

\[ |H(\omega)|^2 = \frac{1}{L} \int_L |H(\omega, z)|^2 \, dz \tag{21} \]
and the position-dependent (time-dependent) power transfer function averaged over the frequency band:

\[ |H(z)|^2 = \frac{1}{\omega_B} \int_{\omega_B} |H(\omega, z)|^2 \, d\omega \tag{22} \]

This quantity is approximated in a receiver by the position-varying (or time-varying) received-signal strength indicator (RSSI) signal. However, in practice, the RSSI voltage is normally proportional to the logarithm of the channel power.
On averaging the power across a wideband channel,
the total received power fades less than a narrowband
component. This is the advantage of wideband modulation
systems. Analogously, antenna diversity is used to reduce
the fading by averaging the channel over samples of the
spatial variable.
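As a rough numerical check of this averaging effect, Eq. (22), the sketch below synthesizes a many-path channel from Eq. (11) with random illustrative parameters and compares the deepest fade of a single-frequency (narrowband) power track with that of the band-averaged power. All parameter values are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Band-averaged channel power (Eq. 22) fades less than a narrowband component.
# A many-path channel is synthesized from Eq. (11) with random amplitudes,
# phases, delays, and angles (illustrative values only).
c, fc = 3e8, 2e9
k_C = 2 * np.pi * fc / c
Npaths = 30
a   = rng.rayleigh(0.3, Npaths)
psi = rng.uniform(0, 2 * np.pi, Npaths)
tau = rng.uniform(0, 2e-6, Npaths)
u   = k_C * np.cos(rng.uniform(0, 2 * np.pi, Npaths))

bw = 20e6
omega = 2 * np.pi * np.linspace(-bw / 2, bw / 2, 128)
z = np.linspace(0, 5.0, 1000)                 # 5 m route

H = sum(ai * np.exp(1j * pi_) *
        np.exp(-1j * (np.outer(omega, tau_i * np.ones_like(z)) - ui * z))
        for ai, pi_, tau_i, ui in zip(a, psi, tau, u))

p_narrow = np.abs(H[64, :]) ** 2              # single-frequency power vs position
p_wide = np.mean(np.abs(H) ** 2, axis=0)      # Eq. (22): average over the band

fade = lambda p: 10 * np.log10(p.min() / p.mean())
print("deepest narrowband fade:    %.1f dB" % fade(p_narrow))
print("deepest band-averaged fade: %.1f dB" % fade(p_wide))
```

The band-averaged power typically shows fades several decibels shallower than the narrowband track, which is the wideband advantage described above.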
5. STATISTICAL BASIS OF A MOBILE CHANNEL
5.1. Power Spectra and Channel Correlation Functions
Assuming ergodicity so that the statistics remain second order, the autocorrelation function, denoted by R, of the effective scatterer distribution with respect to the delay times is written

\[ R_a(\tau_1, \tau_2; u) = \left\langle a(\tau_1, u)\, a^*(\tau_2, u) \right\rangle \tag{23} \]

where the angular brackets denote averaging over all relevant realizations of the effective scattering distribution. This contrasts with the averaging over frequency or space for a single channel realization as in the previous section. The average power in the effective scattering distribution is

\[ P(\tau, u) = R_a(\tau, \tau; u) = \left\langle |a(\tau, u)|^2 \right\rangle \tag{24} \]

Note that the averaging is of the powers, not of the complex values, of the a(τ, u).
This averaged power distribution can be expressed in several different statistical forms as seen below. Substituting Eq. (16) into Eq. (23) gives the Fourier transform

\[ R_a(\tau_1, \tau_2; u) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} R_A(\omega_1, \omega_2; u) \exp\!\left[ j(\omega_1 \tau_1 - \omega_2 \tau_2) \right] d\omega_1 \, d\omega_2 \tag{25} \]

The inverse relation is

\[ R_A(\omega_1, \omega_2; u) = \int_0^{\infty}\!\!\int_0^{\infty} R_a(\tau_1, \tau_2; u) \exp\!\left[ -j(\omega_1 \tau_1 - \omega_2 \tau_2) \right] d\tau_1 \, d\tau_2 \tag{26} \]
Major simplifications are possible under certain assumptions, as follows.
5.1.1. Wide-Sense Stationarity in Frequency. The channel is now assumed to be wide-sense stationary in the frequency domain. This means that the mean and correlation of A(ω, u) do not depend on the choice of frequency ω, but only on the frequency difference, Δω = ω_2 − ω_1. This is a reasonable assumption for the frequencies within the small relative bandwidths of most mobile communications systems. Denote the autocorrelation of a wide-sense stationary (WSS) process using S, for example, by

\[ R_A(\omega_1, \omega_2; u) = R_A(\omega, \omega + \Delta\omega; u) = S_A(\Delta\omega; u) \qquad \text{(WSS in } \omega) \tag{27} \]
that is, the autocorrelation of the transfer function in the frequency–spatial Doppler domain is a power spectrum whose argument is the frequency difference. The symbols S and R are used to represent the correspondence of the power spectra S and the autocorrelation R of a process that is WSS. As a result of the wide-sense stationarity in ω, we can write Eq. (25) as

\[ R_a(\tau_1, \tau_2; u) = \frac{1}{4\pi^2} \int\!\!\int S_A(\Delta\omega; u) \exp(-j\Delta\omega\,\tau_2) \exp\!\left[ j\omega_1(\tau_1 - \tau_2) \right] d\omega_1 \, d\Delta\omega = P(\tau_2, u)\, \delta(\tau_2 - \tau_1) \tag{28} \]

where

\[ P(\tau, u) = \frac{1}{2\pi} \int_{-\infty}^{\infty} S_A(\Delta\omega; u) \exp(-j\Delta\omega\,\tau)\, d\Delta\omega \tag{29} \]

is the averaged power delay–Doppler frequency distribution of Eq. (24). The delta function in the autocorrelation of Eq. (28) is referred to as the uncorrelated scattering (US), and here means that a fading signal received at a given delay time is uncorrelated (when averaged over the relevant realizations) with a fading signal received at any other delay time. The wide-sense stationarity (via the Δω factor) in the baseband frequency domain and the uncorrelated scattering in the delay time domain [the δ(Δτ) factor] are equivalent characteristics.
5.1.2. Wide-Sense Stationarity in Space. Similarly, wide-sense stationarity in the spatial domain corresponds to uncorrelated scattering in the Doppler domain. This means that the fading signal at one spatial Doppler frequency u [or angle θ = cos⁻¹(u/k_C)] is uncorrelated with a fading signal received from any other spatial Doppler frequency. Denoting the spatial difference Δz = z_2 − z_1, we have

\[ R_a(\tau; u_1, u_2) = \int\!\!\int S_h(\tau, \Delta z) \exp\!\left[ j(z_2 u_2 - z_1 u_1) \right] dz_1 \, dz_2 = P(\tau, u_2)\, 2\pi\, \delta(u_2 - u_1) \tag{30} \]

where the averaged power of the effective scattering distribution is expressed as

\[ P(\tau, u) = \int S_h(\tau, \Delta z) \exp(j\Delta z\, u)\, d\Delta z \tag{31} \]
5.1.3. Wide-Sense Stationary Uncorrelated Scattering Channel. Combining the space and frequency wide-sense stationary conditions, we have

\[ R_a(\tau_1, \tau_2; u_1, u_2) = \frac{1}{2\pi} \int\!\!\int S_H(\Delta\omega, \Delta z) \exp\!\left[ -j(\Delta\omega\, \tau_2 - \Delta z\, u_2) \right] d\Delta\omega \, d\Delta z \; \delta(\tau_2 - \tau_1)\, 2\pi\, \delta(u_2 - u_1) = P(\tau_2, u_2)\, \delta(\tau_2 - \tau_1)\, 2\pi\, \delta(u_2 - u_1) \tag{32} \]

where now

\[ P(\tau, u) = \frac{1}{2\pi} \int\!\!\int S_H(\Delta\omega, \Delta z) \exp\!\left[ -j(\Delta\omega\, \tau - \Delta z\, u) \right] d\Delta\omega \, d\Delta z \tag{33} \]

The inverse Fourier transform is

\[ S_H(\Delta\omega, \Delta z) = \frac{1}{2\pi} \int\!\!\int P(\tau, u) \exp\!\left[ j(\Delta\omega\, \tau - \Delta z\, u) \right] d\tau \, du \tag{34} \]

Thus the wide-sense stationarity conditions presented above result in the frequency–space correlation function being the double Fourier transform of the average power density of the effective scatterer distribution.
The term WSSUS was used by Bello [4] to describe
tropospheric multipath channels containing scintillating
scatterers being illuminated by static antennas. In the
context of the mobile channel, the WSS refers to wide-
sense stationarity in position, which implies uncorrelated
scattering in the spatial Doppler frequency. The US refers
to the delta function in delay time (effective sources at
different delays are mutually uncorrelated), which implies
WSS in the frequency domain.
The assumption of the WSSUS conditions in the channel allows the convenience of the double Fourier transform relations. However, in applying the Fourier relations for a given situation, the validity of the WSSUS model should always be questioned. The model can often be made sufficiently valid for gaining useful insight and inferring channel behavior by appropriately arranging the averaging. This averaging, denoted with the angular brackets, is often taken as several sampled records over short distances (tens of carrier wavelengths, or several tens of fades) in order to stay within a given physical environment, followed by the power distribution averaging. Statistically, ensemble averaging implies many realizations. We can interpret this as several sampled records that should have uncorrelated data (e.g., well-separated spatial paths) within the same physical environment, or else as several records in different (i.e., independent) physical environments. The two cases are different. One case averages within a single environment; the other case averages over many different environments. Strictly speaking, the presence of multiple uncorrelated records in the same immediate environment does not truly satisfy the hypothesis of statistically independent records, because the scattering distribution is the same; that is, the signal sources constituting the physical scatterers are common to all the data records.
5.2. Key Relations for a Mobile Channel
Equations (14), (15) and (33), (34) are key results for the mobile channel. They relate, respectively, by double Fourier transformation, a baseband channel transfer function H(ω, z) to an effective source distribution a(τ, u) that provides the incident multipath signals, and the average power spectral density of the channel S_H(Δω, Δz) to the average power distribution of the effective scatterers, P(τ, u). Figure 6 [1] depicts the relations between the functions.
5.3. Averaged Power Profiles
The more familiar single transformations also are of interest. Mathematically, we can put Δz = 0 in the frequency correlation

\[ S_H(\Delta\omega) = S_H(\Delta\omega, \Delta z = 0) = \left\langle H(\omega, z_0)\, H^*(\omega + \Delta\omega, z_0) \right\rangle \tag{35} \]

from which Eq. (34) reduces to

\[ S_H(\Delta\omega) = \int P(\tau) \exp(j\Delta\omega\, \tau)\, d\tau \tag{36} \]
where the average power delay profile

\[ P(\tau) = \int P(\tau, u)\, du \tag{37} \]

is the average power at delay τ, found by integrating over all spatial Doppler frequencies (u = −k_C to u = +k_C), that is, in all directions over the averaged power of the effective scattering distribution. In practice, the antenna performs this integration [recall that the effective scattering distribution P(τ, u) already includes the effect of the antenna]; for example, an omnidirectional antenna will gather the waves from all the directions. However, a single measurement from an antenna only accounts for a single realization of the effective scattering distribution, that is, for one point in the space of one environment. To estimate P(τ) from measurements, the averaging of the profile needs to be done over several different positions [i.e., several z_0 values in Eq. (35)], either in the same physical environment or in many different physical environments, as discussed above.
The frequency correlation function S_H(Δω) is the Fourier transform of the average power delay profile P(τ) for the WSS channel with uncorrelated scattering. The inverse relation is

\[ P(\tau) = \frac{1}{2\pi} \int S_H(\Delta\omega) \exp(-j\Delta\omega\, \tau)\, d\Delta\omega \tag{38} \]

The Fourier relation in Eqs. (36) and (38) is identical to the relation between the transfer function and its impulse response, as in Fig. 4.
Similarly to the average delay profile, the average spatial Doppler profile is averaged over all delays:

\[ P(u) = \int P(\tau, u)\, d\tau \tag{39} \]

P(τ) and P(u) are sometimes called the delay spectrum and Doppler spectrum, respectively. Finally, the total power of the effective scatterers is given by

\[ P = \int\!\!\int P(\tau, u)\, d\tau \, du \tag{40} \]

Many details, extending to situations outside the mobile channel, may be found in Ref. 3.
5.3.1. Spreads. The spread, or second centralized moment, of a distribution is a standard characterizing parameter. For an instantaneous (i.e., snapshot, or unaveraged) channel distribution function, the instantaneous spread is the standard deviation of that function. For example, for a channel with a snapshot impulse response h(τ), the definition of the instantaneous delay spread is

\[ \sigma_{i\tau} = \left( \frac{\int \tau^2 |h(\tau)|^2 \, d\tau}{\int |h(\tau)|^2 \, d\tau} - \left[ \frac{\int \tau\, |h(\tau)|^2 \, d\tau}{\int |h(\tau)|^2 \, d\tau} \right]^2 \right)^{1/2} \tag{41} \]
Figure 6. Fourier transform relations for the mobile channel functions and for their statistical representations under wide-sense stationarity in frequency and position: the individual-channel functions a(τ, u), A(ω, u), h(τ, z), and H(ω, z) are linked by single Fourier transforms in the (ω, τ) and (u, z) variable pairs, and squaring and averaging gives the corresponding averaged-channel functions P(τ, u) = ⟨|a(τ, u)|²⟩, S_A(Δω, u), S_h(τ, Δz), and S_H(Δω, Δz). Here u = k_C cos θ is the spatial Doppler frequency, with θ the zenith angle with respect to the direction of motion z, and k_C the wavenumber of the radio carrier frequency. (From Ref. 1.)
The (average) delay spread, denoted σ_τ, follows the same definition but uses the averaged distribution P(τ) = ⟨|h(τ)|²⟩ instead of |h(τ)|². The analogous definition for the Doppler spread is

\[ \sigma_u = \left( \frac{\int u^2 P(u)\, du}{\int P(u)\, du} - \left[ \frac{\int u\, P(u)\, du}{\int P(u)\, du} \right]^2 \right)^{1/2} \tag{42} \]

It is important to note that it is the individual power distributions that are averaged to produce the power profiles, which are then used to produce the spreads. Statistically, it is wrong to calculate the spreads of individual channels, average these, and call the result the average spread.
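A minimal sketch of the spread computation follows, applying the second-centralized-moment definition of Eqs. (41)–(42) to the one-sided exponential profile introduced as Eq. (43) below; the profile parameter is an arbitrary illustration value.

```python
import numpy as np

def rms_spread(x, p):
    """Square root of the second centralized moment of a power profile p(x)."""
    p = np.asarray(p, dtype=float)
    m0 = np.trapz(p, x)
    m1 = np.trapz(x * p, x) / m0
    m2 = np.trapz(x ** 2 * p, x) / m0
    return np.sqrt(m2 - m1 ** 2)

# One-sided exponential power delay profile, Eq. (43); its rms delay spread
# should equal the profile parameter s (an illustration value).
s = 0.5e-6
tau = np.linspace(0, 20 * s, 20000)
P_tau = np.exp(-tau / s) / s

print("delay spread (us):", rms_spread(tau, P_tau) * 1e6)   # ~0.5

# Per the text, spreads must be computed from the *averaged* profile;
# averaging the spreads of individual snapshot channels is not equivalent.
```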
5.3.2. Power Profile Examples. Two power profiles that are commonly used for modeling, because of their simplicity, are the one-sided exponential

\[ P(\tau) = \frac{1}{\sigma_\tau} \exp\!\left( -\frac{\tau}{\sigma_\tau} \right), \quad \tau \ge 0 \quad \Longleftrightarrow \quad S_H(\Delta\omega) = \frac{1}{1 - j\,\Delta\omega\,\sigma_\tau} \tag{43} \]

and the two-path

\[ P(\tau) = \delta(\tau) + \left| a_2 \exp(j\alpha_2) \right|^2 \delta(\tau - \tau_2) \quad \Longleftrightarrow \quad S_H(\Delta\omega) = 1 + \left| a_2 \exp(j\alpha_2) \right|^2 \exp(j\,\Delta\omega\,\tau_2) \tag{44} \]

which are shown in Fig. 7. The exponential is the most commonly used model.
The two-path model offers much insight into the mechanisms of the channel and is used in the following sections to develop the basic characteristics and parameters of interest of the mobile channel's behavior. It is later extended to the many-path situation. Liberties are taken with the mathematical use of the delta functions to allow convenient modeling.
6. THE TWO-PATH MODEL
The two-path model and its statistics (the model is treat-
ed statistically despite the situation being deterministic)
produce and explain nearly all of the behavior that can be
found in real-world mobile channels. Such a model is also
used in point-to-point communications where there can be
a direct wave with a single ground bounce. The term two-
path refers to two effective sources. However, the intro-
duction of the directions of the effective sources is delayed
until later, since the directions have no bearing on the re-
ceived signal while the receiver is static. The moving re-
ceiver introduces a changing frequency dependence, and
the rate of change is determined by the directions. Under-
standing the behavior of the static model allows a smooth
transition to understanding the many-path channel be-
havior.
6.1. Static Model for Frequency-Selective Fading
The two-path scenario is shown with its variation with frequency in Fig. 8. The impulse response, on setting τ_1 = 0 and α_1 = 0 for the first path, is

\[ h^{(2)}(\tau) = \delta(\tau) + a_2 \exp(j\alpha_2)\, \delta(\tau - \tau_2) \tag{45} \]

and so represents a signal arriving with zero delay with normalized magnitude and zero phase, and a signal arriving at a delay of τ_2 with magnitude a_2 and phase α_2. The transfer function is minimum phase when a_2 ≤ 1, and is maximum phase (or in the general case, non-minimum phase) when a_2 > 1.
This model, which is static in the sense that the two effective scatterers are constant in amplitude and phase, needs no averaging to obtain the power profile. So P(τ) = ⟨|h(τ)|²⟩ = |h(τ)|² for the static case. The delay spread is thus the same as the instantaneous delay spread, and from Eqs. (44) and (41), is σ_τ^{(2)} = a_2 τ_2 / (1 + a_2²). The delay spread is not affected by time reversal or magnitude scaling of the power profile. In the two-path case, this means that a_2 can be replaced by 1/a_2 (i.e., a change from a minimum- to a maximum-phase channel) and the delay spread stays the same.

Figure 7. Examples of the exponential and the two-path models for the power delay profile (impulses of weight 1 and a_2² at delays 0 and τ_2). The two-path model comprises idealized discrete multipath contributions, whereas the exponential profile has a continuum of multipath contributions.

Figure 8. The impulse response of the static two-path model, with amplitudes 1 and a_2 and phase α_2 at delay τ_2, and a complex-plane representation of the transfer function H^{(2)}(ω), whose locus is a circle of radius a_2 centered at 1, shown for the case a_2 > 1.
6.1.1. Transfer Function. The transfer function is obtained by Fourier transformation of Eq. (45), and is (the factor 1/2π is omitted for brevity)

\[ H^{(2)}(\omega) = 1 + a_2 \exp\!\left[ j(\alpha_2 - \omega\tau_2) \right] \tag{46} \]
where the delay difference is Δτ = τ_2 − τ_1 = τ_2. The in-phase component is the real part of the transfer function, I(ω) = 1 + a_2 cos(ωτ_2 − α_2), and similarly the quadrature part is Q(ω) = −a_2 sin(ωτ_2 − α_2). Apart from the DC term, these are simply quadrature sinusoids. The phase of the second effective scatterer, α_2, is now set to zero for brevity. The power transfer function is |H(ω)|² = 1 + a_2² + 2a_2 cos(ωτ_2), and so the frequency fading behavior is periodic with period 1/τ_2 (Hz). The phase of the transfer function is

\[ \phi^{(2)}(\omega) = \tan^{-1}\!\left( \frac{-a_2 \sin(\omega\tau_2)}{1 + a_2 \cos(\omega\tau_2)} \right) \tag{47} \]
which has a maximum rate of change when the power is a minimum. For the case a_2 ≤ 1, the maximum and minimum values of the phase are ±sin⁻¹(a_2). When a_2 = 1 and ωτ_2 = nπ (n an odd integer), the phase changes by π over an infinitesimally small change in ω.
6.1.2. Group Delay. The group delay of a transfer function is the negative derivative of the phase with respect to frequency, τ_g(ω) = −∂φ(ω)/∂ω. It approximates the time delay of the envelope of a narrowband signal after it has passed through a transfer function with phase φ(ω) [5]. Changes in the group delay mean changes in the expected arrival times of information, such as symbols, at the receiver.

For a channel that contains many delay values, the received signal becomes distorted owing to the dispersion. For the two-path model, the group delay is found by differentiating Eq. (47) to be

\[ \tau_g^{(2)}(\omega) = \frac{a_2 \tau_2 \left[ a_2 + \cos(\omega\tau_2) \right]}{1 + a_2^2 + 2 a_2 \cos(\omega\tau_2)} \tag{48} \]

For the minimum-phase case, this varies between a_2τ_2/(a_2 + 1) and a_2τ_2/(a_2 − 1). If different frequencies were sent through the channel, then these values are the extrema of the group delays that would be experienced. Figure 9 shows the in-phase and quadrature signals, the envelope and phase, and the group delay for the transfer function of a static two-path model.
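The following sketch evaluates Eq. (46) numerically and reproduces the behavior plotted in Fig. 9: the magnitude, unwrapped phase, and group delay of the static two-path transfer function, with the numerical group delay checked against Eq. (48). The values of a_2 and τ_2 are illustration values.

```python
import numpy as np

# Static two-path transfer function, Eqs. (46)-(48): magnitude, phase, and
# group delay versus omega*tau_2, with alpha_2 = 0 as in the text.
a2, tau2 = 0.8, 1e-6                           # illustration values
wt = np.linspace(0, 8 * np.pi, 4000)           # omega * tau_2

H = 1 + a2 * np.exp(-1j * wt)                  # Eq. (46) with alpha_2 = 0
mag_dB = 20 * np.log10(np.abs(H))
phase = np.unwrap(np.angle(H))

# Group delay: tau_g = -d(phase)/d(omega), with omega = wt / tau2
omega = wt / tau2
tau_g = -np.gradient(phase, omega)

# Comparison against the closed form of Eq. (48)
tau_g_eq48 = a2 * tau2 * (a2 + np.cos(wt)) / (1 + a2**2 + 2 * a2 * np.cos(wt))
print("max |numeric - Eq.(48)| group delay (ns):",
      1e9 * np.max(np.abs(tau_g - tau_g_eq48)))

# Deep fades: magnitude minima coincide with rapid phase change and with
# group-delay extrema a2*tau2/(a2+1) and a2*tau2/(a2-1).
print("deepest fade: %.1f dB at omega*tau2 = pi (mod 2*pi)" % mag_dB.min())
```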
6.1.3. Features of the Static Two-Path Model. The features from this deterministic model are frequency dependence with

* Smoothly varying in-phase and quadrature components
* Fading envelope
* Sharp transitions of the phase of the transfer function, occurring when the envelope is at a minimum
* Possibility of both minimum-phase fades (a_2 ≤ 1) and non-minimum-phase fades (a_2 > 1)
* Dispersive channel with sharp spikes in the group delay at the envelope minima

These transfer function variations are all periodic in the two-path model, but as seen below, the same effects occur also in the real-world channel, but with a random frequency and space dependence.
The reason for the phase behavior coinciding with the envelope is best seen from the locus of the signal in Fig. 8, where the envelope minima occur as the locus is passing closest to the origin, which is also when the phase is changing the quickest. For deep fades, the phase change is always nearly ±π (the sign depends on whether a_2 is less than or greater than one), and such phase jumps are also a characteristic of the many-path channel.
6.2. Moving Receiver
In a moving receiver, we can fix the frequency to a CW for simplicity and get behavior as in the static channel of Fig. 8, but with spatial (i.e., time, for a given mobile speed), instead of frequency, dependence. For a CW channel, the transfer function is

\[ H^{(2)}(z) = 1 + a_2 \exp\!\left[ j(\alpha_2 + \Delta u\, z) \right] \tag{49} \]

where Δu = k_C(cos θ_2 − cos θ_1) is the spatial Doppler frequency difference between the two effective sources. The transfer function now has spatial periodicity with a period (in meters) of 2π/Δu. For example, with sources exactly in front of (θ_1 = 0) and behind (θ_2 = π) the moving receiver, the periodicity is given by a spacing of exactly z = λ_C/2, that is, half the carrier wavelength.
Figure 9. The periodic frequency-selective channel behavior for the static two-path model: magnitude (dB), phase (rad), and group delay (normalized to τ_2) plotted against ωτ_2. The receiver is at a fixed position. The magnitude shows fading, the phase is changing quickly at the fade frequencies, and the group delay is correspondingly large (and negative for a_2 ≤ 1) at the fade frequencies.
6.3. Random Frequency Modulation
The spatial analogy to the group delay is the random FM, given in radians per meter by the derivative of the phase with respect to position, ∂φ(z)/∂z. The random FM is an angle modulation in the channel and will be applied to a signal borne by the channel. It means that angle modulation systems are affected as the receiver moves. In practice, the random FM is often too small to be noticed in a working system, but as carrier frequencies increase, the fading rate and the spectrum of the random FM increasingly invade the signal band. In summary, the CW spatial mobile channel follows the same behavior as that in the frequency-dependent static channel: the transfer function signals shown in Fig. 9 apply with the abscissa ωτ_2 replaced by zΔu, and the group delay becomes the random FM (with the opposite polarity).
6.3.1. Two-Dimensional Transfer Function. The frequency and spatial dependences can be combined to give the two-dimensional transfer function, again with α_2 = 0,

\[ H^{(2)}(\omega, z) = 1 + a_2 \exp\!\left[ j(\Delta u\, z - \omega\tau_2) \right] \tag{50} \]

which explicitly indicates the two-dimensional nature of the fading. The range of angles, via Δu, determines the spatial fading rate, and the range of delay times, Δτ = τ_2, determines the rate of fading in the frequency domain. The statistical equivalents of these quantities, the Doppler spread and the delay spread, are used for describing the average fading rates found in the real-world many-path situation.
7. STATISTICAL APPROACH USING TWO-PATH MODEL
The statistical approach is required when there are too many paths to determine the channel, which is normally the case in mobile communications. The statistical approach to the two-path model also offers insight into the statistical behavior of the many-path case. In the static case, the transfer function of the two-path model assumes all its possible values as the relative amplitude a_2 and phase α_2 are varied. In practice, averaging is over the phase-mixing process, so here we fix the amplitude and average over the changing phase only. In the static case, the phase of the frequency-dependent transfer function can be changed by changing the frequency. In a mobile channel, the fixed-frequency transfer function is averaged over the varying phase by averaging over many positions.
Since the two-path transfer function has a symmetric, periodic envelope with half period π/τ_2 (rad), equally likely frequencies are expressed by a uniform probability density function (pdf) over one of the periods:

\[ p_\omega(\omega) = \frac{\tau_2}{\pi}, \qquad \frac{2n\pi}{\tau_2} \le \omega < \frac{(2n+1)\pi}{\tau_2}, \quad n \text{ any integer} \tag{51} \]

The analogous expression for the moving receiver holds for equally likely positions [viz., p_z(z) = Δu/π]. These pdfs allow the pdfs of the channel function to be calculated below.
7.1. Probability Density Function of Channel Power
For a_2 < 1 and equally likely frequencies, the pdf for the power g(ω) = |H(ω)|² is, from function transformation of p_ω,

\[ p_g^{(2)}(g) = p_\omega(\omega) \left| \frac{\partial \omega}{\partial g(\omega)} \right| = \frac{1}{\pi \sqrt{(2a_2)^2 - \left[ g - (1 + a_2^2) \right]^2}} \tag{52} \]

where 1 + a_2² is the mean of the power in the two-path channel and 2a_2 is the amplitude of its variation about that mean.
7.2. Cumulative Density Function of Channel Power
The cumulative density function (cdf) is the integral of the pdf over its range of values, (1 − a_2)² to (1 + a_2)², and is written

\[ \mathrm{Prob}\!\left[ g^{(2)}(\omega) < g_0 \right] = 1 - \frac{1}{\pi} \cos^{-1}\!\left( \frac{g_0 - (1 + a_2^2)}{2 a_2} \right) \tag{53} \]
This probability approach is an alternative to the deterministic form H^{(2)}(ω) for characterizing the two-path channel. The approach is needed when a deterministic form is not available. The cdfs for the n-path model with all the a_n = 1 are given in Fig. 10 for n = 2, 3, 4, 8. The eight-path case is very close, except at the tails of the distribution, to the Rayleigh distribution, which corresponds to the limiting case n → ∞, discussed further below.
The pdf for the two-path case is centered at its mean, 1 + a_2², and is confined to its limits, that is, between (1 − a_2)² and (1 + a_2)². At these limits, the pdf p_g^{(2)}(g) goes to infinity. The many-path pdfs can behave the same way. This does not cause interpretation problems, however, since the probability of the power being at these limits is infinitesimal and the integral of the function of course maintains its unity value. For example, for a_2 = 1, the fades go exactly to zero in the transfer function. In the cdf of Fig. 10, the interpretation is that there is an infinitesimally small probability of the power being zero:

\[ \mathrm{Prob}\!\left[ g(\omega) < g_0 \right] \to 0 \quad \text{as } g_0 \to 0, \; a_2 = 1 \tag{54} \]

Figure 10. The cdf for the power of the n = 2, 3, 4, 8 channels, where all the multipath amplitudes are the same: the probability that the envelope is less than the abscissa, plotted against the normalized envelope (dB). The n = 8 model is essentially the same, for the cdf range displayed, as the Rayleigh (n → ∞) distribution, given in Fig. 14.
A similar situation holds for the power approaching its maximum value (1 + a_2)²:

\[ \mathrm{Prob}\!\left[ g(\omega) < g_0 \right] \to 1 \quad \text{as } g_0 \to (1 + a_2)^2 \tag{55} \]
In the a_2 = 1 two-path example, the cdf diagram shows that for 10% of the frequencies the power transfer function is more than 13 dB below its mean value. The cdf curves are arranged so that the mean power always corresponds to 0 dB. A flat channel (a_2 = 0) would be represented by a line at g_0 = 0 dB.
In summary, it is the phase difference between the
source contributions that is the generic random variable
for the statistical approach to the short-term variation of
the power or envelope. In the static scenario, the averag-
ing over the phase difference is implemented by varying
the frequency. For the moving-receiver case, the CW
transfer function is averaged over space. In the general
case, the transfer function is a two-dimensional distribu-
tion with phase mixing causing fading in both frequency
and position.
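A quick Monte Carlo check of Eqs. (52) and (53) can be made by drawing the phase difference uniformly and forming the two-path power directly; the amplitude a_2 and sample size below are illustration values.

```python
import numpy as np

# Two-path power statistics, Eqs. (52)-(53): draw the phase difference
# uniformly (equally likely frequencies), form g = |1 + a2*exp(-j*theta)|^2,
# and compare the empirical cdf with the closed form
# Prob(g < g0) = 1 - arccos((g0 - (1 + a2^2)) / (2*a2)) / pi.
rng = np.random.default_rng(1)
a2 = 0.8
theta = rng.uniform(0, 2 * np.pi, 200_000)
g = np.abs(1 + a2 * np.exp(-1j * theta)) ** 2

g0 = np.linspace((1 - a2) ** 2 + 1e-9, (1 + a2) ** 2 - 1e-9, 7)
empirical = [(g < x).mean() for x in g0]
analytic = 1 - np.arccos((g0 - (1 + a2 ** 2)) / (2 * a2)) / np.pi

for x, e, an in zip(g0, empirical, analytic):
    print(f"g0={x:5.2f}  empirical={e:.3f}  Eq.(53)={an:.3f}")
```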
7.3. Coherence Bandwidth
An important parameter in a frequency-selective fading channel is the frequency separation for which the fading becomes effectively independent in the statistical sense. This frequency separation is determined by the autocorrelation of the channel transfer function. It is presented here as independent of frequency, that is, the channel is assumed to be WSS. The frequency correlation coefficient function, sometimes referred to as the coherence function, is

\[ C(\Delta\omega) = \frac{S_H(\Delta\omega)}{S_H(0)} = \frac{\left\langle H(\omega)\, H^*(\omega + \Delta\omega) \right\rangle}{\left\langle H(\omega)\, H^*(\omega) \right\rangle} \tag{56} \]

and for the static two-path model with a_2 = 1, the magnitude of this is

\[ \left| C^{(2)}(\Delta\omega) \right| = \left| \cos\!\left( \frac{\Delta\omega\, \tau_2}{2} \right) \right|, \quad a_2 = 1 \tag{57} \]

The coherence bandwidth Ω_C (rad/s) is defined as the frequency span from the maximum (unity) of the frequency correlation coefficient function to where the magnitude of the function first drops to a value C_C:

\[ \left| C(\Delta\omega = \Omega_C) \right| = C_C \tag{58} \]

as illustrated in Fig. 11.
C_C is taken by various authors as from 1/e ≈ 0.37 to 0.9 [6–8]. A change of C_C scales the coherence bandwidth nonlinearly, so any results derived from some value of C_C are also scaled in some way. The coherence function is periodic in Δω for the two-path channel, since H^{(2)}(ω) is periodic. Ω_C^{(2)} is minimum for a_2 = 1, and for this case, the coherence bandwidth in hertz, Ω_C^{(2)}/2π, can be written directly from Eqs. (57) and (58) as

\[ B_{C,\mathrm{Hz}}^{(2)} = \frac{1}{\pi \tau_2} \cos^{-1}(C_C), \quad a_2 = 1 \tag{59} \]

The coherence bandwidth decreases with increasing delay difference between the two-path contributions, τ_2. Also, the coherence bandwidth decreases with increasing relative amplitude a_2. When a_2 is small, the coherence bandwidth becomes undefined, as the coherence function does not drop down to C_C.
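The sketch below evaluates the two-path coherence function of Eq. (57) for a_2 = 1, finds the coherence bandwidth numerically for a chosen C_C, and compares it with the closed form of Eq. (59); it also forms the bandwidth–delay-spread product discussed in the next subsection. The delay τ_2 is an illustration value.

```python
import numpy as np

# Two-path coherence bandwidth, Eqs. (56)-(59), for equal amplitudes (a2 = 1):
# |C(dw)| = |cos(dw*tau2/2)|, and B_C is where it first drops to C_C.
tau2 = 1e-6          # delay difference (s), illustration value
C_C = 0.75           # correlation threshold used with Gans's law in the text

dw = np.linspace(0, 2 * np.pi / tau2, 100_000)     # rad/s
coh = np.abs(np.cos(dw * tau2 / 2))                # Eq. (57)

idx = np.argmax(coh <= C_C)                        # first threshold crossing
B_C_hz = dw[idx] / (2 * np.pi)

B_C_eq59 = np.arccos(C_C) / (np.pi * tau2)         # closed form, Eq. (59)
sigma_tau = tau2 / 2                               # delay spread for a2 = 1
print("numeric B_C  = %.1f kHz" % (B_C_hz / 1e3))
print("Eq. (59) B_C = %.1f kHz" % (B_C_eq59 / 1e3))
print("B_C * sigma_tau = %.3f (Gans: ~1/8)" % (B_C_eq59 * sigma_tau))
```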
7.4. Product of Coherence Bandwidth and Delay Spread
While the delay spread is a measure of the channel time dispersion, the coherence bandwidth is a measure of the fading rate with changing frequency. The ideal communications channel has a zero delay spread and infinite coherence bandwidth. For the two-path model, the delay spread increases while the coherence bandwidth decreases for increasing relative delay τ_2 and increasing relative amplitude a_2. The coherence bandwidth and the delay spread are thus inversely related, but the exact relationship is not simple in the many-path case.
The product of these two parameters was taken for experimental channels using C_C = 0.75 [7], and an empirical law was found that Bσ_τ was constant and approximately equal to 1/8 (Gans's law). The constancy of the product can also be viewed as an uncertainty principle [5,9]. It gives a lower bound for the many-path channel as

\[ B_{C,\mathrm{Hz}}(C_C) \cdot \sigma_\tau \ge \frac{1}{2\pi} \cos^{-1}(C_C) \tag{60} \]

The equality holds for the two-path case with equal powers, as in Eq. (59), which corresponds to maximum delay spread.
Figure 11. The definition of a coherence bandwidth Ω_C in terms of the frequency correlation coefficient function C(Δω), or coherence function, and a correlation value of C_C. Narrowband channels, separated by a minimum frequency Ω_C, will display mutually uncorrelated fading in the sense that the correlation coefficient is less than about 0.75.
For the two-path channel, the product B_C^{(2)} σ_τ^{(2)} does not exist for small a_2, since B_C^{(2)} does not exist. The dependence of this product on a_2 is weaker than its dependence on the choice of C_C. The product B_C^{(2)} σ_τ^{(2)} is a minimum when a_2 is 1, that is, when the frequency fades are the deepest. In this case and for the value C_C = 0.75, the two-path product is in close agreement with Gans's law, B_{C,Hz}^{(2)} σ_τ^{(2)} = (1/2π) cos⁻¹(0.75) ≈ 1/8.

In the two-path model, then, the virtually constant value of the product allows the delay spread to be calculated from a measured correlation bandwidth, or vice versa. However, in a general many-path case, the expression for the coherence-bandwidth–delay-spread product must be heeded as a lower limit. It should always be borne in mind that the choice of C_C for the coherence bandwidth affects the value of the product. Because the delay spread is mathematically unbounded in the model (no limit is placed on τ_2), there is no theoretical upper limit for Bσ_τ in the many-path case, even though the coherence bandwidth can simultaneously remain essentially constant. In practice, physical and practical considerations such as the space loss described below are imposed on the model, and the delay spread and the product become bounded through these.
7.5. Correlation Distance
The correlation distance is the spatial counterpart of the coherence bandwidth. It is traditionally defined as the spatial displacement d_d = Δz corresponding to when the spatial correlation coefficient, defined at a given frequency, decreases to some value. Instead of using the complex transfer function H(z), analogously to using H(ω) for the coherence function, the envelope correlation coefficient function

\[ \rho_r(\Delta z) = \frac{R_r(\Delta z) - \langle r \rangle^2}{R_r(0) - \langle r \rangle^2} \tag{61} \]

has been used traditionally, and the coefficient value is taken as ρ_r(d_d) = 0.7. The correlation distance is a measure of the spatial fading rate and therefore depends inversely on the spatial Doppler spread σ_u. The product of these, d_d σ_u, is lower bounded, but not with the same relationships as Bσ_τ.
8. MANY-PATH MODEL
The preceding discussion has touched several times on the many-path model. Many channel parameters for the three-path model can be derived deterministically. The three-path model has been of interest in point-to-point links because it matches the physical situation of a direct wave, a ground bounce, and a single atmospherically diffracted ray. It has also been used to help randomize, relative to the two-path model, a transfer function for a more realistic-looking (over two or three fades), but tractable, model. However, it otherwise offers little more insight into the channel behavior than does the two-path model. The statistics for the few-path (fewer than about 10) model are rather complicated. When there are more than about 10 components of similar amplitude, however, the statistics follow, to a good approximation, the limiting case of a very large number of paths. The phase-mixing process of adding many random phasors gives, from the central limit theorem, the classical Rayleigh channel. The distribution functions are given below.
8.1. Phase Mixing with Many Random Contributions
Equations (11) and (12) describe the model. For a narrowband channel, the in-phase and quadrature components are Gaussian-distributed from the central limit theorem. It follows that the distribution of the power is chi-square with 2 degrees of freedom (i.e., exponential), the envelope is Rayleigh-distributed, and the phase is uniformly distributed. The transfer function signals, as a function of position, are depicted in Fig. 12. The incident power is from all directions for this example. The figure can be compared with the signals from the two-path model, shown as a function of frequency in Fig. 9. The features of the channel are essentially the same as those in the two-path model, although the process is random. There are both minimum-phase and maximum-phase deep fades. Similarly, the random FM spikes have an associated polarity that is random.
8.1.1. Rayleigh Envelope and Uniform Phase. The signal representing the channel transfer function is represented as a complex Gaussian process. The in-phase and quadrature components are denoted x and y, the envelope r, and the phase θ, and these are related as

\[ x + jy = r\, e^{j\theta} \tag{62} \]

Here x and y are independent, zero-mean Gaussians, so the pdf for each is (here for x)

\[ p_x(x) = \frac{1}{\sqrt{2\pi}\, \sigma} \exp\!\left( -\frac{x^2}{2\sigma^2} \right) \tag{63} \]

where σ is the standard deviation of each component. The envelope and phase pdfs are established as independent, with Rayleigh and uniform distributions respectively, through the steps

\[ p_{r,\theta}(r, \theta) = p_{x,y}(x, y) \left| \frac{\partial(x, y)}{\partial(r, \theta)} \right| = \frac{r}{\sigma^2} \exp\!\left( -\frac{r^2}{2\sigma^2} \right) \cdot \frac{1}{2\pi} = p_r(r) \cdot p_\theta(\theta), \quad r \ge 0 \tag{64} \]

The pdf of the phase is 1/(2π), so the mean phase is π and the standard deviation is π/√3. The averaged power is

\[ \langle r^2 \rangle = \langle x^2 \rangle + \langle y^2 \rangle = 2\sigma^2 \tag{65} \]
and r² is recognized as having a chi-square distribution with 2 degrees of freedom,

\[ p_{r^2}(r^2) = \frac{1}{2\sigma^2} \exp\!\left( -\frac{r^2}{2\sigma^2} \right) \tag{66} \]

The Rayleigh statistics are included in the more general Rice statistics, below.
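A phase-mixing simulation along the lines of Eqs. (11) and (12), sketched below with arbitrary illustration values, shows the Rayleigh behavior emerging from many similar-amplitude random phasors; the median-to-rms ratio of the envelope is compared with the Rayleigh value √(ln 2) ≈ 0.83.

```python
import numpy as np

# Phase mixing with many random contributions: at a fixed frequency, summing
# many similar-amplitude phasors with random phases and arrival angles gives
# approximately Gaussian in-phase/quadrature components, a Rayleigh envelope,
# and uniform phase. All parameter values are illustration values.
rng = np.random.default_rng(5)
c, fc = 3e8, 900e6
k_C = 2 * np.pi * fc / c

Npaths = 40
a = np.ones(Npaths) / np.sqrt(Npaths)          # similar-amplitude contributions
psi = rng.uniform(0, 2 * np.pi, Npaths)
u = k_C * np.cos(rng.uniform(0, 2 * np.pi, Npaths))

z = np.linspace(0, 20 * (c / fc), 20_000)      # 20 wavelengths of travel
H = (a[:, None] * np.exp(1j * (psi[:, None] + u[:, None] * z))).sum(axis=0)

r = np.abs(H)
print("mean power <r^2> =", np.mean(r ** 2))            # ~1 (normalization)
print("envelope median/rms =", np.median(r) / np.sqrt(np.mean(r ** 2)))
# For a Rayleigh envelope, median/rms = sqrt(ln 2) ~ 0.83
```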
8.1.2. Rice Envelope and Phase. Sometimes there is a single dominant effective source. This usually corresponds to a line-of-sight situation, which gives a single dominant effective scatterer. Multipath transmission still occurs, and the Rice distribution describes the statistics of the narrowband envelope. The Rice distribution results from one or both of the Gaussian processes having nonzero mean. These processes become

\[ x_{Ri} = x + x_s, \qquad y_{Ri} = y + y_s \tag{67} \]

where the x and y are zero-mean Gaussian and x_s and y_s are the respective means representing the dominant component (sometimes called the specular, or coherent, component, with x and y representing the diffuse, or incoherent, component) of the signal. The phasor combination is shown in Fig. 13, in which φ = tan⁻¹(y_Ri/x_Ri) is the absolute phase of the Rice envelope r_Ri, and θ is the phase difference between r_Ri (Rayleigh component plus dominant component) and the dominant component r_s. The mean of the absolute phase of the process is E{φ}. A coordinate rotation allows the phase to be defined as just θ.
coordinate rotation allows the phase to be dened as just y.
From
x
Ri
x
s

2
y
Ri
y
s

2
r
2
r
2
Ri
r
2
s
2r
Ri
r
s
cos y 68
the Rice pdf is

\[ p_{r_{Ri},\theta}(r_{Ri}, \theta) = \frac{r_{Ri}}{2\pi\sigma^2} \exp\!\left( -\frac{r_{Ri}^2 + r_s^2 - 2 r_{Ri} r_s \cos\theta}{2\sigma^2} \right) \tag{69} \]
The envelope and phase are thus statistically dependent, unlike the Rayleigh case. The +π and −π transitions that occur in the phase of the Rayleigh signal as the locus passes near the origin are now reduced to smaller values, which depend on the length of the envelope phasor component r_Ri. The Rice channel can be purely minimum phase when the dominant component is large enough. The Rice k factor is the ratio of the powers of the dominant component and the Rayleigh component:

\[ k_{Ri} = \frac{r_s^2}{2\sigma^2} \tag{70} \]
When the dominant component r_s approaches zero, k_Ri approaches 0, and the distribution reduces to Rayleigh. Similarly, when the dominant component becomes very large, the Rice distribution for the envelope approaches Gaussian with mean r_s.

Figure 12. The signals of a many-path, narrowband channel as a function of position (in wavelengths): in-phase and quadrature components, magnitude (dB), phase (rad), and random FM. As the mobile receiver moves, the narrowband signal quantities vary in a way similar to the behavior of the plots. The in-phase and quadrature components comprise complex Gaussians, the magnitude or envelope is Rayleigh-distributed, the phase is uniformly distributed, and the random FM is Student-t-distributed.

Figure 13. The Rice process has envelope r_Ri comprising the additive constant r_s and the Rayleigh envelope r. The phase of the Rice signal is θ.
8.1.3. Rice Envelope. For convenience, the envelope is normalized by the Gaussian standard deviation:

\[ r_{Ri}^{n} = \frac{r_{Ri}}{\sigma} \tag{71} \]

The envelope pdf is

\[ p_{r_{Ri}}(r_{Ri}) = \frac{r_{Ri}}{2\pi\sigma^2} \exp\!\left( -\frac{r_{Ri}^2 + r_s^2}{2\sigma^2} \right) \int_0^{2\pi} \exp\!\left( \frac{r_{Ri}\, r_s \cos\theta}{\sigma^2} \right) d\theta = \frac{r_{Ri}}{\sigma^2} \exp\!\left( -\frac{r_{Ri}^2 + r_s^2}{2\sigma^2} \right) I_0\!\left( \frac{r_{Ri}\, r_s}{\sigma^2} \right), \quad r_{Ri} \ge 0 \tag{72} \]

or, in terms of r_{Ri}^{n} and k_{Ri},

\[ p_{r_{Ri}}(r_{Ri}) = \frac{1}{\sigma}\, r_{Ri}^{n} \exp\!\left\{ -\left[ \tfrac{1}{2} \left( r_{Ri}^{n} \right)^2 + k_{Ri} \right] \right\} I_0\!\left( r_{Ri}^{n} \sqrt{2 k_{Ri}} \right), \quad r_{Ri}^{n} \ge 0 \tag{73} \]

As k_{Ri} approaches infinity, the Rice pdf becomes a delta-like function, being a Gaussian with a variance approaching zero.
The Rice distribution is sometimes called Nakagami–Rice, in recognition of its independent development by Rice [10] and by Nakagami [11], who reported it in English at a later time. Because of its physical justification for many situations, the Rice distribution is the preferred one for short-term fading. Review material covering aspects of Rice's work is presented in Refs. 12 and 13. The distribution for the random FM and group delay for the Rice channel is given by the Student t distribution [1,14].
The Rice envelope cumulative density function (cdf) is expressed as

\[ \mathrm{Prob}(r_{Ri} < r_0) = 1 - Q_1\!\left( \frac{r_s}{\sigma}, \frac{r_0}{\sigma} \right) \tag{74} \]

where Q_1 is the Marcum Q function [15]. Further worthwhile discussion on the Q function is given in Refs. 1, 16, and 17. The Rice envelope cdf is sketched in Fig. 14 for values of the Rice k factor, including the Rayleigh case.
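The Rice statistics can be checked by direct simulation of Eq. (67): a constant (specular) component plus a complex Gaussian (diffuse) part. In the sketch below the empirical cdf is compared against SciPy's Rice cdf, which corresponds to Eq. (74); the k factor, σ, and sample size are illustration values.

```python
import numpy as np
from scipy import stats

# Rice envelope, Eqs. (67)-(74): dominant component plus complex Gaussian.
# scipy.stats.rice.cdf(r0/sigma, b=r_s/sigma) equals 1 - Q1(r_s/sigma, r0/sigma).
rng = np.random.default_rng(2)
sigma = 1.0
k_Ri = 10 ** (5 / 10)                 # Rice k factor of 5 dB
r_s = np.sqrt(2 * k_Ri) * sigma       # dominant amplitude, from Eq. (70)

n = 200_000
x = r_s + sigma * rng.standard_normal(n)      # x_Ri = x_s + x
y = sigma * rng.standard_normal(n)            # y_Ri = y (y_s = 0 after rotation)
r = np.hypot(x, y)                            # Rice envelope r_Ri

for r0 in (0.5 * r_s, r_s, 1.5 * r_s):
    empirical = (r < r0).mean()
    analytic = stats.rice.cdf(r0 / sigma, b=r_s / sigma)
    print(f"Prob(r < {r0:4.2f}): empirical {empirical:.3f}, Eq.(74) {analytic:.3f}")
```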
8.2. Lognormal Shadow Fading
Shadow fading has been found experimentally to be well described by the lognormal distribution. Whereas the Gaussian distribution results from the addition of many random variables, the lognormal distribution results from the product of many positive random variables. It follows that when lognormal variables are expressed in logarithmic units, they follow a Gaussian distribution. The transformation of variables between the distributions is z = e^x, or ln z = x. (Here z is a variable, not distance.) If x is Gaussian, then z is lognormal. Alternatively stated, if z is lognormal, then ln z is Gaussian. The pdf of the lognormal distribution is found from the Gaussian pdf:

\[ p_l(z) = p_x(x) \left| \frac{\partial x}{\partial z} \right| = \frac{1}{\sqrt{2\pi}\, \sigma_{lz}\, z} \exp\!\left( -\frac{(\ln z - m_{lz})^2}{2\sigma_{lz}^2} \right) \tag{75} \]

where m_{lz} and σ_{lz}² are the mean and variance, respectively, of ln z. The lognormal signal representing the local mean of the envelope looks like one of the phase components of Fig. 12, except that the scale would be in decibels rather than linear. Typically σ_{lz} is 3–8 dB in urban environments.
8.3. Suzuki: Lognormal and Rayleigh
Combining the short-term Rayleigh and long-term lognormal distributions provides a model for the stochastic component of the path loss of a narrowband signal in mobile communications.

The lognormal distribution is over the mean of the envelope. This can be interpreted as Gaussian for the envelope mean in decibels. The Rayleigh envelope mean is linearly related to the Gaussian standard deviation, viz., ⟨r⟩ = √(π/2) σ, so the lognormal distribution can be applied to the σ [18]. The distribution can be written

\[ p_{Su}(r) = \int_0^{\infty} \frac{r}{\sigma^2} \exp\!\left( -\frac{r^2}{2\sigma^2} \right) \cdot \frac{1}{\sqrt{2\pi}\, \sigma\, \sigma_l} \exp\!\left( -\frac{(\ln\sigma - m_l)^2}{2\sigma_l^2} \right) d\sigma \tag{76} \]
No closed form has been found for the integral, which is a practical inconvenience when applying the Suzuki distribution. However, the distribution has the advantage of being based on a physical model for the envelope, and thus offers good agreement with experimental results on large-scale records of envelopes of narrowband signals.

Figure 14. The Rice envelope cdf: the probability that the envelope is less than the abscissa, plotted against the normalized Rice envelope (dB), for k_Ri = 0, 5, 10, and 15 dB. For zero specular component, the distribution is Rayleigh, and approaches Gaussian (vertical line at 0 dB) for an asymptotically large specular component.
Many other distributions have been used to fit mobile channel fading [19]. Some have various advantages for mathematical manipulations or for the fitting of experimental data. Two are noteworthy because of their versatility. The Nakagami m distribution [11] has a single parameter that allows the shape of the distribution to be altered, in particular for small values of r. The generalized gamma distribution [20] has effectively two parameters that can independently adjust the shape of the small and large values of r.
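Because Eq. (76) has no closed form, the Suzuki envelope is most easily handled by simulation: draw the shadowing in decibels as a Gaussian, convert it to a linear σ, and then draw a Rayleigh envelope with that σ, as in the sketch below. The 6 dB shadowing standard deviation is an illustration value within the 3–8 dB urban range quoted above.

```python
import numpy as np

# Suzuki model, Eq. (76): Rayleigh fast fading whose underlying sigma is
# lognormally distributed (shadowing), simulated directly.
rng = np.random.default_rng(3)
n = 100_000

shadow_dB = 6.0 * rng.standard_normal(n)        # Gaussian shadowing in dB (illustration)
sigma = 10 ** (shadow_dB / 20)                  # lognormal sigma of the Rayleigh part
r = sigma * np.sqrt(rng.exponential(2.0, n))    # Rayleigh envelope with parameter sigma

r_dB = 20 * np.log10(r)
print("median envelope (dB): %.2f" % np.median(r_dB))
print("std of envelope (dB): %.2f" % np.std(r_dB))
# The dB-domain spread combines the Rayleigh contribution (~5.6 dB) with the
# 6 dB shadowing, roughly in root-sum-square fashion.
```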
9. PATH LOSS AND THE MOBILE CHANNEL
Much of the preceding discussion has been a statistical
description of the behavior of the mobile channel. The
interest in the envelope or power of the mobile transfer
function is because this dominates the SNR of the received
signal. The power is also referred to as the channel gain.
How this ties in with the path loss is addressed in this
section. In so doing, the discussion returns to the electro-
magnetic propagation and antenna issues of the opening
sections.
Path loss is a well-defined concept originating from point-to-point radio links. It comes from the Friis transmission equation, which relates the transmitted and received powers (P_T, P_R respectively), the antenna gains (G_T, G_R respectively), and the path loss L:

\[ P_R = P_T\, G_T\, G_R\, \frac{1}{L} \tag{77} \]

Path loss is seen from this equation to be the reciprocal of the path gain. For frequency-independent antenna gains, the free-space path loss for a separation distance d and wavelength λ = c/f is

\[ L_F = \left( \frac{4\pi d}{\lambda} \right)^2 = \left( \frac{4\pi f d}{c} \right)^2 \tag{78} \]

so that it varies as the frequency squared and the distance squared. The incident field strength is not dependent on frequency. In Eq. (77), the antennas are considered impedance- and polarization-matched.
9.1. Mean Path Loss and Mean Antenna Gain
In a mobile channel, the classical point-to-point situation does not apply. The received power and the receiving antenna gain become statistical quantities. The antenna's mean gain can be defined by the average gain into a well-defined distributed direction. The mean received power can be defined from a time average. The path loss is the time-varying quantity (because of the spatially dependent phase mixture of multipath propagation signals), and so the mean received power with Eq. (77) defines a mean path loss. Sometimes the term mean effective gain is used when comparing antennas by measuring their time-averaged received powers in the same environment. In this context, it must be assumed that the transmitting power and the mean path loss are both common to each measurement record used for the averaging. The mean effective gains are then proportional to the mean received powers and include polarization mismatches. What is being measured is how well, on average, the vector antenna pattern is directed toward the vector distribution of incoming power from the measurement environments.
9.2. Scenario Models
Model distributions are used to approximate the average incident power directions for various applications. For a mobile vehicle, for example, the Clarke scenario [21,22], given by

\[ S_C(\theta, \phi) = S_C(\theta) = \delta\!\left( \theta - \frac{\pi}{2} \right) \tag{79} \]

is often used. This corresponds to a uniform source distribution at the horizon, surrounding the antenna. Transforming to the spatial Doppler variable results in the pdf

\[ p_u^{C}(u) = \frac{1}{\pi \sqrt{k_C^2 - u^2}} \tag{80} \]

and this spatial Doppler spectrum is for the incident fields or the electromagnetic propagation channel (for one polarization), and also for the mobile channel if an omnidirectional (in the θ = π/2 plane) antenna is used. The spatial Doppler spread is σ_u^C = k_C/√2 rad/m. The spatial correlation coefficient for the envelope is ρ_r^C(Δz) ≈ J_0²(k_C Δz), giving a 0.7 correlation distance of about 0.13 wavelengths and an average distance between fades of about 0.5 wavelength.
For a directional antenna, the spatial Doppler distribution corresponding to the pattern must be multiplied with Eq. (80) to get the spatial Doppler spectrum of the mobile channel. This is how the antenna pattern can control the mobile channel behavior. A single-lobed, directional pattern acts as a spatial Doppler bandpass filter and results in a decreased (relative to an omnidirectional pattern) Doppler spread, and therefore a decreased spatial fading rate. This effect can be seen with laser speckle, where the dark areas are the deep fades of energy, and the interspeckle distance, even though the frequency is optical, is sufficiently large to be visible to the eye because the spatial Doppler spread of the illuminating beam is so small.
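The Clarke-scenario quantities quoted above can be reproduced with the short sketch below: the spatial Doppler spread k_C/√2 follows from u = k_C cos θ with θ uniform, and the 0.7 correlation distance of about 0.13 wavelengths follows from the envelope correlation J_0²(k_C Δz). The carrier frequency is an illustration value.

```python
import numpy as np
from scipy.special import j0

# Clarke scenario, Eqs. (79)-(80): arrivals uniform in azimuth at the horizon.
rng = np.random.default_rng(4)
c, fc = 3e8, 900e6
k_C = 2 * np.pi * fc / c
lam = c / fc

theta = rng.uniform(0, 2 * np.pi, 1_000_000)   # uniform arrival directions
u = k_C * np.cos(theta)                        # spatial Doppler samples
print("sigma_u =", u.std(), " k_C/sqrt(2) =", k_C / np.sqrt(2))

# 0.7 correlation distance from the envelope correlation J0^2(k_C*dz)
dz = np.linspace(0, lam, 10_000)
rho = j0(k_C * dz) ** 2
d07 = dz[np.argmax(rho <= 0.7)]
print("0.7 correlation distance: %.3f wavelengths" % (d07 / lam))
```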
BIBLIOGRAPHY

1. R. G. Vaughan and J. Bach Andersen, Channels, Propagation and Antennas for Mobile Communications, Peter Peregrinus, London, 2003.

2. T. Miki and M. Hata, Performance of 16 kbit/s GMSK transmission with postdetection selection diversity in land mobile radio, IEEE Trans. Veh. Technol. VT-33(3):128–133 (1984).

3. K. Sakoh et al., Advanced radio paging service supported by ISDN, Proc. Nordic Seminar on Digital Land Mobile Radiocommunication, Espoo, Finland, Feb. 1985, pp. 239–248.

4. P. A. Bello, Characterization of randomly time-variant linear channels, IEEE Trans. Commun. Syst. CS-11:360–393 (Dec. 1963).

5. A. Papoulis, Signal Analysis, McGraw-Hill, New York, 1977.

6. P. A. Bello and B. D. Nelin, The effect of frequency selective fading on the binary error probabilities of incoherent and differentially coherent matched filter receivers, IEEE Trans. Commun. Syst. CS-11:170–186 (June 1963).

7. M. J. Gans, A power spectral theory of propagation in the mobile-radio environment, IEEE Trans. Veh. Technol. VT-21(1):27–38 (Feb. 1972).

8. D. C. Cox and R. P. Leck, Correlation bandwidth and delay spread multipath propagation statistics for 910 MHz urban mobile radio channels, IEEE Trans. Commun. COM-23(11):1271–1280 (1975).

9. B. H. Fleury, An uncertainty relation for WSS processes and its application to WSSUS systems, IEEE Trans. Commun. COM-44(12):1632–1635 (Dec. 1996).

10. S. O. Rice, Mathematical analysis of random noise, Bell Syst. Tech. J. 23(3) (1944); 24(1) (1945).

11. M. Nakagami, The m-distribution, a general formula of intensity distribution of rapid fading, in W. C. Hoffman, ed., Statistical Methods in Radio Wave Propagation, Pergamon Press, Oxford, 1960.

12. W. B. Davenport and W. L. Root, An Introduction to the Theory of Random Signals and Noise, McGraw-Hill, New York, 1958; reprinted, IEEE Press, Piscataway, NJ, 1987.

13. D. Middleton, An Introduction to Statistical Communication Theory, McGraw-Hill, New York, 1960; reprinted, IEEE Press, Piscataway, NJ, 1997.

14. J. Bach Andersen, S. L. Lauritzen, and C. Thommesen, Distributions of phase derivatives in mobile communications, IEE Proc. 137(4):197–201 (1990).

15. J. I. Marcum, A statistical theory of target detection by pulsed radar, IRE Trans. IT-6:59–267 (April 1960).

16. M. Schwartz, W. R. Bennett, and S. Stein, Communication Systems and Techniques, McGraw-Hill, New York, 1966, Part III; reprinted, IEEE Press, Piscataway, NJ, 1996.

17. J. G. Proakis, Digital Communications, McGraw-Hill, New York, 1983.

18. H. Suzuki, A statistical model for urban radio propagation, IEEE Trans. Commun. COM-25(7):673–680 (July 1977).

19. J. Griffiths and J. McGeehan, Interrelationship between some statistical distributions used in radio-wave propagation, IEE Proc. 129(Part F)(6):411–417 (Dec. 1982).

20. E. W. Stacy, A generalization of the gamma distribution, Ann. Math. Stat. 33:1187–1192 (1962).

21. R. H. Clarke, A statistical theory of mobile-radio reception, Bell Syst. Tech. J. 47:957–1000 (1968).

22. W. C. Jakes, ed., Microwave Mobile Communications, AT&T, New York, 1974; reprinted, IEEE Press, Piscataway, NJ, 1989.
FURTHER READING

H. L. Bertoni, Radio Propagation for Modern Wireless Systems, Prentice-Hall, Englewood Cliffs, NJ, 2000.

J. K. Cavers, Mobile Channel Characteristics, Shady Island Press, Richmond, BC, 2003.

W. C. Jakes, ed., Microwave Mobile Communications, AT&T, New York, 1974; reprinted, IEEE Press, Piscataway, NJ, 1989.

W. C. Y. Lee, Mobile Communications Engineering, McGraw-Hill, New York, 1982.

R. C. V. Macario, Personal and Mobile Radio Systems, IEE Telecommunications Series 25, Peter Peregrinus, London, 1991.

J. D. Parsons, The Mobile Radio Propagation Channel, Pentech Press, London, 1992.

T. S. Rappaport, Wireless Communications: Principles and Practice, IEEE Press, New York, 1996.

S. O. Rice, Statistical properties of sine wave plus random noise, Bell Syst. Tech. J. 27:109–157 (1948).

R. Steele, Mobile Radio Communications, Pentech Press, London, 1992.

G. Stüber, Principles of Mobile Communications, Kluwer, Boston, 1996.

R. G. Vaughan and J. Bach Andersen, Channels, Propagation and Antennas for Mobile Communications, Peter Peregrinus, London, 2003.
MOBILE SATELLITE COMMUNICATIONS
JOHN LODGE
Communications Research
Centre
Ottawa, Ontario, Canada
Mobile satellite (MSAT) systems provide communications services to mobile and portable terminals using a radio-transmission path between the terminal and the satellite. An example of such a system, illustrating its typical components, is shown in Fig. 1. The mobile terminal may be installed in any one of a number of platforms, including cars, trucks, railcars, aircraft, and ships. Alternatively, it could be a portable terminal with a size ranging from that of a handheld unit up to that of a briefcase, depending on the system and the service provided. Yet a third class could be small but fixed remote terminals serving functions such as seismic data collection and pipeline monitoring and control. A mobile satellite system requires one or more satellites with connectivity to the terrestrial infrastructure (e.g., to the public switched telephone network and to the various digital networks) supplied by one or more Earth stations. Typically, most of the communications traffic is between the mobile terminal and another terminal or application outside the mobile satellite system. However, most mobile satellite systems allow for mobile-to-mobile communications within the system. The Earth stations are coordinated by a control center in a way that shares the satellite transmission resources efficiently. Also, the control center may issue commands to the satellites via the Earth stations.
A number of radiolinks are required for such a system.
Communication from the Earth station to the mobile ter-
minal is said to be in the forward direction, whereas com-
munication from the mobile terminal to the Earth station
is said to be in the return direction. In both the forward
and return directions, an uplink to the satellite and a
downlink from the satellite are required, for a total of four
radiolinks. The links between the Earth station and the
satellite are sometimes referred to as feeder links, whereas