
Chapter 1 Introduction to Communication Systems

Analog and Digital Communication Systems
A communication system conveys information from its source to a destination some distance away. There are so many different applications of communication systems that we cannot attempt to cover every type, nor can we discuss in detail all the individual parts that make up a specific system. A typical system involves numerous components that run the gamut of electrical engineering: circuits, electronics, electromagnetics, signal processing, microprocessors, and communication networks, to name a few of the relevant fields. Moreover, a piece-by-piece treatment would obscure the essential point that a communication system is an integrated whole that really does exceed the sum of its parts. We therefore approach the subject from a more general viewpoint. Recognizing that all communication systems have the same basic function of information transfer, we'll seek out and isolate the principles and problems of conveying information in electrical form. These will be examined in sufficient depth to develop analysis and design methods suited to a wide range of applications.

Information, Messages, and Signals
Clearly, the concept of information is central to communication. But information is a loaded word, implying semantic and philosophical notions that defy precise definition. We avoid these difficulties by dealing instead with the message, defined as the physical manifestation of information as produced by the source. Whatever form the message takes, the goal of a communication system is to reproduce at the destination an acceptable replica of the source message. There are many kinds of information sources, including machines as well as people, and messages appear in various forms. Nonetheless, we can identify two distinct message categories, analog and digital. This distinction, in turn, determines the criterion for successful communication.

Fig 1.1 Communication system with input and output transducers

An analog message is a physical quantity that varies with time, usually in a smooth and continuous fashion. Examples of analog messages are the acoustic pressure produced when you speak, the angular position of an aircraft gyro, or the light intensity at some point in a television image. Since the information resides in a time-varying waveform, an analog communication system should deliver this waveform with a specified degree of fidelity. A digital message is an ordered sequence of symbols selected from a finite set of discrete elements. Examples of digital messages are the letters printed on this page, a listing of hourly temperature readings, or the keys you press on a computer keyboard. Since the information resides in discrete symbols, a digital communication system should deliver these symbols with a specified degree of accuracy in a specified amount of time. Whether analog or digital, few message sources are inherently electrical. Consequently, most communication systems have input and output transducers as shown in Fig. 1.1. The input transducer converts the message to an electrical signal, say a voltage or current, and another transducer at the destination converts the output signal to the desired message form. For instance, the transducers in a voice communication system could be a microphone at the input and a loudspeaker at the output. We'll assume hereafter that suitable transducers exist, and we'll concentrate primarily on the task of signal transmission. In this context the terms signal and message will be used interchangeably since the signal, like the message, is a physical embodiment of information.

Elements of a Communication System

Fig 1.2 Elements of a communication system

Figure 1.2 depicts the elements of a communication system, omitting transducers but including unwanted contaminations. There are three essential parts of any communication system: the transmitter, the transmission channel, and the receiver. Each part plays a particular role in signal transmission, as follows. The transmitter processes the input signal to produce a transmitted signal suited to the characteristics of the transmission channel. Signal processing for transmission almost always involves modulation and may also include coding. The transmission channel is the electrical medium that bridges the distance from source to destination. It may be a pair of wires, a coaxial cable, or a radio wave or laser beam. Every channel introduces some amount of transmission loss or attenuation, so the signal power progressively decreases with increasing distance. The receiver operates on the output signal from the channel in preparation for delivery to the transducer at the destination. Receiver operations include amplification to compensate for transmission loss, and demodulation and decoding to reverse the signal processing performed at the transmitter. Filtering is another important function at the receiver, for reasons discussed next.

Various unwanted effects crop up in the course of signal transmission. Attenuation is undesirable since it reduces signal strength at the receiver. More serious, however, are distortion, interference, and noise, which appear as alterations of the signal shape. Although such contaminations may occur at any point, the standard convention is to blame them entirely on the channel, treating the transmitter and receiver as being ideal. Figure 1.2 reflects this convention. Distortion is waveform perturbation caused by imperfect response of the system to the desired signal itself. Unlike noise and interference, distortion disappears when the signal is turned off. If the channel has a linear but distorting response, then distortion may be corrected, or at least reduced, with the help of special filters called equalizers. Interference is contamination by extraneous signals from human sources: other transmitters, power lines and machinery, switching circuits, and so on. Interference occurs most often in radio systems whose receiving antennas usually intercept several signals at the same time. Radio-frequency interference (RFI) also appears in cable systems if the transmission wires or receiver circuitry pick up signals radiated from nearby sources. Appropriate filtering removes interference to the extent that the interfering signals occupy different frequency bands than the desired signal. Noise refers to random and unpredictable electrical signals produced by natural processes both internal and external to the system. When such random variations are superimposed on an information-bearing signal, the message may be partially corrupted or totally obliterated. Filtering reduces noise contamination, but there inevitably remains some amount of noise that cannot be eliminated. This noise constitutes one of the fundamental system limitations. Finally, it should be noted that Fig. 1.2 represents one-way or simplex (SX) transmission. Two-way communication, of course, requires a transmitter and receiver at each end. A full-duplex (FDX) system has a channel that allows simultaneous transmission in both directions, while a half-duplex (HDX) system allows transmission in either direction but not at the same time.

Chapter 2 Signals and Spectra

Line Spectra and Fourier Series
This section introduces and interprets the frequency domain in terms of rotating phasors. We'll begin with the line spectrum of a sinusoidal signal. Then we'll invoke the Fourier series expansion to obtain the line spectrum of any periodic signal that has finite average power.

Phasors and Line Spectra
By convention, we express sinusoids in terms of the cosine function and write

v(t) = A cos(ω0t + Φ)     (1)

where A is the peak value or amplitude and ω0 is the radian frequency. The phase angle Φ represents the fact that the peak has been shifted away from the time origin and occurs at t = -Φ/ω0. Equation (1) implies that v(t) repeats itself for all time, with repetition period T0 = 2π/ω0. The reciprocal of the period equals the cyclical frequency f0 = 1/T0 = ω0/2π, measured in cycles per second or hertz. Obviously, no real signal goes on forever, but Eq. (1) could be a reasonable model for a sinusoidal waveform that lasts a long time compared to the period. In particular, ac steady-state circuit analysis depends upon the assumption of an eternal sinusoid, usually represented by a complex exponential or phasor. Phasors also play a major role in the spectral analysis.

The phasor representation of a sinusoidal signal comes from Euler's theorem

e^(±jθ) = cos θ ± j sin θ

where j = √(-1) and θ is an arbitrary angle. If we let θ = ω0t + Φ, we can write any sinusoid as the real part of a complex exponential, namely

A cos(ω0t + Φ) = Re[A e^(jΦ) e^(jω0t)]     (4)

This is called a phasor representation because the term inside the brackets may be viewed as a rotating vector in a complex plane whose axes are the real and imaginary parts, as Fig. 2.1a illustrates. The phasor has length A, rotates counterclockwise at a rate of f0 revolutions per second, and at time t = 0 makes an angle Φ with respect to the positive real axis. The projection of the phasor on the real axis equals the sinusoid in Eq. (4).

Now observe that only three parameters completely specify a phasor: amplitude, phase angle, and rotational frequency. To describe the same phasor in the frequency domain, we must associate the corresponding amplitude and phase with the particular frequency f0. Hence, a suitable frequency-domain description would be the line spectrum in Fig. 2.1b, which consists of two plots: amplitude versus frequency and phase versus frequency. While this figure appears simple to the point of being trivial, it does have great conceptual value when extended to more complicated signals. But before taking that step, four conventions regarding line spectra should be stated.

1. In all our spectral drawings the independent variable will be cyclical frequency f in hertz, rather than radian frequency ω, and any specific frequency such as f0 will be identified by a subscript. (We'll still use ω with or without subscripts as a shorthand notation for 2πf since that combination occurs so often.)
2. Phase angles will be measured with respect to cosine waves or, equivalently, with respect to the positive real axis of the phasor diagram. Hence, sine waves need to be converted to cosines via the identity sin ωt = cos(ωt - 90°).
3. We regard amplitude as always being a positive quantity. When negative signs appear, they must be absorbed in the phase using -A cos ωt = A cos(ωt ± 180°). It does not matter whether you take +180° or -180° since the phasor ends up in the same place either way.

Fig 2.1 Representations of A cos(ω0t + Φ): (a) phasor diagram; (b) line spectrum

4. Phase angles usually are expressed in degrees even though other angles such as ωt are inherently in radians. No confusion should result from this mixed notation since angles expressed in degrees will always carry the appropriate symbol.

To illustrate these conventions and to carry further the idea of line spectrum, consider the signal sketched in Fig. 2.2a. Converting the constant term to a zero-frequency or dc (direct-current) component and applying Eqs. (5) and (6) gives the sum of cosines whose spectrum is shown in Fig. 2.2b. Drawings like Fig. 2.2b, called one-sided or positive-frequency line spectra, can be constructed for any linear combination of sinusoids.

But another spectral representation turns out to be more valuable, even though it involves negative frequencies. We obtain this representation from Eq. (4) by recalling that Re[z] = (z + z*)/2, where z is any complex quantity with complex conjugate z*. Hence, if z = A e^(jΦ) e^(jω0t), then z* = A e^(-jΦ) e^(-jω0t) and Eq. (4) becomes

A cos(ω0t + Φ) = (A/2) e^(jΦ) e^(jω0t) + (A/2) e^(-jΦ) e^(-jω0t)

so we now have a pair of conjugate phasors.

Fig 2.2

The corresponding phasor diagram and line spectrum are shown in Fig. 2.3. The phasor diagram consists of two phasors with equal lengths but opposite angles and directions of rotation. The phasor sum always falls along the real axis to yield A cos(ω0t + Φ). The line spectrum is two-sided since it must include negative frequencies to allow for the opposite rotational directions, and one-half of the original amplitude is associated with each of the two frequencies ±f0. The amplitude spectrum has even symmetry while the phase spectrum has odd symmetry because we are dealing with conjugate phasors. This symmetry appears more vividly in Fig. 2.4, which is the two-sided version of Fig. 2.2b.

Fig 2.3 (a) Conjugate phasors; (b) two-sided spectrum.

Fig 2.4

It should be emphasized that these line spectra, one-sided or two-sided, are just pictorial ways of representing sinusoidal or phasor time functions. A single line in the one-sided spectrum represents a real cosine wave, whereas a single line in the two-sided spectrum represents a complex exponential, and the conjugate term must be added to get a real cosine wave. Thus, whenever we speak of some frequency interval such as f1 to f2 in a two-sided spectrum, we should also include the corresponding negative-frequency interval -f1 to -f2. A simple notation for specifying both intervals is f1 ≤ |f| ≤ f2.
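The bookkeeping behind a two-sided line spectrum can be checked numerically. The following short Python/NumPy sketch (not part of the original text; the example dc term, tone amplitudes, frequencies, and phases are assumptions chosen purely for illustration) splits each real cosine into its pair of conjugate phasors and prints the resulting amplitude and phase lines.

```python
import numpy as np

# Assumed example signal:
#   v(t) = 7 + 10*cos(2*pi*20*t + 120 deg) + 4*cos(2*pi*60*t - 90 deg)
# Each real cosine A*cos(2*pi*f0*t + phi) contributes conjugate phasors of
# amplitude A/2 at +f0 and -f0, with phases +phi and -phi.
terms = [(7.0, 0.0, 0.0), (10.0, 20.0, 120.0), (4.0, 60.0, -90.0)]  # (A, f0 Hz, phi deg)

lines = {}
for A, f0, phi in terms:
    if f0 == 0.0:                       # dc component: a single line at f = 0
        lines[0.0] = lines.get(0.0, 0.0) + A
    else:                               # split into conjugate phasors at +/- f0
        lines[+f0] = A / 2 * np.exp(+1j * np.deg2rad(phi))
        lines[-f0] = A / 2 * np.exp(-1j * np.deg2rad(phi))

for f in sorted(lines):
    c = lines[f]
    print(f"f = {f:+6.1f} Hz   amplitude = {abs(c):5.2f}   phase = {np.degrees(np.angle(c)):+7.1f} deg")
```

The printed lines show the even amplitude symmetry and odd phase symmetry discussed above.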

Fourier Series
The signal w(t) back in Fig. 2.2 was generated by summing a dc term and two sinusoids. Now we'll go the other way and decompose periodic signals into sums of sinusoids or, equivalently, rotating phasors. We invoke the exponential Fourier series for this purpose. Let v(t) be a power signal with period T0 = 1/f0. Its exponential Fourier series expansion is

v(t) = Σn cn e^(j2πnf0t)     n = 0, ±1, ±2, ...     (13)

The series coefficients are related to v(t) by

cn = (1/T0) ∫T0 v(t) e^(-j2πnf0t) dt     (14)

so cn equals the average of the product v(t) e^(-j2πnf0t). Since the coefficients are complex quantities in general, they can be expressed in the polar form cn = |cn| e^(j arg cn), where arg cn stands for the angle of cn. Equation (13) thus expands a periodic power signal as an infinite sum of phasors, the nth term being cn e^(j2πnf0t). The series convergence properties will be discussed after considering its spectral implications.

Observe that v(t) in Eq. (13) consists of phasors with amplitude |cn| and angle arg cn at the frequencies nf0 = 0, ±f0, ±2f0, ... Hence, the corresponding frequency-domain picture is a two-sided line spectrum defined by the series coefficients. We emphasize the spectral interpretation by writing c(nf0) = cn, so that |c(nf0)| represents the amplitude spectrum as a function of f, and arg c(nf0) represents the phase spectrum. Three important spectral properties of periodic power signals are listed below.

1. All frequencies are integer multiples or harmonics of the fundamental frequency f0 = 1/T0. Thus the spectral lines have uniform spacing f0.
2. The dc component equals the average value of the signal, since setting n = 0 in Eq. (14) yields the average of v(t) over one period. Calculated values of c(0) may be checked by inspecting v(t), a wise practice when the integration gives an ambiguous result.
3. If v(t) is a real (noncomplex) function of time, then c-n = cn*, which follows from Eq. (14) with n replaced by -n. Hence |c(-nf0)| = |c(nf0)| and arg c(-nf0) = -arg c(nf0), which means that the amplitude spectrum has even symmetry and the phase spectrum has odd symmetry.

When dealing with real signals, the property in Eq. (16) allows us to regroup the exponential series into complex-conjugate pairs, except for c0. Equation (13) then becomes a sum of the dc term c0 plus real cosine terms at the harmonic frequencies, which is the trigonometric Fourier series and suggests a one-sided spectrum. Most of the time, however, we'll use the exponential series and two-sided spectra.

One final comment should be made before taking up an example. The integration for cn often involves a phasor average of the form (1/T) ∫ from -T/2 to T/2 of e^(j2πft) dt = sinc fT. Since this expression occurs time and again in spectral analysis, we'll now introduce the sinc function defined by

sinc λ = (sin πλ)/(πλ)

where λ represents the independent variable. Figure 2.5 shows that sinc λ is an even function of λ having its peak at λ = 0 and zero crossings at all other integer values of λ. Some authors use the related sampling function defined as Sa(x) = (sin x)/x, so that sinc λ = Sa(πλ).

Fig 2.5 The function sinc λ = (sin πλ)/πλ

Parseval's Power Theorem
Parseval's theorem relates the average power P of a periodic signal to its Fourier coefficients: P = Σn |cn|².
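Both the coefficient integral in Eq. (14) and Parseval's theorem are easy to check numerically. The sketch below (a minimal Python/NumPy illustration added here; the rectangular pulse train, its amplitude, width, and period are assumed example values, not taken from the text) evaluates cn by numerical integration over one period and compares the time-domain average power with the sum of |cn|².

```python
import numpy as np

# Assumed example: rectangular pulse train of amplitude A, pulse width tau, period T0.
A, tau, T0 = 1.0, 0.25, 1.0
f0 = 1.0 / T0

t = np.linspace(0.0, T0, 20000, endpoint=False)
v = np.where(t < tau, A, 0.0)                    # one period of the signal

def cn(n):
    """c_n = (1/T0) * integral over one period of v(t) exp(-j 2 pi n f0 t) dt."""
    return np.trapz(v * np.exp(-1j * 2 * np.pi * n * f0 * t), t) / T0

N = 50
coeffs = np.array([cn(n) for n in range(-N, N + 1)])

avg_power_time = np.trapz(np.abs(v) ** 2, t) / T0   # <|v(t)|^2>
avg_power_freq = np.sum(np.abs(coeffs) ** 2)         # sum of |c_n|^2 (Parseval)
print("c_0 (dc value)      :", cn(0).real)           # equals A*tau/T0 for this example
print("power (time domain) :", avg_power_time)
print("power (Parseval sum):", avg_power_freq)       # approaches the time-domain value as N grows
```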

Parseval's theorem implies superposition of average power, since the total average power of v(t) is the sum of the average powers of its phasor components. Therefore, average power can be found by squaring and adding the heights of the amplitude lines.

FOURIER TRANSFORMS AND CONTINUOUS SPECTRA

Fourier Transforms
The Fourier transform of v(t), symbolized by V(f), is defined as an integration over all time that yields a function of the continuous variable f:

V(f) = ∫ v(t) e^(-j2πft) dt

The time function v(t) is recovered from V(f) by the inverse Fourier transform

v(t) = ∫ V(f) e^(j2πft) df

an integration over all frequency f. These two equations constitute the pair of Fourier integrals. At first glance, the integrals seem to be a closed circle of operations; in a given problem, however, you usually know either V(f) or v(t). Three major properties of V(f) are listed below.

1. The Fourier transform is a complex function, so |V(f)| is the amplitude spectrum of v(t) and arg V(f) is the phase spectrum.
2. The value of V(f) at f = 0 equals the net area of v(t), which compares with the periodic case where c(0) equals the average value of v(t).
3. If v(t) is real, then V(-f) = V*(f), and so again we have even amplitude symmetry and odd phase symmetry.
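The Fourier integral and property 2 above can be evaluated directly on a sampled signal. The following short Python/NumPy sketch (added for illustration; the rectangular test pulse and its parameters are assumptions) computes V(f) by numerical integration and confirms that V(0) equals the net area of v(t).

```python
import numpy as np

# Direct numerical evaluation of V(f) = integral v(t) exp(-j 2 pi f t) dt
# for an assumed test signal: a rectangular pulse of amplitude A and width tau.
A, tau = 2.0, 0.5
t = np.linspace(-2.0, 2.0, 40001)
v = np.where(np.abs(t) <= tau / 2, A, 0.0)

def fourier_transform(f):
    return np.trapz(v * np.exp(-1j * 2 * np.pi * f * t), t)

print("V(0)         :", fourier_transform(0.0).real)   # equals the net area A*tau = 1.0
print("area of v(t) :", np.trapz(v, t))
# Real, even test signal -> spectrum is purely real with even symmetry:
print("V(1.3), V(-1.3):", fourier_transform(1.3), fourier_transform(-1.3))
```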

Symmetric and Causal Signals
When a signal possesses symmetry with respect to the time axis, its transform integral can be simplified. Time-symmetry properties are listed below.

• If v(t) has even symmetry, so that v(-t) = v(t), then the spectrum is purely real and even in f.
• If v(t) has odd symmetry, so that v(-t) = -v(t), then the spectrum is purely imaginary and odd in f.

The spectrum of a real symmetrical signal will thus be either purely real and even or purely imaginary and odd.

Now consider the case of a causal signal, defined by the property that v(t) = 0 for t < 0. This simply means that the signal "starts" at or after t = 0. Since causality precludes any time symmetry, the spectrum of a real causal signal consists of both real and imaginary parts, computed from the cosine and sine integrals taken over t ≥ 0.

Rayleigh's Energy Theorem
Rayleigh's energy theorem is analogous to Parseval's power theorem. It states that the energy E of a signal v(t) is related to the spectrum V(f) by

E = ∫ |V(f)|² df

Integrating the squared amplitude spectrum over all frequency yields the total energy.

Time and Frequency Relations

Superposition
Superposition applies to the Fourier transform in the following sense. If a1 and a2 are constants and v(t) = a1 v1(t) + a2 v2(t), then V(f) = a1 V1(f) + a2 V2(f). Generalizing to sums with an arbitrary number of terms, we write the superposition (or linearity) theorem as a term-by-term transform of the sum. This theorem simply states that linear combinations in the time domain become linear combinations in the frequency domain.

Time Delay and Scale Change
In the frequency domain, time delay causes an added linear phase with slope -2πtd, so that v(t - td) has the transform V(f) e^(-j2πftd).

Frequency Translation and Modulation
Multiplying a time function by e^(j2πfct) causes its spectrum to be translated in frequency by +fc. We designate this as frequency translation or complex modulation.

Convolution plays a central role in spectral analysis.

Convolution Integral
The convolution of two functions of the same variable, say v(t) and w(t), is defined by

v(t) * w(t) = ∫ v(λ) w(t - λ) dλ

Convolution Theorems
Convolution is commutative, associative, and distributive. Having defined and examined the convolution operation, we now list the two convolution theorems:

v(t) * w(t) has the transform V(f) W(f)
v(t) w(t) has the transform V(f) * W(f)

Dirac Delta Function
The Dirac delta function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse. This function is infinitesimally narrow and infinitely tall, yet integrates to one. The impulse function is often written as δ(t).

Step and Signum Functions
• Unit step function
• Signum function
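Returning to the first convolution theorem listed above, it can be verified numerically for sampled signals. The sketch below (a minimal Python/NumPy check added here; the two example waveforms, sample spacing, and variable names are assumptions) compares the transform of v*w with the product of the individual transforms, using zero-padded FFTs as a stand-in for the Fourier transform.

```python
import numpy as np

# Numerical check of the convolution theorem: F{v * w} = V(f) W(f).
# The FFT stands in for the Fourier transform of sampled, finite-length signals;
# zero-padding to the full convolution length avoids circular wrap-around.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
v = np.exp(-5.0 * t)                    # assumed example signals
w = np.where(t < 0.2, 1.0, 0.0)

n = 2 * len(t) - 1                      # length of the linear convolution
y = np.convolve(v, w) * dt              # time-domain convolution v * w

lhs = np.fft.rfft(y, n) * dt            # transform of the convolution
rhs = (np.fft.rfft(v, n) * dt) * (np.fft.rfft(w, n) * dt)   # product of transforms

print("max |F{v*w} - V.W| :", np.max(np.abs(lhs - rhs)))     # ~ numerical round-off
```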


Chapter 3 Signal Transmission and Filtering

LTI Systems
Figure 3.1 depicts a system inside a "black box" with an external input signal x(t) and an output signal y(t). In the context of electrical communication, the system usually would be a two-port network driven by an applied voltage or current at the input port, producing another voltage or current at the output port. Energy storage elements and other internal effects may cause the output waveform to look quite different from the input. But regardless of what's in the box, the system is characterized by an excitation-and-response relationship between input and output.

Fig 3.1 where F[x(t)] stands for the functional relationship between input and output.

A system is linear if it has the following two properties:
1. Superposition: if two inputs each produce a given response, then their sum produces the sum of the individual responses.
2. Scaling: if an input x(t) produces y(t), then for a constant a the input a x(t) produces a y(t).

A system is time invariant if a time shift of the input produces only the same time shift of the output, for any input. If a system is both linear and time-invariant, we call it an LTI system. Note that the two properties are independent of each other: one may have a linear time-varying system or a nonlinear time-invariant system. Here we're concerned with the special but important class of linear time-invariant systems, or LTI systems for short. We'll develop the input-output relationship in the time domain using the superposition integral and the system's impulse response. Then we'll turn to frequency-domain analysis expressed in terms of the system's transfer function.

Impulse Response
One of the most important properties of an LTI system is that the system is characterized by its impulse response. Given the response of the system to an impulse, the response to any other signal can be computed in a straightforward manner.

As the name suggests, the impulse response h(t) is the response of the system to a unit impulse applied at the input. All systems have an impulse response, but only for LTI systems does it allow us to characterize the response to every other input signal.

Superposition Integral
The superposition integral expresses the forced response as a convolution of the input x(t) with the impulse response h(t): y(t) = h(t) * x(t). System analysis in the time domain therefore requires knowledge of the impulse response along with the ability to carry out the convolution.

Step Response
When x(t) = u(t) we can calculate the system's step response, since g(t) = h(t) * u(t). This relation between the impulse and step response follows from the general convolution property: the impulse response equals the derivative of the step response.

Transfer Functions and Frequency Response
Time-domain analysis becomes increasingly difficult for higher-order systems, and the mathematical complications tend to obscure significant points. We'll gain a different and often clearer view of system response by going to the frequency domain. As a first step in this direction, we define the system transfer function H(f) to be the Fourier transform of the impulse response. When h(t) is a real time function, H(f) has hermitian symmetry, namely H(-f) = H*(f).

Now let x(t) be any signal with spectrum X(f). Calling upon the convolution theorem, we take the transform of y(t) = h(t) * x(t) to obtain

Y(f) = H(f) X(f)

This elegantly simple result constitutes the basis of frequency-domain system analysis. It says that the output spectrum Y(f) equals the input spectrum X(f) multiplied by the transfer function H(f). The corresponding amplitude and phase spectra are |Y(f)| = |H(f)| |X(f)| and arg Y(f) = arg H(f) + arg X(f). Here |H(f)| represents the system's amplitude ratio as a function of frequency (sometimes called the amplitude response or gain), and by the same token arg H(f) represents the phase shift. Plots of |H(f)| and arg H(f) versus frequency give us the frequency-domain representation of the system or, equivalently, the system's frequency response. Henceforth, we'll refer to H(f) as either the transfer function or frequency-response function.

Block-Diagram Analysis
• Parallel connection
• Cascade connection
• Feedback connection
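The Y(f) = H(f)X(f) relation above is easy to exercise when the input is a sum of sinusoids, since each tone is simply scaled by |H| and shifted by arg H. The sketch below (a minimal Python/NumPy illustration; the first-order lowpass model, its bandwidth, and the two test tones are assumed example values) prints the output amplitude and phase of each tone.

```python
import numpy as np

# Assumed example system: first-order lowpass, H(f) = 1 / (1 + j f / B), 3-dB bandwidth B.
B = 100.0                                    # Hz (assumed)

def H(f):
    return 1.0 / (1.0 + 1j * f / B)

tones = [(1.0, 50.0, 0.0), (0.5, 400.0, 30.0)]   # (amplitude, frequency Hz, phase deg)

for A, f, phi in tones:
    gain = abs(H(f))                          # amplitude ratio |H(f)|
    shift = np.degrees(np.angle(H(f)))        # phase shift arg H(f)
    print(f"{f:6.1f} Hz: output amplitude = {A * gain:.3f}, output phase = {phi + shift:+.1f} deg")
```

The tone well above B is strongly attenuated and shifted toward -90 degrees, which is exactly the amplitude-ratio and phase-shift reading of H(f) described above.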

Signal Distortion in Transmission

Distortionless Transmission
Distortionless transmission means that the output signal has the same "shape" as the input. More precisely, given an input signal x(t), we say that the output is undistorted if it differs from the input only by a multiplying constant and a finite time delay, that is, y(t) = K x(t - td), where K and td are constants. The properties of a distortionless system are easily found by examining the output spectrum Y(f) = H(f) X(f). By definition of the transfer function, we therefore require that

H(f) = K e^(-j2πftd)     wherever X(f) ≠ 0

Linear Distortion
Linear distortion includes any amplitude or delay distortion associated with a linear transmission system. Amplitude distortion is easily described in the frequency domain; it means simply that the output frequency components are not in correct proportion. Since this is caused by |H(f)| not being constant with frequency, amplitude distortion is sometimes called frequency distortion.

Equalization
Linear distortion, both amplitude and delay, is theoretically curable through the use of equalization networks. Figure 3.2 shows an equalizer in cascade with a distorting transmission channel. Since the overall transfer function is H(f) = Hc(f) Heq(f), the final output will be distortionless if Hc(f) Heq(f) = K e^(-j2πftd), where K and td are more or less arbitrary constants.
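The equalization condition above can be solved directly for Heq(f) over the band where Hc(f) is nonzero. The sketch below (a minimal Python/NumPy illustration; the channel model, gain K, delay td, and frequency band are assumptions chosen only to have something concrete to equalize) builds such an equalizer and checks that the cascade behaves as a constant gain plus pure delay.

```python
import numpy as np

# Choose Heq(f) so that Hc(f) * Heq(f) = K * exp(-j 2 pi f td) over the band of interest.
K, td = 1.0, 1e-3                                   # desired overall gain and delay (assumed)

def Hc(f):                                          # assumed distorting channel
    return 1.0 / (1.0 + 1j * f / 2000.0) * np.exp(-1j * 2 * np.pi * f * 0.4e-3)

def Heq(f):
    return K * np.exp(-1j * 2 * np.pi * f * td) / Hc(f)

f = np.linspace(100.0, 4000.0, 2000)                # dense in-band grid (assumed band)
overall = Hc(f) * Heq(f)

print("|Hc*Heq| min/max :", np.abs(overall).min(), np.abs(overall).max())   # both equal K
phase = np.unwrap(np.angle(overall))
delay = -np.gradient(phase, f) / (2 * np.pi)        # group delay of the cascade
print("group delay      :", delay.mean(), "s (equals td)")
```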

Transmission Loss

Power Gain
The figure above represents an LTI system whose input signal has average power Pin. If the system is distortionless, the average signal power at the output will be proportional to Pin. Thus, the system's power gain is g = Pout/Pin.

Transmission Loss and Repeaters
Any passive transmission medium has power loss rather than gain, since Pout < Pin. We therefore prefer to work with the transmission loss, or attenuation, L = 1/g = Pin/Pout. In the case of transmission lines, coaxial and fiber-optic cables, and waveguides, the output power decreases exponentially with distance. We'll write this relation in decibel form as

LdB = α l

where l is the path length between source and destination and α is the attenuation coefficient in dB per unit length.
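The decibel bookkeeping implied by LdB = αl is worth seeing with numbers. The short Python sketch below (the attenuation coefficient, path length, and repeater spacing are assumed example values, not taken from the text) computes the end-to-end loss and shows how repeater amplifiers with gain equal to each section's loss restore the signal level.

```python
# Path-loss arithmetic in decibels (all numbers are assumptions for illustration).
alpha = 2.5          # attenuation coefficient, dB per km
l = 40.0             # path length, km
P_in = 1.0           # input power, watts

L_dB = alpha * l                          # total transmission loss in dB
P_out = P_in * 10 ** (-L_dB / 10)         # output power without amplification
print(f"loss = {L_dB:.1f} dB, output power = {P_out:.3e} W")

# With a repeater every 20 km whose gain equals the 20-km section loss,
# each hop restores the signal to P_in, so the net end-to-end change is 0 dB.
section_loss_dB = alpha * 20.0
repeater_gain_dB = section_loss_dB
net_dB = 2 * (repeater_gain_dB - section_loss_dB)
print(f"net change with repeaters = {net_dB:.1f} dB -> output = {P_in * 10 ** (net_dB / 10):.1f} W")
```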

Filters and Filtering

Ideal Filters
By definition, an ideal filter has the characteristics of distortionless transmission over one or more specified frequency bands and zero response at all other frequencies. In particular, the transfer function of an ideal bandpass filter (BPF) is constant (with linear phase) over its passband and zero elsewhere. The parameters fl and fu are the lower and upper cutoff frequencies, respectively, since they mark the end points of the passband. The filter's bandwidth is B = fu - fl, which we measure in terms of the positive-frequency portion of the passband. In similar fashion, an ideal lowpass filter (LPF) is defined with fl = 0, so B = fu, while an ideal highpass filter (HPF) has fl > 0 and fu = ∞. Ideal band-rejection or notch filters provide distortionless transmission over all frequencies except some stopband, say fl < |f| < fu, where H(f) = 0.

But all such filters are physically unrealizable in the sense that their characteristics cannot be achieved with a finite number of elements. We'll skip the general proof of this assertion. Instead, we'll give an instructive plausibility argument based on the impulse response of the ideal lowpass filter, which is

h(t) = 2B sinc 2B(t - td)

Ideal lowpass filter: (a) transfer function; (b) impulse response.
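The plausibility argument can be made concrete by evaluating the ideal LPF impulse response. The sketch below (a minimal Python/NumPy illustration; the bandwidth and delay values are assumptions) tabulates h(t) = 2B sinc 2B(t - td) and shows that it is nonzero for t < 0, i.e. the output would begin before the impulse is applied, which is why the ideal filter cannot be realized exactly.

```python
import numpy as np

# Impulse response of the ideal lowpass filter, h(t) = 2B sinc(2B(t - td)).
# numpy.sinc(x) is sin(pi x)/(pi x), matching the sinc definition used earlier.
B = 100.0          # filter bandwidth in Hz (assumed)
td = 10e-3         # filter time delay in seconds (assumed)

t = np.linspace(-0.05, 0.05, 11)
h = 2 * B * np.sinc(2 * B * (t - td))

for ti, hi in zip(t, h):
    print(f"t = {ti:+.3f} s   h(t) = {hi:+8.3f}")
# Nonzero values at t < 0 mean the response precedes the impulse applied at t = 0.
```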

Correlation and Spectral Density
Correlation focuses on time averages and signal power or energy. Taking the Fourier transform of a correlation function leads to a frequency-domain representation in terms of spectral density functions, equivalent to the energy spectral density in the case of an energy signal. In the case of a power signal, the spectral density function tells us the power distribution over frequency.

Correlation of Power Signals
Let v(t) be a power signal, but not necessarily real nor periodic. Our only stipulation is that it must have well-defined average power Pv. If v(t) and w(t) are power signals, the average <v(t) w*(t)> is called the scalar product of v(t) and w(t). The scalar product is a number, possibly complex, that serves as a measure of similarity between the two signals. Schwarz's inequality relates the scalar product to the signal powers Pv and Pw in that |<v(t) w*(t)>|² ≤ Pv Pw.

The crosscorrelation of two power signals is defined as

Rvw(τ) = <v(t) w*(t - τ)>

where v(t) and w(t) are power signals. This is a scalar product with the second signal delayed by τ relative to the first or, equivalently, the first signal advanced by τ relative to the second. The relative displacement τ is the independent variable, the variable t having been washed out in the time average. General properties of Rvw(τ) follow directly from this definition.

But suppose we correlate a signal with itself, generating the autocorrelation function

Rv(τ) = <v(t) v*(t - τ)>

This autocorrelation tells us something about the time variation of v(t), at least in an averaged sense. Properties of the autocorrelation function include Rv(0) = Pv, |Rv(τ)| ≤ Rv(0), and Rv(-τ) = Rv*(τ).

Correlation of Energy Signals
Averaging products of energy signals over all time yields zero. But we can meaningfully speak of the total energy E, and the correlation functions for energy signals can be defined as integrals over all time rather than time averages. Since the integration operation has the same mathematical properties as the time-average operation, all of our previous correlation relations hold for the case of energy signals if we replace the average power Pv with the total energy Ev. Thus, for instance, we have the property Rv(0) = Ev.

Spectral Density Functions
At last we're prepared to discuss spectral density functions. Given a power or energy signal v(t), its spectral density function Gv(f) represents the distribution of power or energy in the frequency domain and has two essential properties. First, the area under Gv(f) equals the average power or total energy, so

Rv(0) = ∫ Gv(f) df

Second, if x(t) is the input to an LTI system with transfer function H(f), then the input and output spectral density functions are related by

Gy(f) = |H(f)|² Gx(f)

since |H(f)|² is the power or energy gain at any f. These two properties, combined in

Ry(0) = ∫ |H(f)|² Gx(f) df

express the output power or energy Ry(0) in terms of the input spectral density.

Chapter 4 Modulation and Frequency Conversion

A communication channel requires a shift of the range of baseband frequencies into other frequency ranges suitable for transmission, and a corresponding shift back to the original frequency range after reception. A shift of the range of frequencies in a signal is accomplished by using modulation, by which some characteristic of a carrier is varied in accordance with a modulating signal. Modulation is performed at the transmitting end of the communication system. At the receiving end, the original baseband signal is restored by the process of demodulation, which is the reverse of the modulation process.

Continuous-wave Modulation
• Amplitude Modulation: the amplitude of a sinusoidal carrier is varied with the incoming message signal.
• Angle Modulation: the instantaneous frequency or phase of a sinusoidal carrier is varied with the message signal.

The following figure displays the waveforms of amplitude-modulated and angle-modulated signals for the case of sinusoidal modulation. Parts (a) and (b) show the sinusoidal carrier and modulating waves, respectively. Parts (c) and (d) show the corresponding amplitude-modulated and frequency-modulated waves, respectively.
(a) Carrier wave
(b) Sinusoidal modulating signal
(c) Amplitude-modulated signal
(d) Frequency-modulated signal

that is.Amplitude Modulation Consider a sinusoidal carrier wave c(t) defined by c(t) = Ac cos(2π fct) where Ac is the carrier amplitude and fc is the carrier frequency. 1. The envelope of s(t) has essentially the same shape as the baseband signal m(t) provided that two requirements are satisfied: Illustrating the amplitude modulation process (a) Baseband signal m(t). When the amplitude sensitivity ka of the modulator is large enough. (b) AM wave for |kam(t)| < 1 for all t. The amplitude of kam(t) is always less than unity. resulting in . Let m(t) denote the baseband signal and the carrier wave c(t) is physically independent of the message signal m(t).3b and 2. and Figures 2. Following figure shows a baseband signal m(t). An amplitude-modulated (AM) wave can be described as: s(t) = Ac [1 + kam(t)] cos(2π fct) where ka is the amplitude sensitivity of the modulator responsible for the generation of the modulated signal s(t). the carrier wave becomes over-modulated. and the envelope of the AM wave s(t) can be expressed as Ac[1+kam(t)]. | kam(t)| > 1. (c) AM wave for |kam(t)| > 1 for some t. kam(t) < 1 for all t This condition illustrated in following figure ensures that 1+kam(t) is always positive.3c show the corresponding AM wave s(t) for two values of amplitude sensitivity ka.

The Fourier transform of the AM wave s(t) is given by

S(f) = (Ac/2)[δ(f - fc) + δ(f + fc)] + (kaAc/2)[M(f - fc) + M(f + fc)]

For a baseband signal m(t) band-limited to the interval -W ≤ f ≤ W, as in figure (a) below, the spectrum S(f) of the AM wave is as shown in figure (b) for the case when fc > W. This spectrum consists of two delta functions weighted by Ac/2 and occurring at ±fc, and two versions of the baseband spectrum translated in frequency by ±fc and scaled in amplitude by kaAc/2. From the spectrum of figure (b), we note the following:

1. The spectrum of the message signal m(t) for negative frequencies becomes visible for positive frequencies, provided the carrier frequency satisfies fc > W.
2. The AM spectrum lying above the carrier frequency fc is the upper sideband, whereas the symmetric portion below fc is the lower sideband.
3. The difference between the highest frequency fc + W and the lowest frequency fc - W defines the transmission bandwidth BT for an AM wave: BT = 2W.

(a) Spectrum of baseband signal. (b) Spectrum of AM wave.

AM Virtues and Limitations
• In the transmitter, AM is accomplished using a nonlinear device.
• In the receiver, AM demodulation is also accomplished using a nonlinear device. Fourier analysis of the voltage developed across a resistive load reveals the AM components. The demodulator output developed across the load resistor is nearly the same as the envelope of the incoming AM wave, hence the name "envelope detector."

Amplitude modulation suffers from two major limitations:
1. AM is wasteful of power. The carrier wave c(t) is independent of the information signal m(t), so only a fraction of the total transmitted power is actually affected by m(t).
2. AM is wasteful of bandwidth. The upper and lower sidebands of an AM wave are related by their symmetry about the carrier, so only one sideband is necessary, and the communication channel then needs to provide only the same bandwidth as the baseband signal.

Linear Modulation Schemes
In its most general form, linear modulation is defined by s(t) = sI(t) cos(2πfct) - sQ(t) sin(2πfct), where sI(t) is the in-phase component and sQ(t) is the quadrature component of the modulated wave s(t). In linear modulation, both sI(t) and sQ(t) are low-pass signals that are linearly related to the message signal m(t). Depending on sI(t) and sQ(t), three types of linear modulation are defined:
1. DSB-SC modulation, where only the upper and lower sidebands are transmitted.
2. SSB modulation, where only the lower or the upper sideband is transmitted.
3. VSB modulation, where only a vestige of one of the sidebands and a modified version of the other sideband are transmitted.

DSB-SC Modulation
DSB-SC modulation is generated by using a product modulator that simply multiplies the message signal m(t) by the carrier wave Ac cos(2πfct), as illustrated in the following figure. We therefore write

s(t) = Ac m(t) cos(2πfct)

The modulated signal s(t) undergoes a phase reversal whenever the message signal m(t) crosses zero.

(a) Block diagram of product modulator. (b) Baseband signal. (c) DSB-SC modulated wave.

The envelope of a DSB-SC signal is different from the message signal, unlike the case of an AM wave, which has a percentage modulation of less than 100 percent. Except for a change in scale factor, the modulation process simply translates the spectrum of the baseband signal by ±fc. When m(t) is limited to the interval -W < f < W, the Fourier transform of s(t) is obtained as

S(f) = (Ac/2)[M(f - fc) + M(f + fc)]

DSB-SC modulation therefore requires the same transmission bandwidth as AM, namely 2W.

(a) Spectrum of baseband signal. (b) Spectrum of DSB-SC modulated wave.

Coherent Detection
The baseband signal m(t) is uniquely recovered from the DSB-SC wave s(t) by first multiplying s(t) with a locally generated sinusoidal wave and then low-pass filtering the product, as in the following figure. This scheme is known as coherent detection or synchronous demodulation. The local oscillator signal is assumed coherent or synchronized with the carrier wave c(t) used in the product modulator to generate s(t).

Denoting the local oscillator signal by Ac' cos(2πfct + φ) and using s(t) = Ac m(t) cos(2πfct) for the DSB-SC wave, the product modulator output in the figure above is

v(t) = Ac' cos(2πfct + φ) s(t)
     = Ac Ac' cos(2πfct) cos(2πfct + φ) m(t)
     = (1/2) Ac Ac' cos(4πfct + φ) m(t) + (1/2) Ac Ac' (cos φ) m(t)

The first term represents a DSB-SC signal with carrier frequency 2fc, whereas the second term is proportional to the baseband signal m(t). At the filter output we obtain a signal given by

vo(t) = (1/2) Ac Ac' (cos φ) m(t)

The demodulated signal vo(t) is proportional to m(t) when the phase error φ is a constant. This is further illustrated by the spectrum V(f) shown in the figure below, where it is assumed that the baseband signal m(t) is limited to -W < f < W.

The amplitude of the demodulated signal is maximum when φ = 0 and minimum (zero) when φ = ±π/2. The zero demodulated signal, which occurs for φ = ±π/2, represents the quadrature null effect of the coherent detector. The phase error φ in the local oscillator causes the detector output to be attenuated by a factor of cos φ. As long as the phase error φ is constant, the detector provides an undistorted version of the original baseband signal m(t).

Spectrum of the product modulator output with a DSB-SC modulated wave as input.
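The cos φ attenuation and the quadrature null effect described above can be reproduced numerically. The sketch below (a minimal Python/NumPy simulation; the carrier, message tone, sample rate, and the crude FFT-based lowpass filter are assumptions made only for this demonstration) demodulates a DSB-SC wave with several local-oscillator phase errors and compares the recovered amplitude with cos φ.

```python
import numpy as np

# Coherent detection of DSB-SC with a local-oscillator phase error phi.
fc, fm = 20_000.0, 500.0
fs = 400_000.0
t = np.arange(0.0, 0.05, 1.0 / fs)
m = np.cos(2 * np.pi * fm * t)                    # assumed message tone
s = m * np.cos(2 * np.pi * fc * t)                # DSB-SC wave with Ac = 1

def lowpass(x, cutoff):
    """Crude brick-wall lowpass via the FFT, adequate for this demonstration."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[f > cutoff] = 0.0
    return np.fft.irfft(X, len(x))

for phi_deg in (0.0, 30.0, 60.0, 90.0):
    phi = np.deg2rad(phi_deg)
    v = s * np.cos(2 * np.pi * fc * t + phi)      # product modulator output
    vo = lowpass(v, 2_000.0)                      # keep only the baseband term
    gain = np.max(np.abs(vo)) / 0.5               # ideal baseband term is (1/2)cos(phi)m(t)
    print(f"phi = {phi_deg:5.1f} deg   recovered/ideal = {gain:.3f}   cos(phi) = {np.cos(phi):.3f}")
```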

Single-Sideband Modulation
In SSB modulation, only the upper or the lower sideband is transmitted. We may generate such a modulated wave by frequency discrimination:
• The first stage is a product modulator, which generates a DSB-SC wave.
• The second stage is a BPF, which is designed to pass one of the sidebands of the modulated wave and suppress the other.

For SSB signal generation, the message spectrum must have an energy gap centered at the origin, as illustrated in figure (a) below. Assuming that the upper sideband is retained, the spectrum of the SSB signal is as shown in figure (b).

(a) Spectrum of a message signal m(t) with an energy gap of width 2fa centered on the origin. (b) Spectrum of the corresponding SSB signal containing the upper sideband.

Three basic requirements apply in designing the BPF used in the frequency discriminator for generating an SSB-modulated wave:
• The desired sideband lies inside the passband of the filter.
• The unwanted sideband lies inside the stopband of the filter.
• The filter's transition band, which separates the passband from the stopband, is twice the lowest frequency component of the message signal.

The most severe requirement of SSB generation using frequency discrimination arises from the unwanted sideband: the nearest frequency component of the unwanted sideband is separated from the desired sideband by only twice the lowest frequency component of the modulating signal.

To demodulate an SSB-modulated signal s(t), we may use a coherent detector, which multiplies s(t) by a locally generated carrier and then low-pass filters the product. This method of demodulation assumes perfect synchronism between the oscillator in the coherent detector and the oscillator in the transmitter. This requirement is usually met in one of two ways:
• A low-power pilot carrier is transmitted in addition to the selected sideband.
• A highly stable oscillator, tuned to the same frequency as the carrier frequency, is used in the receiver.

In the latter method, there would be some phase error φ in the local oscillator output with respect to the carrier wave used to generate the SSB wave. The effect is to introduce a phase distortion in the demodulated signal, where each frequency component of the original message signal undergoes a phase shift φ. This phase distortion is tolerable in voice communications, because the human ear is relatively insensitive to phase distortion; its presence gives rise to a Donald Duck voice effect. In the transmission of music and video signals, however, this form of waveform distortion is utterly unacceptable.

Vestigial Sideband Modulation
In VSB modulation, one of the sidebands is partially suppressed and a vestige of the other sideband is transmitted to compensate for that suppression. A VSB wave can be generated with the frequency-discrimination method: first, we generate a DSB-SC modulated wave and then pass it through a BPF, as shown in the figure below. It is the special design of the BPF that distinguishes VSB modulation from SSB modulation.

Filtering scheme for the generation of a VSB modulated wave.

Assuming that a vestige of the lower sideband is transmitted, the frequency response H(f) of the BPF takes the form shown in the following figure. This frequency response is normalized so that at the carrier frequency fc we have |H(fc)| = 1/2. The cutoff portion of the frequency response around the carrier frequency fc exhibits odd symmetry; that is, in the interval fc - fv < |f| < fc + fv, two conditions are satisfied:

1. The sum of the values of the magnitude response |H(f)| at any two frequencies equally displaced above and below fc is unity.

Magnitude response of the VSB filter; only the positive-frequency portion is shown.

2. The phase response arg H(f) is linear. That is, H(f) satisfies the condition

H(f - fc) + H(f + fc) = 1     for -W < f < W

The transmission bandwidth of VSB modulation is BT = W + fv, where W is the message bandwidth and fv is the width of the vestigial sideband.

The VSB wave is described in the time domain as

s(t) = (Ac m(t)/2) cos(2πfct) ± (Ac m'(t)/2) sin(2πfct)

where the "+" sign corresponds to the transmission of a vestige of the upper sideband, and the "-" sign corresponds to the transmission of a vestige of the lower sideband. The signal m'(t) in the quadrature component of s(t) is obtained by passing the message signal m(t) through a filter having the frequency response

HQ(f) = j[H(f - fc) - H(f + fc)]     for -W < f < W

The following figure displays a plot of the frequency response HQ(f). The quadrature component is to interfere with the in-phase component so as to partially reduce power in one of the sidebands of s(t) and retain simply a vestige of the other sideband. When the vestigial sideband is reduced to zero (i.e., we set fv = 0), the modulated wave s(t) takes the limiting form of an SSB wave; SSB may thus be viewed as a special case of VSB modulation.

Frequency response of a filter for producing the quadrature component of the VSB modulated wave.

Angle Modulation
Angle modulation can provide better discrimination against noise and interference than amplitude modulation. This improvement is achieved at the expense of increased transmission bandwidth; that is, angle modulation provides a practical means of exchanging channel bandwidth for improved noise performance. Let θi(t) denote the angle of a modulated sinusoidal carrier, assumed to be a function of the message signal. The resulting angle-modulated wave is

s(t) = Ac cos[θi(t)]

In the simple case of an unmodulated carrier, the angle θi(t) is θi(t) = 2πfct + Φc, and the corresponding phasor rotates with an angular velocity equal to 2πfc. The constant Φc is the value of θi(t) at t = 0. We may interpret the angle-modulated signal s(t) = Ac cos[θi(t)] as a rotating phasor of length Ac and angle θi(t). The angular velocity of such a phasor is dθi(t)/dt, measured in radians per second. If θi(t) increases monotonically with time, the average frequency in Hz over an interval from t to t + Δt is [θi(t + Δt) - θi(t)]/(2πΔt), and the instantaneous frequency of the angle-modulated signal s(t) is

fi(t) = (1/2π) dθi(t)/dt

Two common forms of angle modulation:

1. Phase modulation (PM): the angle θi(t) is varied linearly with the message signal m(t), as shown by θi(t) = 2πfct + kp m(t). The term 2πfct represents the angle of the unmodulated carrier; the constant kp represents the phase sensitivity of the modulator, expressed in radians per volt, and the angle of the unmodulated carrier is assumed zero at t = 0. The phase-modulated signal s(t) is thus described by

s(t) = Ac cos[2πfct + kp m(t)]

2. Frequency modulation (FM): the instantaneous frequency fi(t) is varied linearly with the message signal m(t), as shown by

fi(t) = fc + kf m(t)

The term fc represents the frequency of the unmodulated carrier, and the constant kf represents the frequency sensitivity of the modulator. Integrating this equation with respect to time and multiplying the result by 2π gives the angle θi(t).

θi(t) = 2πfct + 2πkf ∫ m(τ) dτ

where the integral runs from 0 to t and the angle of the unmodulated carrier wave is assumed zero at t = 0. The frequency-modulated signal s(t) is therefore described by

s(t) = Ac cos[2πfct + 2πkf ∫ m(τ) dτ]

Allowing the angle θi(t) to become dependent on the message signal m(t), as in θi(t) = 2πfct + kp m(t), or on its integral as above, causes the zero crossings of a PM or FM signal to no longer have perfect regularity in their spacing. The envelope of a PM or FM signal is constant, whereas the envelope of an AM signal depends on the message signal.

An FM signal can be generated by first integrating m(t) and then using the result as the input to a phase modulator, as in figure (a) below. A PM signal can be generated by first differentiating m(t) and then using the result as the input to a frequency modulator, as in figure (b).

Relationship between FM and PM: (a) FM scheme using a phase modulator; (b) PM scheme using a frequency modulator.

Frequency Modulation
Consider a sinusoidal modulating signal defined by m(t) = Am cos(2πfmt). The instantaneous frequency of the resulting FM signal equals

fi(t) = fc + kf Am cos(2πfmt) = fc + Δf cos(2πfmt)

where Δf = kf Am.

The frequency deviation Δf represents the maximum departure of the instantaneous frequency of the FM signal from the carrier frequency fc. For an FM signal, the frequency deviation Δf is proportional to the amplitude of the modulating signal and is independent of the modulation frequency.

The angle θi(t) of the FM signal is obtained as θi(t) = 2πfct + (Δf/fm) sin(2πfmt). The ratio of the frequency deviation Δf to the modulation frequency fm is commonly called the modulation index of the FM signal:

β = Δf / fm     (2.31)

θi(t) = 2πfct + β sin(2πfmt)     (2.32)

From these equations the parameter β represents the phase deviation of the FM signal, that is, the maximum departure of the angle θi(t) from the angle 2πfct of the unmodulated carrier; hence, β is measured in radians. The FM signal itself is given by

s(t) = Ac cos[2πfct + β sin(2πfmt)]

Depending on the modulation index β, we may distinguish two cases of FM:
• Narrowband FM, for which β is small compared to one radian.
• Wideband FM, for which β is large compared to one radian.
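The single-tone FM relations in Eqs. (2.31) and (2.32) can be checked by generating the wave and recovering the instantaneous frequency from the phase. The sketch below (a minimal Python/NumPy illustration; the carrier, tone frequency, deviation, and sample rate are assumed example values) does exactly that and also confirms the constant envelope of an FM signal.

```python
import numpy as np

# Single-tone FM: s(t) = Ac cos(2 pi fc t + beta sin(2 pi fm t)), beta = df / fm.
Ac, fc = 1.0, 10_000.0        # assumed carrier amplitude and frequency
fm, df = 100.0, 500.0         # assumed modulating frequency and frequency deviation
beta = df / fm                # modulation index (5 here -> wideband FM)

fs = 1_000_000.0
t = np.arange(0.0, 0.02, 1.0 / fs)
theta = 2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t)
s = Ac * np.cos(theta)

fi = np.gradient(theta, t) / (2 * np.pi)          # instantaneous frequency (1/2pi) d(theta)/dt
print("beta                :", beta)
print("max departure of fi :", np.max(np.abs(fi - fc)), "Hz (close to the deviation df)")
print("envelope (constant) :", np.max(np.abs(s)))
```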

Frequency Translation
Frequency translation is also referred to as frequency mixing or heterodyning. Its operation is illustrated by the signal spectra shown in the following figure: a message spectrum extending from fa to fb for positive frequencies in figure (a) is shifted upward by an amount fc, and the message spectrum for negative frequencies is translated downward in a symmetric fashion.

(a) Spectrum of a message signal m(t) with an energy gap of width 2fa centered on the origin. (b) Spectrum of the corresponding SSB signal containing the upper sideband.

A modulated wave s1(t) centered on carrier frequency f1 is to be translated upward such that its carrier frequency is changed from f1 to f2. This may be accomplished using the mixer shown in the figure below. The mixer is a device that consists of a product modulator followed by a BPF.

Block diagram of mixer.

In the following figure, assume that the mixer input s1(t) is an AM signal with carrier frequency f1 and bandwidth 2W. Part (a) of the figure displays the AM spectrum S1(f), assuming that f1 > W. Part (b) displays the spectrum S'(f) of the resulting signal s'(t) at the product modulator output.

The signal s'(t) may be viewed as the sum of two modulated components: one component represented by the shaded spectrum in figure (b), and the other represented by the unshaded spectrum in that figure. Depending on whether the incoming carrier frequency f1 is translated upward or downward, we may identify two different situations:

• Up conversion: the translated carrier frequency f2 is greater than the incoming carrier frequency f1, and the required local oscillator frequency fL is defined by f2 = f1 + fL, or fL = f2 - f1. The unshaded spectrum in figure (b) defines the wanted signal s2(t), and the shaded spectrum defines the image signal associated with s2(t).

• Down conversion: the translated carrier frequency f2 is smaller than the incoming carrier frequency f1, and the required oscillator frequency fL is defined by f2 = f1 - fL, or fL = f1 - f2. The shaded spectrum in figure (b) defines the wanted modulated signal s2(t), and the unshaded spectrum defines the associated image signal.

The BPF in the mixer is designed to pass the wanted modulated signal s2(t) and to eliminate the associated image signal. This is done by aligning the midband frequency of the filter with f2 and assigning it a bandwidth equal to that of the signal s1(t).

such treatment of circuit elements is not possible since voltage and current waves do not affect the entire circuit at the same time.Chapter 5 Transmission Lines Introduction In an electronic system. At low frequencies. • The circuit must be broken down into unit sections within which the circuit elements are considered to be lumped. Coaxial cable 3. power is considered to be delivered to the load through the wire. In the microwave frequency region. Two wire line 2. Planar Transmission Lines  Strip line  Microstrip line  Slot line  Fin line  Coplanar Waveguide  Coplanar slot line Analysis of differences between Low and High Frequency • At low frequencies. Waveguide  Rectangular  Circular 4. the delivery of power requires the connection of two wires between the source and the load. • At microwave frequencies. the circuit elements are lumped since voltage and current waves affect the entire circuit at the same time. Any physical structure that will guide an electromagnetic wave place to place is called a Transmission Line. Types of Transmission Lines 1. • This is because the dimensions of the circuit are comparable to the wavelength of the waves according to the formula: λ = c/f where. . power is considered to be in electric and magnetic fields that are guided from lace to place by some physical structure.

Transmission Line Concepts
• The transmission line is divided into small units where the circuit elements can be lumped.
• Assuming the resistance of the lines is zero, the transmission line can be modeled as an LC ladder network with inductors in the series arms and capacitors in the shunt arms.
• The value of inductance and capacitance of each part determines the velocity of propagation of energy down the line.
• The time taken for a wave to travel one unit length is T(s) = (LC)^0.5.
• The velocity of the wave is v (m/s) = 1/T.
• The impedance at any point is Z = V (at any point)/I (at any point) = (L/C)^0.5.

Reflection from Resistive Loads
• When the resistive load termination is not equal to the characteristic impedance, part of the power is reflected back and the remainder is absorbed by the load. The amount of voltage reflected back is described by the voltage reflection coefficient

Γ = Vr/Vi

where Vr = reflected voltage and Vi = incident voltage. The reflection coefficient is also given by

Γ = (ZL - ZO)/(ZL + ZO)

• Line terminated in its characteristic impedance: if the end of the transmission line is terminated in a resistor equal in value to the characteristic impedance of the line, as calculated by the formula Z = (L/C)^0.5, then the voltage and current are compatible and no reflections occur.
• Line terminated in a short: when the end of the transmission line is terminated in a short (RL = 0), the voltage at the short must be equal to the product of the current and the resistance, that is, zero.
• Line terminated in an open: when the line is terminated in an open, the resistance between the open ends of the line must be infinite; thus the current at the open end is zero.
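The reflection coefficient formula above is simple enough to tabulate for the limiting cases just described. The short Python sketch below (the characteristic impedance and the list of loads are assumed example values) computes Γ and the fraction of incident power that is reflected for a matched load, mismatched loads, a short, and a near-open.

```python
# Reflection coefficient and reflected power fraction for resistive loads
# on a line of characteristic impedance Zo (assumed values).
Zo = 50.0                                          # ohms
for ZL in (50.0, 100.0, 25.0, 0.0, 1e9):           # matched, mismatched, short, ~open
    gamma = (ZL - Zo) / (ZL + Zo)
    reflected = abs(gamma) ** 2                    # fraction of incident power reflected
    print(f"ZL = {ZL:>12.1f} ohm   Gamma = {gamma:+.3f}   reflected power fraction = {reflected:.3f}")
```

A matched load gives Γ = 0 (no reflection), a short gives Γ = -1, and an open approaches Γ = +1, consistent with the three terminations listed above.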

Effect of a Lossy Line on Voltage and Current Waves
• The effect of resistance in a transmission line is to continuously reduce the amplitude of both the incident and reflected voltage and current waves.
• Skin effect: as frequency increases, the depth of penetration into adjacent conductive surfaces decreases for the boundary currents associated with electromagnetic waves. This results in the confinement of the voltage and current waves at the boundary of the transmission line, thus making the transmission more lossy. The skin depth is given by

skin depth (m) = 1/(πμγf)^0.5

where f = frequency in Hz, μ = permeability in H/m, and γ = conductivity in S/m.

Standing Waves
A standing wave is formed by the addition of incident and reflected waves and has nodal points that remain stationary with time.
• Voltage standing wave ratio: VSWR = Vmax/Vmin. The voltage standing wave ratio expressed in decibels is called the standing wave ratio: SWR (dB) = 20 log10 VSWR.
• The maximum impedance of the line is Zmax = Vmax/Imin.
• The minimum impedance of the line is Zmin = Vmin/Imax or, alternatively, Zmin = Zo/VSWR.
• Relationship between VSWR and the reflection coefficient: VSWR = (1 + |Γ|)/(1 - |Γ|), and Γ = (VSWR - 1)/(VSWR + 1).

General Input Impedance Equation
The input impedance of a transmission line at a distance L from the load impedance ZL, for a line of characteristic impedance Zo, is

Zinput = Zo [(ZL + j Zo tan(BL))/(Zo + j ZL tan(BL))]

where B is the phase constant (wavelength constant) defined by B = 2π/λ.

Half-Wave and Quarter-Wave Transmission Lines
The input impedance of the half-wave transmission line in terms of its terminating impedance is obtained by letting L = λ/2 in the impedance equation, giving Zinput = ZL Ω. The input impedance of the quarter-wave transmission line is obtained by letting L = λ/4 in the impedance equation, giving Zinput = Zo²/ZL; equivalently, Zo = (Zinput × Zoutput)^0.5, so the characteristic impedance of a quarter-wave section is the geometric mean of its input and output (load) impedances.
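The general input impedance equation and its half-wave and quarter-wave special cases are easy to verify numerically. The sketch below (a minimal Python/NumPy illustration; Zo, ZL, and the wavelength are assumed example values) evaluates Zinput for L = λ/2 and L = λ/4.

```python
import numpy as np

# Input impedance of a lossless line of length L terminated in ZL:
#   Zin = Zo (ZL + j Zo tan(B L)) / (Zo + j ZL tan(B L)),  B = 2 pi / lambda.
def z_input(Zo, ZL, L, wavelength):
    B = 2 * np.pi / wavelength
    t = np.tan(B * L)
    return Zo * (ZL + 1j * Zo * t) / (Zo + 1j * ZL * t)

Zo, ZL, lam = 50.0, 100.0, 2.0            # ohms, ohms, metres (assumed)

print("half-wave    Zin =", np.round(z_input(Zo, ZL, lam / 2, lam), 3))   # ~ ZL
print("quarter-wave Zin =", np.round(z_input(Zo, ZL, lam / 4, lam), 3))   # ~ Zo**2/ZL = 25
print("geometric mean of Zin and ZL:", np.sqrt(25.0 * 100.0))             # = Zo = 50
```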

Smith Chart
For complex transmission line problems, the use of the formulas becomes increasingly difficult and inconvenient. An indispensable graphical method of solution is the use of the Smith chart.

Components of a Smith Chart
• Horizontal line: the horizontal line running through the center of the Smith chart represents either the resistive or the conductive component. Zero resistance or conductance is located at the left end and infinite resistance or conductance at the right end of the line.
• Circles of constant resistance and conductance: circles of constant resistance are drawn on the Smith chart tangent to the right-hand side of the chart and its intersection with the centerline. These circles of constant resistance are used to locate complex impedances and to assist in obtaining solutions to problems involving the Smith chart.

• Lines of constant reactance: lines of constant reactance are shown on the Smith chart as curves that start from a given reactance value on the outer circle and end at the right-hand side of the center line.

Solutions to Microwave Problems Using the Smith Chart
The types of problems for which Smith charts are used include the following:
1. Plotting a complex impedance on a Smith chart
2. Finding VSWR for a given load
3. Finding the admittance for a given impedance
4. Finding the input impedance of a transmission line terminated in a short or open
5. Finding the input impedance at any distance from a load ZL
6. Locating the first maximum and minimum from any load
7. Matching a transmission line to a load with a single series stub
8. Matching a transmission line with a single parallel stub
9. Matching a transmission line to a load with two parallel stubs

• Plotting a Complex Impedance on a Smith Chart
o To locate a complex impedance, Z = R ± jX, or admittance, Y = G ± jB, on a Smith chart, normalize the real and imaginary parts of the complex impedance. Locating the value of the normalized real term on the horizontal line scale locates the resistance circle. Locating the normalized value of the imaginary term on the outer circle locates the curve of constant reactance. The intersection of the circle and the curve locates the complex impedance on the Smith chart.

• Finding the VSWR for a Given Load
o Normalize the load and plot its location on the Smith chart.
o Draw a circle with a radius equal to the distance between the 1.0 point and the location of the normalized load, with the center of the Smith chart as the center.
o The intersection of the right-hand side of the circle with the horizontal resistance line locates the value of the VSWR.

• Finding the Input Impedance at any Distance from the Load
o The load impedance is first normalized and located on the Smith chart.
o The VSWR circle is drawn for the load.
o A line is drawn from the 1.0 point through the load to the outer wavelength scale.
o To locate the input impedance on the Smith chart at any given distance from the load, advance in the clockwise direction from the located point a distance in wavelengths equal to the distance to the new location on the transmission line.

Power Loss
• Return power loss: when an electromagnetic wave travels down a transmission line and encounters a mismatched load or a discontinuity in the line, part of the incident power is reflected back down the line. The return loss is defined as

Preturn = 10 log10 (Pi/Pr) = 20 log10 (1/|Γ|)

• Mismatch power loss: the term mismatch loss describes the loss caused by the reflection due to a mismatched line. It is defined as

Pmismatch = 10 log10 [Pi/(Pi - Pr)]
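The return-loss and mismatch-loss definitions above are straightforward to evaluate once Γ is known. The short Python sketch below (the line and load impedances are assumed example values) computes both quantities for a mildly mismatched termination.

```python
import numpy as np

# Return loss and mismatch loss for a mismatched termination (assumed values).
Zo, ZL = 50.0, 75.0
gamma = (ZL - Zo) / (ZL + Zo)

Pi = 1.0                          # incident power (normalized)
Pr = abs(gamma) ** 2 * Pi         # reflected power

return_loss_dB = 10 * np.log10(Pi / Pr)            # equals 20*log10(1/|Gamma|)
mismatch_loss_dB = 10 * np.log10(Pi / (Pi - Pr))

print(f"Gamma         = {gamma:+.3f}")
print(f"return loss   = {return_loss_dB:.2f} dB")
print(f"mismatch loss = {mismatch_loss_dB:.3f} dB")
```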

Power Loss
• Return Power Loss: when an electromagnetic wave travels down a transmission line and encounters a mismatched load or a discontinuity in the line, part of the incident power is reflected back down the line. The return loss is defined as:
Preturn = 10 log10 (Pi/Pr) = 20 log10 (1/|Γ|)
• Mismatch Power Loss: the term mismatch loss is used to describe the loss caused by the reflection due to a mismatched line. It is defined as:
Pmismatch = 10 log10 [Pi/(Pi − Pr)]

Chapter 6 Radio Wave Propagation

Radio Spectrum
On a linear radio frequency scale with 1 Hz = 1/3 mm, the radio spectrum would stretch to about 10^9 m, extending well beyond the Moon (3.8×10^8 m away). Almost all of the RF spectrum is regulated and allocated to various services.
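A quick arithmetic check of that comparison, assuming the scale runs to the roughly 3000 GHz upper edge of the radio spectrum (the 10^9 m figure follows directly):

```python
mm_per_hz = 1 / 3                       # 1 Hz occupies 1/3 mm on the linear scale
top_of_radio_spectrum_hz = 3e12         # assumed ~3000 GHz upper edge of the radio spectrum
scale_length_m = top_of_radio_spectrum_hz * mm_per_hz / 1000.0   # mm -> m
moon_distance_m = 3.8e8
print(f"scale length: {scale_length_m:.1e} m, Moon: {moon_distance_m:.1e} m")
# scale length: 1.0e+09 m, Moon: 3.8e+08 m -> the scale overshoots the Moon roughly 2.6x
```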

Electromagnetic Waves
The EM field at any point around us is the result of the vector combination of countless components coming from the Universe, generated by natural processes and by man-made devices over all the time elapsed from the Big Bang up to the present moment. Such is the environment in which we live and in which modern wireless communication systems have to operate. Wireless communication is facilitated by electromagnetic waves. An electromagnetic wave consists of a time-varying electric field traveling through space together with a time-varying magnetic field. The two fields are perpendicular to each other and to the direction of propagation. Electromagnetic fields have a property known as polarization. The polarization of an electromagnetic wave is determined by the orientation of the electric field vector relative to the surface of the earth. If the electric field vector is perpendicular to the surface, the wave is vertically polarized. If the electric field vector is parallel to the surface, the wave is horizontally polarized.

Since electromagnetic waves travel through space, space can be thought of as a kind of transmission line without any conductors, and like other transmission lines it has a characteristic impedance. For free space the characteristic impedance is 377 ohms. The simplest EM wave is the TEM (transverse electromagnetic) wave.

The electromagnetic waves that we wish to receive are referred to as signals. The signals that we don't want are noise. Interference to the desired signal caused by other sources of RF waves, man-made or natural, is known as RFI (Radio Frequency Interference). As the number of wireless devices increases, mitigating RFI can become a full-time job (and headache).

The relation between the signal radiated and the signal received, as a function of distance and other variables, is defined as a Propagation Model. Generally, propagation depends on:
• Wavelength (frequency) & polarization
• Environment/climate/weather
• Time

Principal propagation effects:
1. Basic energy spreading
2. Effects of obstructions (indoor, outdoor)
3. Effects of the ground
4. Tropospheric effects (outdoor): a. clear air; b. non-clear air
5. Ionospheric effects (outdoor)

Propagation modes fall into these broad categories:
• Indoor propagation
• Outdoor propagation: long-term modes
• Outdoor propagation: short-term modes

Reflection
The abrupt change in direction of a wave front at an interface between two dissimilar media, so that the wave front returns into the medium from which it originated. The reflecting object is large compared to the wavelength.

Refraction
Redirection of a wave front passing through a medium having a refractive index that is a continuous function of position (e.g., a graded-index optical fibre, or the earth's atmosphere), or through a boundary between two dissimilar media. For two media of different refractive indices, the angle of refraction is approximated by Snell's Law, known from optics.

Diffraction
The mechanism by which waves spread as they pass barriers in an obstructed radio path (through openings or around barriers). It is one cause of signal attenuation due to walls.

Scattering
A phenomenon in which the direction (or polarization) of the wave is changed when the wave encounters propagation-medium discontinuities smaller than the wavelength (e.g., foliage, precipitation such as rain and snow, sand, and atmospheric gases). Scattering results in a disordered or random change in the energy distribution, and is important when evaluating potential interference between terrestrial stations sharing the same frequency.

Absorption
The conversion of the transmitted EM energy into another form, usually thermal. The conversion takes place as a result of interaction between the incident energy and the material medium, at the molecular or atomic level.

Super-refraction and ducting
Important when evaluating potential interference between terrestrial/earth stations sharing the same frequency. The relevant factors are:
• coupling losses into the duct/layer: a. geometry; b. nature of path (sea/land)
• propagation loss associated with the duct/layer: a. frequency; b. refractivity gradient; c. nature of path (sea, land, coastal); d. terrain roughness
Typical refractivity gradients:
• Standard atmosphere: −40 N units/km (median), temperate climates
• Super-refractive atmosphere: < −40 N units/km, warm maritime regions
• Ducting: < −157 N units/km (fata morgana, mirage)

Interaction Between Electromagnetic Waves and the Medium
When electromagnetic waves travel through a medium they can interact with that medium in a variety of ways. The first type of interaction is reflection. Radio waves can be reflected by a solid object much as light waves are. Whenever a radio wave moves across a boundary from one medium to another (assuming that the media have different refractive indices) there will be a reflection, and the magnitude and phase of the reflected wave depend on the properties of the reflecting medium. The incidence and reflection angles are equal. A perfect reflector that reflects all RF incident on it has a reflection coefficient of 1.0. Metals and sea water are examples of good RF reflectors.

The reflection of radio waves by a solid object is affected by their polarization. If the incident wave has its electric field vector parallel to the reflecting surface, the electric field will be shorted out by the conductivity of the surface. If the electric field vector is perpendicular to the reflecting surface, it is reflected.

The second type of interaction is refraction. When radio waves pass from one material to another, they change direction at the interface between the two materials. This is called refraction. The angles of incidence and refraction are related to the refractive indices of the two media by Snell's law:
n1 sinθ1 = n2 sinθ2
Variables n1 and θ1 are the refractive index and direction of travel in the incident medium, and n2 and θ2 are the refractive index and direction of travel in the refracting medium. Refraction is an important aspect of radio wave propagation. At frequencies between 3 and 30 MHz, the ionosphere refracts RF and redirects the waves back towards the earth's surface. The refractive index of air is dependent on the temperature and relative humidity of the air. Above 100 MHz, however, a temperature inversion can cause RF waves to be bent just enough to follow the curvature of the earth and travel for hundreds of miles with little loss.

A third type of interaction is diffraction. When radio waves encounter an obstacle, the obstacle casts a shadow, just as it would when illuminated with light. However, the shadow region is not completely void of radio waves, because some radio waves are scattered around the edge of the object. The amount of scattering depends on the size of the electromagnetic wave relative to the size of the object. As one gets farther from the object, one eventually reaches a point where the scattered waves have completely filled in the shadow. For example, an interstate underpass is dark underneath, because its size (~10 m) is millions of times larger than light waves (~0.5 µm). The bridge casts a sharp shadow and there is little illumination. However, FM radio waves, whose wavelength is about 3 m, are diffracted significantly by the bridge, and it is possible to receive FM signals on a car radio while driving under the bridge: there is so much diffraction that the shadow zone is completely washed out. The degree of diffraction also depends on the sharpness of the edges of the object. A gradually sloping hill does not diffract radio waves much and the shadow zone behind it is quite small. On the other hand, a sharply defined cliff or mountain causes significant diffraction and a sizeable shadow zone.

There are 3 basic modes of propagation for radio waves in the vicinity of the earth, which will be discussed in more detail in the next three sections:
Ground wave propagation
Space wave (direct wave) propagation
Sky wave propagation
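Snell's law is straightforward to evaluate numerically. A small illustrative sketch (the refractive-index values in the example are made up):

```python
import math

def refraction_angle_deg(n1, n2, theta1_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2). Angles measured from the normal.
    Returns None when there is no real solution (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None
    return math.degrees(math.asin(s))

# Example: a wave passing into a slightly denser medium (indices roughly representing
# dry vs. humid air) at a grazing 80-degree angle of incidence.
print(f"{refraction_angle_deg(1.0000, 1.0003, 80.0):.2f} degrees")  # bends slightly toward the normal
```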

Ground Wave Propagation
Ground waves are radio waves that follow the curvature of the earth. Ground waves are always vertically polarized, because a horizontally polarized ground wave would be shorted out by the conductivity of the ground. Because ground waves are actually in contact with the ground, they are greatly affected by the ground's properties. Because ground is not a perfect electrical conductor, ground waves are attenuated as they follow the earth's surface. This effect is more pronounced at higher frequencies, limiting the usefulness of ground wave propagation to frequencies below 2 MHz. Ground waves will propagate long distances over sea water, due to its high conductivity. Ground waves are used primarily for local AM broadcasting and communications with submarines. Submarine communications takes place at frequencies well below 10 kHz, which can penetrate sea water (remember the skin effect?) and which are propagated globally by ground waves.

Space (Direct) Wave Propagation
Space waves, also known as direct waves, are radio waves that travel directly from the transmitting antenna to the receiving antenna. In order for this to occur, the two antennas must be able to "see" each other; that is, there must be a line of sight path between them. The diagram on the next page shows a typical line of sight. The maximum line of sight distance between two antennas depends on the height of each antenna. If the heights are measured in feet, the maximum line of sight distance, in miles, is given by:
d = √(2hT) + √(2hR)
Because a typical transmission path is filled with buildings, hills and other obstacles, it is possible for radio waves to be reflected by these obstacles, resulting in radio waves that arrive at the receive antenna from several different directions. This situation is known as multipath propagation. Because the length of each path is different, the waves will not arrive in phase. They may reinforce each other or cancel each other, depending on the phase differences. When the transmitter and/or receiver are in motion, the path lengths are continuously changing and the signal fluctuates wildly in amplitude. Multipath is very troublesome for mobile communications, and it can cause major distortion to certain types of signals. Ghost images seen on broadcast TV signals are the result of multipath – one picture arrives slightly later than the other and is shifted in position on the screen. Amplitude variations caused by multipath that make AM unreadable are eliminated by the limiter stage in an NBFM receiver. For this reason, NBFM is used almost exclusively for mobile communications.
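As a quick check of the line-of-sight formula above (an illustrative sketch, not part of the original notes):

```python
import math

def line_of_sight_miles(h_tx_ft, h_rx_ft):
    """d = sqrt(2*hT) + sqrt(2*hR): antenna heights in feet, result in miles."""
    return math.sqrt(2 * h_tx_ft) + math.sqrt(2 * h_rx_ft)

# Example: a 100 ft tower working a handheld radio held 6 ft above the ground
print(f"{line_of_sight_miles(100, 6):.1f} miles")   # ~17.6 miles
```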

An interesting example of direct (space) wave communications is satellite communications. The satellite is used as a relay station: a high gain antenna can be pointed at the satellite to transmit signals to it. The satellite receives signals from the ground at one frequency, known as the uplink frequency, translates this frequency to a different frequency, known as the downlink frequency, and retransmits the signal. Because two frequencies are used, the reception and transmission can happen simultaneously. A satellite operating in this way is known as a transponder. If a satellite is placed in an orbit 22,000 miles above the equator, it appears to stand still in the sky as viewed from the ground. The satellite has a tremendous line of sight from its vantage point in space, from which approximately ¼ of the earth's surface is visible, and many ground stations can communicate through a single satellite.

Sky Waves
Propagation beyond the line of sight is possible through sky waves. Sky waves are radio waves that propagate into the atmosphere and then are returned to earth at some distance from the transmitter. We will consider two cases:
• ionospheric refraction
• tropospheric scatter

Ionospheric Refraction
This propagation mode occurs when radio waves travel into the ionosphere, a region of charged particles 50 – 300 miles above the earth's surface. The ionosphere is created when the sun ionizes the upper regions of the earth's atmosphere. These charged regions are electrically active. The ionosphere bends and attenuates radio waves at frequencies below 30 MHz; above 200 MHz the ionosphere becomes completely transparent. The ionosphere is responsible for most propagation phenomena observed at HF, MF, LF and VLF.

The ionosphere consists of 4 highly ionized regions:
The D layer at a height of 38 – 55 mi
The E layer at a height of 62 – 75 mi
The F1 layer at a height of 125 – 150 mi (winter) and 160 – 180 mi (summer)
The F2 layer at a height of 150 – 180 mi (winter) and 240 – 260 mi (summer)
The density of ionization is greatest in the F layers and least in the D layer.

Though created by solar radiation, the ionosphere does not completely disappear shortly after sunset. The D and E layers disappear almost immediately, but the F1 and F2 layers do not disappear; rather, they merge into a single F layer located at a distance of 150 – 250 mi above the earth. Recombination of charged particles is quite slow at that altitude, so the F layer lasts until dawn.

The diagram below shows the geometry of ionospheric refraction. The maximum frequency that can be returned by the ionosphere when the radio waves are vertically incident on the ionosphere (transmitted straight up) is called the critical frequency. The critical frequency varies from 1 to 15 MHz under normal conditions. The critical frequency varies from place to place, and it is possible to view this variation by looking at a real-time critical frequency map.

Most communications is done using radio waves transmitted toward the horizon, to get the maximum possible distance per hop. The highest frequency that can be returned when the takeoff angle is zero degrees is called the MUF, the maximum usable frequency. The MUF and the critical frequency are related by the secant law: MUF = fc sec θi, where θi is the angle at which the wave strikes the ionosphere, measured from the vertical. The MUF can range from 3 to 50 MHz. Near real-time maps of the MUF of the ionosphere are also available online.
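As a rough numerical illustration of the secant-law relationship (a sketch under the flat-earth assumption; the example values are made up):

```python
import math

def muf_mhz(critical_freq_mhz, incidence_angle_deg):
    """Secant law: MUF = fc / cos(theta_i), with theta_i measured from the vertical.
    Flat-earth approximation; earth curvature limits the factor to roughly 3-4."""
    return critical_freq_mhz / math.cos(math.radians(incidence_angle_deg))

# Example: a 7 MHz critical frequency and a wave striking the layer 72 degrees from vertical
print(f"MUF = {muf_mhz(7.0, 72.0):.1f} MHz")   # ~22.7 MHz
```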

The ionosphere also attenuates radio waves. The amount of attenuation is roughly inversely proportional to the square of the frequency of the wave. Thus attenuation is a severe problem at lower frequencies, making daytime global communications via sky wave impossible at frequencies much below 5 MHz.

The properties of the ionosphere are variable. There are 3 periodic cycles of variation:
• Diurnal (daily) cycle
• Seasonal cycle
• Sunspot cycle

The daily cycle is driven by the intensity of the solar radiation ionizing the upper atmosphere. The D and E layers form immediately after sunrise and the F layer splits into two layers, the F1 and F2. The density of the layers increases until noon and then decreases slowly throughout the afternoon. After sunset, the D and E layers disappear and the F1 and F2 merge to form the F layer. Take another look at the real-time MUF map and notice the difference between the MUF numbers in the day and night regions. The thick gray lines indicate the location of the terminator – the division between day and night. If you aren't sure which region is the daytime region, it has a small yellow sun icon in its center.

Seasonal variation is linked to the tilt of the earth's axis and the distance between the earth and the sun. The effects are complex, but the result is that ionospheric propagation improves dramatically for the northern hemisphere during its winter, while seasonal variation in the southern hemisphere is much smaller.

The 11 year sunspot cycle exerts a tremendous effect on the ionosphere. Near the peak of the cycle (the last peak occurred in December 2001) the sun's surface is very active, emitting copious amounts of UV radiation and charged particles, which increase the density of the ionosphere. This leads to a general increase in MUFs and in attenuation at lower frequencies. When the sun becomes extremely active, or a major solar flare occurs, the ionosphere can become so dense that global ionospheric communications are disrupted.

The maximum distance that can be covered by a single hop using ionospheric propagation is about 2500 miles. Greater distances can be covered using multi-hop propagation, in which radio waves are reflected by the ground back up to the ionosphere. The ionosphere is not uniform, and different regions refract RF differently; multipath propagation is the result. This leads to rapid variations in the received signal amplitude known as fading.

Chapter 7 Antennas

Basic Antenna Theory
An antenna is a device that provides a transition between electric currents on a conductor and electromagnetic waves in space. A transmitting antenna transforms electric currents into radio waves and a receiving antenna transforms an electromagnetic field back into electric current. There are several basic properties that are common to all antennas:
• Reciprocity: an antenna's electrical characteristics are the same whether it is used for transmitting or receiving. Because this is always true, throughout this lecture we will consider antennas as transmitting antennas.
• Polarization: polarization is the orientation of the electric field vector of the electromagnetic wave produced by the antenna. For most antennas, the orientation of the antenna conductor determines the polarization. Polarization may be vertical, horizontal or elliptical. The diagram above shows vertical and horizontal polarization. If the radio wave's electric field vector points in some other direction, it is said to be obliquely polarized. If the electric field rotates in space, such that its tip follows an elliptical path, it is elliptically polarized.
• Wavelength: this is the length of one RF wave. It can be computed by either of the following formulas, depending on the units required:

λ (in m) = 300 / f (in MHz)    or    λ (in ft) = 984 / f (in MHz)

• Gain (directivity): a measure of the degree to which an antenna focuses power in a given direction, relative to the power radiated by a reference antenna in the same direction. Units of measure are dBi (isotropic antenna reference) or dBd (half-wave dipole reference). The two gain measurements can be converted using the following formula: dBi = dBd + 2.1
If the directivity of the transmitting and receiving antennas is known, it is possible to compute the power received by the receiving antenna using either of the formulas below:
o When using dB: antenna gain should be expressed in dBi, wavelength and distances in m, and powers in dBm or dBW.
o When using gain ratios and powers in W: antenna gains should be expressed as a ratio, distances and wavelengths in m, and powers in W.
• Beamwidth: the angular separation between the half-power (−3 dB) points in an antenna's radiation pattern. In general, the beamwidth of the main lobe of the radiation pattern decreases as the directivity increases.
• Near field (induction field): the electromagnetic field created by an antenna that is only significant at distances of less than 2D²/λ from the antenna, where D is the longest dimension of the antenna.
• Near field region: a spherical region of radius 2D²/λ centered on the antenna.

• Far field (radiation field): the electromagnetic field created by the antenna that extends throughout all space. It is the field used for communications. At distances greater than 2D²/λ from the antenna, it is the only field.
• Far field region: the region outside the near field region, at distances greater than 2D²/λ.
• Input impedance: the impedance measured at the antenna input terminals. In general it is complex and has two real parts and one imaginary part:
o Radiation resistance – represents conversion of power into RF waves (real)
o Loss resistance – represents conductor losses, ground losses, etc. (real)
o Reactance – represents power stored in the near field (imaginary)
• Efficiency: the ratio of radiation resistance to total antenna input resistance, η = Rradiation/(Rradiation + Rloss). The loss resistances come from conductor losses and losses in the ground (the near field of the antenna can interact with the ground and other objects near the antenna). The efficiency of practical antennas varies from less than 1% for certain types of low frequency antennas to 99% for some types of wire antennas.
• Electrical length: the length or distance expressed in terms of wavelengths. This came up in the section on transmission lines.
• Bandwidth: generally the range of frequencies over which the antenna system's SWR remains below a maximum value, typically 2.0.
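Several of the antenna quantities defined above lend themselves to quick numerical checks. The sketch below is illustrative only (function names are mine); the received-power calculation uses the standard free-space Friis relation, which is what the dB formula referred to earlier appears to be:

```python
import math

def wavelength_m(f_mhz):
    """lambda (m) = 300 / f (MHz)."""
    return 300.0 / f_mhz

def dbd_to_dbi(gain_dbd):
    """dBi = dBd + 2.1"""
    return gain_dbd + 2.1

def received_power_dbm(pt_dbm, gt_dbi, gr_dbi, distance_m, f_mhz):
    """Standard Friis relation in dB form:
    Pr = Pt + Gt + Gr - 20*log10(4*pi*d/lambda)  (free-space spreading loss)."""
    lam = wavelength_m(f_mhz)
    return pt_dbm + gt_dbi + gr_dbi - 20 * math.log10(4 * math.pi * distance_m / lam)

def antenna_efficiency(r_radiation, r_loss):
    """Efficiency = R_radiation / (R_radiation + R_loss)."""
    return r_radiation / (r_radiation + r_loss)

# Examples (made-up values): a 2.4 GHz link over 1 km with 100 mW (20 dBm) transmit
# power and 6 dBi antennas at both ends, plus two antennas of very different efficiency.
print(f"lambda at 2400 MHz = {wavelength_m(2400):.3f} m")            # 0.125 m
print(f"Pr = {received_power_dbm(20, 6, 6, 1000, 2400):.1f} dBm")    # about -68 dBm
print(f"short whip: {antenna_efficiency(4, 12):.0%}, dipole: {antenna_efficiency(73, 1):.0%}")
```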

Azimuth and Elevation
These are angles used to describe a specific position in an antenna's radiation pattern. Azimuth is a horizontal angle, generally measured from true north. The elevation angle is a vertical angle, ranging from 0 degrees (horizon) to 90 degrees (zenith).

Antenna Types
There are many different types of antennas. The antennas most relevant to designs at 2.4 GHz, and detailed further below, are as follows:
• Dipole Antennas
• Multiple Element Dipole Antennas
• Yagi Antennas
• Flat Panel Antennas
• Parabolic Dish Antennas
• Slotted Antennas
• Microstrip Antennas

Dipole Antenna
All dipole antennas have a generalized radiation pattern. A sample elevation pattern can be seen above in Figure 1a. This graph shows that the dipole antenna is not a directive antenna: the elevation pattern shows that a dipole antenna is best used to transmit and receive from the broadside of the antenna. It is sensitive to any movement away from a perfectly vertical position; you can move about 45 degrees from perfect verticality before the performance of the antenna degrades by more than half. Other dipole antennas may have different amounts of vertical variation before there is noticeable performance degradation. From the azimuth pattern, illustrated above in Figure 1b, you find that the antenna works equally well in a full 360 degrees around the antenna: its power is equally split through 360 degrees around the antenna. Since the dipole antenna radiates equally well in all directions on the horizontal plane, it is able to work equally well in any horizontal configuration. Physically, dipole antennas are cylindrical in nature, and may be tapered or shaped on the outside to conform to some size specification. The antennas are usually fed through an input coming up to the bottom of the antenna, but can be fed into the center of the antenna as well.

Multiple Element Dipole Antennas
Multiple element dipole antennas have some of the same general characteristics as the dipole: we see a similar elevation radiation pattern, as well as a similar azimuth pattern. The biggest differences are the directionality of the antenna in the elevation pattern and the increased gain that results from using multiple elements. By using multiple elements to construct the antenna, the antenna can be configured with different amounts of gain; this allows for multiple antenna designs with similar physical characteristics. As can be seen from the elevation pattern in the following figure, multiple element dipole antennas are very directive in the vertical plane.

Multiple Element Dipole Elevation Pattern

Yagi Antennas
Yagi antennas consist of an array of independent antenna elements, with only one of the elements driven to transmit electromagnetic waves. The number of elements (specifically, the number of director elements) determines the gain and directivity. Yagi antennas are not as directional as parabolic dish antennas, but more directional than flat panel antennas.

Yagi-Uda Antenna

Yagi Antenna Elevation Radiation Pattern

Flat Panel Antennas
Flat panel antennas are just that: flat panels, configured in a patch-type format and physically in the shape of a square or rectangle. Flat panel antennas are quite directional, as they have most of their power radiated in one direction in both the vertical and horizontal planes. The directivity of the flat panel antenna can be seen in the elevation pattern (Figure 4) and in the azimuth pattern (Figure 5) below. This can provide excellent directivity and considerable gain. Flat panel antennas can be made to have varying amounts of gain based on the construction.

High Gain Flat Panel Elevation Pattern
High Gain Flat Panel Azimuth Pattern

Parabolic Dish Antennas
Parabolic dish antennas use physical features as well as multiple element antennas to achieve extremely high gain and sharp directivity. These antennas use a reflective dish in the shape of a parabola to focus all received electromagnetic waves onto the antenna at a single point. The parabolic dish also works to catch all the radiated energy from the antenna and focus it into a narrow beam when transmitting. By harnessing all of the antenna's power and sending it in the same direction, this type of antenna is capable of providing high gain. As can be seen in Figure 5, the parabolic dish antenna is very directional.

Elevation Pattern of a Parabolic Dish Antenna

Slotted Antennas
The slotted antenna exhibits radiation characteristics that are very similar to those of the dipole, but its physical construction consists only of a narrow slot cut into a ground plane. The elevation and azimuth patterns are similar to those of the dipole. As with the microstrip antennas mentioned below, slotted antennas provide little antenna gain and do not exhibit high directionality, as evidenced by their radiation plots and their similarity to the dipole's.

Microstrip Antennas
Microstrip antennas can be made to emulate many of the different styles of antennas explained above. Microstrip antennas offer several tradeoffs that need to be considered. Because they are manufactured with PCB traces on actual PCB boards, they can be very small and lightweight. This comes at the cost of not being able to handle as much output power as other antennas, and they are made for very specific frequency ranges. Due to this characteristic, microstrip antennas are not well suited for wideband communications systems. In many cases, however, limiting the frequencies that can be received is actually beneficial to the performance of a radio. Their most attractive features are the ease with which they can be constructed and integrated into an existing design, and their low cost. These factors most often offset their mediocre performance characteristics.

Chapter 8 Analog Filter Design

Introduction to Analog Filters: Frequency Response and Transfer Function
Analog filtering is done by analog electronic circuits, either active or passive. Ideally, the response curve would be a square window, so that frequencies beyond the pass band are completely discarded, but practically that is impossible. Many approaches have been devised to design filters that approximate the ideal filter response as closely as possible. The most popular of these approaches are the Butterworth, Chebyshev, and Elliptic filter responses. Many realistic filter responses can be seen in the figures below.

Many approaches realize realistic frequency responses

Quality Factor and Filter Design Parameters
In designing filters, the specification requirement is usually the pass-band, transition-band, and stop-band characteristics of the filter.

Filter Design Parameters
The pass-band is normally defined as the frequency range in which the signal is not attenuated by more than 3 dB. Because the pass-band and the stop-band are not clearly demarcated, there is a transition-band where the attenuation increases before reaching the specified stop-band attenuation level.

Quality Factor
The quality factor, known popularly as the Q-factor, is another convenient way to specify filter performance. Rather than specifying the order n of a certain filter type, it is often more convenient to specify the Q-factor, because it directly expresses the actual performance of the filter we need.

For a band-pass filter with mid frequency fm, the quality factor Q is defined as the ratio of fm to the bandwidth, where the bandwidth is the pass region between the cut-off frequencies fc1 and fc2 (within which the signal is passed with no more than 3 dB of attenuation):
Q = fm / (fc2 − fc1)

For low-pass and high-pass filters, Q represents the pole quality and is defined as:
Q = sqrt(bi) / ai
where ai and bi are the coefficients of the corresponding second-order filter stage.
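Two quick numerical illustrations of these definitions (a sketch, not from the original notes):

```python
import math

def bandpass_q(fm, fc1, fc2):
    """Q = fm / (fc2 - fc1) for a band-pass filter."""
    return fm / (fc2 - fc1)

def pole_q(ai, bi):
    """Q = sqrt(bi) / ai for a second-order stage with normalized
    denominator 1 + ai*s + bi*s^2."""
    return math.sqrt(bi) / ai

# Band-pass centered at 10 kHz with 3-dB corners at 9.5 kHz and 10.5 kHz
print(f"Q = {bandpass_q(10e3, 9.5e3, 10.5e3):.1f}")   # 10.0
# A second-order Butterworth stage (ai ~ 1.4142, bi = 1) has Q ~ 0.707
print(f"Q = {pole_q(1.4142, 1.0):.3f}")               # 0.707
```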

High Qs can be graphically presented as the distance between the 0-dB line and the peak point of the filter's gain response. An example is given in the following figure, which shows a tenth-order Tschebyscheff low-pass filter and its five partial filters with their individual Qs.

Graphical Presentation of Quality Factor Q on a Tenth-Order Tschebyscheff Low-Pass Filter with 3-dB Passband Ripple

The gain response of the fifth filter stage peaks at 31 dB, which is the logarithmic value of Q5:
Q5 [dB] = 20·log Q5
Solving for the numerical value of Q5 yields:
Q5 = 10^(31/20) = 35.48
which is within 1% of the theoretical value of Q = 35.85. The graphical approximation is good for Q > 3; for lower Qs, the graphical values differ from the theoretical values significantly. However, only the higher Qs are of concern, since the higher the Q is, the more a filter inclines toward instability.

Analog Filter Elements
Analog filter elements can be passive or active. A passive filter uses inductors and capacitors, while an active filter uses operational amplifiers or another kind of amplification circuit. Passive filters are effective and efficient at high frequencies because the inductors and capacitors can be small, but they are undesirable at low frequencies because large capacitors and inductors take up a lot of space and are expensive. By using operational amplifiers, the need for inductors to construct higher order filters can be eliminated. The design reference here is dedicated to the active type and focuses on filter design with operational amplifiers. The operational amplifier symbol is shown in the figure below.

In designing an active filter, we have to choose appropriate components to make the design meet the requirements. Here are some considerations in choosing the op-amp for an active filter:
1. Use an op-amp with an appropriate frequency response and slew rate. This information can be obtained from the op-amp manufacturer's data sheet.
2. The open loop gain of the op-amp should be at least 50 times the filter gain.
3. The input impedance/resistance of the op-amp should be at least 100 times the largest resistor used in the circuit.
For resistors, you can use 5% tolerance for a fourth or lower order filter, and 1% for higher orders. In most designs, assuming 1% tolerance resistors are used, a cheap ceramic capacitor can be used for low-grade applications and a Mylar capacitor is acceptable; polystyrene and Teflon capacitors are better, but more expensive, so use them for high performance filters.

Filter Design
There are many ways to construct a third or higher order analog filter, and one of the most popular methods is by cascading first order and second order filter stages. For example, to construct a fourth order filter we can cascade two second order stages, and to construct a fifth order filter we can cascade two second order stages and a first order stage. The following figure shows the general filter construction.

General Filter Construction

The infinite input impedance and zero output impedance approximation of an active filter designed with op-amps makes the cascade produce non-interacting stages; therefore the transfer function of each stage remains unchanged, giving a total cascaded output response equivalent to the product of the individual transfer functions. The first order and second order stages are easy to design, and using the design reference presented here it will be easy to construct many types of higher order filters (low-pass, high-pass, band-pass) with many approaches (Butterworth, Chebyshev, Elliptic).
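As an illustrative sketch of this cascading idea, the snippet below uses SciPy's standard filter-design helpers (not part of the original notes, and digital rather than op-amp stages, but the cascading principle is identical): a fifth-order Butterworth low-pass is generated as cascaded second-order sections, and multiplying the individual stage responses reproduces the overall response.

```python
import numpy as np
from scipy import signal

# Fifth-order Butterworth low-pass (cut-off at 0.2 of the Nyquist rate) returned
# as cascaded second-order sections; one section is effectively first order.
sos = signal.butter(5, 0.2, btype="low", output="sos")

w, h_total = signal.sosfreqz(sos, worN=512)

# Multiply the responses of the individual stages: because ideal (non-interacting)
# stages do not load each other, the product equals the overall cascaded response.
h_product = np.ones_like(h_total)
for section in sos:
    b, a = section[:3], section[3:]
    _, h_stage = signal.freqz(b, a, worN=512)
    h_product *= h_stage

print(np.allclose(h_total, h_product))   # True
print(len(sos), "cascaded stages")       # 3
```

This is exactly the property that lets each op-amp stage be designed independently, with the overall transfer function taken as the product of the stage transfer functions.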