
303 chap 4

Learning outcome
4. digital transmission and multiplexing
Assessment criteria – Practical
The learner can:
practical not required
Assessment criteria – Knowledge
The learner can:
4.1 describe how noise appears on different types of signal
4.2 explain reasons for noise being cumulative throughout the length of an analogue communication
system
4.3 describe means by which the effect of noise is reduced and made non-cumulative on a digital system
4.4 define bit rate
4.5 explain the relationship between bit rate, system bandwidth and signal-to-noise ratio as expressed
by the hartley-shannon law
4.6 perform calculations using the hartley-shannon law
4.7 describe the conditions that provide a high bit rate using pulse regeneration
4.8 define the term ‘multiplexing’
4.9 differentiate between multiplexing methods
4.10 explain the need for multiplexing
4.11 explain the connection between multiplexing and modulation
4.12 explain factors that make demultiplexing more difficult than multiplexing
4.13 explain the concept of space division multiplexing (SDM).
4.14 describe channel sharing in frequency division multiplexing (FDM)
4.15 explain the reason for channel separation in FDM.
4.16 explain the mechanism of channel separation in FDM.
4.17 describe the essential features of TDM.
4.18 define the terms used in TDM.
4.19 explain the benefits of digital time division multiplexing (TDM) systems.
4.20 describe the relationship between sampling rate, channel time slot and the possible number of
channels.
4.21 perform calculations using the relationship between sampling rate, channel time slot and the
possible number of channels.
4.22 explain elements of statistical time division multiplexing (TDM)
4.23 describe code division multiplexing (CDM) in terms of its essential elements
4.24 explain multiple access techniques.

Range
Communication links
Analogue, digital
Multiplexing methods
Space Division Multiplexing (SDM), Frequency Division
Multiplexing (FDM), Time Division Multiplexing (TDM),
Code Division Multiplexing (CDM)
Signal
Analogue, rectangular pulses
Reasons
Thermal noise proportional to resistance, resistance
proportional to length
Hartley-Shannon law
I = B log2(1 + S/N), where I is the information capacity in bits/s, B is the
bandwidth in Hertz and S/N is the signal-to-noise power ratio
Conditions
Wide bandwidth, low signal-to-noise ratio
Types
SDM, FDM, TDM, CDM
Factors
Synchronisation, filtering
Concept
Circuits spaced apart, one channel per circuit
Channel sharing
Each channel permanently allocated a part of the total
frequency band
Benefits
Noise reduction, cable plant utilisation costs
Features
Common highway, channel interleaving, synchronisation
Terms
Channel, time slot, guard time, sampling rate
Relationship
n × f × TS = 1, where n is the number of channels, f is the sampling rate
and TS is the period of a time slot (a worked example follows this Range list)
Elements
Dynamic slot allocation, overheads, data throughput, bit
rate
Essential elements
Spreading code, chip rate, data rate, synchronisation
Multiple access Techniques
Frequency Division Multiple Access (FDMA), Time Division
Multiple Access (TDMA), Code Division Multiple Access
(CDMA)
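
As a quick illustration of the TDM relationship listed above, the short Python sketch below shows how the number of channels follows from the sampling rate and time-slot period. The 8 kHz sampling rate is the usual telephony figure; the 3.9 microsecond slot period is an assumed illustrative value.

# n * f * TS = 1 (from the Range above): n channels sampled at rate f, each slot lasting TS
f = 8_000            # voice sampling rate in Hz (the usual telephony value)
T_S = 3.9e-6         # assumed time-slot period of 3.9 microseconds
n = 1 / (f * T_S)    # number of channel time slots that fit into one sampling period
print(round(n))      # ~32 channels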

describe how noise appears on different types of signal

How types of noise in data communication systems affect the network


Learn about the different types of noise in data communication -- including thermal, intermodulation,
cross-talk, impulse and shot noise
Noise is any unwanted disturbance of a useful signal that obscures its information content. Many
different types of noise in data communication exist, and managing noise successfully requires the use
of multiple techniques. Among the most common types of noise are electronic noise, thermal noise,
intermodulation noise, cross-talk, impulse noise, shot noise and transit-time noise. Acoustic noise is also
a factor for those working within networking office environments.
Electronic noise
IT staff mainly deals with electronic noise, created in the radio or network systems that transmit data, or
in the medium -- such as wire and air -- through which signals are transmitted.
Thermal noise
Thermal noise occurs in all transmission media and communication equipment, including passive
devices. It arises from random electron motion and is characterized by a uniform distribution of energy
over the frequency spectrum with a Gaussian distribution of levels; the higher the temperature of the
components or the medium, the greater the level of thermal noise.
Intermodulation noise
Intermodulation (IM) effects result when two or more signals pass through a nonlinear device or
medium and interact with each other in ways that produce additional signals, such as harmonics and
subharmonics of input signal frequencies. These resulting IM components may be inside or outside the
frequency band of interest for a particular device; it is only when they are inside the band of interest
that IM effects become IM noise.
IM noise is a significant concern in radio communications, including cellular telephony and data
networks. Component manufacturers test for it -- it can result from either design characteristics of a
system, or from manufacturing defects. But it can become an issue post-manufacture due to damage to
a component, so field engineers need to watch for it, too.

Cross-talk
Cross-talk refers to signals interfering with each other electromagnetically. There are essentially three
causes of cross-talk:

 Electrical coupling between transmission media, like adjacent wires in a
multilane serial interface connection -- for example, Ethernet or Fibre Channel;

 Poor control of frequency response -- i.e., defective filters or poor filter design; and

 Nonlinear performance in analog multiplex systems.

High levels of cross-talk increase bit-error rates and degrade a digital path's performance.
Impulse noise
Impulse noise is a noncontinuous series of irregular pulses or noise spikes of short duration, broad
spectral density and of relatively high amplitude. Impulse noise can be caused by positioning a
communications cable near a source of intermittent but strong electromagnetic pulses, such as an
elevator motor. It degrades telephony only marginally, if at all, but can seriously corrupt data
transmissions.
Shot noise 
Shot noise, also called quantum noise, is the variation in a signal that is caused by the quantized nature
of the light and electricity making up the signal. We tend to think of a signal, whether a beam of light or
a stream of electrons, as being uniform: a steady stream of particles traversing a path. The physical
reality, though, is not one of uniform and constant movement, but of clumpy movement that only looks
smooth on average across long, large flows of light or electricity -- as measured by intensity of light or by
electrical current density.
Shot noise has become a major concern, as circuits get smaller and faster, reducing the time over which
flows can be averaged down to -- and past -- the nanosecond level, and current flows down to a
nanoampere or less. Chip and system designs increasingly need to account for shot noise, as the drive to
shrink components and increase component speeds continues.
Transit-time noise
Transit-time noise is a similar phenomenon to shot noise in that it affects systems more as they get
smaller due to the quantized nature of electricity. Transit-time noise results when a signal frequency's
period is the same as the time an electron takes to travel from sender to receiver. The noise results from
the statistical variations in actual electron flow.
Acoustic noise
All these types of electronic noise above are distinct from acoustic noise, which encompasses sounds in
an environment, including:

 Continuous noise that's steady in tone and volume, such as noise created by
some machinery in industrial environments, like conveyor belts or worm gears
moving materials along a production line; and retail environments, such as vent
fans in a restaurant kitchen; as well as by things like fluorescent lighting -- the 60-
cycle hum -- and air-conditioning in all kinds of environments.

 Low-frequency noise, also called infrasound, which is below the range of
sounds normally audible to humans -- i.e., at or below about 20 hertz -- but which
can be very disturbing to many people. Infrasound can be generated by machinery,
and even by the vibration of buildings in response to wind or other forces.

 Workplace noise, variable in volume and tone, such as what's typically heard in
the background in call centers and open-plan offices, or is experienced by workers
in factories, kitchens and other environments -- often in addition to continuous noise.
As far as networks are concerned, these types of noise are part of the signal -- the sound being
transmitted -- since they are not artifacts of the technology used to transmit it.
Acoustic noise is mitigated through a combination of workplace design principles, like breaking up large
open offices; furnishings, such as using sound-damping coverings on walls and incorporating
plantings in a space; work practices, like staggering shift breaks in call centers; and human
interface technologies, such as noise-cancelling headphones, which help people lower their voices, and
directional microphones, which pick up less background noise.

explain reasons for noise being cumulative throughout the length of an analogue communication system

Introducing Telecommunications
Digital communication systems are becoming, and in many ways have already become, the
communication system of choice among us telecommunication folks. Certainly, one of the reasons for
this is the rapid availability and low cost of digital components. But this reason is far from the full story.
To explain the full benefits of a digital communication system, we'll use Figures 1.7 and 1.8 to help.

Figure 1.7. (a) Transmitted analog signal; (b) Received analog signal

Figure 1.8. (a) Transmitted digital signal; (b) Received digital signal
Let's first consider an analog communication system, using Figure 1.7. Let's pretend the transmitter
sends out the analog signal of Figure 1.7(a) from point A to point B. This signal travels across the
channel, which adds some noise (an unwanted signal). The signal that arrives at the receiver now looks
like Figure 1.7(b). Let's now consider a digital communication system with the help of Figure 1.8. Let's
imagine that the transmitter sends out the signal of Figure 1.8(a). This signal travels across the channel,
which adds noise. The signal that arrives at the receiver is shown in Figure 1.8(b).
Here's the key idea. In the digital communication system, even after noise is added, a 1 (sent as +5 V)
still looks like a 1 (+5 V), and a 0 (−5 V) still looks like a 0 (−5 V). So, the receiver can determine that the
information transmitted was a 1 0 1. Since it can decide this, it's as if the channel added no noise. In the
analog communication system, the receiver is stuck with the noisy signal and there is no way it can
recover exactly what was sent. (If you can think of a way, please do let me know.) So, in a digital
communication system, the effects of channel noise can be much, much less than in an analog
communication system.
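
The decision process described above can be sketched in a few lines of Python. This is a minimal illustration rather than anything from the text: the ±5 V levels follow Figure 1.8, while the noise amplitude, the threshold and the helper names are assumptions chosen so the noise never crosses the decision boundary.

import random

def transmit(bits, noise_amplitude=1.5):
    # Map bits to +/-5 V and add bounded random channel noise (illustrative values).
    levels = [5.0 if b else -5.0 for b in bits]
    return [v + random.uniform(-noise_amplitude, noise_amplitude) for v in levels]

def regenerate(samples, threshold=0.0):
    # Receiver decision: anything above the threshold is read as 1, anything below as 0.
    return [1 if s > threshold else 0 for s in samples]

bits = [1, 0, 1]
received = transmit(bits)          # noisy analogue voltages, e.g. [4.2, -5.8, 5.6]
recovered = regenerate(received)   # clean bits again: [1, 0, 1]
assert recovered == bits           # noise below the decision margin is discarded entirely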

Academic Press Library in Signal Processing: Volume 1


Digital communication—analog signal processing of digital signals

In digital communication systems, the situation is opposite to that in the previous section in that
analog signal processing is now used for processing digital signals. Figure 5.9 shows a simple model
of a digital communication system. The original digital information is contained in xQ(n). This
information is D/A converted before being transmitted over an analog channel. Finally, on the
receiver side, the signal is A/D converted, generating yQ(n). It is desired to have yQ(n) = xQ(n) so as
to receive the information sent. In practice, due to many different error sources in the communication
channel, advanced digital signal processing methods are required to transmit information
successfully. Regarding the sampling and reconstruction, one can make use of the same analysis
methods here as for the system depicted earlier in Figure 5.6.

Figure 5.9. Simple model of a digital communication system.


Digital communication fundamentals for cognitive radio
DATA TRANSMISSION

A digital communication system is designed to transport a message from an


information source through a transmission medium (i.e., channel) to an
information sink. The goal is to accomplish this task such that the
information is efficiently transmitted with a certain degree of reliability. In
digital communication systems, the metric of reliability for a given
transmission is commonly referred to as the bit error rate (BER) or
probability of bit error, which is measured at the receiver output. Note that
several data transmission applications require a minimum data rate, where
the amount of information transferred from information source to
information sink must be achieved within a specific time duration.
Consequently, bandwidth efficiency is an important characteristic of any
digital communication system.

3.2.1 Fundamental Limits

When designing a digital communication system, it is important to


understand the achievable limits when transmitting data under specific
operating conditions such as the signal-to-noise ratio (SNR) or the available
bandwidth. Consequently, many digital communication system designers use
the concept of channel capacity to mathematically determine these
achievable limits. The channel capacity was first derived by Claude Shannon
and Ralph Hartley and is given by
(3.1)  C = B log2(1 + γ),

where B is the transmission bandwidth, and γ is the SNR. The channel capacity C, measured in bits
per second (b/s), is defined as the maximum data rate a system can achieve without error, even when
the channel is noisy. Hence, Equation (3.1) is very useful for the following reasons.

1. It provides us with a bound on the achievable data rates given bandwidth B and signal-to-noise
ratio (received SNR). This can be employed in the ratio

η = R / C

where R is the signaling rate and C is the channel capacity. Note that, as η → 1, the system
becomes more efficient.

2. It provides us with a basis for trade-off analysis between B and γ.

3. It can be used for comparing the noise performance of one modulation scheme versus another.

Note that Equation (3.1) provides us with only the achievable data rate limit
but does not tell us how to construct a transceiver to achieve this limit.
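
A minimal Python sketch of Equation (3.1) and of the efficiency ratio η = R/C is given below; the bandwidth, SNR and signalling rate are arbitrary illustrative values, not taken from the text.

import math

def channel_capacity(bandwidth_hz, snr_linear):
    # Equation (3.1): C = B * log2(1 + SNR), in bits per second
    return bandwidth_hz * math.log2(1 + snr_linear)

B = 1e6                        # assumed 1 MHz channel
snr = 10 ** (20 / 10)          # assumed 20 dB SNR -> linear ratio of 100
C = channel_capacity(B, snr)   # about 6.66 Mb/s

R = 4e6                        # assumed signalling rate of 4 Mb/s
eta = R / C                    # efficiency ratio; approaches 1 for an efficient system
print(f"C = {C/1e6:.2f} Mb/s, efficiency R/C = {eta:.2f}")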

3.2.2 Sources of Transmission Error

A data transmission error may occur in any part of a communication system. Two common sources of
error are the introduction of noise into the data transmission and the effects of a band-limited
transmission medium, which are two characteristics of a communications channel.
In designing a digital communication system, we often represent the physical transmission channels as
mathematical models. One of the most commonly used models is the linear filter channel with additive noise, as
illustrated in Figure 3.2. In this model, s(t) is the channel input, c(t) is the impulse response of the linear
filter, n(t) is an additive random noise process, and r(t) is the channel output, which can be computed as

Figure 3.2. The linear filter channel with additive noise.


r(t) = s(t) ∗ c(t) + n(t),

where ∗ denotes convolution. Such channel models can be further categorized according to whether
the linear filter is time variant or time invariant.

Additive White Gaussian Noise (AWGN)

By definition, noise is an undesirable disturbance accompanying the received signal that may distort the
information carried by the signal. Noise can originate from both human-made and natural sources, such
as thermal noise due to the thermal agitation of electrons in transmission lines, other wireless
transmitters, or even other conductors. The combination of such sources of noise is known to possess an
approximately Gaussian distribution, as shown in Figure 3.3(a). A histogram of zero-mean Gaussian
noise with a variance of σn2 = 0.25 is shown, with the corresponding continuous probability density
function (pdf) superimposed on it. The continuous Gaussian pdf is defined as [72]

Figure 3.3. Time and frequency domain properties of AWGN.


(3.2)  fX(x) = (1 / √(2πσn²)) e^(−(x − μn)² / (2σn²)),

where μn and σn² are the mean and variance.


In most situations, the assumption that the noise introduced by the channel is white, which means that
the noise received at any frequency is approximately the same, can justifiably be made. An example of
zero-mean white Gaussian noise with a variance of σn2 = 0.25 is shown in Figure 3.3(b). Even if this
assumption may not hold in certain cases, it does help make the mathematical analysis of digital
communication systems tractable.
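
As a rough check of the figures quoted above (zero mean, variance 0.25), a short Python sketch can generate white Gaussian noise samples and confirm their statistics; the sample count is arbitrary.

import random
import statistics

sigma = 0.5                                            # variance 0.25 -> standard deviation 0.5
noise = [random.gauss(0.0, sigma) for _ in range(100_000)]

print(f"sample mean     ~ {statistics.fmean(noise):+.4f}   (expected 0)")
print(f"sample variance ~ {statistics.pvariance(noise):.4f}   (expected 0.25)")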
The receiver has to make a decision on what has been transmitted based on the received signal, which is
a mixture of noise and originally transmitted signal. Usually this is accomplished via a “nearest-neighbor”
rule with a known set of symbols. However, if a large amount of noise is added to the signal, the
received symbol might be shifted closer to a symbol other than the correct one, resulting in a decision
error, as seen in Figure 3.4, where additive white Gaussian noise is included with the signal. It is readily
observable that, as the noise power increases, the constellation points become fuzzy and begin to
overlap each other. It is at this point that the system begins to experience errors.
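
The nearest-neighbour decision rule and the effect of growing noise power can be sketched as follows. This is an illustrative Python example, not the chapter's own code: the unnormalised ±1/±3 16-QAM levels, the noise standard deviations and the symbol count are all assumed values.

import random

# Unnormalised 16-QAM constellation: in-phase and quadrature levels of +/-1 and +/-3.
constellation = [(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

def nearest_neighbour(point):
    # Decide on the constellation point closest (in Euclidean distance) to the received sample.
    return min(constellation, key=lambda c: (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2)

def symbol_error_rate(noise_sigma, n_symbols=20_000):
    errors = 0
    for _ in range(n_symbols):
        sent = random.choice(constellation)
        received = (sent[0] + random.gauss(0, noise_sigma),
                    sent[1] + random.gauss(0, noise_sigma))
        if nearest_neighbour(received) != sent:
            errors += 1
    return errors / n_symbols

for sigma in (0.2, 0.5, 1.0):    # increasing noise power makes the points "fuzzy" and overlap
    print(f"sigma = {sigma}: symbol error rate ~ {symbol_error_rate(sigma):.4f}")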

Figure 3.4. 16-QAM signal constellations with varying amounts of noise.

Band-Limited Channel

In general, usable radio frequency (RF) spectrum is limited and increasingly


becoming overpopulated. In addition, digital communication transceivers
become increasingly expensive when handling wide transmission
bandwidths. Consequently, most transmissions are band limited. Narrowband
transmissions use narrowband filters at the transmitting and receiving ends
to permit only the modulated signal through the system. One problem with
this is the noise appearing at the output of the filter.

Random Fractal Signals


Secure Digital Communications

A digital communications system is one that is based on transmitting and receiving bit streams. The
basic processes involved are as follows: (i) a digital signal is obtained from sampling an analogue
signal derived from some speech and/or video system; (ii) this signal (floating point stream) is
converted into a binary signal consisting of 0s and 1s (bit stream); (iii) the bit stream is then
modulated and transmitted; (iv) at reception, the transmitted signal is demodulated to recover the
transmitted bit stream; (v) the (floating point) digital signal is reconstructed. Digital to analogue
conversion may then be required depending on the type of technology being used.

In the case of sensitive information, an additional step is required between


stages (ii) and (iii) above where the bit stream is coded according to some
classified algorithm. Appropriate decoding is then introduced between stages
(iv) and (v) with suitable pre-processing to reduce the effects of transmission
noise for example which introduces bit errors. The bit stream coding
algorithm is typically based on a pseudo random number generator or
nonlinear maps in chaotic regions of their phase spaces (chaotic number
generation). The modulation technique is typically either Frequency
Modulation or Phase Modulation. Frequency modulation involves assigning a
specific frequency to each 0 in the bit stream and another higher (or lower)
frequency to each 1 in the stream. The difference between the two
frequencies is minimized to provide space for other channels within the
available bandwidth. Phase modulation involves assigning a phase value (0,
π/2, π, 3π/2) to one of four possible combinations that occur in a bit stream
(i.e. 00, 11, 01 or 10).
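
A possible mapping of bit pairs to the four phase values mentioned above can be sketched in Python as follows; the particular pairing of bit patterns to phases is an assumption, since the text only lists the four phases and the four bit pairs.

import math

# Assumed mapping of bit pairs to the four phase values; the text does not say which pair
# gets which phase.
PHASE_MAP = {(0, 0): 0.0, (0, 1): math.pi / 2, (1, 1): math.pi, (1, 0): 3 * math.pi / 2}

def phases(bit_stream):
    # Group the stream into pairs and return the carrier phase assigned to each pair.
    pairs = zip(bit_stream[0::2], bit_stream[1::2])
    return [PHASE_MAP[pair] for pair in pairs]

print(phases([0, 0, 1, 1, 0, 1, 1, 0]))   # [0, pi, pi/2, 3*pi/2]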

Scrambling methods can be introduced before binarization. A conventional


approach to this is to distort the digital signal by adding random numbers to
the out-of-band components of its spectrum. The original signal is then
recovered by lowpass filtering. This approach requires an enhanced
bandwidth but is effective in the sense that the signal can be recovered from
data with a relatively low signal-to-noise ratio. ‘Spread spectrum’ or
‘frequency hopping’ is used to spread the transmitted (e.g. frequency
modulated) information over many different frequencies. Although spread
spectrum communications use more bandwidth than necessary, by doing so,
each communications system avoids interference with another because the
transmissions are at such minimal power, with only spurts of data at any one
frequency. The emitted signals are so weak that they are almost
imperceptible above background noise. This feature results in an added
benefit of spread spectrum which is that eaves-dropping on a transmission is
very difficult and, in general, only the intended receiver may ever know that
a transmission is taking place, the frequency hopping sequence being known
only to the intended party. Direct sequencing, in which the transmitted
information is mixed with a coded signal, is based on transmitting each bit of
data at several different frequencies simultaneously, with both the
transmitter and receiver synchronized to the same coded sequence. More
sophisticated spread spectrum techniques include hybrid ones that leverage
the best features of frequency hopping and direct sequencing as well as
other ways to code data. These methods are particularly resistant to
jamming, noise and multipath anomalies, a frequency dependent effect in
which the signal is reflected from objects in urban and/or rural environments
and from different atmospheric layers, introducing delays in the transmission
that can confuse any unauthorized reception of the transmission.

The purpose of Fractal Modulation is to try and make a bit stream ‘look like’
transmission noise (assumed to be fractal). The technique considered here
focuses on the design of algorithms which encode a bit stream in terms of
two fractal dimensions that can be combined to produce a fractal signal
characteristic of transmission noise. Ultimately, fractal modulation can be
considered to be an alternative to frequency modulation although requiring a
significantly greater bandwidth for its operation. However, fractal modulation
could relatively easily be used as an additional preprocessing security
measure before transmission. The fractal modulated signal would then be
binarized and the new bit stream fed into a conventional frequency
modulated digital communications system albeit with a considerably reduced
information throughput for a given bit rate. The problem is as follows: given
an arbitrary binary code, convert it into a non-stationary fractal signal by
modulating the fractal dimension in such a way that the original binary code
can be recovered in the presence of additive noise with minimal bit errors.

In terms of the theory discussed earlier, we consider a model of the type


(∂²/∂x² − τ^q(t) ∂^q(t)/∂t^q(t)) u(x, t) = −δ(x) n(t),   q > 0,   x → 0

where q(t) is assigned two states, namely q1 and q2 (which correspond to 0


and 1 in a bit stream respectively) for a fixed period of time. The forward
problem (fractal modulation) is then defined as: given q(t) compute u(t) ≡
u(0, t). The inverse problem (fractal demodulation) is defined as: given u(t)
compute q(t).
Advanced Implementations
In all digital communication systems, a general objective is the efficient use of the available resources,
that is, bandwidth, power, and complexity, to achieve a specified performance goal. The design of a
communication system very often requires tradeoffs among these resources depending on the channel
description which quantifies the power limitations, available bandwidth, and nature of the noise and its
statistics. In many applications, one of the two primary communications resources, power or bandwidth,
is more scarce than the other. This limitation on the communication system is fundamental to the choice
of a modulation scheme.

Synchronization
Abstract
In every digital communication system, some level of synchronization is required, without which a
reliable transmission of information is not possible. Of various synchronization levels, the focus of this
chapter is on symbol synchronization and carrier recovery. The role of the former is to provide the
receiver with an accurate estimate of the beginning and ending times of the symbols and the latter aims
to replicate a sinusoidal carrier at the receiver whose phase is the same as that sent by the transmitter.
Due to the fact that carrier frequency is much higher than the symbol rate, these two types of
synchronization are done with different circuitry.

Introduction
Error Correction in Digital Communication System

In a Digital Communication System, the messages generated by the source


which are generally in analog form are converted to digital format and then
transmitted. At the receiver end, the received digital data is converted back
to analog form, which is an approximation of the original message [1]. A
simple block diagram of a digital communication system is shown in Fig. 1.1.

Figure 1.1. Block diagram of a simple Digital Communication System.

A digital communication system consists of six basic blocks. The functional


blocks at the transmitter are responsible for processing the input message,
encoding, modulating, and transmitting over the communication channel.
The functional blocks at the receiver perform the reverse process to retrieve
the original message [2].

The aim of a digital communication system is to transmit the message


efficiently over the communication channel by incorporating various data
compressions (e.g., DCT, JPEG, MPEG) [3], encoding and modulation
techniques, in order to reproduce the message in the receiver with the least
errors. The information input, which is generally in analog form, is digitized
into a binary sequence, also known as an information sequence. The source
encoder is responsible for compressing the input information sequence to
represent it with less redundancy. The compressed data is passed to the
channel encoder. The channel encoder introduces some redundancy in the
binary information sequence that can be used by the channel decoder at the
receiver to overcome the effects of noise and interference encountered by
the signal while in transit through the communication channel [4]. Hence,
the redundancy added in the information message helps in increasing the
reliability of the data received and also improves the fidelity of the received
signal. Thus, the channel encoder aids the receiver in decoding the desired
information sequence. Some of the popular channel encoders are Low
Density Parity Check (LDPC) codes, Turbo codes, Convolution codes, and
Reed-Solomon codes. The channel encoded data is passed to the channel
modulator, which serves as the interface to the communication channel. The


encoded sequence is modulated using suitable digital modulation
techniques, e.g., Binary Phase Shift Keying (BPSK) or Quadrature Phase Shift
Keying (QPSK), and transmitted over the communication channel [1].

The communication channel is the physical medium used to transfer signals


carrying the encoded information from the transmitter to the receiver. A
range of noise and interferences can affect the information signal during
transmission depending on the type of the channel medium, e.g., thermal
noise, atmospheric noise, man-made noise. The communication channel can
be air, wire, or optical cable [2].

At the receiver, the received modulated signal, probably incorporating some


noise introduced by the channel, is demodulated by channel demodulator to
obtain a sequence of channel encoded data in digital format. The channel
decoder processes the received encoded sequence and decodes the
message bits with the help of the redundant data inserted by the channel
encoder in the transmitter. Finally, the source decoder reconstructs the
original information message. The reconstructed information message at the
receiver is probably an approximation of the original message because of
errors involved in channel decoding and the distortion introduced by the
source encoder and decoder [4].

Coding
In a digital communication system, information is sent as a sequence of
digits that are first converted to an analog form by modulation at the
transmitter and then converted back into digits by de-modulation at the
receiver. An ideal communication channel would transmit information
without any form of corruption or distortion. Error control coding is the
controlled addition of redundancy to the transmitted digit stream in such a
way that errors introduced in the channel can be detected, and in certain
circumstances corrected, in the receiver. It is, therefore, a form of channel
coding, so called because it compensates for imperfections in the channel;
the other form of channel coding is transmission (or line coding), which has
different objectives such as spectrum shaping of the transmitted signal.
There are two main types of error-correcting code: block coding and
convolutional coding. In block coding, the input is divided into blocks of k
digits. The coder then produces a block of n digits for transmission and the
code is described as an (n, k) code. In convolutional coding, the coder input
and output are continuous streams of digits. The coder outputs n output
digits for every k digits input, and the code is described as a rate k/n code. A
more sophisticated decoder may use the parity digits for error detection and
a full decoder for error correction. Unsystematic codes also exist but are less
commonly used.
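
As a concrete (if deliberately simple) instance of an (n, k) block code, the sketch below implements a (3, 1) repetition code with majority-vote decoding; the code choice and the example message are illustrative, not drawn from the text.

def encode(bits):
    # (3, 1) repetition block code: every input digit is transmitted three times (a rate 1/3 code).
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    # Majority vote over each block of n = 3 digits corrects any single error per block.
    blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
    return [1 if sum(block) >= 2 else 0 for block in blocks]

message = [1, 0, 1, 1]
codeword = encode(message)            # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
codeword[4] = 1                       # the channel flips one digit in the second block
assert decode(codeword) == message    # the error has been corrected at the receiver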

Optical performance monitoring in optical long-haul transmission systems
System performance measures
As a digital communication system, the ultimate measure of the long-haul system performance is the bit
error rate (BER). However, there are many other parameters that can be used to evaluate complex
optical transmission system performance.

Receiver Sensitivity

Receiver sensitivity, the traditional measure of receiver performance, is defined as the minimum
received optical signal power at a specific BER (e.g., 10⁻⁹) in the back-to-back configuration. This
parameter shows the quality of receiver design. The better the receiver sensitivity, the better the system
performance in terms of longer transmission distance and the greater the tolerance to fiber
impairments.
However, receiver sensitivity is not the critical measure for long-haul systems that consist of many
optical amplifiers. In the optically amplified long-haul system, receiver sensitivity is replaced by optical
signal-to-noise ratio (OSNR) as the base for performance comparison. Nevertheless, back-to-back
receiver sensitivity itself is still a good measure of performance for high-speed receiver design.
In practice, the system dispersion map is calculated based on the premeasured parameters of chromatic
dispersion (CD) for each span. The overall accumulated CD can be reduced to a small range within the CD
tolerance by dispersion-compensating fiber (DCF) and dispersion compensation modules (DCM). Such a
method is effective since the CD effect is deterministic and linear with the fiber length.
The CD tolerance can be used to evaluate the system robustness against the CD impairment.
Polarization mode dispersion (PMD) is not an issue for regular systems at the data rate of 10 Gb/s and below. Most fibers used in long-
haul system transmission are low-PMD fibers. Only a few systems need to deal with high-PMD fiber.
However, as the data rates move up to 40 and 100 Gb/s, both CD and PMD become the real challenge
for long-haul transmission. Both impairments of CD and PMD need to be monitored and corrected.

describe means by which the effect of noise is reduced and made non-cumulative on a
digital system
Back-end design fixes cannot improve a bad “noise launch” at the radio system
input, but tower-mounted amplifiers can improve overall system noise figure.

There are many types of noise. Most are nuisances, and some are downright
destructive to communications. Two types of noise can affect the operation of all
active devices, including transistors, packaged amplifiers and radio receivers.
One type of noise is generated within active devices, effectively limiting the
number of components or amplifiers that can be chained. The other is
desensitizing noise that literally overloads, or swamps, the input stages of
amplifiers and receivers, rendering them insensitive to intended signals.

Internally generated noise (equipment noise) is additive with other system


noise. Noise-threshold limits are determined, literally, by the amount of
equipment involved. This is true for the front-end operation of a receiver or for a
long string of bidirectional amplifiers in a tunnel. Each element of the system, be
it a gain block or passive network, adds its bit of noise. Cumulative noise will
ultimately desensitize a system to the point of unacceptable performance.
Fortunately, in most modern equipment designs, a great number of blocks must
be cascaded before “noise build-up” begins to approach practical sensitivity
thresholds.

Desensitizing noise is manmade and external to the system. Power line noise,
which can take many forms, is often broadband enough to affect more than one
radio communications service simultaneously. Impulse noise, such as that
generated by automobile ignition systems, is another common form of
desensitization to VHF and UHF receivers. Enough noise of these types will
effectively kill a system’s response to intended signals. A receiver or amplifier
will effectively shut down in the presence of strong noise because of bias shift in
the input stages.

Noise: An analogy

Consider a ball game during which you see the hot dog
vendor, maybe 150 feet away – too far to get his attention. You can’t yell loud
enough over the noise of the crowd. So you get the attention of someone 50
feet away and ask him to pass it on. Here, we have ambient noise (the crowd)
and gain blocks in the form of two or three “pass it on” volunteers. Sound loss
between each volunteer is about equal to each one’s “yelling volume.” You are
just about to deliver your hot dog order when a fan between volunteers 2 and 3
begins to shout loudly at the umpire. The fan’s outburst becomes additional
noise disrupting communications over one section of the path. Before the fan
started yelling, there was net zero gain, with each volunteer contributing just
enough “power” to overcome the loss in his section, or link, of the system. The
added burst of noise was enough to disrupt communications.

This illustrates what noise can do in a communications system. Fortunately,


communications equipment with guaranteed noise specifications gives us a bit
more design control than we had over the hot dog path. The use of good “noise
sense” and careful design procedures can lead to successful communications
over an almost unlimited number of links.

Key points related to system noise

 Noise temperature definition – At a pair of
terminals, and at a specific frequency, it is the temperature of a passive system
having an available noise power-per-unit bandwidth equal to that of the actual
terminals.

In the context of this definition, “pair of terminals” means the terminals of any
active system under consideration, such as an antenna, transmission line,
resistor or amplifier. “Active” does not necessarily mean gain in this definition
(i.e., inert components have “negative gain,” or loss, but the system as a whole
is still active). Essentially, noise temperature is the ratio between noise at the
terminals of a network and the noise that exists at the reference temperature
(290 K) of a totally passive system.

 Noise factor – The ultimate sensitivity of an amplifier is set by the noise
inherent to its input stage. A precise evaluation of a device’s noise quality is
obtained by means of its noise factor. Noise factor, as described below, is a ratio
of powers. An amplifier’s input impedance must closely match the source
impedance for maximum power to be transferred to the amplifier.

The input termination (impedance) has an available noise power given by the
expression:

P = kTB

where k is Boltzmann's constant, T is the absolute temperature in kelvin and B is the bandwidth in hertz.

 Example – As shown in the following equation, the output noise power of a
“perfect” 40dB-gain amplifier (numerical gain of 10,000), operating over a
30kHz bandwidth, would be:

P = kTBG = (1.38 × 10⁻²³)(290)(30 × 10³)(10⁴) ≈ 1.2 × 10⁻¹² W

A perfect amplifier generates no noise power of its own. A real amplifier


generates internal noise because of thermal effects and molecular agitation. The
numerical ratio of output noise power between real and perfect amplifiers is
noise factor (F). Noise factor compares the noise power output of a model,
perfect amplifier with the noise power output of a practical amplifier under the
same gain and bandwidth conditions. It is described by the expression:

F = Pn / (kTBG)

where

Pn = noise power output of a real amplifier
kTBG = noise power output of a perfect amplifier
G = numerical gain

 Example – Solving Eq. 3 for noise power, Pn, and with F = 1 for a
perfect amplifier, the noise power output is 1.2 × 10⁻¹² W. In other
words, a perfect 40dB amplifier operating over a 30kHz bandwidth at room
temperature would have a noise output of 1.2 × 10⁻¹² W. A typical
practical amplifier under the same conditions might have a noise power output
of 1.4 × 10⁻¹¹ W. Calculating the noise factor by Eq. 3 yields:

F = (1.4 × 10⁻¹¹) / (1.2 × 10⁻¹²) ≈ 11.7

(Note: Noise factor is always a numerical ratio.)

Noise factor, then, is a multiplier used to predict the amount of amplifier noise
power output compared with the noise power output that would occur in a
perfect amplifier under the same conditions.
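
The noise-factor example above can be reproduced numerically. The sketch below uses the quoted values (290 K reference temperature, 30 kHz bandwidth, 40 dB gain, 1.4 × 10⁻¹¹ W practical output noise); the final conversion to a noise figure in dB (10·log10 F) is the standard relation rather than something stated in the excerpt.

import math

k = 1.38e-23            # Boltzmann's constant, J/K
T = 290                 # reference temperature, K
B = 30e3                # bandwidth, Hz
G = 10_000              # numerical gain of the 40dB amplifier

kTBG = k * T * B * G    # ~1.2e-12 W: output noise of the "perfect" amplifier
P_n = 1.4e-11           # quoted output noise of the practical amplifier, W

F = P_n / kTBG                 # noise factor, a pure ratio: ~11.7
NF = 10 * math.log10(F)        # noise figure in dB (~10.7 dB), using the standard 10*log10(F) relation
print(round(F, 1), round(NF, 1))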

 Noise figure – Signal-to-noise ratio (SNR) at the receiver’s output is a critical
element in the communications link. All information received, whether analog or
digital, encrypted or “clear,” is subject to confusion on the part of the
interpreter. Accurate interpretation of information drives the necessity for
measurable quantities. Noise figure, N, is a measurable figure of merit. In digital


systems, the quantitative reliability measure is bit-error rate (BER). Bit-error rate
is closely related to noise figure in a non-linear way.

define bit rate

Bitrate, as the name implies, describes the rate at which bits are transferred from one location
to another. In other words, it measures how much data is transmitted in a given amount of time.
Bitrate is commonly measured in bits per second (bps), kilobits per second (Kbps), or megabits
per second (Mbps). Bitrate is the number of bits per second. ... It generally determines the
size and quality of video and audio files: the higher the bitrate, the better the quality and the
larger the file size because File size = bitrate (kilobits per second) x duration. In most cases, 1
byte per second (1 B/s) corresponds to 8 bit/s.
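
A small worked example of the file-size relation quoted above; the 128 kbit/s rate and one-minute duration are illustrative values.

bit_rate_kbps = 128         # e.g. a 128 kbit/s audio stream (illustrative value)
duration_s = 60             # one minute

size_kilobits = bit_rate_kbps * duration_s    # 7680 kilobits
size_kilobytes = size_kilobits / 8            # 960 kB, since 1 byte/s corresponds to 8 bit/s
print(size_kilobytes)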

explain the relationship between bit rate, system bandwidth and signal-to-noise ratio as
expressed by the hartley-shannon law

Shannon's Theorem
Shannon's Theorem gives an upper bound to the capacity of a link, in bits per second (bps), as a
function of the available bandwidth and the signal-to-noise ratio of the link.

The Theorem can be stated as:

C = B * log2(1+ S/N)
where C is the achievable channel capacity, B is the bandwidth of the line, S is the average signal
power and N is the average noise power.

The signal-to-noise ratio (S/N) is usually expressed in decibels (dB) given by the formula:

10 * log10(S/N)

so for example a signal-to-noise ratio of 1000 is commonly expressed as

10 * log10(1000) = 30 dB.

Here is a graph showing the relationship between C/B and S/N (in dB):
Examples
Here are two examples of the use of Shannon's Theorem.

Modem

For a typical telephone line with a signal-to-noise ratio of 30dB and an audio bandwidth of
3kHz, we get a maximum data rate of:

C = 3000 * log2(1001)

which is a little less than 30 kbps.

Satellite TV Channel

For a satellite TV channel with a signal-to-noise ratio of 20 dB and a video bandwidth of
10MHz, we get a maximum data rate of:

C=10000000 * log2(101)

which is about 66 Mbps.
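
Both examples above can be checked with a few lines of Python; the helper name capacity_bps is purely illustrative.

import math

def capacity_bps(bandwidth_hz, snr_db):
    # C = B * log2(1 + S/N), with the SNR supplied in dB and converted to a linear ratio
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

print(capacity_bps(3_000, 30) / 1e3)    # telephone line: ~29.9 kbps
print(capacity_bps(10e6, 20) / 1e6)     # satellite TV channel: ~66.6 Mbps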

Shannon's Law
The Shannon-Hartley Capacity Theorem, more commonly known as the Shannon-Hartley theorem
or Shannon's Law, relates the system capacity of a channel with the average received signal
power, the average noise power and the bandwidth.

This capacity relationship can be stated as:

C = W log2(1 + S/N)

where:

C is the capacity of the channel (bits/s)

S is the average received signal power

N is the average noise power

W is the bandwidth (Hertz)

Shannon's work showed that the values of S,N, and W set a limit upon the transmission rate. [1]
It is important to note that doubling the bandwidth will NOT double the available capacity. The
parameter N, is often defined as the average noise power in an AWGN (Additive White Gaussian
Noise) system, is dependent upon bandwidth. In order to double the capacity, the argument within
the logarithm, "1 + signal to noise ratio" must be squared.
C = W log2((1 + S/N)²)

C = 2W log2(1 + S/N)
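
The point that doubling W does not double C (because N = N0·W grows with the bandwidth) can be checked numerically. In the sketch below the noise density, signal power and bandwidth are arbitrary illustrative values chosen to give an SNR of 100 (20 dB).

import math

N0 = 1e-9     # assumed noise power density in W/Hz
S = 1e-4      # assumed received signal power in W
W = 1e3       # bandwidth in Hz; S / (N0 * W) gives an SNR of 100 (20 dB)

def capacity(W):
    return W * math.log2(1 + S / (N0 * W))   # N = N0 * W grows with the bandwidth

print(capacity(W))        # ~6,658 bit/s
print(capacity(2 * W))    # ~11,345 bit/s: doubling W does NOT double C, because the SNR halves
print(W * math.log2((1 + S / (N0 * W)) ** 2))   # ~13,316 bit/s: squaring (1 + S/N) is what doubles C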

The relationship between information, bandwidth and noise


The most important question associated with a communication channel is
the maximum rate at which it can transfer information. Information can only
be transferred by a signal if the signal is permitted to change. Analog signals
passing through physical channels may not change arbitrarily fast. The rate
at which a signal may change is determined by the bandwidth. In fact the
same Nyquist-Shannon law as governs the minimum sampling rate governs
this rate of change; a signal of bandwidth B may change at a maximum rate
of 2B. If each change is used to signify a bit, the maximum information rate
is 2B.
The Nyquist-Shannon theorem makes no observation concerning the
magnitude of the change. If changes of differing magnitude are each
associated with a separate bit, the information rate may be increased. Thus,
if each time the signal changes, it can take one of n levels, the information
rate is increased to:

           
This formula states that as n tends to infinity, so does the information rate.
Is there a limit on the number of levels? The limit is set by the presence of
noise. If we continue to subdivide the magnitude of the changes into ever
decreasing intervals, we reach a point where we cannot distinguish the
individual levels because of the presence of noise. Noise therefore places a
limit on the maximum rate at which we can transfer information. Obviously,
what really matters is the signal-to-noise ratio (SNR). This is defined by the
ratio of signal power S to noise power N, and is often expressed in decibels;

           
Also note that it is common to see the following expressions for power in many
texts:

P (dBW) = 10 log10(P / 1 W)
P (dBm) = 10 log10(P / 1 mW)
i.e. the first equation expresses power as a ratio to 1 Watt and the second
equation expresses power as a ratio to 1 milliWatt. These are expressions of
power and should not be confused with SNR.
There is a theoretical maximum to the rate at which information passes error
free over the channel. This maximum is called the channel capacity C. The
famous Hartley-Shannon Law states that the channel capacity C is given by:

C = B log2(1 + (S/N)P) bits per second.
Note that (S/N)P is the linear (not logarithmic) power ratio and B is the
channel bandwidth (in cycles/second or Hertz) in this expression.  A more
satisfying approximation to this equation is

C = 2B × log2(√(1 + (S/N)P))

where the 2B gives the maximum independent sample rate the channel can
carry and log2(√(1 + (S/N)P)) is the number of bits required to describe the
average number of discernible signal voltage levels given the strength of
the noise.
For example, a standard telephone channel passes signals from 300 Hz to
3300 Hz yielding a 3 kHz bandwidth (this BW limit is set by filters at the
telephone office, not by the wire itself). It typically has a SNR of 40dB. The
theoretical maximum information rate is then:

C = 3000 × log2(1 + 10,000) ≈ 40 kbits/s.
Note that power is proportional to voltage squared.
The Hartley-Shannon Law makes no statement as to how the channel
capacity is achieved. In fact, channels only approach this limit. The task of
providing high channel efficiency is the goal of coding techniques. The failure
to meet perfect performance is measured by the bit-error-rate (BER).
Typically BERs are of the order of 10⁻⁶.
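
As a check on the telephone-channel example above, the capacity can be evaluated in Python both from the basic law and from the 2B-sample form; the two give the same answer.

import math

B = 3_000                          # telephone channel bandwidth, Hz
snr = 10 ** (40 / 10)              # 40 dB -> power ratio of 10,000

C = B * math.log2(1 + snr)                        # ~39,860 bit/s (about 40 kbit/s)
C_alt = 2 * B * math.log2(math.sqrt(1 + snr))     # the same value via the 2B-sample form
print(round(C), round(C_alt))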

perform calculations using the hartley-shannon law


The Shannon-Hartley Theorem represents a brilliant breakthrough in the way communication theory
was viewed in the 1940s and describes the maximum amount of error-free digital data that can be
transmitted over a communications channel with a specified bandwidth in the presence of noise. 
As you can see, and as threatened in Blog 1: Categories of LPWA Modulation Schemes, we’re going back
to basics and generating the framework and vocabulary to justify and discuss the categories of Low-
Power, Wide-Area (LPWA) connectivity: Ultra-Narrow Band (UNB), Non-Coherent M-ary Modulation
(NC-MM), Orthogonal Frequency Division Multiplexing (OFDM), and Direct Sequence Spread Spectrum (DSSS).
Before jumping directly into the details of the Shannon-Hartley Theorem, lets set the stage with the
concepts of receiver sensitivity, Eb/No, and spectral efficiency (η).
Receiver Sensitivity and Eb/No
Receiver sensitivity is the amount of energy required for
demodulation.  A link with better receiver sensitivity will
have longer range and more robustness, which will equate to
less density of infrastructure.
Eb/No is energy per bit relative to thermal noise spectral
density.  To understand Eb/No, its important to understand
the relationship between receiver sensitivity (related to
range), data rate, and Eb/No:
Receiver sensitivity = (-174 dBm + NF) + Eb/No + 10*log10(Data Rate)

(a short worked calculation follows the notes below)

 The number -174 dBm is based on Boltzmann’s constant
and represents the thermal noise power at room temperature per
Hz.
 NF is Noise Figure, which is a figure of merit that is
typically around 2 to 5 dB for a given radio front end.
 There is an inverse linear relationship between receiver
sensitivity and data rate. The better the Eb/No, the better the
receiver sensitivity at a given data rate.
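
A short worked calculation of the receiver-sensitivity formula above; the noise figure, required Eb/No and data rates are assumed illustrative values.

import math

def receiver_sensitivity_dbm(noise_figure_db, eb_no_db, data_rate_bps):
    # Sensitivity = (-174 dBm/Hz + NF) + Eb/No + 10*log10(data rate), as in the formula above.
    return -174 + noise_figure_db + eb_no_db + 10 * math.log10(data_rate_bps)

# Assumed illustrative values: NF = 4 dB, required Eb/No = 8 dB.
print(receiver_sensitivity_dbm(4, 8, 1_000))     # -132 dBm at 1 kbit/s
print(receiver_sensitivity_dbm(4, 8, 10_000))    # -122 dBm: ten times the data rate costs 10 dB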
Spectral Efficiency
Spectral efficiency is the amount of data that can be transmitted in a single link per Hz of bandwidth.  So
why is spectral efficiency important?  It’s because spectrum is typically a fixed and valuable asset.  For a
multiple access system to operate well, it’s often important to channelize so that multiple
communications may occur simultaneously. The formula for the amount of data throughput that can be
moved through a piece of spectrum is η x BW.  BW is typically fixed – if it’s free spectrum, there is a very
finite amount of it, if it’s licensed spectrum, then it’s extremely expensive.  The role of a good
modulation scheme is then to make sure that η is as high as reasonably can be designed.
The Shannon-Hartley Theorem
The Shannon-Hartley theorem describes the theoretical best that can be done based on the amount of
bandwidth efficiency:  the more bandwidth used, the better the Eb/No that may be achieved for error-
free demodulation.  Or, equivalently stated: the more bandwidth efficient, there is a sacrifice in Eb/No. 
The Shannon-Hartley theorem applies only to a single radio link.  It does not directly apply to a multiple
access system that is important in a Wide-Area Network (WAN).   Nevertheless, Shannon-Hartley is a


great starting point.
Specifically, the Shannon-Hartley theorem puts a lower bound
on the Eb/No for error-free demodulation given spectrum
efficiency as: [1]

Eb/No ≥ (2^η − 1) / η

where η is spectral efficiency measured in units of bits/Hz.  This relationship is shown in the figure
below.  Note that what’s called the Shannon Limit (which is -1.6 dB) is asymptotically approached when
η = 10⁻¹.  As spectrum efficiency is increased, the required Eb/No increases.  Stated another way, as η
increases, not only is there more power required (or worse receiver sensitivity) because you’re moving
more bits, but each bit actually gets more expensive in terms of power required.  That’s a double
whammy in terms of receiver sensitivity.
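
The lower bound quoted above can be tabulated for a few spectral efficiencies; this is an illustrative sketch of the (2^η − 1)/η bound, and the chosen η values are arbitrary.

import math

def min_eb_no_db(eta):
    # Lower bound on Eb/No for error-free demodulation: (2**eta - 1) / eta, expressed in dB.
    return 10 * math.log10((2 ** eta - 1) / eta)

for eta in (0.1, 1, 2, 4, 8):
    print(f"eta = {eta:>4}: minimum Eb/No = {min_eb_no_db(eta):6.2f} dB")
# eta = 0.1 gives about -1.4 dB, close to the -1.6 dB Shannon limit reached as eta -> 0;
# by eta = 8 the bound is already ~15 dB, i.e. each extra bit/Hz costs rapidly more power.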

So, let’s be honest, the Shannon-Hartley Theorem is hardly good news for anyone.  Let’s walk through
what it means when you’re trying to build a system in a constrained amount of spectrum:
 Say you want to cover as much distance with your link
as possible. Shannon-Hartley tells you that you can reduce
data rate to get better range (in theory without limit).  At this
limit, it costs a fixed amount of power to get a bit through –
so every dB of data rate reduction buys you 1 dB of receive
sensitivity.  But, the more you do this, the less the capacity.  
You’ll be getting a trickle of data through this admirable


amount of spectrum you happen to have access to.
 Okay fine, you say. Then let me just get a lot of data
through my spectrum (that’s a perspective that is important
in the cellular industry).  If it’s a fixed amount of power per
bit, then I’ll crank up the power and get a factor of 10 data
rate increase for every factor of 10 dB increase in transmit
power.  But, not so fast, says Shannon-Hartley.  If you look at
the curve, the amount of power per bit increases dramatically
as η becomes larger than 1.  Not only are you spending
power getting more bits through your spectrum, each bit is
getting considerably more expensive.  Nevertheless, in the
3GPP standards body, the quest for ever-increasing data
rates has caused pushing up this curve as much as possible. 
As an example, 256 Quadrature Amplitude Modulation (QAM)
that has a spectral efficiency of close to 9 (η=9 which is
pretty much the far right point in the figure above) is being
currently standardized.
 Oh, and just to add insult to injury – Shannon-Hartley is
the best-case scenario. It’s what could be achieved with
infinite computation power, infinite latency (because the
coding blocks become infinitely large), infinite frequency
accuracy, etc. Everything in the real world will be worse,
perhaps even much worse.
What do smart people do when confronted with such bad
news?  How about look for a loophole!  We definitely need
such a loophole to build a high coverage, yet high capacity
network (a trickle won’t do).  Something called “spreading”
may be that loophole.  “Spreading” is maintaining the
channel bandwidth (BW) regardless of data rate and allowing
very low link spectral efficiencies (η <<0.1) to achieve high


receiver sensitivity.  On the face of Shannon-Hartley, this
seems like an exceedingly bad idea.  Yeah, you may get the
coverage you need, but you only get a “trickle” of small
amounts of data through that precious spectrum.  That’s
okay, a lot of loopholes seems like a bad idea at first and we
will explore the implications of this approach in Blog 4:
“Spreading” – A Shannon-Hartley Loophole?
But first, because there are multiple ways to “spread” a
signal, we need to first understand how LoRa/Chirp Spread
Spectrum (CSS) can be analyzed and we lay that framework
in Blog 3: Chirp Spread Spectrum (CSS): The Jell-O of Non-
Coherent M-ary Modulation.
And finally, we’ll get back into the real world and test
whether this discussion is even useful in Blog 5: The
Economics of Receiver Sensitivity and Spectral Efficiency to
attempt to quantify the commercial value of that which is
discussed on these blogs.

describe the conditions that provide a high bit rate using pulse
regeneration

Fiber-optic Links

Definition: optical communication links where the signal light is transported in fibers



A fiber-optic link (or fiber channel) is a part of an optical fiber communications system which provides a
data connection between two points (point-to-point connection). It essentially consists of a data
transmitter, a transmission fiber (in some cases with built-in fiber amplifiers), and a receiver. Even for
very long transmission distances, extremely high data rates of many Gbit/s or even several Tbit/s can be
achieved.
The used components, which are mostly based on fiber optics, are explained in the following, beginning
with a simple single-channel system. More sophisticated approaches are discussed thereafter.

Figure 1: Schematic of a fiber-optic link, with a data transmitter, a long transmission fiber with several
fiber amplifiers, and a receiver. The amplifiers can be supplemented with additional components for
dispersion compensation or signal regeneration.
Transmission Formats
In most cases, the data transmission is digital, making the system very versatile and relatively
insensitive, e.g. to nonlinear distortions. There are various different modulation formats, i.e., different
methods for encoding the information. For example, a simple non-return-to-zero (NRZ) format transmits
subsequent bits by sending a high or low optical power value, with no gaps between the bits, and extra
means for synchronization. In contrast, a return-to-zero (RZ) format is easily self-synchronizing by
returning to a rest state after each bit, but it requires a higher optical transmission bandwidth for the
same data rate. Apart from details of the equipment and the optical bandwidth required (related to the
modulation efficiency), different transmission formats also differ in terms of their sensitivity e.g. to noise
influences and cross-talk.
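
The difference between the two formats can be sketched as sampled waveforms; this illustrative Python fragment simply maps a bit stream to NRZ and RZ level sequences (four samples per bit is an arbitrary choice).

def nrz(bits, samples_per_bit=4):
    # Non-return-to-zero: hold the level (1 -> high, 0 -> low) for the whole bit period.
    return [level for b in bits for level in [b] * samples_per_bit]

def rz(bits, samples_per_bit=4):
    # Return-to-zero: a 1 is high only for the first half of the bit period, then rests at zero.
    half = samples_per_bit // 2
    return [level for b in bits for level in [b] * half + [0] * (samples_per_bit - half)]

bits = [1, 0, 1, 1]
print(nrz(bits))   # [1,1,1,1, 0,0,0,0, 1,1,1,1, 1,1,1,1]
print(rz(bits))    # [1,1,0,0, 0,0,0,0, 1,1,0,0, 1,1,0,0] -- extra transitions aid synchronisation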
Transmitter
The transmitter converts the electronic input signal into a modulated light beam. The information may
be encoded e.g. via the optical power (intensity), optical phase or polarization; intensity modulation is
most common. The optical wavelength is typically in one of the so-called telecom windows (see the
article on optical fiber communications).
A typical transmitter is based on a single-mode laser diode (normally a VCSEL or a DFB laser), which may
either be directly modulated via its drive current (DML = directly modulated laser), or with an external
optical modulator (e.g. an electroabsorption or Mach–Zehnder modulator). Direct modulation is the
simpler option, and can work at data rates of 10 Gbit/s or even higher. However, the varying carrier
density in the laser diode then leads to a varying instantaneous frequency and thus to signal distortions
in the form of a chirp. Particularly for long transmission distances, this makes the signal more sensitive
to the influence of chromatic dispersion. Therefore, external modulation is usually preferred for the
combination of high data rates (e.g. 10 or 40 Gbit/s) with long transmission distances (many kilometers).
The laser can then operate in continuous-wave mode, and signal distortions are minimized.
For even higher single-channel data rates, time division multiplexing may be employed, where e.g. four
channels with 40 Gbit/s are temporally interleaved to obtain a total rate of 160 Gbit/s. For high data
rates with return-to-zero formats, it can be advantageous to use a pulsed source (e.g. a mode-locked
laser emitting soliton pulses) combined with an intensity modulator. This reduces the bandwidth
demands on the modulator, as it does not matter how the modulator's transmittance evolves between
the pulses.
For high data rates, the transmitter needs to meet a number of requirements. In particular, it is
important to achieve a high extinction ratio (low pedestal pulses), a low timing jitter, low intensity
noise, and a precisely controlled clock rate. Of course, a data transmitter should operate stably and
reliably with minimum operator intervention.
In simple cases, a light-emitting diode (LED) is used in the transmitter, but due to the poor spatial
coherence this requires the use of multimode fibers. The transmission rate or distance is then restricted
due to intermodal dispersion; for a longer bandwidth–distance product, single-mode fibers are
required. For short distances, several hundred Mbit/s are possible.
Transmission Fiber
The transmission fiber is usually a single-mode fiber in the case of medium or long-distance
transmission, but can also be a multimode fiber for short distances. In the latter case, intermodal
dispersion can limit the transmission distance or bit rate.
Long-range broadband fiber channels can contain fiber amplifiers at certain points (lumped amplifiers)
to prevent the power level from dropping to too low a level. Alternatively, it is possible to use a
distributed amplifier, realized with the transmission fiber itself, by injecting an additional powerful
pump beam (typically from the receiver end) which generates Raman gain. In addition, means for
dispersion compensation (counteracting the effects of chromatic dispersion of the fiber) and for signal
regeneration may be employed. The latter means that not only the power level but also the signal
quality (e.g. pulse width and timing) is restored. This can be achieved either with purely optical signal
processing, or by detecting the signal electronically, applying some optical signal processing, and
resending the signal.
Receiver
The receiver contains some type of fast photodetector, normally a photodiode, and suitable high-speed
electronics for amplifying the weak signal (e.g. with a transimpedance amplifier) and extracting the
digital (or sometimes analog) data. For high data rates, circuitry for electronic dispersion compensation
may be included.
Avalanche photodiodes can be used for particularly high sensitivity. The sensitivity of the receiver is
limited by noise, normally of electronic origin. Note, however, that the optical signal itself is
accompanied by optical noise, such as amplifier noise. Such optical noise introduces limitations which
cannot be removed with any receiver design. Noise effects are discussed below in more detail.
Bidirectional Transmission
So-called full duplex links provide a data connection in both directions. These may simply be based on
separate optical fibers, or work with a single fiber. The latter can be realized e.g. by using fiber-optic
beam splitters at each end to connect a transmitter and a receiver. However, the need for bidirectional
operation introduces various trade-offs, which in some cases (e.g. for very high data rates) make a
system with two separate fibers preferable.
Multiplexing
A typical single-channel system for long-haul transmission has a transmission capacity of e.g. 2.5 or
10 Gbit/s; higher data rates of 40 Gbit/s or even 160 Gbit/s may be used in the future. For higher data
rates, several data channels can be multiplexed (combined), transmitted through the fiber, and
separated again for detection.
The most common technique is wavelength division multiplexing (WDM). Here, different center
wavelengths are assigned to different data channels. It is possible to combine even hundreds of
channels in that way (DWDM = dense WDM), but coarse WDM with a moderate number of channels is
often preferred in order to keep the system simpler. The main challenges are to suppress channel cross-
talk via nonlinearities, to balance the channel powers (e.g. with gain-flattened fiber amplifiers), and to
simplify the systems.
Another approach is time division multiplexing, where several input channels are combined by nesting
in the time domain, and solitons are often used to ensure that the sent ultrashort pulses stay cleanly
separated even at small pulse-to-pulse spacings.
For short distances, for example for connections within data centers, it can be simpler to use ribbon
fiber cables with multiple fibers and corresponding numbers of transmitters and receivers. However, a
disadvantage of that approach is that the cables become bulkier.
Active Optical Cables
For short transmission distances, so-called active optical cables (AOC) can be used, where a transmitter
and a receiver (together with corresponding electronics) are rigidly attached to the ends of an optical
fiber cable. Common electrical interfaces such as USB or HDMI ports are available, so that the use of
such an active optical cable is essentially the same as that of an electrical cable, while offering
advantages like reduced diameter and weight and also a larger possible transmission distance.
Fiber to the Home
It is possible to use optical links even to supply data over the “last mile” to single homes and offices. This
technology is called fiber to the home (FTTH). In many cases, however, the last mile is still bridged with
copper cables, and fiber-optic transmission occurs only up to some small stations close to the users.
Limitations via Noise and Cross-talk
Ultimately, the data transmission capacity of any system is limited by noise. In amplified optical systems,
quantum noise e.g. in the form of spontaneous emission in fiber amplifiers (→ amplifier noise) is not
avoidable. It can impact the system performance in different forms, such as timing jitter (→ Gordon–
Haus jitter, particularly in soliton systems) or intensity noise affecting the photodetection.
Apart from noise, certain systematic signal distortions can also limit the transmission distance or bit
error rate. In particular, chromatic dispersion and nonlinearities of the transmission fiber can cause
severe signal distortions. As an example, Figure 2 shows a so-called eye diagram. Here, the “eye” is wide
open, so that the signal could still be well detected. For twice the fiber length (not shown here), this
would be different.
Figure 2: Eye diagram for the telecom signal after the fiber (taken from a demo case for the RP Fiber Power software).
Note that the transmitter also has an important impact on signal detection issues.
For example, a simple directly modulated transmitter may produce some unwanted
frequency chirp, which increases the effect of chromatic dispersion in the
transmission fiber and thus makes it more difficult to receive a clean signal after
some propagation distance.

A related and even more complex topic is cross-talk between the different channels, e.g. of a WDM system. In systems with constant channel spacing, the channels can also influence each other, in the sense that one channel is amplified at the expense of the power in another channel (→ four-wave mixing). The impact of
such effects can depend strongly on the system architecture, including the
transmitter type, modulation format, fiber parameters, detection techniques, etc.
The modeling of these effects and the subsequent optimization of communication
systems are complex tasks.

Noise and related influences always cause some bit error rate, i.e., some portion
of the transmitted bits will not be correctly detected. Provided that the bit error rate
is at a sufficiently low level, occasional bit errors can be detected with certain
techniques and corrected (e.g. by resending of defective data packets). For
increasing transmission distances and/or data rates, the bit error rate finally sets
some limits. In that context, the bandwidth–distance product is often used in a
comparison of different fiber-optic links.
differentiate between multiplexing methods

The term "multiplexing" (or "muxing") refers to techniques for combining multiple signals, analog or digital, into one signal carried over a single channel. The technique is used in both telecommunications and computer networks; in telephony, for example, one cable carries many different telephone calls. Multiplexing was first used in telegraphy in the 1870s and is now used extensively throughout communications; George Owen Squier is credited with developing carrier multiplexing in telephony around 1910. The multiplexed signal is transmitted over a cable or channel, which is divided into numerous logical channels. This section discusses what multiplexing is, the different types of multiplexing techniques, and their applications.

What is Multiplexing?
Muxing, or multiplexing, can be defined as a way of transmitting various signals over a single medium or line. A common kind of multiplexing merges a number of low-speed signals for transmission over a single high-speed link, or allows one medium and its link to be shared by a number of devices. It provides both privacy and efficiency. The entire process is carried out by a device called a MUX, or multiplexer, whose main function is to combine n input lines into a single output line; thus a MUX has many inputs and a single output. A device called a DEMUX, or demultiplexer, is used at the receiving end to divide the composite signal back into its component signals, so it has a single input and a number of outputs.
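As a rough illustration of the n-inputs-to-one-output idea, the following minimal Python sketch interleaves three input lines onto one composite stream and separates them again; the names mux and demux, and the assumption that every line has a unit to send in every cycle (the simple synchronous case), are purely illustrative.

```python
# Minimal sketch: combine n input lines onto one output line (MUX) and
# split the composite stream back into its component signals (DEMUX).
# Assumes the simple synchronous case where every line has a unit to send each cycle.

def mux(lines):
    # interleave one unit from each input line per cycle, in a fixed order
    return [line[i] for i in range(len(lines[0])) for line in lines]

def demux(stream, n):
    # the position within each group of n units identifies the source line
    return [stream[k::n] for k in range(n)]

inputs = [["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"]]
composite = mux(inputs)      # single output line of the MUX
print(composite)             # ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'B3', 'C3']
print(demux(composite, 3))   # recovers the three component signals
```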

Multiplexing

Types of Multiplexing Techniques


Multiplexing techniques are mainly used in communication, and they are commonly classified into the following three types.
 Frequency Division Multiplexing (FDM)
 Wavelength Division Multiplexing (WDM)
 Time Division Multiplexing (TDM)
1). Frequency Division Multiplexing (FDM)
FDM was used by telephone companies in the 20th century on long-distance connections to multiplex large numbers of voice signals onto systems such as coaxial cable. For shorter distances, low-cost cables were used for systems such as the Bell K- and N-carrier, although these do not allow large bandwidths. FDM is an analog multiplexing technique used to combine analog signals, and it is useful when the bandwidth of the link is greater than the combined bandwidth of the signals to be transmitted.
Frequency Division Multiplexing


In FDM, each sending device modulates a different carrier frequency, and the modulated carriers are then combined into a single composite signal that can be carried by the link. To accommodate the modulated signals, the carrier frequencies are separated by sufficient bandwidth; these bands of the spectrum are the channels through which the different signals travel, and they can be separated by unused bandwidth (guard bands). Familiar examples of FDM include radio and television broadcasting.

2). Wavelength Division Multiplexing (WDM)


WDM (wavelength division multiplexing) is a technology used in fiber-optic communications and is a key concept in high-capacity communication systems. At the transmitter end a multiplexer is used to combine the signals, and at the receiver end a demultiplexer separates them again. The function of WDM at the multiplexer is to combine light from several sources into a single beam, and that single beam is split back into its separate wavelengths at the demultiplexer.

Wavelength Division Multiplexing

The main purpose of WDM is to exploit the high data-rate capacity of fiber-optic cable (FOC), which is far superior to the data rate of metallic transmission cable. Conceptually, WDM is similar to FDM, except that the transmission takes place through an optical fiber and the multiplexing and demultiplexing operate on optical signals.
3). Time Division Multiplexing (TDM)
Time division multiplexing (TDM) is a method of transmitting several signals over a single communication channel by dividing the available time into slots, with one slot used for each message signal.

  Time Division Multiplexing

TDM can be applied to both analog and digital signals: several low-speed channels are multiplexed into a high-speed channel for transmission. Each low-speed channel is assigned an exact position in time, and the system operates synchronously: the MUX and DEMUX at the two ends are kept in time synchronism and switch to the next channel at the same instant.

Types of Time Division Multiplexing


The different types of TDM include the following.

 Synchronous TDM
 Asynchronous TDM
 Interleaving TDM
 Statistical TDM
 Types of TDM

1). Synchronous TDM


Synchronous TDM can be used with both analog and digital signals. In this type of TDM, each input connection is tied to a frame: if there are n connections, the frame is divided into n time slots and one slot is assigned to each input line for every unit of data.

In synchronous TDM the sampling rate is the same for every signal, and the sampling requires a clock (CLK) signal at both the sender and the receiver. In this type of TDM, the multiplexer always assigns the same slot to the same device in every frame.

2). Asynchronous TDM
In asynchronous TDM the sampling rate can differ from one signal to another, and a common clock (CLK) is not required. If a device has nothing to transmit, its time slot is allocated to another device. The design of the commutator and de-commutator is not simple, the bandwidth requirement is lower for this type of multiplexing, and it is suited to networks whose traffic is not transmitted synchronously.
3). Interleaving TDM
TDM can be visualized as two fast rotary switches, one on the multiplexing side and one on the demultiplexing side. The switches rotate at the same speed, but in opposite directions, and are kept synchronized. When the switch on the multiplexer side opens in front of a connection, that connection has the chance to send a unit onto the path. Similarly, when the switch on the demultiplexer side opens in front of a connection, that connection has the chance to receive a unit from the path. This procedure is called interleaving.
4). Statistical TDM
Statistical TDM is used to transmit several types of data simultaneously across a single cable, and is frequently used to handle data carried over networks such as a LAN or WAN. The data may come from input devices connected to the network, such as computers, fax machines and printers. Statistical TDM can also be used in telephone switchboard settings to manage calls. This type of multiplexing amounts to dynamic bandwidth allocation: the communication channel is divided among a varying number of data streams on demand.
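A minimal sketch of the dynamic slot allocation just described: only lines that currently have data are given a slot, each transmitted unit carries the address of its source line, and frames may go out partially filled. The constant SLOTS_PER_FRAME, the scanning order and the data values are illustrative assumptions, not a description of any particular system.

```python
# Minimal sketch of statistical (asynchronous) TDM: slots are not pre-assigned.
# Each frame carries up to SLOTS_PER_FRAME (address, data) pairs taken only from
# input lines that actually have something to send; frames may leave partially filled.

SLOTS_PER_FRAME = 3   # illustrative: fewer slots than input lines

def statistical_tdm(lines):
    frames = []
    queues = [list(q) for q in lines]          # per-line transmit queues
    while any(queues):
        frame = []
        for addr, q in enumerate(queues):      # scan the input lines in order
            if q and len(frame) < SLOTS_PER_FRAME:
                frame.append((addr, q.pop(0))) # the address identifies the source line
        frames.append(frame)
    return frames

lines = [["a1", "a2"], [], ["c1"], ["d1", "d2", "d3"], []]   # 5 lines, 3 slots per frame
for frame in statistical_tdm(lines):
    print(frame)
```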
Applications of Multiplexing
The applications of multiplexing include the following.
 Analog Broadcasting
 Digital Broadcasting
 Telephony
 Video Processing
 Telegraphy

explain the need for multiplexing

What is the need and types of multiplexing?


Multiplexing is the process of simultaneously transmitting two or more individual signals over a single communication channel.

Need for multiplexing
Multiplexing is the set of techniques that allows the simultaneous transmission of multiple signals across a single data link.
When many signals or channels are to be transmitted, the transmitter sends them simultaneously: the multiplexer converts many inputs into one signal, so that at the receiving end all of the inputs are also received simultaneously.

Sending many signals separately is expensive and requires more wires, so there is a need for multiplexing. For example, a cable TV distributor sends many channels through a single wire.

Types of multiplexing
The types of multiplexing are given below:

o Frequency division multiplexing (FDM).

Frequency division multiplexing

o Wavelength division multiplexing (WDM)

o Time division multiplexing (TDM)


Time division multiplexing

explain the connection between multiplexing and modulation

Modulation is used when you have a low-frequency signal that you want to transmit: it is often useful to shift it up in frequency, so a modulation method is applied (AM, FM or PM for analog modulation; ASK, FSK or PSK for digital modulation).
AM (or ASK) is amplitude modulation: the amplitude of a high-frequency carrier is varied as a function of the information signal. At the receiver the signal can easily be demodulated by detecting the envelope (a diode followed by a low-pass filter).
FM (or FSK) is frequency modulation. It is a little more complex than amplitude modulation, but the principle is the same: the frequency of the carrier is varied as a function of the information.
The same idea applies to phase modulation (PM or PSK), except that it is the phase that is varied.

Multiplexing is a different operation: it is used when there is one transmission medium and many signals to send through it. There are (at least) three kinds of multiplexing. In time-division multiplexing, a short interval Δt of one signal is sent over the medium, then the system switches to the next signal, and so on. In frequency-division multiplexing, the signals are placed (by modulation) at different frequencies and sent together over the medium; at the receiver a simple band-pass filter recovers each signal on its own. There are also hybrid schemes that combine the two modes.
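To tie the two ideas together, here is a minimal NumPy sketch (all frequencies and the 0.5 modulation index are illustrative values, not from the text) in which the same 1 kHz message is amplitude-modulated onto two different carriers. Because modulation shifts each message into its own frequency band, both channels can share one line and can later be separated with band-pass filters, which is the basis of FDM.

```python
# Minimal sketch: amplitude modulation shifts a baseband message up to a carrier
# frequency, so messages modulated onto different carriers can share one medium (FDM).
import numpy as np

fs = 100_000                                   # sample rate in Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)                 # 10 ms of signal
message = np.cos(2 * np.pi * 1_000 * t)        # 1 kHz baseband "information" tone
carrier_a = np.cos(2 * np.pi * 10_000 * t)     # channel A carrier at 10 kHz
carrier_b = np.cos(2 * np.pi * 20_000 * t)     # channel B carrier at 20 kHz

# Conventional AM: the carrier amplitude varies with the message (modulation index 0.5)
channel_a = (1 + 0.5 * message) * carrier_a
channel_b = (1 + 0.5 * message) * carrier_b
composite = channel_a + channel_b              # both channels travel on the same line

spectrum = np.abs(np.fft.rfft(composite))
freqs = np.fft.rfftfreq(len(composite), 1 / fs)
print(sorted(freqs[spectrum.argsort()[-2:]]))  # strongest components: 10 kHz and 20 kHz
```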

In telecommunications, modulation is the process of varying a periodic waveform, i.e. a tone, in order to use that signal to convey a message, in a similar fashion as a musician
may modulate the tone from a musical instrument by varying its volume, timing and pitch.
Normally a high-frequency sinusoid waveform is used as carrier signal. The three key
parameters of a sine wave are its amplitude ("volume"), its phase ("timing") and its
frequency ("pitch"), all of which can be modified in accordance with a low frequency
information signal to obtain the modulated signal.

multiplexing

In electronics, telecommunications and computer networks, multiplexing is a term used to refer to a process where multiple analog message signals or digital data streams are
combined into one signal. The aim is to share an expensive resource. For example, in
electronics, multiplexing allows several analog signals to be processed by one analog-to-
digital converter (ADC), and in telecommunications, several phone calls may be transferred
using one wire.
explain factors that make demultiplexing more difficult than multiplexing

Multiplexing is the process in which multiple data streams, coming from different sources, are combined and transmitted over a single data channel or data stream.

In electronic communications, the two basic forms of multiplexing are time division multiplexing (TDM) and frequency division multiplexing (FDM).

In time division multiplexing, transmission time on a single channel is divided into non-overlapping time slots. Data streams from different sources are divided into units of the same size and interleaved successively into the time slots.

In frequency division multiplexing, data streams are carried simultaneously on the same transmission medium by allocating to each of them a different frequency band within the bandwidth of the single channel.

Multiplexing is done by equipment called a multiplexer (MUX). It is placed at the transmitting end of the communication link. At the receiving end, the composite signal is separated by equipment called a demultiplexer (DEMUX). The demultiplexer performs the reverse process of multiplexing and routes the separated signals to their corresponding receivers or destinations. This reverse process is generally the harder one: the demultiplexer must first recover accurate timing (synchronisation) from the composite signal before it can locate the slot boundaries in TDM, and it needs sharp, well-aligned filters to separate the frequency bands in FDM, whereas the multiplexer simply combines signals that it already controls.

Figure 1 shows how TDM interleaves small units of each data stream into the corresponding time slots. It transmits the data streams from three signal sources (red, green and blue) simultaneously by combining them into a single data stream.

Figure 1: Multiplexing and Demultiplexing

explain the concept of space division multiplexing (SDM).

A method by which metallic, radio, or optical transmission media are physically separated by insulation, waveguides, or space in order to maintain channel separations. Within each physically distinct channel, multiple channels can be derived through frequency, time, or wavelength division multiplexing. Some Passive Optical Network (PON) implementations employ space division multiplexing, with the downstream transmissions occurring over one fiber of a duplex fiber optic cable and upstream transmission occurring over the other fiber.

describe channel sharing in frequency division multiplexing (FDM)

Concept and Process


In FDM, the total bandwidth is divided into a set of frequency bands that do not overlap.
Each of these bands is a carrier of a different signal that is generated and modulated by
one of the sending devices. The frequency bands are separated from one another by
strips of unused frequencies called the guard bands, to prevent overlapping of signals.

The modulated signals are combined together using a multiplexer (MUX) in the sending
end. The combined signal is transmitted over the communication channel, thus allowing
multiple independent data streams to be transmitted simultaneously. At the receiving
end, the individual signals are extracted from the combined signal by the process of
demultiplexing (DEMUX).

Example
The following diagram conceptually represents multiplexing using FDM. It has 4
frequency bands, each of which can carry signal from 1 sender to 1 receiver. Each of
the 4 senders is allocated a frequency band. The four frequency bands are multiplexed
and sent via the communication channel. At the receiving end, a demultiplexer
regenerates the original four signals as outputs.
Here, if the frequency bands are of 150 kHz bandwidth separated by 10 kHz guard bands, then the capacity of the communication channel should be at least 630 kHz (channels: 150 × 4 + guard bands: 10 × 3).

Uses and Applications


It allows sharing of a single transmission medium like a copper cable or a fiber optic
cable, among multiple independent signals generated by multiple users.

FDM has been popularly used to multiplex calls in telephone networks. It can also be
used in cellular networks, wireless networks and for satellite communications.

Orthogonal Frequency Division Multiplexing


OFDM is a technique where the channel bandwidth is split into many closely packed
sub-carriers or narrowband channels each of which transmits signals independently
using techniques like QAM (Quadrature Amplitude Modulation). Consequently, they do
not need any guard bands and thus have better utilization of available bandwidth.

explain the reason for channel separation in FDM

Frequency Division Multiplexing (FDM)

The FDM scheme is illustrated in figure 1 with the simultaneous transmission of three messages or baseband signals.
Fig 1

The spectra of the message signals and the sum of the modulated carriers are indicated in the figure.
Any type of modulation can be used in FDM as long as the carrier spacing is sufficient to avoid spectral
overlapping. However, the most widely used method of modulation is SSB modulation.
FDM is used in telephone system, telemetry, commercial broadcast, television, and communication
networks.
Commercial AM (amplitude modulation) broadcast stations use carrier frequencies spaced 10 kHz apart in the frequency range from 540 to 1600 kHz. This separation is not sufficient to avoid spectral overlap for AM with a reasonably high-fidelity (50 Hz to 15 kHz) audio signal. Therefore, AM stations on adjacent carrier frequencies are placed geographically far apart to minimize interference.
Commercial FM (frequency modulation) broadcast uses carrier frequencies spaced 200 kHz apart. In a long-distance telephone system, up to 600 or more voice signals (200 Hz to 3.2 kHz) are transmitted over a coaxial cable or microwave links by using SSB modulation with carrier frequencies spaced 4 kHz apart.
In practice, the composite signal formed by spacing several signals in frequency may, in turn, be modulated by using another carrier frequency. In this case, the first carrier frequencies are often called sub-carriers.
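The figures quoted above can be checked with a short sketch: a double-sideband AM signal occupies roughly twice the highest audio frequency, so 15 kHz audio cannot fit within a 10 kHz carrier spacing, whereas a 3.2 kHz SSB voice channel fits comfortably within a 4 kHz spacing. The helper functions are illustrative only.

```python
# Minimal sketch: compare the bandwidth a modulated signal needs with the carrier
# spacing, using the figures quoted above.

def dsb_bandwidth(max_audio_hz):
    return 2 * max_audio_hz            # DSB AM: upper + lower sideband

def fits(required_hz, spacing_hz):
    return required_hz <= spacing_hz

# AM broadcast: 15 kHz audio against a 10 kHz carrier spacing
print(fits(dsb_bandwidth(15_000), 10_000))   # False -> adjacent channels overlap

# SSB telephone channel: 3.2 kHz voice against a 4 kHz carrier spacing
print(fits(3_200, 4_000))                    # True -> hundreds of channels can be stacked
```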

explain the mechanism of channel separation in FDM.

frequency-division multiplexing (FDM)


Frequency-division multiplexing (FDM) is a scheme in which numerous signals are combined for
transmission on a single communications line or channel. Each signal is assigned a different frequency
(subchannel) within the main channel.
A typical analog Internet connection via a twisted pair telephone line requires approximately three
kilohertz (3 kHz) of bandwidth for accurate and reliable data transfer. Twisted-pair lines are common in
households and small businesses. But major telephone cables, operating between large businesses,
government agencies, and municipalities, are capable of much larger bandwidths.
Suppose a long-distance cable is available with a bandwidth allotment of three megahertz (3 MHz). This
is 3,000 kHz, so in theory, it is possible to place 1,000 signals, each 3 kHz wide, into the long-distance
channel. The circuit that does this is known as a multiplexer. It accepts the input from each individual
end user, and generates a signal on a different frequency for each of the inputs. This results in a high-
bandwidth, complex signal containing data from all the end users. At the other end of the long-distance
cable, the individual signals are separated out by means of a circuit called a demultiplexer, and routed to
the proper end users. A two-way communications circuit requires a multiplexer/demultiplexer at each
end of the long-distance, high-bandwidth cable.
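A minimal sketch of the frequency assignment in this example: the 3 MHz line is divided into 3 kHz subchannels and each end user is given a different subchannel, and hence a different center frequency. The contiguous, equally spaced layout is an assumption made purely for illustration.

```python
# Minimal sketch of the example above: a 3 MHz long-distance line divided into
# 3 kHz subchannels, one per end user, each with its own center frequency.

LINE_BANDWIDTH_HZ = 3_000_000
SUBCHANNEL_HZ = 3_000

def subchannel_center(index):
    # subchannel 0 occupies 0-3 kHz, subchannel 1 occupies 3-6 kHz, and so on
    return index * SUBCHANNEL_HZ + SUBCHANNEL_HZ // 2

max_users = LINE_BANDWIDTH_HZ // SUBCHANNEL_HZ
print(max_users)                 # 1000 signals, as stated in the text
print(subchannel_center(0))      # 1500 Hz
print(subchannel_center(999))    # 2998500 Hz, still inside the 3 MHz allotment
```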
When FDM is used in a communications network, each input signal is sent and received at maximum
speed at all times. This is its chief asset. However, if many signals must be sent along a single long-
distance line, the necessary bandwidth is large, and careful engineering is required to ensure that the
system will perform properly. In some systems, a different scheme, known as time-division multiplexing,
is used instead.

describe the essential features of TDM.

Time Division Multiplexing


1. TDM is a digital multiplexing technique.
2. In TDM, the channel/link is not divided on the basis of frequency but on the basis of time.
3. The total time available in the channel is divided between several users.
4. Each user is allotted a particular time interval, called a time slot or time slice, during which the data is transmitted by that user.
5. Thus each sending device takes control of the entire bandwidth of the channel for a fixed amount of time.

6. In TDM the data rate capacity of the transmission medium should be greater than
the data rate required by sending or receiving devices.

7. In TDM all the signals to be transmitted are not transmitted simultaneously. Instead, they are transmitted one-by-one.

8. Thus each signal will be transmitted for a very short time. One cycle or frame is
said to be complete when all the signals are transmitted once on the transmission
channel.

9.  The TDM system can be used to multiplex analog or digital signals, however it is
more suitable for the digital signal multiplexing.

10. The TDM signal in the form of frames is transmitted on the common
communication medium.

Types of TDM

1. Synchronous TDM
2. Asynchronous TDM

Synchronous TDM (STDM)

1. In synchronous TDM, each device is given the same time slot to transmit its data over the link, irrespective of whether the device has any data to transmit or not. Hence the name synchronous TDM. Synchronous TDM requires that the total speed of the various input lines does not exceed the capacity of the path.
2. Each device places its data onto the link when its time slot arrives, i.e. each device is given possession of the line in turn.
3. If any device does not have data to send, then its time slot remains empty.
4. The various time slots are organized into frames, and each frame consists of one or more time slots dedicated to each sending device.
5. If there are n sending devices, there will be n slots in a frame, i.e. one slot for each device.

6. As shown in the figure, there are 3 input devices, so there are 3 slots in each frame.

Multiplexing Process in STDM

1. In STDM every device is given the opportunity to transmit a specific amount of data onto the link.
2. Each device gets its turn in a fixed order and for a fixed amount of time. This process is known as interleaving.
3. We can say that the operation of STDM is similar to that of a fast interleaved switch. When the switch opens in front of a device, the device gets a chance to place its data onto the link.
4. Such interleaving may be done on the basis of a bit, a byte, or any other data unit.
5. In STDM, the interleaved units are of the same size, i.e. if one device sends a byte, the others will also send a byte, and so on.
6. As shown in the figure, interleaving is done by character (one byte). Each frame consists of four slots as there are four input devices. The slots of some devices go empty if they do not have any data to send.
7. At the receiver, the demultiplexer decomposes each frame by extracting each character in turn. As a character is removed from the frame, it is passed to the appropriate receiving device.

                            

Disadvantages of Synchronous TDM

1. The channel capacity cannot be fully utilized. Some of the slots go empty in certain frames. As shown in the figure, only the first two frames are completely filled; the last three frames have 6 empty slots. That means that out of 20 slots in all, 6 slots are empty, wasting roughly 30% of the link capacity.
2. The capacity of the single communication line used to carry the various transmissions must be greater than the total speed of the input lines.
Asynchronous TDM 
1. It is also known as statistical time division multiplexing.
2. Asynchronous TDM is called so because in this type of multiplexing the time slots are not fixed, i.e. the slots are flexible.
3. Here, the total speed of input lines can be greater than the capacity of the path.
4. In synchronous TDM, if we have n input lines then there are n slots in one frame. But in asynchronous
it is not so.
5. In asynchronous TDM, if we have n input lines then the frame contains not more than m slots, with m
less than n (m < n).
6. In asynchronous TDM, the number of time slots in a frame is based on a statistical analysis of number
of input lines.
7. In this system slots are not predefined; the slots are allocated to any of the devices that have data to send.
8. The multiplexer scans the various input lines, accepts the data from the lines that have data to send,
fills the frame and then sends the frame across the link.
9. If there are not enough data to fill all the slots in a frame, then the frames are transmitted partially
filled.
10. Asynchronous Time Division Multiplexing is depicted in fig. Here we have five input lines and three
slots per frame.
11. In Case 1, only three out of five input lines place data onto the link i.e. number of input lines and
number of slots per frame are same.
12. In Case 2, four out of five input lines are active. Here the number of input lines is one more than the number of slots per frame.
13. In Case 3, all five input lines are active.
In all these cases, multiplexer scans the various lines in order and fills the frames and transmits them
across the channel.
The distribution of various slots in the frames is not symmetrical. In case 2, device 1 occupies first slot in
first frame, second slot in second frame and third slot in third frame.
                         

                           

Advantages of TDM :

1. Full available channel bandwidth can be utilized for each channel.


2. Intermodulation distortion is absent.

3. TDM circuitry is not very complex.


4. The problem of crosstalk is not severe.
Disadvantages of TDM :
1. Synchronization is essential for proper operation.
2. Due to slow narrowband fading, all the TDM channels may get wiped out.

define the terms used in TDM

Definition - What does Time Division Multiplexing (TDM) mean?


Time division multiplexing (TDM) is a communications process that transmits two or more streaming digital signals over a common channel. In TDM, incoming signals are divided into equal fixed-length time slots. After multiplexing, these signals are transmitted over a shared medium and reassembled into their original format after demultiplexing. The choice of time slot length has a direct effect on overall system efficiency.
Time division multiplexing (TDM) is also known as digital circuit switching.

TDM was initially developed in the 1870s for telegraphy on large systems. Packet switching networks use TDM for telecommunication links, i.e., packets are divided into fixed lengths and assigned fixed time slots for transmission. Each divided signal or packet, which must be transmitted within its assigned time slots, is reassembled into a complete signal at the destination.

TDM comprises two major categories: asynchronous (statistical) TDM and synchronous time division multiplexing (sync TDM). TDM is used for long-distance communication links and bears heavy data traffic loads from end users, while sync TDM is used for high-speed transmission.

During each time slot a TDM frame (or data packet) is created as a sample of the signal of
a given sub-channel; the frame also consists of a synchronization channel and sometimes
an error correction channel. After the first sample of the given sub-channel (along with its
associated and newly created error correction and synchronization channels) are taken,
the process is repeated for a second sample when a second frame is created, then
repeated for a third frame, etc.; and the frames are interleaved one after the other. When
the time slot has expired, the process is repeated for the next sub-channel.

Examples of utilizing TDM include digitally transmitting several telephone conversations over the same four-wire copper cable or fiber optic cable in a TDM telephone network; these systems may be pulse code modulation (PCM) or plesiochronous digital hierarchy (PDH) systems. Another example is the sampling of left and right stereo signals, which are interleaved in the resource interchange file format (RIFF) audio standard, also referred to as waveform audio file format (WAV). The synchronous digital hierarchy (SDH) and synchronous optical networking (SONET) network transmission standards have also incorporated TDM, and these have surpassed PDH.

TDM can also be used within time division multiple access (TDMA) where stations sharing
the same frequency channel can communicate with one another. GSM utilizes both TDM
and TDMA.
explain the benefits of digital time division multiplexing (TDM) systems.

TDM combines multiple digital signals into a single serial digital bit stream. A specialized circuit,
called a serializer, allocates parallel input streams into time slots in the serial output. In a fiber
optic system, the serial bit stream is transmitted as a single wavelength down a  fiber optic cable.
On the far end of the channel, a deserializer reconstructs the original parallel signal from the
serial bit stream as shown in Figure 1. The serial data rate must be sufficiently fast to ensure no
data is lost. Fiber optic transmitters and receivers for high resolution AV signals typically operate
at a 4 to 6 Gbps data rate.

Figure 1 a Serializer – Deserializer

TDM is used to transmit a wide variety of signals, including HDMI, DVI, 3G-SDI, RGB, HD and
SD component video, S-video, composite, USB, audio, and RS-232 control. In modern fiber optic
AV systems, analog video and audio are converted to digital signals, avoiding nonlinear effects
that plague direct optical conversion of analog signals. Digital transmission ensures high-
resolution video is transmitted pixel-for-pixel along the fiber optic cable.
 

Figure 2 TDM Fiber Optic Transmitter and Receiver for HDMI/DVI, Audio and Control
 
The transmitter in Figure 2 accepts HDMI/DVI video, stereo audio, and RS-232 control signals.
The multiplexer combines the signals as a serial stream of digital pulses. An electrical-to-optical –
E-to-O converter changes the digital pulses to light pulses at a single wavelength for transmission
down the fiber. The receiver on the far end converts the signal from optical to electrical – O-to-E
before deserializing to restore the original signal.

describe the relationship between sampling rate, channel time slot and the possible
number of channels

Multiplexing
Multiplexing is the process of combining two or more information channels into a single
transmission medium. There are a number of different standards that can be applied to the
process. Many standards are common and are applied by manufacturers and carriers on a world-
wide basis. This assures that a multiplexing protocol used in Japan can also be used in Brazil,
Kansas, New York, or Tulsa.

This section will focus on two of the most common types of multiplexing – TDM (time division multiplexing) and PDM (packet division multiplexing). Additionally, this section will describe the primary multiplexing protocol in use in North America: T-1.

T-1 was the basic type of multiplexing scheme selected by Bell Labs for high capacity
communication links. It is so pervasive in the world wide telephone network, that replacement
protocols are just being introduced. A second type of multiplexing – Packet - will also be
discussed, because this is now being used to support the conversion of the telephone networks
from analog to digital. Packet multiplexing is used to support IP and Ethernet.

Time Division Multiplexing


Time Division Multiplexing is a method of putting multiple data streams in a single
communication channel, by separating it into many segments. Each segment is of a very short
duration and always occurs at the same point in time within the main signal. In TDM, there is a
direct relationship between the connections (ports) on the multiplexing hardware and the
multiplexing protocol. The data for port number one of the multiplexer always falls within the
same time period (time frame #1) – because the originating end multiplexer always places the
data for each communication port at the same place.

Figure 2-8: TDM Process Flow Chart

Based on this timing, the receiving end multiplexer is able to funnel data to the correct port. There is no data added to the bit stream to identify the data. With TDM, the general rule is "time slot (frame) one = communication port one". A T-1 multiplexer has 24 ports (one for each time slot). Therefore, time slot one = port one; time slot two = port two; etc. In a TDM transmission, each time slot is always present even if only one time slot actually has data. The full bandwidth is always used.

Packet Division Multiplexing


Packet Division Multiplexing (PDM) is a method of breaking the data into multiple groups. Each
group is given an identity at the origination so that the multiplexer at the receiving end can
assemble groups of data to re-create the information as it was originated. In theory, many
information sources can be funneled through a single communication port on both ends. In some
packet multiplexing schemes, the size of the packet – the amount of data included – can vary to
provide greater throughput of information. With packet multiplexing, the bandwidth used never
exceeds the total amount of data in all packets being transmitted. The PDM scheme is important
because it is the basis for the new generation of broadband communication processes. Ethernet
is an example of a transmission protocol that relies on packet division multiplexing.

Figure 2-9: Flow Chart – PDM Process

TDM is excellent for supporting voice communications and broadcast quality video, because each
service gets the amount of bandwidth required. PDM is excellent for data transmission because it
only uses the bandwidth required and requires less hardware. The technologies of TDM and PDM
have both been available for many years. TDM was more broadly deployed because it suited the
requirement of the telephone companies to provide high quality voice communications. However,
with recent advances in communication technology this is changing.

The following table provides a comparison of packet and time division multiplexing:

Table 2-8: Comparison of TDM & PDM


Time Division Multiplexing                            Packet Multiplexing
Fixed bandwidth                                       Bandwidth varies based on need
Data placed in time frames                            Data placed in packets
Data for individual channels always in same place     Data for individual channels identified within packets
Ideal for transporting voice & video                  Ideal for transporting data

T-1 Communication Systems

T-1 Analog to Digital Conversion Process: A voice signal is changed from analog to digital within a channel bank by two processes. First, the analog signal is sampled 8000 times each second, and each sample is converted to a discrete voltage level. Second, each voltage is converted to a binary code represented by an 8-bit word. Therefore, 8000 samples per second times 8 bits is 64,000 bits per second: a DS-0 communication channel.

The T-1 based digital network has been under development for more than 40 years. During this
time, a hierarchy of transmission levels has been implemented through a wide variety of
equipment. The primary device is a channel bank which can be arranged to carry many different
voice, analog data, or digital data signals. Port cards in the channel bank are used to support the
type of inputs into the T-1. The most common are voice (POTS), and digital data (DDS).

A T-1 contains 24 signals (or channels). Each channel is represented by 8 bits, for a total of 192 bits within a single frame. A bit is added for management (synchronization, error checking, etc.), and the result is a 193-bit T-1 frame. Because the sampling rate of each channel is 8000 times per second, a T-1 carries 8000 frames per second, or 1,544,000 bits per second. In a TDM system, each channel is recognized within each frame at exactly the same point in time. That is, for example, channel one never appears in a channel five time slot.
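The T-1 arithmetic above can be verified in a few lines; the constant names are illustrative, and the numbers come from the text.

```python
# Minimal sketch of the T-1 arithmetic: 24 channels of 8 bits plus one framing bit
# per frame, with 8000 frames per second.

CHANNELS = 24
BITS_PER_CHANNEL = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000          # one frame per sampling instant

bits_per_frame = CHANNELS * BITS_PER_CHANNEL + FRAMING_BITS
line_rate = bits_per_frame * FRAMES_PER_SECOND
ds0_rate = BITS_PER_CHANNEL * FRAMES_PER_SECOND

print(bits_per_frame)   # 193 bits per T-1 frame
print(line_rate)        # 1,544,000 bit/s -> the 1.544 Mbps T-1 rate
print(ds0_rate)         # 64,000 bit/s -> one DS-0 voice channel
```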

This most common form of multiplexing is used primarily for voice channel services. In fact
almost all P.O.T.S. and Special Service communication circuits are multiplexed for transport
between Telephone Company Central Offices. Multiplexing (from the telephone company
perspective) was developed to obtain efficiency in the use of the available communication cable
plant. The telephone companies were able to provide services to more customers without the
expensive installation of new communication cables.

There were a number of early attempts at providing an efficient multiplexing protocol for voice-based services, but the carriers had to consider the overall quality of the voice communication. Ultimately, the standard for "toll-grade" voice was set as the transmission of frequencies between 0 Hz and 4000 Hz.

The voice frequencies are digitized for multiplexing via a process that samples the signal at 8000 points in a period of one second. Each digital sample point is produced as an 8-bit character. Therefore, each voice channel uses 64,000 bits per second. Twenty-four voice channels are combined into a single multiplexed communication channel referred to as a T-1.

Because the telephone company needs to monitor and manage the T-1 circuit, it "steals" 8000 bits per second. A few bits are taken from the sample points, and the caller never notices a reduction in quality.

Most data transmission is accomplished using a dial-up modem. The modem converts the data
output of a computer (or other device – traffic signal field controller, dynamic message sign,
etc.) to a VF (voice frequency) signal. This signal is treated as if it were a P.O.T.S telephone call.
The modem dials a telephone number associated with another modem and a connection is made
via the switched voice network. The telephone network does not treat the call as if it were
something special. In terms of T-1 multiplexing, it is treated as if it were a normal telephone call.

Figure 2-10: Diagram of Computer Digital Output Converted to Analog Using a MODEM
Multiplexing does reduce some of the overall quality of the transmission, but does not affect its
usability. Unfortunately, each step in the process of getting data transmitted from one location to
another can introduce a problem. Therefore, when troubleshooting, it is important to check every
segment of the transmission path and all attached equipment – especially if the trouble cannot
be found at either of the termination points. T-1 multiplexers are designed to convert analog
signals to digital, therefore, data output by a computer must be converted to analog before being
multiplexed.

In addition to dial-up data transmission, the telephone companies offer two additional services:
2/4 wire analog circuits (as mentioned above) and DDS (Digital Data Service). The 2/4 wire
services are generally used for transmission rates of 9600 bits per second, or slower. DDS
(originally offered for data rates from 2400 bits per second and higher) is used primarily for data
transmitted at 56,000 bits per second. A DSU/CSU is required to condition a digital output from a
computer to a format that will travel over telephone lines. Because the signal is no longer
analog, a special interface card must be installed in the T-1 multiplexer (see below).

Transporting Digital Communications via an Analog Network


Whenever a digital signal is to be transported via the public telephone network it must be "conditioned" for travel. This is because the basic wiring infrastructure was designed to transport an analog communication signal. Analog communication is accomplished by providing a variable electrical signal which varies with the frequency of human speech. Changes in volume and pitch are represented by a smooth flowing electrical current with positive and negative values. A
dial-up modem converts your computer output to a series of analog tones which can be
transported via the network in the same manner as a voice telephone call.

Figure 2-11: Diagram of Analog Inputs to T-1 Mux

Digital signals are different. Data is represented by the presence, or absence of an electrical
signal – "on or off". "On" represents 1, "off" represents 0. Electrical signals can have either a
positive, or a negative value. The digital output of your computer must be converted to
something that is compatible with the existing telephone network. A Data Service Unit (DSU) is
used to convert the on/off electrical signal to something that looks like an analog signal.
Electrical voltages representing 1 are given alternating positive and negative values. The
momentary absence of an electrical signal is assigned a zero value. The DSU is normally used in
conjunction with a Channel Service Unit (CSU). The CSU is used as a management tool to make
certain that the communications link is performing to specification.

High Capacity Broadband Transmission


This section describes various high capacity and broadband transmission systems. When the
telephone companies first deployed T-1 services, they called this "High Capacity" digital services.
Video transmission for broadcast or conferencing used multiple T-1s, or multiple T-3 circuits.
With the deregulation of telephone companies and the rise in demand for very high capacity, a
new type of service was developed – broadband. As explained, T-1's are formatted by the
telephone carriers, but large corporate users, government entities, and even the home user want
un-formatted bandwidth to be available for a large number of services.

T-1/DS-1 & T-3/DS-3


T-1 and DS-1 services are fixed point-to-point systems, and dedicated to a single customer.
These types of services are most often used to connect Traffic Operations Centers. They may
also be used to bring the data and video images from a section of freeway back to a TOC.

DDS are digital voice channel equivalents as described previously and are used as a fixed point-
to-point service. T-1 service is channelized to accommodate 24 DDS circuits.

The terms T-1 and DS-1 are often used interchangeably, but each is a distinctly different service
provided by telephone companies and carriers. T-1 service is channelized with the carrier
providing all multiplexing (channel banks) equipment. The customer is provided with 24 DS-0
interfaces. Each DS-0 interface has a maximum data capacity of 56 kbps (or can accommodate
one voice circuit). The customer tells the carrier how to configure the local channel bank
(multiplexer).

DS-1 service allows the customer to configure the high speed circuit. The customer provides
(and is responsible for maintaining) all local equipment – multiplexer, and DSU/CSU. The carrier
provides (and maintains) the transmission path. The customers can channelize the DS-1 to their
own specifications as long as the bandwidth required does not exceed 1.536 Mbps, and the DS-1
signal meets applicable AT&T, Bellcore (Telcordia) and ANSI standards (these standards are now
maintained and available through Telcordia).

Customers may purchase fractional service to save money. In this case, they don't pay for a full
T-1 or DS-1. However, the economies for this type of service are only realized for longer
distances. The local loop (link) for Fractional T-1 is still charged at the full service rate.

T-3 and DS-3 services are essentially higher bandwidth variants of T-1 and DS-1. The T-3
provides either 28 T-1s or 28 DS-1s, and the DS-3 provides about 44 Mbps of contiguous
bandwidth. DS-3s are used for Distance Learning and broadcast quality video. They are also
used in enterprise networks to connect major office centers.

T-1, DS-1, T-3, and DS-3 have the following characteristics:

 They are all private line services


 They are all provided on a fixed point-to-point basis
 Most users pay for the installation of the service
 Users pay a monthly fee based on distance – very large corporate customers are able to
negotiate favorable rates for DS-1 and DS-3 services.
 Users pay a fixed monthly connection and maintenance fee

DSL
DSL (Digital Subscriber Loop) services are DS-1 and Fractional DS-1 variants that use existing
P.O.T.S. service telephone lines to provide broadband services at a substantially lower cost.
Local DS-1 service starts at $500 per month. Basic DSL service starts at less than $50 per
month. The primary difference between the services is that DS-1 is setup as a private line
system with fixed communication points. DSL service is typically used to provide broadband
internet connectivity. Some carriers will create a "quasi" private circuit by linking two customer
locations to a common Central Office. The primary advantage of DSL for the homeowner or small
business is that it shares the existing telephone lines and it keeps the cost of installation at a
much lower level.

DSL service has the following characteristics:

 Typically provided in an ADSL (Asymmetrical DSL) format with the link toward the
customer (download) at a higher rate than the link toward the Internet (upload).
 DSL can be provided as SDSL (Symmetrical DSL) when the user needs to have equal
bandwidth bi-directionally.
 DSL is a shared service. At peak use times, the available bandwidth is reduced. Many
users have complained that they get better service via dial-up.
 DSL offerings start at 384 kbps/128 kbps and can rise to 6.0 Mbps/6.0 Mbps (monthly rates start at $30 and rise to $400).
 DSL can only be used when the customer is no more than 18,000 "wire feet" from the
Serving Central Office. Best service is available when the customer is less than 12,000
"wire feet" distant.
 DSL is most often used to provide internet access.

SONET
SONET (Synchronous Optical Network) is the first fiber optic based digital transmission
protocol/standard. The SONET format allows different types of transmission signal formats to be
carried on one line as a uniform payload with network management. A single SONET channel will
carry a mixture of basic voice, high and low speed data, video, and Ethernet. All of these signals
will be unaffected by the fact that they are being transported as part of a SONET payload.

The SONET standard starts at the optical equivalent of DS-3. This is referred to as an OC-1
(Optical Carrier 1). The optical carrier includes all of the DS-3 data and network management
overhead, plus a SONET network management overhead. In North America, the following SONET
hierarchy is used: OC-3; OC-12; OC-48; OC-96; OC-192. The number indicates the total of DS-3
channel equivalents in the payload.

Within the payload, SONET network management allows for the shifting of DS-1 circuits. SONET
is a point-to-point TDM system, but it has the ability to allow users to set up a multipoint
distribution of DS-1s and DS-3s. Therefore, it is possible to direct a DS-1 from one location to
many locations within the SONET network. The reader should note that SONET does not provide
"bandwidth-on-demand". The routing of portions of the SONET payload to multiple points must
be planned and built into a routing table.

A SONET network management program provides the ability to set up multiple routing plans.
These plans can be executed as part of a program to restore service in the event of an outage in
a portion of the network. Some early adopters of SONET attempted to use this feature to provide
for "time-of-day" routing changes. Often users were disappointed with the results.

Normally, SONET is transmitted in groups of DS-3s (OC-3, OC-12, OC-48, etc). In this mode, the
SONET payload is segmented within the DS-3. However, it is possible to combine DS-3s into a
single channel. An OC-3C (concatenated) is a group of DS-3s combined into a single payload to
allow for the total use of the OC-3 as a single data stream.
ATM
Asynchronous Transfer Mode (ATM) is a widely deployed communications backbone technology
based on Packet Multiplexing. ATM is a data-link layer protocol that permits the integration of
voice and data, and provides quality of service (QoS) capabilities. This standards-based transport
medium is widely used for access to a wide-area (WAN) data communications networks. ATM
nodes are sometimes called "Edge Devices". These Edge Devices facilitate telecommunications
systems to send data, video and voice at high speeds.

ATM uses sophisticated network management features to allow carriers to guarantee quality of
service. Sometimes referred to as cell relay, ATM uses short, fixed-length packets called cells for
transport. Information is divided among these cells, transmitted and then re-assembled at their
final destination. Carriers also offer "Frame Relay" service for general data requirements that
can accept a variable packet or frame size. Frame Relay systems use variable cell (packets)
based on the amount of data to be transmitted. This allows for a more efficient use of a data
communications network.

ATM services are offered by most carriers. A number of DOTs are using this type of service –
especially in metropolitan areas – to connect CCTV cameras (using compressed video), traffic
signal systems, and dynamic message signs to Traffic Operations Centers. The stable packet size
is well suited for video transmission. ATM is generally not used by telephone companies for toll
grade voice, although its stable packet size was developed to meet requirements for voice
service.

In a WAN (wide area network), ATM is most often used as an "edge" transport protocol. ATM
devices typically have ports that allow for easy connectivity of legacy systems and the newer
communications systems. In a private (or enterprise) network, as shown in figure 2-12, ATM is
effectively used for voice and video transport as well as data.

Figure 2-12: Diagram – ATM over SONET Network

ATM has fixed-length "cells" of 53 bytes, in contrast to the variable-length "frames" of Frame Relay and Ethernet. The cell size represents a compromise between the large frame requirements of data transmission and the relatively small needs of voice calls. By catering to both forms of network traffic, ATM can be used to handle an end user's entire networking needs, removing the need for separate data and voice networks. The performance, however, can also be compromised, and the network may not be as efficient as dedicated networks for each service. ATM systems usually require DS-1 circuits, but can be made to work in a lower speed environment.
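A minimal sketch of ATM-style segmentation of a message into fixed 53-byte cells. The 5-byte header / 48-byte payload split is the standard ATM cell layout and is assumed here (the text above quotes only the 53-byte cell size), and the header used below is a placeholder rather than a real ATM header with VPI/VCI fields.

```python
# Minimal sketch of ATM-style segmentation: a variable-length message is carried in
# fixed 53-byte cells (assumed: 5-byte header + 48-byte payload per cell).

CELL_BYTES = 53
HEADER_BYTES = 5
PAYLOAD_BYTES = CELL_BYTES - HEADER_BYTES       # 48 bytes of user data per cell

def segment(data: bytes, channel_id: int):
    cells = []
    for i in range(0, len(data), PAYLOAD_BYTES):
        payload = data[i:i + PAYLOAD_BYTES].ljust(PAYLOAD_BYTES, b"\x00")  # pad the last cell
        header = channel_id.to_bytes(HEADER_BYTES, "big")  # placeholder header only
        cells.append(header + payload)
    return cells

cells = segment(b"x" * 120, channel_id=7)
print(len(cells), [len(c) for c in cells])      # 3 cells, each exactly 53 bytes long
```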

ATM does have a reputation for being difficult to interface to an existing network. However,
competent network technicians can usually overcome most difficulties. Missouri DOT is using an
ATM based network for its ATMS.

FDM
Frequency Division Multiplexing (FDM) is used when large groups of analog (voice or video)
channels are required. The available frequency bandwidth on an individual communications link
is simply divided into a number of sub-channels, each carrying a different communication
session. A typical voice channel requires at least 3 kHz of bandwidth. If the basic communication
link is capable of carrying 3 megahertz of bandwidth, approximately 1000 voice channels could
be carried between two points. Frequency Division Multiplexing was once used to carry several
low-speed (less than 2,400 bits per second) data channels between two points, but was abandoned
in favor of TDM, which can carry more data channels with greater capacity over longer distances
and with fewer engineering problems.
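
As a rough check of the channel count quoted above, the short Python sketch below simply divides
the available band by the per-channel allocation. The 3 MHz and 3 kHz figures are taken from the
text; real FDM plans also reserve guard bands between channels, which would reduce the count.

# Minimal sketch: estimating how many FDM voice channels fit on a link.
link_bandwidth_hz = 3_000_000   # 3 MHz of usable bandwidth on the link
voice_channel_hz = 3_000        # at least 3 kHz per analog voice channel

channels = link_bandwidth_hz // voice_channel_hz
print(channels, "voice channels")   # -> 1000 voice channels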

Many older Cable TV systems use FDM to carry multiple channels to customers. This type of
system was used by Freeway Management Systems to carry video over coaxial cable. However,
most coaxial systems have been replaced by fiber optic systems. Fiber has a greater bandwidth
capability than either coaxial cable or twisted pair. The FDM scheme allows for multiple broadband
video channels to be carried over a single strand of fiber.

WDM – CWDM & DWDM


Wave Division Multiplexing (WDM) is an optical variant of FDM. A beam of light is divided into
segments called lambdas. The Greek letter Lambda (λ) is used to represent the wave channels.
These lambdas are actually different colors of light. Because wavelength is inversely
proportional to frequency, WDM is logically equivalent to FDM.

Figure 2-13: DWDM Channels

Light transmitted over a fiber is normally a group of frequencies that can be used to create a
single communication channel or multiple channels. The frequency group can be broken into
several subgroups. The laser output of a multiplexer is "tuned" to a specific set of frequencies
to form a single communication channel. These channels are then transmitted with other
frequency groups via a wave division multiplexer. Unlike FDM, the information sent via the
frequency groups is digital.

Two variants of WDM are used: CWDM (coarse wave division multiplexing) and DWDM (dense
wave division multiplexing). DWDM systems can carry as many as 64 channels at 2.5 gigabits
per second per channel over a pair of fibers. Each DWDM lambda is equivalent to one OC-48
(48 DS-3s) or one Gigabit Ethernet channel (future systems will allow two GigE channels per lambda).
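
Using the figures quoted above (assumed here: 64 lambdas at 2.5 gigabits per second each), the
aggregate capacity of such a DWDM system can be worked out with a one-line calculation; this is
only an illustration of the arithmetic, not a statement about any particular product.

# Minimal sketch: aggregate capacity of the DWDM system described above.
lambdas = 64                   # wavelength channels per fiber pair
rate_per_lambda_gbps = 2.5     # roughly one OC-48 payload per lambda, in Gbit/s

total_gbps = lambdas * rate_per_lambda_gbps
print(total_gbps, "Gbit/s per fiber pair")   # -> 160.0 Gbit/s per fiber pair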

Ethernet
Ethernet is a packet-based network protocol, invented by the Xerox Corporation in 1973 to
provide connectivity between many computers and one printer. Ethernet was designed to work
over a coaxial cable that was daisy-chained (shared) among many devices. The original Xerox
design has evolved into an IEEE series of standards (802.XXX) with many variations that include
10Base-T, Fast-Ethernet (100Base-T), and GigE (Gigabit Ethernet).

The Ethernet system consists of three basic elements:

1. The physical medium used to carry Ethernet signals between computers,
2. A set of medium access control (MAC) rules embedded in each Ethernet interface that
allows multiple computers to fairly arbitrate access to the shared Ethernet channel,
3. An Ethernet frame that consists of a standardized set of bits used to carry data over the
system.

The most current configurations use twisted pair cabling with devices networked in a star configuration.
Each device has a direct connection to an Ethernet hub, router, or switch. This system then
provides each user with a connection to a printer, file server, another user computer (peer), or
any other device on the network.

Ethernet works by setting up a high-bandwidth shared connection that allows packets of data to move at
high speed through a network. This assures that many users can communicate with devices in a
timely manner. The Ethernet is shared, and under normal circumstances, no one user has
exclusivity.

Note: When Ethernet is used to connect only two devices via a common communication channel, CSMA
is not used. This is because the two devices can coordinate communication so that one does not
interfere with the other. This type of system can be used to transmit broadband video over long
distances to take advantage of the bandwidth and economics of using Ethernet.

Freeway Management Systems using incident detection CCTV cameras may use this type of
arrangement to facilitate Video over IP. This allows video from the field to be transported to the TCC
for further distribution.

Traditional Ethernet networks of the 1990s used a protocol called CSMA/CD (carrier sense
multiple access/collision detection). In this arrangement, the transmitting device looks at the
network to determine if other devices are transmitting. The device "senses" the presence of a
carrier. If no carrier is present, it proceeds with the transmission. The CSMA protocol is not
perfect, hence the need for Collision Detection. Occasionally, more than one device transmits
simultaneously and creates a "collision". If the originating device does not receive an
acknowledgement from the receiving device, it simply retransmits the information (not as part of
the Ethernet protocol, but part of an application protocol). In an office environment, where users
are trying to access a printer, or a file-server, this is normally not a problem. Most users are not
aware of any significant delays. In this arrangement all devices are wired to the network through
a "hub". The hub provides a central meeting point for all devices and users on the network, but
has very little intelligence for managing activity on the network. In fact, all communications on
the network are sent to every device on the network.

Each Ethernet-equipped computer, also known as a station, operates independently of all other
stations on the network: there is no central controller. All stations attached to an Ethernet are
connected to a shared signaling system, also called the medium. Ethernet signals are
transmitted serially, one bit at a time, over the shared signal channel to every attached station.
To send data, a station first listens to the channel, and when the channel is idle the station
transmits its data in the form of an Ethernet frame, or packet.

After each frame transmission, all stations on the network must contend equally for the next
frame transmission opportunity. This ensures that access to the network channel is fair, and that
no single station can lock out the other stations. Access to the shared channel is determined by
the medium access control (MAC) mechanism embedded in the Ethernet interface located in each
station. The medium access control mechanism is based on a system called Carrier Sense
Multiple Access with Collision Detection (CSMA/CD).
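
As a rough illustration of the CSMA/CD decision loop described above, here is a minimal Python
sketch. It is illustrative only: carrier_present() and collision_detected() are hypothetical stand-ins
for what the Ethernet interface hardware actually senses on the medium, and the timing and
probability values are arbitrary.

# Minimal sketch of the CSMA/CD transmit loop described above.
import random
import time

def carrier_present() -> bool:
    return random.random() < 0.3        # pretend the channel is busy 30% of the time

def collision_detected() -> bool:
    return random.random() < 0.1        # pretend 10% of attempts collide

def send_frame(frame: bytes, max_attempts: int = 16) -> bool:
    for attempt in range(max_attempts):
        while carrier_present():        # carrier sense: wait for an idle channel
            time.sleep(0.001)
        # start transmitting; if another station transmits at the same time,
        # a collision is detected and the station backs off and retries
        if not collision_detected():
            return True                 # frame sent successfully
        backoff_slots = random.randint(0, 2 ** min(attempt + 1, 10) - 1)
        time.sleep(backoff_slots * 0.0001)   # random (exponential) backoff
    return False                        # give up after too many collisions

print("frame delivered:", send_frame(b"example payload"))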

Modern networks still use CSMA/CD, but are managed by routers and switches. Routers are not
actually Ethernet devices – they operate at layer 3 of the OSI protocol stack. The router is able
to manage the flow of data between devices and has the intelligence to route information
between specific devices. A request to view a file from an individual computer is routed to the
specific file server storing the information. None of the other computers on the network see the
data request. However, the router simply routes the request; it does not manage several users
trying to access the same file server simultaneously.

Management of users in a network is accomplished by a Switch. The Switch has the intelligence
and computing "horsepower" to manage users and allocate bandwidth. A Switch can be set to
block some users from the system based on various factors, such as priority and time-of-day
requirements.

Following is a description of the most commonly used Ethernet protocols:

 10Base-T was for many years the most commonly used variant (as this handbook was being developed,
100Base-T was most common, with GigE catching on fast). This is typically run over
twisted pair copper and is adequate for most small office data communication
requirements.
 Fast Ethernet or 100Base-T can be run over the same twisted pair infrastructure. The
10Base-T protocol allows for a maximum throughput of 10 megabits per second, and
100Base-T allows 100 megabits per second. However, various factors keep these
systems from exceeding about 70 percent of stated capacity. Chief among these factors
are an excessive number of users and the condition of the wired infrastructure.
 100Base-TX is a transmission standard that uses two twisted copper pairs (four wires). Running
over the same cabling as 10Base-T, it is also referred to as "fast Ethernet" and provides 100 Mbps throughput.
 100Base-FX is a fiber optic transmission standard for Local Area Networks.
 GigE (gigabit Ethernet) is a very high bandwidth service (One gigabit per second) and is
being deployed in many large office networks. In addition to allowing more users onto the
network, GigE is capable of facilitating video between desktops, and desktop to desktop
conferencing between users. GigE is also the preferred communications protocol for
linking one office building to another. Metropolitan area networks (MANs) using DWDM facilitate
a "real-time" GigE link. IT departments can create a storage area network (SAN) to link
together many databases. Large financial institutions connect locations in a region to
facilitate electronic commerce using DWDM and GigE. Many telephone carriers are
beginning to add GigE services to their network offerings.
 10GigE protocols and standards are being developed by IEEE under 802.3ae and
802.3ak. The 10GigE standards are being developed for broadband metropolitan area
network (MAN) connectivity. At this time, there is no intention to use the standard to
support desktop applications; however, all things change. 10GigE systems will be
deployed using fiber optic transmission networks. Following is an example of a 10GigE
network.

Figure 2-14: Diagram – Typical Office LAN

Because most of these networks are deployed in office buildings, the twisted pair cables tend to
be run close to interference-causing electrical fixtures. The drawing shows a typical LAN for a
small office. Notice that most of the office computers are connected via a hub, and that the
printers, and communications services are connected via dedicated servers. This is simply one
type of network that can be established using Ethernet. In very small offices, every device can
be directly connected to a router. Large corporations will network several routers. The actual
network configuration is based on completing a requirements document (see chapter 4). In this
system every computer has access to every device, but the router can be programmed to restrict
access based on the unique identity of each computer work station.

Figure 2-15 provides a look at how multiple building networks are connected via a metropolitan
area network (MAN). Notice that there is one primary router in each building connected to a
10GigE network – each building is a node on the network. The common basis of Ethernet
protocols provides an easy expansion path. The system shown is actually a network of networks.

Figure 2-15: Diagram – Metro Area Network (MAN)

Conclusions
The design of telecommunications systems is an iterative process. Each piece of a system is
dependent upon the others. A simple example of this dependency can be found in the use of a
modem. Basic modems rely on the cables that connect them to a computer (a serial cable) and
the twisted-pair cable that connects them to a telephone network. Each of these elements is
dependent upon the other to provide a working system. These types of dependencies can be
found in all telecommunications systems. This chapter was organized to provide basic
information about individual elements and their relationships. Recognition of these relationships
will help to provide an effective design of a telecommunications network.

perform calculations using the relationship between sampling rate, channel time slot
and the possible number of channels.

During digital data acquisition, transducers output analog signals which must be digitized for a
computer.  A computer cannot store continuous analog time waveforms like the transducers
produce, so instead it breaks the signal into discrete ‘pieces’ or ‘samples’ to store them.
 
Data is recorded in the time domain, but often it is desired to perform a Fourier transform to view
the data in the frequency domain. There are unique terms used when performing a Fourier transform
on this digitized data, which are not always used in the analog case.  They are listed in Figure 1
below:
 

Figure 1: Time domain and frequency domain terms used in performing a digital Fourier transform

Whether viewing digital data in the time domain or in the frequency domain, understanding the
relationship between these different terms affects the quality of the final analysis.
 
Time Domain Terms

 Sampling Rate (Fs) – Number of data samples acquired per second
 Frame Size (T) – Amount of time over which data is collected to perform one Fourier transform
 Block Size (N) – Total number of data samples acquired during one frame

Frequency Domain Terms

 Bandwidth (Fmax) – Highest frequency that is captured in the Fourier transform, equal to half
the sampling rate
 Spectral Lines (SL) – After the Fourier transform, the total number of frequency domain samples
 Frequency Resolution (Δf) – Spacing between samples in the frequency domain

 
Sampling Rate (Fs)
Sampling rate (sometimes called sampling frequency or Fs) is the number of data points acquired per
second. 
 
A sampling rate of 2000 samples/second means that 2000 discrete data points are acquired every
second.  This can be referred to as 2000 Hertz sample frequency.
 
The sampling rate is important for determining the maximum amplitude and correct waveform of the
signal as shown in Figure 2. 

Figure 2: In the top graph, the 10 Hertz sine wave sampled at 1000 samples/second has correct
amplitude and waveform. In the other plots, lower sample rates do not yield the correct amplitude or
shape of the sine wave.

To get close to the correct peak
amplitude in the time domain, it is important to sample at least 10 times faster than the highest
frequency of interest.  For a 100 Hertz sine wave, the minimum sampling rate would be 1000
samples per second. In practice, sampling even higher than 10x helps measure the amplitude
correctly in the time domain.
 
It should be noted that obtaining the correct amplitude in the frequency domain only requires
sampling twice the highest frequency of interest.  In practice, the anti-aliasing filter in most data
acquisition systems makes the requirement 2.5 times the frequency of interest. The Bandwidth
section contains more information about the anti-aliasing filter.
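
The two rules of thumb above (roughly ten times the highest frequency of interest for time-domain
amplitude, and about 2.5 times when an anti-aliasing filter is present for the frequency domain) can
be applied as in the short Python sketch below. It is a minimal sketch; the factors are the guideline
values from the text, not hard requirements.

# Minimal sketch applying the sampling-rate rules of thumb described above.
highest_frequency_hz = 100.0   # highest frequency of interest in the signal

fs_time_domain = 10 * highest_frequency_hz        # ~10x rule for time-domain peak amplitude
fs_frequency_domain = 2.5 * highest_frequency_hz  # ~2.5x rule for the frequency domain

print("time-domain rule of thumb:", fs_time_domain, "samples/second")          # -> 1000.0
print("frequency-domain rule of thumb:", fs_frequency_domain, "samples/second")  # -> 250.0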
 
The inverse of sampling frequency (Fs) is the sampling interval or Δt.  It is the amount of time
between data samples collected in the time domain as shown in Figure 3.

  Figure 3: Sampling frequency and sampling interval relationship


The smaller the quantity Δt, the better the chance of measuring the true peak in the time domain.
 

 
Block Size (N)
The block size (N) is the total number of time data points that are captured to perform a Fourier
transform.  A block size of 2000 means that two thousand data points are acquired, then a Fourier
transform is performed.
 

Frame Size (T)


The frame size is the total time (T) to acquire one block of data. The frame size is the block size
divided by sample frequency as shown in Figure 4.
 

Figure 4: Frame size (T) equals block size (N) divided by sample frequency (Fs)
For example, with a block size of 2000 data points and a sampling rate of 1000 samples per second,
the total time to acquire a single data block is 2 seconds. It takes two seconds to collect 2000 data
points.
 
The frame size is also equal to the block size times the time resolution, Δt (Figure 5).

Figure 5: Frame size (T) equals block size (N) times the time resolution (Δt)
When performing averages on multiple blocks of data, the term "total time" can be used in
different ways (Figure 6) and the two should not be confused:
 

 Total Time to Acquire One Block – The frame size (T) is the time to acquire one data block,
for example, this could be two seconds
 Total Time to Average – If five blocks of data (two seconds each) are to be averaged, the
total time to acquire all five blocks (with no overlap) would be 10 seconds

Figure 6: Five averages of 2-second frames
The 'Throughput Processing knowledge base article' further explains the interaction between
frames and averages.
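
The frame-size relationships above can be put into a short Python sketch, using the example values
from the text (block size of 2000 points, sampling rate of 1000 samples per second, five averages
with no overlap); the numbers are only for illustration.

# Minimal sketch of the frame-size relationships described above.
block_size = 2000        # N, samples per frame
sampling_rate = 1000     # Fs, samples per second
averages = 5             # number of frames to average, with no overlap

frame_size = block_size / sampling_rate    # T = N / Fs -> 2.0 seconds
dt = 1 / sampling_rate                     # time resolution between samples
assert abs(frame_size - block_size * dt) < 1e-9   # T = N * dt, as in Figure 5

print("frame size T:", frame_size, "seconds")                                       # -> 2.0
print("total time for", averages, "averages:", averages * frame_size, "seconds")    # -> 10.0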
 
Bandwidth (Fmax)
The bandwidth (Fmax) is the maximum frequency that can be analyzed.  The bandwidth is half of the
sampling frequency (Figure 7).  The Nyquist sampling criterion requires setting the sampling rate to at
least twice the maximum frequency of interest.
 

Figure 7: Bandwidth, or the maximum frequency, is half the sample frequency (Fs)

A bandwidth of 1000 Hertz means that the sampling frequency is set to 2000 samples/second.
 
In fact, even with a sampling rate of 2000 Hz, the actual usable bandwidth can be less than the
theoretical limit of 1000 Hertz. This is because many data acquisition systems include an
anti-aliasing filter, which begins reducing the amplitude of the signal at 80% of the bandwidth.

Figure 8: At 80% of the bandwidth, an anti-aliasing filter starts reducing the amplitude of the incoming
signals. The 'Span' represents the frequency range without any anti-aliasing filter effects.
For a bandwidth of 1000 Hertz, the anti-aliasing filter reduces the bandwidth to 800 Hertz and below.
The filter attenuates frequencies above 800 Hertz in this case.
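
The bandwidth and usable span described above can be computed directly, as in this minimal Python
sketch (values taken from the example in the text; the 80% figure is the typical anti-aliasing cutoff
mentioned above, not a universal constant).

# Minimal sketch: theoretical bandwidth and usable span for the example above.
sampling_rate = 2000              # Fs, samples per second
bandwidth = sampling_rate / 2     # Fmax = Fs / 2 -> 1000.0 Hz
usable_span = 0.80 * bandwidth    # typical alias-free "span" -> 800.0 Hz

print("theoretical bandwidth:", bandwidth, "Hz")
print("usable span:", usable_span, "Hz")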
 
In Simcenter Testlab, under ‘Tools -> Options -> General’, it is possible to view only the
usable bandwidth by switching to ‘Span’ under ‘Frequency’ as shown in Figure 9.
 

Figure 9: Under 'Tools -> Options -> General', switch to 'Span' instead of 'Bandwidth' if desired

'Span' represents the actual usable bandwidth; switching to the 'Span' setting makes all the
Simcenter Testlab displays show only 80% of the bandwidth.
 
Spectral Lines (SL)

After performing a Fourier transform, the spectral lines (SL) are the total number of frequency
domain data points.  This is analogous to N, the number of data points in the time domain. There are
two data ‘values’ at each spectral line – an amplitude and a phase value as shown in Figure 10. 
  

Figure 10: At each frequency there is an amplitude (top graph) and phase (bottom graph)
Note that while the Fourier Transform results in amplitude and phase, sometimes the frequency
spectrum is converted to an autopower, which eliminates the phase.
 
The number of spectral lines is half the block size (Figure 11).

  Figure 11: Spectral lines equals half the block size


For a block size of 2000 data points, there are 1000 spectral lines.
 
Frequency Resolution
The frequency resolution (Δf) is the spacing between data points in frequency. The frequency
resolution equals the bandwidth divided by the spectral lines as shown in Figure 12.

Figure 12: Frequency resolution equals bandwidth (Fmax) divided by spectral lines (SL)

For example, a bandwidth of 16 Hertz with eight spectral lines has a frequency resolution of 2.0
Hertz (Figure 13).
 

Figure 13: Frequency resolution equals bandwidth (Fmax) divided by spectral lines (SL)

The eight frequency domain spectral lines are spread
evenly between 0 and 16 Hertz, which results in the 2.0 Hertz spacing on the frequency axis. Note
that 0 Hertz is not included in the spectral line total.  The calculated value at zero Hertz represents a
constant amplitude DC offset. For example, if a 1 Volt sine wave alternated around a 5 Volt offset,
the offset value would be placed at zero Hertz, while the sine wave's 1 Volt amplitude would be
placed at the spectral line corresponding to the sine wave's frequency.
 
Digital Signal Processing Relationships
Putting the above relationships together, the different digital signal processing parameters can be
related to each other (Figure 14).
 

Figure 14: Digital signal processing relationships
This can be boiled down to one 'golden equation' of digital signal processing (Figure 15), which
relates frame size (T) and frequency resolution (Δf): T = 1/Δf (a short numerical sketch follows the
list below).

Figure 15: The 'golden equation' of digital signal processing, T = 1/Δf

This means that:

 The finer the desired frequency resolution, the longer the acquisition time
 The shorter the acquisition time, or frame size, the coarser the frequency resolution
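
The following minimal Python sketch ties these relationships together using assumed example values
(2048-point blocks at 1024 samples per second); it confirms the 'golden equation' numerically and
shows why a finer frequency resolution requires a longer frame.

# Minimal sketch of the digital signal processing relationships above.
sampling_rate = 1024.0   # Fs, samples per second (assumed example value)
block_size = 2048        # N, samples per frame (assumed example value)

frame_size = block_size / sampling_rate        # T  = N / Fs     -> 2.0 s
bandwidth = sampling_rate / 2                  # Fmax = Fs / 2   -> 512.0 Hz
spectral_lines = block_size // 2               # SL = N / 2      -> 1024 lines
freq_resolution = bandwidth / spectral_lines   # df = Fmax / SL  -> 0.5 Hz

# The 'golden equation': frequency resolution is the reciprocal of frame size.
assert abs(freq_resolution - 1 / frame_size) < 1e-12

print("frame size T:", frame_size, "seconds")
print("frequency resolution:", freq_resolution, "Hz")
# A 0.5 Hz resolution is fine enough to separate the 100 Hz and 101 Hz tones in
# the example that follows; a 1.0 Hz resolution is not.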

The frequency resolution is important to accurately understand the signal being analyzed.  In Figure
16, two sine tones (100 Hertz and 101 Hertz) have been digitized, and a Fourier Transform
performed. This was done with two different frequency resolutions: 1.0 Hertz and 0.5 Hertz.

Figure 16: Left – Spectrum with 1.0 Hertz frequency resolution makes two separate tones appear as one
peak. Right – Spectrum with 0.5 Hertz frequency resolution makes two separate tones appear as two
different peaks.

explain elements of statistical time division multiplexing (TDM)

Statistical time-division multiplexing (STDM) is a form of communication link sharing that is almost
identical to dynamic bandwidth allocation (DBA).
In STDM, a communication channel is divided into an arbitrary number of variable bit-rate data streams
or digital channels. The link sharing is adapted to the instantaneous traffic demands of the data streams
that are transmitted over each channel.
This type of multiplexing is an alternative to fixed link sharing, such as standard time division
multiplexing (TDM) and frequency division multiplexing (FDM). When performed well, STDM can offer
an improvement in link utilization, referred to as the statistical multiplexing gain. STDM is facilitated
by means of packet-mode or packet-oriented communication.

STDM is more efficient than standard TDM. In standard TDM, time slots are allotted to channels even
when there is no data to transmit, which wastes bandwidth. STDM was originally developed to address
this inefficiency: time is allocated to a line only when it is actually required. This is achieved through
intelligent devices that can identify when a terminal is idle.
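
The difference between fixed and on-demand slot allocation can be illustrated with a small Python
sketch. The four-terminal scenario and traffic figures are invented purely for illustration; real
statistical multiplexers also add per-frame addressing overhead, which is not modelled here.

# Minimal sketch contrasting fixed TDM slots with statistical (on-demand) allocation.
queued_frames = {"A": 3, "B": 0, "C": 1, "D": 0}   # frames waiting at each terminal
slots_per_cycle = 4                                 # slots available in one cycle

# Standard TDM: one slot per terminal, whether or not it has data to send.
tdm_slots_carrying_data = sum(1 for frames in queued_frames.values() if frames > 0)
print("TDM slots carrying data:", tdm_slots_carrying_data, "of", slots_per_cycle)   # -> 2 of 4

# Statistical TDM: slots are granted only to terminals with queued data.
stdm_schedule = []
for terminal, frames in queued_frames.items():
    stdm_schedule.extend([terminal] * frames)
stdm_schedule = stdm_schedule[:slots_per_cycle]
print("STDM slot assignment:", stdm_schedule)   # -> ['A', 'A', 'A', 'C'], no idle slots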

STDM is the same as TDM, except that each signal is assigned a slot based on priority and demand.
This means that STDM is an "on-demand" service rather than a fixed one. Standard TDM and various
other forms of circuit switching are implemented at the physical layer of the OSI and TCP/IP models,
while STDM is implemented at the data link layer and above.

Examples of statistical time-division multiplexing include:

 The MPEG transport stream used for digital TV transmission, where STDM permits multiple data,
audio and video streams of different data rates to be broadcast across a bandwidth-limited channel.
 The TCP and UDP protocols, in which data streams from various application processes are
multiplexed together.
 The Frame Relay packet-switching and X.25 protocols, in which the packets have different lengths.
 The Asynchronous Transfer Mode packet-switched protocol, in which the packets have a fixed
length.

describe code division multiplexing (CDM) in terms of its essential elements

CDMA (Code-Division Multiple Access) refers to any of several protocols used in second-generation
(2G) and third-generation (3G) wireless communications. As the term implies, CDMA is a form of
multiplexing, which allows numerous signals to occupy a single transmission channel, optimizing the use
of available bandwidth. The technology is used in ultra-high-frequency (UHF) cellular telephone systems
in the 800-MHz and 1.9-GHz bands.

CDMA employs analog-to-digital conversion (ADC) in combination with spread spectrum technology.
Audio input is first digitized into binary elements. The frequency of the transmitted signal is then made to
vary according to a defined pattern (code), so it can be intercepted only by a receiver whose frequency
response is programmed with the same code and therefore follows the transmitter frequency exactly.
There are trillions of possible frequency-sequencing codes, which enhances privacy and makes cloning
difficult.
The CDMA channel is nominally 1.23 MHz wide. CDMA networks use a scheme called soft handoff,
which minimizes signal breakup as a handset passes from one cell to another. The combination of digital
and spread-spectrum modes supports several times as many signals per unit bandwidth as analog modes.
CDMA is compatible with other cellular technologies; this allows for nationwide roaming. The original
CDMA standard, also known as CDMA One, offers a transmission speed of only up to 14.4 Kbps in its
single channel form and up to 115 Kbps in an eight-channel form. CDMA2000 and Wideband CDMA
deliver data many times faster.
The CDMA2000 family of standards includes 1xRTT, EV-DO Rev 0, EV-DO Rev A and EV-DO Rev B
(renamed Ultra Mobile Broadband -- UMB). People often confuse CDMA2000 (a family of standards
supported by Verizon and Sprint) with CDMA (the physical layer multiplexing scheme).

CDM is widely used in so-called second-generation (2G) and third-generation (3G) wireless
communications, in ultra-high-frequency (UHF) cellular telephone systems in the 800-MHz and
1.9-GHz bands, and combines analog-to-digital conversion with spread spectrum technology.
CDM may be defined as a form of multiplexing in which the transmitter encodes the signal
using a pseudo-random sequence. CDM combines the original digital signal with a
spreading code. This spreading has the effect of spreading the spectrum of the signal
greatly and reducing the power over any one part of the spectrum. The receiver knows
the code generated and used by the transmitter and can therefore decode the received
signal. Each different random sequence corresponds to a different communication
channel from multiple stations.
Code Division Multiplexing assigns each channel its own code to keep the channels separate
from each other. These unique underlying codes, when decoded, restore the original desired
signal while removing the effect of the other coded channels. Guard spaces are realized by
using orthogonal codes. In TDM and FDM, channels are isolated by separate time or frequency
slots; in CDM, the same frequency band is occupied in common by all users and the codes
alone keep the channels apart.
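
A minimal Python sketch of this idea is given below. It assumes two channels, one data bit each, and
4-chip Walsh codes (a common choice of orthogonal codes); it is purely illustrative and ignores noise,
synchronisation and longer spreading sequences.

# Minimal sketch of code division multiplexing with orthogonal spreading codes.
code_a = [1,  1,  1,  1]    # 4-chip Walsh codes: orthogonal, their dot product is 0
code_b = [1, -1,  1, -1]

bit_a = +1                  # data bit sent on channel A
bit_b = -1                  # data bit sent on channel B

# Spread each bit over its own code, then sum the two transmissions on the channel.
channel = [bit_a * ca + bit_b * cb for ca, cb in zip(code_a, code_b)]

# Despread: each receiver correlates the received chips with its own code.
recovered_a = sum(c * ca for c, ca in zip(channel, code_a)) / len(code_a)
recovered_b = sum(c * cb for c, cb in zip(channel, code_b)) / len(code_b)

print("on the channel:", channel)                      # -> [0, 2, 0, 2]
print("recovered bits:", recovered_a, recovered_b)     # -> 1.0 -1.0, each receiver sees only its own bit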
The figure explains how all channels Ci use the same frequency at the same time for transmission.

It may be understood that a single bit can be transmitted by modulating a series of
signal elements at different frequencies in a particular order. The number of different
frequencies used per bit is called the chip rate. If one or more bits are transmitted at the
same frequency, the technique is called frequency hopping; this happens only when the
chip rate is less than one, because the chip rate is the ratio of frequencies to bits. At the
receiving side, the receiver decodes a 0 or a 1 bit by checking these frequencies in the
correct order.
A disadvantage of CDM is that each user's transmitted bandwidth is much larger than the
digital data rate of the source. The result is an occupied bandwidth approximately equal
to the code rate. For this reason, CDM and spread spectrum are used interchangeably.
The transmitter and receiver require complex electronic circuitry.
The main advantage of CDM is protection from interference and tapping, because only
the sender and the receiver know the spreading code.

explain multiple access techniques

Multiple access means access to a given facility or a resource by multiple users. In the
context of satellite communication, the facility is the transponder and the multiple users
are various terrestrial terminals under the footprint of the satellite. The transponder
provides the communication channel(s) that receives the signals beamed at it via the
uplink and then retransmits the same back to Earth for intended users via the downlink.
Multiple users are geographically dispersed, and certain specific techniques, discussed
below, are used to allow them simultaneous access to the satellite's transponder.

Introduction to Multiple Access Techniques


Commonly used multiple access techniques include the following:

1. Frequency division multiple access (FDMA)



2. Time division multiple access (TDMA)

3. Code division multiple access (CDMA)

4. Space domain multiple access (SDMA)

In the case of frequency division multiple access (FDMA), different Earth stations are
able to access the total available bandwidth in the satellite transponder(s) by virtue of
their different carrier frequencies, thus avoiding interference amongst multiple signals.
The term should not be confused with frequency division multiplexing (FDM), which is
the process of grouping multiple base band signals into a single signal so that it could be
transmitted over a single communication channel ...
