
Sync 101

The Need for Synchronization

Overview

This course, the first one of the Sync U program, introduces the student to several key concepts in network
synchronization. Starting from a review of timing aspects of digital transmission, the rationale for
synchronizing the entire network is discussed; then some important influences that affect the behavior of
different systems are introduced.

Objectives
The objectives of this course, Sync 101, are to give the student an appreciation of why the modern digital
telecom network is operated as a synchronized network, and an understanding of what is meant (and what
is not meant) by the term “synchronous network”. This is achieved by reviewing some basic mechanisms of
digital transmission and routing/switching and examining their behavior in asynchronous and synchronous
environments. The need for network synchronization in certain ATM networks is also discussed.
Chapter 1 - Fundamentals of Digital Transmission
In the simplest of digital transmission systems, a digital signal is clocked out from the transmitting
equipment over a suitable channel (twisted pair wire, optical fiber, etc.) to the distant far-end receiving
equipment where clock recovery and regeneration of the transmitted signal takes place, as shown in Figure
101-b-1.

Figure 101-b-1, Point-to-Point Digital Transmission System

(Figure: a near-end digital transmitter, with crystal oscillator (XO) and A-D convertor, sends the signal over the transmission channel to a far-end digital receiver comprising clock recovery, a regenerator, and a D-A convertor.)

Regeneration is necessary because a real-world received signal is subjected to attenuation; dispersion, caused by differential propagation delay of the wide-band signal; and added noise. Clock recovery is essential to the regeneration process because of the impracticality of providing a separate clock signal coherent with the data signal over a long distance. Conversely, a digital bit-stream intended for carrying information over any distance is by design also a carrier of the transmitter’s actual clock rate.

This self-contained, self-timing transmission link could be considered to be a synchronous system, since
the receiver is timed directly from the transmitter. In this simple point-to-point system there is no need for
the transmitter oscillator to be traceable to (i.e., coherent with) any other clock, so the system can also be
considered to be asynchronous to any external clock in a larger network.

The generation process of a DS1 signal in the T1 system, introduced by Bell Labs in the early sixties for
trunking 24 voice channels between channel banks, is shown in simplified form in Figure 101-b-2.
Figure 101-b-2, Digitization of Voice signals in a Channel Bank

(Figure: a timing generator in which a crystal oscillator (XO) drives DPLLs to produce coherent 8kHz, 64kHz, and 1.544MHz timing signals; these clock a frame pattern generator and the per-channel ADC/encoders for CH 1 through CH 24, whose bytes are combined by a byte interleaver and a bipolar line coder into the 1.544 Mbits/s DS1 line signal.)

Each voice signal consists of a band-limited analog signal which is sampled at 8000 samples per second,
then each sample is non-linearly encoded into a byte. The resulting 64kbit/s DS0 streams are then
interleaved one byte at a time. The 24 channels are delineated in the bit-stream by a framing-bit pattern appended at 8000 framing bits per second, giving an aggregate DS1 bit-rate of 1.544 Mbits/s.
Outside North America a similar system based on 30 DS0 channels and two framing bytes with an
aggregate rate of 2.048 Mbits/s was adopted. From the timing point of view, the basic principles discussed
here apply to both systems.
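
As a quick check of these rates, the frame arithmetic can be written out directly (a minimal Python sketch of the figures quoted above):

    # Frame arithmetic for the DS1 (T1) and 2.048 Mbit/s (E1) systems.
    FRAMES_PER_SECOND = 8000            # one frame every 125 microseconds

    # DS1: 24 channels of 8 bits each, plus 1 framing bit per frame
    ds1_bits_per_frame = 24 * 8 + 1     # 193 bits
    print(ds1_bits_per_frame * FRAMES_PER_SECOND)   # 1544000 bits/s (1.544 Mbits/s)

    # E1: 30 traffic channels plus two framing/signalling bytes = 32 bytes per frame
    e1_bits_per_frame = 32 * 8          # 256 bits
    print(e1_bits_per_frame * FRAMES_PER_SECOND)    # 2048000 bits/s (2.048 Mbits/s)

    # Each DS0: 8 bits per frame at 8000 frames/s
    print(8 * FRAMES_PER_SECOND)        # 64000 bits/s (64 kbit/s)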

It is clear that for optimum performance the 24 DS0 signals and the frame-bit pattern, as well as the resulting DS1 signal, must be clocked by timing signals that are effectively divided down from a common
source. Thus the set of DS0, DS1, and framing timing signals are coherent, and can be called synchronous
to the crystal oscillator (XO) and to each other.

The principles illustrated in this early example of a multi-channel transmission system are inherent in all
time-division multiplex systems (TDM), regardless of the transmission rate or the medium. SONET and
SDH systems not only incorporate these principles, but they also use the same 8000 frames per second
format for compatibility with the earlier 64 kbits/s DS0-based systems.

The 24 voice channel example is a full-duplex system, in that a mirror-image channel is also provided to
transmit a signal from the far-end equipment to the local equipment. The far-end transmitter uses its
internal oscillator to clock out its signal for regeneration in the near-end terminal, independently of the
near-end oscillator, but at the same nominal frequency. The actual frequency of each equipment oscillator
may have a value anywhere within the tolerance of the signal rate. Thus the full-duplex T1 system is an
asynchronous system with self-timing of each DS1 signal.

Chapter 2 - Receiver Frame Alignment


At the receiver of a digital signal such as the DS1 signal produced by a channel bank, regeneration
produces a replica of the transmitted bit-stream by using the recovered clock to re-time the equalized
received pulses. Successful clock recovery is assured by encoding of the bit-stream at the transmitter to
guarantee sufficient energy in the bit-stream at half the clock rate. This is done by converting the binary bit-
stream into a bipolar stream (consisting of three logic levels: positive, zero, and negative pulses); then
applying Alternate Mark Inversion (AMI) and substitution of ones in place of long strings of zeroes in the
raw bit-stream. These tactics ensure that for any arbitrary binary signal there is a high density of zero
crossings in the transmitted signal for reliable clock recovery.
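
The sketch below, in Python, illustrates plain AMI coding only; the zero-substitution step (e.g., B8ZS), which replaces long strings of zeroes with a recognizable code, is omitted for brevity:

    def ami_encode(bits):
        """Alternate Mark Inversion: a zero is sent as no pulse, and each
        successive one ('mark') alternates between a positive and a negative
        pulse, removing the DC component from the line signal."""
        polarity = +1
        line = []
        for b in bits:
            if b == 0:
                line.append(0)
            else:
                line.append(polarity)
                polarity = -polarity
        return line

    print(ami_encode([1, 0, 1, 1, 0, 0, 1]))   # [1, 0, -1, 1, 0, 0, -1]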

To extract each of the 24 channels correctly it is necessary to perform frame alignment before byte-by-byte
sampling and per-channel processing. This is achieved by framer logic that searches the received bit-stream
continuously for alignment of the bits in each bit position of the (193-bit) frame with the predetermined
frame pattern injected at the transmitting channel bank.
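
A simplified Python sketch of such a framer search is shown below; the alternating framing-bit pattern used here is purely illustrative (the real superframe and extended-superframe patterns differ in detail):

    def find_frame_offset(bits, frame_len=193, pattern=(1, 0, 1, 0, 1, 0)):
        """Try every candidate bit position; alignment is declared when the
        bits sampled once per frame at that position match the expected
        framing-bit pattern. Assumes 'bits' holds at least len(pattern) frames."""
        for offset in range(frame_len):
            sampled = tuple(bits[offset + k * frame_len] for k in range(len(pattern)))
            if sampled == pattern:
                return offset          # frame alignment found at this position
        return None                    # no alignment yet; keep searching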

This frame alignment process is frequently referred to as “frame synchronization”, but it is required for any
TDM system receiver, whether in a synchronous or asynchronous environment.

Chapter 3 - Voice and Data Services Compared


When considering the needs of a digital transmission system for voice signals, as in the classic public switched telephone network (PSTN), the key performance requirement for customer acceptance is accurate reproduction of the calling party’s voice at the called party’s earpiece. The human brain is very forgiving of slight discrepancies in the voice signal, except for delay, with the result that the digital voice system does not need to be error-free so much as it needs to have minimal delay.

In practice the threshold for unacceptable error-rate in a 64 kbit/s voice channel is conservatively set at 1 error in 10³ bits. For end-to-end propagation delay, a few milliseconds for each direction is acceptable, but
more can cause dissatisfied users.

Data traffic, on the other hand, has very different needs. Any errored bit in a block of data can be a
problem. Even with error detection and re-transmission techniques, the system’s throughput can be degraded to unacceptable levels by an error-rate several orders of magnitude better than the voice-channel acceptability threshold. Conversely, delay of the order of hundreds of milliseconds is of very little consequence for data.
Thus any real-time service such as voice must have minimal propagation delay through the system, but any
data service is much more sensitive to bit errors.
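
A back-of-the-envelope calculation makes the point; the block size and error rates below are illustrative assumptions only:

    def block_error_probability(ber, block_bits):
        """Probability that a block contains at least one errored bit,
        assuming independent bit errors at the given bit-error rate."""
        return 1 - (1 - ber) ** block_bits

    # At the voice threshold of 1 error in 1e3 bits, virtually every
    # 10,000-bit data block is errored and must be re-transmitted:
    print(block_error_probability(1e-3, 10000))   # ~0.99995
    # Even three decades better, about 1% of blocks still need re-transmission:
    print(block_error_probability(1e-6, 10000))   # ~0.00995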

Chapter 4 - Convergence of Data and Voice Networks into a Single Synchronous Network

In the nineteen seventies it became clear to PSTN operators that although voice traffic was growing in
volume, data traffic was growing at a much higher rate. Typically, separate networks were designed and
operated for voice and data, but this was questioned on economic grounds as both networks grew. Data
growth was driven by the expansion of private networks which relied on PSTN leased lines and by “special services”, which were DS0-based virtual private networks operated on behalf of a customer by a PSTN operator.
In the voice arena, work was going ahead on defining “common channel signaling” systems (CCS). These
were intended to add intelligence to the digital switching and support functions of the voice network to
enable more efficient DS0-level routing and value-added processing.

With these drivers, a single digital network capable of handling any service in a common transmission and
switching/routing fabric was inevitable.

The fabric would have to be capable of transporting, switching, routing, and processing DS0 bytes in a
transparent manner, regardless of the DS0’s content or service to the end-user. These functions required
byte-level time-slot interchange, which in turn required visibility of each DS0 byte at each node.

Given the different constraints of high bit-error performance for data and low propagation delay for real-
time voice, and time-slot interchange functionality for flexible switching/routing of generic DS0 bit-
streams, it was apparent that a synchronous fabric, in which each DS0 would be timed from a common
clocking source, was the only viable option. The timing aspects of a generic time-slot interchange system
are illustrated in Figure 101-e-1.

Figure 101-e-1, Generic Time-slot Interchange System

(Figure: a PRS/PRC-traceable clock (a TSG/SSU, for example) drives the channel-selection and timing logic, which uses per-channel frame-location information and issues byte read clocks; DS1 receivers and framers 1 through n deliver serial data to an interleaver, and output-signal timing and framer logic assembles the outgoing signals DS1-1 through DS1-X.)
It was recognized that a sophisticated system of master clock signal distribution and methods for elegantly
handling distribution failures would be necessary costs of the synchronous network.

It was also recognized that each interface would require a limited buffering capacity to provide bit-level
integrity during short-term variations in carrier transmission rates and to enable unconstrained time-slot
interchange of an arbitrary DS0 in a received carrier signal to any channel of a transmitted carrier. This
buffering involved an inherent delay of less than a millisecond, so it was acceptable for voice.

Thus typical public networks evolved from a set of point-to-point asynchronous transmission systems
between analog switching centers to a synchronized matrix of DS0-level transport paths and
switching/routing nodes.

Chapter 5 - Buffers, Slips, and Pointers


In summary, two mechanisms are used to enable a nominally synchronous system to accommodate frequency differences caused by real-world, non-ideal timing behavior: slip buffers and
pointers. These devices and their importance to acceptable performance of a synchronous network are
briefly described here.

In the original T1 system and similar inter-office trunking systems, the lossy and dispersive nature of the
twisted pair channel dictated that frequent regeneration of the bipolar signal take place between terminals.
This was achieved by repeaters installed along the line. At each repeater, clock recovery was an essential
function, just as described for the terminal receiver earlier in this course. Clock recovery was achieved by
an analog high-Q tank device which responded to the peak power component of the signal’s power spectral density after full-wave rectification of the equalized signal. Because of inefficiencies in this process, arising from the finite-width pass-band of the tank circuit and its sensitivity to ones-density in the signal, the regenerated
signal suffered from added jitter. This jitter accumulated non-linearly along the repeatered line so that it
had to be absorbed at the receiving terminal. This was achieved by a buffer which could accept the received
equalized pulses clocked in by the jittered recovered clock, ready for passing to the rest of the terminal’s
circuits by the buffer’s read clock. The ability of this buffer function to absorb jitter caused it to sometimes
be referred to as an “elastic store”.

Essentially, the elastic store enabled the receiver to track the average clock rate while accepting limited
amounts of short-term deviations in the rate. Thus it acted as a low-pass filter to phase variations on the
recovered clock, but could accept several Unit Intervals (UI, the inverse of the signal’s nominal clock rate)
of higher-speed phase excursions from the average.

The T1 repeatered line developed by Bell Labs was engineered to accommodate jitter as phase modulation
on the received signal in the frequency range of 10Hz to 40kHz. Modulation of the signal at frequencies below this range was tracked and passed through to the receiver framing circuit and per-channel digital-to-analog converters. The actual frequency was thus preserved as long as it stayed within the tolerance of the 1.544 MHz signal rate, a range originally of +/- 130 ppm. The 10Hz break-point in the recovered clock’s phase-noise filter became the demarcation between jitter and “wander”. Hence, by definition, wander is noise in the demodulated phase band of 0–10 Hz.

When the inter-office DS1 signals in the network were made synchronous, the read clock for terminal
receiver buffers was made the local common clock, rather than a band-limited version of the received
signal recovered clock. Also, the buffer capacity was increased to absorb much larger phase variations in
order to provide a “controlled slip” mechanism. As shown in Figure 101-f-1, this mechanism required a
buffer capacity of a frame of bits plus some hysteresis, so that an accumulation of difference in frequency
between the recovered (write) clock and the local (read) clock large enough to cause a buffer overflow or
underflow could be accommodated by abandoning or re-reading, respectively, a complete frame of data.
Figure 101-f-1, Controlled Slip Mechanism

a) End-to-end DS1 Path: (Figure: a data source and encoder feed a DS1 generator timed by a PRS/PRC-traceable clock; at the far end a DS1 receiver, decoder, and data sink are timed by their own PRS/PRC-traceable clock.)

b) DS1 Receiver with Slip Buffer: (Figure: the DS1 line input passes through a regenerator and framer; DS1 clock recovery provides the 1.544MHz write clock that loads the received bits into a 193-position slip buffer, the “elastic store”, whose occupancy is monitored by buffer-fill logic; the network element’s 1.544MHz read clock reads out binary data at 1.544 Mbits/s, buffered with slip control.)

The buffer control logic kept track of the frame alignment within the buffer so that only a single byte of
each constituent DS0 was affected, but not the frame-bit pattern. Hence the term “controlled” slip.
The normal operation of a slip buffer, i.e., under no clock fault conditions, has the buffer absorbing both
jitter and wander (within the buffer capacity) since the average frequencies of the write and read clocks are
the same (since they are both traceable to a common master clock source). Note, however, that wander at 0 Hz, or DC, is the special case of a constant frequency offset, as follows.

If a temporary fault occurs (e.g., traceability of the read or local clock to the master clock is lost) a
constant frequency difference in write and read clocks exists, and the buffer will continuously fill or empty
and a series of controlled slips will result until the problem is corrected.
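
A minimal Python sketch of this behaviour: with a constant fractional frequency offset between the write and read clocks, the buffer fill drifts until a complete 193-bit frame is abandoned or re-read (the buffer size and offset values are illustrative):

    FRAME_BITS = 193                    # one DS1 frame
    CAPACITY = 2 * FRAME_BITS           # a frame of bits plus hysteresis

    def controlled_slips(fractional_offset, seconds, bit_rate=1.544e6):
        """Count controlled slips caused by a constant frequency offset
        between the recovered (write) clock and the local (read) clock."""
        fill = CAPACITY / 2             # start half-full
        slips = 0
        for _ in range(int(seconds)):
            # each second the write side delivers this many surplus bits
            fill += fractional_offset * bit_rate
            if fill >= CAPACITY:        # overflow: abandon one whole frame
                fill -= FRAME_BITS
                slips += 1
            elif fill <= 0:             # underflow: re-read one whole frame
                fill += FRAME_BITS
                slips += 1
        return slips

    print(controlled_slips(1e-8, 86400))   # 6 slips in a day, about one every 3.5 hours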

It should be noted that a DS1/E1-based synchronous system only suffers from controlled slips under faulty
timing conditions; i.e., there are no other degradations which are attributable to poor sync.

The jitter buffer described earlier had no control over which bits would be affected in overflow and
underflow conditions, with the result that any such action caused loss of frame alignment and a subsequent
loss of signal to the downstream end-user’s equipment. The re-alignment process could take tens of milliseconds. Thus this type of slip is referred to as an “uncontrolled slip”.

The controlled slip mechanism for elegantly absorbing frequency differences between DS1/E1 signals and
the master clock is not used in SONET and SDH. Instead, a mechanism based on allocating more or fewer
bytes in a “synchronous payload envelope” (SPE) is used, as shown in Figure 101-f-2.

Figure 101-f-2, Synchronous Payload and Pointer Mechanism

(Figure: an STS-1 frame of 9 rows by 90 columns of bytes (9 x 90 bytes), one frame every 125µs; the first three columns carry the Transport Overhead, including the Pointer, and the remaining 87 columns carry the payload bytes for CH 1, CH 2, CH 3, and so on. 9 x 90 x 8 bits / 125µs = 51.84 Mbits/s = STS-1 (electrical) = OC-1 (optical).)

In simple terms, this technique maps traffic bytes (as N x DS0) from a four-byte buffer into an SPE which may start at any position in a grid of bytes in the SONET/SDH format. If the payload data arrives at the
mapper faster or slower than the carrier rate, then an extra byte, or one fewer byte is filled. The SPE length
is thus modified, and the “pointer”, a flag in the SONET/SDH overhead fields which indicates the start of
each SPE frame, is updated accordingly. At the receiving terminal the pointer enables the correct bytes to
be de-mapped to the end-user without errors. The clock which is used to read out the payload bytes from a
de-synchronizer buffer to the external user must accommodate the change in rate by injecting a phase ramp
onto the signal, and this can cause increased jitter and/or wander, depending on the payload rate.
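
A conceptual Python sketch of the mapper’s pointer decision, reduced to a simple fill-threshold rule on the small mapping buffer mentioned above; the real SONET/SDH pointer-processing state machine adds further rules (for example, a minimum spacing between adjustments):

    SPE_BYTES = 9 * 87                  # 783 payload byte positions in an STS-1 SPE

    def pointer_adjust(pointer, mapper_fill_bytes, low=1, high=3):
        """Decide the stuff action for one frame. If payload arrives faster
        than the carrier, one extra byte is carried (negative stuff) and the
        pointer decrements; if slower, one byte position is left unfilled
        (positive stuff) and the pointer increments."""
        if mapper_fill_bytes > high:
            return (pointer - 1) % SPE_BYTES, "negative stuff: carry one extra byte"
        if mapper_fill_bytes < low:
            return (pointer + 1) % SPE_BYTES, "positive stuff: carry one byte fewer"
        return pointer, "no adjustment"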

Chapter 6 - Synchronization Aspects of Asynchronous Transfer Mode (ATM)

Asynchronous Transfer Mode has gained acceptance as a flexible “bandwidth-on-demand” methodology
for inter-office transport of different services which is not constrained to a rigid per-channel allocation of
capacity as in TDM systems such as those based on SONET/SDH formats. ATM specifications do not
include a unique physical layer for transporting the cell-based payloads; in fact most ATM systems
operated by PSTN operators use SONET/SDH with cell-mapping formats as the underlying physical layer,
and the ATM protocols are concerned with the higher layers, such as the transport and application layers. This is shown in a comparison with the ISO seven-layer model in Figure 101-g-1.

Figure 101-g-1, ATM Layer Model Compared with ISO 7-Layer Model

Other physical layer systems are also used for ATM systems, such as unchannelized DS3 and E3 bit-
streams, but they are more common in large private networks.

Clearly, an ATM network based on a SONET/SDH physical layer will by definition be synchronized, while
supporting asynchronous cell transport. In this case, the engineering rules for SONET/SDH timing must
still be met.

There is another scenario that may impose a need for synchronization on an ATM network, regardless of the physical layer chosen: the need to support Circuit Emulation Service or Constant Bit-Rate service
(CES and CBR). In an interesting example of backward compatibility being at once a burden and a driver
for new systems, CES in an ATM system offers end-users traditional leased lines such as DS1 and E1 in
the ATM environment. This is achieved by chopping up the raw DS1/E1 bit-stream into cells at the
transmitting ATM terminal and re-assembling them at the ATM termination for delivery. ATM standards
include a flag for CES cells so that they are not delayed excessively, and three methods for ensuring that
the frequency of the DS1/E1 is conserved across the ATM system.

One method, referred to as adaptive timing (also known as adaptive clocking), is purely asynchronous,
relying on cell arrival rate at the termination to adjust a local clock to keep a buffer half-full while reading
out the bit-stream. Thus this method, if applied in a system with non-SONET/SDH physical layer, can
operate completely independently of network synchronization.
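
A minimal Python sketch of one iteration of such an adaptive-clocking loop; the gain and buffer parameters are illustrative assumptions, not values from any standard:

    def adaptive_clock_step(read_freq, buffer_fill, buffer_size,
                            nominal_freq=1.544e6, gain=1e-3):
        """Steer the locally generated read clock so that the cell
        re-assembly buffer stays near half-full: a filling buffer means
        cells are arriving faster than they are read out, so speed up."""
        error = (buffer_fill - buffer_size / 2) / buffer_size   # -0.5 .. +0.5
        return read_freq + gain * nominal_freq * error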

The other two methods require network synchronization at both ends. The first, Synchronous Residual Time-Stamp (SRTS), repeatedly calculates the frequency offset between the traffic DS1/E1 and a defined
reference frequency that is locked to the synchronous network master clock. This difference is represented
by a time-stamp which is transmitted in cell overhead along with the data. At the termination the time-stamp and the same defined network-traceable reference frequency are used to reproduce the original signal
clock rate.
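
The sketch below gives the flavour of the SRTS calculation in Python; in the standardized procedure the counting period is 3008 service-clock bits and the residual is a 4-bit value carried in cell overhead:

    def residual_time_stamp(ref_cycles_counted, rts_bits=4):
        """Transmitter side of SRTS, much simplified: count the cycles of a
        network-traceable reference clock during one fixed period of the
        service (DS1/E1) clock and keep only the low-order bits. The receiver,
        which sees the same network reference, combines the residual with the
        expected nominal count to reconstruct the service clock frequency."""
        return ref_cycles_counted & ((1 << rts_bits) - 1)

    # Example with an arbitrary count: 4733 reference cycles in the period
    print(residual_time_stamp(4733))    # 13 (i.e., 4733 mod 16)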

The second method, known as synchronous CES, applies only to DS1/E1 signals that are network
synchronous. In this method, the DS1/E1 signal is directly mapped into cells by the local appearance of the
network master clock and then transported across the ATM network. At the ATM termination the data from
the cells are written into a buffer and then read out by the termination’s version of the network clock.

To summarize, the oft-asked question “why does Asynchronous Transfer Mode require synchronization?”
is answered by asking further questions:

“does the ATM system make use of a SONET or SDH physical layer?” and “does the ATM system use
either SRTS or synchronous CES timing to support CES/CBR?”.

A “yes” to either of these questions means that full network synchronization engineering rules apply to the
ATM system. If the first question elicits a “yes”, then the SONET/SDH requirements already dictate the
need for network synchronization.

Chapter 7 - Signal Degradation from Unsatisfactory Synchronization


Earlier sections of Sync 101 have given the rationale for constraining the digital network to operate as
a synchronous matrix. In a real-world scenario, it is necessary to plan for non-ideal synchronous operation
and predict the effect of any such occurrence. In addition to fault conditions which will occasionally occur
even in the best engineered synchronous networks, there is a small but finite probability that a slip or
pointer movement will be experienced when the network is operating within its designed margins. This is
because of the statistical nature of the phase noise mechanisms that affect buffer fill states, as discussed in
Sync 102.

The controlled slip and pointer mechanisms described above are the only manifestations of signal
degradation that can be directly attributed to lack of ideal timing conditions, but the effect of slips and
pointers on the services carried by an affected signal must be considered when gauging the extent of the
degradation.

A controlled slip operation on a DS1 or E1 will affect each constituent DS0 by either repeating or deleting a
single byte of payload data. The effect on the end-user will obviously depend on the value or importance of
a byte in the bit-stream at the terminal of the service, as discussed in the following paragraphs.

64 kbit/s PCM Voice:


In the case of a 64 kbit/s voice channel, a byte represents 125 microseconds of voice content. Given the
human brain’s ability to correctly interpret voice information in noisy conditions, and the statistical
probability of 40% that at any arbitrary instant a telephone channel is actually carrying a signal, this small
gain or loss of voice information from a controlled slip is negligible. The design limit for degraded but
useable voice signals was somewhat arbitrarily set in the original Bell Labs engineering rules at 255 slips
per day.
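
For perspective, each controlled slip represents 125 microseconds of accumulated phase difference, so the slip rate follows directly from the fractional frequency offset between the two clocks (a back-of-the-envelope Python calculation):

    SLIP_SECONDS = 125e-6               # one controlled slip = one 125 us frame
    SECONDS_PER_DAY = 86400

    def slips_per_day(fractional_offset):
        """Average controlled-slip rate for a constant frequency offset."""
        return fractional_offset * SECONDS_PER_DAY / SLIP_SECONDS

    # Two Stratum-1 clocks, each within 1e-11, can differ by at most 2e-11:
    print(slips_per_day(2e-11))         # ~0.014 slips/day, about one slip in 72 days
    # An offset of roughly 3.7e-7 would reach the 255-slips-per-day figure above:
    print(slips_per_day(3.7e-7))        # ~256 slips/day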

32 kbit/s and Lower-rate Voice:


In the case of lower-rate voice service, the importance of a single byte is more pronounced, not only because the byte represents a longer sample of the voice signal, but also because such services typically rely on differential coding schemes, so that an affected byte also affects one or more subsequent bytes. For example, in a
32 kbit/s ADPCM signal, each byte represents 250 microseconds of the voice signal, but the decoding
scheme at the end-user receiver reconstitutes the analog signal from previous bytes as well as the current
one. Thus in this case a controlled slip affects up to 500 microseconds of the voice signal.

Voice-band Data and Fax Modems:


Other services using the voice channel, such as voice-band modems and Fax machines, suffer from bit-
errors and reproduction distortion, respectively, but the extent of the degradation is difficult to quantify
without a detailed knowledge of the exact encoding and decoding process utilized, and any error detection
and correction scheme that may be invoked. It can be stated, however, that an isolated controlled slip is
unlikely to have a serious effect on most such services, but a series of slips may easily lead to
unsatisfactory throughput or unacceptable Fax reception.
A study conducted to determine the effects of controlled slips on Group 3 fax transmission found that a
single slip caused up to eight horizontal scan lines to be missing. This corresponds to a missing 0.08 inches
of vertical space. In a standard typed page, a slip may be seen as the top or bottom half of a typed line
missing. If slips continued to occur the affected pages would need to be transmitted manually.

Data Services:
A controlled slip changes the block length of a data stream, with the result that the house-keeping functions
are lost, and a re-frame action is required, followed by re-transmission of the blocks that were not received
correctly. Thus even a few slips may noticeably degrade the performance of the data channel. The extent of
the degradation depends on the size of the data blocks and on the protocol in use.

Encrypted Data:
In encrypted data service, a continuous calculation of the bit statistics is used to decrypt the transmitted message. A single byte of data out of place in the bit-stream will upset this mechanism and potentially
destroy the confidence level of the encryption process. This triggers a demand for re-transmitting the
unique key word that validates the link’s security, thus creating a major nuisance for the user. For many
secure applications, more than one slip per day is considered unacceptable.

Digital Video:
For digital video transmission (video teleconferencing, for instance) tests indicate that a slip usually causes
segments of the picture to be distorted or to freeze for periods of up to 6 seconds. The seriousness and
length of the distortion depends on the encoding and compression techniques used. As noted previously for
voice traffic the impairment is most serious for low bit-rate encoding.

The pointer action in a SONET/SDH system does not directly affect the payload bits, since the result is
limited to a phase adjustment to the de-mapping buffer output bit-stream. The total phase adjustment is one
to three bytes at the SPE rate, e.g., 1.74 MHz for a DS1 mapped into a VT1.5, or 51.84 MHz for a DS3 mapped into an STS-1. The SPE level determines the duration of the phase ramp and therefore the frequency
content of the modulation, with the result that a DS1 pointer action causes added wander on the output
DS1, and jitter on a DS3 output.

The amplitude of the jitter component is constrained by SONET/SDH requirements to levels consistent
with network requirements, but the wander added to a DS1 or E1 is much more problematic. This is
because the amplitude of a byte (or 8 UI) at the VT SPE is approximately 5 microseconds; well above the 1
microsecond network standard for phase transients.
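
The figures quoted above follow from the unit interval of the affected signal; a quick check in Python, using the DS1 rate as a representative payload rate:

    DS1_RATE = 1.544e6                  # bits per second
    UI_SECONDS = 1 / DS1_RATE           # one Unit Interval

    # One byte (8 UI) of phase step at the DS1 rate:
    print(8 * UI_SECONDS * 1e6)         # ~5.18 microseconds, versus the
                                        # ~1 microsecond network standard
                                        # for phase transients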
