
REII312

Network Fundamentals
Study Unit 1: Chapter 1
Data Communications
By: Mitch Lovemore

5 Components of Data Communication

Data Representation

Data Flow

Networks

Network Criteria

Physical Structures

Physical Topology
Exam Question

Mesh Topology

Star Topology


Bus Topology

Ring Topology

Hybrid Topology

Network Types
Local Area Network (LAN)

Wide Area Network (WAN)

Switching

The Internet

Accessing The Internet

Standards & Administration


Internet Standards

Maturity Levels

Requirement Levels

Internet Administration
Study Unit 1: Chapter 2

Protocol Layering

Scenarios

Scenario 1

Scenario 2

Principles of Protocol Layering

Logical Connections

TCP/IP Protocol Suite

Layered Architecture

Layers in the TCP/IP Protocol Suite

Description of Each Layer


Physical Layer

Data-Link Layer

Network Layer

Transport Layer

Application Layer

Encapsulation & Decapsulation

Encapsulation @ Source Host

Decapsulation & Encapsulation @ The Router



Decapsulation @ The Destination Host

Addressing

Multiplexing & Demultiplexing

The OSI Model

TCP/IP vs OSI Models


Study Unit 2: Chapter 3
The Physical Layer: Data & Signals

Analog & Digital Data

Analog & Digital Signals

Periodic & Non-Periodic

Periodic Analog Signals

Sine Wave

Wavelength

Time & Frequency Domains

Composite Signals

Bandwidth

Digital Signals

Bit Rate

Bit Length

Digital Signal as a Composite Analog Signal

Transmission of Digital Signals

Baseband Transmission

Case 1: Low-Pass Channel with Wide Bandwidth

Remember: Baseband Transmission of a digital signal that preserves the shape of the digital signal is possible only if we have a low-pass

channel with an infinite or very wide bandwidth

Broadband Transmission ( Using Modulation)

Transmission Impairment

Attenuation

Decibel

Distortion

Noise

Signal-to-Noise Ratio

Data Rate Limits

Noiseless Channel: Nyquist Bit Rate

Noisy Channel: Shannon Capacity

Using Both Limits


Performance

Bandwidth

Bandwidth in Hertz

Bandwidth in Bits per Second

Relationship:

Throughput

Latency (Delay)

Propagation Time

Transmission Time

Bandwidth-Delay Product

Jitter
Study Unit 3: Chapter 4

Digital Transmission

Digital-to-Digital Conversion

Line Coding

Characteristics

Line Coding Schemes

Unipolar Schemes

Polar Schemes

Comparison in terms of other criteria:


Baseline wandering is an issue for both variations, but it is twice as severe in NRZ-L —> a long sequence of 1s or 0s causes the average signal power (the baseline) to drift, making it difficult for the receiver to distinguish the bit value.
Synchronisation problems also affect both schemes, but are more severe in NRZ-L —> a long sequence of 0s affects both schemes, whereas a long sequence of 1s only affects NRZ-L.
A sudden change of polarity in the system affects NRZ-L: if, for example, the medium is a twisted-pair cable, swapping the two wires causes all 0s to be interpreted as 1s and all 1s as 0s —> NRZ-I does not have this problem.
—> Both NRZ-L & NRZ-I have an average signal rate of N/2 baud.
Bandwidth:
In the graph, the normalised bandwidth for both variations is shown —> the vertical axis is power density (the power in each Hz of bandwidth) and the horizontal axis is frequency. —> Problem: the power density of the frequencies around zero is very high, which means there are DC components carrying high energy —> most of the energy is concentrated between 0 and N/2, so the energy distribution is not even over the two halves.
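To make the comparison concrete, here is a minimal Python sketch of the two encodings (the function names and the +1/-1 level values are my own choices, not from the notes):

```python
def nrz_l(bits, high=+1, low=-1):
    """NRZ-L: the level itself encodes the bit (here 1 -> +V, 0 -> -V)."""
    return [high if b == 1 else low for b in bits]

def nrz_i(bits, start=+1):
    """NRZ-I: a 1 inverts the current level, a 0 leaves it unchanged."""
    level, out = start, []
    for b in bits:
        if b == 1:
            level = -level     # transition at the start of the bit encodes a 1
        out.append(level)
    return out

data = [0, 1, 0, 0, 1, 1, 1, 0]
print(nrz_l(data))   # [-1, 1, -1, -1, 1, 1, 1, -1]
print(nrz_i(data))   # [1, -1, -1, -1, 1, -1, 1, 1]
```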

Return-to-Zero (RZ):
The main problem with NRZ occurs when the sender and receiver clocks are not synchronised —> one solution is the Return-to-Zero (RZ) scheme, which uses three values: positive, zero & negative.
The signal changes in the middle of the bit, not between bits.
Main disadvantages —> it requires two signal changes to encode a bit and thus occupies more bandwidth; the polarity-change problem still exists, but there is no DC component problem.
Complexity is also an issue, as RZ uses three voltage levels, which are more complex to create and for the receiver to distinguish.
Because of all these problems, the scheme is not used —> it has been replaced by the Manchester and Differential Manchester schemes.
Bi-Phase: Manchester and Differential Manchester Schemes:
Manchester is a combo of RZ (mid-bit transition) and NRZ-L.
Duration of the bit is split into two halves —> voltage remains at one level for one
half of the bit and then transitions to the other level for the second half of the bit.
The transition in the middle of the bit provides synchronisation.
Differential Manchester combines ideas of RZ and NRZ-I —> there is always a
transition @ the middle of the bit but the bit values are determined @ the beginning
of the bit.
Manchester overcomes several problems associated with NRZ-L and Differential Manchester overcomes problems associated with NRZ-I.
- No baseline wandering, no DC component.
- The only drawback is that the signal rate for Manchester and Differential Manchester is double that of NRZ, because there is always at least one and sometimes two transitions per bit (always one in the middle and possibly one at the bit boundary).
NOTE: Manchester and Differential Manchester are also called Bi-Phase Schemes
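A minimal sketch of the two biphase encoders, assuming the transition conventions described above (textbooks differ on which polarity represents a 1, so treat the mapping as an assumption):

```python
def manchester(bits):
    """Manchester: each bit occupies two half-intervals with a mandatory
    mid-bit transition.  Convention assumed here: 0 -> (+V, -V), 1 -> (-V, +V)."""
    return [half for b in bits for half in ((-1, +1) if b == 1 else (+1, -1))]

def diff_manchester(bits, start=+1):
    """Differential Manchester: always a transition in the middle of the bit;
    a 0 adds an extra transition at the start of the bit, a 1 does not."""
    level, out = start, []
    for b in bits:
        if b == 0:
            level = -level              # transition at the bit boundary encodes a 0
        out.extend([level, -level])     # mandatory mid-bit transition
        level = -level                  # level at the end of the bit
    return out

print(manchester([1, 0, 1]))        # [-1, 1, 1, -1, -1, 1]
print(diff_manchester([1, 0, 1]))   # [1, -1, 1, -1, -1, 1]
```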

BiPolar Schemes
In BiPolar encoding (a.k.a. Multilevel Binary ) there are three voltage levels (positive, negative and zero) —> voltage level for one data element is at
zero, while the voltage level for the other element alternates between positive and negative.
AMI & Pseudoternary:
- Alternate Mark Inversion (AMI) —> “mark” means 1, so AMI means alternate 1 inversion —> a
neutral zero voltage represents binary 0 and binary 1s are represented by alternating positive and
negative voltages. —> if the first 1 is a positive pulse, then the next 1 will be a negative pulse.
- Pseudoternary —> a variation of AMI where the 1 bit is encoded as a zero voltage and the 0 bit is encoded as
alternating positive and negative voltages. —> voltages alternate when 0 and voltage @ zero is a 1.
~ The bipolar scheme's signal rate is the same as NRZ, but there is no DC component; the bipolar energy concentration is around a frequency of N/2 —> see the diagram.
~ AMI used for long distance comms but has a synchronisation problem when a long string of 0s is present.
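A small sketch of AMI and pseudoternary, assuming (as in the note above) that the first 1 is sent as a positive pulse; the function names are mine:

```python
def ami(bits):
    """Bipolar AMI: a 0 is the zero level; successive 1s alternate between +V and -V."""
    last, out = -1, []          # start value chosen so the first 1 becomes +V
    for b in bits:
        if b == 1:
            last = -last
            out.append(last)
        else:
            out.append(0)
    return out

def pseudoternary(bits):
    """Pseudoternary: the roles of 0 and 1 are swapped relative to AMI."""
    return ami([1 - b for b in bits])

print(ami([0, 1, 1, 0, 1]))            # [0, 1, -1, 0, 1]
print(pseudoternary([0, 1, 1, 0, 1]))  # [1, 0, 0, -1, 0]
```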

Multilevel Schemes
• The general goal —> increase the # of bits per baud by encoding a pattern of m data elements into a pattern of n signal elements.
- We only have 2 types of data (0 & 1) —> thus a group of m data elements can produce 2^m data patterns.
- If we have L signal levels, then a group of n signal elements can produce L^n combinations of signal patterns.
— If 2^m = L^n, then each data pattern is encoded into one signal pattern.
— If 2^m < L^n, then the data patterns occupy only a subset of the signal patterns —> this subset can be designed to prevent baseline wandering, provide synchronisation & detect errors that occur during transmission.
— Data transmission is not possible if 2^m > L^n, as some of the data patterns can't be encoded.
- These coding types are known as mBnL —> L is sometimes substituted by: B (binary) if L = 2; T (ternary) if L = 3; Q (quaternary) if L = 4.
- NOTE: the first two characters (mB) define the data pattern & the second two (nL) define the signal pattern. A quick worked check of the condition follows below.
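A short helper (names are mine) that compares the pattern counts for any mBnL scheme, as a worked check of the 2^m versus L^n condition:

```python
def mbnl_check(m, n, L):
    """Compare the number of data patterns (2**m) with the number of available
    signal patterns (L**n) for an mBnL scheme."""
    data, signal = 2 ** m, L ** n
    if data == signal:
        verdict = "exact fit: every data pattern gets exactly one signal pattern"
    elif data < signal:
        verdict = f"{signal - data} redundant signal patterns (sync / error detection)"
    else:
        verdict = "not usable: some data patterns cannot be encoded"
    return data, signal, verdict

print(mbnl_check(2, 1, 4))   # 2B1Q -> (4, 4, 'exact fit ...')
print(mbnl_check(8, 6, 3))   # 8B6T -> (256, 729, '473 redundant ...')
```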

2B1Q:
• 2 Binary, 1 Quaternary —> takes data patterns of size 2 and encodes each one as a single signal element belonging to a four-level signal. —> m = 2, n = 1, L = 4
• The average signal rate is S = N/4 —> using 2B1Q we can send data twice as fast as with NRZ-L.
• 2B1Q uses 4 signal levels, so the receiver must be able to discern four different thresholds.
• The reduced bandwidth has a price: there are no redundant signal patterns, because 2^2 = 4^1 = 4.
• 2B1Q used in DSL (Digital Subscriber Line) technology to provide high-speed Internet connection by using telephone lines.
8B6T:
• Eight Binary, Six Ternary —> encodes a pattern of 8 bits as a pattern of 6 signal elements, where the signal has three levels (ternary).
• We can have 2^8 = 256 different data patterns and 3^6 = 729 different signal patterns —> thus there are 729 - 256 = 473 redundant signal patterns, which provide synchronisation and error detection.
• Part of the redundancy is also used to provide DC balance —> each signal pattern has a weight of either 0 or +1 DC values; thus, if the sender encounters two groups of weight +1 one after another, it sends the first one as is and completely inverts the second one (giving it weight -1) to keep the whole stream balanced. —> in the diagram, the first signal pattern has an overall weight of 0, the second has weight +1 and the final signal pattern also has weight +1, so the sender inverts it to a weight of -1, balancing the stream. —> the receiver can see that the pattern is inverted because its weight is -1, so it inverts the pattern again before decoding.
• The average signal rate of the scheme is theoretically S_avg = (1/2) x N x (6/8) = 3N/8; in practice the minimum bandwidth is very close to 6N/8.
4D-PAM5:
• Four-Dimensional Five-Level Pulse Amplitude Modulation (4D-PAM5) —> 4D means data is sent over 4 wires simultaneously using 5-level
voltage (e.g. -2,-1,0,1,2) but level 0 is used only for forward error detection. —> assuming a one-dimensional code,
an 8-bit code is translated to a signal element of four levels. —> worst signal rate for this imaginary one-dimensional version is N x (4/8) or N/2.
• The technique is designed for use over four channels (4 wires) —>then the signal rate can be reduced to N/8 —> all 8 bits can be fed into a
single wire simultaneously and sent using one signal element. Thus, four signal elements (forming one signal group) are sent simultaneously in a
four-dimensional setting.
• Gigabit LANs use this to send 1-Gbps data over four copper cables that can handle
125MBaud.
• This scheme has a lot of redundancy in the signal pattern, as the 2^8 = 256 data patterns are matched to 4^4 = 256 of the 5^4 = 625 possible signal patterns —> the extra signal patterns can be used for other things such as error detection.
Multitransition: MLT-3
• NRZ-I and Differential Manchester are classified as differential encoding but they use two transition rules (inversion and non-inversion) to
encode binary data but if we have multi-level signals, we can use differential encoding schemes with multiple transition rules.
• Multiline Transition, 3-level (MLT-3) scheme uses three levels ( +V, 0, -V) and three transition rules between the levels:
1.) If the next bit is 0, there is no transition.
2.) If the next bit is 1 and the current level is not 0, the next level is 0.
3.) If the next bit is 1 and the current level is 0, the next level is the opposite
of the last non-zero level.
• MLT-3 maps one bit to one signal element —> same signal rate as NRZ-I, but greater complexity (3 levels & transition rules).
• The shape of the signal helps reduce the needed bandwidth —> in the worst case (a long run of 1s) the pattern +V, 0, -V, 0 repeats every 4 bits, so the highest frequency in the signal is only one-fourth of the bit rate.
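A minimal sketch of the three MLT-3 transition rules; the starting level (0) and the assumed polarity of the "last non-zero level" before transmission begins are choices made for the sketch:

```python
def mlt3(bits, level=0, last_nonzero=-1):
    """MLT-3: a 0 keeps the current level; a 1 moves through the cycle
    ..., +V, 0, -V, 0, +V, ...  (three levels, three transition rules)."""
    out = []
    for b in bits:
        if b == 1:
            if level != 0:
                last_nonzero = level    # remember the level we are leaving
                level = 0               # rule 2
            else:
                level = -last_nonzero   # rule 3: opposite of the last non-zero level
                last_nonzero = level
        # rule 1: if the bit is 0 there is no transition
        out.append(level)
    return out

print(mlt3([1, 1, 1, 1, 0, 1]))   # [1, 0, -1, 0, 0, 1]
```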

Summary of line coding schemes:



Block Coding
• We need redundancy in terms of synchronisation and error detection —> Block coding gives us this
• Block coding changes a block of m bits into a block of n bits, where n>m —> referred to as mB/nB
encoding technique (the slash distinguishes block coding (mB/nB) from multilevel coding (mBnL)
• Block coding involves three steps —> division, substitution and combination.
- Division: a sequence of bits is divided into groups of m bits (e.g. in 4B/5B a bit sequence is
divided into 4-bit groups)
- Substitution: Substitute an m-bit group with an n-bit group. (E.g. in 4B/5B encoding, we swop
a 4-bit group with a 5-bit group)
- Combination: the n-bit groups are then combined to form a bit stream with more bits than the original stream.
4B/5B:
• Four-Binary/Five-Binary (4B/5B) designed to be used combined with NRZ-I
• NRZ-I has a good signal rate, but has a synchronisation problem —> the 4B/5B
scheme results in no more than 3 consecutive 0s , thus no more synchronisation
problem.
• Sender —> 4B/5B encoder —> NRZ-I encoder —> Link —> NRZ-I decoder —> 4B/5B decoder —> Receiver
• 16 of the 32 possible 5-bit groups are not used for data encoding —> some are used for control purposes and some are completely unused; if one of the unused groups arrives @ the receiver, there was an error in the transmission.
• The redundant bits increase the signal rate by about 20% & also don't solve the DC component problem of NRZ-I (if a DC component is unacceptable, we can't use this scheme) —> a short encoding sketch follows below.
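A sketch of the substitution step; only a few entries of the 4B/5B table are shown (taken from the commonly published table, so verify them against the textbook's table), together with a check that the concatenated code words never produce more than three consecutive 0s:

```python
# A few entries of the 4B/5B table (check them against the textbook's full
# table); the point being demonstrated is that the code words keep zero-runs short.
FOUR_B_FIVE_B = {
    "0000": "11110",
    "0001": "01001",
    "0010": "10100",
    "0011": "10101",
}

def encode_4b5b(bits):
    """Split the stream into 4-bit groups and substitute each with its 5-bit code."""
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(FOUR_B_FIVE_B[g] for g in groups)

def max_zero_run(stream):
    """Longest run of consecutive 0s in the encoded stream."""
    return max((len(run) for run in stream.split("1")), default=0)

encoded = encode_4b5b("0000" + "0001" + "0011")
print(encoded, max_zero_run(encoded))   # the zero run never exceeds 3
```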
8B/10B:
• Eight Binary/Ten Binary —> similar to 4B/5B but now 8-bit groups are substituted by 10-bit groups.
• Greater error detection capability than 4B/5B.
• Actually a combo of 5B/6B and 3B/4B encoding. —> diagram
• The 5 most significant bits of each 8-bit block are fed into a 5B/6B encoder; the 3 least significant bits are fed into a 3B/4B encoder, and both outputs are then fed through a disparity controller to prevent an excess of 0s over 1s or the other way around. —> if the bits in the current block create a disparity that adds to the previous block's disparity, then the bits are complemented (0s —> 1s & 1s —> 0s).
• The coding has 2^10 - 2^8 = 768 redundant groups that can be used for disparity checking and error detection.
• This technique is better than 4B/5B due to better built-in error-checking capability and better synchronisation.

Scrambling:
• Biphase schemes that are suitable for dedicated links in a LAN are not suitable for long distance comms due to their wide bandwidth requirement.
• Combo of block coding and NRZ line coding is also not suitable for long distance comms due to the DC Component.
• BiPolar AMI encoding = narrow bandwidth & no DC Component, but a long string of 0s messes with synchronisation.
—> Thus, we need a solution that substitutes long zero-level pulses with a combination of other levels to provide synchronisation.
• Scrambling —> done at the same time as encoding —> inserts required pulses based on
defined scrambling rules
• Examples of scrambling —> B8ZS and HDB3
B8ZS:
• Binary with 8-Zero Substitution —> eight consecutive zero-level voltages
are replaced by 000VB0VB, where the V denotes violation (a nonzero voltage
that breaks an AMI rule of encoding(opposite polarity from the previous)) and
the B denotes bipolar which means a nonzero level voltage in accordance with
the AMI rule.
• There are two cases: one where the previous level is positive and one where the previous level is negative.
• Scrambling in this case does not change the bit rate and the DC Balance is also maintained.
• NOTE: Violation and Bipolar are defined relative to the previous nonzero pulse. A substitution may change the polarity of a later 1 because, after the substitution, AMI carries on following its own rules.
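A sketch that combines AMI encoding with the B8ZS substitution in one pass; the initial "previous pulse" polarity is an assumption of the sketch:

```python
def b8zs(bits):
    """AMI encoding with B8ZS scrambling: every run of eight 0s becomes
    000VB0VB, where V has the same polarity as the previous pulse (the AMI
    violation) and B has the opposite polarity (it obeys the AMI rule)."""
    out, last, i = [], -1, 0        # 'last' = polarity of the previous pulse (assumed start value)
    while i < len(bits):
        if bits[i] == 1:
            last = -last
            out.append(last)
            i += 1
        elif bits[i:i + 8] == [0] * 8:
            p = last
            out.extend([0, 0, 0, p, -p, 0, -p, p])   # 000 V B 0 V B
            # the substitution ends on polarity p, so 'last' is unchanged
            i += 8
        else:
            out.append(0)
            i += 1
    return out

print(b8zs([1] + [0] * 8 + [1]))   # [1, 0, 0, 0, 1, -1, 0, -1, 1, -1]
```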
HDB3:
• High-Density Bipolar 3-zero (HDB3) —> four consecutive zero-level voltages are replaced with 000V or B00V. ( two different substitutions
to maintain an even number of nonzero pulses after each substitution.)
• Two rules for substitution:
1.) If the number of nonzero pulses after the last substitution is odd, the substitution pattern
will be 000V, which makes the total number of nonzero pulses even.
2.) If the number of nonzero pulses after the last substitution is even, the substitution pattern
will be B00V, which makes the total number of nonzero pulses even.
• The diagram shows an example —> before the 1st substitution, the number of nonzero pulses is even, thus B00V is used —> after this sub, the polarity of the next 1 bit is changed (because AMI keeps following its own rules after each sub) —> after this we need another sub, which is 000V, because there is only one nonzero pulse (odd) after the last sub —> the third sub is B00V because there are no nonzero pulses after the second sub (zero counts as even).
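A matching sketch for HDB3, using the two substitution rules above; the initial polarity and the convention that the pulse count resets to zero after each substitution are assumptions made for the sketch:

```python
def hdb3(bits):
    """AMI encoding with HDB3 scrambling: every run of four 0s becomes 000V if
    the number of nonzero pulses since the last substitution is odd, or B00V if
    it is even (B obeys the AMI rule, V violates it)."""
    out, last, count, i = [], -1, 0, 0   # last pulse polarity / 1s since last substitution
    while i < len(bits):
        if bits[i] == 1:
            last = -last
            out.append(last)
            count += 1
            i += 1
        elif bits[i:i + 4] == [0] * 4:
            if count % 2 == 1:               # odd  -> 000V (V repeats the last polarity)
                out.extend([0, 0, 0, last])
            else:                            # even -> B00V (B flips, V repeats B)
                last = -last
                out.extend([last, 0, 0, last])
            count = 0                        # the substitution resets the count
            i += 4
        else:
            out.append(0)
            i += 1
    return out

print(hdb3([1, 1] + [0] * 4 + [1] + [0] * 4))
# [1, -1, 1, 0, 0, 1, -1, 0, 0, 0, -1]
```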

Analog-to-Digital Conversion
• Cameras, microphones, etc. create analog signals —> we convert them to digital data using one of two techniques: Pulse Code Modulation (PCM) & Delta Modulation (DM)

• In this context, 'modulation' refers to the digitisation step: changing an analog signal into digital data


Pulse Code Modulation:
• Most common & uses three processes:
1.) Sampling the analog signal
2.) Quantising the sampled signal
3.) Encoding the quantised values as streams of bits

Sampling:
• The analog signal is sampled every T_s seconds, where T_s is the sampling interval or period —> the inverse of the sampling interval is the sampling rate or sampling frequency, denoted by f_s, where f_s = 1/T_s.
• Three methods of sampling: Ideal (pulses from the analog signal are sampled— ‘ideal’ and not easily implemented) , Natural (High-speed switch is
turned on for only the small time period when the sampling occurs — samples retain the shape of the analog signal) & Flat-top (Sample and
Hold method —> creates flat-top samples by using a circuit)
• Sampling process also called Pulse Amplitude Modulation (PAM)
—> Sampling Rate: the Nyquist theorem says that the sampling rate must be at least 2 times the highest frequency contained in the signal — Elaboration: we can only sample signals with finite bandwidth, and the sampling rate must be twice the highest frequency, not twice the bandwidth (a low-pass analog signal has a bandwidth and a maximum frequency that are equal; a bandpass signal has a bandwidth lower than its maximum frequency).
EXAMPLE 4.6: Let's sample a sine wave of frequency f at three sampling rates: f_s = 4f (oversampling, twice the Nyquist rate), f_s = 2f (the Nyquist rate) and f_s = f (undersampling, half the Nyquist rate).
• We then see that sampling @ the Nyquist rate gives us a good approximation of the
original sine wave, Oversampling gives us the same approximation but is redundant and
unnecessary & Undersampling gives us a signal that doesn’t look like the original sine wave.
Example 4.9:
• telephone companies assume a maximum frequency of 4000Hz and thus use a sampling
rate of 8000 samples per second.
Example 4.10:
• A complex low-pass signal has a bandwidth of 200 kHz. What is the minimum sampling rate for this signal?
Solution: the bandwidth of a low-pass signal runs from 0 to f_max, so the sampling rate is twice the maximum frequency —> sampling rate = 2 x 200 000 = 400 000 samples per second.
Example 4.11:
• A complex bandpass signal has a bandwidth of 200 kHz. What is the minimum sampling rate for the signal?
Solution: We cannot find the minimum sampling rate from the bandwidth alone, because we don't know where the band is located on the spectrum and therefore cannot determine the maximum frequency.

Quantisation:
• Sampling results in a series of pulses with amplitude values between the max and min amplitudes of the signal —> the set of amplitudes can be
infinite with non-integral values between the two limits. —> these values cannot be used in the encoding process.
• Steps in Quantisation:
1.) We assume that the original analog signal has instantaneous amplitudes between Vmin & Vmax.
2.) We divide the range into L zones, each of height Δ (delta), where Δ = (V_max - V_min) / L.

3.) We assign quantised values of 0 to (L-1) to the midpoint of each zone.


4.) We approximate the value of the sample amplitude to the quantised values
• Example: if we have a sampled signal with sample amplitudes between -20 and +20 V and we choose to have eight levels (L=8), then we have
(20-(-20))/8 = 5 , thus we have a delta of 5V
Quantisation Levels:
• The choice of how many levels (L) depends on the range of amplitudes of the
analog signal and how accurately we need to recover the signal.
• If the amplitude of the signal fluctuates between only two values then we only
need two levels, but for a signal like a voice recording with many amplitude
levels, we need more quantisation levels —> in audio digitising L is usually taken
as 256 and in video its usually thousands.
• Choosing a lower L value increases the quantisation error if there is a lot of
fluctuation in the signal.
Quantisation Error:
• Quantisation is an approximation process - the input values into the quantiser
are the real values and the output values are the approximated values.
• Output values are chosen to be the middle value of the zone —> if the input value is also @ the middle of the zone, then there is no error.
• For example, if the normalised amplitude is 3.24 and the normalised quantised value is 3.50, then the error is 3.50 - 3.24 = +0.26.
• The magnitude of the error is always less than Δ/2 —> the error always lies between -Δ/2 and +Δ/2.
• The quantisation error changes the SNR of the signal, which in turn reduces the upper limit on capacity (the Shannon capacity) —> the contribution of the quantisation error to the SNRdB of the signal is given by:
SNRdB = 6.02 n_b + 1.76 dB
where n_b is the number of bits per sample
Example 4.12: What is the SNR in the example for the figure above
Solution: We use the formula —> we have 8 levels and 3 bits per sample so SNR = 6.02(3) +1.76 = 19.82 dB
• Obviously one can manipulate the formula when given SNR and asked to find n
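A tiny helper for working with the quantisation-SNR formula (the bits_needed inverse is my own convenience function, not from the notes):

```python
import math

def quant_snr_db(bits_per_sample):
    """SNRdB contributed by quantisation: 6.02 * n_b + 1.76 dB."""
    return 6.02 * bits_per_sample + 1.76

def bits_needed(target_snr_db):
    """Invert the formula to find the minimum whole number of bits per sample."""
    return math.ceil((target_snr_db - 1.76) / 6.02)

print(quant_snr_db(3))    # 19.82 dB, as in Example 4.12 (L = 8 levels, 3 bits)
print(bits_needed(40.0))  # 7 bits are needed for a 40 dB target
```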
Uniform vs. Non-Uniform Quantisation:
• For many applications, the distribution of the instantaneous amplitudes in the analog signal is not uniform —> changes in amplitude occur more frequently at the lower amplitudes —> it is better to use non-uniform zones, where the height Δ is not fixed (smaller near the lower amplitudes and larger near the higher amplitudes, so the frequently occurring small amplitudes are quantised more finely).
• Non-uniform quantisation can also be done through a process called companding and expanding, where the signal is companded (instantaneous
voltage amplitude for large values is reduced) at the sender and then expanded (instantaneous voltage amplitude for large values is increased) at
the receiver. —> companding gives greater weight to strong signals and less weight to weak ones.
• Non-Uniform quantisation reduces the SNR of quantisation.

Encoding:
• Last step in PCM (Pulse Code Modulation) is encoding —> after each sample is quantised and the number of bits per sample is decided then each
sample can be changed into an n -bit code word.
• The number of bits for each sample is determined by the number of quantisation levels —> if the number of quantisation levels is L, then the number of bits is n_b = log2 L.
• The bit rate can be found from: Bit Rate = (sampling rate) x (# of bits per sample) = f_s x n_b
Example 4.14: We want to digitise the human voice - what is the bit rate assuming 8 bits per sample?
Solution: The human voice usually contains frequencies from 0 to 4000 Hz, thus we calculate
Sampling Rate = 4000 x 2 = 8000 samples/s
Bit Rate = 8000 x 8 = 64000 bps = 64 kbps
Original Signal Recovery:
• Requires the PCM Decoder
• Decoder first uses circuitry to convert the code words into a pulse that holds
the amplitude until the next pulse.
• After the staircase signal is complete, its passed through a low-pass filter to
smooth the signal into an analog signal. —> the filter has the same cutoff
frequency as the original signal at the sender.
• If the signal has been sampled at a rate equal or greater than the Nyquist sampling rate & if there are enough quantisation levels, then the original
signal will be recreated.
• NOTE: the maximum and minimum values can then be achieved by using amplification.
PCM Bandwidth:
• If we are given the bandwidth of a low-pass analog signal and we then digitise the signal - what is the new minimum bandwidth of the channel that can pass this new digitised signal? —> we know that the min bandwidth of a line-coded signal is B_min = c x N x (1/r); substituting N = f_s x n_b = 2 x B_analog x n_b into this formula gives:
B_min = c x n_b x B_analog x (1/r)
When 1/r = 1 (for an NRZ or bipolar signal) and c = 1/2 (the average situation), the minimum bandwidth is given by:
B_min = n_b x B_analog
• This means the minimum bandwidth of the digital signal is n_b times greater than the bandwidth of the analog signal.
Maximum Data Rate of a Channel:
• Max data rate for a channel is given by: Nmax = fs x nb = 2B(log2L) bps
Minimum Required Bandwidth:
• We can get the minimum required bandwidth from the previous formula if the data rate and # of signal levels are fixed. —> given by
Bmin = N / (2log2L) Hz
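A short worked-formula helper tying the PCM bit-rate, maximum-data-rate and minimum-bandwidth formulas together (the function names are mine):

```python
import math

def pcm_bit_rate(max_freq_hz, bits_per_sample):
    """Bit rate = sampling rate x bits per sample, with f_s = 2 x f_max (Nyquist)."""
    return 2 * max_freq_hz * bits_per_sample

def max_data_rate(bandwidth_hz, levels):
    """Nmax = 2 x B x log2(L) bps."""
    return 2 * bandwidth_hz * math.log2(levels)

def min_bandwidth(data_rate_bps, levels):
    """Bmin = N / (2 x log2(L)) Hz."""
    return data_rate_bps / (2 * math.log2(levels))

print(pcm_bit_rate(4000, 8))     # 64000 bps (the Example 4.14 result)
print(max_data_rate(4000, 4))    # 16000.0 bps
print(min_bandwidth(64000, 4))   # 16000.0 Hz
```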
Delta Modulation (DM):
• PCM is a very complex technique —> Delta Modulation (DM) is the simplest alternative.
• PCM finds the value of the signal amplitude for each sample, but DM finds the change from the previous sample —> there are no 'code words'; bits are sent one after another.

Modulator:
• Used at the sender side to create a stream of bits from an analog signal.
• This process records the small positive or negative changes, called delta (δ) —> if the delta is positive, the process records a 1; if the delta is negative, the process records a 0.
• The process needs a base to compare against so the modulator builds a second signal
that resembles a staircase, which is then used to compare against the input signal
• The modulator (@ each sampling interval) compares the value of the analog signal to the last
value of the staircase signal —> if the amplitude of the analog signal is larger then the next bit in the digital data is a 1, otherwise its a 0
• The output of the comparator also makes the staircase itself. —> if the next bit is 1 then the staircase maker moves the last point of the staircase
signal delta up and if the next bit is a 0 then it moves the last point delta down.
• NOTE: we need a delay unit to ‘pause’ the staircase function for the period between comparisons.
Demodulator:
• Takes the digital data and (using the staircase maker and the delay unit) creates an analog signal —> this analog signal does, however, need to pass
through a low-pass filter to smooth it out
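A minimal sketch of both sides of delta modulation, assuming a fixed delta and a staircase that starts at zero (both are choices made for the sketch):

```python
def dm_modulate(samples, delta=1.0, start=0.0):
    """Delta modulation: compare each sample with the staircase; output 1 and
    step up if the sample is larger, otherwise output 0 and step down."""
    staircase, bits = start, []
    for s in samples:
        if s > staircase:
            bits.append(1)
            staircase += delta
        else:
            bits.append(0)
            staircase -= delta
    return bits

def dm_demodulate(bits, delta=1.0, start=0.0):
    """Rebuild the staircase from the bits (a low-pass filter would then smooth it)."""
    staircase, out = start, []
    for b in bits:
        staircase += delta if b == 1 else -delta
        out.append(staircase)
    return out

bits = dm_modulate([0.4, 1.3, 2.6, 2.0, 1.1])
print(bits, dm_demodulate(bits))   # [1, 1, 1, 0, 0] [1.0, 2.0, 3.0, 2.0, 1.0]
```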
Adaptive DM:
• Better performance can be achieved if the value of delta is not fixed.
• In Adaptive Delta Modulation, the value of delta changes according to the amplitude of the analog signal.
Quantisation Error:
• DM is not perfect —> quantisation error is always present but for DM it is much less than that of PCM.
Transmission Modes:
• Basically do we send one bit at a time or group the bits and send one group at a time?
• Either serial (one at a time through the line) or Parallel (multiple bits sent together over
many lines simultaneously)
Parallel Transmission:
• Binary data is organised into groups of n bits each.
• By grouping the bits we can send n bits at a time over n wires —> then each
clock cycle sends a group of n bits from one device to another
• Advantages: speed (sending n bits per clock cycle)
• Disadvantages: cost —> to send n bits at a time you need n wires/lines, thus expensive
Serial Transmission:
• One bit follows another along a single line
• Advantages: only need one line to send n bits, thus much cheaper.
• Disadvantages: since comms within devices is parallel, we need conversion devices @ the interface between the sender and the line (parallel-to-
serial) and between the line and the receiver (serial-to-parallel)
• Serial Transmission occurs in one of three ways: Synchronous, Asynchronous or Isochronous
Asynchronous Transmission:
• Timing of the bits doesn’t matter —> info is received and translated by agreed upon patterns so as long as the receiver sticks to those patterns
then they have no problem retrieving info, with no regard for the rhythm at which it arrives.
• Patterns are based on grouping the bit stream into bytes (usually 8 bits per group) which are then sent along the link as a unit. —> each group sent
individually when it is ready, without regard for timing.
• Without synchronisation, the receiver cannot predict when the next group will arrive, so each group has an extra bit at the start of each byte (usually
a 0 and is called the start bit) and an extra one or more bits at the end of each byte (usually 1s and are called the stop bit/bits)
• Each byte is thus enlarged by at least 2 bits.

• The gaps between bytes can be shown as an idle channel or a stream of additional stop bits.
• This mechanism is not synchronised at the byte level but is still synchronised with the
incoming bit stream.
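A tiny illustration of asynchronous framing with one start bit and one stop bit; sending the data bits LSB-first is an assumed (though common) convention:

```python
def frame_byte(byte, data_bits=8, stop_bits=1):
    """Asynchronous framing: one start bit (0), the data bits, then stop bit(s) (1).
    Data bits are taken LSB-first here, which is a common (assumed) convention."""
    bits = [(byte >> i) & 1 for i in range(data_bits)]
    return [0] + bits + [1] * stop_bits

print(frame_byte(0x41))   # 'A' -> [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```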
Synchronous Transmission:
• Bit stream is combined into longer “frames” which may contain multiple bytes. —> each byte is introduced into the transmission link without gaps
between bytes and it is then left to the receiver to separate the bytes in the bit stream for
decoding.
• If the sender wishes to send data in separate bursts, then the bursts must be separated by a
special sequence of 1s and 0s that means ‘idle’. —> absolutely no gaps in the bit stream.
• the receiver then counts the bits as they arrive and groups them into 8-bit units.
• Without start/stop bits, there is no built-in way for the receiver to adjust its bit synchronisation midstream - thus timing is very important as the
accuracy of the received info is completely dependent on the ability of the receiver to count and group the bits as they come in.
• Advantage: speed —> with no extra bits or gaps its faster than asynchronous —> high-speed applications
• NOTE: although there are no gaps between characters, there might be uneven gaps between frames.
Isochronous Transmission:
• guarantees that data arrives at a fixed rate —> used for time-sensitive traffic such as real-time audio and video, where the entire bit stream must be synchronised.
Study Unit 3: Chapter 5

Digital-to-Analog Conversion
• The process of changing one of the characteristics of an analog signal
based on the information in digital data.
• A sine wave is defined by amplitude, phase and frequency —> if we change one of
these characteristics we get a different version of that wave, thus, by changing any of
these three properties of a simple electric signal, we can use it to represent digital data.
• Three mechanisms for modulating digital data into analog signals: Amplitude Shift
Keying (ASK), Frequency Shift Keying (FSK) & Phase Shift Keying (PSK). —> also a
fourth, better mechanism called Quadrature Amplitude Modulation (QAM), which is the most efficient and thus the most commonly used
mechanism.
Aspects of Digital-to-Analog Conversion
Data Elements vs. Signal Elements:
• Pretty much the same as previously defined —> the data elements are the people and the signal elements are the cars that they drive in but for
analog transmission the nature of the signal element is changed a bit
Data Rate vs. Signal Rate:
• Same definition as before and the relationship between them is given by: S = N(1/r) baud, where N is the data rate (bps) and S is the
signal rate (baud) —> the value of r in analog transmission is r = log2L, where L is the number of different signal elements.
• Summary: Bit rate is the number of bits per second; Baud rate or Signal rate is the number of signal elements per second; baud rate is always less
than or equal to bit rate.
Bandwidth:
• Required bandwidth for analog transmission of digital data is proportional to the signal rate, except for FSK, in which the difference between the
carrier signals needs to be added.
Carrier Signal:
• In analog transmission the sending device produces a high-frequency signal (carrier signal or carrier frequency) that acts as a base for the
informational signal
• The receiving device is tuned to the expected carrier frequency of the sender —> the digital information then changes the carrier signal by modifying its amplitude, frequency or phase —> this kind of modification is called modulation (or shift keying)
Amplitude Shift Keying (ASK)
• amplitude of the carrier signal is altered to create signal elements —> both phase and frequency remain constant while the amplitude changes.
Binary ASK (BASK):
• Normally uses two kinds (levels) of signal elements —> referred to as Binary Amplitude Shift Keying (BASK) or On-Off Keying (OOK).
• The peak amplitude of one signal level is zero while the other is the same as the carrier
frequency. —> diagram is a conceptual view of BASK
Bandwidth for ASK:
• Although the carrier signal is only one simple sine wave, modulation produces a nonperiodic composite signal, which has a continuous set of frequencies.
• Bandwidth is proportional to the baud rate, however, there is now another factor d, which depends on the modulation and filtering process —> value
of d is between 0 and 1 and bandwidth is, thus, shown as: B = (1+d) x S.
• The formula shows that the required bandwidth has a minimum value of S and a maximum value of 2S.
• Note: the bandwidth is located such that the carrier frequency (fc ) is the middle of the bandwidth. —> this means we have a bandpass channel
available and we can choose fc such that the modulated signal occupies that bandwidth. —> Most important advantage of digital-to-analog
conversion.

Implementation:
• If the digital data is represented as a unipolar NRZ digital signal with a high of 1 V and a low of 0 V, the implementation can be achieved by multiplying the NRZ digital signal by the carrier coming from an oscillator —> when the amplitude of the NRZ signal is 1, the amplitude of the carrier is preserved; when the amplitude of the NRZ signal is 0, the amplitude of the carrier is zero.
• Example 5.3: We have an available bandwidth of 100kHz, spanning from 200 to 300kHz. What are the carrier frequency and the bit rate if
we modulated our data using ASK with d = 1?
Solution: The middle of the bandwidth is @ 250kHz , which means fc = 250 kHz , then we use our bandwidth formula to find the bit rate with d=1
and r=1
B = (1+d) x S = 2N(1/r) = 2N = 100kHz —> N = 50 kbps.
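The Example 5.3 arithmetic as a small helper (function names are mine):

```python
def carrier_frequency(low_hz, high_hz):
    """The carrier sits in the middle of the available band."""
    return (low_hz + high_hz) / 2

def ask_bit_rate(bandwidth_hz, d=1, r=1):
    """From B = (1 + d) * S and S = N / r:  N = r * B / (1 + d)."""
    return r * bandwidth_hz / (1 + d)

print(carrier_frequency(200e3, 300e3))   # 250000.0 Hz  (fc = 250 kHz)
print(ask_bit_rate(100e3, d=1, r=1))     # 50000.0 bps  (N = 50 kbps)
```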
Multi-level ASK:
• If we want to use more than two levels —> we can use 2^n different amplitudes and modulate the data using n bits at a time (so that r = n).
• This is not implemented with pure ASK, but it is implemented with QAM (see later).
Frequency Shift Keying (FSK):
• Frequency of the carrier signal is varied to represent data —> frequency of the modulated signal is constant for the duration of one signal element
and then changes for the next signal element IF the data element changes.
Binary FSK (BFSK)
• Two carrier frequencies —> f1 & f2 —> first carrier used if the data element is zero and the
second carrier is used if the data element is 1. —> unrealistic example only for demonstration
• Carrier frequencies are usually very high and the difference between them is very small.
• The middle of one bandwidth is f1 and the middle of the other is f2 —> each is Δf away from the midpoint between the two bands —> the difference between the two carrier frequencies is therefore 2Δf.
• Bandwidth for BFSK: the carrier signals are simple sine waves, but modulation creates a nonperiodic composite signal with a continuous set of frequencies. We can think of BFSK as two ASK signals, each with its own carrier frequency.
If the difference between the two carriers is 2Δf, then the required bandwidth is:
B = (1 + d) x S + 2Δf
with the value of 2Δf being equal to or greater than the value of S.
Example 5.5: We have an available bandwidth of 100 kHz, spanning from 200 to 300 kHz. What should the carrier frequencies and bit rate be if we modulate our data using FSK with d = 1?
Solution: The midpoint of the band is 250 kHz and we choose 2Δf = 50 kHz, then
B = (1 + d) x S + 2Δf = 2S + 50 kHz = 100 kHz —> 2S = 50 kHz —> S = 25 kbaud —> N = 25 kbps
Multilevel FSK:
• Not uncommon with the FSK method —> more than two frequencies are used and thus more bits can be sent at a time.
• Bandwidth: B = L x S, where L is the number of levels and S is the signal rate/ baud rate.
Phase Shift Keying:
• the phase of the carrier is altered to represent two or more different signal elements —> peak amplitude and frequency remain constant
Binary Phase Shift Keying (BPSK):
• Two signals: one with a phase of 0˚ and the other with a phase of 180˚.
• Less susceptible to noise —> noise can change the amplitude more easily than it can change the phase, so PSK is less susceptible to noise than ASK.
• PSK is superior to FSK because we don't need two carrier signals —> PSK does, however, need more complex hardware to distinguish between phases.

Bandwidth for BPSK:


• Same as the bandwidth for BASK, but less than that of BFSK (because we don't waste any bandwidth separating two carrier signals)
Implementation:
• The signal element with phase 180˚ can be seen as the complement of the signal element with phase 0˚
• A polar NRZ is multiplied by the carrier frequency —> bit 1 (positive voltage) is shown
by the phase starting at 0˚ and bit 0 (negative voltage) is represented by the phase starting
at 180˚.
Quadrature PSK (QPSK):
• 2 bits at a time in each signal element —> thus decreasing baud rate and thus the
required bandwidth
• Uses two separate BPSK modulations: one in phase and the other is
quadrature (out-of-phase)
• Incoming bits pass through a serial-to-parallel converter that sends one bit to one modulator
and the other bit to the other modulator.
• If the duration of each bit in the incoming signal is T, then the duration of each bit sent to the corresponding BPSK modulator is 2T —> thus, the stream sent to each BPSK modulator has half the bit rate of the original signal.
• The two signals created are sine waves with the same frequency but different phases, which, when added together, give another sine wave with one of four possible phases: 45˚, -45˚, 135˚ or -135˚.
• There are 4 types of signal elements in the output signal (L = 4), so we can send 2 bits per signal element (r = 2)
Constellation Diagrams:
• A constellation diagram can help us find the amplitude & phase of a signal element, particularly when using two carriers (one in-phase and one
quadrature)
• Signal elements shown as dots (the bit(s) that it carries usually written next to it) where the height on the Y-axis
shows the amplitude of the Quadrature component and the distance along the X-axis shows the amplitude of
the In-Phase component. —> the length of the line (vector) from the origin to the point is the peak amplitude of
the signal element (combo of X and Y components) and the angle the line makes with the X-axis is the phase
of the signal element.
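A small helper that recovers the peak amplitude and phase from the in-phase and quadrature components, applied here to the four QPSK points described above (the +/-1 levels are the assumed polar NRZ values):

```python
import math

def polar(in_phase, quadrature):
    """Peak amplitude and phase of a signal element from its I and Q components."""
    amplitude = math.hypot(in_phase, quadrature)
    phase_deg = math.degrees(math.atan2(quadrature, in_phase))
    return amplitude, phase_deg

# The four QPSK points (polar NRZ levels of +/-1 on each carrier):
for i, q in [(1, 1), (-1, 1), (-1, -1), (1, -1)]:
    print((i, q), polar(i, q))   # amplitude sqrt(2); phases 45, 135, -135, -45 degrees
```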
Quadrature Amplitude Modulation (QAM):
• PSK is limited by the ability of the equipment to distinguish small differences in phase —> this limits its potential bit rate —> we can combine ASK and PSK: we use two carriers, one in-phase and the other quadrature, with different amplitude levels for each carrier. —> the possible variations of QAM are numerous
• a. Shows 4-QAM (four different signal element types ) using a unipolar
NRZ signal to modulate each carrier.
b. Shows 4-QAM using polar NRZ (exactly the same as QPSK)
c. Shows another 4-QAM in which the levels used to modulate the two carriers are both positive.
d. Shows a 16-QAM constellation of a signal with 8 levels (4 positive and 4 negative).
Bandwidth for QAM:
Minimum bandwidth required is the same as for ASK and PSK transmission and QAM has the same advantages over ASK as PSK.

Analog-to-Analog Conversion
• Analog-to-analog conversion or Analog Modulation is the representation of analog information by an analog signal.
• Modulation is needed if the transmission medium is bandpass in nature or if only a bandpass channel is available to us. —> e.g. radio: the government assigns a narrow bandwidth to each radio station, so the analog signal produced by each station (which is a low-pass signal, all in the same range) needs to be shifted into a different range so that we can tune in to different stations using their specific ranges.
• Analog-to-analog conversion can be achieved in three ways: Amplitude Modulation (AM), Frequency Modulation (FM) and Phase Modulation
(PM) —> AM and FM are usually categorised together (radio stations are AM or FM radio)
Amplitude Modulation (AM):
• The carrier signal is modulated so that its amplitude varies with the changing
amplitudes of the modulating signal.
• The phase and frequency of the carrier signal remain constant —> only the amplitude is changed to follow variations in the information.
• The modulating signal is the envelope that the carrier fits into.
• AM usually implemented using a simple multiplier because the amplitude of the carrier signal needs to be changed according to the amplitude of the
modulating signal.
AM Bandwidth:
• Modulation creates a bandwidth that is twice the bandwidth of the modulating signal, covering a range centred on the carrier frequency. —> the signal components above and below the carrier frequency carry the same information (some implementations therefore discard half of the signal and cut the bandwidth in half).
• The total bandwidth required for AM can be determined from the bandwidth of the audio signal: B_AM = 2B
Standard Bandwidth Allocation for AM Radio:
• Bandwidth of an audio signal is usually 5kHz, thus each radio station gets a bandwidth of 10kHz
• AM stations are allowed carrier frequencies between 530 and 1700 kHz (1.7 MHz); however, each station's carrier frequency must be separated from its neighbours' by at least 10 kHz (one AM bandwidth) to avoid interference.
Frequency Modulation (FM):
• Frequency of the carrier signal is modulated to follow the changing voltage levels
(amplitude) of the modulating signal —> peak amplitude and phase are constant
• FM usually implemented using a voltage-controlled oscillator (like with FSK) —> the
frequency of the oscillator changes according to the input voltage (amplitude of the
modulating signal)
FM Bandwidth:
• Actual bandwidth is difficult to determine but it can be shown to be several times that of
the analog signal or 2(1+ß) B, where ß is a factor that depends on the modulation technique (common value of 4)
—> we use the formula BFM = 2( 1 + ß )B
Standard Bandwidth Allocation for FM Radio:
• Bandwidth of a speech and music audio signal is almost 15 kHz —> FCC allows 200 kHz for each station, which gives us ß = 4, with some extra
guard band
• FM stations are allowed carrier frequencies between 88 and 108 MHz, and stations must be separated by at least 200 kHz.
• To further prevent overlapping and thus interference - the FCC only allows alternate bandwidths to operate at a time —> example if we have
88-108MHz as a range then there are 100 possible bandwidths but only 50 can operate at a time within a certain area.

Phase Modulation (PM):


• Phase of the carrier signal is modulated to follow the changing voltage level (amplitude) of the modulating signal while amplitude and frequency
remain constant.
• PM is the same as FM, but with one difference: in FM, the instantaneous change in the carrier
frequency is proportional to the amplitude of the modulating signal —> in PM, the
instantaneous change in the carrier frequency is proportional to the derivative of the amplitude
of the modulating signal.
• PM is usually implemented using a voltage-controlled oscillator together with a differentiator —> the frequency of the oscillator changes according to the derivative of the input voltage (the amplitude of the modulating signal)
PM Bandwidth:
• Actual bandwidth is difficult to determine but we can use the formula: BPM = 2( 1 + ß )B,
• The ß for PM (usually around 1 for narrowband and 3 for wideband ) is lower than that of FM
Study Unit 3: Chapter 6

Multiplexing:
• Multiplexing is the set of techniques that allow the simultaneous transmission of multiple signals across a single data link.
• We can add data links to expand the bandwidth when needed.
• n input lines sharing the bandwidth of a link.
• n input lines —> MUX—>one line with n channels—>DEMUX —> n output lines
Frequency Division Multiplexing(FDM):
• Analog multiplexing technique that combines analog signals into a single composite analog signal.
• Channels separated by sections of unused bandwidth which we call Guard Bands - protect from
interference
• The minimum bandwidth of the link must be at least the sum of the bandwidths of the channels being combined (plus the guard bands).
• DEMUX: same as MUX but opposite


Wavelength-Division Multiplexing (WDM):


• analog multiplexing technique that combines optical signals (fibre optic cable)
• Narrow bands of light from the input to form a band of light that is wider —> using a
prism (for MUX and DEMUX)
• Guard bands also present here
• Optical signals split up according to their respective wavelengths at the demultiplexer
Time-Division Multiplexing:
• Digital Multiplexing
• Time is shared —> each connection is given time slots to use the full bandwidth once at a time
• No switching present - connections are fixed
• Two schemes: 1.) Synchronous TDM: each input connection has an allotment in the output even if it is not sending data. —> every T seconds one unit is taken from each of the n input lines and the n units are packed into a single output frame, so each output time slot lasts T/n seconds.
• Basically, each input unit is squashed into a frame that houses units from all n inputs, and thus each output time slot is n times shorter than an input unit.
• In synchronous TDM, the data rate of the link is n times faster and the unit duration is n times shorter.
• Disadvantage: slots are still sent even if they do not contain any input data - called the empty
slot problem.
• On the multiplexing side, as the switch opens in front of a connection, that connection has the opportunity to send a unit onto the path. This
process is called interleaving. On the demultiplexing side, as the switch opens in front of a connection, that connection has the opportunity to
receive a unit from the path.
• Synchronisation patterns show us where and ‘when’ the frames are —> need good synchronisation —> we add one or more framing bits to the
beginning of a frame to allow synchronisation —> usually one bit that alternates (0&1) per frame.
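A toy sketch of synchronous-TDM interleaving with an alternating framing bit; the frame layout is simplified for illustration only:

```python
def tdm_interleave(lines, frame_count, framing_bit_start=0):
    """Synchronous TDM: frame k carries one unit from every input line, prefixed
    with an alternating framing bit used for synchronisation."""
    frames = []
    for k in range(frame_count):
        framing_bit = (framing_bit_start + k) % 2
        frames.append([framing_bit] + [line[k] for line in lines])
    return frames

lines = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"]]
print(tdm_interleave(lines, 2))
# [[0, 'A1', 'B1', 'C1'], [1, 'A2', 'B2', 'C2']]
```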

TDM Data Rate Management:
• How do we handle disparity in the input data rates?
• 3 strategies: Multilevel Multiplexing, Multiple-Slot Allocation, Pulse Stuffing
Multilevel Multiplexing:
• used when the data rate of an input line is a multiple of the others —> the slower lines are first multiplexed together so that their combined rate matches the faster lines
Multiple-Slot Allocation:
• used when one input's data rate is a multiple of another's —> the faster line is split (serial-to-parallel) and given more than one slot per frame to match the other inputs
Pulse Stuffing:
• used when the bit rates of the sources are not multiples of one another —> we then make the highest input data rate the dominant data rate and insert dummy bits into the lines with lower data rates.
• a.k.a. bit stuffing or bit padding
2.) Statistical TDM:
• dynamic slot allocation —> improved bandwidth efficiency
• Done by only allocating a slot if a line has data to send
• The number of slots is usually less than the number of lines (most of the time there will be lines that don't have data to send and thus won't need a slot).
• The multiplexer checks each line for data to send; if a line has no data, it is skipped without being allocated a slot.
• No framing bits/synchronisation bits —> instead we have an indicator that shows us where that output must go (e.g. A1 goes to line A) —> no
slot given to a line = no output on that line.
• Telephone companies implement TDM through a hierarchy of digital signals (digital signal (DS) service) —> 24 x 64 kbps lines combine to make one 1.544 Mbps line, which is then combined to form larger and larger lines —> think of a large river formed by the coming together of many smaller rivers

Spread Spectrum:
• designed to be used with wireless applications where other concerns outweigh bandwidth efficiency.
• Wireless: our medium is air —> stations share the same medium, but each station needs to be protected from eavesdropping and jamming (by a malicious intruder).
• Spread spectrum adds redundancy, which helps protect against jamming and eavesdropping.
• We spread the original spectrum of the signal from B to B_ss —> this allows the source to 'wrap' its message in a protective envelope.
• The goals are achieved through two principles: the bandwidth allocated to each station is far larger than what is needed (which allows the redundancy), & the expansion of the original bandwidth must be done by a process that is independent of the original signal (spreading occurs AFTER the signal is created by the source).
Two techniques are used: Frequency Hopping SS and Direct Sequence SS
Frequency Hopping Spread Spectrum (FHSS):
• Uses M different frequencies that are modulated by the source
• Different carriers are modulated at different times
• The k-bit pattern selects one of the M carrier frequencies, where M = 2^k —> if M = 8, then k = 3.
• High M = short hopping period = secure

Direct Sequence Spread Spectrum (DSSS):


• each data bit is replaced with n bits using a spreading code —> the n bits of the code are called chips
• Chip rate = n times the bit rate
• Wireless LAN uses the Barker Sequence (n=11)
• Required bandwidth is n-times larger than the original signal.
• Provides privacy (you need the spreading code to decode it)
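A toy spreading example; the 11-chip sequence below is one common representation of the Barker code (verify the exact chip values against the textbook or the IEEE 802.11 standard before relying on them):

```python
# One common representation of the 11-chip Barker code (assumed; check the
# exact chip values against the textbook / IEEE 802.11).
BARKER_11 = [1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0]

def dsss_spread(bits, chips=BARKER_11):
    """Each data bit is replaced by the chip sequence XORed with that bit, so the
    chip rate is len(chips) times the bit rate."""
    return [c ^ b for b in bits for c in chips]

spread = dsss_spread([1, 0])
print(len(spread), spread[:11])   # 22 chips; the first 11 are the inverted code
```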
Study Unit 3: Chapter 7

Transmission Media
Introduction:
• Defined as anything that can carry information from a source to a
destination. —> air, metallic cable, fibre optic cable
• Media split into guided and unguided media —> guided media has a physical

material which the signal flows through and unguided media is free space.
Guided Media:
• Provide a conduit from one device to another —> twisted-pair cable, coaxial cable, fibre-optic cable

Twisted-Pair Cable:
• Two conductors (usually copper), each with its own plastic sheath, twisted together.
• One cable is used to carry the signal and the other is used as a ground reference.
• Both wires are susceptible to noise and crosstalk; if the wires were parallel, they would pick up different amounts of interference. We therefore twist the wires: in one twist one wire is closer to the interference source, and in the next twist the other wire is closer, so on average the interference affects both wires equally and most of the unwanted signals cancel out, making it easier for the receiver to extract a clean signal.
• This is also why the quality of the cable depends on the number of twists per unit length.
Shielded vs Unshielded Twisted Pair Cables:
• Unshielded Twisted Pair = UTP; Shielded Twisted Pair = STP.
• STP has a layer of metal foil or braided mesh that surrounds the twisted pair of conductors
• Metal shielding improves resistance to interference (crosstalk and noise) but makes the
cable heavier and more bulky. —> seldom used outside of IBM
• Categories: twisted-pair cables are graded from Category 1 (lowest data rate, most interference) up to Category 7 (highest data rate, least interference).
• Connectors: the most common is the RJ45 (RJ = registered jack), which is a keyed connector (it can only be inserted one way) —> think of an Ethernet cable connection.

Performance:
• We measure cable performance by plotting attenuation versus frequency and distance.
• Gauge refers to the diameter of the conductor.
• Attenuation increases sharply for frequencies above 100 kHz.
Applications:
• twisted-pair cables are used in telephone lines, DSL lines and LAN networks such as
10Base-T and 100Base-T

Coaxial Cable:
• Central conductor wrapped in an insulator, wrapped in another conductor, wrapped in more insulation.
• The outer metallic wrapping serves both as a shield against noise and as a second conductor,
which completes the circuit.
• Coaxial Cable Standards: categorised by their Radio Government (RG) ratings —> each RG number denotes the wire gauge of the inner conductor, the thickness & type of the inner insulator, the shield construction and the type of outer casing.
• Coaxial Cable Connectors: Most common type = Bayonet Neill-Concelman (BNC) connector.
BNC terminator is used at the end of a cable to prevent signal reflection.
• Performance: Attenuation in a coaxial cable is much higher than in twisted-pair —> although
the bandwidth of coaxial cable is much higher, the signal weakens rapidly.
• Applications: Analog telephone networks —> one coaxial cable can carry 10k voice signals,
Digital telephone networks —> one coaxial cable can carry ≤600Mbps (Both have largely been
replaced by fibre optic.) , Cable TV networks (also largely replaced by fibre optic), Ethernet LANs

(10Base-2 (thin Ethernet), 10Base-5 (thick Ethernet - has specialised connectors))


Fibre-Optic:
• Made of glass or plastic —> transmits signals in the form of light. —> light travels in a straight line as long as it moves through a medium of uniform density, but when it crosses into a material of different density it changes direction, and what happens depends on the critical angle.
• If the angle of incidence (the angle of the ray as it hits the border between the media) is smaller than the critical angle, then the light refracts and leaves the medium.
• If the angle of incidence is larger than the critical angle, then the light is reflected back into the first medium —> and if it is equal, the light travels along the border.
• Fibre-optic cables use cladding in conjunction with this reflection principle to keep the light bouncing along inside the core (reflection —> angle of incidence > critical angle).
Propagation Modes:
• Currently we have two modes: Multimode (houses two indexes: step index and graded index) and
Single Mode.
• Multimode: Multiple beams of light from the source move through the medium in different paths
• Step Index: the density of the core remains constant from the centre to the edges —> light
moves in straight lines until it is reflected at the cladding.
• Graded Index: the density of the core is highest at the centre and gets lower the further out you go
—> decreases the distortion of the signal due to the abrupt change in density as in step index.
• Single-Mode: uses a step-index fibre with a very focused light source that limits beams to a small range of angles, all close to the horizontal.
• The fibre has a much smaller diameter and lower density —> the critical angle is so close to 90˚ that beam propagation is almost horizontal —> all beams travel roughly horizontally with negligible delay differences.
• All beams arrive together at the destination and are recombined with very little distortion.
Fibre Sizes:
• Optical Fibres are defined by the ratio of the diameter of their core and cladding (both in micrometers)
• Common sizes are shown ———>

Fibre-Optic Cable Connectors: 3 types —>


• Subscriber Channel (SC) Connector —> used for cable TV; uses push/pull locking system
• Straight-tip (ST) Connector —> used for connecting cable to networking devices; uses a bayonet locking system
and is more reliable than SC connectors.
• MT-RJ —> same size as the RJ45
Performance:
• attenuation is much flatter than in twisted-pair and coaxial —> we only need one tenth the number of repeaters
Applications:
• Backbone networks (due to its high bandwidth being cost effective) —> with wavelength-division multiplexing
we can transfer data at 1600 Gbps
• Some TV companies use a hybrid of optical fibre and coaxial cabling —> optical fibre provides the backbone and the coaxial cable provides the connection to the user's premises —> the low bandwidth needed at the user's end doesn't warrant optical fibre.
• LANs such as 100Base-FX (fast Ethernet) networks and 100Base-X also use fibre optic cables.
Advantages and Disadvantages:
Advantages:
• Higher bandwidth —> data rates in fibre optic cables are not limited by the medium but rather by the signal generation and reception technology
available ; basically the medium is too fast for the current transmission and reception tech.
• Less Signal Attenuation —> signal can run for 50km without needing regeneration whereas for twisted pair and coaxial, we need a repeater
every 5km
• Immunity to Electromagnetic Interference —> light is not affected by electromagnetic noise
• Corrosion Resistance —> glass is inert and does not react with corrosive materials, unlike copper, which corrodes and reacts with other materials.
• Light Weight —> much lighter than copper cables
• Greater Immunity to Tapping —> copper cables create antenna effects that can easily be tapped, whereas fibre-optic cables do not.
Disadvantages:
• Installation and Maintenance —> a relatively new technology, so installation & maintenance require expertise that is not readily available
• Unidirectional Light Propagation —> we can only send light in one direction, so if we need bidirectional comms then we need two fibre-optic cables
• Cost —> cables and interfaces are pretty expensive compared to other guided media.
Unguided Media (Wireless):
• Unguided media transport electromagnetic waves without using a physical conductor —> this is often called wireless communication; signals are broadcast through free space and are available to anyone with a device capable of receiving them.
• Electromagnetic spectrum used for wireless comms is 3kHz —> 900THz
• Travel from source to destination in 3 ways: ground propagation, sky propagation &
line-of-sight propagation.
• Ground Propagation —> radio waves travel through the lowest portion of the atmosphere, hugging the earth —> low-frequency signals emanate in all directions and follow the curvature of the earth; the greater the power of the signal, the greater the distance it can cover.
• Sky Propagation —> higher-frequency signals radiate upward into the ionosphere (the layer of
the atmosphere where particles exist as ions) where they are reflected back to earth. —> allows for greater distances with lower output
• Line-of-Sight Propagation —> very high-frequency signals are transmitted in straight lines from the source to the destination —> antennae
must be directional, facing each other and either tall enough or close enough to not be affected by the curvature of the earth —> tricky because radio
waves can't be completely focused.
• Electromagnetic spectrum we use is split into 8 ranges (called bands) —> rated from very low frequencies (VLF) to extremely high frequencies
(EHF). —> split into three broad groups: radio waves, microwaves, infrared
waves.
Radio Waves:
• No clear demarcation between radio and microwaves —> but radio waves are
usually between 3 kHz and 1 GHz (microwaves are usually 1-300 GHz)
• Radio waves are omnidirectional —> sending and receiving antennae don’t
need to be aligned. —> this leads to interference if the antennae send signals
on the same frequency or band.
• Radio waves (especially sky propagation) can travel long distances —> good candidate for long-distance transmission (AM Radio)
• Radio Waves (of low and medium frequencies) can penetrate walls —> an advantage because things like AM radio can be received in buildings
and disadvantage because we can’t isolate a communication to inside or outside a building.
• The radio wave band is relatively narrow (just under 1 GHz) and its sub-bands are also narrow —> leads to lower data rates for digital comms.
• Using any part of the band requires permission from authorities.
Omnidirectional Antenna:
• Omnidirectional antennas send out signals in all directions —> based on wavelength, strength and the purpose of transmission, we have a
couple antenna options.
Applications:
• Multicast communications, such as radio and television, and paging systems
Microwaves:
• Electromagnetic waves have frequencies between 1 and 300 GHz
• Microwaves are unidirectional but can be narrowly directed —> sending and receiving antennas need to be aligned (no interference between
antennae that are not aligned).
Characteristics of Microwaves:
• Line-of-sight propagation —> towers need to be in direct sight of one another. Anything in between blocks transmission —> repeaters are
often needed to overcome this problem.
• Very high frequency microwaves cannot penetrate walls —> can be problematic if receivers are indoors.
• Microwave band is relatively wide (almost 299 GHz) —> wider sub-bands can be assigned and high data rate is possible.
• Use of certain band portions requires permission from the authorities.
Unidirectional Antenna:
• send signals in one specific direction —> using two types of antenna: Parabolic Dish Antenna & Horn Antenna
• Parabolic Dish —> all the lines intersect in a common focus point —> more of the signal is recovered than with a single point receiver
• Horn Antenna —> the reverse of the receive path —> outgoing transmissions go up the stem and are spread into parallel outgoing beams;
the opposite is true for receiving.
Applications:
• Used in Unicast (one-to-one) —> cellular phones, satellite networks and wireless LANs
Infrared:
• Frequencies from 300GHz to 400THz (wavelengths from 1mm to 770nm ) —> used for short range comms (useless for long range)
• Cannot penetrate walls —> prevents interference between systems (room to room for example)
• Cannot use infrared outside of buildings as the sun's rays contain infrared waves.
Applications:
• communications between devices such as keyboards, mice, PCs and printers
• The Infrared Data Association (IrDA) sponsors the use of infrared and has established standards for the use of this technology.
• Some devices have an IrDA port for using infrared to communicate (e.g. wireless keyboards)
• Data rate of 4Mbps
• Line-of-sight propagation —> if they can’t see each other then they can’t communicate
Study Unit 3: Chapter 8
Switching:
Introduction:
• In a network we want one-to-one connection —> we can just use connections between pairs of devices (mesh) or connect every device to every
other device (star) etc but the best solution is to use switching.
• A switched network consists of a series of interlinked nodes (called switches) that are capable of creating temporary connections between two
or more devices linked to the switch —> some switches are connected to the end devices and some are used only for routing.
• In the diagram above —> the end devices are labelled as Letters (A,B,C,D,F,G,H,I,J) and the switches are labelled as numbers (I, II, III, IV, V)
Three Methods of Switching:
• Circuit Switching, Packet Switching (divided into virtual-circuit approach and datagram
approach) and Message Switching.
Switching & TCP/IP Layers:
• Switching can happen @ several layers in the TCP/IP protocol suite.
• Switching @ Physical Layer: only have circuit switching, no packets are exchanged —> switches at the physical layer allow signals to travel one
path or another.
• Switching @ Data-Link Layer: can have packet-switching, however “packet” in this case means frames or cells. —> packet switching is usually
done using the virtual-circuit approach in this layer.
• Switching @ Network Layer: can have packet switching —> can either be virtual-circuit approach or datagram approach —> currently the
Internet uses the datagram approach but the tendency is to move to virtual-circuit approach.
• Switching @ Application Layer: only have message switching as only messages are exchanged in this layer —> we don’t really see any
“message-switched networks”.
Circuit-Switched Networks:
• A circuit-switched network consists of a set of switches connected by physical links, in which
each link is divided into n channels.
• Each link is usually divided into n channels using FDM (frequency division multiplexing) or
TDM (time division multiplexing).
• End devices are directly connected to a switch —> when end system A and end system M need
to communicate, A needs to request a connection to M and have that request granted by all the
switches as well as by M itself (this process is called the setup phase) —> once a dedicated path is established, data can be transferred in the
data-transfer phase. —> after the data has been transferred, the circuits are torn down.
NOTE:
• Circuit-switching takes place at the physical layer
• Before the start of communication, the resources (channels (bandwidth in FDM & time slots in TDM), switch buffers, switch processing time,
switch I/O ports) must be reserved and remain dedicated during the entire data transfer duration until the teardown phase.
• Data transferred is not put into packets —> it is a continuous data flow between the sender and receiver (although there may be periods of
silence)
• There is no addressing involved during data transfer —> switches route the data based on their occupied band (FDM) or time slot (TDM). —>
obviously there is still end-to-end addressing used in the setup phase.
Three Phases:
• Connection setup, data transfer & connection teardown
• Setup Phase: When end system A and end system M need to communicate, A needs to request a connection to M and have that request granted
by all the switches as well as by M itself —> the request contains the address of M so that the switches can find routes between themselves and
form a path from A to M —> when the connection request is received by M, an acknowledgment from system M is then sent in the reverse
direction to system A —> a connection between A and M is only established when A receives the acknowledgement from M.
• Data-Transfer Phase: After the connection is established, the two parties can communicate and transfer data.
• Teardown Phase: When one of the parties needs to disconnect, a signal is sent to each of the switches to release the resources that were reserved
for that specific connection.
Efficiency:
• Not the most efficient as the connection between devices such as computers is not terminated even if there is a long period of inactivity —> this
means that those resources are tied up in the connection and are not being used, thus wasting those resources.
• Networks such as telephone networks are a little better because people usually end the connection when they have finished speaking, thus freeing
up those resources again.
Delay:
• Delay in circuit-switched networks is minimal —> dedicated links between devices mean
that data is not delayed at every switch but rather passed through all the way to the end.
• The figure shows the connection request from A to B, the acknowledgement from B to A,
the data transfer from A to B and then the disconnection/termination of the connection
from B to A. —> notice there is no delay when data is passing through the switches.
• Data transferred is only delayed by the propagation time (slope of the box) and the data transfer time (height of the box)
Packet Switching:
• Data to be transferred is divided into packets of fixed or variable size. —> packet size determined by the network and governing protocol.
• No resource allocation for a packet —> resources are allocated on demand on a first-come first-served basis —> lack of reservation of resources
may cause delays as packets must wait their turn before being processed.
Datagram Networks:
• Each packet is treated independently from all others (even if it is part of a multi-packet transmission) —> packets in this approach are called
datagrams.
• Normally done at the network layer
• Diagram shows how 4 datagrams are sent from A to X using packet switching —>
datagrams can be reordered due to each one taking different paths and some datagrams
can be lost or dropped due to lack of resources —> it is usually up to an upper-layer
protocol to reorder the datagrams before passing them on to the application.
• Datagram networks are also referred to as connectionless networks —> where connectionless means the switch doesn’t keep info about the
connection state.
• No setup or teardown phases —> each packet is treated the same regardless of its origin or destination.
Routing Table:
• each switch ( or packet switch) has a routing table which is based on the destination address. —> routing tables are
dynamic and are updated periodically
• The destination address and the corresponding forwarding output ports are recorded in the tables.
• A switch in a datagram network uses a routing table that is based on the destination address.
Destination Address: every packet in a datagram network carries a header with (among other info) the destination address.
This destination address in the header of a packet doesn’t change during the entire journey of the packet.
Efficiency: better than that of a circuit-switched network —> resources are allocated on an as-needed basis and thus are not tied up in connections
that are not transmitting data.
Delay: there may be a greater delay in a datagram network than in a virtual-circuit network
as there are no setup or teardown phases, so each packet may experience a delay at a switch
before it is forwarded. —> since all packets do not travel through the same switches, the delays may
differ between packets
• In the diagram: the packet travels through two switches which means three transmission
times (3T), three propagation delays (slopes of the lines = 3τ) and two waiting times (w1 & w2), thus the total delay is:
Total Delay = 3T + 3τ + w1 + w2
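As a quick numeric sketch of this formula (the values below are invented for illustration, in Python):

# Hypothetical values: 2 ms transmission time per hop, 1 ms propagation per link,
# and 0.5 ms / 0.8 ms of queueing at the two switches.
T, tau = 2e-3, 1e-3          # transmission time and propagation delay per link (seconds)
w1, w2 = 0.5e-3, 0.8e-3      # waiting times at the two switches (seconds)
total_delay = 3*T + 3*tau + w1 + w2
print(f"Total delay = {total_delay*1e3:.1f} ms")   # 10.3 ms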
Virtual-Circuit Networks:
• A virtual-circuit network is a cross between a circuit-switched and a datagram network.
Characteristics:
• Has setup and tear down phases in addition to the data transfer phase.
• Resources can be allocated either in the setup phase or they can be dynamically allocated.
• Data is packetised and each packet carries an address in the header —> however
the address in the header has local jurisdiction (defines what the next switch should be
and what channel the packet is carried on), not end-to-end jurisdiction (where only the end
address is known and the routing is left to the switches)
• All packets follow the same path established in the connection.
• Normally implemented in the data-link layer
Addressing:
• Two types of addressing: Global addressing & Local addressing (Virtual-Circuit Identifier)
• Global Addressing: A source or destination needs a global address that is unique in the scope of the network (or internationally if it's an
international network) —> global addresses in virtual-circuit networks are only used to create a
virtual-circuit identifier.
• Virtual-Circuit Identifier (VCI): Actually used for data transfer —> a small number that has only
switch scope (used by a frame between two switches) —> when a frame arrives at a switch it has a
specific VCI and when it leaves, it has a different VCI.
Three Phases:
• Setup Phase —> source and destination use their global addresses to help switches make table entries for their connection
• Tear down Phase —> source and destination inform the switches to delete those corresponding entries.
• Data-Transfer Phase —> data is actually sent from source to destination.
• Data-Transfer Phase: To transfer a frame from source to destination, all switches must have a table entry
for this virtual circuit connection. —> in its simplest form, the table has four columns —> this means that
the switch holds four pieces of info for each virtual circuit that has already been set up.
• Setup Phase: a switch creates an entry for a virtual circuit —> done in two steps:
• Setup Request: A setup request frame is sent from source to
destination.
a. Source A sends setup frame to switch 1
b. Switch 1 receives the frame, creates 3 of the 4 entries in its routing table , assigns incoming port (1)
and chooses an available incoming VCI (14) and the outgoing port (3). It doesn’t know the outgoing VCI
yet (found during acknowledgement) and forwards the frame through port 3 to switch 2
c. Switch 2 receives the frame and performs the same process as in switch 1
d. Switch 3 receives the frame and performs the same process as in switch 2
e. Destination B receives the setup frame and (if it's ready to receive) assigns a VCI to the incoming frames from A. This VCI lets the destination
know that the frames come from A and not another source.
• Acknowledgement: A special frame called the Acknowledgement frame completes the
entries in the switching tables.
a. Destination sends an ACK (carries the global source and destination address) to
switch 3 —> frame also carries VCI 77 which is the incoming VCI for the destination but the
outgoing VCI for switch 3 (which completes the switching table for switch 3)
b. Switch 3 sends the ACK to switch 2, which does the same process and fills its switching table.
c. Switch 2 sends the ACK to switch 1 which does the same process and fills its switching table.
d. Switch 1 sends the ACK to source A, giving the incoming VCI of switch 1 to source A.
e. The source uses the outgoing VCI for the data frames to be sent from A to B.
• Teardown Phase: Source A sends a special frame called a teardown request, B responds with a teardown confirmation frame and all switches
delete the corresponding entries from their table.
Efficiency:
• In virtual-circuit switching, all packets belonging to the same source and destination travel the same path, but the packets may arrive @ the
destination with different delays if resource allocation is on-demand.
Delay in Virtual-Circuit Networks:
• One-time delay for setup and one-time delay for teardown.
• If resources are reserved beforehand then there’s no wait time for individual packets.
• Total Delay = 3T + 3τ + setup delay + teardown delay
Circuit-Switched Technology in WANs:
• Switching in the data-link layer in a switched WAN is usually done using virtual-circuit techniques.
Structure of a Switch:
Structure of Circuit Switches:
• Either space-division switch or time-division switch
Space-Division Switch:
• Paths in the circuit are separated by physical space —> used in both analog and digital networks.
• Two types: Crossbar Switch and Multistage Switch
Crossbar Switch: connects n inputs to m outputs in a grid, using electronic microswitches
(transistors) at each crosspoint. —> the major limitation of this design is the number of
cross points required (to connect 1000 inputs to 1000 outputs, we need 1000000
cross points)
Multistage Switch: combines crossbar switches in several (normally 3) stages —>
each crossbar in the middle stage can be accessed by multiple crossbars in the first or
third stage.
• To design a three stage switch, we follow the following steps:
1. Divide the N input lines into groups of n lines each —> for each group we use one crossbar of size n x k, where k is the number of crossbars in
the middle stage. —> thus, the first stage has N/n crossbars of nxk crosspoints.
2. Use k crossbars, each of size (N/n) x (N/n) in the middle stage.
3. Use N/n crossbars, each of size k x n at the third stage.
thus, we can calculate the total number of crosspoints as: (N/n)(n x k) + k(N/n)^2 + (N/n)(k x n) = 2kN + k(N/n)^2
• Multistage has one drawback —> blocking in periods of high traffic —> blocking is when an input can't be connected to an output because there
are no available paths. —> the Clos criterion gives the conditions for a non-blocking multistage switch: n = √(N/2), k ≥ 2n - 1, and total crosspoints ≥ 4N(√(2N) - 1)
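A small Python sketch of these design rules (N below is an illustrative value; the crosspoint count and the Clos conditions are the formulas quoted above):

from math import ceil, sqrt

def multistage_crosspoints(N, n, k):
    # First stage: N/n crossbars of n x k; middle: k crossbars of (N/n) x (N/n);
    # third stage: N/n crossbars of k x n  ->  2kN + k(N/n)^2 crosspoints in total.
    groups = N // n
    return 2 * k * N + k * groups * groups

N = 200                       # illustrative number of inputs/outputs
n = ceil(sqrt(N / 2))         # Clos: group size n = sqrt(N/2)
k = 2 * n - 1                 # Clos: at least 2n - 1 middle-stage crossbars for non-blocking
print(n, k, multistage_crosspoints(N, n, k))   # 10 19 15200 (vs 40000 for one 200 x 200 crossbar)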
Time-Division Switch:
• Time-division switching uses Time-Division Multiplexing (TDM) inside a switch —> most popular is called time-slot interchange (TSI)
Time-Slot Interchange:
• In the diagram a system is shown where 4 inputs are connected to 4 output lines
• The time-slot interchange then fills slots in RAM with incoming data and then sends
the slots out based on decisions by the control unit.
Time- and Space-Division Switch Combinations:
• We can also combine time and space-division switches to form time-space-time switches
• Time-space-time switches (diagram —>) use time-division switches to split inputs into groups, which
is n times faster than using a single time-slot interchange (if n is the number of time-slot interchanges
being used) —> the middle stage is a space-division switch (crossbar) that connects the TSI groups
to allow connectivity between all input/output pairs —> then the final stage is a mirror of the first
stage.
Structure of Packet Switches:
• 4 main components —> input ports, routing processor, switching fabric, output ports.
Input Ports:
• input ports decapsulate the packets from the frames, detect and correct errors and stick
the packet into the queue to be routed to the switching fabric.
Output Ports:
• Output port performs the same function but in reverse, the packets are taken from the queue,
encapsulated in a frame and the physical-layer functions are applied to the frame to create the signal to
be sent on the line.
Routing Processor:
• performs the processes of the network layer —> the destination address is used to find the address of the next hop and the output port number
from which the packet is sent out —> process sometimes called table lookup.
Switching Fabrics:
• can use a number of fabrics —> crossbar (discussed already) or Banyan Switch
• Banyan switch uses binary micro switches to route the packets based on the output port represented as a binary string.
- If we have a 3-stage Banyan switch then each output port is represented as a 3-digit binary number,
where the first bit decides the routing in the first stage, the second bit decides the second stage and the third
bit decides the third stage.
- Problem with the Banyan is that there may be collisions —> thus we use a Batcher-Banyan Switch which
has a trap module to prevent simultaneous routings to the same output port (and thus collisions)
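A minimal sketch of this self-routing idea (the helper below only shows the per-stage decisions; it does not model the actual Banyan wiring between stages):

def banyan_route(dest_port, stages=3):
    # Stage i looks at bit i (MSB first) of the destination port number:
    # 0 -> take the upper output of the micro-switch, 1 -> take the lower output.
    bits = format(dest_port, f"0{stages}b")
    return [("upper" if b == "0" else "lower") for b in bits]

print(banyan_route(6))   # port 6 = 110 -> ['lower', 'lower', 'upper']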
Study Unit 5: Chapter 9
Link-Layer (DLC & MAC)
Nodes & Links:
• Communication is node-to-node —> data from one point needs to pass through many networks (LANs & WANs) to reach another point.
• Networks are then connected by routers, which together with the source and destination hosts, are called nodes, with the networks in between
being called links.
Services:
• data-link layer provides services to the network and receives services from the physical
layer
• The data-link layer is responsible for delivering the datagram to the next node in the path,
when a packet is travelling in the Internet. —> the data-link layer of the sending side needs to encapsulate the datagram in a frame and the data-link
layer of the receiving side needs to decapsulate it from the frame. —> thus, the source host only needs to encapsulate, the destination host needs only
to decapsulate, but each node in between must encapsulate and decapsulate (reason being that each node may have a different protocol with a different
frame format)
Framing:
• A packet in the data-link layer is usually called a frame
• Each node needs to decapsulate and encapsulate the datagram in a frame
before sending it to the next node.
Flow Control:
• if the rate of the produced (sent) frames is higher than the rate of consumed
( received ) frames then frames at the receiving end need to be buffered while waiting to be consumed (processed). —> we can’t have infinite buffer
size so we can either 1.) let the receiving data-link layer drop the frames after the buffer is full or 2.) let the receiving data-link layer send feedback
to the sending data-link layer to ask it to stop or slow down.
Error Control:
• At the sending node, the frame is changed into bits, transformed into electromagnetic signals and then transmitted. —> as electromagnetic signals
are subject to interference and error, the receiving side must then transform the electromagnetic signal into bits, then into a frame, while detecting
error and either correcting it, or discarding the transmission and requesting retransmission from the sender. —> covered in more detail in Chapter 10
Congestion Control:
• Most data-link layer protocols don’t use congestion control —> considered to be an issue in the network or transport layer because of its end-to-
end nature.
Two Categories of Links:
• Two nodes are connected by a transmission medium, but the data-link layer controls how the medium is used. —> we can have a point-to-point link
( dedicated to two devices) or a broadcast link ( link is shared between several pairs of devices)
Two Sublayers:
• Two sublayers are: Data Link Control (DLC) and Media Access Control (MAC).
• DLC sublayer deals with all issues common to both point-to-point and broadcast links while the
MAC sublayer deals with issues specific to broadcast links.
Link-Layer Addressing
• the datagram inside each frame carries a source address and a destination address (IP addresses used to
identify the source and destination host devices) —> these addresses always stay the same
—> they don't change from one frame to another
• To determine the next hop ( a.k.a. The next node ) we need the address for that specific node.
• Link-layer addresses are also called link addresses, MAC addresses or physical addresses
• Link-layer addresses change from one frame to another!!
Link-Layer Address Types:
• Unicast —> for one-to-one communication —> second hex digit of the first byte is an EVEN number (least significant bit of the first byte is 0) —> A2:34:45:11:87:F1
• Multicast —> for one-to-many communication —> second hex digit of the first byte is an ODD number (least significant bit of the first byte is 1) —> A3:35:67:43:32:F1
• Broadcast —> for one-to-all communication —> FF:FF:FF:FF:FF:FF
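A small Python check of this rule (the helper name is made up; the rule itself is the even/odd test described above):

def mac_type(mac: str) -> str:
    # Broadcast: all bits 1; otherwise the least significant bit of the first byte
    # decides: 0 -> unicast, 1 -> multicast (i.e. second hex digit even vs odd).
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"
    return "multicast" if octets[0] & 1 else "unicast"

print(mac_type("A2:34:45:11:87:F1"))   # unicast   (2 is even)
print(mac_type("A3:35:67:43:32:F1"))   # multicast (3 is odd)
print(mac_type("FF:FF:FF:FF:FF:FF"))   # broadcast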
Address Resolution Protocol (ARP):
• the ARP protocol is an auxiliary protocol defined in the network layer —> ARP accepts an IP address from the IP protocol, maps the address to
the corresponding link-layer address and passes it to the data-link layer.
• Anytime a host or router needs the link-layer address for another host or router, then it sends an ARP request packet, which includes the link-
layer and IP addresses of the sender and the IP address of the receiver. The query is broadcast over the link using the link-layer broadcast
address —> all hosts/routers receive the packet but only the host with the IP address matching the IP in the request packet responds with a
response packet containing the IP address and Link-layer address of the recipient.
• Why use ARP if the sender can just broadcast itself? —> using ARP means that instead of sending an entire message, which may be multiple
datagrams long, over the entire network, we send one broadcast, connect to that host and then send the datagrams only to that host. —> way
more efficient method compared to just broadcasting over the whole network as most of the other hosts would have to receive, decapsulate the
frames, remove the datagram and pass it to their network layer, only to find that they need to discard the datagram anyway. (Very inefficient)
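A toy sketch of the exchange (all names and addresses below are invented; real ARP lives in the OS, this just mimics the request/reply logic):

hosts = {                                   # invented hosts on one link
    "N1": {"ip": "10.0.0.1", "mac": "A2:34:45:11:87:F1"},
    "N2": {"ip": "10.0.0.2", "mac": "4A:30:10:21:10:1A"},
    "N3": {"ip": "10.0.0.3", "mac": "46:20:1B:2E:08:EE"},
}

def arp_request(sender, target_ip):
    # The request (sender IP + MAC, target IP) is broadcast to FF:FF:FF:FF:FF:FF;
    # only the host whose IP matches answers with its link-layer address.
    for name, cfg in hosts.items():
        if name != sender and cfg["ip"] == target_ip:
            return {"ip": cfg["ip"], "mac": cfg["mac"]}   # the (unicast) ARP reply
    return None                                           # no host answered

print(arp_request("N1", "10.0.0.3"))   # {'ip': '10.0.0.3', 'mac': '46:20:1B:2E:08:EE'}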
Packet Format:
• ARP packets are formatted as in the diagram —>
• Hardware Type field defines the type of link-layer protocol (Ethernet = type 1)
• Protocol Type defines the network-layer protocol
Study Unit 5: Chapter 10
Error Detection & Correction
Introduction:
Types of Errors:
• Whenever bits flow from one point to another they are subject to interference which changes the shape of the signal. —> when ONE bit is changed
(from a 1 to 0 or 0 to 1) it is called a single-bit error and when a string of 2 or more bits is changed its called a burst error.
• A burst error is more likely to occur than a single-bit error because the duration of the
noise signal is usually more than one bit long. —> number of affected bits depends on
data rate and noise duration —> 1/100 of a second of noise affects 10 bits @ 1kbps
but affects 10000 bits @ 1Mbps
Redundancy:
• To be able to detect and correct errors, we send some extra bits with our data —> redundant bits are added by the sender and removed by the
receiver and allow the receiver to detect or correct corrupted bits.
Detection vs Correction:
• In Error Detection we see if any error has occurred —> yes or no answer and no further action ( don’t find the number of corrupted bits etc)
• In Error Correction we need to know the exact number of corrupted bits and (very importantly) their location in the message —> number of errors
and the size of the message are important factors —> 1 error in 8 bits = 8 possible error locations —> 2 errors in 8 bits = 28 possible locations
(combination of 8 bits taken 2 at a time) —> now think of 10 errors in 1000 bits
Coding:
• Redundancy is achieved through various coding schemes —> sender adds redundant bits through a process that creates a relationship between
redundant bits and actual data bits —> receiver checks the relationships between these two sets of bits to check for errors.
• Two main coding scheme categories —> block coding & convolution coding.
Block Coding:
• Divide our message into blocks of k bits ,called datawords. —> add r redundant bits to each block to make the length n = k + r.
• The resulting n-bit blocks are called codewords.
• Thus, we have a set of datawords (each of size k) and a set of codewords (each of size n). —> with k bits we can create 2^k datawords and with n
bits we can create 2^n codewords. —> since n>k , the number of possible codewords is more than the number of possible datawords.
• Block coding process is one-to-one —> the same dataword is always encoded as the same codeword - thus, we have (2^n - 2^k) unused codewords
(referred to as illegal or invalid) —> error detection relies on these "invalid" codewords as they are never sent and thus if one is received, we know
something went wrong.
Error Detection:
• The receiver can detect a change in the original codeword if: 1.) The receiver has (or can find) a list of valid codewords. AND 2.) The original
codeword has changed to an invalid one.
• Sender creates codewords out of datawords using a generator which applies the rules and
procedures of encoding —> codewords are sent to the receiver but may change along the
way —> if the received codeword is the same as one of the valid codewords then it is
accepted, if it is an invalid codeword then it is discarded —> NOTE: if the sent codeword is
changed along the way into another valid codeword, then the error goes undetected.
• An error detecting code can only detect errors it is designed to detect —> other errors may go undetected.
Hamming Distance:
• The hamming distance between two words of the same size is the number of differences between the corresponding bits. —> hamming distance
between two words x and y is shown as d(x,y).
• Hamming distance shows us how many of the bits are corrupted —> if the codeword 00000 is sent and the codeword 01101 is received, then
the hamming distance is d(00000,01101) = 3, as three bits were corrupted during transmission.
• To find the hamming distance between two codewords, we can use the XOR operation and count the 1’s —> d(10101,11110) = 3 because
(10101 + 11110) = 01011 (3 1’s) —> basically - if the bit changes then the result is a 1 and if it stays the same, the result is a 0
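The XOR-and-count idea above as a tiny Python helper:

def hamming_distance(x: str, y: str) -> int:
    # XOR corresponding bits of the two equal-length codewords and count the 1s.
    return sum(int(a) ^ int(b) for a, b in zip(x, y))

print(hamming_distance("00000", "01101"))   # 3
print(hamming_distance("10101", "11110"))   # 3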
Minimum Hamming Distance for Error Detection:
• Minimum hamming distance is the smallest distance between all possible pairs of codewords.
• To guarantee the detection of up to s errors in all cases, the minimum hamming distance in a block code must be dmin = s + 1.
• A geometrical representation —> assume a sent codeword x is at the centre of a circle with radius s. All received codewords that are created by 0
to s errors are points either inside or on the perimeter of the circle. All other valid codewords must be outside the circle —> that means that dmin
must be an integer greater than s —> thus we get dmin = s+1.
Linear Block Codes:
• Almost all block codes today are linear block codes
• Def: A linear block code is a code in which the exclusive OR (addition modulo-2) of two valid codewords creates another valid codeword.
Minimum Hamming Distance for Linear Block Codes:
• The minimum hamming distance for a linear block code is the number of 1’s in the nonzero valid codeword with the smallest number of 1’s.
• So if the codeword 00001 is the valid codeword with the least number of 1’s, then the minimum hamming distance is 1 —> if 00111 is the valid
codeword with the least number of 1’s then the minimum hamming distance is 3.
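A brute-force sketch of both definitions (the 4-codeword code below is just an illustrative linear code; for a linear code the minimum-weight shortcut gives the same answer as the pairwise check):

from itertools import combinations

def d_min_pairwise(codewords):
    # Smallest Hamming distance over all pairs (works for any block code).
    return min(sum(a != b for a, b in zip(x, y)) for x, y in combinations(codewords, 2))

def d_min_linear(codewords):
    # Linear-code shortcut: minimum number of 1's in a nonzero codeword.
    return min(cw.count("1") for cw in codewords if "1" in cw)

code = ["00000", "01011", "10101", "11110"]      # closed under XOR, so it is linear
print(d_min_pairwise(code), d_min_linear(code))  # 3 3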
Parity Check Code:
• A k-bit dataword is changed to an n-bit codeword where n = k + 1.
—> the extra bit is called the parity bit and is used to make the total
number of 1’s in the codeword even.
• Minimum hamming distance for this is dmin = 2, which means the code
is a single-bit error detecting code.
• The encoder and decoder for parity-check code —> (uses modular arithmetic)
The encoder uses a generator to take a copy of a 4-bit dataword (a3, a2, a1, a0)
and generate a parity bit r0, thus forming a 5-bit codeword
• The parity bit is usually found by adding the 4 bits of the dataword (modulo-2)
• r0 = a3 + a2 + a1 + a0 (modulo-2)
• If the number of 1’s is even, then the result is 0 & if the number of 1’s is odd, then the result is a 1
• The receiver does the same thing but over all five bits (parity bit included) —> the result is called the syndrome, and is 1 bit.
• s0 = b3 + b2 + b1 + b0 + q0 (modulo-2)
• Where q0 denotes the received parity bit.
• The syndrome is passed to the decision logic analyser —> if the syndrome is 0, there is no detectable error (and the received data portion of the
codeword is accepted as the dataword), but if it is 1 then there is an error (and the codeword is discarded and thus no dataword is created)
• A parity-check code can detect an odd number of errors
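A minimal sketch of the even-parity encoder and checker described above:

def parity_encode(dataword: str) -> str:
    # r0 = a3 + a2 + a1 + a0 (modulo-2): makes the total number of 1's even.
    return dataword + str(sum(int(b) for b in dataword) % 2)

def parity_ok(codeword: str) -> bool:
    # Syndrome s0 is the modulo-2 sum over all five bits; 0 means no detectable error.
    return sum(int(b) for b in codeword) % 2 == 0

cw = parity_encode("1011")     # '10111'
print(cw, parity_ok(cw))       # 10111 True
print(parity_ok("10101"))      # False -> single-bit error detected, codeword discarded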
Cyclic Codes:
• Special type of linear block code where if a codeword is cyclically shifted (rotated), the result is another codeword. —> E.g. if 1011000 is a
codeword, then cyclically shifting it left produces 0110001, which is also a codeword.
• If we call the bits in the first word a0 to a6, and the bits of the second word b0 to b6, we can shift the bits using:
b1 = a0, b2 = a1, b3 = a2, b4 = a3, b5 = a4, b6 = a5, b0 = a6
• In the rightmost equation —> the last bit of the first word is wrapped around and becomes the first bit of the second word.
Cyclic Redundancy Check:
• A subset of cyclic codes, called Cyclic Redundancy Check (CRC), used in networks such as LANs and WANs.
• In the encoder the dataword has k bits (4 in the diagram) and the codeword has n bits
(7 in the diagram) —> the size of the dataword is augmented by adding (n-k) 0’s to
the right-hand side of the word. —> the n-bit result is then fed into the generator.
• the generator uses a predefined divisor of size (n-k+1) bits (4 in the diagram). The
generator divides the augmented dataword by the divisor (modulo-2 division) —> the quotient
is then discarded but the remainder (r2r1r0) is appended to the dataword to create a
codeword.
• the decoder receives the codeword (possibly corrupted in transmission) and a copy
of all n bits is fed into the checker (replica of the generator) which then produces
a remainder (the syndrome) of (n-k) bits (3 in the diagram), which is fed into the
decision logic analyser.
• If the syndrome bits are all 0's then the 4 left-most bits of the codeword are
accepted as the dataword (interpreted as "no error") and if the syndrome bits are anything else, then the 4 bits are discarded (error detected)
Encoder:
• The encoder takes a dataword and augments it with (n-k) number of 0’s (3 in this case).
• It then divides the augmented dataword by the divisor.
• The modulo-2 binary division is much the same as long-division for decimals
• In each step, a copy of the divisor is XORed with the 4 bits of the dividend.
• The remainder (in the diagram is 3 bits with an extra 0 bit pulled down from the
augmented dataword to make it 4 bits) is used in the next step.
• Remember: if the left-most bit of the dividend is 0 then we must use an all-zero divisor.
• The remainder forms the check bits (r2, r1, and r0) which are appended to the dataword to
create the codeword.
Decoder:
• The codeword can change during transmission.
• The decoder does the same division process as the encoder and the remainder is the syndrome —> if the syndrome is all 0’s then no error
detected and the dataword is separated from the syndrome and accepted —> if it is anything else then the whole thing is discarded.
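A short Python sketch of the encoder and checker (binary modulo-2 division; the dataword 1001 and divisor 1011 match the running example, the function names are my own):

def xor_divide(dividend: str, divisor: str) -> str:
    # Modulo-2 long division: XOR the divisor in whenever the leading bit is 1.
    bits = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if bits[i] == "1":
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-(len(divisor) - 1):])            # the (n-k)-bit remainder

def crc_encode(dataword: str, divisor: str) -> str:
    augmented = dataword + "0" * (len(divisor) - 1)       # append n-k zeros
    return dataword + xor_divide(augmented, divisor)      # dataword + remainder

def crc_ok(codeword: str, divisor: str) -> bool:
    return set(xor_divide(codeword, divisor)) == {"0"}    # syndrome all zeros?

divisor = "1011"                         # x^3 + x + 1
cw = crc_encode("1001", divisor)
print(cw, crc_ok(cw, divisor))           # 1001110 True
print(crc_ok("1000110", divisor))        # False -> corruption detected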
Divisor:
• Divisor is chosen according to expectations of the code —> discuss standard divisors later.
Polynomials:
• A pattern of 0’s and 1’s can be represented as a polynomial with coefficients of 0 and 1.
• The power of each term shows the position of each bit.
Degree of a Polynomial:
• The degree of a polynomial is the highest power in the polynomial
e.g. if we have the highest power as x^6 then the degree is 6
• The number of bits in the pattern is one more than the degree (#bits = degree+1)
Adding and Subtracting Polynomials:
• Adding and subtracting is the same (modulo-2) —> we add polynomials and then delete pairs of identical terms
• E.g. (x^5 +x^4 +x^2) + (x^6 +x^4+x^2) = (x^5 +x^6) as the pairs of (x^4) and (x^2) get deleted.
• Note that if we add or subtract 3 polynomials and each has an identical term then we delete the pair and keep the third.
Multiplying and Dividing Terms:
• For multiplication we just add the exponents of the terms —> (x^3) X (x^5) = (x^8)
• For division we just subtract the exponent of the denominator from the exponent of the numerator —> (x^5)/(x^2) = (x^3)
Multiplying Entire Polynomials:
• Multiply term by term and then delete the identical pairs.
Dividing Entire Polynomials:
• Divide the first term of the dividend by the first term of the divisor to get the first term of the quotient.
• Multiply the term in the quotient by the divisor and subtract the result from the dividend.
• Repeat until the dividend degree is less than the divisor degree.
• Example later in the chapter!!
Shifting:
• Shifting to the left = adding 0’s after the rightmost bits ; shifting to the right = deleting some rightmost bits
• Shifting left is accomplished by multiplying each term in the polynomial by x^m, where m is the number of bits shifted.
• Shifting right is accomplished by dividing each term in the polynomial by x^m.
Cyclic Code Encoder Using Polynomials:
• The dataword is shown as (x^3 + 1) and the divisor is shown as (x^3 + x +1).
• To find the augmented dataword we left-shift the dataword by 3 bits (multiplying by x^3)
• The result is (x^6 +x^3) which we use in the division.
• Using polynomials in the division makes it much easier —> see diagram
• Stop division when the degree of the dividend is less than that of the divisor.
• In the polynomial representation, the divisor is normally referred to as the generator polynomial g(x).
Cyclic Code Analysis:
• We can find the capabilities of a cyclic code using polynomials.
• We define the following where f(x) is a polynomial function with binary coefficients:
• Dataword: d(x) Codeword: c(x) Generator: g(x) Syndrome: s(x) Error: e(x)
• If s(x) is not zero then one or more bits is corrupted ; if s(x) is zero, then no bit is corrupted OR the decoder failed to detect any errors.
• We want to find the criteria to be imposed on the generator to detect the type of error we want to detect.
• We can say: Received codeword = c(x) + e(x)
• The receiver divides the received codeword by g(x) to get the syndrome: (received codeword / g(x) ) = (c(x) / g(x)) + (e(x) / g(x)).
• (c(x)/g(x)) has a remainder of zero (definition of a codeword), thus the syndrome is the remainder of the second term on the right hand side.
• Thus, if we have a syndrome of zero, we either have zero error OR the error is divisible by g(x)
• Thus: In a cyclic code, those errors e(x) that are divisible by g(x) are not caught.
Single-Bit Error:
• What should the structure of the generator be to be able to catch all single-bit errors.
• If the generator has more than one term (usually the case) and the coefficient of x^0 is 1, then all single-bit errors can be caught.
Two Isolated Single-bit Errors:
• We can show this type of error as e(x) = x^j + x^i, with i and j showing the positions of the errors and the difference (j-i) defining the distance
between the errors.
• Thus, if a generator g(x) cannot divide (x^t + 1) (with t between 0 and n-1) then all isolated double errors can be detected.
Odd Numbers of Errors:
• A generator that contains a factor of (x+1) can detect all odd-numbered errors.
Burst Errors:
• A burst error is of the form e(x) = (x^j +…+ x^i) —> two or more terms, whereas for two isolated single-bit errors we have only two terms
• We can then factor out x^i to give x^i(x^(j-i) +…+ 1) —> thus if our generator can detect a single error, then it cannot divide x^i —> what we
should be concerned with are the generators that divide (x^(j-i) +…+ 1) —> the remainder of (x^(j-i) +…+ 1) / (x^r +…+ 1) must not be zero.
• We can then have three cases:
• 1.) If j-i < r, the remainder can never be zero —> then we write j-i = L-1, where L is the length of the error —> thus, L< r+1, which means that all
burst errors with length smaller than or equal to the number of check bits r will be detected.
• 2.) Rare cases where j - i = r or L = r + 1: the syndrome is zero and the error is undetected —> it can be proved that the probability of burst
errors with length r + 1 slipping by is 0.5^(r-1) —> for example, if our generator is x^14 + x^3 + 1, where r=14, then an error of length L=15 can
slip by with a probability of 0.5^13, or almost 1:10000
• 3.) Rarer cases where j - i > r or L > r + 1: the syndrome is 0 and the error is undetected, with a probability of 0.5^r —> for
example, if we have the same generator as in 2.) then with r=14, a burst error of length L>15 can slip by with a probability of 0.5^14, or a
1:16000 chance.
In summary:
• a good polynomial generator has the following characteristics:
• It should have at least two terms
• The coefficient of the term (x^0) should be 1
• It should not divide ((x^t) + 1) for t between 2 and (n-1)
• It should have the factor (x+1).
Standard Polynomials:
• Some standard polynomials are used by popular protocols for CRC generation:
Advantages of Cyclic Codes:
• easily detect single-bit errors, double errors, an odd number of errors and burst errors
• Easily implemented in software and hardware
• Especially fast when implemented in hardware
Other Cyclic Codes:
• Use abstract algebra involving Galois fields —> one of the more interesting ones is the Reed-Solomon Code
CHECKSUM:
• Checksum is an error-detecting technique that can be applied to a message of any length.
• At the source, the message is first divided into m-bit units, the generator then creates an extra m-bit unit called the checksum, which is sent with the
message.
• At the destination, the checker creates a new checksum from the combination of the message and sent checksum. If the new checksum is all zeros
then the message is accepted, otherwise the message is discarded.
• Note: in the real implementation, the checksum doesn’t have to be placed at the end of the message, it can be inserted in the middle.
Concept:
• Shown by the following example:
Suppose the message is a list of five 4-bit numbers that we want to send.
In addition to these numbers, we send the sum of these numbers —> for example,
if the numbers are (7,11,12,0,6) we send (7,11,12,0,6,36) —> the receiver then
adds the numbers and if the result is the same as the sum, then the message is
uncorrupted and accepted, but if the result differs from the checksum, then the
message is corrupted and thus discarded.
One's Complement Addition:
• In the example, all the numbers except the sum could be written as a 4-bit word —> one solution is to use one's complement arithmetic.
• We represent unsigned numbers between 0 and (2^m)-1 using only m bits —> if the number has more than m bits, then the extra left-most bits
need to be added to the m right-most bits (called wrapping)
• In the last example we had 36 as the sum, but 36 in binary is 100100, which isn't 4 bits long, thus we cut the 2 left-most bits off and add them
to the 4 right-most bits like this —> 100100 —> 10 0100 —> 10 + 0100 = 0110 —> decimal 6, thus we can write the new message/
checksum combo as (7,11,12,0,6,6), where the last 6 is the one's complement sum.
• The receiver does the same process to see if the checksum is correct —> i.e. if the checker also gets a sum of 6 after one's complement arithmetic
then the message is accepted.
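The wrapping step above as a tiny Python helper (m = 4 to match the example):

def ones_complement_sum(numbers, m=4):
    # Add the m-bit numbers, wrapping any extra left-most bits back into the sum.
    total = sum(numbers)
    while total >> m:
        total = (total >> m) + (total & ((1 << m) - 1))
    return total

print(ones_complement_sum([7, 11, 12, 0, 6]))   # 6   (36 -> 10 0100 -> 10 + 0100)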
Checksum:
• To make the receiver's job easier, we can send the complement of the sum
• In one's complement arithmetic, the complement of a number is found by
changing all the 1's in the number to 0's and the 0's to 1's —> basically
subtracting the number from ((2^m) - 1)
• We have two zeros —> a positive zero is when all m bits are 0's, and a
negative zero is when all m bits are 1's (equal to (2^m) - 1).
• If we add a number and its complement, we get a negative zero (all 1's) —> when the receiver adds all five numbers, including the checksum, it gets a
negative zero, which it can then complement to get a positive zero (all 0's)
Note: thinking of the set of m-bit numbers (take m = 4 for example) as a loop from positive zero (0000),
through all the other numbers, to negative zero (1111) helps to visualise how the system works.
The start of the loop is positive zero (0000), and any number between 0000 and 1111 is just negative zero (1111) minus the
complement of that number —> 0000, for example, can be shown as the difference of negative zero and the complement of
0000 —> 0000 = 1111 - 1111
Positive zero = negative zero - complemented positive zero
• the same principle holds for all numbers between 0000 and 1111.
Internet Checksum:
• The Internet has traditionally used a 16-bit checksum
• See diagram for procedure ——>
Algorithm:
• See flow diagram for the algorithm for the calculation of the checksum —>
• The first loop calculates the sum of the data units in two's complement
• The second loop wraps the extra bits created by the two's complement calculation
to simulate the calculation in one's complement —> needed because almost all computers
do calculations in two's complement.
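A compact sketch of the 16-bit calculation (the data words below are invented; the wrap-then-complement steps follow the procedure above):

def internet_checksum(words):
    # One's-complement sum of 16-bit words (wrap the carries), then complement it.
    total = sum(words)
    while total >> 16:
        total = (total >> 16) + (total & 0xFFFF)
    return ~total & 0xFFFF

data = [0x4500, 0x0073, 0x0000, 0x4000, 0x4011]   # illustrative 16-bit data words
chk = internet_checksum(data)
print(hex(chk))                                    # 0x3a7b
# Receiver side: summing data + checksum and complementing gives 0 if nothing changed.
print(internet_checksum(data + [chk]))             # 0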
Performance:
• Traditional checksum uses a small number of bits (16) to detect errors in a message of
any size
• Error-checking capabilities not as strong as CRC —> if some of the numbers are
incremented and some decremented by the same amount, then the sum and checksum stay the same and the error won't be detected —> likewise, if several values are
changed but the sum and checksum don't change, then the error can slip by undetected.
• Fletcher and Adler proposed weighted checksums to solve the first problem.
• General tendency in the Internet is to replace checksum with a CRC
Other Approaches to the Checksum:
Fletcher Checksum:
• Weight each data item according to its position
• Two algorithms proposed —> 8-bit Fletcher calculates on 8-bit data items and
creates a 16-bit checksum, and 16-bit Fletcher calculates on 16-bit data items and
creates a 32-bit checksum.
• 8-bit Fletcher is calculated over data octets (bytes) with a 16-bit checksum —>
calculation done modulo 256, which means the intermediate results are divided by
256 and the remainder is kept. —> algorithm uses two accumulators (L&R), where
the first adds the data items together and the second adds weight to the calculation.
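A sketch of the two-accumulator idea as described above (modulo 256; note that the widely used Fletcher-16 variant works modulo 255 instead):

def fletcher8(data: bytes) -> int:
    # R accumulates the plain sum of the bytes; L accumulates the running sums,
    # which effectively weights each byte by its position.
    r = l = 0
    for byte in data:
        r = (r + byte) % 256
        l = (l + r) % 256
    return (l << 8) | r          # 16-bit checksum from 8-bit data items

print(hex(fletcher8(b"abcde")))  # 0xc3ef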
Adler Checksum:
• 32-bit checksum
• 3 differences to the 16-bit Fletcher:
• Calculation is done on single bytes instead of two bytes at a time
• The modulus is a prime number —> 65 521 instead of 65 536
Prime modulo has better detecting capability for certain data sets.
• L is initialised to 1 instead of 0
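A minimal Adler-32 sketch following the three differences above, checked against Python's zlib implementation:

import zlib

def adler32(data: bytes) -> int:
    MOD = 65521                  # prime modulus
    a, b = 1, 0                  # first accumulator initialised to 1
    for byte in data:            # processed one byte at a time
        a = (a + byte) % MOD
        b = (b + a) % MOD
    return (b << 16) | a

msg = b"Wikipedia"
print(hex(adler32(msg)), hex(zlib.adler32(msg)))   # both 0x11e60398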
Forward Error Correction
• Retransmission of corrupted or lost packets is not useful for real-time multimedia transmission (unacceptable delay) —> thus we need to correct
the error or reproduce the packet immediately. —> called Forward Error Correction (FEC) techniques.
Using Hamming Distance:
• We already know that to detect s errors, we need a minimum hamming distance
of dmin = s + 1, but it can be shown that to correct t errors, we need to have
dmin = 2t + 1 —> if we want to correct 10 bits in a packet, we need a minimum
hamming distance of 21.
Using XOR:
• We can also use the property of the exclusive OR operation as shown —>
• If we apply the XOR operation on N data items, we can recreate any of the data items by XORing all of the items, replacing the one to be created
by the result of the previous operation.
• This means that we can divide a packet into N chunks, create the XOR of all the chunks and then send N + 1 chunks —> if any chunk is lost or
corrupted, then it can be recreated at the receiver's side.
• If N=4, then we have to send 25% more data to be able to correct the data, if only one out of the four chunks is lost.
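A small sketch of the N + 1 chunk idea (chunk contents are invented; XOR-ing the surviving chunks with the redundant chunk rebuilds the missing one):

from functools import reduce

def xor_chunks(chunks):
    # Bytewise XOR of equal-length chunks.
    return reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), chunks)

chunks = [b"ABCD", b"EFGH", b"IJKL", b"MNOP"]           # N = 4 data chunks
parity = xor_chunks(chunks)                             # the extra, redundant chunk

survivors = [chunks[0], chunks[1], chunks[3], parity]   # suppose chunk b"IJKL" was lost
print(xor_chunks(survivors))                            # b'IJKL' -> recreated at the receiver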
Chunk Interleaving:
• We allow some small chunks to be missing at the receiver
• We obviously can’t afford to let all the chunks from one packet be missing
but one chunk in each packet is fine.
• We divide each packet into 5 chunks (in reality the number of chunks is much
higher) —> we can then create data chunk by chunk (horizontally) but
combine chunks into packets vertically.
• In this case, each packet sent carries a chunk from several original packets, thus if we
lose the packet, only one small chunk from each packet is lost, which is normally acceptable in multimedia comms.
Combining Hamming Distance & Interleaving:
• We can combine then by creating n-bit packets that can correct t-bit errors, then we interleave m rows and send the bits column by column. —>
thus, we have automatic burst error correction up to m x t-bit errors.
Compounding High- & Low-Resolution Packets:
• another solution is to create a duplicate of each packet with a low-resolution
redundancy and combine the redundant version with the next packet.
• For example we can create four low-resolution packets out of five
high-resolution packets and send them as in the figure —>
• if a packet is lost then we can use the low-res version from the next packet.
• Lost packets cannot be recovered but we have the low-res version to use, as long
as the lost packet is not the last one.
• The audio and video reproduction is not of the same quality but it goes unnoticed
most of the time.
Study Unit 5: Chapter 11
Data Link Control (DLC)
DLC Services:
• Data Link Control (DLC) deals with procedures for communication between two adjacent nodes (node-to-node communications) no matter
whether the link is dedicated or broadcast.
• Data link control functions include framing and flow & error control.
Framing:
• Framing in the data-link layer separates a message from one source to a destination by adding a sender address and a destination address —>
the source address is for the recipient to send an acknowledgement and the destination address is where the frame needs to be delivered.
Frame Size:
• Frames can be fixed or variable size —> fixed size framing: no need for defining the boundaries of the frames; the size itself can be used as the
delimiter. & variable-sized framing: we need to define the end of one frame and the beginning of the next, using two approaches character-
orientated framing & bit-orientated framing.
Character-Orientated Framing:
• In Character-orientated (or byte-orientated) framing, data to be carried
are 8-bit characters from a coding system such as ASCII.
• The header (carries the source and destination addresses + other control information) & the trailer ( carries error detection redundant bits) are
also multiples of 8 bits.
• To separate one frame from the next, an 8-bit (1-byte) flag is added at the
beginning and end of the frame —> flag is composed of protocol-dependent
special characters.
• In Byte stuffing (or character stuffing), a special byte (called the
escape character (ESC) ) is added to the data section of the frame when
there is a character with the same pattern as the flag. —> the escape byte
has a predefined bit pattern
• Whenever the receiver encounters the ESC character, it removes it from the data section and treats the next character as data, not as a delimiting
flag.
• Note: Byte stuffing is the process of adding one extra byte whenever there is a flag or escape character in the text.
• What if there are one or more ESC characters followed by a byte with the same pattern as the flag?? —> to solve this problem, the ESC
characters that are part of the text must be marked with another ESC character —> basically if the escape character is part of the text then
another one is added to show that the second escape character is part of the text (see the sketch after this list).
• World is tending towards bit-orientated protocols.
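A sketch of the byte stuffing/unstuffing described above (the flag and ESC values below are arbitrary placeholders, not the bytes of any particular protocol):

FLAG, ESC = 0x7E, 0x7D           # placeholder flag and escape byte values

def byte_stuff(data: bytes) -> bytes:
    out = bytearray([FLAG])                  # opening flag
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)                  # stuff an ESC before flag- or ESC-like bytes
        out.append(b)
    out.append(FLAG)                         # closing flag
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    out, escaped = bytearray(), False
    for b in frame[1:-1]:                    # drop the two delimiting flags
        if not escaped and b == ESC:
            escaped = True                   # drop the ESC, keep the next byte as data
            continue
        out.append(b)
        escaped = False
    return bytes(out)

msg = bytes([0x41, 0x7E, 0x42, 0x7D, 0x43])  # data containing flag- and ESC-like bytes
print(byte_unstuff(byte_stuff(msg)) == msg)  # True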
Bit-Orientated Framing:
• In bit-orientating framing, the data section of a frame is a sequence of bits
to be interpreted by the upper layer as text, audio, graphics etc
• In addition to headers (and possible trailers), we still need a delimiter to
separate one frame from another.
• Most protocols use a special 8-bit pattern flag, 01111110, as the delimiter to define
the start and end of the frame.
• Unlike in character-orientated framing, instead of stuffing an entire byte, we just stuff
a single bit to prevent a pattern in the data from looking like a flag. —> called bit stuffing.
• In bit stuffing, if a 0 and five consecutive 1 bits are encountered, an extra 0 is added
(regardless of the value of the next bit) —> extra 0 is eventually removed by the receiver
• E.g. —> if the flaglike pattern is 01111110 then it becomes 011111010 (stuffed) and is thus,
not mistaken as a flag by the receiver —> the actual flag 01111110 is not stuffed and thus, is recognised by the receiver as a flag.
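A sketch of the stuffing/unstuffing rule on bit strings (strings of '0'/'1' are used here purely for readability):

def bit_stuff(bits: str) -> str:
    # After every run of five consecutive 1s, insert a 0.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, drop_next = [], 0, False
    for b in bits:
        if drop_next:                # this 0 was stuffed by the sender -> remove it
            drop_next, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            drop_next, run = True, 0
    return "".join(out)

data = "01111110"                            # a flag-like pattern inside the data
print(bit_stuff(data))                       # 011111010 -> no longer looks like a flag
print(bit_unstuff(bit_stuff(data)) == data)  # True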
Flow & Error Control:
Flow Control:
• Basically finding the balance between production and consumption rates.
• If one entity produces items faster than the other entity consumes them, then the consumer can get overwhelmed and may need to discard some
items. —> a.k.a. If production > consumption = overwhelming/discarding items.
• If one entity produces items slower than they are consumed then the consumer may need to wait for items to arrive, leading to delays and
inefficiency —> if production < consumption = delays / waiting.
• In comms at the data-link layer, we need to consider 4 entities —> network
and data-link layers at the sending node and also the network and data-link layer
at the receiving node.
• As the sending node “pushes” frames to the receiving node, feedback regarding the rate at which the receiving node can handle the frames is fed
back to the sender.
Buffers: implementation of flow control is usually using two buffers, one at the sending data-link layer and one at the receiving data-link layer —> a
buffer is a set of memory locations at the sender and receiver sides to hold packets to be sent or processed.
- The flow control communication can occur by sending signals from the consumer to the producer —> like “hey bru, my buffer is full so
maybe don’t push more frames, thanks.”
Error Control:
• Error control at the data-link layer is implemented using one of two methods:
• In the first method, if the frame is corrupted, it is silently discarded and if it isn’t corrupted then the packet is delivered to the network layer. —>
mostly used in wired LANs such as Ethernet.
• In the second method, if the frame is corrupted, it is silently discarded and if it is not corrupted then an acknowledgement is sent (for both flow
and error control)
Combination of Flow and Error Control:
• Both can be combined —> acknowledgement sent for flow control can also be used for error control to tell the sender the packet has arrived
uncorrupted. —> lack of an acknowledgement shows that something went wrong with the sent frame.
• A frame that carries an acknowledgement is called an ACK, to distinguish it from a data frame.
Connectionless and Connection-Oriented:
Connectionless Protocol:
• In a Connectionless protocol, frames are sent from one node to the next without any relationship between frames —> independent frames.
• Frames are not numbered so there’s no sense of ordering
• Most data-link protocols for LANs are Connectionless.
Connection-Oriented Protocol:
• In a connection-oriented protocol, a logical connection should be established first between the two nodes (setup phase).
• After all frames (that are related to one another) have been transmitted (transfer phase), the logical connection is terminated (tear down phase).
• Frames are numbered and sent in order —> if they aren't received in order then the receiver needs to wait until all the frames belonging to the
same set have been received and then deliver them to the network layer in order.
• Used in some point-to-point protocols —> some wireless LANs and some WANs.
Data-Link Layer Protocols
• Four protocols have been defined for the data-link layer to deal with flow and error control —> Simple, Stop-and-Wait, Go-Back-N, and
Selective-Repeat. —> last two are not used anymore
• The behaviour of the data-link layer can be best described as a Finite State Machine (FSM), where there are only a set number of states that the
system can be in, until an event occurs.
• Each event is associated with two reactions: defining the list (possibly empty) of actions to be performed and determining the next state (which
can also be the same state as the original state).
• An initial state must be defined (default state when the machine turns on)
• The FSM starts off in state 1 and changes state when an event occurs.
• For our diagram, the FSM starts off in state 1 when turned on, then if event 1
occurs, then the machine performs actions 1&2 before moving to state 2. If event 2
occurs, while the machine is in state 2, then the machine performs action 3 and returns to state 2. If event 3 occurs, the machine performs no
action, but returns to state 1 (initial or default state)
Simple Protocol:
• A simple protocol has neither flow nor error control —> we assume the
receiver can immediately handle any frame we send. No regard for the receiver.
• Data-link layer at the sender gets the packet from its network layer, makes it into a frame and sends it to the receiver, where the frame is opened
and the packet is extracted and passed to the destination network layer. —> data-link layers of the source and destination provide transmission
services to their network layers.
FSMs:
• Sender shouldn’t send a frame unless its network layer has a message to send.
• Receiver can’t pass a packet to its network layer if a frame hasn’t arrived.
• THUS, we have two FSMs, —> at the sender side — the machine waits
until the network layer has something to send, then it wraps it in a frame and
sends it. ; At the receiver side — the machine waits until there is a frame to process, then it unwraps the packet and passes it to the network layer.
Stop-and-Wait Protocol:
• Uses both flow and error control
• Sender sends one frame at a time and waits for an ACK before sending the next one.
• For error detection we add a CRC —> if the CRC is incorrect when checked at the receiver, the frame is discarded and no ACK is sent. The
absence of an ACK lets the sender know something went wrong and to resend the frame.
• Every time the sender sends a frame, a timer is started —> if an ACK arrives before the timer expires, the sender stops the timer and sends the next frame; if the timer expires without an ACK arriving, the sender knows something went wrong and resends the previous frame. —> this requires the sender to keep a copy of each frame until its ACK arrives (the copy is discarded once the ACK arrives, or resent if it does not).
FSMs:
Sender States:
• Sender is initially in the ready state.
• Ready State: in this state, the sender is simply waiting for a packet from the network layer - when a packet arrives from the network layer, the sender creates a frame, saves a copy of it, starts the only timer and sends the frame - the sender then moves to the blocking state.
• Blocking State: in this state, three events can occur:
a. If a time-out occurs, the sender resends the saved copy of the frame and restarts the timer.
b. If a corrupted ACK arrives, it is discarded.
c. If an error-free ACK arrives, the sender stops the timer and discards the saved copy of the frame, and then moves to the ready state.

Receiver:
• The receiver is always in the ready state - only two events can occur:
a. If an error-free frame arrives, the message in the frame is delivered to the network layer and an ACK is sent.
b. If a corrupted frame arrives, the frame is discarded.
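The sender-side FSM above can be sketched as a loop: send the saved copy, start the timer, and stay in the blocking state until an error-free ACK arrives or the timer expires. The simulated ACK channel, loss rate and timeout below are illustrative assumptions, not part of the protocol definition.

```python
import queue
import random
import time

# Sketch of the Stop-and-Wait sender FSM (ready + blocking states).
# The ACK channel, loss rate and timeout are simulated stand-ins for a real link.

TIMEOUT = 0.2                 # the "only timer"
ack_channel = queue.Queue()   # ACK frames travelling back to the sender

def transmit(frame):
    # Simulate an unreliable link: the frame or its ACK is lost ~30% of the time.
    if random.random() > 0.3:
        ack_channel.put({"type": "ACK", "corrupted": False})

def stop_and_wait_send(packet):
    frame = {"payload": packet}                  # ready state: build the frame, save a copy
    while True:
        transmit(frame)                          # (re)send the saved copy
        deadline = time.time() + TIMEOUT         # start/restart the timer
        while True:                              # blocking state
            remaining = deadline - time.time()
            if remaining <= 0:
                break                            # time-out: resend the saved copy
            try:
                ack = ack_channel.get(timeout=remaining)
            except queue.Empty:
                break                            # time-out: resend the saved copy
            if ack["corrupted"]:
                continue                         # corrupted ACK: discard it, keep waiting
            return                               # error-free ACK: discard the copy, back to ready

stop_and_wait_send("packet 1")
print("ACK received, ready for the next packet")
```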
Sequencing and Acknowledgement Numbers:
• Duplicate and corrupted packets need to be avoided, so we use numbers to specify the order of the data frames - to do this, we add sequence numbers to the data frames and acknowledgement numbers to the ACK frames.
• The numbering alternates: 0, 1, 0, 1, 0, … for the sequence numbers and 1, 0, 1, 0, … for the acknowledgement numbers.
• Thus, an acknowledgement number always defines the sequence number of the next frame the receiver expects (see the short illustration below).
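A short illustrative loop showing the modulo-2 numbering described above (the number of frames is arbitrary):

```python
# Sequence numbers alternate 0, 1, 0, 1, ... and each ACK carries
# the sequence number of the next frame the receiver expects.

seq_no = 0
for frame in range(4):
    ack_no = (seq_no + 1) % 2
    print(f"data frame {frame}: seq = {seq_no}  ->  ACK {ack_no}")
    seq_no = (seq_no + 1) % 2    # the next frame takes the other number
```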

Piggybacking:
• A method of two-way communication where, to make communication more efficient, the data sent in one of the directions is “piggybacked” with the
acknowledgement in the other direction.
• In other words —> Node A sends a frame to Node B, which then sends an acknowledgement + additional data back to Node A, and so on…
• Piggybacking makes communication in the data-link layer complicated so it is not common practice.
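A rough sketch of the idea (the field names are illustrative, not from any standard): instead of sending a separate ACK frame, Node B attaches the acknowledgement number to the data frame it was going to send anyway.

```python
# Piggybacking: the acknowledgement rides inside an ordinary data frame
# travelling in the opposite direction, instead of in a separate ACK frame.

def make_frame(seq_no, ack_no, payload):
    return {"seq": seq_no, "ack": ack_no, "payload": payload}

# Node A has just sent data frame 0 to Node B.  Node B has its own data to
# send back, so it piggybacks ack 1 (the next frame it expects from A)
# onto that data frame instead of sending a separate ACK frame.
b_to_a = make_frame(seq_no=0, ack_no=1, payload=b"response from B")
print(b_to_a)
```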

HDLC (High-Level Data Link Control):


• HDLC is a bit-oriented protocol for communication over point-to-point and multipoint links that uses the stop-and-wait protocol.
• This protocol is more theoretical, but it forms the basis for other protocols in practice, such as PPP, the Ethernet protocol, and protocols used in wireless LANs.
Configurations and Transfer Modes:
• HDLC provides two common transfer modes:
- Normal Response Mode (NRM) - the station configuration is unbalanced: there is one primary station which can send commands and multiple secondary stations which can only respond - NRM is used for both point-to-point and multipoint links.
- Asynchronous Balanced Mode (ABM) - the configuration is balanced and the link is point-to-point; each station can function as a primary or a secondary (acting as peers) - this is the common mode today.
Framing:
HDLC defines three types of frames:
• Information frames (I-frames), Supervisory Frames (S-frames) and Unnumbered Frames (U-frames), each of which serve as an envelope for the
transport of a different type of message.
• I-Frames are used to transport data-link user data and control information relating to user data.
• S-Frames are used only to transport control information.
• U-frames are reserved for system management and the information carried by them is used to manage the link itself.
Each frame in HDLC contains up to 6 fields —> beginning flag, address, control, information, frame check sequence (FCS), and ending flag fields.
• Flag Field: contains synchronisation pattern 01111110, which identifies
both the beginning and end of a frame.
• Address Field: contains the secondary station's address - if the primary station is sending the frame, the address field is a TO address identifying the secondary station; if the secondary station is sending the frame to the primary station, the address is a FROM address showing the primary station where the frame came from - the address field can be one or multiple bytes.
• Control Field: The control field is one or two bytes used for flow and error control.
• Information Field: contains the user's data from the network layer or management information - its length varies from network to network.
• FCS Field: the Frame Check Sequence is HDLC's error-detection field - it can contain either a 2- or 4-byte CRC (an illustrative field layout follows below).
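As a rough illustration of the field order only (the checksum below is a placeholder, not HDLC's real CRC, and bit stuffing is ignored), a frame could be laid out like this:

```python
# Illustrative layout of the six HDLC fields:
#   flag | address | control | information | FCS | flag
# The "FCS" here is a dummy checksum, not the real 2- or 4-byte CRC,
# and bit stuffing of the payload is not shown.

FLAG = 0x7E   # the 01111110 synchronisation pattern

def build_frame(address: int, control: int, information: bytes) -> bytes:
    body = bytes([address, control]) + information
    fcs = sum(body) & 0xFFFF                       # placeholder error-detection value
    return bytes([FLAG]) + body + fcs.to_bytes(2, "big") + bytes([FLAG])

frame = build_frame(address=0x03, control=0x10, information=b"user data")
print(frame.hex(" "))
```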
Control Field for I-Frames:
• Page 307 Middle
