
3.1 Digital Representation of Information
3.2 Why Digital Communications?
3.3 Digital Representation of Analog Signals
3.4 Characterization of Communication Channels
3.5 Fundamental Limits in Digital Transmission
3.6 Line Coding
3.7 Modems and Digital Modulation
3.8 Properties of Media and Digital Transmission Systems
3.9 Error Detection and Correction

Digital Networks

**Digital transmission enables networks to support many services**, such as TV, e-mail, and telephone.

Questions of Interest

**How long will it take to transmit a message?** How many bits are in the message (text, image)? How fast does the network/system transfer information?

**Can a network/system handle a voice (video) call?** How many bits/second does voice/video require? At what quality?

**How long will it take to transmit a message without errors?** How are errors introduced? How are errors detected and corrected?

What transmission speed is possible over radio, copper cables, fiber, infrared, …?

Chapter 3 Digital Transmission Fundamentals: 3.1 Digital Representation of Information

Bits, Numbers, Information

Bit: a number with value 0 or 1. n bits: a digital representation for the numbers 0, 1, …, 2^n − 1. Byte or octet: n = 8. Computer word: n = 16, 32, or 64. Whether it is an n-bit field in a header, an n-bit representation of a voice sample, or a message consisting of n bits, n bits allow enumeration of 2^n possibilities. The number of bits required to represent a message is a measure of its information content: more bits, more content.

Block vs. Stream Information

Block: information that occurs in a single block, e.g. a text message, a data file, a JPEG image, an MPEG file. Size = bits/block or bytes/block; 1 Kbyte = 2^10 bytes, 1 Mbyte = 2^20 bytes, 1 Gbyte = 2^30 bytes.

Stream: information that is produced and transmitted continuously, e.g. real-time voice or streaming video. Bit rate = bits/second; 1 kbps = 10^3 bps, 1 Mbps = 10^6 bps, 1 Gbps = 10^9 bps.

Transmission Delay

L = number of bits in the message; R = speed of the digital transmission system in bps; L/R = time to transmit the information; d = distance in meters; c = speed of light (3×10^8 m/s in vacuum); t_prop = time for the signal to propagate across the medium.

Delay = t_prop + L/R = d/c + L/R seconds

What can be done to reduce the delay? Use data compression to reduce L, use a higher-speed modem to increase R, or place the server closer to reduce d.
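
As a sanity check on the delay formula, here is a short Python sketch (the message size, link rate, and distance are made-up example values, not from the slides):

```python
def total_delay(L_bits, R_bps, d_m, c=3.0e8):
    """Delay = propagation time (d/c) + transmission time (L/R), in seconds.
    c defaults to the slide's 3x10^8 m/s; real media propagate somewhat slower."""
    return d_m / c + L_bits / R_bps

baseline = total_delay(8e6, 1e7, 3e6)   # 1 MB message, 10 Mbps link, 3000 km
print(baseline)                         # 0.8 s transmission + 0.01 s propagation
print(total_delay(4e6, 1e7, 3e6))       # compression halves L
print(total_delay(8e6, 1e8, 3e6))       # a faster link increases R
print(total_delay(8e6, 1e7, 3e5))       # a closer server reduces d
```

Note that for a large message on a slow link the L/R term dominates, so compression helps most; for short messages over long distances, the d/c term dominates instead.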

Compression

Information is usually not represented efficiently. Data compression algorithms represent the information using fewer bits. Noiseless compression recovers the original information exactly, e.g. zip, compress, GIF, fax. Noisy (lossy) compression recovers the information approximately, e.g. JPEG; the tradeoff is number of bits vs. quality. Compression ratio = #bits (original file) / #bits (compressed file).

Color Image

A color image of H × W pixels consists of a red component image, a green component image, and a blue component image, each H × W. Total bits = 3 × H × W pixels × B bits/pixel = 3HWB bits. Example: an 8 × 10 inch picture at 400 × 400 pixels per inch² gives 400 × 400 × 8 × 10 = 12.8 million pixels; at 8 bits/pixel/color, 12.8 megapixels × 3 bytes/pixel = 38.4 megabytes.
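
The slide's image-size arithmetic can be reproduced directly (a sketch; the function name is ours):

```python
def image_bits(width_px, height_px, bits_per_pixel_per_color, colors=3):
    """Total bits = colors x W x H pixels x B bits/pixel (3HWB for RGB)."""
    return colors * width_px * height_px * bits_per_pixel_per_color

# 8 x 10 inch picture scanned at 400 x 400 pixels per square inch
W, H = 400 * 8, 400 * 10            # 3200 x 4000 = 12.8 million pixels
bits = image_bits(W, H, 8)          # 8 bits per pixel per color
print(bits / 8 / 1e6)               # 38.4 (megabytes)
```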

Examples of Block Information

| Type | Method | Format | Original | Compressed (Ratio) |
|---|---|---|---|---|
| Text | Zip, compress | ASCII | Kbytes-Mbytes | (2-6) |
| Fax | CCITT Group 3 | A4 page, 200×100 pixels/in² | 256 Kbytes | 5-54 Kbytes (5-50) |
| Color image | JPEG | 8×10 in² photo, 400² pixels/in² | 38.4 Mbytes | 1-8 Mbytes (5-30) |

Stream Information

A real-time voice signal must be digitized and transmitted as it is produced. The analog signal level varies continuously in time: the speech signal level varies with time.

Digitization of Analog Signal

Sample the analog signal in time and amplitude, and find the closest approximation among a set of quantization levels. With 3 bits/sample there are 8 levels: ±Δ/2, ±3Δ/2, ±5Δ/2, ±7Δ/2.

Rs = bit rate = # bits/sample × # samples/second

Bit Rate of Digitized Signal

Bandwidth Ws Hertz measures how fast the signal changes. Higher bandwidth means more frequent samples; the minimum sampling rate is 2 × Ws. Representation accuracy is the range of approximation error: higher accuracy means smaller spacing between approximation values, and hence more bits per sample.

Example: Voice & Audio

Telephone voice: Ws = 4 kHz → 8000 samples/sec; 8 bits/sample; Rs = 8 × 8000 = 64 kbps. Cellular phones use more powerful compression algorithms: 8-12 kbps.

CD audio: Ws = 22 kHz → 44,000 samples/sec; 16 bits/sample; Rs = 16 × 44,000 = 704 kbps per audio channel. MP3 uses more powerful compression algorithms: about 50 kbps per audio channel.
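
A quick check of the Rs formula (the 44 kHz CD figure follows the slide; actual CD audio samples at 44.1 kHz):

```python
def digital_bit_rate(samples_per_sec, bits_per_sample):
    """Rs = # bits/sample x # samples/second."""
    return samples_per_sec * bits_per_sample

print(digital_bit_rate(8000, 8))      # telephone voice: 64000 bps
print(digital_bit_rate(44_000, 16))   # CD audio, per channel: 704000 bps
```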

Video Signal

A video signal is a sequence of picture frames; each picture is digitized and compressed. Frame repetition rate: 10-30-60 frames/second depending on quality. Frame resolution: small frames for videoconferencing, standard frames for conventional broadcast TV, HDTV frames.

Rate = M bits/pixel × (W × H) pixels/frame × F frames/second

Video Frames

QCIF videoconferencing: 176 × 144 (144 lines, 176 pixels per line) at 30 frames/sec = 760,000 pixels/sec. Broadcast TV: 720 × 480 at 30 frames/sec = 10.4 × 10^6 pixels/sec. HDTV: 1920 × 1080 at 30 frames/sec ≈ 62 × 10^6 pixels/sec.

Digital Video Signals

| Type | Method | Format | Original | Compressed |
|---|---|---|---|---|
| Video conference | H.261 | 176×144 or 352×288 pix @ 10-30 fr/sec | 2-36 Mbps | 64-1544 kbps |
| Full motion | MPEG2 | 720×480 pix @ 30 fr/sec | 249 Mbps | 2-6 Mbps |
| HDTV | MPEG2 | 1920×1080 pix @ 30 fr/sec | 1.6 Gbps | 19-38 Mbps |

Transmission of Stream Information

Constant bit rate: signals such as digitized telephone voice produce a steady stream, e.g. 64 kbps. The network must support steady transfer of the signal, e.g. a 64 kbps circuit.

Variable bit rate: signals such as digitized video produce a stream that varies in bit rate according to the motion and detail in a scene. The network must support a variable transfer rate, e.g. packet switching or rate-smoothing with a constant-bit-rate circuit.

Stream Service Quality Issues

Network transmission impairments: Delay: is information delivered in a timely fashion? Jitter: is information delivered in a sufficiently smooth fashion? Loss: is information delivered without loss, and if loss occurs, is the delivered signal quality acceptable? Applications and application-layer protocols are developed to deal with these impairments.

Chapter 3 Digital Transmission Fundamentals: 3.2 Why Digital Communications?

A Transmission System

Transmitter → communication channel → receiver. The transmitter converts information into a signal suitable for transmission and injects energy into the communications medium or channel: a telephone converts voice into electric current; a modem converts bits into tones. The receiver receives energy from the medium and converts the received signal into a form suitable for delivery to the user: a telephone converts current into voice; a modem converts tones into bits.

Transmission Impairments

The received signal differs from the transmitted signal because of channel impairments. Communication channels include pairs of copper wires, coaxial cable, radio, light in optical fiber, light in air, and infrared. Transmission impairments include signal attenuation, signal distortion, spurious noise, and interference from other signals.

Analog Long-Distance Communications

Source → transmission segment → repeater → … → repeater → destination. Each repeater attempts to restore the analog signal to its original form, but restoration is imperfect: distortion is not completely eliminated, and noise and interference are only partially removed. Signal quality decreases with the number of repeaters, so communications is distance-limited. This approach is still used in analog cable TV systems. Analogy: copying a song using a cassette recorder.

Analog vs. Digital Transmission

Analog transmission: all details of the waveform must be reproduced accurately, so distortion and attenuation accumulate. Digital transmission: only discrete levels need to be reproduced, so a simple receiver suffices: was the original pulse positive or negative?

Digital Long-Distance Communications

Source → transmission segment → regenerator → … → regenerator → destination. A regenerator recovers the original data sequence and retransmits it on the next segment. It can be designed so the error probability is very small; then each regeneration is like the first time, and communications is possible over very long distances. Analogy: copying an MP3 file. Digital systems vs. analog systems: less power, longer distances, lower system cost, plus monitoring, multiplexing, coding, encryption, protocols…

Digital Binary Signal

A binary signal sends one pulse of amplitude +A or −A every T seconds, e.g. the bit sequence 0 1 0 1 1 0 1 over the intervals T, 2T, …, 7T. Bit rate = 1 bit / T seconds. For a given communications medium: How do we increase transmission speed? How do we achieve reliable communications? Are there limits to speed and reliability?

Pulse Transmission Rate

Objective: maximize the pulse rate through a channel, that is, make T as small as possible. If the input is a narrow pulse, the typical output is a spread-out pulse with ringing. Question: how frequently can these pulses be transmitted without interfering with each other? Answer: 2 × Wc pulses/second, where Wc is the bandwidth of the channel.

Bandwidth of a Channel

If the input X(t) = a cos(2πft) is a sinusoid of frequency f, the output Y(t) = A(f) a cos(2πft) is a sinusoid of the same frequency, attenuated by an amount A(f) that depends on f. If A(f) ≈ 1, the input signal passes readily; if A(f) ≈ 0, the input signal is blocked. The bandwidth Wc is the range of frequencies passed by the channel; an ideal low-pass channel passes all frequencies from 0 to Wc.

Multilevel Pulse Transmission

Assume a channel of bandwidth Wc, and transmit 2Wc pulses/sec (without interference). If pulse amplitudes are either −A or +A, then each pulse conveys 1 bit, so bit rate = 1 bit/pulse × 2Wc pulses/sec = 2Wc bps. If amplitudes are from {−A, −A/3, +A/3, +A}, then the bit rate is 2 × 2Wc bps. By going to M = 2^m amplitude levels, we achieve bit rate = m bits/pulse × 2Wc pulses/sec = 2mWc bps. In the absence of noise, the bit rate can be increased without limit by increasing m.

Noise & Reliable Communications All physical systems have noise Electrons always vibrate at non-zero temperature Motion of electrons induces noise Presence of noise limits accuracy of measurement of received signal amplitude Errors occur if signal separation is comparable to noise level Bit Error Rate (BER) increases with decreasing signal-to-noise ratio Noise places a limit on how many amplitude levels can be used in pulse transmission .

Signal-to-Noise Ratio

With high SNR the signal dominates the noise, and the signal-plus-noise waveform is received without errors; with low SNR the noise swamps the signal, and errors occur.

SNR = average signal power / average noise power
SNR (dB) = 10 log10 SNR

Shannon Channel Capacity

C = Wc log2 (1 + SNR) bps

Arbitrarily reliable communications is possible if the transmission rate R < C; if R > C, then arbitrarily reliable communications is not possible. "Arbitrarily reliable" means the BER can be made arbitrarily small through sufficiently complex coding. C can be used as a measure of how close a system design is to the best achievable performance. Bandwidth Wc and SNR determine C.

Example

Find the Shannon channel capacity for a telephone channel with Wc = 3400 Hz and SNR = 10000:

C = 3400 log2 (1 + 10000) = 3400 log10(10001) / log10 2 ≈ 45,200 bps

Note that SNR = 10000 corresponds to SNR (dB) = 10 log10(10001) ≈ 40 dB.
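
The same capacity calculation in Python (a sketch; `shannon_capacity` is our name for it):

```python
import math

def shannon_capacity(bandwidth_hz, snr):
    """C = Wc log2(1 + SNR), in bits per second (SNR given as a linear ratio)."""
    return bandwidth_hz * math.log2(1 + snr)

C = shannon_capacity(3400, 10_000)
print(round(C))                  # ~45200 bps (45179 before the slide's rounding)
print(10 * math.log10(10_000))   # 40.0 dB
```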

Bit Rates of Digital Transmission Systems

| System | Bit Rate | Observations |
|---|---|---|
| Telephone twisted pair | 33.6-56 kbps | 4 kHz telephone channel |
| Ethernet twisted pair | 10 Mbps, 100 Mbps, 1 Gbps | 100 meters of unshielded twisted copper wire pair |
| Cable modem | 500 kbps-4 Mbps | Shared CATV return channel |
| ADSL twisted pair | 64-640 kbps in, 1.536-6.144 Mbps out | Coexists with analog telephone signal |
| 2.4 GHz radio | 2-11 Mbps | IEEE 802.11 wireless LAN |
| 28 GHz radio | 1.5-45 Mbps | 5 km multipoint radio |
| Optical fiber | 2.5-10 Gbps | 1 wavelength |
| Optical fiber | >1600 Gbps | Many wavelengths |

Examples of Channels

| Channel | Bandwidth | Bit Rates |
|---|---|---|
| Telephone voice channel | 3 kHz | 33 kbps |
| Copper pair | 1 MHz | 1-6 Mbps |
| Coaxial cable | 500 MHz (6 MHz channels) | 30 Mbps/channel |
| 5 GHz radio (IEEE 802.11) | 300 MHz (11 channels) | 54 Mbps/channel |
| Optical fiber | Many terahertz | 40 Gbps/wavelength |

Chapter 3 Digital Transmission Fundamentals: 3.9 Error Detection and Correction

Error Control

Digital transmission systems introduce errors, but applications require a certain reliability level: data applications require error-free transfer, while voice and video applications tolerate some errors. Error control is used when the transmission system does not meet the application requirement; it ensures a data stream is transmitted to a certain level of accuracy despite errors. Two basic approaches: error detection and retransmission (ARQ), and forward error correction (FEC).

Key Idea

All transmitted data blocks ("codewords") satisfy a pattern; if a received block doesn't satisfy the pattern, it is in error. Redundancy: only a subset of all possible blocks can be codewords. Blindspot: when the channel transforms a codeword into another codeword. The flow is: user information → encoder (all inputs to the channel satisfy the pattern) → channel → pattern checking of the channel output → deliver the user information or set an error alarm.

Single Parity Check

Append an overall parity check bit to k information bits:

Info bits: b1, b2, b3, …, bk
Check bit: bk+1 = b1 + b2 + b3 + … + bk modulo 2
Codeword: (b1, b2, b3, …, bk, bk+1)

All codewords have an even number of 1s, and the receiver checks whether the number of 1s is even. All error patterns that change an odd number of bits are detectable; all even-numbered patterns are undetectable. A parity bit is used in the ASCII code.

Example of Single Parity Code

Information (7 bits): (0, 1, 0, 1, 1, 0, 0)
Parity bit: b8 = 0 + 1 + 0 + 1 + 1 + 0 + 0 = 1
Codeword (8 bits): (0, 1, 0, 1, 1, 0, 0, 1)

If a single error occurs in bit 3: (0, 1, 1, 1, 1, 0, 0, 1), the number of 1s is 5, odd: error detected. If errors occur in bits 3 and 5: (0, 1, 1, 1, 0, 0, 0, 1), the number of 1s is 4, even: error not detected.
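
The parity rule above can be sketched in a few lines of Python (illustrative helper names):

```python
def add_parity(bits):
    """Append an even-parity bit: the codeword gets an even number of 1s."""
    return bits + [sum(bits) % 2]

def parity_ok(codeword):
    """Receiver check: is the number of 1s even?"""
    return sum(codeword) % 2 == 0

cw = add_parity([0, 1, 0, 1, 1, 0, 0])   # [0, 1, 0, 1, 1, 0, 0, 1]
assert parity_ok(cw)

err = cw.copy(); err[2] ^= 1             # single error in bit 3: detected
assert not parity_ok(err)

err[4] ^= 1                              # second error in bit 5: goes undetected
assert parity_ok(err)
```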

**Check Bits & Error Detection**

The sender calculates n − k check bits from the k information bits and transmits both over the channel. The receiver recalculates the check bits from the received information bits and compares them with the received check bits; the information is accepted if the check bits match.

**How good is the single parity check code?**

Redundancy: the single parity check code adds 1 redundant bit per k information bits: overhead = 1/(k + 1). Coverage: all error patterns with an odd number of errors can be detected. An error pattern is a binary (k + 1)-tuple with 1s where errors occur and 0s elsewhere. Of the 2^(k+1) binary (k + 1)-tuples, half are odd, so 50% of error patterns can be detected. Is it possible to detect more errors if we add more check bits? Yes, with the right codes.

**What if bit errors are random?**

Many transmission channels introduce bit errors at random, independently of each other, and with probability p. Some error patterns are more probable than others:

P[10000000] = p (1 − p)^7 = (p / (1 − p)) (1 − p)^8
P[11000000] = p^2 (1 − p)^6 = (p / (1 − p))^2 (1 − p)^8

In any worthwhile channel p < 0.5, and so p / (1 − p) < 1. It follows that patterns with 1 error are more likely than patterns with 2 errors, and so forth. What is the probability that an undetectable error pattern occurs?

Single parity check code with random bit errors: an error pattern is undetectable if it has an even number of bit errors, so

P[error detection failure] = P[undetectable error pattern] = P[error patterns with an even number of 1s] = C(n,2) p^2 (1 − p)^(n−2) + C(n,4) p^4 (1 − p)^(n−4) + …

Example: evaluate the above for n = 32, p = 10^−3:

P[undetectable error] = C(32,2) (10^−3)^2 (1 − 10^−3)^30 + C(32,4) (10^−3)^4 (1 − 10^−3)^28 ≈ 496 × 10^−6 + 35960 × 10^−12 ≈ 4.96 × 10^−4

For this example, roughly 1 in 2000 transmitted blocks suffers an undetectable error pattern.
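
The full even-weight sum can be evaluated numerically (a sketch; the slide's two-term estimate keeps only the k = 2 and k = 4 terms):

```python
from math import comb

def p_undetected(n, p):
    """P[even, nonzero number of bit errors] in an n-bit single-parity codeword."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2, n + 1, 2))

print(p_undetected(32, 1e-3))   # ~4.8e-4; the slide rounds this to ~4.96e-4
```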

What is a good code?

Many channels have a preference for error patterns with a smaller number of errors; these error patterns map the transmitted codeword to a nearby n-tuple. If codewords are close to each other, detection failures will occur. Good codes should maximize the separation between codewords: a code with poor distance properties has codewords adjacent to other codewords, while a code with good distance properties has each codeword surrounded only by non-codewords.

Two-Dimensional Parity Check

Use more parity bits to improve coverage: arrange the information as columns, add a single parity bit to each column, and add a final "parity" column. Used in early error control systems.

1 0 0 1 0 0
0 1 0 0 0 1
1 0 0 1 0 0
1 1 0 1 1 0
1 0 0 1 1 1

The last column consists of check bits for each row, and the bottom row consists of check bits for each column.

Error-detecting capability: 1, 2, or 3 errors can always be detected, since they leave at least one failed row or column check (indicated by arrows in the slide figures). Not all patterns of more than 3 errors can be detected: four errors placed at the corners of a rectangle leave every row and column check satisfied and are undetectable.

Other Error Detection Codes

Many applications require a very low error rate and need codes that detect the vast majority of errors. Single parity check codes do not detect enough errors, and two-dimensional codes require too many check bits. The following error-detecting codes are used in practice: Internet checksums and CRC polynomial codes.

Internet Checksum

Several Internet protocols (e.g. IP, TCP, UDP) use check bits to detect errors in the IP header (or in the header and data for TCP/UDP). A checksum is calculated for the header contents and included in a special field. The checksum is recalculated at every router, so the algorithm was selected for ease of implementation in software. Let the header consist of L 16-bit words b0, b1, b2, …, bL−1. The algorithm appends a 16-bit checksum bL.

Checksum Calculation

The checksum bL is calculated as follows. Treating each 16-bit word as an integer, find x = b0 + b1 + b2 + … + bL−1 modulo 2^16 − 1. The checksum is then given by bL = −x modulo 2^16 − 1. Thus the headers must satisfy the pattern 0 = b0 + b1 + b2 + … + bL−1 + bL modulo 2^16 − 1. The checksum calculation is carried out in software using one's complement arithmetic.

Internet Checksum Example

Using modulo arithmetic with 4-bit words (mod 2^4 − 1 = 15): b0 = 1100 = 12, b1 = 1010 = 10; b0 + b1 = 12 + 10 = 22 = 7 mod 15; b2 = −7 = 8 mod 15, therefore b2 = 1000.

Using binary arithmetic: note 16 = 1 mod 15, so 10000 = 0001 mod 15; the leading bit wraps around. b0 + b1 = 1100 + 1010 = 10110 = 10000 + 0110 = 0001 + 0110 = 0111 = 7. Take the 1s complement: b2 = −0111 = 1000.
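
The same calculation as a Python sketch (generalized to any word size; the function names are ours):

```python
def ones_complement_sum(words, bits=16):
    """Add words modulo 2**bits - 1, the end-around-carry sum."""
    return sum(words) % ((1 << bits) - 1)

def checksum(words, bits=16):
    """Check word chosen so the grand total is 0 mod 2**bits - 1."""
    return (-ones_complement_sum(words, bits)) % ((1 << bits) - 1)

# The slide's 4-bit example: b0 = 1100 (12), b1 = 1010 (10), arithmetic mod 15
b2 = checksum([0b1100, 0b1010], bits=4)
print(bin(b2))                                              # 0b1000, i.e. -7 mod 15
assert ones_complement_sum([0b1100, 0b1010, b2], bits=4) == 0
```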

Polynomial Codes

Polynomial codes use polynomials instead of vectors for codewords and polynomial arithmetic instead of checksums, and they are implemented using shift-register circuits. They are also called cyclic redundancy check (CRC) codes. Most data communications standards use polynomial codes for error detection, and polynomial codes are the basis for powerful error-correction methods.

Binary Polynomial Arithmetic

Binary vectors map to polynomials: (ik−1, ik−2, …, i2, i1, i0) ↔ ik−1 x^(k−1) + ik−2 x^(k−2) + … + i2 x^2 + i1 x + i0.

Addition: (x^7 + x^6 + 1) + (x^6 + x^5) = x^7 + x^6 + x^6 + x^5 + 1 = x^7 + (1+1)x^6 + x^5 + 1 = x^7 + x^5 + 1, since 1 + 1 = 0 mod 2.

Multiplication: (x + 1)(x^2 + x + 1) = x(x^2 + x + 1) + 1(x^2 + x + 1) = x^3 + x^2 + x + (x^2 + x + 1) = x^3 + 1.

Binary Polynomial Division

Division with decimal numbers: dividend = quotient × divisor + remainder, e.g. 1222 = 34 × 35 + 32.

Polynomial division works the same way: dividing the dividend x^6 + x^5 by the divisor x^3 + x + 1 gives quotient q(x) = x^3 + x^2 + x and remainder r(x) = x. Note: the degree of r(x) is less than the degree of the divisor.

Polynomial Coding

The code has a binary generator polynomial of degree n − k: g(x) = x^(n−k) + g_(n−k−1) x^(n−k−1) + … + g_2 x^2 + g_1 x + 1. The k information bits define a polynomial of degree k − 1: i(x) = ik−1 x^(k−1) + ik−2 x^(k−2) + … + i1 x + i0. Find the remainder polynomial r(x), of degree at most n − k − 1, from x^(n−k) i(x) = q(x) g(x) + r(x). Define the codeword polynomial of degree n − 1: b(x) = x^(n−k) i(x) + r(x), giving n bits in total: the k information bits followed by the n − k check bits.

Polynomial example: k = 4, n − k = 3. Generator polynomial: g(x) = x^3 + x + 1. Information: (1, 1, 0, 0), so i(x) = x^3 + x^2. Encoding: x^3 i(x) = x^6 + x^5; dividing 1100000 by 1011 leaves remainder 010, i.e. r(x) = x. Transmitted codeword: b(x) = x^6 + x^5 + x, b = (1, 1, 0, 0, 0, 1, 0).
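
The encoding step can be sketched as bitwise long division over GF(2) (illustrative helper names; production CRCs use table-driven or shift-register implementations):

```python
def poly_mod(dividend, gen):
    """Remainder of GF(2) polynomial division; bit lists are most-significant first."""
    reg = dividend.copy()
    for i in range(len(reg) - len(gen) + 1):
        if reg[i]:                        # XOR the divisor in wherever a 1 leads
            for j, g in enumerate(gen):
                reg[i + j] ^= g
    return reg[-(len(gen) - 1):]

def crc_encode(info, gen):
    """Codeword b(x) = x^(n-k) i(x) + r(x): info bits followed by the remainder."""
    return info + poly_mod(info + [0] * (len(gen) - 1), gen)

g = [1, 0, 1, 1]                          # g(x) = x^3 + x + 1
cw = crc_encode([1, 1, 0, 0], g)          # the example above: i(x) = x^3 + x^2
print(cw)                                 # [1, 1, 0, 0, 0, 1, 0]
print(poly_mod(cw, g))                    # [0, 0, 0]: every codeword divides evenly
```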

0.1.0) i(x) = x6 + x4 + x2 + x Q1: Find the remainder (also called Frame Check Sequence.1.1. FCS) and transmitted codeword Encoding: x3i(x) = x3 (x6 + x4 + x2 + x) = x9 + x7 + x5 + x3 .Exercise 1 Generator polynomial: g(x)= x3 + x2 + 1 Information: (1.0.

Solution: dividing 1010110000 by 1101 leaves remainder 001, i.e. r(x) = 1. Transmitted codeword: b(x) = x^9 + x^7 + x^5 + x^4 + 1, b = (1, 0, 1, 0, 1, 1, 0, 0, 0, 1).

The Pattern in Polynomial Coding

All codewords satisfy the following pattern: b(x) = x^(n−k) i(x) + r(x) = q(x)g(x) + r(x) + r(x) = q(x)g(x), since r(x) + r(x) = 0 mod 2. All codewords are a multiple of g(x). The receiver should divide the received n-tuple by g(x) and check whether the remainder is zero. If the remainder is nonzero, the received n-tuple is not a codeword.

Exercise 1 cont'd

Q2: How does the receiver check whether the message was transmitted without any errors? Show your work. Answer: the received message b is divided by g(x); if the remainder is zero, then b is error-free, otherwise it contains errors.

Shift-Register Implementation

1. Accept the information bits ik−1, ik−2, …, i1, i0.
2. Append n − k zeros to the information bits.
3. Feed the sequence to a shift-register circuit that performs polynomial division.
4. After n shifts, the shift register contains the remainder.

Division Circuit

Encoder for g(x) = x^3 + x + 1 (feedback taps g0 = 1, g1 = 1, g3 = 1), with input sequence i3, i2, i1, i0 = 1, 1, 0, 0 followed by three appended zeros:

| Clock | Input | Reg 0 | Reg 1 | Reg 2 |
|---|---|---|---|---|
| 0 | - | 0 | 0 | 0 |
| 1 | 1 = i3 | 1 | 0 | 0 |
| 2 | 1 = i2 | 1 | 1 | 0 |
| 3 | 0 = i1 | 0 | 1 | 1 |
| 4 | 0 = i0 | 1 | 1 | 1 |
| 5 | 0 | 1 | 0 | 1 |
| 6 | 0 | 1 | 0 | 0 |
| 7 | 0 | 0 | 1 | 0 |

Final register contents: r0 = 0, r1 = 1, r2 = 0, so the check bits are r(x) = x.

Undetectable Error Patterns

The channel adds an error polynomial e(x), which has 1s in the error locations and 0s elsewhere, so the receiver sees R(x) = b(x) + e(x) and divides it by g(x). Blindspot: if e(x) is a multiple of g(x), that is, a nonzero codeword, then R(x) = b(x) + e(x) = q(x)g(x) + q′(x)g(x), which is again a multiple of g(x). The set of undetectable error polynomials is exactly the set of nonzero code polynomials. Choose the generator polynomial so that selected error patterns can be detected.

Designing Good Polynomial Codes

Select the generator polynomial so that likely error patterns are not multiples of g(x).

Detecting single errors: e(x) = x^i for an error in location i + 1. If g(x) has more than one term, it cannot divide x^i.

Detecting double errors: e(x) = x^i + x^j = x^i (x^(j−i) + 1), where j > i. If g(x) has more than one term, it cannot divide x^i; if g(x) is a primitive polynomial, it cannot divide x^m + 1 for all m < 2^(n−k) − 1 (we need to keep the codeword length less than 2^(n−k) − 1). Primitive polynomials can be found by consulting coding theory books.

Detecting odd numbers of errors: suppose all codeword polynomials have an even number of 1s; then all odd numbers of errors can be detected. As well, b(x) evaluated at x = 1 is zero because b(x) has an even number of 1s, which implies that x + 1 must be a factor of all b(x). So pick g(x) = (x + 1) p(x), where p(x) is primitive. Visit http://mathworld.wolfram.com/PrimitivePolynomial.html for more info on primitive polynomials.

Standard Generator Polynomials

CRC = cyclic redundancy check.

CRC-8 (ATM): x^8 + x^2 + x + 1
CRC-16 (Bisync): x^16 + x^15 + x^2 + 1 = (x + 1)(x^15 + x + 1)
CCITT-16 (HDLC, XMODEM, V.41): x^16 + x^12 + x^5 + 1
CCITT-32 (IEEE 802, DoD, V.42): x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

Hamming Codes

A class of error-correcting codes that can detect single and double-bit errors and correct single-bit errors. For each m > 2, there is a Hamming code of length n = 2^m − 1 with n − k = m parity check bits:

| m | n = 2^m − 1 | k = n − m | Redundancy m/n |
|---|---|---|---|
| 3 | 7 | 4 | 3/7 |
| 4 | 15 | 11 | 4/15 |
| 5 | 31 | 26 | 5/31 |
| 6 | 63 | 57 | 6/63 |

m = 3 Hamming Code

The information bits are b1, b2, b3, b4; the check bits are b5, b6, b7. Equations for the parity checks:

b5 = b1 + b3 + b4
b6 = b1 + b2 + b4
b7 = b2 + b3 + b4

There are 2^4 = 16 codewords; (0,0,0,0,0,0,0) is a codeword.
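
The three parity equations can be checked mechanically (a sketch; `hamming74_encode` is our name for the encoder):

```python
def hamming74_encode(b):
    """Encode info bits (b1, b2, b3, b4) using the parity equations above."""
    b1, b2, b3, b4 = b
    b5 = (b1 + b3 + b4) % 2
    b6 = (b1 + b2 + b4) % 2
    b7 = (b2 + b3 + b4) % 2
    return [b1, b2, b3, b4, b5, b6, b7]

assert hamming74_encode([0, 0, 0, 0]) == [0, 0, 0, 0, 0, 0, 0]
print(hamming74_encode([1, 0, 1, 1]))    # [1, 0, 1, 1, 1, 0, 0]

# Minimum distance 3: every nonzero codeword has weight >= 3
weights = [sum(hamming74_encode([i >> 3 & 1, i >> 2 & 1, i >> 1 & 1, i & 1]))
           for i in range(1, 16)]
assert min(weights) == 3
```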

Hamming (7,4) Code

"Hamming code" often refers to the specific (7,4) code Hamming introduced in 1950, which adds 3 check bits to every 4 data bits of the message, for a total of 7. Hamming's (7,4) code can correct any single-bit error and detect all two-bit errors. Since the medium would have to be uselessly noisy for 2 out of 7 bits (about 30%) to be lost, Hamming's (7,4) is effectively lossless.

Hamming (7,4) code:

| Information b1 b2 b3 b4 | Codeword b1 b2 b3 b4 b5 b6 b7 | Weight w(b) |
|---|---|---|
| 0 0 0 0 | 0 0 0 0 0 0 0 | 0 |
| 0 0 0 1 | 0 0 0 1 1 1 1 | 4 |
| 0 0 1 0 | 0 0 1 0 1 0 1 | 3 |
| 0 0 1 1 | 0 0 1 1 0 1 0 | 3 |
| 0 1 0 0 | 0 1 0 0 0 1 1 | 3 |
| 0 1 0 1 | 0 1 0 1 1 0 0 | 3 |
| 0 1 1 0 | 0 1 1 0 1 1 0 | 4 |
| 0 1 1 1 | 0 1 1 1 0 0 1 | 4 |
| 1 0 0 0 | 1 0 0 0 1 1 0 | 3 |
| 1 0 0 1 | 1 0 0 1 0 0 1 | 3 |
| 1 0 1 0 | 1 0 1 0 0 1 1 | 4 |
| 1 0 1 1 | 1 0 1 1 1 0 0 | 4 |
| 1 1 0 0 | 1 1 0 0 1 0 1 | 4 |
| 1 1 0 1 | 1 1 0 1 0 1 0 | 4 |
| 1 1 1 0 | 1 1 1 0 0 0 0 | 3 |
| 1 1 1 1 | 1 1 1 1 1 1 1 | 7 |

Parity Check Equations

Rearrange the parity check equations:

0 = b5 + b5 = b1 + b3 + b4 + b5
0 = b6 + b6 = b1 + b2 + b4 + b6
0 = b7 + b7 = b2 + b3 + b4 + b7

In matrix form, all codewords must satisfy H b^t = 0, where

H = [ 1 0 1 1 1 0 0 ]
    [ 1 1 0 1 0 1 0 ]
    [ 0 1 1 1 0 0 1 ]

Note: each nonzero 3-tuple appears exactly once as a column in the check matrix H.

Error Detection with Hamming Code

The syndrome is s = He. For a single error in bit 2, e = (0,1,0,0,0,0,0)^t, the syndrome s = (0,1,1)^t is the second column of H: the error is detected. For a double error in bits 1 and 2, e = (1,1,0,0,0,0,0)^t, the syndrome s = (1,1,0)^t + (0,1,1)^t = (1,0,1)^t is nonzero: the error is detected. For a triple error in bits 1, 2, and 3, e = (1,1,1,0,0,0,0)^t, the syndrome s = (1,1,0)^t + (0,1,1)^t + (1,0,1)^t = (0,0,0)^t: the error is not detected.

Hamming Distance (Weight)

The Hamming distance is the number of positions in two strings of equal length for which the corresponding elements are different (i.e., the number of substitutions required to change one into the other). For example: the Hamming distance between 1011101 and 1001001 is 2; between 2143896 and 2233796 is 3; between "toned" and "roses" is 3. The Hamming weight of a string is its Hamming distance from the zero string of the same length: the number of elements in the string which are not zero. For a binary string this is just the number of 1s, so for instance the Hamming weight of 11101 is 4.

General Hamming Codes

For m > 2, the Hamming code is obtained through the check matrix H: each nonzero m-tuple appears once as a column of H. The resulting code corrects all single errors. For each value of m, there is a polynomial code with g(x) of degree m that is equivalent to a Hamming code and corrects all single errors; for m = 3, g(x) = x^3 + x + 1.

Error Correction Using Hamming Codes

The channel adds an error pattern e to the transmitted codeword b, and the receiver first calculates the syndrome s = HR = H(b + e) = Hb + He = He. If s = 0, the receiver accepts R as the transmitted codeword; if s is nonzero, an error is detected. The Hamming decoder assumes a single error has occurred. Each single-bit error pattern has a unique syndrome, so the receiver matches the syndrome to a single-bit error pattern and corrects the corresponding bit.
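
Syndrome decoding can be sketched directly from the check matrix H of the (7,4) code (illustrative helper names):

```python
# Check matrix H of the (7,4) Hamming code, rows as in the parity check equations
H = [[1, 0, 1, 1, 1, 0, 0],
     [1, 1, 0, 1, 0, 1, 0],
     [0, 1, 1, 1, 0, 0, 1]]

def syndrome(r):
    """s = H r^t over GF(2)."""
    return tuple(sum(h * x for h, x in zip(row, r)) % 2 for row in H)

def correct_single_error(r):
    """Zero syndrome: accept. Otherwise flip the bit whose H column equals s."""
    s = syndrome(r)
    if s == (0, 0, 0):
        return r
    columns = [tuple(row[j] for row in H) for j in range(7)]
    fixed = r.copy()
    fixed[columns.index(s)] ^= 1
    return fixed

received = [1, 0, 0, 1, 1, 1, 1]      # codeword 0001111 with bit b1 flipped
print(correct_single_error(received)) # [0, 0, 0, 1, 1, 1, 1]
```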

Performance of Hamming Error-Correcting Code

Assume bit errors occur independently of each other with probability p. No errors occur in transmission with probability (1 − p)^7. The syndrome s = HR = He is nonzero with probability ≈ 7p; of these cases, a fraction ≈ 1 − 3p are single errors, which are corrected (correctable errors ≈ 7p(1 − 3p)), while a fraction ≈ 3p are multiple errors (uncorrectable errors ≈ 21p^2). Undetectable errors, where the error pattern equals a nonzero codeword, occur with probability ≈ 7p^3 (from the 7 codewords of weight 3).
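
Plugging in a numeric bit-error probability makes the approximations concrete (p = 10^-4 is an assumed example value, not from the slides):

```python
p = 1e-4                              # assumed bit-error probability (example value)
p_no_error = (1 - p) ** 7             # all 7 bits arrive intact
p_single = 7 * p * (1 - p) ** 6       # exactly one error: corrected by the decoder
p_uncorrectable = 21 * p ** 2         # slide's approximation 7p * 3p
p_undetectable = 7 * p ** 3           # slide's approximation: 7 weight-3 codewords
print(p_no_error, p_single, p_uncorrectable, p_undetectable)
```

Note how each category is roughly a factor of p smaller than the last, which is why even a modest code improves reliability so dramatically on a good channel.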

History of Hamming Code Read http://en.wikipedia.org/wiki/Hamming_code .
