August 2009 Bachelor of Science in Information Technology (BScIT) – Semester 4
BT0046 – Communication Technology – 2 Credits (Book ID: B0025 & B0026)
Assignment Set – 1 (30 Marks, 5 x 6 = 30) Answer all questions

Book ID: B0025

Ques 1. What is bandwidth? What is the bandwidth of a. Telephone signal b. Commercial radio broadcasting c. TV signal.

Ans: In computer networking and computer science, digital bandwidth, network bandwidth or just bandwidth is a measure of available or consumed data communication resources, expressed in bit/s or multiples of it (kbit/s, Mbit/s, etc.).

Bandwidth may refer to bandwidth capacity or available bandwidth in bit/s, which typically means the net bit rate, channel capacity or maximum throughput of a logical or physical communication path in a digital communication system. For example, a bandwidth test measures the maximum throughput of a computer network. The reason for this usage is that, according to Hartley's law, the maximum data rate of a physical communication link is proportional to its bandwidth in hertz, which is sometimes called frequency bandwidth, radio bandwidth or analog bandwidth, the last especially in computer networking literature.

Bandwidth may also refer to consumed bandwidth (bandwidth consumption), corresponding to achieved throughput or goodput, i.e. the average rate of successful data transfer through a communication path. This meaning is used in expressions such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (for example the bandwidth allocation protocol and dynamic bandwidth allocation). An explanation of this usage is that the digital bandwidth of a bit stream is proportional to the average consumed signal bandwidth in hertz (the average spectral bandwidth of the analog signal representing the bit stream) during a studied time interval. Digital bandwidth may also refer to the average bit rate (ABR) after multimedia data compression (source coding), defined as the total amount of data divided by the playback time.

Some authors prefer less ambiguous terms such as gross bit rate, net bit rate, channel capacity and throughput, to avoid confusion between digital bandwidth in bits per second and analog bandwidth in hertz.

Typical analog bandwidths for the signals asked about: (a) a telephone voice signal occupies roughly 300–3400 Hz, i.e. about 3.1 kHz (a standard telephone channel is allocated 4 kHz); (b) a commercial AM radio broadcast channel is allocated about 10 kHz (an FM broadcast channel about 200 kHz); (c) an analog TV signal occupies about 6 MHz (NTSC; 7–8 MHz in PAL systems).
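Hartley's law, mentioned above, ties achievable bit rate to analog bandwidth; its refinement, the Shannon–Hartley theorem, makes the relation exact for a noisy channel. A minimal sketch in Python (the 30 dB SNR figure for a voice channel is an assumed, illustrative value, not from the text):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Maximum error-free bit rate of a channel per the Shannon-Hartley theorem."""
    snr_linear = 10 ** (snr_db / 10)            # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A voice-grade telephone channel (~3.1 kHz, i.e. 300-3400 Hz) with an assumed 30 dB SNR:
c = shannon_capacity(3100, 30)
print(f"{c / 1000:.1f} kbit/s")   # roughly 30.9 kbit/s
```

This is consistent with real telephone-line modems topping out in the tens of kbit/s.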

Bandwidths

The figures below are grouped by network or bus type, then sorted within each group from lowest to highest bandwidth.

TTY/Teletypewriter or Telecommunications Device for the Deaf (TDD)

Device                           Speed (bit/s)    Speed (characters/s)
TTY (V.18)                       45.4545 bit/s    6 characters/s
TTY (V.18)                       50 bit/s         6.6 characters/s
NTSC Line 21 Closed Captioning   1 kbit/s         ~100 characters/s

Modems/broadband connections

All modems are assumed to be in serial operation with 1 start bit, 8 data bits, no parity, and 1 stop bit (2 stop bits for 110-baud modems). Therefore, a total of 10 bits (11 bits for 110-baud modems) is needed to transmit each 8-bit byte. The "byte/s" column reflects the net data transfer rate after the protocol overhead has been removed.

Device                                           Speed (bit/s)        Speed (byte/s)
Modem 110 baud (symbols/second)                  0.11 kbit/s          0.010 kB/s (~10 cps)
Modem 300 (300 baud) (Bell 103 or V.21)          0.3 kbit/s           0.03 kB/s (~30 cps)
Modem 1200 (600 baud) (Bell 212A or V.22)        1.2 kbit/s           0.12 kB/s (~120 cps)
Modem 1200/75 (600 baud) (V.23)                  1.2/0.075 kbit/s     0.12/0.0075 kB/s
Modem 2400 (600 baud) (V.22bis)                  2.4 kbit/s           0.24 kB/s
Modem 4800 (1600 baud) (V.27ter)                 4.8 kbit/s           0.48 kB/s
Modem 9600 (2400 baud) (V.32)                    9.6 kbit/s           0.96 kB/s
Modem 14.4 (2400 baud) (V.32bis)                 14.4 kbit/s          1.4 kB/s
Modem 28.8 (3200 baud) (V.34-1994)               28.8 kbit/s          2.9 kB/s
Modem 33.6 (3429 baud) (V.34-1998)               33.6 kbit/s          3.3 kB/s
Modem 56k (8000/3429 baud) (V.90)                56.0/33.6 kbit/s     6.6/3.3 kB/s
Modem 56k (8000/8000 baud) (V.92)                56.0/48.0 kbit/s     6.6/5.5 kB/s
Hardware compression (variable) (V.90/V.42bis)   56.0-220.0 kbit/s    6.6-22 kB/s
Hardware compression (variable) (V.92/V.44)      56.0-320.0 kbit/s    6.6-32 kB/s
ISDN Basic Rate Interface (single/dual channel)  64/128 kbit/s        8/16 kB/s
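The start/stop framing overhead described above is easy to compute (an illustrative sketch, not part of the original table):

```python
def effective_cps(bit_rate: float, data_bits: int = 8,
                  start_bits: int = 1, stop_bits: int = 1) -> float:
    """Characters per second after start/stop-bit framing overhead."""
    frame_bits = start_bits + data_bits + stop_bits   # 10 bits per byte normally
    return bit_rate / frame_bits

print(effective_cps(300))                # 30.0 cps (10-bit frames)
print(effective_cps(110, stop_bits=2))   # 10.0 cps (11-bit frames, 110-baud case)
```

This reproduces the ~30 cps and ~10 cps figures in the first rows of the table.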

Device                          Speed (bit/s)                Speed (byte/s)
IDSL                            144 kbit/s                   18 kB/s
HDSL ITU G.991.1                1,544 kbit/s                 193 kB/s
MSDSL                           2,000 kbit/s                 250 kB/s
SDSL                            2,320 kbit/s                 290 kB/s
ADSL (typical)                  3,000/768 kbit/s             375/96 kB/s
SHDSL ITU G.991.2               5,690 kbit/s                 711 kB/s
ADSL                            8,192/1,024 kbit/s           1,024/128 kB/s
ADSL (G.DMT)                    12,288/1,333 kbit/s          1,536/166 kB/s
ADSL2                           12,288/3,584 kbit/s          1,536/448 kB/s
ADSL2+                          24,576/3,584 kbit/s          3,072/448 kB/s
DOCSIS v1.0 (Cable modem)       38,000/9,000 kbit/s          4,750/1,125 kB/s
DOCSIS v2.0 (Cable modem)       38,000/27,000 kbit/s         4,750/3,375 kB/s
FiOS fiber optic service (typical)  50,000/20,000 kbit/s     6,250/2,500 kB/s
DOCSIS v3.0 (Cable modem)       160,000/120,000 kbit/s       20,000/15,000 kB/s
Uni-DSL                         200,000 kbit/s               25,000 kB/s
VDSL ITU G.993.1                200,000 kbit/s               25,000 kB/s
VDSL2 ITU G.993.2               250,000 kbit/s               31,250 kB/s
BPON (G.983) fiber optic service    622,000/155,000 kbit/s   77,700/19,300 kB/s
GPON (G.984) fiber optic service    2,488,000/1,244,000 kbit/s  311,000/155,500 kB/s

Mobile telephone interfaces

Device                              Speed (bit/s)            Speed (byte/s)
GSM CSD                             14.4 kbit/s              1.8 kB/s
HSCSD                               57.6/14.4 kbit/s         5.4/1.8 kB/s
GPRS                                57.6/28.8 kbit/s         7.2/3.6 kB/s
WiDEN                               100 kbit/s               12.5 kB/s
CDMA2000 1xRTT                      153 kbit/s               18 kB/s
EDGE (type 1 MS)                    236.8 kbit/s             29.6 kB/s
UMTS                                384 kbit/s               48 kB/s
EDGE (type 2 MS)                    473.6 kbit/s             59.2 kB/s
EDGE Evolution (type 1 MS)          1,184/474 kbit/s         148/59 kB/s
EDGE Evolution (type 2 MS)          1,894/947 kbit/s         237/118 kB/s
1xEV-DO Rev. 0                      2,457/153 kbit/s         307.2/19 kB/s
1xEV-DO Rev. A                      3,100/1,800 kbit/s       397/230 kB/s
3xEV-DO Rev. B                      9,300/5,400 kbit/s       1,162/675 kB/s
HSDPA/HSUPA                         14,400/5,760 kbit/s      1,800/720 kB/s
4xEV-DO Enhancements (2X2 MIMO)     34,400/12,400 kbit/s     4,300/1,550 kB/s
HSPA+ (2X2 MIMO)                    42,000/11,500 kbit/s     5,250/1,437 kB/s
15xEV-DO Rev. B                     73,500/27,000 kbit/s     9,200/3,375 kB/s
UMB (2X2 MIMO)                      140,000/34,000 kbit/s    17,500/4,250 kB/s
LTE (2X2 MIMO)                      173,000/58,000 kbit/s    21,625/7,250 kB/s
UMB (4X4 MIMO)                      280,000/68,000 kbit/s    35,000/8,500 kB/s
EV-DO Rev. C                        280,000/75,000 kbit/s    35,000/9,000 kB/s
LTE (4X4 MIMO)                      326,000/86,000 kbit/s    40,750/10,750 kB/s

Wide area networks

Device                                              Speed (bit/s)        Speed (Mbyte/s)
DS0                                                 0.064 Mbit/s         0.008
G.Lite (aka ADSL Lite)                              1.536/0.512 Mbit/s   0.192/0.064
DS1/T1 (and ISDN Primary Rate Interface)            1.544 Mbit/s         0.192
E1 (and ISDN Primary Rate Interface)                2.048 Mbit/s         0.256
G.SHDSL                                             2.304 Mbit/s         0.288
LR-VDSL2 (4 to 5 km long range; symmetry optional)  4 Mbit/s             0.512
SDSL                                                2.32 Mbit/s          0.29
T2                                                  6.312 Mbit/s         0.789
ADSL                                                8.0/1.024 Mbit/s     1/0.128
E2                                                  8.448 Mbit/s         1.056
ADSL2                                               12/3.5 Mbit/s        1.5/0.448
Satellite Internet                                  16/1 Mbit/s          2.0/0.128
ADSL2+                                              24/3.5 Mbit/s        3.0/0.448
E3                                                  34.368 Mbit/s        4.296
DOCSIS v1.0 (Cable modem)                           38.0/10.0 Mbit/s     4.75/1.25
DOCSIS v2.0 (Cable modem)                           40/30 Mbit/s         5.0/3.75
DS3/T3 ('45 Meg')                                   44.736 Mbit/s        5.5925
STS-1/EC-1/OC-1/STM-0                               51.84 Mbit/s         6.48
VDSL (symmetry optional)                            100 Mbit/s           12.5
DOCSIS v3.0 (Cable modem)                           160/120 Mbit/s       20/15
OC-3/STM-1                                          155.52 Mbit/s        19.44
VDSL2 (symmetry optional)                           250 Mbit/s           31.25
T4                                                  274.176 Mbit/s       34.272
T5                                                  400.352 Mbit/s       50.044

Device                               Speed (bit/s)       Speed (Mbyte/s)
OC-9                                 466.56 Mbit/s       58.32
OC-12/STM-4                          622.08 Mbit/s       77.76
OC-18                                933.12 Mbit/s       116.64
OC-24                                1,244 Mbit/s        155.5
OC-36                                1,900 Mbit/s        237.5
OC-48/STM-16                         2,488 Mbit/s        311.04
OC-96                                4,976 Mbit/s        622.08
OC-192/STM-64                        9,953 Mbit/s        1,244
10 Gigabit Ethernet WAN PHY          9,953 Mbit/s        1,244
10 Gigabit Ethernet LAN PHY          10,000 Mbit/s       1,250
OC-256                               13,271 Mbit/s       1,659
OC-768/STM-256                       39,813 Mbit/s       4,976
OC-1536/STM-512                      79,626 Mbit/s       9,953
OC-3072/STM-1024                     159,252 Mbit/s      19,907

Local area networks

Device                                Speed (bit/s)      Speed (byte/s)
LocalTalk                             0.230 Mbit/s       0.0288 MB/s
Econet                                0.800 Mbit/s       0.1 MB/s
PC-Network                            2 Mbit/s           0.25 MB/s
ARCNET (Standard)                     2.5 Mbit/s         0.3125 MB/s
Ethernet Experimental                 3 Mbit/s           0.375 MB/s
Token Ring (Original)                 4 Mbit/s           0.5 MB/s
Ethernet (10base-X)                   10 Mbit/s          1.16 MB/s
Token Ring (Later)                    16 Mbit/s          2 MB/s
ARCnet Plus                           20 Mbit/s          2.5 MB/s
Token Ring IEEE 802.5t                100 Mbit/s         12.5 MB/s
Fast Ethernet (100base-X)             100 Mbit/s         11.6 MB/s
FDDI                                  100 Mbit/s         12.5 MB/s
MoCA 1.0                              100 Mbit/s         12.5 MB/s
MoCA 1.1                              175 Mbit/s         21.875 MB/s
FireWire (IEEE 1394) 400              393.216 Mbit/s     49.152 MB/s
HIPPI                                 800 Mbit/s         100 MB/s
Token Ring IEEE 802.5v                1,000 Mbit/s       125 MB/s
Gigabit Ethernet (1000base-X)         1,000 Mbit/s       116 MB/s
Myrinet 2000                          2,000 Mbit/s       250 MB/s
Infiniband SDR 1X                     2,000 Mbit/s       250 MB/s
Quadrics QsNetI                       3,600 Mbit/s       450 MB/s
Infiniband DDR 1X                     4,000 Mbit/s       500 MB/s
Infiniband QDR 1X                     8,000 Mbit/s       1,000 MB/s
Infiniband SDR 4X                     8,000 Mbit/s       1,000 MB/s
Scalable Coherent Interface (SCI)     8,000 Mbit/s       1,000 MB/s
10 Gigabit Ethernet (10Gbase-X)       10,000 Mbit/s      1,250 MB/s
Myri 10G                              10,000 Mbit/s      1,250 MB/s
Infiniband DDR 4X                     16,000 Mbit/s      2,000 MB/s
Dual Channel SCI, x8 PCIe             20,000 Mbit/s      2,500 MB/s
Infiniband SDR 12X                    24,000 Mbit/s      3,000 MB/s
Infiniband QDR 4X                     32,000 Mbit/s      4,000 MB/s
Infiniband DDR 12X                    48,000 Mbit/s      6,000 MB/s
Infiniband QDR 12X                    96,000 Mbit/s      12,000 MB/s
100 Gigabit Ethernet (100Gbase-X)     100,000 Mbit/s     12,500 MB/s

Wireless networks

802.11 networks are half-duplex; all stations share the medium. In access point mode, all traffic has to pass through the AP (Access Point). Thus, two stations on the same AP which are communicating with each other must have each frame transmitted twice: from the sender to the access point, then from the access point to the receiver. This approximately halves the effective bandwidth.

Device                                       Speed (bit/s)          Speed (byte/s)
802.11 (legacy)                              2.0 Mbit/s             0.25 MB/s
RONJA free space optical wireless
  (full duplex, so each way)                 10.0 Mbit/s            1.25 MB/s
802.11b DSSS                                 11.0 Mbit/s            1.375 MB/s
802.11b+ DSSS                                44.0 Mbit/s            5.5 MB/s
802.11a                                      54.0 Mbit/s            6.75 MB/s
802.11g OFDM                                 54.0 Mbit/s            6.75 MB/s
802.16 (WiMAX)                               70.0 Mbit/s            8.75 MB/s
802.11g with Super G                         108.0 Mbit/s           13.5 MB/s
802.11g with 125HSM                          125.0 Mbit/s           15.625 MB/s
802.11g with Nitro                           140.0 Mbit/s           17.5 MB/s
802.11n                                      varies, 300.0 Mbit/s max   varies, 37.5 MB/s max
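The store-and-forward doubling described above can be expressed as a trivial calculation (an illustrative sketch):

```python
def effective_wlan_throughput(link_rate_mbit: float, via_access_point: bool = True) -> float:
    """Best-case station-to-station throughput on a half-duplex 802.11 medium.

    When both stations associate with the same AP, every frame crosses the
    air twice (sender -> AP, then AP -> receiver), halving the usable rate.
    """
    return link_rate_mbit / 2 if via_access_point else link_rate_mbit

print(effective_wlan_throughput(54.0))   # 27.0 Mbit/s between two 802.11g clients
```

Real-world throughput is lower still once protocol overhead and contention are counted.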

Wireless personal area networks

Device               Speed (bit/s)     Speed (byte/s)
IrDA-Control         72 kbit/s         9 kB/s
IrDA-SIR             115.2 kbit/s      14 kB/s
802.15.4 (2.4 GHz)   250 kbit/s        31.25 kB/s
Bluetooth 1.1        1,000 kbit/s      125 kB/s
Bluetooth 2.0+EDR    3,000 kbit/s      375 kB/s
IrDA-FIR             4,000 kbit/s      510 kB/s
IrDA-VFIR            16,000 kbit/s     2,000 kB/s
IrDA-UFIR            100,000 kbit/s    12,500 kB/s
Bluetooth 3.0        480,000 kbit/s    60,000 kB/s
WUSB-UWB             480,000 kbit/s    60,000 kB/s

Computer buses

Device                                Speed (bit/s)     Speed (byte/s)
I2C                                   3.4 Mbit/s        425 kB/s
ISA 8-Bit/4.77 MHz                    9.6 Mbit/s        1.2 MB/s
Zorro II 16-Bit/7.14 MHz              28.56 Mbit/s      3.56 MB/s
ISA 16-Bit/8.33 MHz                   42.4 Mbit/s       5.3 MB/s
Low Pin Count                         133.33 Mbit/s     16.67 MB/s
HP-Precision Bus                      184 Mbit/s        23 MB/s
EISA 8-16-32bits/8.33 MHz             320 Mbit/s        32 MB/s
VME64 32-64bits                       400 Mbit/s        40 MB/s
NuBus 10 MHz                          400 Mbit/s        40 MB/s
DEC TURBOchannel 32-bit/12.5 MHz      400 Mbit/s        50 MB/s
MCA 16-32bits/10 MHz                  660 Mbit/s        66 MB/s
NuBus90 20 MHz                        800 Mbit/s        80 MB/s
Sbus 32-bit/25 MHz                    800 Mbit/s        100 MB/s
DEC TURBOchannel 32-bit/25 MHz        800 Mbit/s        100 MB/s
VLB 32-bit/33 MHz                     1,067 Mbit/s      133.33 MB/s

Device                                 Speed (bit/s)     Speed (byte/s)
PCI 32-bit/33 MHz                      1,067 Mbit/s      133.33 MB/s
HP GSC-1X                              1,136 Mbit/s      142 MB/s
Zorro III 32-Bit/37.5 MHz              1,200 Mbit/s      150 MB/s
Sbus 64-bit/25 MHz                     1,600 Mbit/s      200 MB/s
PCI Express 1.0 (x1 link)              2,000 Mbit/s      250 MB/s
HP GSC-2X                              2,048 Mbit/s      256 MB/s
PCI 64-bit/33 MHz                      2,133 Mbit/s      266.7 MB/s
PCI 32-bit/66 MHz                      2,133 Mbit/s      266.7 MB/s
AGP 1x                                 2,133 Mbit/s      266.7 MB/s
HIO bus                                2,560 Mbit/s      320 MB/s
PCI Express 1.0 (x2 link)              4,000 Mbit/s      500 MB/s
AGP 2x                                 4,266 Mbit/s      533.3 MB/s
PCI 64-bit/66 MHz                      4,266 Mbit/s      533.3 MB/s
PCI-X DDR 16-bit                       4,266 Mbit/s      533.3 MB/s
PCI 64-bit/100 MHz                     6,399 Mbit/s      800 MB/s
RapidIO (1 lane)                       6,500 Mbit/s      812.5 MB/s
PCI Express 1.0 (x4 link)              8,000 Mbit/s      1,000 MB/s
AGP 4x                                 8,533 Mbit/s      1,067 MB/s
PCI-X 133                              8,533 Mbit/s      1,067 MB/s
PCI-X QDR 16-bit                       8,533 Mbit/s      1,067 MB/s
InfiniBand single 4X                   8,000 Mbit/s      1,000 MB/s
UPA                                    15,360 Mbit/s     1,920 MB/s
PCI Express 1.0 (x8 link)              16,000 Mbit/s     2,000 MB/s
AGP 8x                                 17,066 Mbit/s     2,133 MB/s
PCI-X DDR                              17,066 Mbit/s     2,133 MB/s
HyperTransport (800 MHz, 16-pair)      25,600 Mbit/s     3,200 MB/s
HyperTransport (1 GHz, 16-pair)        32,000 Mbit/s     4,000 MB/s
PCI Express 1.0 (x16 link)             32,000 Mbit/s     4,000 MB/s
PCI Express 2.0 (x8 link)              32,000 Mbit/s     4,000 MB/s
PCI-X QDR                              34,133 Mbit/s     4,266 MB/s
AGP 8x 64-bit                          34,133 Mbit/s     4,266 MB/s
PCI Express (x32 link)                 64,000 Mbit/s     8,000 MB/s
PCI Express 2.0 (x16 link)             64,000 Mbit/s     8,000 MB/s
PCI Express 2.0 (x32 link)             128,000 Mbit/s    16,000 MB/s
QuickPath Interconnect (2.4 GHz)       153,600 Mbit/s    19,200 MB/s
HyperTransport (2.8 GHz, 32-pair)      179,200 Mbit/s    22,400 MB/s
QuickPath Interconnect (3.2 GHz)       204,800 Mbit/s    25,600 MB/s
HyperTransport 3.1 (3.2 GHz, 32-pair)  409,600 Mbit/s    51,200 MB/s

Portable

Device                                Speed (bit/s)     Speed (byte/s)
PC Card 16 bit 255ns Byte mode        31.36 Mbit/s      3.92 MB/s
PC Card 16 bit 255ns Word mode        62.72 Mbit/s      7.84 MB/s
PC Card 16 bit 100ns Byte mode        80 Mbit/s         10 MB/s
PC Card 16 bit 100ns Word mode        160 Mbit/s        20 MB/s
PC Card 32 bit (CardBus) Byte mode    267 Mbit/s        33.33 MB/s
ExpressCard 1.2 USB 2.0 mode          480 Mbit/s        60 MB/s
PC Card 32 bit (CardBus) Word mode    533 Mbit/s        66.66 MB/s
PC Card 32 bit (CardBus) DWord mode   1,067 Mbit/s      133.33 MB/s
ExpressCard 1.2 PCI Express mode      2,500 Mbit/s      312.5 MB/s
ExpressCard 2.0 USB 3.0 mode          4,800 Mbit/s      600 MB/s
ExpressCard 2.0 PCI Express mode      5,000 Mbit/s      625 MB/s

Storage

Device                                                     Speed (bit/s)     Speed (byte/s)
PC Floppy Disk Controller (1.2MB / 1.44MB)                 0.5 Mbit/s        0.062 MB/s
CD Controller (1x)                                         1.171875 Mbit/s   0.146484375 MB/s
MFM                                                        5 Mbit/s          0.625 MB/s
RLL                                                        7.5 Mbit/s        0.9375 MB/s
DVD Controller (1x)                                        11.1 Mbit/s       1.32 MB/s
ESDI                                                       24 Mbit/s         3 MB/s
ATA PIO Mode 0                                             26.4 Mbit/s       3.3 MB/s
HD DVD Controller (1x)                                     36 Mbit/s         4.5 MB/s
Blu-ray Controller (1x)                                    36 Mbit/s         4.5 MB/s
SCSI (Narrow SCSI) (5 MHz)                                 40 Mbit/s         5 MB/s
ATA PIO Mode 1                                             41.6 Mbit/s       5.2 MB/s
ATA PIO Mode 2                                             66.4 Mbit/s       8.3 MB/s
Fast SCSI (8 bits/10 MHz)                                  80 Mbit/s         10 MB/s
ATA PIO Mode 3                                             88.8 Mbit/s       11.1 MB/s
iSCSI over Fast Ethernet                                   100 Mbit/s        12.5 MB/s
ATA PIO Mode 4                                             133.3 Mbit/s      16.7 MB/s
Fast Wide SCSI (16 bits/10 MHz)                            160 Mbit/s        20 MB/s
Ultra SCSI (Fast-20 SCSI) (8 bits/20 MHz)                  160 Mbit/s        20 MB/s
Ultra DMA ATA 33                                           264 Mbit/s        33 MB/s
Ultra Wide SCSI (16 bits/20 MHz)                           320 Mbit/s        40 MB/s
Ultra-2 SCSI 40 (Fast-40 SCSI) (8 bits/40 MHz)             320 Mbit/s        40 MB/s
Ultra DMA ATA 66                                           528 Mbit/s        66 MB/s
Ultra-2 wide SCSI (16 bits/40 MHz)                         640 Mbit/s        80 MB/s
Serial Storage Architecture SSA                            640 Mbit/s        80 MB/s
Ultra DMA ATA 100                                          800 Mbit/s        100 MB/s
Fibre Channel 1GFC (1.0625 GHz)                            850 Mbit/s        106.25 MB/s
iSCSI over Gigabit Ethernet                                1,000 Mbit/s      125 MB/s
Ultra DMA ATA 133                                          1,064 Mbit/s      133 MB/s
Ultra-3 SCSI (Ultra 160 SCSI; Fast-80 Wide SCSI)
  (16 bits/40 MHz DDR)                                     1,280 Mbit/s      160 MB/s
Serial ATA (SATA-150)                                      1,200 Mbit/s      150 MB/s
Fibre Channel 2GFC (2.125 GHz)                             1,700 Mbit/s      212.5 MB/s
Serial ATA 2 (SATA-300)                                    2,400 Mbit/s      300 MB/s
Serial Attached SCSI (SAS)                                 2,400 Mbit/s      300 MB/s
Ultra-320 SCSI (Ultra4 SCSI) (16 bits/80 MHz DDR)          2,560 Mbit/s      320 MB/s
Fibre Channel 4GFC (4.25 GHz)                              3,400 Mbit/s      425 MB/s
Serial ATA (SATA-600)                                      4,800 Mbit/s      600 MB/s
Serial Attached SCSI (SAS) 2                               4,800 Mbit/s      600 MB/s
Ultra-640 SCSI (16 bits/160 MHz DDR)                       5,120 Mbit/s      640 MB/s
Fibre Channel 8GFC (8.50 GHz)                              6,800 Mbit/s      850 MB/s
iSCSI over 10GbE                                           10,000 Mbit/s     1,250 MB/s
FCoE over 10GbE                                            10,000 Mbit/s     1,250 MB/s
iSCSI over InfiniBand 4x                                   40,000 Mbit/s     5,000 MB/s
iSCSI over 100G Ethernet (hypothetical)                    100,000 Mbit/s    12,500 MB/s

Digital video interconnects

Speeds given are from the video source (e.g. video card) to the receiving device (e.g. monitor) only. Out-of-band and reverse signaling channels are not included.

Device                        Speed (bit/s)     Speed (byte/s)
DisplayPort 1 pair            2.7 Gbit/s        0.3375 GB/s
LVDS Display Interface        2.8 Gbit/s        0.35 GB/s
Serial Digital Interface      2.97 Gbit/s       0.37125 GB/s
Single link DVI               3.96 Gbit/s       0.495 GB/s
HDMI v1                       4.9 Gbit/s        0.6125 GB/s
DisplayPort 2 pair            5.4 Gbit/s        0.675 GB/s
Dual link DVI                 7.92 Gbit/s       0.99 GB/s
HDMI v1.3                     10.2 Gbit/s       1.275 GB/s
DisplayPort 4 pairs           10.8 Gbit/s       1.35 GB/s
HDMI Type B                   20.4 Gbit/s       2.55 GB/s

Ques 2. Define and prove sampling theorem using frequency spectrum.

Ans: The definition of proper sampling is quite simple. Suppose you sample a continuous signal in some manner. If you can exactly reconstruct the analog signal from the samples, you must have done the sampling properly. Even if the sampled data appears confusing or incomplete, the key information has been captured if you can reverse the process. Figure 3-3 shows several sinusoids before and after digitization. The continuous line represents the analog signal entering the ADC, while the square markers are the digital signal leaving the ADC. In (a), the analog signal is a constant DC value, a cosine wave of zero frequency. Since the analog signal is a series of straight lines between each of the samples, all of the information needed to reconstruct the analog signal is contained in the digital data. According to our definition, this is proper sampling. The sine wave shown in (b) has a frequency of 0.09 of the sampling rate. This might represent, for example, a 90 cycles/second sine wave being sampled at 1000

samples/second. Expressed in another way, there are 11.1 samples taken over each complete cycle of the sinusoid. This situation is more complicated than the previous case, because the analog signal cannot be reconstructed by simply drawing straight lines between the data points. Do these samples properly represent the analog signal? The answer is yes, because no other sinusoid, or combination of sinusoids, will produce this pattern of samples (within the reasonable constraints listed below). These samples correspond to only one analog signal, and therefore the analog signal can be exactly reconstructed. Again, an instance of proper sampling. In (c), the situation is made more difficult by increasing the sine wave's frequency to 0.31 of the sampling rate. This results in only 3.2 samples per sine wave cycle. Here the samples are so sparse that they don't even appear to follow the general trend of the analog signal. Do these samples properly represent the analog waveform? Again, the answer is yes, and for exactly the same reason. The samples are a unique representation of the analog signal. All of the information needed to reconstruct the continuous waveform is contained in the digital data. How you go about doing this will be discussed later in this chapter. Obviously, it must be more sophisticated than just drawing straight lines between the data points. As strange as it seems, this is proper sampling according to our definition. In (d), the analog frequency is pushed even higher to 0.95 of the sampling rate, with a mere 1.05 samples per sine wave cycle. Do these samples properly represent the data? No, they don't! The samples represent a different sine wave from the one contained in the analog signal. In particular, the original sine wave of 0.95 frequency misrepresents itself as a sine wave of 0.05 frequency in the digital signal. This phenomenon of sinusoids changing frequency during sampling is called aliasing. 
Just as a criminal might take on an assumed name or identity (an alias), the sinusoid assumes another frequency that is not its own. Since the digital data is no longer uniquely related to a particular analog signal, an unambiguous reconstruction is impossible. There is nothing in the sampled data to suggest that the original analog signal had a frequency of 0.95 rather than 0.05. The sine wave has hidden its true identity completely; the perfect crime has been committed! According to our definition, this is an example of improper sampling. This line of reasoning leads to a milestone in DSP, the sampling theorem. Frequently this is called the Shannon sampling theorem, or the Nyquist sampling theorem, after the authors of 1940s papers on the topic. The sampling theorem indicates that a continuous signal can be properly sampled only if it does not contain frequency components above one-half of the sampling rate. For instance, a sampling rate of 2,000 samples/second requires the analog signal to be composed of frequencies below 1,000 cycles/second. If frequencies above this limit are present in the signal, they will be aliased to frequencies between 0 and 1,000 cycles/second, combining with whatever information was legitimately there.
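The aliasing just described is easy to demonstrate numerically: sampling a sinusoid at 0.95 of the sampling rate yields exactly the same sample values as an inverted sinusoid at 0.05 (a small demonstration, not from the text):

```python
import numpy as np

n = np.arange(32)            # 32 sample instants, with the sampling rate normalized to 1

# A 0.95-of-fs sine and a 0.05-of-fs sine (negated) produce identical samples,
# because sin(2*pi*0.95*n) = sin(2*pi*n - 2*pi*0.05*n) = -sin(2*pi*0.05*n).
x_high = np.sin(2 * np.pi * 0.95 * n)
x_alias = np.sin(2 * np.pi * 0.05 * n)

print(np.allclose(x_high, -x_alias))  # True: 0.95 aliases to 0.05, inverted
```

No procedure applied to the samples alone can tell the two sinusoids apart, which is exactly why the reconstruction is ambiguous.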

Two terms are widely used when discussing the sampling theorem: the Nyquist frequency and the Nyquist rate. Unfortunately, their meaning is not standardized. To understand this, consider an analog signal composed of frequencies between DC and 3 kHz. To properly digitize this signal it must be sampled at 6,000 samples/sec (6 kHz) or higher. Suppose we choose to sample at 8,000 samples/sec (8 kHz), allowing frequencies between DC and 4 kHz to be properly represented. In this situation there are four important frequencies: (1) the highest frequency in the signal, 3 kHz; (2) twice this frequency, 6 kHz; (3) the sampling rate, 8 kHz; and (4) one-half the sampling rate, 4 kHz. Which of these four is the Nyquist frequency and which is the Nyquist rate? It depends on who you ask! All of the possible combinations are used. Fortunately, most authors are careful to define how they are using the terms. In this book, they are both used to mean one-half the sampling rate.

Figure 3-4 shows how frequencies are changed during aliasing. The key point to remember is that a digital signal cannot contain frequencies above one-half the sampling rate (i.e., the Nyquist frequency/rate). When the frequency of the continuous wave is below the Nyquist rate, the frequency of the sampled data matches it. However, when the continuous signal's frequency is above the Nyquist rate, aliasing changes the frequency into something that can be represented in the sampled data. As shown by the zigzagging line in Fig. 3-4, every continuous frequency above the Nyquist rate has a corresponding digital frequency between zero and one-half the sampling rate. If there happens to be a sinusoid already at this lower frequency, the aliased signal will add to it, resulting in a loss of information. Aliasing is a double curse; information can be lost about the higher and the lower frequency. Suppose you are given a digital signal containing a frequency of 0.2 of the sampling rate. If this signal were obtained by proper sampling, the original analog signal must have had a frequency of 0.2. If aliasing took place during sampling, the digital frequency of 0.2 could have come from any one of an infinite number of frequencies in the analog signal: 0.2, 0.8, 1.2, 1.8, 2.2, … . Just as aliasing can change the frequency during sampling, it can also change the phase. For example, look back at the aliased signal in Fig. 3-3d. The aliased digital signal is inverted from the original analog signal; one is a sine wave while the other is a negative sine wave. In other words, aliasing has changed the frequency and introduced a 180° phase shift. Only two phase shifts are possible: 0° (no phase shift) and 180° (inversion). The zero phase shift occurs for analog frequencies of 0 to 0.5, 1.0 to 1.5, 2.0 to 2.5, etc. An inverted phase occurs for analog frequencies of 0.5 to 1.0, 1.5 to 2.0, 2.5 to 3.0, and so on.
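The frequency-folding and phase behaviour just described can be captured in a small helper (an illustrative sketch; frequencies are expressed as fractions of the sampling rate):

```python
def alias(f_analog: float) -> tuple[float, int]:
    """Digital frequency (as a fraction of fs) and phase shift in degrees
    produced by sampling an analog sinusoid at f_analog (also a fraction of fs)."""
    f = round(f_analog % 1.0, 12)      # the spectrum repeats every multiple of fs
    if f <= 0.5:
        return f, 0                    # lands in 0..0.5: no phase shift
    return round(1.0 - f, 12), 180     # lands in 0.5..1.0: folded and inverted

print(alias(0.95))  # (0.05, 180): the inverted alias seen in Fig. 3-3d
print(alias(1.2))   # (0.2, 0)
print(alias(0.2))   # (0.2, 0): proper sampling leaves the frequency unchanged
```

Note that 0.2, 0.8, 1.2, 1.8, 2.2, … all map to the same digital frequency of 0.2, matching the list of indistinguishable analog frequencies above.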
Now we will dive into a more detailed analysis of sampling and how aliasing occurs. Our overall goal is to understand what happens to the information when a signal is converted from a continuous to a discrete form. The problem is, these are very different things; one is a continuous waveform while the other is an array of numbers. This "apples-to-oranges" comparison makes the analysis very difficult. The solution is to introduce a theoretical concept called the impulse train. Figure 3-5a shows an example analog signal. Figure (c) shows the signal sampled by using an impulse train. The impulse train is a continuous signal consisting of a series of narrow spikes (impulses) that match the original signal at the sampling instants. Each impulse is infinitesimally narrow, a concept that will be discussed in Chapter 13. Between these sampling times the value of the waveform is zero. Keep in mind that the impulse train is a theoretical concept, not a waveform that can exist in an electronic circuit. Since both the original analog signal and the impulse train are continuous waveforms, we can make an "apples-to-apples" comparison between the two.

Now we need to examine the relationship between the impulse train and the discrete signal (an array of numbers). This one is easy; in terms of information content, they are identical. If one is known, it is trivial to calculate the other. Think of these as different ends of a bridge crossing between the analog and digital worlds. The corresponding frequency spectra of these signals are displayed in the right-hand column. This should be a familiar concept from your knowledge of electronics; every waveform can be viewed as being composed of sinusoids of varying amplitude and frequency. Later chapters will discuss the frequency domain in detail. (You may want to revisit this discussion after becoming more familiar with frequency spectra). Figure (a) shows an analog signal we wish to sample. As indicated by its frequency spectrum in (b), it is composed only of frequency components between 0 and about 0.33·fs, where fs is the sampling frequency we intend to use. For example, this might be a

speech signal that has been filtered to remove all frequencies above 3.3 kHz. Correspondingly, fs would be 10 kHz (10,000 samples/second), our intended sampling rate. Sampling the signal in (a) by using an impulse train produces the signal shown in (c), and its frequency spectrum shown in (d). This spectrum is a duplication of the spectrum of the original signal. Each multiple of the sampling frequency, fs, 2fs, 3fs, 4fs, etc., has received a copy and a left-for-right flipped copy of the original frequency spectrum. The copy is called the upper sideband, while the flipped copy is called the lower sideband. Sampling has generated new frequencies. Is this proper sampling? The answer is yes, because the signal in (c) can be transformed back into the signal in (a) by eliminating all frequencies above one-half of fs. That is, an analog low-pass filter will convert the impulse train, (c), back into the original analog signal, (a). If you are already familiar with the basics of DSP, here is a more technical explanation of why this spectral duplication occurs. (Ignore this paragraph if you are new to DSP.) In the time domain, sampling is achieved by multiplying the original signal by an impulse train of unity amplitude spikes. The frequency spectrum of this unity amplitude impulse train is also a unity amplitude impulse train, with the spikes occurring at multiples of the sampling frequency, fs, 2fs, 3fs, 4fs, etc. When two time domain signals are multiplied, their frequency spectra are convolved. This results in the original spectrum being duplicated to the location of each spike in the impulse train's spectrum. Viewing the original signal as composed of both positive and negative frequencies accounts for the upper and lower sidebands, respectively. This is the same as amplitude modulation. Figure (e) shows an example of improper sampling, resulting from too low a sampling rate.
The analog signal still contains frequencies up to 3.3 kHz, but the sampling rate has been lowered to 5 kHz. Notice that the spectral duplications along the horizontal axis are spaced closer in (f) than in (d). The frequency spectrum, (f), shows the problem: the duplicated portions of the spectrum have invaded the band between zero and one-half of the sampling frequency. Although (f) shows these overlapping frequencies as retaining their separate identity, in actual practice they add together, forming a single confused mess. Since there is no way to separate the overlapping frequencies, information is lost, and the original signal cannot be reconstructed. This overlap occurs exactly when the analog signal contains frequencies greater than one-half the sampling rate; in other words, proper sampling requires that the signal contain no frequencies above one-half the sampling rate, and we have proven the sampling theorem.

Book ID: B0026
Ques 3. Explain the concept of Path Clearance. Ans: In optics and radio communications (indeed, in any situation involving the radiation of waves, which includes electrodynamics, acoustics, and gravitational

radiation), a Fresnel (pronounced /freɪˈnɛl/ fray-NELL) zone, named for physicist Augustin-Jean Fresnel, is one of a (theoretically infinite) number of concentric ellipsoids of revolution which define volumes in the radiation pattern of a (usually) circular aperture. Fresnel zones result from diffraction by the circular aperture. The cross section of the first Fresnel zone is circular. Subsequent Fresnel zones are annular in cross section, and concentric with the first. To maximize receiver strength, one needs to minimize the effect of the out of phase signals by removing obstacles from the radio frequency line of sight (RF LoS). The strongest signals are on the direct line between transmitter and receiver and always lie in the 1st Fresnel Zone.

Fresnel zones

If unobstructed, radio waves will travel in a straight line from the transmitter to the receiver. But if there are obstacles near the path, the radio waves reflecting off those objects may arrive out of phase with the signals that travel directly, and reduce the power of the received signal. On the other hand, the reflection can enhance the power of the received signal if the reflection and the direct signals arrive in phase. Sometimes this results in the counterintuitive finding that reducing the height of an antenna increases the (S+N)/N ratio. Fresnel provided a means to calculate where the zones are, that is, where obstacles will cause mostly in-phase or mostly out-of-phase reflections between the transmitter and the receiver. Obstacles in the first Fresnel zone will create signals that are 0 to 90 degrees out of phase, in the second zone 90 to 270 degrees out of phase, in the third zone 270 to 450 degrees out of phase, and so on. Odd-numbered zones are constructive and even-numbered zones are destructive.

Determining Fresnel zone clearance

Several examples of how the Fresnel zone can be disrupted.

The concept of Fresnel zone clearance may be used to analyze interference by obstacles near the path of a radio beam. The first zone must be kept largely free from obstructions to avoid interfering with radio reception. However, some obstruction of the Fresnel zones can often be tolerated: as a rule of thumb, the maximum allowable obstruction is 40%, but the recommended obstruction is 20% or less. To establish the Fresnel zones, first determine the RF line of sight (RF LoS), which in simple terms is a straight line between the transmitting and receiving antennas. The zone surrounding the RF LoS is then the Fresnel zone. The general equation for calculating the Fresnel zone radius at any point P in between the endpoints of the link is the following:

    Fn = sqrt( (n × λ × d1 × d2) / (d1 + d2) )

where,
Fn = the nth Fresnel zone radius in metres
d1 = the distance of P from one end in metres
d2 = the distance of P from the other end in metres
λ = the wavelength of the transmitted signal in metres

The cross-section radius of the first Fresnel zone is largest at the centre of the RF LoS, where it can be calculated as:

    r = 72.05 × sqrt( D / (4 × f) )

where
r = radius in feet
D = total distance in miles
f = frequency transmitted in gigahertz

Or, equivalently in metric units:

    r = 17.32 × sqrt( D / (4 × f) )

where
r = radius in metres
D = total distance in kilometres
f = frequency transmitted in gigahertz
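As a quick check of these formulas, here is a small calculator built from the general equation (a sketch; the 2.4 GHz, 10 km link is an illustrative example, not from the text):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fresnel_radius(n: int, d1_m: float, d2_m: float, f_hz: float) -> float:
    """Radius of the nth Fresnel zone at a point d1/d2 metres from the two ends."""
    wavelength = C / f_hz
    return math.sqrt(n * wavelength * d1_m * d2_m / (d1_m + d2_m))

# First-zone radius at the midpoint of a 10 km, 2.4 GHz link:
r = fresnel_radius(1, 5_000, 5_000, 2.4e9)
print(f"{r:.1f} m")   # about 17.7 m
# Per the rule of thumb above, at most 20-40% of this radius should be obstructed.
```

The same example run through the metric shortcut, 17.32 × sqrt(10 / (4 × 2.4)), gives the same answer, confirming the two forms agree.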

Ques 4. Explain Tropospheric Forward Scatter Systems. Ans: TROPOSPHERIC FORWARD SCATTERING When a radio wave passing through the troposphere meets turbulence, it makes an abrupt change in velocity. This causes a small amount of the energy to be scattered in a forward direction and returned to Earth at distances beyond the horizon. This phenomenon is repeated as the radio wave meets other turbulences in its path. The total received signal is an accumulation of the energy received from each of the turbulences. This scattering mode of propagation enables vhf and uhf signals to be transmitted far beyond the normal line-of-sight.

To better understand how these signals are transmitted over greater distances, you must first consider the propagation characteristics of the space wave used in VHF and UHF line-of-sight communications. When the space wave is transmitted, it undergoes very little attenuation within the line-of-sight horizon. When it reaches the horizon, the wave is diffracted and follows the Earth's curvature. Beyond the horizon, the rate of attenuation increases very rapidly and signals soon become very weak and unusable. Tropospheric scattering, on the other hand, provides a usable signal at distances beyond the point where the diffracted space wave drops to an unusable level. This is because of the height at which scattering takes place. The turbulence that causes the scattering can be visualized as a relay station located above the horizon; it receives the transmitted energy and then reradiates it in a forward direction to some point beyond the line-of-sight distance. A high-gain receiving antenna aimed toward this scattered energy can then capture it. The magnitude of the received signal depends on the number of turbulences causing scatter in the desired direction and the gain of the receiving antenna. The scatter area used for tropospheric scatter is known as the scatter volume.

The angle at which the receiving antenna must be aimed to capture the scattered energy is called the scatter angle. The scatter volume and scatter angle are shown in figure 2-26. The signal take-off angle (transmitting antenna's angle of radiation) determines the height of the scatter volume and the size of the scatter angle. A low signal take-off angle produces a low scatter volume, which in turn permits a receiving antenna that is aimed at a low angle to the scatter volume to capture the scattered energy. As the signal take-off angle is increased, the height of the scatter volume is increased. When this occurs, the amount of received energy decreases.

Ques 5. Explain various light sources for Optical Fiber Communication.

Ans: Optical fiber communication

Optical fiber can be used as a medium for telecommunication and networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters. Additionally, the per-channel light signals propagating in the fiber can be modulated at rates as high as 111 gigabits per second, although 10 or 40 Gb/s is typical in deployed systems. Each fiber can carry many independent channels, each using a different wavelength of light (wavelength-division multiplexing (WDM)). The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the FEC overhead, multiplied by the number of channels (usually up to eighty in commercial dense WDM systems as of 2008). The current laboratory fiber optic data rate record, held by Bell Labs in Villarceaux, France, is multiplexing 155 channels, each carrying 100 Gbps over a 7000 km fiber.
Over short distances, such as networking within a building, fiber saves space in cable ducts because a single fiber can carry much more data than a single electrical cable. Fiber is also immune to electrical interference; there is no cross-talk between signals in different cables and no pickup of environmental noise. Non-armored fiber cables do not conduct electricity, which makes fiber a good solution for protecting communications equipment located in high voltage environments such as power generation facilities, or metal communication structures prone to lightning strikes. They can also be used in environments where explosive fumes are present, without danger of ignition. Wiretapping is more difficult compared to electrical connections, and there are concentric dual core fibers that are said to be tap-proof. Although fibers can be made out of transparent plastic, glass, or a combination of the two, the fibers used in long-distance telecommunications applications are always glass, because of the lower optical attenuation. Both multi-mode and single-mode fibers are used in communications, with multi-mode fiber used mostly for short distances, up to 550 m (600 yards), and single-mode fiber used for longer distance links. Because of the tighter tolerances required to couple light into and between single-mode fibers (core diameter about 10 micrometers), single-mode transmitters, receivers, amplifiers and other components are generally more expensive than multi-mode components.
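The net data rate relationship described above (per-channel rate less FEC overhead, times the channel count) can be sketched as follows; the 7% FEC overhead figure is an illustrative assumption, not a number from the text.

```python
def net_fiber_rate_gbps(per_channel_gbps, fec_overhead_frac, channels):
    """Net data rate: per-channel rate less FEC overhead, times the channel count."""
    return per_channel_gbps * (1.0 - fec_overhead_frac) * channels

# 80 DWDM channels of 10 Gb/s each, with an assumed 7% FEC overhead
print(net_fiber_rate_gbps(10.0, 0.07, 80))  # about 744 Gb/s net
```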

Light Source for the Optical Fibers

Light-emitting diode (LED): an LED is an electronic light source. LEDs are used as indicator lamps in many kinds of electronics and increasingly for lighting. LEDs work by the effect of electroluminescence, discovered by accident in 1907. The LED was introduced as a practical electronic component in 1962.[2] All early devices emitted low-intensity red light, but modern LEDs are available across the visible, ultraviolet and infrared wavelengths, with very high brightness. LEDs are based on the semiconductor diode. When the diode is forward biased (switched on), electrons are able to recombine with holes and energy is released in the form of light. This effect is called electroluminescence, and the color of the light is determined by the energy gap of the semiconductor. The LED is usually small in area (less than 1 mm²), with integrated optical components to shape its radiation pattern and assist in reflection.

LEDs present many advantages over incandescent light sources, including lower energy consumption, longer lifetime, improved robustness, smaller size, faster switching, and greater durability and reliability. However, they are relatively expensive and require more precise current and heat management than traditional light sources. Current LED products for general lighting have higher costs than fluorescent lamp sources of comparable output. Applications of LEDs are diverse. They are used as low-energy indicators but also as replacements for traditional light sources in general lighting, automotive lighting and traffic signals. The compact size of LEDs has allowed new text and video displays and sensors to be developed, while their high switching rates are useful in advanced communications technology.

Laser

Light Amplification by Stimulated Emission of Radiation (LASER) is a mechanism for emitting light within the electromagnetic radiation region of the spectrum, via the process of stimulated emission. The emitted laser light is (usually) a spatially coherent, narrow low-divergence beam that can be manipulated with lenses. In laser technology, "coherent light" denotes a light source that produces (emits) in-step waves of identical frequency and phase.[1] The laser's beam of coherent light differentiates it from light sources that emit incoherent light beams of random phase varying with time and position. Laser light is generally narrow-wavelength (near-monochromatic) light; yet, there are lasers that emit a broad-spectrum light, or emit simultaneously at different wavelengths.

The gain medium of a laser is a material of controlled purity, size, concentration, and shape, which amplifies the beam by the process of stimulated emission. It can be of any state: gas, liquid, solid or plasma. The gain medium absorbs pump energy, which raises some electrons into higher-energy ("excited") quantum states. Particles can interact with light both by absorbing photons and by emitting photons. Emission can be spontaneous or stimulated. In the latter case, the photon is emitted in the same direction as the light that is passing by. When the number of particles in one excited state exceeds the number of particles in some lower-energy state, population inversion is achieved and the amount of stimulated emission due to light that passes through is larger than the amount of absorption. Hence, the light is amplified. By itself, this makes an optical amplifier. When an optical amplifier is placed inside a resonant optical cavity, one obtains a laser.

The light generated by stimulated emission is very similar to the input signal in terms of wavelength, phase, and polarization. This gives laser light its characteristic coherence, and allows it to maintain the uniform polarization and often monochromaticity established by the optical cavity design.

The optical cavity, a type of cavity resonator, contains a coherent beam of light between reflective surfaces so that the light passes through the gain medium more than once before it is emitted from the output aperture or lost to diffraction or absorption. As light circulates through the cavity, passing through the gain medium, if the gain (amplification) in the medium is stronger than the resonator losses, the power of the circulating light can rise exponentially. But each stimulated emission event returns a particle from its excited state to the ground state, reducing the capacity of the gain medium for further amplification. When this effect becomes strong, the gain is said to be saturated.
The balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity; this equilibrium determines the operating point of the laser. If the chosen pump power is too small, the gain is not sufficient to overcome the resonator losses, and the laser will emit only very small light powers. The minimum pump power needed to begin laser action is called the lasing threshold. The gain medium will amplify any photons passing through it, regardless of direction; but only the photons aligned with the cavity manage to pass more than once through the medium and so have significant amplification. The beam in the cavity and the output beam of the laser, if they occur in free space rather than waveguides (as in an optical fiber laser), are, at best, low-order Gaussian beams. However, this is rarely the case with powerful lasers. If the beam is not a low-order Gaussian shape, the transverse modes of the beam can be described as a superposition of Hermite-Gaussian or Laguerre-Gaussian beams (for stable-cavity lasers). Unstable laser resonators, on the other hand, have been shown to produce fractal-shaped beams.[4] The beam may be highly collimated, that is, parallel without diverging. However, a perfectly collimated beam cannot be created, due to diffraction. The beam remains collimated over a distance which varies with the square of the beam diameter, and eventually diverges at an angle which varies inversely with the beam diameter. Thus, a beam generated by a small laboratory laser such as a helium-neon laser spreads to about 1.6 kilometers (1 mile) diameter if shone from the Earth to the Moon. By comparison, the output of a typical semiconductor laser, due to its small diameter, diverges almost as soon as it leaves the aperture, at an angle of anything up to 50°. However, such a divergent beam can be transformed into a collimated beam by means of a lens. In contrast, the light from non-laser light sources cannot be collimated by optics as well. Although the laser phenomenon was discovered with the help of quantum physics, it is not essentially more quantum mechanical than other light sources. The operation of a free-electron laser can be explained without reference to quantum mechanics.

PhotoDiode/Optical Detector

A transducer is a device that converts input energy of one form into output energy of another. An optical detector is a transducer that converts an optical signal into an electrical signal. It does this by generating an electrical current proportional to the intensity of incident optical radiation. The relationship between the input optical radiation and the output electrical current is given by the detector responsivity.

OPTICAL DETECTOR PROPERTIES

Fiber optic communications systems require that optical detectors meet specific performance and compatibility requirements. Many of the requirements are similar to those of an optical source. Fiber optic systems require that optical detectors:

• Be compatible in size with low-loss optical fibers to allow for efficient coupling and easy packaging.
• Have a high sensitivity at the operating wavelength of the optical source.
• Have a sufficiently short response time (sufficiently wide bandwidth) to handle the system's data rate.
• Contribute low amounts of noise to the system.
• Maintain stable operation in changing environmental conditions, such as temperature.
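The responsivity relation mentioned above (output current proportional to incident optical power) amounts to a one-line calculation; the 0.8 A/W responsivity and 1 mW power below are illustrative assumptions.

```python
def photocurrent_a(responsivity_a_per_w, optical_power_w):
    """Detector output current I = R * P: responsivity times incident optical power."""
    return responsivity_a_per_w * optical_power_w

# An assumed responsivity of 0.8 A/W with 1 mW of incident light
print(photocurrent_a(0.8, 1e-3))  # about 0.8 mA
```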

Optical detectors that meet many of these requirements and are suitable for fiber optic systems are semiconductor photodiodes. The principal optical detectors used in fiber optic systems include semiconductor positive-intrinsic-negative (PIN) photodiodes and avalanche photodiodes (APDs).

Avalanche photodiode (APD)

An avalanche photodiode (APD) is a highly sensitive semiconductor electronic device that exploits the photoelectric effect to convert light to electricity. APDs can be thought of as photodetectors that provide a built-in first stage of gain through avalanche multiplication. From a functional standpoint, they can be regarded as the semiconductor analog to photomultipliers. By applying a high reverse bias voltage (typically 100-200 V in silicon), APDs show an internal current gain effect (around 100) due to impact ionization (avalanche effect). However, some silicon APDs employ alternative doping and beveling techniques compared to traditional APDs that allow greater voltage to be applied (> 1500 V) before breakdown is reached and hence a greater operating gain (> 1000). In general, the higher the reverse voltage, the higher the gain. Among the various expressions for the APD multiplication factor (M), an instructive expression is given by the formula

M = 1 / (1 - ∫0^L α(x) dx)

where L is the space charge boundary for electrons and α is the multiplication coefficient for electrons (and holes). This coefficient has a strong dependence on the applied electric field strength, temperature, and doping profile. Since APD gain varies strongly with the applied reverse bias and temperature, it is necessary to control the reverse voltage to keep a stable gain. Avalanche photodiodes are therefore more sensitive than other semiconductor photodiodes. If very high gain is needed (10^5 to 10^6), certain APDs can be operated with a reverse voltage above the APD's breakdown voltage. In this case, the APD needs to have its signal current limited and quickly diminished. Active and passive current quenching techniques have been used for this purpose. APDs that operate in this high-gain regime are in Geiger mode. This mode is particularly useful for single-photon detection, provided that the dark count event rate is sufficiently low. A typical application for APDs is laser rangefinders and long-range fiber optic telecommunication. New applications include positron emission tomography and particle physics.[1] APD arrays are becoming commercially available.
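If the multiplication coefficient α is taken as constant across the space charge region, the integral in the multiplication-factor expression reduces to αL, giving M = 1 / (1 - αL). A minimal numeric sketch under that simplifying assumption:

```python
def apd_gain(alpha_per_m, depletion_width_m):
    """Multiplication factor M = 1 / (1 - alpha * L), assuming a constant alpha."""
    product = alpha_per_m * depletion_width_m
    if product >= 1.0:
        raise ValueError("alpha * L >= 1 corresponds to avalanche breakdown")
    return 1.0 / (1.0 - product)

# alpha * L = 0.9 gives a gain of about 10
print(apd_gain(9.0e5, 1.0e-6))
```

Note how the gain diverges as αL approaches 1, mirroring the strong dependence of APD gain on bias voltage described above.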

APD applicability and usefulness depends on many parameters. Two of the larger factors are: quantum efficiency, which indicates how well incident optical photons are absorbed and then used to generate primary charge carriers; and total leakage current, which is the sum of the dark current and photocurrent and noise. Electronic dark noise components are series and parallel noise. Series noise, which is the effect of shot noise, is basically proportional to the APD capacitance while the parallel noise is associated with the fluctuations of the APD bulk and surface dark currents. Another noise source is the excess noise factor, F. It describes the statistical noise that is inherent with the stochastic APD multiplication process.

August 2009 Bachelor of Science in Information Technology (BScIT) – Semester 4 BT0046 – Communication Technology – 2 Credits (Book ID: B0025 & B0026) Assignment Set – 2 (30 Marks) Answer all questions 5 x 6 = 30 Book ID: B0025

1. Briefly explain different layers of digital communication.

Ans: Digital communications is the physical transfer of data (a digital bit stream) over a point-to-point or point-to-multipoint transmission medium. Examples of such media are copper wires, optical fibers, wireless communication media, and storage media. The data is often represented as an electromagnetic signal, such as an electrical voltage signal, a radio wave or microwave signal, or an infrared signal.

While analog communication represents a continuously varying signal, a digital transmission can be broken down into discrete messages. The messages are either represented by a sequence of pulses by means of a line code (baseband transmission), or by a limited set of analogue wave forms (passband transmission), using a digital modulation method. According to the most common definition of digital signal, both baseband and passband signals representing bit-streams are considered as digital transmission, while an alternative definition considers only the baseband signal as digital, and the passband transmission as a form of digital-to-analog conversion. Data transmitted may be digital messages originating from a data source, for example a computer or a keyboard. It may also be an analog signal such as a phone call or a video signal, digitized into a bit-stream, for example using pulse-code modulation (PCM) or more advanced source coding (data compression) schemes. This source coding and decoding is carried out by codec equipment.

Protocol layers and sub-topics

Courses and textbooks in the field of data transmission typically deal with the following protocol layers and topics:

* Layer 1, the physical layer:
  o Channel coding, including:
    + Digital modulation methods
    + Line coding methods
    + Forward error correction (FEC)
  o Bit synchronization
  o Multiplexing
  o Equalization
  o Channel models
* Layer 2, the data link layer:
  o Channel access schemes, media access control (MAC)
  o Packet mode communication and frame synchronization
  o Error detection and automatic repeat request (ARQ)
  o Flow control
* Layer 6, the presentation layer:
  o Source coding (digitization and data compression), and information theory
  o Cryptography (may occur at any layer)

Baseband or passband transmission

The physically transmitted signal may be one of the following:

1. A baseband signal ("digital-over-digital" transmission): A sequence of electrical pulses or light pulses produced by means of a line coding scheme such as Manchester coding. This is typically used in serial cables, wired local area networks such as Ethernet, and in optical fiber communication. It results in a pulse amplitude modulated signal, also known as a pulse train.

2. A passband signal ("digital-over-analog" transmission): A modulated sine wave signal representing a digital bit-stream. Note that this is in some textbooks considered as analog transmission, but in most books as digital transmission. The signal is produced by means of a digital modulation method such as PSK, QAM or FSK. The modulation and demodulation is carried out by modem equipment. This is used in wireless communication, and over telephone network local-loop and cable-TV networks.
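The baseband line coding named in option 1 (Manchester coding) can be sketched with a minimal encoder. The IEEE 802.3 polarity convention (0 becomes high-low, 1 becomes low-high) is assumed here; the opposite convention also appears in the literature.

```python
def manchester_encode(bits):
    """IEEE 802.3 Manchester: 0 -> (1, 0), 1 -> (0, 1); two half-bits per data bit."""
    out = []
    for b in bits:
        out.extend((0, 1) if b else (1, 0))
    return out

print(manchester_encode([1, 0, 1, 1]))  # [0, 1, 1, 0, 0, 1, 0, 1]
```

The guaranteed mid-bit transition is what lets the receiver recover the clock from the pulse train itself.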

Serial and parallel transmission

In telecommunications, serial transmission is the sequential transmission of the signal elements of a group representing a character or other entity of data. Digital serial transmissions are bits sent over a single wire, frequency or optical path sequentially. Because it requires less signal processing and offers fewer chances for error than parallel transmission, the transfer rate of each individual path may be faster. This can be used over longer distances, as a check digit or parity bit can be sent along it easily.

In telecommunications, parallel transmission is the simultaneous transmission of the signal elements of a character or other entity of data. In digital communications, parallel transmission is the simultaneous transmission of related signal elements over two or more separate paths. Multiple electrical wires are used which can transmit multiple bits simultaneously, which allows for higher data transfer rates than can be achieved with serial transmission. This method is used internally within the computer, for example in the internal buses, and sometimes externally for such things as printers. The major issue with this is "skewing": because the wires in parallel data transmission have slightly different properties (not intentionally), some bits may arrive before others, which may corrupt the message. A parity bit can help to reduce this. Electrical wire parallel data transmission is therefore less reliable for long distances, because corrupt transmissions are far more likely.

Types of communication channels

* Simplex
* Half-duplex
* Full-duplex
* Point-to-point
* Multi-drop:
  o Bus network
  o Ring network
  o Star network
  o Mesh network
  o Wireless network
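The parity bit mentioned in the serial and parallel transmission discussion can be sketched as a simple even-parity check:

```python
def even_parity_bit(bits):
    """Even parity: choose the parity bit so the total count of 1s is even."""
    return sum(bits) % 2

def check_even_parity(bits_with_parity):
    """A frame is valid when the 1s count (data plus parity) is even."""
    return sum(bits_with_parity) % 2 == 0

data = [0, 1, 0, 0, 0, 0, 0, 1]          # the ASCII code for 'A'
frame = data + [even_parity_bit(data)]
print(frame, check_even_parity(frame))    # parity bit 0; frame checks out
```

A single flipped bit makes the check fail; two flipped bits cancel out, which is why parity only detects odd numbers of errors.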

Asynchronous and synchronous data transmission

Asynchronous transmission uses start and stop bits to signify the beginning and end of a character. An 8-bit ASCII character would therefore actually be transmitted using 10 bits; e.g., an "A" ("0100 0001") would become "1 0100 0001 0". The extra one (or zero, depending on the parity bit) at the start and end of the transmission tells the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data are sent intermittently, as opposed to in a solid stream. In the previous example the start and stop bits are in bold. The start and stop bits must be of opposite polarity. This allows the receiver to recognize when the second packet of information is being sent.

Synchronous transmission uses no start and stop bits, but instead synchronizes transmission speeds at both the receiving and sending end of the transmission using clock signals built into each component. A continual stream of data is then sent between the two nodes. Because there are no start and stop bits, the data transfer rate is quicker, although more errors will occur, as the clocks will eventually get out of sync and the receiving device would have the wrong time agreed in the protocol for sending/receiving data, so some bytes could become corrupted (by losing bits). Ways to get around this problem include re-synchronization of the clocks and use of check digits to ensure the byte is correctly interpreted and received.

2. Explain PCM with a suitable block diagram.

Ans: Pulse-code modulation (PCM) is a digital representation of an analog signal where the magnitude of the signal is sampled regularly at uniform intervals, then quantized to a series of symbols in a numeric (usually binary) code. PCM has been used in digital telephone systems and 1980s-era electronic musical keyboards. It is also the standard form for digital audio in computers and the compact disc "red book" format. It is also standard in digital video, for example, using ITU-R BT.601. Uncompressed PCM is not typically used for video in standard definition consumer applications such as DVD or DVR because the bit rate required is far too high.

Nomenclature

The word pulse in the term Pulse-Code Modulation refers to the "pulses" to be found in the transmission line. This perhaps is a natural consequence of this technique having evolved alongside two analog methods, pulse width modulation and pulse position modulation, in which the information to be encoded is in fact represented by discrete signal pulses of varying width or position, respectively. In this respect, PCM bears little resemblance to these other forms of signal encoding, except that all can be used in time division multiplexing, and the binary numbers of the PCM codes are represented as electrical pulses. The device that performs the coding and decoding function in a telephone circuit is called a codec.

Modulation

Sampling and quantization of a signal (red) for 4-bit PCM

In the diagram, a sine wave (red curve) is sampled and quantized for PCM. The sine wave is sampled at regular intervals, shown as ticks on the x-axis. For each sample, one of the available values (ticks on the y-axis) is chosen by some algorithm (in this case, the floor function is used). This produces a fully discrete representation of the input signal (shaded area) that can be easily encoded as digital data for storage or manipulation. For this sine wave example, we can verify that the quantized values at the sampling moments are 7, 9, 11, 12, 13, 14, 14, 15, 15, 15, 14, etc.
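The sample-and-quantize step just described (scale the sample into the 4-bit range, then apply the floor function) can be sketched as follows; the sine amplitude, offset and sample spacing here are illustrative assumptions, not the exact parameters of the figure, so the code values differ slightly.

```python
import math

def quantize_4bit(x):
    """Map x in [0.0, 1.0) onto one of 16 levels (0-15) using the floor rule."""
    return min(15, math.floor(x * 16))

# Sample a sine shifted into [0, 1) at assumed regular intervals, then encode
samples = [0.5 + 0.499 * math.sin(2 * math.pi * n / 24) for n in range(6)]
codes = [quantize_4bit(s) for s in samples]
nibbles = [format(c, "04b") for c in codes]
print(codes)
print(nibbles)
```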

Encoding these values as binary numbers would result in the following set of nibbles: 0111, 1001, 1011, 1100, 1101, 1110, 1110, 1111, 1111, 1111, 1110, etc. These digital values could then be further processed or analyzed by a purpose-specific digital signal processor or general purpose CPU. Several Pulse Code Modulation streams could also be multiplexed into a larger aggregate data stream, generally for transmission of multiple streams over a single physical link. This technique is called time-division multiplexing, or TDM, and is widely used, notably in the modern public telephone system. There are many ways to implement a real device that performs this task. In real systems, such a device is commonly implemented on a single integrated circuit that lacks only the clock necessary for sampling, and is generally referred to as an ADC (Analog-to-Digital converter). These devices will produce on their output a binary representation of the input whenever they are triggered by a clock signal, which would then be read by a processor of some sort.

Demodulation

To produce output from the sampled data, the procedure of modulation is applied in reverse. After each sampling period has passed, the next value is read and a signal is shifted to the new value. As a result of these transitions, the signal will have a significant amount of high-frequency energy. To smooth out the signal and remove these undesirable aliasing frequencies, the signal would be passed through analog filters that suppress energy outside the expected frequency range (that is, greater than the Nyquist frequency fs / 2). Some systems use digital filtering to remove some of the aliasing, converting the signal from digital to analog at a higher sample rate such that the analog filter required for anti-aliasing is much simpler.
In some systems, no explicit filtering is done at all; as it's impossible for any system to reproduce a signal with infinite bandwidth, inherent losses in the system compensate for the artifacts, or the system simply does not require much precision. The sampling theorem suggests that practical PCM devices, provided a sampling frequency that is sufficiently greater than that of the input signal, can operate without introducing significant distortions within their designed frequency bands. The electronics involved in producing an accurate analog signal from the discrete data are similar to those used for generating the digital signal. These devices are DACs (digital-to-analog converters), and operate similarly to ADCs. They produce on their output a voltage or current (depending on type) that represents the value presented on their inputs. This output would then generally be filtered and amplified for use.

Limitations

There are two sources of impairment implicit in any PCM system:

• Choosing a discrete value near the analog signal for each sample (quantization error). For a quantization step size Δ, the quantization error swings between -Δ/2 and +Δ/2. In the ideal case (with a fully linear ADC) it is equally distributed over this interval, so its mean equals zero while its RMS value equals Δ/√12.

• Between samples no measurement of the signal is made; due to the sampling theorem this results in any frequency above or equal to fs/2 (fs being the sampling frequency) being distorted or lost completely (aliasing error). fs/2 is also called the Nyquist frequency.
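For a quantizer with step size Δ and uniformly distributed error, the error RMS works out to Δ/√12; this can be checked numerically. The step size and ramp input below are illustrative choices.

```python
import math

def quantize(x, step):
    """Round x to the nearest multiple of the quantization step."""
    return round(x / step) * step

step = 0.1
xs = [i * 0.001 for i in range(100_000)]          # a slow ramp covering many levels
errs = [quantize(x, step) - x for x in xs]
rms = math.sqrt(sum(e * e for e in errs) / len(errs))
print(round(rms, 4), round(step / math.sqrt(12), 4))
```

The empirical RMS matches the Δ/√12 prediction closely, because the ramp exercises the error uniformly across each quantization interval.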

As samples are dependent on time, an accurate clock is required for accurate reproduction. If either the encoding or decoding clock is not stable, its frequency drift will directly affect the output quality of the device. A slight difference between the encoding and decoding clock frequencies is not generally a major concern; a small constant error is not noticeable. Clock error does become a major issue if the clock is not stable, however. A drifting clock, even with a relatively small error, will cause very obvious distortions in audio and video signals, for example.

Digitization as part of the PCM process

In conventional PCM, the analog signal may be processed (e.g. by amplitude compression) before being digitized. Once the signal is digitized, the PCM signal is usually subjected to further processing (e.g. digital data compression). Some forms of PCM combine signal processing with coding. Older versions of these systems applied the processing in the analog domain as part of the A/D process; newer implementations do so in the digital domain. These simple techniques have been largely rendered obsolete by modern transform-based audio compression techniques.

DPCM encodes the PCM values as differences between the current and the predicted value. An algorithm predicts the next sample based on the previous samples, and the encoder stores only the difference between this prediction and the actual value. If the prediction is reasonable, fewer bits can be used to represent the same information. For audio, this type of encoding reduces the number of bits required per sample by about 25% compared to PCM.
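The DPCM scheme described above can be sketched with the simplest possible predictor, the previous sample. The differences are stored exactly here, so the round trip is lossless; practical codecs quantize the differences to save bits.

```python
def dpcm_encode(samples):
    """Store each sample as its difference from the previous sample."""
    diffs, prev = [], 0
    for s in samples:
        diffs.append(s - prev)
        prev = s
    return diffs

def dpcm_decode(diffs):
    """Rebuild the samples by accumulating the differences."""
    out, prev = [], 0
    for d in diffs:
        prev += d
        out.append(prev)
    return out

pcm = [7, 9, 11, 12, 13, 14, 14, 15]
print(dpcm_encode(pcm))                       # [7, 2, 2, 1, 1, 1, 0, 1]
print(dpcm_decode(dpcm_encode(pcm)) == pcm)   # True
```

Notice that after the first value, the differences are small, which is exactly why they can be coded with fewer bits than the raw samples.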

Adaptive DPCM (ADPCM) is a variant of DPCM that varies the size of the quantization step, to allow further reduction of the required bandwidth for a given signal-to-noise ratio. Delta modulation, another variant, uses one bit per sample.

In telephony, a standard audio signal for a single phone call is encoded as 8000 analog samples per second, of 8 bits each, giving a 64 kbit/s digital signal known as DS0. The default signal compression encoding on a DS0 is either μ-law (mu-law) PCM (North America and Japan) or A-law PCM (Europe and most of the rest of the world). These are logarithmic compression systems where a 12- or 13-bit linear PCM sample number is mapped into an 8-bit value. This system is described by international standard G.711. An alternative proposal for a floating point representation, with 5-bit mantissa and 3-bit radix, was abandoned. Where circuit costs are high and loss of voice quality is acceptable, it sometimes makes sense to compress the voice signal even further. An ADPCM algorithm is used to map a series of 8-bit μ-law or A-law PCM samples into a series of 4-bit ADPCM samples. In this way, the capacity of the line is doubled. The technique is detailed in the G.726 standard. Later it was found that even further compression was possible, and additional standards were published. Some of these international standards describe systems and ideas which are covered by privately owned patents, and thus use of these standards requires payments to the patent holders. Some ADPCM techniques are used in Voice over IP communications.

Encoding for transmission

Pulse-code modulation can be either return-to-zero (RZ) or non-return-to-zero (NRZ). For an NRZ system to be synchronized using in-band information, there must not be long sequences of identical symbols, such as ones or zeroes. For binary PCM systems, the density of 1-symbols is called ones-density. Ones-density is often controlled using precoding techniques such as Run Length Limited encoding, where the PCM code is expanded into a slightly longer code with a guaranteed bound on ones-density before modulation into the channel.
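The μ-law companding mentioned in the telephony paragraph follows a logarithmic curve; a sketch of the continuous-form compressor and expander is below. Deployed G.711 codecs use a segmented 8-bit approximation of this curve rather than the analytic formula itself.

```python
import math

MU = 255.0  # the mu parameter used in North American and Japanese telephony

def mu_law_compress(x):
    """Continuous mu-law: F(x) = sgn(x) * ln(1 + mu*|x|) / ln(1 + mu), x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse mapping from the compressed domain back to linear."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.25
y = mu_law_compress(x)
print(round(y, 4))                  # quiet samples are boosted before quantization
print(round(mu_law_expand(y), 4))   # the expander recovers the original value
```

Boosting small amplitudes before quantization is what gives the 8-bit channel roughly the quality of 12- or 13-bit linear PCM for speech.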
In other cases, extra framing bits are added into the stream which guarantees at least occasional symbol transitions. Another technique used to control ones-density is the use of a scrambler polynomial on the raw data which will tend to turn the raw data stream into a stream that looks pseudo-random, but where the raw stream can be recovered exactly by reversing the effect of the polynomial. In this case, long runs of zeroes or ones are still possible on the output, but are considered unlikely enough to be within normal engineering tolerance. In other cases, the long term DC value of the modulated signal is important, as building up a DC offset will tend to bias detector circuits out of their operating range. In this case special measures are taken to keep a count of the cumulative DC offset, and to modify the codes if necessary to make the DC offset always tend back to zero.
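The scrambler-polynomial idea above can be sketched with an additive scrambler driven by a short LFSR. The 7-bit register and tap positions here are illustrative choices, not a particular standard's polynomial; what matters is that XORing with the same keystream twice recovers the raw stream exactly.

```python
def lfsr_stream(seed, n):
    """Keystream bits from a 7-bit LFSR (taps at bit positions 7 and 4, an assumed choice)."""
    state = seed & 0x7F
    out = []
    for _ in range(n):
        bit = ((state >> 6) ^ (state >> 3)) & 1
        out.append(bit)
        state = ((state << 1) | bit) & 0x7F
    return out

def scramble(bits, seed=0x5A):
    """XOR the data with the keystream; applying it twice with the same seed undoes it."""
    return [b ^ k for b, k in zip(bits, lfsr_stream(seed, len(bits)))]

data = [1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0]   # a run-heavy raw stream
tx = scramble(data)
print(tx)
print(scramble(tx) == data)  # True: the same seed descrambles
```

Long runs in the raw data become pseudo-random in the transmitted stream, improving the symbol-transition density the receiver needs for synchronization.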

Many of these codes are bipolar codes, where the pulses can be positive, negative or absent. In the typical alternate mark inversion (AMI) code, non-zero pulses alternate between positive and negative. These rules may be violated to generate special symbols used for framing or other special purposes.
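The alternate mark inversion rule is simple enough to sketch directly, with +1 and -1 standing for positive and negative line pulses:

```python
def ami_encode(bits):
    """Alternate mark inversion: 0 -> no pulse; 1 -> a pulse whose polarity
    alternates with each successive mark, keeping the DC average near zero."""
    level, out = 1, []
    for b in bits:
        if b:
            out.append(level)
            level = -level   # the next mark takes the opposite polarity
        else:
            out.append(0)
    return out

print(ami_encode([1, 0, 1, 1, 0, 1]))  # [1, 0, -1, 1, 0, -1]
```

Because marks alternate in sign, any even number of marks sums to zero, which is exactly the DC-balance property the surrounding text describes.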

3. What are the different signaling formats? Explain with waveforms.

Ans: In the field of telecommunication, signaling (US spelling) has the following meanings:

a) The use of signals for controlling communications.
b) In a telecommunications network, the information exchange concerning the establishment and control of a connection and the management of the network, in contrast to user information transfer.
c) The sending of a signal from the transmitting end of a circuit to inform a user at the receiving end that a message is to be sent.

Signaling systems can be classified according to their principal properties, some of which are described below.

In-band versus out-of-band

In the public switched telephone network (PSTN), in-band signaling is the exchange of signaling (call control) information within the same channel that the telephone call itself is using. An example is dual-tone multi-frequency (DTMF) signaling, which is used on most lines between telephones and exchanges. Out-of-band signaling is telecommunication signaling (the exchange of information to control a telephone call) carried on a channel dedicated to that purpose and separate from the channels used for the telephone call itself. Out-of-band signaling is used in Signaling System No. 7 (SS7), the standard for signaling among exchanges that has controlled most of the world's phone calls for some twenty years.

Line versus register

Line signaling is concerned with conveying information on the state of the line or channel, such as on-hook, off-hook (answer supervision and disconnect supervision, together referred to as supervision), ringing current (alerting), and recall. In the middle of the 20th century, supervision signals on long-distance trunks in North America were usually in-band, for example at 2600 Hz, necessitating a notch filter to prevent interference. Late in the century, all supervisory signals were out of band. With the advent of digital trunks, supervision signals are carried by robbed bits or other bits in the digital stream dedicated to signaling.

Register signaling is concerned with conveying addressing information, such as the calling and/or called telephone number. In the early days of telephony, with operators handling calls, addressing information was given by voice: "Operator, connect me to Mr. Smith, please." In the first half of the 20th century, addressing information was conveyed with a rotary dial, which rapidly breaks the line current into pulses, the number of pulses conveying the digit. Finally, starting in the second half of the century, address signaling has been by DTMF.

Channel-associated versus common-channel

Channel-associated signaling employs a signaling channel dedicated to a specific bearer channel. Common-channel signaling is so called because it employs a signaling channel that conveys signaling information relating to multiple bearer channels; these bearer channels therefore have their signaling channel in common.

Compelled signaling

Compelled signaling refers to the case where receipt of each signal must be explicitly acknowledged before the next signal can be sent. Most forms of R2 register signaling are compelled (see R2 signaling), while R1 multi-frequency signaling is not. The term is only relevant for signaling systems that use discrete signals (e.g. a combination of tones to denote one digit), as opposed to message-oriented signaling systems (such as SS7 and ISDN Q.931) in which each message can convey multiple items of information (e.g. multiple digits of the called telephone number).

Subscriber versus trunk signaling

Subscriber signaling is signaling between the telephone and the telephone exchange. Trunk signaling is signaling between exchanges.

Classification examples

Note that every signaling system can be characterized along each of the above axes of classification. A few examples:

a) DTMF is an in-band, channel-associated register signaling system. It is not compelled.
b) SS7 (e.g. TUP or ISUP) is an out-of-band, common-channel signaling system that incorporates both line and register signaling.
c) Metering pulses (depending on the country, these are 50 Hz, 12 kHz or 16 kHz

or both antennas further from the ground: the reduction in loss achieved is known as height gain.

Mobile Phones

Although the frequencies used by cell phones are in the line-of-sight range, they still function in cities. This is made possible by a combination of the following effects:

a) r⁻⁴ propagation over the rooftop landscape
b) Diffraction into the "street canyon" below
c) Multipath reflection along the street
d) Diffraction through windows, and attenuated passage through walls, into the building
e) Reflection, diffraction, and attenuated passage through internal walls, floors and ceilings within the building

The combination of all these effects makes the cell phone propagation environment highly complex, with multipath effects and extensive Rayleigh fading. For cell phone services these problems are tackled using:

a) Rooftop or hilltop positioning of base stations
b) Many base stations (a phone can typically see six at any given time)
c) Rapid handoff between base stations (roaming)
d) Extensive error correction and detection in the radio link
e) Operation of cellphones in tunnels, when supported by slit cable antennas
f) Local repeaters inside complex vehicles or buildings

Other conditions may physically disrupt the connection unexpectedly and without prior notice:

a) Local failure when using the cellphone in buildings of concrete with steel reinforcement
b) Temporary failure inside metal constructions such as elevator cabins, trains, cars and ships

5. Write notes on Satellite Links.

Ans: Satellite Internet services are used in locations where terrestrial Internet access is not available, and also for users who move frequently. Broadband Internet access via geostationary satellite is available almost worldwide, including for vessels at sea and mobile land vehicles. Similar, but slower, Internet service is also available through Low Earth Orbit (LEO) satellites; however, their coverage areas also include

the polar regions at extreme latitudes, making them truly global. End users must be aware of the different types of satellite communication systems and the technical issues involved in each, such as latency and signal loss due to precipitation, in order to make an informed decision on which system would serve them best.

Mechanics and limitations of satellite communication

Signal latency

Latency is the delay between requesting data and the receipt of a response, or, in the case of one-way communication, between the actual moment of a signal's broadcast and the time it is received at its destination. Compared to ground-based communication, all geostationary satellite communications experience high latency, because the signal has to travel 35,786 km (22,236 mi) out to a satellite in geostationary orbit above the equator and back to Earth again. This latency problem can be mitigated with TCP acceleration features that shorten the apparent round-trip time (RTT) per packet by splitting the feedback loop between the sender and the receiver. Such acceleration features are present in recent satellite Internet services like Tooway.

The signal delay can be as much as 250 to 900 milliseconds one way, which makes this service unusable for applications requiring real-time user input, such as online games or remote surgery. The delay is irritating with interactive applications such as VoIP, videoconferencing, or other person-to-person communication, and live interactive access to a distant computer can likewise suffer from the high latency. However, these problems are more than tolerable for basic email access and web browsing, and in most cases are barely noticeable.

For geostationary satellites there is no way to eliminate this problem. The delay is primarily due to the great distances travelled, which are significant even at the speed of light (about 300,000 km/s or 186,000 mi/s). Even if all other signaling delays could be eliminated, it still takes radio waves about 250 milliseconds, or one quarter of a second, to travel from ground level to the satellite and back: over 71,400 km (44,366 mi) from the source to the destination, and over 143,000 km (88,856 mi) for a round trip (user to ISP, and then back to the user, with zero network delays). Factoring in other normal delays from network sources gives a typical one-way connection latency of 500–700 ms from the user to the ISP, or about 1,000–1,400 ms for the total round-trip time (RTT) back to the user. This is far worse than most dial-up modem users' experience, at typically only 150–200 ms total latency.

However, Medium Earth Orbit (MEO) and Low Earth Orbit (LEO) satellites do not have such great delays. The current LEO constellations of Globalstar and Iridium satellites have delays of less than 40 ms round trip, but their throughput is less than broadband, at 64 kbit/s per channel. The Globalstar constellation orbits 1,420 km above the earth and Iridium orbits at 670 km altitude.
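The quarter-second figure follows directly from the geometry. A quick sanity check, assuming an idealized vertical path from a point directly beneath the satellite and ignoring all processing delays (real slant paths are somewhat longer):

```python
C = 299_792.458   # speed of light in vacuum, km/s
GEO_ALT = 35_786  # geostationary altitude above the equator, km

# One hop: ground -> satellite -> ground
one_way_hop_ms = 2 * GEO_ALT / C * 1000
print(f"ground-satellite-ground: {one_way_hop_ms:.0f} ms")  # ~239 ms

# A full round trip (user -> ISP -> user) crosses the satellite twice
round_trip_ms = 2 * one_way_hop_ms
print(f"round trip: {round_trip_ms:.0f} ms")                # ~477 ms
```

Adding the quoted 500–700 ms of terrestrial and processing delay per direction on top of this physical minimum gives the 1,000–1,400 ms total RTT cited above.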
The proposed O3b Networks MEO constellation, scheduled for deployment in 2010, would orbit at 8,062 km, with an RTT latency of approximately 125 ms. The proposed network is also designed for much higher throughput, with links well in excess of 1 Gbit/s.

A proposed alternative to geostationary relay satellites is a special-purpose solar-powered ultralight aircraft, which would fly along a circular path above a fixed ground location, operating under autonomous computer control at a height of approximately 20,000 meters. Onboard batteries would be charged during daylight hours by solar panels covering the wings, and would power the plane during the night. Ground-based satellite dishes would relay signals to and from the aircraft, resulting in a greatly reduced round-trip signal latency of only 0.12 milliseconds. Several such schemes involving various types of aircraft have been proposed in the past.

Rain fade

Two-way satellite-only communication

Two-way satellite Internet service involves both sending and receiving data from the remote VSAT site via satellite to a hub teleport, which then relays the data via the terrestrial Internet. The satellite dish at each location must be precisely pointed to avoid interference with other satellites. Some providers oblige the customer to pay for a member of the provider's staff to install the system and correctly align the dish, although the European ASTRA2Connect system encourages user installation and provides detailed instructions for it; many customers in the Middle East and Africa are also encouraged to do self-installs. At each VSAT site the uplink frequency, bit rate and power must be accurately set, under the control of the service provider hub.

There are several types of two-way satellite Internet service, including time division multiple access (TDMA) and single channel per carrier (SCPC). Two-way systems can be simple VSAT terminals with a 60–100 cm dish and an output power of only a few watts, intended for consumers and small businesses, or larger systems which provide more bandwidth. Such systems are frequently marketed as "satellite broadband" and can cost two to three times as much per month as land-based systems such as ADSL. The modems required for this service are often proprietary, but some are compatible with several different providers. They are also expensive, costing in the range of US$600 to $2,000.

The two-way "iLNB" used on the ASTRA2Connect terminal dish has a 500 mW transmitter and a single-polarity receive LNB, both operating in the Ku band. Pricing for Astra2Connect modems ranges from €299 to €350. These types of systems are generally unsuitable for use on moving vehicles, although some dishes may be fitted to an automatic pan-and-tilt mechanism that continuously re-aligns the dish; but these are cumbersome and very expensive. The technology for ASTRA2Connect was delivered by a Belgian company called Newtec.

where latency is more important than bandwidth, reserving the satellite channel for download data, where bandwidth is more important than latency, such as for file transfers.

In 2006 the European Commission sponsored the UNIC project, which aims to develop an end-to-end scientific test bed for the distribution of new broadband interactive TV-centric services, delivered over low-cost two-way satellite to actual end-users in the home. The UNIC architecture employs the DVB-S2 standard for the downlink and the DVB-RCS standard for the uplink.

Normal VSAT dishes (1.2–2.4 m diameter) are widely used for VoIP phone services. A voice call is sent by means of packets via the satellite and the Internet. Using coding and compression techniques, the bit rate needed per call is only 10.8 kbit/s each way.

Portable satellite Internet

Portable satellite modem

These usually come in the shape of a self-contained flat rectangular box that needs to be pointed in the general direction of the satellite; unlike VSAT, the alignment need not be very precise, and the modems have built-in signal strength meters to help the user align the device properly. The modems have commonly used connectors such as Ethernet or Universal Serial Bus (USB). Some also have an integrated Bluetooth transceiver and double as a satellite phone. The modems also tend to have their own batteries, so they can be connected to a laptop without draining its battery.

The most common such system is INMARSAT's BGAN; these terminals are about the size of a briefcase and have near-symmetric connection speeds of around 350–500 kbit/s. Smaller modems exist, like those offered by Thuraya, but they connect at only 144 kbit/s and in a limited coverage area. Using such a modem is extremely expensive: bandwidth costs between $5 and $7 per megabyte. The modems themselves are also expensive, usually costing between $1,000 and $4,000.

Internet via satellite phone

For many years satellite phones have been able to connect to the Internet.
Bandwidth varies from about 2,400 bit/s for Iridium network satellites and ACeS-based phones to 15 kbit/s upstream and 60 kbit/s downstream for Thuraya handsets. Globalstar also provides Internet access at 9,600 bit/s; like Iridium and ACeS, a dial-up connection is required and is billed per minute. However, both Globalstar and Iridium are planning to launch new satellites offering always-on data services at higher speeds. With Thuraya phones the 9,600 bit/s dial-up connection is also possible; the 60 kbit/s service is always-on, and the user is billed for data transferred (about $5 per megabyte). The phones can be connected to a laptop or other computer using a USB or RS-232 interface. Due to the low bandwidths involved, it is extremely slow to browse the web with such a connection, but it is useful for sending email, Secure Shell data and other low-bandwidth protocols. Since satellite phones tend to have

System software components

Most one-way multicast applications require custom programming at the remote sites. The software at the remote site must filter, store, present a selection interface to, and display the data. The software at the transmitting station must provide access control, priority queuing, sending, and encapsulation of the data.

Efficiency increases

Reducing satellite latency

Much of the slowdown associated with satellite Internet is that, for each request, many round trips must be completed before any useful data can be received by the requester. Special IP stacks and proxies can reduce latency by lessening the number of round trips, or by simplifying and reducing the length of protocol headers. These technologies are generally referred to as TCP acceleration, HTTP pre-fetching and DNS caching.

Elimination of advertising materials

While also effective for terrestrial communications, the use of ad-blocking software such as Adblock for Firefox is exceptionally beneficial for satellite Internet, as most Internet advertising websites use cache busting to render the browser's and ISP's caches useless, displaying advertisements on every page load in order to maximize the number of ad views counted by the affiliate marketing company's server.