
Glossary of Digital Broadcasting & Digital Communications Terms

Edition of 2005

Mr. Tsuyoshi Miura (Senior Volunteer)

3:2 pull-down: Method used to map the 24 fps of film onto the 30 fps (60 fields) of 525-line TV, so that one film frame occupies three TV fields, the next two, and so on. This means the two fields of every other TV frame come from different film frames, making operations such as rotoscoping impossible and requiring care in editing. Some sophisticated equipment can unravel the 3:2 sequence to allow frame-by-frame treatment and subsequently re-compose 3:2. The 3:2 sequence repeats every five TV frames and four film frames, the latter identified as A-D. Only film frame A is fully on a TV frame and so exists at one time code only, making it the editable point of the video sequence.

4fsc: Four times the frequency of SC (subcarrier); the sampling rate of a D2 digital video signal with respect to the subcarrier frequency of an NTSC or PAL analog video signal. The 4fsc frequency is about 14.3 MHz in NTSC and 17.7 MHz in PAL.

4:1:1: A set of sampling frequencies in the ratio 4:1:1, used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal. The four represents 13.5 MHz, the sampling frequency of Y, and the ones each 3.375 MHz for R-Y and B-Y. With the color information sampled at half the rate of the 4:2:2 system, this is generally used as a more economical form of sampling for 525-line picture formats. Both luminance and color difference are still sampled on every line, but the latter has half the horizontal resolution of 4:2:2, while the vertical resolution of the color information is maintained. For 525-line pictures, this means the color is fairly equally resolved in horizontal and vertical directions.
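The 3:2 cadence described in the 3:2 pull-down entry above can be sketched as a mapping from film frames to TV fields; a minimal illustration (frame labels A-D as in the entry, cadence convention assumed here):

```python
def pulldown_32(frames, cadence=(3, 2, 3, 2)):
    """Expand film frames into interlaced fields using a 3:2 cadence.

    Returns the list of TV frames, each a (field1, field2) pair.
    """
    fields = []
    for frame, n in zip(frames, cadence):
        fields.extend([frame] * n)          # repeat each film frame n fields
    # Pair consecutive fields into TV frames.
    return [(fields[i], fields[i + 1]) for i in range(0, len(fields), 2)]

tv_frames = pulldown_32(["A", "B", "C", "D"])
print(tv_frames)
# 4 film frames -> 10 fields -> 5 TV frames; the frames that mix two
# film frames (e.g. ('A', 'B')) are what complicates editing.
```

Run on one A-D group this yields five TV frames, two of which combine fields from different film frames, matching the entry's description.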

4:2:0: A sampling system used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal. The four represents the 13.5 MHz sampling frequency of Y, while the R-Y and B-Y are sampled at 6.75 MHz--effectively between every other line only (one line is sampled at 4:0:0, luminance only, and the
next at 4:2:2). This is generally used as a more economical system than 4:2:2 sampling for 625-line formats, so that the color signals have a reasonably even resolution in the vertical and horizontal directions for that format.

4:2:2: A commonly used term for a component digital video format. A ratio of sampling frequencies used to digitize the luminance and color difference components (Y, R-Y, B-Y) of a video signal. It is generally used as shorthand for ITU-R 601-5, "Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios". The term 4:2:2 describes that for every four samples of Y, there are two samples each of R-Y and B-Y, giving more chrominance bandwidth in relation to luminance compared to 4:1:1 sampling. This Recommendation specifies methods for digitally coding video signals. It includes a 13.5 MHz sampling rate for both 4:3 and 16:9 aspect ratios, with performance adequate for present transmission systems, and an alternative 18 MHz sampling rate for those 16:9 systems which require proportionately higher horizontal resolution. Specifications applicable to any member of this family of standards are presented first.

4:4:4: Similar to 4:2:2, except that for every four luminance samples, the color channels are also sampled four times.

16:9: See aspect ratio and widescreen.

48sF: 48 segmented frames. The process of taking 24-frame progressive images and deconstructing them to produce 48 interlaced segments (fields) per second, each with half the number of lines of resolution, to allow some HDTV processors to pass the signal and for viewing on an interlaced monitor without flicker.

5.1 Channel Sound: A five point one channel sound system actually has 6 channels of surround sound, to better reproduce ambience and provide a more enjoyable listening experience. The five channels feed speakers around the listening position: centre front, then left, right, rear left and rear right.
The extra point one refers to the reduced-bandwidth low frequency effects (LFE) channel, where the bass sounds from all directions are combined into a single channel that requires a (usually larger) sub-woofer loudspeaker. This combining can be done because human hearing cannot sense the direction of these very low frequency sounds below about 100 Hz. It avoids all speakers having to cover the full frequency range, since extended bass response usually requires large speakers and enclosures; in a domestic installation this solution is both lower cost and space saving. Surround sound systems can source multiple channels from either:
(1) stereo (two-channel) program material that has been captured with the ambience that allows sound delay differences between the two channels to be used to generate surround and rear channel information. This was used in the quadraphonic systems that appeared in the 1970s, which were replaced with systems such as Dolby Surround Pro Logic (I or II); or (2) for best performance, separate multiple-channel information carried in consumer digital audio systems such as DTS or Dolby AC-3 (the 5.1 version in DVDs, U.S. ATSC and optional on Australian HD) or MPEG-2 (AAC 5.1 and 7.1 channel versions on Japanese DTV). Studio VTR recording of 5.1 channels may be done on 2 AES/EBU digital channel pairs using the Dolby E format.

480i: Type of standard-definition television (SDTV) image that is 480 lines by 720 pixels wide, displayed in interlaced format. (It has a 4:3 or 16:9 aspect ratio and a 29.97 Hz frame rate, as defined by the ATSC standard.)

480p: Type of standard-definition television (SDTV) image that is 480 vertical lines by 720 horizontal pixels, displayed in progressive format. (It has a 4:3 or 16:9 aspect ratio and 59.94 Hz, 29.97 Hz, and 23.98 Hz frame rates, as defined by the ATSC standard.)

720p: Type of high-definition television (HDTV) image that is 720 vertical lines by 1,280 horizontal pixels wide, displayed in progressive format. (It has a 16:9 aspect ratio and 59.94 Hz, 29.97 Hz, and 23.98 Hz frame rates, as defined by the ATSC standard.)

1080i: Type of high-definition television (HDTV) image that is 1,080 vertical lines by 1,920 horizontal pixels wide, displayed in an interlaced format. (It has a 16:9 aspect ratio and a 29.97 Hz frame rate, as defined by the Advanced Television Systems Committee (ATSC) standard.)

1394: The standard for a digital connection or bus used to transfer data between two independent systems. The 1394a standard provides 400 Mbps bandwidth with reach limited to 3 or 4 meters; the 1394b standard extends the bandwidth to 800 Mbps and the reach to a whole-house environment.
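The sampling-ratio entries above (4:4:4, 4:2:2, 4:1:1) translate directly into per-component sample rates; a minimal sketch, using the 13.5 MHz luminance rate from ITU-R 601 (note that 4:2:0 does not fit this simple per-line formula, since it halves the chroma vertically instead):

```python
# Per the entries above: Y is sampled at 13.5 MHz, and the ratio a:b:c
# scales the two colour-difference channels relative to Y (4 -> 13.5 MHz).
Y_RATE = 13.5e6  # Hz, ITU-R BT.601 luminance sampling

def component_rates(ratio):
    a, b, c = ratio
    return (Y_RATE, Y_RATE * b / a, Y_RATE * c / a)

for name, ratio in [("4:4:4", (4, 4, 4)), ("4:2:2", (4, 2, 2)),
                    ("4:1:1", (4, 1, 1))]:
    y, cb, cr = component_rates(ratio)
    print(f"{name}: Y={y/1e6} MHz, R-Y={cb/1e6} MHz, B-Y={cr/1e6} MHz")
# 4:2:2 -> 6.75 MHz per colour-difference channel; 4:1:1 -> 3.375 MHz.
```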
8 bits: A measure of memory. 8 bits make up 1 byte and offer 256 (2^8) different combinations.

8-VSB: Eight-level (discrete amplitude) vestigial sideband, the broadcast transmission technology used in the ATSC digital television transmission standard. The VSB modulator receives the 10.76 Msymbols/s, 8-level trellis encoded composite data signal (pilot and sync added). The ATV system performance is based on a linear
phase raised cosine Nyquist filter response in the concatenated transmitter and receiver, as shown in Figure 12.
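As a side note, the modulator parameters above (10.76 Msymbols/s, 8 levels, rate-2/3 trellis coding) determine ATSC's payload rate; a back-of-envelope sketch, assuming the standard's framing overheads (832-symbol segments with 4 sync symbols, 313-segment fields, RS(207,187) coding, 188-byte transport packets):

```python
# Back-of-envelope ATSC 8-VSB payload rate from the parameters above.
sym_rate = 4.5e6 * 684 / 286          # 10.762... Msymbols/s
raw      = sym_rate * 3               # 8 levels -> 3 bits/symbol
info     = raw * 2 / 3                # trellis code rate 2/3
info    *= 828 / 832                  # segment sync overhead (4 of 832 symbols)
info    *= 312 / 313                  # field sync overhead (1 of 313 segments)
info    *= 187 / 207                  # Reed-Solomon parity, RS(207,187)
ts_rate  = info * 188 / 187           # TS sync byte is carried implicitly
print(f"{ts_rate/1e6:.2f} Mbps")      # prints 19.39
```

This recovers the familiar ~19.39 Mbps transport-stream rate of terrestrial ATSC.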

The system filter response is essentially flat across the entire band, except for the transition regions at each end of the band. Nominally, the rolloff in the transmitter shall have the response of a linear phase root raised cosine filter.

10Base2: A type of Ethernet cable that has been largely phased out, also known as "thinnet." Cable speeds are typically up to 10 Mbps. 10Base2 uses RG-58 coaxial cable in a bus topology configuration with a maximum cable length of 185 meters. 10Base2 was used for desktop machines, while 10Base5 was used mainly for backbones.

10Base5: The original Ethernet specification, which is now more uncommon than 10Base2, also known as "thicknet." Cable speeds are typically up to 10 Mbps. 10Base5 uses RG-11 coaxial cable in a bus topology configuration and has a maximum cable length of 500 meters, thus the 10(Mbps)Base5(00) name.

10BaseT: A type of Ethernet cable topology that uses RJ-45 connectors, twisted-pair wiring, and a star topology. Transmission speed is 10 Mbps.

100Base-T (Fast Ethernet): A networking standard that supports data transfer rates up to 100 Mbps (100 megabits per second). 100Base-T is based on the older Ethernet standard; because it is 10 times faster than Ethernet, it is often referred to as Fast Ethernet. Officially, the 100Base-T standard is IEEE 802.3u. Like Ethernet, 100Base-T is based on the CSMA/CD LAN access method. Several different cabling schemes can be used with 100Base-T, including:
100Base-TX: two pairs of high-quality twisted-pair wires
100Base-T4: four pairs of normal-quality twisted-pair wires
100Base-FX: fiber-optic cable

1000Base-T (Gigabit Ethernet): An extension of 100Base-T that runs 10 times faster than 100Base-T. It is theoretically capable of 1 Gbps (1000 Mbps) transmission speed and requires Category 5 or better cabling (Category 5e is recommended) to run over copper wire. Gigabit Ethernet can run over fiber-optic cabling as well.

16-bit Operating Systems: DOS and Windows 3.x are 16-bit operating systems. They are limited in complexity and suffer instability and slow speed (compared to 32-bit OSes) when run on 32-bit processors like the 386DX-compatible chips and above. 16-bit chips could directly address only 65,536 (2^16) memory locations at a time, forcing awkward segmented addressing schemes; this limitation built momentum for the move to 32-bit chips and operating systems.

32-bit Operating Systems: Windows NT, OS/2, and some flavors of UNIX are 32-bit operating systems. Windows 95 is a 32-bit operating system running on top of a 16-bit operating system (DOS).

32-bit color depth: Anything that supports 32-bit color supports over four billion different colors (2^32, or 4,294,967,296 to be exact). Graphics cards support up to and over 32-bit color, but the human eye cannot discern between colors at that level, and more memory is needed on the graphics card to display 32-bit color at high resolutions. Color scanners, however, use 32-bit or higher color depth. This gives a more accurate scan, even though the color difference between two pixels may not be perceptible.

32-bit processor (32-bit Chips): This type of processor can run a 32-bit OS, such as Windows NT or some versions of UNIX. You can also run a 16-bit or lesser OS, but performance is not optimal. Intel's 386DX, 486, Pentium, and Pentium II/III/4 are all 32-bit processors, as are AMD's 386, 486, Athlon, and Duron. 32-bit processors can address up to 4 GB of memory. Although this may be plenty for a typical desktop machine, higher-end servers, workstations, and desktops require more memory and use 64-bit processors.
48-bit color depth: Some scanners support 48-bit color, which offers the ability to identify over 281 trillion (2^48) colors. Such color differences may be imperceptible to the human eye, but as scanner DPI levels increase it helps keep colors more accurate.

64-bit Operating Systems: An operating system that is programmed to run on 64-bit processors. Some flavors of UNIX--and now Linux--are 64-bit operating
systems designed to run on 64-bit chips. There are also preliminary versions of Microsoft Windows that are 64-bit so that they can run on 64-bit processors.

64-bit processor (64-bit Chips): A processor that can run a 64-bit OS. The DEC Alpha is 64-bit, and so are Intel's Itanium and AMD's Opteron and Athlon 64 processors. The 386, 486, Pentium, and Pentium II/III/4 are all 32-bit processors, even though the Pentium and newer chips have 64-bit memory buses.

128-bit Operating Systems: Operating systems may eventually be 128-bit. The ability to address 128 bits of memory space may someday be necessary for huge databases and will speed up operations. Of course, this will not happen until there are 128-bit chips to run them on. So far there are some chips with 128-bit registers and special operations, but they are not fully 128-bit.

128-bit Video: The bits referred to here describe the bus between the GPU and the video memory. Typically, larger buses mean faster video, but it also depends on the transmission speed over that bus. The first 128-bit video cards had only 2 MB or 4 MB of video memory on them.

3D (3-dimensional): The representation of objects in a space by length, width, and height (three planes). On a computer display device, 3D objects are typically represented on the 2D surface of a flat screen, but are computed as if they were actually in a three-dimensional environment. Advances in 3D technology are making gaming and other 3D environments more realistic.
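The counts quoted in the bit-depth entries above all follow from 2^n; a quick check:

```python
# Each n-bit quantity distinguishes 2**n values, as the entries state.
for bits, what in [(8, "byte values"), (16, "16-bit addresses"),
                   (32, "32-bit colours / 4 GB address space"),
                   (48, "48-bit scanner colours")]:
    print(f"2^{bits} = {2**bits:,}  ({what})")
# 2^32 = 4,294,967,296 -> the "over four billion colours" / 4 GB figures;
# 2^48 = 281,474,976,710,656 -> the "over 281 trillion" figure.
```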

AAC (Advanced Audio Coding): Originally specified in Part 7 of MPEG-2, AAC is an improved audio compression system, used in Japanese ISDB, U.S. XM Satellite Radio and Digital Radio Mondiale (DRM), and used with MPEG-4 Advanced Video Coding. Recent demonstrations (such as at IBC 2002) have shown advances in this system, now known as aacPlus, with good quality stereo at 64 kbps or less. AAC is the audio coding standard defined by the International Organization for Standardization (ISO) as part of the MPEG-2 specification. Declared an international standard in April 1997, MPEG-2 AAC builds upon and extends the popular ISO/MPEG-1 Audio Layer-3 (MP3) audio coding format. Compared to MP3, AAC provides higher quality music at approximately 30% less storage space or bandwidth. AAC provides up to 48 audio channels and sample rates up to 96 kHz.

AAL (ATM Adaptation Layer): One of three layers of the ATM (Asynchronous Transfer Mode) protocol reference model. It performs segmentation and reassembly (SAR), translates higher-layer data into ATM cell payloads, and translates incoming cells into a format readable by the higher layers.

AAL-0 (ATM Adaptation Layer type 0): AAL-0 cells are sometimes referred to as raw cells. The payload consists of 48 bytes and has no special meaning.

AAL-1 (ATM Adaptation Layer type 1): AAL-1 relates to ATM service class A, where a time relation exists between the source and the destination, the bit rate is constant, and the service is connection-oriented (e.g., a voice channel). The structure of the 48-byte AAL1 PDU is:

  SN (sequence number), 4 bits:  CSI (1 bit) + SC (3 bits)
  SNP (SN protection), 4 bits:   CRC (3 bits) + even parity check, EPC (1 bit)
  SAR-PDU payload:               47 bytes
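Assuming the AAL1 field layout above (CSI 1 bit, SC 3-bit sequence count, a CRC-3 with generator x^3 + x + 1, and an even-parity bit, per ITU-T I.363.1), the one-byte header can be sketched as:

```python
def crc3(nibble):
    """CRC-3 (generator x^3 + x + 1) over the 4-bit SN field."""
    reg = nibble << 3                   # append three zero bits
    for i in range(6, 2, -1):           # long division over the message bits
        if reg & (1 << i):
            reg ^= 0b1011 << (i - 3)    # generator aligned at bit i
    return reg & 0b111

def aal1_header(csi, sc):
    """Pack CSI + sequence count + CRC-3 + even-parity bit into one byte."""
    sn = ((csi & 1) << 3) | (sc & 0b111)
    seven = (sn << 3) | crc3(sn)        # SN nibble followed by its CRC
    parity = bin(seven).count("1") & 1  # makes the total bit count even
    return (seven << 1) | parity

print(f"0x{aal1_header(0, 1):02x}")     # header for CSI=0, sequence count 1
```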

AAL-2 (ATM Adaptation Layer type 2): Relates to ATM service class B, where a time relation exists between the source and the destination, the bit rate is variable, and the service is connection-oriented (e.g., a video or audio channel). AAL2 provides
bandwidth-efficient transmission of low-rate, short and variable packets in delay-sensitive applications. It supports VBR and CBR. AAL2 also provides for variable payload within cells and across cells. AAL type 2 is subdivided into the Common Part Sublayer (CPS) and the Service Specific Convergence Sublayer (SSCS).

AAL-3 (ATM Adaptation Layer type 3): Relates to ATM service class C, where no time relation exists between the source and the destination, the bit rate is variable, and the service is connection-oriented (e.g., a connection-oriented file transfer).

AAL-4 (ATM Adaptation Layer type 4): Relates to ATM service class D, where no time relation exists between the source and the destination, the bit rate is variable, and the service is connectionless (e.g., LAN interconnection and electronic mail).

AAL-3/4 (ATM Adaptation Layer type 3/4): Since the differences between AAL-3 and AAL-4 were minor, the ITU merged them into a single type called AAL-3/4.

ABR (Available Bit Rate): QoS class defined by the ATM Forum for ATM networks. ABR is used for connections that do not require timing relationships between source and destination. ABR provides no guarantees in terms of cell loss or delay, providing only best-effort service. Traffic sources adjust their transmission rate in response to information they receive describing the status of the network and its capability to successfully deliver data. Compare with CBR, UBR, and VBR.

Absolute Threshold of Hearing (ATH): The threshold of hearing is a sound pressure level (SPL) of 20 micropascals = 2 x 10^-5 pascal (Pa). This low threshold of amplitude (strength or sound pressure level) is frequency dependent; see Fig. 2. The absolute threshold of hearing (ATH) is the minimum amplitude (level or strength) of a pure tone that the ear can hear in a noiseless environment. The threshold of pain is the SPL beyond which sound becomes unbearable for a human listener. This threshold varies only slightly with frequency.
Prolonged exposure to sound pressure levels in excess of the threshold of pain can cause physical damage, potentially leading to hearing impairment.

Fig. 1: Fletcher-Munson equal-loudness contours (the lowest curve is the ATH).

The threshold of hearing is frequency dependent, and typically shows a minimum (indicating the ear's maximum sensitivity) at frequencies between 1 kHz and 5 kHz. A typical ATH curve is pictured in Fig. 1. The absolute threshold of hearing represents the lowest curve amongst the set of equal-loudness contours, with the highest curve representing the threshold of pain.

Different values are quoted for the threshold of pain:

  SPL:             120 dB  130 dB  134 dB  137.5 dB  140 dB
  Sound pressure:  20 Pa   63 Pa   100 Pa  150 Pa    200 Pa

In psychoacoustic audio compression, the ATH is used, often in combination with masking curves, to calculate which spectral components are inaudible and may thus be ignored in the coding process; any part of an audio spectrum which has an amplitude (level or strength) below the ATH may be removed from an audio signal without any audible change to the signal.
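The ATH curve described above is often approximated analytically; Terhardt's formula (a common assumption in psychoacoustic models, not taken from this glossary) is one sketch:

```python
import math

def ath_db_spl(f_hz):
    """Approximate absolute threshold of hearing in dB SPL.

    Terhardt's formula, widely used in psychoacoustic models;
    valid roughly over 20 Hz - 20 kHz.
    """
    f = f_hz / 1000.0
    return (3.64 * f ** -0.8
            - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# The ear is most sensitive around 3-4 kHz, where the threshold
# dips below 0 dB SPL:
for f in (100, 1000, 3300, 10000):
    print(f"{f} Hz: {ath_db_spl(f):+.1f} dB SPL")
```

A coder compares each spectral component against this curve (and the masking curves) and discards anything that falls below it.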

Fig. 2: Thresholds of hearing for male (M) and female (W) subjects between the ages of 20 and 60

The ATH curve rises with age as the human ear becomes less sensitive to sound, with the greatest changes occurring at frequencies higher than 2 kHz. Curves for subjects of various age groups are illustrated in Fig. 2. The data is from the United States Occupational Health and Environment Control standard (OSHA 1910.95, Appendix F).

AC Coefficient: Any DCT (Discrete Cosine Transform) coefficient for which the frequency in one or both dimensions is non-zero.

Acceptance Angle: The half-angle of the cone within which incident light is totally internally reflected by the fiber core. It is equal to sin^-1(NA), where NA is the numerical aperture.
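The acceptance-angle relation above can be checked numerically; the refractive indices below are illustrative assumptions (typical silica multimode values), not values from the entry:

```python
import math

# NA = sqrt(n1^2 - n2^2) for core index n1 and cladding index n2,
# and the half-angle of the acceptance cone is asin(NA).
n_core, n_clad = 1.48, 1.46          # illustrative indices
na = math.sqrt(n_core**2 - n_clad**2)
theta = math.degrees(math.asin(na))
print(f"NA = {na:.3f}, acceptance half-angle = {theta:.1f} degrees")
```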

Access Unit: A coded representation of a presentation unit. In the case of audio, an access unit is the coded representation of an audio frame. In the case of video, an access unit includes all the coded data for a picture, and any stuffing that follows it, up to but not including the start of the next access unit. If a picture is not preceded by a group_start_code or a sequence_header_code, the access unit begins with the picture start code. If a picture is preceded by a group_start_code and/or a sequence_header_code, the access unit begins with the first byte of the first of these start codes. If it is the last picture preceding a sequence_end_code in the bitstream, all bytes between the last byte of the coded picture and the sequence_end_code (including the sequence_end_code) belong to the access unit.

Alternating Current (AC): A type of electrical current that reverses its direction at a regular interval and can be represented by a sine wave. Alternating current is the type of current typically found in a wall outlet; you plug your TV, computer, refrigerator, etc. into alternating current. These devices then change the alternating current into direct current (DC) so that they can use the current properly. Alternating current is the reason that we need such complex power supplies, or power bricks, to regulate the flow of current in electronic devices.

AC-3: A proprietary digital sound system developed (and patented) by Dolby. Also known as Dolby Digital (www.dolby.com), it can carry multiple channels, including stereo (2.0) (decodable with Pro Logic I or II) and 5.1 discrete channels. Extra data can be included to control the dialog level ('dialnorm') and dynamic range to the listener's choice. Variants of the system are used in movie theatres, domestic laser disc and DVD, and in the US ATSC and Australian DVB-T digital broadcast systems. Decode systems may include a second decoder for special requirements.
ACK (Acknowledgment): A response from a server to a network request. Basically, the server is saying, "I'm here, and I saw your request!" ACK is also the name of ASCII character 6 (0x06).

Active Device: A device that requires a source of energy for its operation and has an output that is a function of present and past input signals. Examples include controlled power supplies, transistors, LEDs, amplifiers, and transmitters.

ADC (A-D, A/D, A-to-D): Analog-to-Digital Conversion, also referred to as digitization or quantization. The conversion of an analog signal into the digital data representation of that signal--normally for subsequent use in a digital machine. For TV, samples of audio and video are taken, the accuracy of the process depending on both the sampling frequency and the resolution of the analog amplitude information--how many bits are used to describe the analog levels. For TV pictures 8 or 10 bits are normally used; for sound, 16 or 20 bits are common, and 24 bits are being introduced. The ITU-R 601 standard defines the sampling of video components based on 13.5 MHz, and AES/EBU defines sampling of 44.1 and 48 kHz for audio. For pictures, the samples are called pixels, each containing data for brightness and color. See also: Binary, Bit.
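The bit depths mentioned in the ADC entry set the quantization noise floor (roughly 6.02N + 1.76 dB of signal-to-noise for an N-bit converter with a full-scale sine); a sketch, assuming a simple uniform mid-rise quantizer:

```python
import math

def quantize(x, bits):
    """Uniformly quantize x in [-1, 1) to the given bit depth."""
    levels = 2 ** bits
    code = max(0, min(levels - 1, int((x + 1) / 2 * levels)))
    return (code + 0.5) / levels * 2 - 1   # mid-rise reconstruction

# Measure the SNR of an 8-bit quantized sine; theory predicts
# approximately 6.02*N + 1.76 dB.
N = 8
samples = [math.sin(2 * math.pi * k / 1000) for k in range(1000)]
sig = sum(s * s for s in samples) / len(samples)
noise = sum((s - quantize(s, N)) ** 2 for s in samples) / len(samples)
snr = 10 * math.log10(sig / noise)
print(f"{snr:.1f} dB (theory: {6.02 * N + 1.76:.1f} dB)")
```

Each extra bit buys about 6 dB, which is why 16-20 bits are used for sound against 8-10 bits for pictures.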

Add/Drop: The process where a part of the information carried in a transmission system is extracted (dropped) at an intermediate point and different information is inserted (added) for subsequent transmission. The remaining traffic passes straight through the multiplexer without additional processing.

ADM (Add-Drop Multiplexer): This is one of SONET's claims to fame. The tributaries of a SONET transport stream are synchronously multiplexed to the line rate; i.e., there are no stuff bits or stuff-opportunity bits as is the case in the plesiochronous hierarchy. As such, an ADM can insert or extract lower-rate tributary data without demultiplexing the aggregate line rate.

Address: Data structure or logical convention used to identify a unique entity, such as a particular process or network device.

Address Mapping: Technique that allows different protocols to interoperate by translating addresses from one format to another. For example, when routing IP over X.25, the IP addresses must be mapped to the X.25 addresses so that the IP packets can be transmitted by the X.25 network. See also address resolution.

Address Mask: Bit combination used to describe which portion of an address refers to the network or subnet and which part refers to the host. Sometimes referred to simply as mask. See also subnet mask.
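The Address Mask entry above can be illustrated with Python's standard ipaddress module (the 192.168.10.37/24 values are purely illustrative):

```python
import ipaddress

# Split an address into its network and host portions with a mask.
iface = ipaddress.ip_interface("192.168.10.37/24")   # illustrative values
mask = int(iface.network.netmask)                    # 255.255.255.0
addr = int(iface.ip)
network = addr & mask                                # network/subnet bits
host = addr & ~mask & 0xFFFFFFFF                     # host bits
print(ipaddress.ip_address(network), host)           # 192.168.10.0 37
```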

Address Resolution: Generally, a method for resolving differences between computer addressing schemes. Address resolution usually specifies a method for mapping network layer (Layer 3) addresses to data-link control layer (Layer 2) addresses. See also address mapping.

ADSL (Asymmetric Digital Subscriber Line): A technology that is the phone company's answer to cable modems. It supports data speeds over 7 Mbps downstream (to the user) and slower speeds upstream (to the Internet). "Asymmetric" describes how the upstream speed differs from the downstream speed.

Administrative Unit (AU): An Administrative Unit is the information structure which provides adaptation between the higher-order path layer and the multiplex section layer. The Virtual Container (VC) plus the pointers (H1, H2, H3 bytes) is called the Administrative Unit (AU).

AES/EBU: Audio Engineering Society / European Broadcasting Union; a digital audio standard commonly used in linking professional digital audio equipment. The AES/EBU digital interface is usually implemented using 3-pin XLR connectors, the same type of connector used on a professional microphone, and one cable carries both right and left channels. AES/EBU is an alternative to the S/PDIF standard. The sampling frequencies for this standard vary depending on the format being used; the sampling frequency for D1 and D2 audio tracks is 48 kHz. The AES is the Audio Engineering Society, which promotes standards in the professional audio industry. International Headquarters--60 East 42nd Street, Room 2520, New York, New York 10165-2520. Tel: 212-661-8528. Fax: 212-682-0477. Email: HQ@aes.org. Internet: www.aes.org.

AGC (Automatic Gain Control): A process or means by which gain is automatically adjusted in a specified manner as a function of input level or another specified parameter.

Agent: (1) Generally, software that processes queries and returns replies on behalf of an application. (2) In NMSs, a process that resides in all managed devices and reports the values of specified variables to management stations.

AIF (Audio Interchange File): An audio file format developed by Apple Computer to store high-quality sampled sound and musical instrument information. AIF files are a popular format for transfer between the Macintosh and the PC.

AIS (Alarm Indication Signal): (1) A code sent downstream indicating an upstream failure has occurred. (2) In a T1 transmission, an all-ones signal
transmitted in lieu of the normal signal to maintain transmission continuity and to indicate to the receiving terminal that there is a transmission fault located either at, or upstream from, the transmitting terminal.

Aliasing: Defects or distortion in a television picture or in audio. In pictures, defects are typically seen as jagged edges on diagonal lines and twinkling or brightening. In digital video, aliasing is caused by insufficient sampling or poor filtering of the digital video.

a-law: ITU-T companding standard used in the conversion between analog and digital signals in PCM systems. A-law is used primarily in European telephone networks and is similar to the North American mu-law standard. See also mu-law.

ALiS (Alternate Lighting of Surfaces): A relatively recent type of high-definition plasma panel designed for optimum performance when displaying 1080i material. On a conventional plasma TV, all pixels are illuminated at all times. With an ALiS plasma panel, alternate rows of pixels are illuminated, so that half the panel's pixels are lit at any moment (somewhat similar to interlaced scanning on a CRT-type TV). ALiS panels offer bright, clear picture quality, reduced power consumption, and extended panel life.

Algorithm: A formula or set of steps used to simplify, modify, or predict data. Complex algorithms are used to selectively reduce the high digital audio and video data rates. These algorithms utilize physiologists' knowledge of hearing and eyesight. For example, we can resolve fine detail in a still scene, but our eye cannot resolve the same detail in a moving scene. Using knowledge of these limitations, algorithms are formulated to selectively reduce the data rate without affecting the viewing experience. See also: Compression, MPEG.

Alpha Channel: A relative transparency value. Alpha values facilitate the layering of media objects on top of each other.
In a four-digit digital sampling structure (4:2:2:4), the alpha channel is represented by the last digit.

AMI (Alternate Mark Inversion): The line-coding format in transmission systems where successive ones (marks) are alternately inverted (sent with polarity opposite that of the preceding mark).
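The Alpha Channel entries above describe per-pixel transparency; a minimal blend sketch (the linear out = alpha*fg + (1-alpha)*bg rule, illustrative only):

```python
def composite(fg, bg, alpha):
    """Blend one pixel over another: out = alpha*fg + (1-alpha)*bg.

    fg/bg are (R, G, B) tuples with 0-255 components, alpha in [0, 1].
    """
    return tuple(round(alpha * f + (1 - alpha) * b) for f, b in zip(fg, bg))

# A half-transparent red layered over blue gives purple:
print(composite((255, 0, 0), (0, 0, 255), 0.5))   # (128, 0, 128)
```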

Amplifier: A device, inserted within a transmission path that boosts the strength of an electronic or optical signal. Amplifiers may be placed just after the transmitter (power booster), at a distance between the transmitter and the receiver (in-line amplifier), or just before the receiver (preamplifier).

Amplitude Modulation (AM): A means of transmitting and receiving radio waves that is based on variations in the amplitude of the radio waves. See also FM.

Amplified Spontaneous Emission (ASE): A background noise mechanism common to all types of erbium-doped fiber amplifiers (EDFAs). It contributes to the noise figure of the EDFA, which degrades the signal-to-noise ratio (SNR).

Analog: (1) An adjective describing any signal that varies continuously, as opposed to a digital signal, which contains discrete levels. (2) A system or device that operates primarily on analog signals.

Anamorphic Video: Video images that have been "squeezed" to fit a video frame when stored on DVD. These images must be expanded (un-squeezed) by the display device. An increasing number of TVs employ either a screen with 16:9 aspect ratio or some type of "enhanced-for-widescreen" viewing mode, so that anamorphic and other widescreen material can be viewed in its proper proportions. When anamorphic video is displayed on a typical TV with a 4:3 screen, the images will appear unnaturally tall and narrow.

Angular Misalignment: Loss at a connector due to fiber end face angles being misaligned.

ANSI (American National Standards Institute): A US standards body. This organization represents the United States in the International Organization for Standardization (ISO). It works to develop coding and signaling standards. http://www.ansi.org/

Anti-Aliasing: The smoothing and removing of aliasing effects by filtering and other techniques. Most, but not all, DVEs and character generators contain anti-aliasing facilities. A method used to better render higher-resolution objects at lower resolution. For example, you would use anti-aliasing if you have two lines that are so close together that at 320x200 they look as if they are one double-width line and you want to represent them better. This is most noticeable when dealing with curves, such as circles. For example, if you look at a circle drawn in a simple paint program at a low resolution, you can see the "steps," or "jaggies"--the points
it takes to make the circle. If you raise the resolution you'll notice the "steps" much less. If you use anti-aliasing, different shades of the circle's color are used to "fill in" the gaps caused by low resolution, smoothing out its appearance to the user. Typical uses for anti-aliasing are smoothing out fonts and straight lines in 3D images. If you are using a system with jagged-looking fonts, chances are that it is not anti-aliasing the fonts.

Anti-Alias Filter: A low-pass filter used to bandwidth-limit a signal to less than one-half the sampling rate.

APC (Angled Physical Contact): A style of fiber-optic connector with a 5-15 degree angle on the connector tip for the minimum possible backreflection.
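The Anti-Alias Filter entry above exists because any frequency above half the sampling rate folds back ("aliases") to a lower frequency; a sketch of where a tone lands after sampling:

```python
def alias_frequency(f, fs):
    """Apparent frequency after sampling a real tone of frequency f at rate fs.

    Frequencies fold around multiples of fs/2; the anti-alias filter
    removes content above fs/2 precisely so this folding never happens.
    """
    f = f % fs
    return min(f, fs - f)

print(alias_frequency(7000, 10000))   # 7 kHz sampled at 10 kHz -> 3000
print(alias_frequency(3000, 10000))   # below Nyquist: unchanged -> 3000
```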

Applet: A derivation of "application." You use the Applet tag in HTML code to define calls to Java-based applications within a browser. Thus, any Java program accessible through a Java-enabled browser can be referred to as an applet or Java applet.

Application Layer: Layer 7 of the OSI reference model. This layer provides services to application processes (such as e-mail, file transfer, and terminal emulation) that are outside of the OSI model. The application layer identifies and establishes the availability of intended communication partners (and the resources required to connect with them), synchronizes cooperating applications, and establishes agreement on procedures for error recovery and control of data integrity. It corresponds roughly with the transaction services layer in the SNA model. See also data-link layer, network layer, physical layer, presentation layer, session layer, and transport layer.

Application Programming Interface (API): APIs allow you to program to a pre-constructed interface (the API) instead of programming a device or piece of software directly. This allows for faster development, since programming to a device's API is designed to be easier than programming to the device directly. APIs allow you to program without having intimate knowledge of the device or software to which you are sending commands. For example, the OpenGL API allows you to create 3D effects without actually knowing the innards of your video card.

Application Information Table (AIT): Part of a DVB/MPEG transport stream that provides information about MHP applications associated with a particular service. For each application this information includes: the application type (e.g. DVB-J application or plug-in application); an application identifier that uniquely identifies the application; and an application control code that controls the lifecycle of the application. The last of these means that the lifecycle can be controlled by the information sent in the AIT.
When the user selects a service, the control codes in
the AIT are read. The AIT for a specific service is sent at frequent intervals and the application control flag can be updated between AIT versions. In this way the service provider can control the applications in the service; e.g. a news application is set to be AUTOSTART at the time when the 6:00pm news begins. Application Service Provider (ASP): A company that provides remote access to applications, typically over the Internet. ASPs are used when an organization finds it more cost-effective to have someone else host its applications than to host them itself. The applications served up can be as simple as access to a remote fileserver, or as complex as running an order entry system through your browser. The ASP provides the servers, network access, and applications to be used, typically for a monthly or yearly subscription fee. APS (Automatic Protection Switching): A 1+1 protection switch architecture is one in which the head end signal is permanently bridged (at the electrical level) to service and protection equipment to enable the same payload to be transmitted identically to the tail end service and protection equipment. At the tail end, each service and protection optical signal is monitored independently and identically for failures. The receiving equipment selects either the service or protection channel based upon the switching criteria. Architecture: The specifications of a system and how its subcomponents interconnect, interact and cooperate. Architectures are often described in multiple levels of abstraction from low-level physical to higher-level logical application and end-user views. Archive: As a noun, this refers generally to any type of backed-up data. It can refer to tapes, disks, or just simply a group of data that is an old copy of current data. That copy could be one minute old or several years old. As a verb, archive is the act of backing up data or creating an archive.
Off-line storage of video/audio onto backup tapes, floppy disks, optical disks, etc. AR Coating: Antireflection coating. A thin, dielectric or metallic film applied to an optical surface to reduce its reflectance and thereby increase its transmittance. Argument: What you have with your girlfriend when she wants you to stop using your computer so much. Actually, argument refers to the value with which you call a procedure. For example, if you wrote a line of code that said "goto 140" telling your program to go to line 140, the argument is "140." Some procedures will accept multiple arguments, or alternately, require no arguments. Arithmetic Logic Unit (ALU): The part of the CPU that actually does the work of adding, subtracting, multiplying, and dividing, including OR, AND, and NOT operations. The ALU is an execution unit, like the FPU, that is fed with data from the CPU registers. ARPANet (Advanced Research Projects Agency Network): A network of interconnected computers that formed the original Internet. The United States military funded the ARPANet, and construction was started in 1968. It was
designed to be a redundant network of computers so that no single disruption could break down communications between other units. The ARPANet expanded to universities for research, and soon after the Internet was born. Artifacts: Undesirable elements or defects in a video picture. These may occur naturally in the video process and must be eliminated in order to achieve a high-quality picture. Most common in analog are cross color and cross luminance. Most common in digital are macroblocks, which resemble pixelation of the video image. ASCII (American Standard Code for Information Interchange): A standard code for transmitting data, consisting of 128 letters, numerals, symbols, and special codes, each of which is represented by a unique binary number. Extended 8-bit versions represent 256 characters. The first 128 characters are standardized, and the first 32 of those are control codes, which don't really represent visible characters but rather codes that can be used for text formatting or actions, such as making the computer beep or clearing the screen. After the 32 control codes, the next 96 standardized characters represent numbers, letters (both uppercase and lowercase), and standard punctuation marks. The last 128 characters represent different things on different platforms. ASCII is being largely supplanted by Unicode. ASIC (Application specific integrated circuit): An integrated circuit designed for special rather than general applications. Aspect Ratio: The aspect ratio of an image is its displayed width divided by its height (usually expressed as "x:y"). For instance, the aspect ratio of a traditional television screen is 4:3, or 1.33:1. High definition television uses an aspect of 16:9, or about 1.78:1. Aspect ratios of 2.39:1 or 1.85:1 are frequently used in cinematography, while the aspect ratio of a full 35 mm film frame with soundtrack (also known as "Academy standard") is around 1.37:1.

Comparison of three common aspect ratios. The outer box (blue) and middle box (red) are common formats for cinematography. The inner box (green) is the format used in standard television. The 4:3 ratio for standard television has been in use since television's origins and many computer monitors use the same aspect ratio. Since 4:3 is close to the old 1.37:1 cinema academy format, theaters suffered from a loss of viewers after films were broadcast on TV. To prevent this, Hollywood created widescreen aspect
ratios to immerse the viewer in a more realistic experience and, possibly, to make broadcast films less enjoyable if watched on a regular TV set. 16:9 is the format of Japanese and American HDTV as well as European non-HD widescreen television (EDTV). Many digital video cameras have the capability to record in 16:9. Anamorphic DVD transfers store the information in 16:9 horizontally squeezed to 4:3; if the TV can handle an anamorphic image the signal will be de-anamorphosed by the TV to 16:9. If not, the DVD player will unsqueeze the image and add letterboxing before sending the image to the TV. Wider ratios such as 1.85:1 and 2.39:1 are accommodated within the 16:9 DVD frame by adding some additional masking within the image itself. Within the motion picture industry, the convention is to assign a value of 1 to the image height, so that, for example, an anamorphic frame is described as 2.39:1 or just "2.39". This way of speaking comes about because the width of a film image is restricted by the presence of sprocket holes and a standard intermittent movement interval of 4 perforations, as well as an optical soundtrack running down the projection print between the image and the perforations on one side. The most common projection ratios in American theaters are 1.85 and 2.39. The 16:9 format adopted for HDTV is actually narrower than commonly-used cinematic widescreen formats. Anamorphic widescreen (2.39:1) and American theatrical standard (1.85:1) have wider aspect ratios, while the European theatrical standard (1.66:1) is just slightly less. (IMAX, contrary to some popular perception, is not an extreme widescreen format: its native ratio is about 1.43:1.) Super 16mm film is frequently used for television production due to its lower cost, lack of need for soundtrack space on the film itself, and aspect ratio similar to 16:9 (Super 16mm is natively 1.66 whilst 16:9 is 1.78).
The term is also used in the context of computer graphics to describe the shape of an individual pixel in a digitized image. Most digital imaging systems use square pixels; that is, they sample an image at the same resolution horizontally and vertically. But there are some devices that do not, so a digital image scanned at twice the horizontal resolution of its vertical resolution might be described as being sampled at a 2:1 aspect ratio, regardless of the size or shape of the image as a whole.
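The letterboxing and masking described in the entry above is simple geometry: compare the content's aspect ratio to the frame's, and split the leftover area between two bars. A sketch (the function name and rounding to whole pixels are my own choices):

```python
def bars(frame_w: int, frame_h: int, content_ratio: float):
    """Return (style, bar_size_px) when fitting content of the given
    aspect ratio into a frame_w x frame_h frame."""
    frame_ratio = frame_w / frame_h
    if content_ratio > frame_ratio:       # wider content: bars top and bottom
        content_h = frame_w / content_ratio
        return "letterbox", round((frame_h - content_h) / 2)
    content_w = frame_h * content_ratio   # narrower content: bars at the sides
    return "pillarbox", round((frame_w - content_w) / 2)

print(bars(1920, 1080, 2.39))   # 2.39:1 film inside a 16:9 frame
print(bars(1920, 1080, 4 / 3))  # 4:3 video inside a 16:9 frame
```

The second call is the "barn doors" case defined later in this glossary: a 4:3 image on a 16:9 screen leaves 240-pixel bars on each side of a 1920x1080 raster.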

Assembly Language: A programming language specific to a microprocessor. It is a very low-level language, where you actually give the processor instructions like "MOV A,B", which moves a value from one register to another. As you might imagine, programming directly in assembly language is quite tedious. Thus, higher
level languages, such as C++, Visual Basic, or Java, are normally used and then compiled into assembly language specific to the microprocessor on which the program will be run. The compiler tries to optimize the code during this process (e.g., "MOV A,B" followed by "MOV B,C" might be replaced by "MOV A,C"). Depending on how elegant the optimization is, the code may run faster than if no optimization is used. Today, very small and fast programs can be created by using assembly language (defeating code bloat), but assembly language programming is becoming a dying art. Asymmetric communications: Two-way communications in which the volumes of traffic in each direction are significantly different. For example, TV on demand. Asynchronous: Lacking synchronization. In video, a signal is asynchronous when its timing differs from that of the system reference signal. A foreign video signal is asynchronous before it is treated by a local frame synchronizer.

Asynchronous Mapping - These mappings are defined for clear channel transport of digital signals that meet the standard DSX cross connect requirements, typically DSX-1 and DSX-3 in most practical applications. At the asynchronous mapping interface, frame acquisition and generation is not required. For example, if your system can transport a BERT (bit error rate test set) signal with a 2^23-1 (PRBS-23) test pattern, it is being asynchronously mapped for transport. Asynchronous Serial Interface (ASI): Commonly used for carrying (MPEG) compressed program (video, audio, etc.) as packet data on conventional digital TV studio 270Mb/s distribution and routing systems designed for uncompressed video per Rec ITU-R BT.601 and 656. The MPEG data stream payload can typically be equivalent to only several Megabit/sec up to say 34Mb/s. To be compatible with 270Mb/s equipment, the data is clocked at 270Mb/s, which bunches up the data and leaves intervening periods with no data (bursty mode). Normally these gaps are filled with coded filler (ASI stuffing bytes) to present a continuous stream. Refer to byte gap and gap size: Number of 10-bit, K28.5 special character commas inserted between bytes within transport stream packets in the ASI transmission protocol. Special character commas are used as stuffing data to spread out transport stream packets in time. Refer to DVB Blue Book Implementation Guidelines for the Asynchronous Serial Interface, DVB Document A055, May 2000; and ETSI TR 101 891 V1.1.1 (2001-02), Professional Interfaces: Guidelines for the implementation and usage
of the DVB Asynchronous Serial Interface (ASI). ATM (Asynchronous Transfer Mode): Asynchronous Transfer Mode (ATM) is a multiplexing technique that makes optimal and flexible use of the bandwidth of modern transmission links. ATM should not be confused with PDH or SDH transmission techniques; ATM defines how the available bandwidth of a transmission medium is used. It is a statistical time division multiplex method. Its essential features are:

- Packets (cells) of constant length (53 bytes)
- Cells comprising a 5-byte header and a 48-byte payload

Fig: An ATM Cell Consists of a Header and Payload Data

- No error control for user data
- Supports virtually all PDH and SDH tributary bit rates (from 1.5 Mbit/s)
- Based on SDH networks for public use, otherwise also cell based
- Suitable for all useful signals (video, voice, data) and transmission modes (connection-oriented, connectionless, constant bit rate, burst mode, variable bit rate)

Quality of Service (QoS) is one of the main features of ATM (Asynchronous Transfer Mode). Apart from variable bandwidth utilization, the customer (user) can, for the first time, order and pay for only the quality he really needs for his telecommunication service. Only ATM can handle QoS in such a flexible and comprehensive way. In addition, work is in progress to extend the IP protocol to include QoS aspects for LAN applications (RSVP, Resource Reservation Protocol; IPv6, the next IP version). When purchasing ATM service, you generally have a choice of four different types of service:
Constant Bit Rate (CBR): specifies a fixed bit rate so that data is sent in a steady stream. This is analogous to a leased line.
Variable Bit Rate (VBR): provides a specified throughput capacity but data is not sent evenly. This is a popular choice for voice and videoconferencing data.
Available Bit Rate (ABR): provides a guaranteed minimum capacity but allows data to be bursted at higher capacities when the network is free.
Unspecified Bit Rate (UBR): does not guarantee any throughput levels.
This is used for applications, such as file transfer, that can tolerate delays. A data transmission scheme using self-routing packets of 53 bytes, 48 of which are available for user data. Typically 25, 155, and 622 Mbps--the latter of which could be used to carry non-compressed ITU-R 601 video as a data file. SDH is the optical transport layer in wide area networks. This layer makes use of containers, which allows the highly flexible and secure transmission of any kind of telecommunication signal. SDH technology is the successor to PDH technology. Before SDH (synchronous digital hierarchy) was standardized and introduced, PDH (plesiochronous digital hierarchy), which is still being used, was state of the art. ATM Cell Header Format: An ATM cell header can be one of two formats: UNI (User-Network-Interface) or NNI (Network-Network-Interface). The UNI header is used for communication between ATM endpoints and ATM switches in private ATM networks. The NNI header is used for communication between ATM switches. Figure below depicts the basic ATM cell format, the ATM UNI cell header format, and the ATM NNI cell header format.

Fig: An ATM Cell, ATM UNI Cell, and ATM NNI Cell Header Each Contain 48 Bytes of Payload

Unlike the UNI, the NNI header does not include the Generic Flow Control (GFC) field. Additionally, the NNI header has a Virtual Path Identifier (VPI) field that occupies the first 12 bits, allowing for larger trunks between public ATM switches. ATM Network Interfaces: An ATM network consists of a set of ATM switches interconnected by point-to-point ATM links or interfaces. ATM switches support two primary types of interfaces: UNI and NNI. The UNI connects ATM end systems (such as hosts and routers) to an ATM switch. The NNI connects two ATM switches. Depending on whether the switch is owned and located at the customer's premises or is publicly owned and operated by the telephone company, UNI and NNI can be further subdivided into public and private UNIs and NNIs. A private UNI connects an ATM endpoint and a private ATM switch. Its public counterpart connects an ATM endpoint or private switch to a public switch. A private NNI connects two ATM switches within the same private organization. A public one connects two
ATM switches within the same public organization. An additional specification, the broadband intercarrier interface (B-ICI), connects two public switches from different service providers. The figure below illustrates the ATM interface specifications for private and public networks.

ATM Interface Specifications Differ for Private and Public Networks
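The UNI/NNI header difference described above is just a shift in where the bit fields fall inside the 5-byte header: the NNI reclaims the 4-bit GFC field to extend the VPI to 12 bits. A sketch of the field extraction (the field layout follows the standard UNI/NNI formats; the function name is my own):

```python
def parse_atm_header(header: bytes, nni: bool = False) -> dict:
    """Extract the fields of a 5-byte ATM cell header."""
    assert len(header) == 5
    b0, b1, b2, b3, b4 = header
    if nni:
        gfc = None                               # NNI has no GFC field
        vpi = (b0 << 4) | (b1 >> 4)              # 12-bit VPI
    else:
        gfc = b0 >> 4                            # 4-bit Generic Flow Control
        vpi = ((b0 & 0x0F) << 4) | (b1 >> 4)     # 8-bit VPI
    vci = ((b1 & 0x0F) << 12) | (b2 << 4) | (b3 >> 4)  # 16-bit VCI
    pt = (b3 >> 1) & 0x07                        # 3-bit Payload Type
    clp = b3 & 0x01                              # Cell Loss Priority bit
    hec = b4                                     # 8-bit Header Error Control
    return {"gfc": gfc, "vpi": vpi, "vci": vci, "pt": pt, "clp": clp, "hec": hec}

h = bytes([0x12, 0x34, 0x56, 0x78, 0x9A])        # arbitrary example header
print(parse_atm_header(h))                       # UNI view: 4-bit GFC, 8-bit VPI
print(parse_atm_header(h, nni=True))             # NNI view: 12-bit VPI, no GFC
```

Note how the same first twelve bits decode to gfc=1, vpi=35 under UNI but to a single vpi=291 under NNI; everything from the VCI onward is identical in both formats.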

ATSC (Advanced Television Systems Committee): Formed to establish technical standards for advanced television systems, including digital high definition television (HDTV). 1750 K Street NW, Suite 800, Washington, DC 20006. Tel: 202-828-3130. Fax: 202-828-3131. Email: atsc@atsc.org. Internet: www.atsc.org. The digital television (DTV) standard has ushered in a new era in television broadcasting. The impact of DTV is more significant than simply moving from an analog system to a digital system. Rather, DTV permits a level of flexibility wholly unattainable with analog broadcasting. An important element of this flexibility is the ability to expand system functions by building upon the technical foundations specified in ATSC standards such as the ATSC Digital Television Standard (A/53) and the Digital Audio Compression (AC-3) Standard (A/52). With NTSC, and its PAL and SECAM counterparts, the video, audio, and some limited data information are conveyed by modulating an RF carrier in such a way that a receiver of relatively simple design can decode and reassemble the various elements of the signal to produce a program consisting of video and audio, and perhaps related data (e.g., closed captioning). As such, a complete program is transmitted by the broadcaster that is essentially in finished form. In the DTV system, however, additional levels of processing are required after the receiver demodulates the RF signal. The receiver processes the digital bit stream extracted from the received signal to yield a collection of program elements (video, audio, and/or data) that match the service(s) that the consumer selected. This selection is made using system and service information that is also transmitted. Audio and video are delivered in digitally compressed form and must be decoded for presentation. Audio may be monophonic, stereo, or multi-channel. 
Data may supplement the main video/audio program (e.g., closed captioning, descriptive text, or commentary) or it may be a stand-alone service (e.g., a stock or news ticker). The nature of the DTV system is such that it is possible to provide new features
that build upon the infrastructure within the broadcast plant and the receiver. One of the major enabling developments of digital television, in fact, is the integration of significant processing power in the receiving device itself. Historically, in the design of any broadcast system, be it radio or television, the goal has always been to concentrate technical sophistication (when needed) at the transmission end and thereby facilitate simpler receivers. Because there are far more receivers than transmitters, this approach has obvious business advantages. While this trend continues to be true, the complexity of the transmitted bit stream and compression of the audio and video components require a significant amount of processing power in the receiver, which is practical because of the enormous advancements made in computing technology. Once a receiver reaches a certain level of sophistication (and market success) additional processing power is essentially free. The Digital Television Standard describes a system designed to transmit high quality video and audio and ancillary data over a single 6 MHz channel. The system can deliver about 19 Mbps in a 6 MHz terrestrial broadcasting channel and about 38 Mbps in a 6 MHz cable television channel. This means that encoding HD video essence at 1.106 Gbps (highest rate progressive input) or 1.244 Gbps (highest rate interlaced picture input) requires a bit rate reduction by about a factor of 50 (when the overhead numbers are added, the rates become closer). To achieve this bit rate reduction, the system is designed to be efficient in utilizing available channel capacity by exploiting complex video and audio compression technology. The compression scheme optimizes the throughput of the transmission channel by representing the video, audio, and data sources with as few bits as possible while preserving the level of quality required for the given application.
The RF/transmission subsystems described in the Digital Television Standard are designed specifically for terrestrial and cable applications. The structure is such that the video, audio, and service multiplex/transport subsystems are useful in other applications.
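The bit-rate-reduction factor quoted above is easy to check with round numbers; exactly which factor you get (roughly 50 to 65) depends on whether the raw rate counts only active picture or includes blanking overhead. A hedged arithmetic sketch (19.39 Mbit/s is the commonly quoted ATSC terrestrial payload, an assumption refining the "about 19 Mbps" in the text):

```python
raw_interlaced_bps = 1.244e9   # highest-rate interlaced input, per the text
channel_bps = 19.39e6          # ATSC 6 MHz terrestrial payload (assumed exact value)

# Gross ratio of raw input rate to channel payload; counting only the
# active picture area (excluding blanking) brings the factor nearer 50.
ratio = raw_interlaced_bps / channel_bps
print(round(ratio))
```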
Figure 1 Block diagram of functionality in a transmitter/receiver pair. RF Transmission refers to channel coding and modulation. The channel coder takes the digital bit stream and adds additional information that can be used by the receiver to reconstruct the data from the received signal which, due to transmission impairments, may not accurately represent the transmitted signal. The modulation (or physical layer) uses the digital bit stream information to modulate a carrier for the transmitted signal. The modulation subsystem offers two modes: an 8-VSB mode and a 16-VSB mode. ATSC Formats are 18 voluntary video formats shown in the following table.

The U.S. digital television transmission standard using MPEG-2 compression and the audio surround-sound compressed with Dolby Digital (AC-3). So that a wide
variety of source material, including that from computers, can be best accommodated, two line standards are included--each operating at 24, 30, and 60 Hz. Note that 1,088 lines are actually coded in order to satisfy the MPEG-2 requirement that the coded vertical size be a multiple of 16 (progressive scan) or 32 (interlaced scan). See the list of related specifications attached here. Attached: A physical channel of a digital picture manipulator is attached to a logical channel of a controller if the physical channel is successfully acquired by the controller. A physical channel may be attached to only one logical channel of one controller at a time. Attenuation: 1) The decrease in signal strength along a fiber optic waveguide caused by absorption and scattering. Fiber optic attenuation is caused by imperfect transparency of the fiber, bending the fiber at too small a radius, nicks in the fiber, splices, poor fiber terminations, FOTs, etc. Attenuation is usually expressed in dB/km.

2) A loss of signal strength in an electrical or radio signal, usually related to the distance the signal must travel. Electrical attenuation is caused by the resistance of the conductor, poor (corroded) connections, poor shielding, induction, RFI, etc. 3) Radio signal attenuation may be due to atmospheric conditions, sun spots, antenna design / positioning, obstacles, etc. Attenuator: 1) In electrical systems, a usually passive network for reducing the amplitude of a signal without appreciably distorting the waveform. 2) In optical systems, a passive device for reducing the amplitude of a signal without appreciably distorting the waveform. ATV (Advanced Television): Digital television, including standard, enhanced and high-definition versions. Refer to ATSC. AU (also SND): Interchangeable audio file formats used in the Sun SPARCstation, NeXT and Silicon Graphics (SGI) computers. Essentially a raw audio data format preceded by an identifying header. The .au file is cross-platform compatible. Auditory Masking: A phenomenon that occurs when two sounds of similar frequencies occur at the same time. Because of auditory masking, the louder sound drowns out the softer sound and makes it inaudible to the human ear. Autotiming: Capability of some digital video equipment to automatically adjust input video timing to match a reference video input. Eliminates the need for manual timing adjustments.
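Because fiber attenuation is quoted in dB/km, end-to-end loss is simply loss-per-km times distance, applied on a logarithmic scale. A sketch of that conversion (the function name and the example figures are illustrative, not from the text):

```python
def fiber_output_power_mw(p_in_mw: float, loss_db_per_km: float, km: float) -> float:
    """Optical power remaining after a fiber run with the given attenuation."""
    total_loss_db = loss_db_per_km * km
    return p_in_mw * 10 ** (-total_loss_db / 10)

# 1 mW launched into 20 km of fiber at 0.35 dB/km gives 7 dB total loss,
# leaving roughly a fifth of the launched power:
print(round(fiber_output_power_mw(1.0, 0.35, 20.0), 3))  # -> 0.2
```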


Authentication: A process of proving the identity of a computer or computer user. For users, it generally involves a user name and password. Computers usually pass a code that identifies that they are part of a network. Availability - The foundation for many Bellcore reliability criteria is an end-to-end, two-way availability objective of 99.98% for interoffice applications (0.02% unavailability, or 105 minutes/year of down time). The objective for loop transport between the central office and the customer premises is 99.99%. For interoffice transport the objective refers to a two-way broadband channel, e.g. SONET OC-N, over a 250-mile path. For loop applications the objective refers to a two-way narrowband channel, e.g. DS0 or equivalent. Avalanche Photodiode (APD): A photodiode that exhibits internal amplification of photocurrent through avalanche multiplication of carriers in the junction region.
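The availability objectives above map directly to annual downtime, which is easy to verify with the number of minutes in an average year:

```python
def downtime_minutes_per_year(availability: float) -> float:
    """Annual downtime implied by a fractional availability figure."""
    minutes_per_year = 365.25 * 24 * 60   # 525,960 minutes in an average year
    return (1.0 - availability) * minutes_per_year

print(round(downtime_minutes_per_year(0.9998)))  # -> 105 (interoffice objective)
print(round(downtime_minutes_per_year(0.9999)))  # -> 53 (loop objective)
```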

AVI: Audio video interleaving. The Microsoft Video for Windows file format for combining video and audio into a single block in time, such as a 1/30th-second video frame. In this file format, blocks of audio data are woven into a stream of video frames. ASF is intended to supersede AVI. AVO: Audiovisual object. In MPEG-4, audiovisual objects (also AV objects) are the individual media objects of a scene--such as video objects, images, and 3D objects. AVOs have a time dimension and a local coordinate system for manipulation, and are positioned in a scene by transforming the object's local coordinate system into a common, global scene coordinate system.


AWGN (Additive White Gaussian Noise): In communications, the additive white Gaussian noise (AWGN) channel model is one in which the only impairment is the linear addition of wideband or white noise with a constant spectral density (expressed as watts per hertz of bandwidth) and a Gaussian distribution of amplitude. The model does not account for the phenomena of fading, frequency selectivity, interference, nonlinearity or dispersion. However, it produces simple, tractable mathematical models which are useful for gaining insight into the underlying behavior of a system before these other phenomena are considered. Wideband Gaussian noise comes from many natural sources, such as the thermal vibrations of atoms in antennas, "black body" radiation from the earth and other warm objects, and from celestial sources such as the sun. The AWGN channel is a good model for many satellite and deep space communication links. It is not a good model for most terrestrial links because of multipath, terrain blocking, interference, etc. Axis: Relating to digital picture manipulation, the X axis is a horizontal line across the center of the screen, the Y axis is a vertical line, and the Z axis is in the third dimension, perpendicular to the X and Y axes, and indicates depth and distance.
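The AWGN model above, output = signal + noise with the noise drawn from a zero-mean Gaussian, takes only a few lines of code. This sketch sizes the noise to a target signal-to-noise ratio (the function name and the SNR-based parameterization are my own choices):

```python
import math
import random

def awgn(samples, snr_db, rng=random):
    """Add white Gaussian noise so the output has roughly the given SNR."""
    p_signal = sum(s * s for s in samples) / len(samples)  # mean signal power
    p_noise = p_signal / (10 ** (snr_db / 10))             # target noise power
    sigma = math.sqrt(p_noise)                             # noise std deviation
    return [s + rng.gauss(0.0, sigma) for s in samples]

random.seed(42)
clean = [1.0, -1.0] * 500          # simple BPSK-like sequence, unit power
noisy = awgn(clean, snr_db=10.0)   # noise power ~0.1 relative to the signal
```

Because each noise sample is independent and identically distributed, the model captures none of the fading, interference, or dispersion effects the entry lists as its limitations.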


-B-
Backbone: A segment of a network that's often a higher speed than the rest of the network and that connects all the other segments. If you don't have a backbone faster than the rest of your network, your network could lag. That's why a lot of ISPs are constantly restructuring their backbones. Banner ad: The most common form of advertising found on the Web. The traditional size of a banner ad is 468 by 60 pixels, but there are many other sizes in general use. Banner ads are typically animated GIF graphics or Flash animations that are simply clicked on to go to the website of the advertiser, but can also contain rich media that allows for some degree of interaction with the ad besides a simple click. BASIC (Beginner's All-purpose Symbolic Instruction Code): This programming language was developed in the mid '60s. The language was constructed of simple, English-like commands that were run through an interpreter, line by line, each time the program was "run." This caused BASIC programs to be slow. Nowadays compilers have been developed to speed up BASIC programs, and recent versions of BASIC can do anything that other programming languages can. Back Channel: A means of communication from users to content providers. Examples include a connection between the central office and the end user, an Internet connection using a modem, or systems where content providers transmit interactive television (analog or digital) to users while users can connect through a back channel to a web site, for example.

Backup: A copy of the work created on a computer, stored in a safe place such as a floppy disc, CD-ROM or, through a network, on another computer. Backward Compatibility: A newer coding standard is backward compatible with an older coding standard if decoders designed to operate with the older coding standard are able to continue to operate by decoding all or part of a bitstream produced according to the newer coding standard. Backward Motion Vector: A motion vector that is used for motion compensation from a reference frame or reference field at a later time in display order. Backward Prediction: Prediction from the future reference frame (field).


Bandwidth: 1) The complete range of frequencies over which a circuit or electronic system can function with minimal signal loss, typically less than 3 dB. 2) The information-carrying capability of a particular television channel. In PAL systems, the bandwidth limits the maximum visible frequency to 5.5 MHz, in NTSC, 4.2 MHz. The ITU-R 601 luminance channel sampling frequency of 13.5 MHz was chosen to permit faithful digital representation of the PAL and NTSC luminance bandwidths without aliasing. In transmission, the United States analog and digital television channel bandwidth is 6 MHz. Bandwidth-Distance Product: Of an optical fiber, under specified launching and cabling conditions, at a specified wavelength, a figure of merit equal to the product of the fiber's length and the 3 dB bandwidth of the optical signal. The bandwidth-distance product is usually stated in megahertz-kilometer (MHz-km) or gigahertz-kilometer (GHz-km). It is a useful figure of merit for predicting the effective fiber bandwidth for other lengths, and for concatenated fibers. Barn Doors: A term used in television production to describe the effect that occurs when a 4:3 image is viewed on a 16:9 screen. When this happens, viewers see black bars on the sides of the screen or "barn doors." Baseband: A signaling technique in which the signal is transmitted in its original form and not changed by modulation. Local Area Networks as a whole, fall into two categories: baseband and broadband. Baseband networks are simpler and cheaper; the entire bandwidth of the LAN cable is used to transmit a single digital signal. In broadband networks, the capacity of the cable is divided into channels, which can transmit many simultaneous signals. Broadband networks may transmit a mixture of digital and analog signals, as will be the case in hybrid fiber/coax interactive cable television networks. Base Layer: First, independently decodable layer of a scalable hierarchy.
Batch: A group of commands that are executed one at a time. Batch File: A file in a DOS/Windows environment with the .bat extension. This file type is executable in DOS or at a Windows command prompt. Batch programs are written in a batch programming language that utilizes a superset of standard DOS commands. Baud: A unit of signaling speed equal to the number of signal events per second. Baud is equivalent to bits per second in cases where each signal event represents exactly one bit. Often the term baud rate is used informally to mean baud, referring to the specified maximum rate of data transmission along an interconnection. Typically, the baud settings of two devices must match if the devices are to communicate with one another. BBS (Bulletin Board System): A bulletin board system used to describe message boards that people would dial into directly with modems before the Internet was easily accessible. Instead of dialing into a network where everything is connected, you had your choice of a group of BBSs to dial into, and each one tried to offer the
most members, files, and graphics to its members. Typically you paid for access on a monthly basis. More recently, the term describes Internet-based message boards or forums. BCC (Blind Carbon Copy): When sending an e-mail, if you BCC someone you are sending him or her a copy of your e-mail, but not allowing the recipients in the "To" or "CC" fields of your e-mail client to know that the BCC recipient was sent the message as well. BCC is often used for covert company communications, such as if you are getting irritated at someone and want to let someone else in on it without alerting the party you are irritated about, or if you are sending the CEO of your company a mail telling him or her he or she is wrong about something and want to BCC copies to your friends to gloat over it. Use BCC with caution. One of the most common uses of BCC is when sending mass e-mails; just send the e-mail to yourself and BCC it to the whole group you are sending to. That way, your mailing list is not known to any of the members. BCD: Binary coded decimal. A coding system in which each decimal digit from 0 to 9 is represented by four binary (0 or 1) digits. BCH (Bose-Chaudhuri-Hocquenghem) Code: A family of commonly-used linear block codes, named after its co-discoverers, and a type of forward error correction (FEC). Beam: A portion of a satellite antenna's footprint on earth. A typical satellite footprint is divided into a large number of beams. Beam width: In an antenna, the angular sector in degrees of the radiated power pattern at the half-power (3 dB) point. Bel (B): A measure of voltage, current, or power gain. One bel is defined as a tenfold increase in power. If an amplifier increases a signal's power by 10 times, its power gain is 1 bel or 10 decibels (dB). If power is increased by 100 times, the power gain is 2 bels or 20 decibels. An increase of 3 dB corresponds approximately to a doubling of power. The logarithm to the base 10 of a power ratio, expressed as: B = log10(P1/P2), where P1 and P2 are the two power levels being compared.
The decibel, equal to one-tenth bel, is a more commonly used unit. Beep Code: When you turn on your PC--and all is well--you typically hear a single beep from your computer speaker, signaling that all is OK. If things are wrong--and the computer BIOS senses that things are wrong--a series of beeps will be emitted. This is the beep code. Based on the number of beeps, you can look up the meaning in your motherboard documentation and often diagnose the problem. Obvious things to check are whether your video card, processor, and/or memory are plugged in and seated properly. Bending Loss: Attenuation caused by high-order modes radiating from the outside of a fiber optic waveguide which occur when the fiber is bent around a small radius. See also macrobending, microbending.
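The bel/decibel arithmetic given under "Bel" above can be checked with a short sketch; `power_gain_db` is a hypothetical helper name, not part of any standard library:

```python
import math

def power_gain_db(p1, p2):
    """Gain in decibels: dB = 10 * log10(P1/P2); bels = log10(P1/P2)."""
    return 10 * math.log10(p1 / p2)

# A tenfold power increase is 1 bel (10 dB),
# a hundredfold increase is 2 bels (20 dB),
# and a doubling of power is approximately 3 dB.
assert power_gain_db(10, 1) == 10.0
assert power_gain_db(100, 1) == 20.0
assert round(power_gain_db(2, 1), 2) == 3.01
```

Note that the "3 dB doubling" is an approximation: the exact value is 10 * log10(2) ≈ 3.0103 dB.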
BER: Bit error rate. Bit errors are caused by interference or loss of signal, so the stream of bits composing the DTV picture is disrupted. A measure of the errors in a transmitted signal. The number of bit errors detected in a unit of time, usually one second. Bit Error Rate (BER) is calculated with the formula: BER = errored bits received / total bits sent. Betacam: An analog component VTR system using 1/2-inch tape cassettes. This was developed by Sony and is marketed by them and several other manufacturers. Although it records the Y, R-Y and B-Y component signals onto tape, many machines are operated with coded (PAL or NTSC) video in and out. The system has continued to be developed over the years to offer models for the industrial and professional markets as well as full luminance bandwidth (Betacam SP), PCM audio and SDI connections. Digital versions exist as the high-end Digital Betacam and Betacam SX for ENG and similar applications. Betacam SX: A digital tape recording format developed by Sony which uses a constrained version of MPEG-2 compression at the 4:2:2 profile, Main Level (422P@ML), using 1/2-inch tape cassettes. B Frames or B Pictures (Bidirectionally Predictive-coded Pictures): Pictures that use both future and past pictures as a reference. This technique, using motion compensated prediction from past and/or future reference fields or frames, is termed bidirectional prediction. B-pictures provide the most compression. B-pictures do not propagate coding errors as they are never used as a reference. Bidirectionally Predictive Coding: [Figure: example of bidirectional prediction]

In the example, some of the picture information is not present in the previous reference frame. A B-picture is therefore coded like a P-picture, except that its motion vectors can reference the previous reference picture, the next reference picture, or both. [Figure: mechanism of B-picture coding]
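The BER formula given under "BER" above reduces to a one-line computation; the error counts here are invented for illustration:

```python
def bit_error_rate(errored_bits, total_bits):
    # BER = errored bits received / total bits sent
    return errored_bits / total_bits

# e.g. 3 bit errors detected in one million transmitted bits:
ber = bit_error_rate(3, 1_000_000)
assert ber == 3e-06
```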
BIFS: Binary Format for Scenes. In MPEG-4, a set of elements called nodes that describe the layout of a multimedia scene. BIFS-Update streams update the scene in time; BIFS-Anim streams animate the scene in time. BIFS are organized in a tree-like hierarchical scene graph node structure derived from VRML. Big Picture: A coded picture that would cause VBV buffer underflow as defined in C.7. Big pictures can only occur in sequences where low_delay is equal to 1. "Skipped picture" is a term that is sometimes used to describe the same concept. Binary: A base-2 numbering system using the digits 0 and 1 (as opposed to 10 digits [0 - 9] in the decimal system). In computer systems, the binary digits are represented by two different voltages or currents, one corresponding to 0 and the other corresponding to 1. All computer programs are executed in binary form. Binary representation requires a greater number of digits than the base 10 decimal system more commonly used. For example, the base 10 number 254 is 11111110 in binary. The result of a binary multiplication can contain as many digits as the two original numbers combined. So: 10101111 x 11010100 = 1001000011101100 (In decimal 175 x 212 = 37,100) (From right to left, the digits represent 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768) Each digit is known as a bit. This example multiplies two 8-bit numbers to produce a 16-bit result--a very common process in digital television equipment.
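The multiplication example in the "Binary" entry can be verified directly; Python's 0b integer literals make the check straightforward:

```python
a = 0b10101111          # 175 in decimal
b = 0b11010100          # 212 in decimal
product = a * b

assert product == 37100                      # 175 x 212
assert bin(product) == "0b1001000011101100"  # the 16-bit result
assert product < 2 ** 16                     # two 8-bit inputs fit in 16 bits
```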
Binary Code: Binary consists of a string of bits, with bits represented by 1s and 0s, e.g., 01010111000000001. The "bi" refers to base 2 mathematical representation (1s and 0s). Binary N-Zero Suppression (BNZS): Line coding system that replaces N number of zeros with a special code to maintain the pulse density required for clock recovery. N is typically 3, 6, or 8. BIOS (Basic Input Output System): A program stored on your motherboard that controls all of the interaction between your components and your chipset. Simple access routines for video, keyboard, hard drive, floppy, CD-ROM, and other devices--enough to get an operating system loaded up--are included in the BIOS. Your BIOS is there to get things started for the operating system. BIP (Bit-Interleaved Parity): A parity check that groups all the bits in a block into units (such as a byte), then performs a parity check for each bit position in the group. BIP-8 (Bit Interleaved Parity 8): BIP-8 is a method used for error monitoring where each bit of the BIP-8 code word, or byte, corresponds to even parity as calculated across matching bit positions for the distinct bytes in a SONET frame. That is, the first BIP-8 bit would correspond to even parity across bit number 1 of a certain number of bytes in the SONET frame. The certain number of bytes depends upon whether you are calculating section, line, or path BIP-8.

BIP-8 vs BER (where N is the N in OC-N):

BER      Max. Detection Time    BIP-8 Violations
10e-3    10 ms                  344xN
10e-4    100 ms                 492xN
10e-5    1 s                    510xN
10e-6    10 s                   512xN
10e-7    100 s                  512xN
10e-8    1,000 s                512xN
10e-9    10,000 s               512xN

The BIP violations are counted over a sliding time window equal to the maximum detection time. The BIP-8 violation counts of individual STS-1s of an STS-N are added together. The above values take into account the error detection saturation effect at high BER. Design objectives for the average detection time depend on the level N of the Optical Carrier and are typically an order of magnitude lower than the maximum detection times shown in the above table for OC-3. As the optical rate increases, the average detection time decreases proportionally. B-ISDN (Broadband Integrated Services Digital Network): A single ISDN network which can handle voice, data, and eventually video services.
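A minimal sketch of the even-parity idea behind BIP-8: computing even parity independently for each of the 8 bit positions across a run of bytes is equivalent to XOR-ing the bytes together. The frame bytes below are invented for illustration, not taken from a real SONET frame:

```python
from functools import reduce

def bip8(payload: bytes) -> int:
    """Even parity per bit position across all bytes = XOR of the bytes."""
    return reduce(lambda acc, b: acc ^ b, payload, 0)

frame = bytes([0b10110000, 0b01100001, 0b11111111])  # made-up frame bytes
code = bip8(frame)

# Appending the BIP-8 byte makes every bit position even parity,
# so the XOR over (frame + code) is zero.
assert code == 0b00101110
assert bip8(frame + bytes([code])) == 0
```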
Bistable Multivibrator (flip-flop): A simple element of memory made up of an assembly of logic gates. Based on inputs, the state of a flip-flop can be changed back and forth, affecting the future output of the flip-flop. Bit (Binary Digit): The smallest unit of data in a digital system. A bit is a single one or zero. A group of bits, such as 8 bits or 16 bits, composes a byte. The number of bits in a byte depends upon the processing system being used. Typical byte sizes are 8, 16, and 32.

Bit Bucket: Any device capable of storing digital data--whether it be video, audio or other types of data. Bit Budget: The total amount of bits available on the media being used. In DVD, the bit budget of a single sided/single layer DVD5 disk is actually 4.7 GB. Bit Depth: The number of levels that a pixel might have, such as 256 with an 8-bit depth or 1,024 with a 10-bit depth. - How many bits it takes to represent the color in one pixel. The greater the bit depth, the more colors you can potentially display, and the more power it takes to represent a screen full of pixels at that bit depth. Common bit depths are 16 and 32 bits. Bit depth of 1 would only enable 2 colors, bit depth of 2 enables 4 colors, and so on, where the amount of potential colors can be determined by 2^bit_depth. Bitmap: A type of graphics format that specifies a number of pixels and then specifies what color those pixels are. A simplified example: 5 red, 4 green, 3 red, etc. GIF and JPEG files are bitmaps. If an image contains large blocks of solid color, it will take less storage space than an image with a lot of small regions with different colors. Bitmap font: A font where each character is stored as a bitmap graphic. These fonts are not easy to scale to different sizes. Bit-Stuffing: In asynchronous systems, a technique used to synchronise asynchronous signals to a common rate before multiplexing. Bit Parallel: Transmission of digital video a byte at a time down a multi-conductor cable where each pair of wires carries a single bit. This standard is covered under SMPTE 125M, EBU 3267-E, and ITU-R BT.656-4.
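The 2^bit_depth relationship described under "Bit Depth" can be sketched as follows; `colors_for_bit_depth` is a hypothetical helper name:

```python
def colors_for_bit_depth(bit_depth: int) -> int:
    # Number of distinct values one pixel can represent: 2 ** bit_depth
    return 2 ** bit_depth

assert colors_for_bit_depth(1) == 2            # bit depth 1: two colors
assert colors_for_bit_depth(8) == 256          # 8-bit: 256 levels
assert colors_for_bit_depth(10) == 1024        # 10-bit: 1,024 levels
assert colors_for_bit_depth(24) == 16_777_216  # 24-bit "true color"
```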
Bit Period (T): The amount of time required to transmit a logical one or a logical zero.
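The bit period is simply the reciprocal of the bit rate. A quick sketch (the 270 Mbit/s figure is the standard-definition serial digital interface rate; the helper name is hypothetical):

```python
def bit_period_seconds(bit_rate_bps: float) -> float:
    # T = 1 / bit rate
    return 1.0 / bit_rate_bps

# At 270 Mbit/s each bit lasts roughly 3.7 nanoseconds.
t = bit_period_seconds(270e6)
assert round(t * 1e9, 1) == 3.7
```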

Bit Rate: The rate at which the compressed bit stream is delivered from the channel to the input of a decoder. Bit Rate Reduction: See: Compression. Bit Serial: Transmission of digital video a bit at a time down a single conductor such as coaxial cable. May also be sent through fiber optics. This standard is covered under ITU-R BT.656-4. Bit Slippage: 1. Occurs when word framing is lost in a serial signal so that the relative value of a bit is incorrect. This is generally reset at the next serial signal (TRS-ID for composite and EAV/SAV for component). 2. The erroneous reading of a serial bit stream when the recovered clock phase drifts enough to miss a bit. 3. A phenomenon that occurs in parallel digital data buses when one or more bits get out of time in relation to the rest. The result is erroneous data. Differing cable lengths are the most common cause. Bit Stream: A continuous series of bits transmitted on a line. Black Level: Describes the appearance of darker portions of a video image. Black is the absence of light, so to create the black portions of an image, a display must be able to shut off as much light as possible. Displays with good black level capability not only produce deeper blacks, but also reveal more details and shading in dark or shadowy scenes. High-quality CRT-based TVs generally have better black level performance than digital displays such as plasma, LCD, DLP, and LCoS. Block: Rectangular area of picture, usually 8 x 8 pixels in size, which is individually subjected to DCT coding as part of a digital picture compression process. Artifact of compression generally showing momentarily as misplaced rectangular areas of picture with distinct boundaries. This is one of the major defects of digital compression, its visibility generally depending on the amount of compression used, the quality of the original signal, and the quality of the coder.
The visible blocks may be 8 x 8 DCT blocks or "misplaced blocks"--16 x 16 pixel macroblocks, due to the failure of motion prediction/estimation in encoder or other motion vector system, such as a standards converter.
[Figure: The MPEG Data Hierarchy]
Block Coding: Block coding is the primary type of channel coding used in earlier mobile communication and digital broadcasting systems. Simply put, it adds redundancy so that the receiver can decode with (theoretical) probability of zero errors, provided that the rate (R) does not exceed the channel capacity. There are many types of block codes, but the most important by far is Reed-Solomon coding because of its widespread use in DTV, the Compact Disc, the DVD, and in computer hard drives. Golay, BCH and Hamming codes are other examples of block codes. Block Error rate (BLER): One of the underlying concepts of error performance is the notion of errored blocks; i.e., blocks in which one or more bits are in error. A block is a set of consecutive bits associated with the path or section monitored by means of an Error Detection Code (EDC), such as Bit Interleaved Parity (BIP). Block Error rate (BLER) is calculated with the formula: BLER = errored blocks received / total blocks sent. Bluetooth: Special Interest Group (www.bluetooth.com) founded in 1998 by Ericsson, IBM, Intel, Nokia and Toshiba. Bluetooth is an open standard for short-range transmission of digital voice and data between mobile devices (laptops, PDAs, phones) and desktop devices. It supports point-to-point and multipoint applications. Bluetooth provides up to 720 Kbps data transfer within a range of 10 meters and up to 100 meters with a power boost. Bluetooth transmits in the unlicensed 2.4GHz band and uses a frequency hopping spread spectrum technique that changes its signal 1600 times per second. If there is interference from other devices, the transmission does not stop, but its speed is downgraded. Ericsson (a Scandinavian company) was the first to develop Bluetooth. The name comes from Danish King Harald Blatand (Bluetooth), who in the 10th century began to Christianize the country. BNC (British Naval Connector): A connector type for 10Base2 or Thin-Net networks.
Shaped like the letter T, it connects coaxial cables. The "T" has two male connectors and one female connector. The female connector can connect to the
male connection of an Ethernet card that supports BNC connections, or remain empty. The male connectors link to coaxial cable or a terminator. The BNC part stands for either British Naval Connector, British Nut Connector, or Bayonet Neill-Concelman ... or perhaps it stands for some combination of those terms. Neill and Concelman are the names of the inventors of the BNC connector. Boot Up: To start up. Most computers contain a system operating program that they read out of memory and operate from after power up or restart. The process of reading and running that program is called boot up. Bottom Field: One of two fields that comprise a frame. Each line of a bottom field is spatially located immediately below the corresponding line of the top field. Bottleneck: Part of a system that limits the performance of the system. This term was derived from the neck of a bottle, which limits the flow of liquid due to the smaller circumference of the neck as compared to the wider body of the bottle. Often you will hear of people attempting to find and eliminate the bottlenecks in their computer systems or networks. This is certainly a helpful practice if the bottleneck is slowing things down inordinately, but as soon as you remove one bottleneck from a system, remember that something else immediately becomes the bottleneck. For example, if you get a faster processor to speed up your 3D video, the 3D video card may be the bottleneck afterwards. Bouquet: A collection of services marketed as a single entity. BPSK: Biphase shift keying. BPSK is a digital phase modulation technique used for sending data over a coaxial cable network. This type of modulation is less efficient--but also less susceptible to noise--than similar modulation techniques, such as QPSK and 64QAM. Brillouin Scattering: Stimulated Brillouin scattering is an interaction between the optical signal and the acoustic waves in the fiber that causes the optical power to be scattered backwards towards the transmitter.
It is a narrowband process that affects each channel in a DWDM system individually. It is noticeable in systems that have channel powers in excess of 5 - 6 dBm. In most cases SBS can be suppressed by modulating the laser transmitter to broaden the line width. Broadband: 1) A response that is the same over a wide range of frequencies. 2) Capable of handling frequencies greater than those required for high-grade voice communications (higher than 3 to 4 kilohertz). Broadcast: A method of sending information over a network. With broadcasting, data comes from one source and goes to all other connected sources. This has the side effect of congesting a medium or large network segment very quickly. Sometimes broadcasting is necessary to locate network resources, but once found, more advanced networking protocols change to point-to-point connections to transmit data. Nowadays, switches and routers often do not pass along broadcast packets, but in the days of shared Ethernet broadcasting could really congest a network.
Broadcaster (Service Provider): An organization which assembles a sequence of events or programmes to be delivered to the viewer based upon a schedule. Broadcast FTP Protocol (BFTP): A one-way IP multicast based resource transfer protocol; the unidirectional Broadcast File Transfer Protocol (BFTP) is a simple, robust, one-way resource transfer protocol that is designed to efficiently deliver data in a one-way broadcast-only environment. This transfer protocol is appropriate for IP multicast over the television vertical blanking interval (IPVBI), for IP multicast carried in MPEG-2, such as with DVB multiprotocol encapsulation, or for other unidirectional transport systems. It delivers constant bitrate (CBR) services or opportunistic services, depending on the characteristics and features of the transport stream multiplexer or VBI insertion device. Browser: Most commonly used to refer to a software program used to look at World Wide Web pages. More technically, a browser reads HTML pages over TCP/IP port 80 using HTTP. Thus, you will notice the term "HTTP" before the URL in your browser address bar, such as: http://www.geek.com/ Buffer: 1) A temporary location to store or group information in hardware or software. Buffers are used whenever data is received in sizes that may be different than the ideal size for the hardware or software that uses the buffer. For example, a 64-bit processor on a 16-bit bus may have a buffer to hold 16-bit requests until they equal 64 bits. Another use of buffers is to keep hardware from getting overwhelmed with information. In that scenario, you use a large buffer to hold data until a device or program is ready to receive it, instead of just pushing it onto a device that might not be ready. Buffers must be optimized in size to work efficiently for the purpose for which they are designed. 2) In optical fiber, a protective coating applied directly to the fiber (illustrated).

Buffered Memory: Memory modules with extra register chips that buffer signals between the memory controller and the memory chips; such modules often also support Error Checking and Correcting (ECC) functionality. Buffer Management: Rules are specified so as to avoid underflow and overflow of receive side buffers. This is achieved by a hypothetical receive side timing model, which specifies timing relationships between outgoing PDUs at the send side. Buffer Overload: See: Video coder overload. Bug: This is commonly an error in design or programming in a hardware device or piece of software. The effects of a bug may be as harmless as an extra graphic on the screen, or as harmful as a system crash or loss of data. The first computer "bug" was a real bug, a moth, in fact, that was stuck between relays in an early computer back in 1945. See also: Feature. Burn-in: Screen burn-in can damage displays that rely on a phosphor coating on the screen. Plasma TVs and rear-projection CRT-based TVs are the most vulnerable to burn-in; it is less likely, but still possible, with direct-view CRT TVs. Burn-in can occur when a static image such as a video game, stock or news ticker, or station logo remains on-screen for an extended period. Over time, these images can become etched into the phosphor coating, leaving faint but permanent impressions on-screen. The chance of burn-in can be reduced or eliminated by properly adjusting a display's brightness and contrast settings. Bus: A group of conductors that together constitute a major signal path. A signal path to which a number of inputs may be connected to feed one or more outputs.

Bus Address: A code number sent out to activate a particular device on a shared communications bus. Business to Business (B2B): This term is often used to describe websites that sell goods or services to other businesses. Thus, businesses are serving other businesses as opposed to consumers. Business to Consumer (B2C): A form of doing business that deals with selling goods and services to the consumer marketplace. Examples of this would be selling consumer electronics, toys, or pet supplies. This contrasts with the business to business model. BWF (Broadcast WAV): An audio file format based on Microsoft's WAV. It can carry PCM or MPEG encoded audio and adds the metadata, such as a description, originator, date and coding history, needed for interchange between broadcasters. Bypass: The ability of a station to isolate itself optically from a network while maintaining the continuity of the cable plant.
Byte: A group of data bits that are processed together. Typically, a byte consists of 8, 16, 24 or 32 bits.
1 Byte = 8 bits = 256 discrete values (brightness, color, etc.)
1 Kilobyte = 2^10 bytes = 1,024 bytes (not 1,000 bytes)
1 Megabyte = 2^20 bytes = 1,048,576 bytes (not 1 million bytes)
1 Gigabyte = 2^30 bytes = 1,073,741,824 bytes (not one billion bytes)
1 Terabyte = 2^40 bytes = 1,099,511,627,776 bytes (not one trillion bytes)
A full frame of digital television, sampled according to ITU-R 601, requires just under 1 Mbyte of storage (701 kbytes for 525 lines, 829 kbytes for 625 lines). HDTV frames are 4-to-5 times as large and digital film frames may be that much larger again. Byte aligned: A bit in a coded bit stream is byte-aligned if its position is a multiple of 8-bits from the first bit in the stream.
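The powers-of-two byte quantities in the "Byte" entry can be verified in a few lines:

```python
# Binary (power-of-two) storage units, as used in the Byte entry.
KILOBYTE = 2 ** 10
MEGABYTE = 2 ** 20
GIGABYTE = 2 ** 30
TERABYTE = 2 ** 40

assert KILOBYTE == 1_024
assert MEGABYTE == 1_048_576
assert GIGABYTE == 1_073_741_824
assert TERABYTE == 1_099_511_627_776
```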
-C-

C Language: A programming language developed in the late '70s. It became hugely popular due to the development of UNIX, which was written almost entirely in C. C was written by programmers for programmers, and it lets you write code in sloppy ways that other, more structured languages do not. When you think of programming in C, think of driving a DeLorean. It goes really fast, but it's a mess inside. C++: An extension of the C programming language that adds object-oriented concepts. CableCARD: A removable security card available from cable TV providers. It allows a TV with a compatible CableCARD slot to receive digital cable programming, including premium and HD channels, without using a separate set-top box. The FCC (Federal Communications Commission) began requiring service providers to make CableCARDs available as of July 1, 2004. Contact your local cable provider for details regarding the availability and costs of CableCARD-related services. CableLabs: Founded in 1988 by members of the cable television industry, Cable Television Laboratories, Inc. (CableLabs) is a non-profit research and development consortium that is dedicated to pursuing new cable telecommunications technologies and to helping its cable operator members integrate those technical advancements into their business objectives. http://www.cablelabs.com/ Cable Modem: A data modem that uses the bandwidth of a given cable system, which promises speeds of up to 80 times faster than an ISDN line or six times faster than a dedicated T1 line (the type of connection most large corporations use). Because cable modems provide Internet access over cable TV networks (which rely primarily on fiber optic or coaxial cable), they are much faster than modems that use phone lines. Bandwidths are typically up to 30 Mbps in the downstream direction.
Cache Memory: Generally a small chunk of fast memory that sits between either 1) a smaller, faster chunk of memory and a bigger, slower chunk of memory, or 2) a microprocessor and a bigger, slower chunk of memory. The purpose of cache memory is to provide a bridge from something that's comparatively very fast to something that's comparatively slow. Most microprocessors have built-in cache memory that holds some of the information from main memory. When the processor needs the information it takes it from the speedy cache instead of the slower main memory. Cache memory GREATLY increases the speed of a computer by storing data that is most often accessed. Camcorder: A camera used for recording video footage.

CAD: Computer Aided Design. Oh, you cad! This refers to the use of computers
to design things. There are specific CAD programs like AutoCAD that are generally resource-intensive, requiring fast processors, lots of memory, and a big, clear monitor for best results. CAD has enabled people to easily model, create, and walk through or view designs of 3D objects or floor plans from different angles on a computer without actually taking the time to make a physical mock-up. This is a huge time saver, and has revolutionized design in general. Category I - Terminal options that perform an asynchronous multiplex function. Examples of Category I transport NE interfaces are 1) low speed interfaces to ADMs, 2) digital radio terminals and fiber optic terminals (excluding terminals that function solely as digital repeaters or regenerators), and 3) low-speed interfaces to DCSs (e.g., the DS-1 interface to a DCS 3/1). Asynchronous DS-1, DS-2, DS-1C, DS-3 interfaces to SONET NE are considered Category I. Category II - Equipment interfaces whose behavior with respect to timing jitter is governed exclusively by input timing recovery circuitry. Examples of Category II transport NE interfaces are 1) digital terminals at a DLC system, 2) repeaters for metallic cables, 3) regenerators for fiber optic cables, and 4) high speed interfaces to DCSs (e.g., with respect to jitter behavior, the DS-3 interface to a DCS 3/1). STS-N and OC-N interfaces to a SONET NE are considered Category II. Call Priority: Priority assigned to each origination port in circuit-switched systems. This priority defines the order in which calls are reconnected. Call priority also defines which calls can or cannot be placed during a bandwidth reservation. See also bandwidth reservation. Caption: A textual representation of the audio information in a video program. Captions are usually intended for the hearing impaired, and therefore include additional text to identify the person speaking, offscreen sounds, and so on.
Capture: Also called Cap or Capping - To capture video or TV/Satellite signals to disk. This can include FireWire capture from DV cameras. Carrier: Electromagnetic wave or alternating current of a single frequency, suitable for modulation by another, data-bearing signal. Carrier-to-Noise Ratio: See C/N. CAV (Constant Angular Velocity): The disc (CD/DVD) is rotated at a constant speed, so the data transfer rate increases toward the outer edge of the disc. C-Band: The wavelength range between 1530 nm and 1565 nm used in some CWDM and DWDM applications in fiber optic transmission. CBR (Constant Bit Rate): CBR refers to the delivery of multimedia where there is dedicated bandwidth and the data can be delivered at a guaranteed constant bit rate. MPEG-1 and 2 are designed for CBR delivery. Constant bit rate cannot be
assured on the Internet or most Intranets. Protocols such as RSVP are being developed and deployed to provide bandwidth guarantees. CC (Carbon Copy): A method of sending a copy of an e-mail to someone, but implying that the person is not the direct recipient. For example, you send an e-mail with instructions to a group you manage, and CC it to your boss so that he or she knows what's going on but understands that the instructions in the mail were not meant for him or her to carry out. When you carbon copy someone in an e-mail, the recipients in the "To" field of the e-mail are aware of the names in the CC field. If you want to keep names secret from the To and CC recipients, you would use "BCC," or blind carbon copy. CCD (Charge Coupled Device): A device that stores samples of analog signals. Used in cameras and telecines as an optical scanning mechanism. Advantages include good sensitivity in low light and absence of burn-in and phosphor lag found in CRTs. The CCD basically reads the image by storing a group of charges based on the image to which it is exposed. These charges are analog charges, as opposed to simple digital on/off charges. Thus, you can grab degrees of light and color to transfer a visual image into a group of electrical charges, and then to your computer screen, video tape, or printer. CCIR (Comité Consultatif International des Radiocommunications): (International Radio Consultative Committee), an international standards committee no longer in operation and replaced by the International Telecommunications Union (ITU).

CD (Compact Disk): An optical disk storage medium that is designed to store audio, video, and computer data in a digital format. CDs have a capacity of 650 MB (megabytes) of data. The digital information in a standard audio CD is encoded in the PCM format. CDDI (Copper Distributed Data Interface): A high speed data interface--like FDDI but using copper. CDMA (Code Division Multiple Access): A 2G digital wireless technology that allows multiple calls to share a radio frequency 1.23MHz wide in the 800MHz-1.9GHz band without causing interference. This is accomplished by assigning each call a unique code and varying its signal by that code to allow only the caller and receiver with that code to communicate with each other. The original CDMA standard allows transmission of up to 14.4Kbps per channel, with up to 8 channels being able to be utilized at once for 115Kbps speeds. CDMA2000: The multiplexed version of the IMT-2000 standard developed by the ITU, and it's part of 3G wireless technology. It increases wireless data transmission speeds of the original CDMA standard to 144Kbps using a single channel and 2Mbps by utilizing 16 channels.
CD-R (Compact Disk Recordable): CD-R drives record up to 650 MB of data onto specialized CD-R media. The media is more expensive compared to the mass-produced CDs that software is generally distributed on, but cheap for the amount of data you can store on it. CD-R media is a WORM technology. You can Write to it Once and Read Many times, but you can't erase anything you've written. CD-ROM (Compact Disc-Read Only Memory): A disc used to store information such as computer software programs or copies of work created using a computer. CD-RW (CD-ReWriteable): A CD-ROM format that not only reads standard CD-ROMs, but can read and write CD-R disks, and also read and re-write CD-RW media. CD-RW media is more expensive than CD-R media, but it can be written to more than once in the same location, much like a hard drive or floppy disk on a computer. Standard CD-RW disks store up to 640 MB of data, similar to CD-ROM and CD-R disks. Cell: 1) Geographical area that is covered with DVB-T signals delivering one or more particular transport streams throughout the area by means of one or more transmitters. 2) The basic geographical unit of a cellular communications system. Service coverage of a given area is based on an interlocking network of cells, each with a radio base station (transmitter/receiver) at its center. The size of each cell is determined by the terrain and forecasted number of users. NOTE: The cell may in addition contain repeaters. Two neighbouring cells may be intersecting or fully overlapping. The cell_id that is used to uniquely identify a cell shall be unique within each original_network_id. For hand-over purposes it is more convenient if the transport streams associated with the cell cover exactly the same area, or only one transport stream per cell is used. Cellphone (Cellular Telephone): A mobile, wireless telephone that communicates with a local transmitter using a short-wave analog or digital transmission.
Cellular phone coverage is limited to areas where a cellular phone can adequately communicate with a nearby transmission tower. Cellular Telephone (Cellphone): A mobile, wireless telephone that communicates with a local transmitter using a short-wave analog or digital transmission. Cellular phone coverage is limited to areas where a cellular phone can adequately communicate with a nearby transmission tower. CEMA (Consumer Electronics Manufacturers Association): An industry group that represents manufacturers of consumer electronics products. CENELEC: European Committee for Electrotechnical Standardization. Center Wavelength: In a laser, the nominal value of the central operating wavelength. It is the wavelength defined by a peak mode measurement where the effective optical power resides (see illustration). In an LED, the average of the two wavelengths measured at the half amplitude points of the power spectrum.


CF (CompactFlash): A 50-pin connection standard used in some PDAs, digital cameras, hardware MP3 players, and other small hardware devices. It was initially designed to offer PCMCIA-ATA standard access to Flash memory in a smaller form factor than PCMCIA at 43 mm by 36 mm and thicknesses of 3.3 mm (Type I) or 5.5 mm (Type II), but the standard has been expanded to support other peripherals, such as CF modems, CF network cards, and other types of I/O devices with the CompactFlash+ standard. Channel: 1. Communication path. Multiple channels can be multiplexed over a single cable in certain environments. 2. In IBM terminology, the specific path between large computers (such as mainframes) and attached peripheral devices. 3. Specific frequency allocation and bandwidth. Downstream channels used for television in the world are 6 MHz, 7 MHz and 8 MHz wide. Channel Capacity: Maximum number of channels that a cable system can carry simultaneously. Channel Coding: Data encoding and error correction techniques used to protect the integrity of data that is being transported through a channel. Typically used in channels with high bit error rates such as terrestrial and satellite broadcast and videotape recording. Cochannel Interference (CCI): Spectral overlap of signals in adjacent systems (spatially multiplexed wireless, for example), causing degradation in each. Channel intersymbol interference: Interference between adjacent data symbols caused by channel frequency and phase distortion. Character: A single letter, number, or symbol. This term applies to data typed into a computer, shown on computer screens, or printed (or written) on paper. Characters Per Second (CPS): The number of text characters printed per second. This term was used more when daisy wheel and dot matrix printers were common.


Nowadays, printers are rated more by pages per minute (ppm), as the number of characters printed is irrelevant to modern printers that work on a page-by-page basis. Although inkjet printers still go line by line like dot matrix printers, they are much faster, and it's easier to compare page printing speeds. Chat (noun): In computer terms, this is a synchronous exchange of typed information that is viewed by two or more people over a computer network. (verb: to chat) To participate in the synchronous exchange of information with one or more people over a computer network. Normally, you type in a remark, send it, and see it appear onscreen. Likewise, others can see the information at the same time and submit their own remarks. Chat room: Any Web address, IRC channel, or other virtual space where two or more users can get together and exchange synchronous remarks. Most chat rooms have a particular theme, but a theme is not required. Checksum: A simple check value of a block of data, calculated by adding all the bytes in a block. It is easily fooled by typical errors in data transmission systems, so for most applications a more sophisticated check such as a CRC is preferred. Chip: The time it takes to transmit a bit or single symbol of a PN code. Chirp: In laser diodes, the shift of the laser's center wavelength during single pulse durations.
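The additive checksum described under "Checksum" above, and the kind of error that fools it, can be sketched in a few lines of Python (an illustrative sketch, not any particular protocol's checksum):

```python
def checksum(data: bytes) -> int:
    """Simple additive checksum: sum of all bytes, modulo 256."""
    return sum(data) % 256

original = b"\x01\x02\x03\x04"
swapped  = b"\x02\x01\x03\x04"   # two bytes transposed in transit

# Addition is order-independent, so the transposition goes undetected...
assert checksum(original) == checksum(swapped)

# ...which is exactly why a CRC, which is sensitive to byte order,
# is preferred in most transmission systems.
```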

Chroma Bug: The basic "Chroma Bug" manifests itself as streaky or spiky horizontal lines running through the chroma channel, most notably on diagonal edges. As mentioned above, this problem has been around for a long time. It's only just now being noticed largely because one needs a good high-resolution display, such as a front projector and a six foot projection screen, to really see the problem clearly. In addition, the increasingly common use of large progressive displays has really allowed people to get up close to the screen and see every artifact magnified in great detail. Problems that might have gone unnoticed on a 20 inch interlaced TV suddenly hit you in the face. With the advent of relatively high resolution media like DVD, people are starting to compare the video image to the original film image,


not to other forms of TV. And suddenly strange problems that people accepted in a TV picture, but would never be allowed on film, look out of place. The Chroma Bug is one of the most visible artifacts around, but because it's specific to MPEG and 4:2:0 encoding, there was nothing written about it until very recently. http://www.hometheaterhifi.com/volume_8 ... -2001.html Chromakeying: The process of overlaying one video signal over another, the areas of overlay being defined by a specific range of color, or chrominance, on the foreground signal. For this to work reliably, the chrominance must have sufficient resolution, or bandwidth. PAL or NTSC coding systems restrict chroma bandwidth and so are of very limited use for making a chromakey which, for many years, was restricted to using live, RGB camera feeds. An objective of the ITU-R 601 digital sampling standard was to allow high quality chromakeying in post production. The 4:2:2 sampling system allowed far greater bandwidth for chroma than PAL or NTSC and helped chromakeying, and the whole business of layering, to thrive in post production. High signal quality is still important and anything but very mild compression tends to result in keying errors appearing--especially at DCT block boundaries. Chromakeying techniques have continued to advance and use many refinements, to the point where totally convincing composites can be easily created. You can no longer "see the join" and it may no longer be possible to distinguish between what is real and what is keyed. Chroma Noise: Chroma noise affects areas of colour in the image. Instead of being clean, even areas of colour, chroma noise makes colours look grainy due to random noise being inserted into the colour signal. Chroma noise seems to particularly affect blue, although it can potentially be seen in any large expanse of a single colour. 
Chroma noise is pretty much exclusively an artifact of analogue video processing, and it is very rare to see it in modern, all-digital transfers. Increased MPEG macro-blocking artifacts are a potential side-effect of chroma noise, as the MPEG encoder attempts to encode the extra spurious random noise, leaving less bits for actual picture information. Chroma Subsampling: In digital image processing, chroma subsampling is the use of lower resolution for the colour (chroma) information in an image than for the brightness (intensity or luma) information. It is used when an analog component video or YUV signal is digitally sampled. Because the human eye is less sensitive to colour than intensity, the chroma components of an image need not be as well defined as the luma component, so many video systems sample the colour difference channels at a lower definition (i.e., sample frequency) than the brightness. This reduces the overall bandwidth of the video signal without apparent loss of picture quality. The missing values will be interpolated or repeated from the preceding sample for that channel. The subsampling in a video system is usually expressed as a three part ratio. The three terms of the ratio are: the number of brightness ("luminance" "luma" or Y) samples, followed by the number of samples of the two colour ("chroma") components: U/Cb then V/Cr, for each complete sample area. For quality

comparison, only the ratio between those values is important, so 4:4:4 could easily be called 1:1:1; however, traditionally the value for brightness is always 4, with the rest of the values scaled accordingly.

Sometimes, four-part relations are written, like 4:2:2:4. In these cases, the fourth number means the sampling frequency ratio of a key channel. In virtually all cases, that number will be 4, since high quality is very desirable in keying applications. NOTE: The mapping examples given are only theoretical and for illustration. The bitstreams of real-life implementations will probably differ. Chrominance: The color component of a video signal that includes information about hue and saturation. Chrominance Formats: In addition to the 4:2:0 format supported in ISO/IEC 11172-2, ITU-T H.262 supports 4:2:2 and 4:4:4 chrominance formats.
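The idea behind 4:2:0 sampling described under "Chroma Subsampling" can be sketched by averaging each 2x2 block of a chroma plane, giving one chroma sample per four luma samples (illustrative only; real encoders use proper filtering and sample siting):

```python
def subsample_420(chroma):
    """Reduce a full-resolution (4:4:4) chroma plane to 4:2:0 resolution
    by averaging each 2x2 block of samples."""
    h, w = len(chroma), len(chroma[0])
    return [
        [(chroma[y][x] + chroma[y][x + 1] +
          chroma[y + 1][x] + chroma[y + 1][x + 1]) // 4
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

plane = [[16, 16, 200, 200],
         [16, 16, 200, 200]]   # a tiny 2x4 chroma plane
print(subsample_420(plane))    # [[16, 200]] -- half resolution both ways
```

A decoder interpolates (or repeats) these values back up to full resolution, which is why the eye's lower colour acuity makes the loss largely invisible.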

CI (Common Interface): A DVB specification for an expansion socket on a digital television receiver using a PCMCIA (PC-Card) socket similar to that found on most notebook PCs. Cards can be used for a variety of applications such as memory expansion or adding a CA system complying with the Simulcrypt or Multicrypt standards (see below). CIF (Common Image Format): CIF is used to trade content worldwide. 1) For computers the size is 352x240 pixels. 2) For digital high definition, ratified by the International Telecommunications Union (ITU) in June 1999, the 1920x1080 digital sampling structure is a world format. All supporting technical parameters relating to scanning, colorimetry, transfer characteristics, etc. are universal. The CIF can be used with a variety of picture capture rates: 60p, 50p, 30p, 25p, 24p, as well as 60i and 50i. The standard is identified as ITU-R BT 709-3. CICS (Customer Information Control System): Not to be confused with CISC, CICS is online transaction processing application server software originally written by IBM for mainframes for dealing with customer information and transaction processing in the enterprise. Now versions of CICS are available for UNIX and x86 platforms as well.


CIO (Chief Information Officer): An executive title, usually at a medium or large-sized company. The person that bears this title is in charge of the flow of information in and out of the company, and also guides the technology used in information systems. The CIO's focus, as opposed to the CTO's (Chief Technology Officer), is more on internal technology use and the IT department. A CIO is more concerned with day-to-day operations and uptime. CIOs are typically more managerial than CTOs. Cinepak: A high-quality, medium-bandwidth compression that is not real-time but can play back in software. Its 24-bit format produces high-quality video at 320 x 240 resolution and 15 frames per second at a 150 Kbps data rate. A common CD-ROM solution developed a number of years ago; not a competitor to more modern techniques. Circuit: Most commonly, this describes an electrical device with a defined path of electrical current that can receive input voltages in a 0 range and a 1 range, and responds with an output voltage that is also in a 0 or 1 range based on the logic inside of the circuit. If the circuit has a 0 range of 0-1 volts and a 1 range of 4-5 volts and receives a 0.5 volt input, it will act as if it has received a "0" input. Ranges are necessary because voltages are never exact. When thinking about the logic behind a circuit, it is easiest to think of the inputs and outputs simply as a 0 or 1 instead of a range of voltages. See also integrated circuit. Circuit switching: The basis of telephone call handling, with a circuit connection being set up between caller and called party. This connection is held open for the duration of the call, even when no information (voice, data, images or video) is being transmitted. The alternative is packet switching. Circular Polarization: In an antenna, where the tip of the field vector, as viewed in the direction of propagation, rotates either clockwise (right hand) or counterclockwise (left hand). 
Cladding: Material that surrounds the core of an optical fiber. Its lower index of refraction, compared to that of the core, causes the transmitted light to travel down the core.


Clear Channel: Channel that uses out-of-band signaling (as opposed to in-band signaling), so the channel's entire bit rate is available. Clear Sky Margin:


Under clear sky conditions, the received signal level is C(min)+M in dB, where C(min) is the signal power sufficient to guarantee, in the service area, quasi-error-free DVB decoding under clear sky conditions, and M is the maximum rain attenuation (dB) which is expected to take place in the whole service area for a given percentage of time (e.g. 99.6%). Click (v. to click): The action of pressing the button of a mouse or other input device. The press of the button when the cursor or pointer is in certain locations will cause the operating system or program you are running to react accordingly. Client: A general term that can mean a few things. In the client/server architecture, a client consists of any machine connected to a network that exchanges data with a server, and can also refer to the person using the machine. In addition, the term client refers to any software that is used on a client machine to exchange data with software running on a server, such as an e-mail client. When you send and receive e-mail you use an e-mail client that connects to an e-mail server. Client/Server: Client/server technology came about when computers began to cost less. Mainframes are very expensive, and didn't give users much personal freedom. The client/server model promised to change that scenario, and it's much more popular today. Basically, a client computer with its own memory and hard drive communicates with a server whenever it needs data from the server. The client can run by itself without the server and communicate with different servers as it needs to. Click and Drag: A computer term for the user operation of clicking on an item and dragging it to a new location. Cliff Effect: Digital receivers respond differently than analog receivers to weaker signals. In situations where an analog receiver would get a snowy picture (background noise overcoming the signal to some degree), a digital receiver could fail altogether. 
The receiver simply doesn't get enough reliable digital information to re-construct the transmitted picture. An RF (radio frequency) characteristic that causes DTV reception to deteriorate dramatically with a small change in signal reception: the bit error rate increases to the point where video cannot be obtained by the receiver, and the picture and audio are lost entirely.


[Figure: picture quality (good to rotten) plotted against distance from the transmitter (close to far). The analog (PAL) curve degrades gradually toward the edge of the service area, while the two digital curves hold steady HDTV/SDTV quality and then fail abruptly at the cliff point.]

Clip: 1. In keying, the trigger point or range of a key source signal at which the key or insert takes place. 2. The control that sets this action. To produce a key signal from a video signal, a clip control on the keyer control panel is used to set a threshold level to which the video signal is compared. 3. In digital picture manipulators, a menu selection that blanks portions of a manipulated image that leave one side of the screen and "wraps" around to enter the other side of the screen. 4. In desktop editing, a pointer to a piece of digitized video or audio that serves as source material for editing. Clip Sheet: A nonlinear editing term for the location of individual audio/video clips (or scenes). Also known as a clip bin. Clock Cycle: Think of a clock cycle as one tick of the second hand (but generally at a much higher speed). Computer clocks run voltage through a tiny crystal that oscillates at a predictable speed to give a meaningful timing method to the computer. One clock cycle doesn't necessarily mean that the processor does one operation. Today's high-end processors often complete more than one operation per clock cycle, and other times, in the worst cases, it will take several clock cycles to complete one operation. Clock Frequency: The master frequency of periodic pulses that are used to synchronize the operation of equipment.

Clock Jitter: Undesirable random changes in clock phase. Clock Phase Deviation: See: Clock skew. Clock Recovery: The reconstruction of timing information from digital data. Clock Skew: A fixed deviation from proper clock phase that commonly appears in D1 digital video equipment. Some digital distribution amplifiers handle improperly phased clocks by reclocking the output to fall within D1 specifications. Clone: An exact copy, indistinguishable from the original. As in copying recorded material, for example a copy of a non-compressed recording to another non-compressed recording. If attempting to clone compressed material, care must be taken not to decompress it as part of the process or the result will not be a clone. Closed Captions (CC): Text stream included in a broadcast signal that provides a narrative description of dialogue, action, sounds, and other elements of the picture. Most often used by the hearing impaired and in environments where audio is undesirable (such as in restaurants). CCs, or closed subtitles, are provided with a program and, when decoded at the viewer's choice on appropriately fitted equipment, display the text of the audio superimposed over the picture. Multiple subtitles in different languages may be provided and/or only for the hearing impaired (where notifications of sound effects are included). Such services are provided in different standards for different media such as DVD and analog and digital broadcast. In the USA, NTSC analog TV services provide CCs via a low-speed VBI data system known as Line 21, as specified in EIA-608. Unfortunately this sometimes causes confusion with international suppliers as, coincidentally, in Australian PAL TV services the closed caption data are also on VBI line 21 (and also 334 on field 2), but these use the higher-speed EBU Teletext data system. These teletext closed captions operate independently of any other teletext service (parallel mode) that may be carried on earlier lines in the VBI. 
The Australian convention is to assign the captions to Teletext magazine 8, page 01, i.e. page 801, whereas in NZ, Europe and the UK, the teletext closed captions are normally on page 888. DVB digital services, by converting the CCs to packetised data, may also carry teletext for closed captions (see EN 300 472) or DVB's Subtitle system, where bit-mapped images of characters (such as Chinese characters) may be transmitted (see EN 300 743). Australian CC operations are covered in C.TVA (FACTS) operating practice OP42. Closed GOP: When encoding MPEG video, a closed GOP is one that uses no reference pictures from the previous GOP at the current GOP boundary. For example, the GOP is closed when it starts with an I frame and subsequent B frames do not rely on I or P frames from the previous GOP. Cluster: A group of computers connected over a network that are running software

that allows them each to work on individual pieces of one greater task. Clustering: Clustering is a technology using two or more computers that function together as a single entity for fault tolerance and load balancing. This can increase reliability and uptime in a client/server environment. One computer will sense when another computer is failing or getting bogged down and will take over full operation or just some of its tasks, depending on whether it's a complete fail-over design or just load balancing. CMOS (Complementary Metal-Oxide Semiconductor): A method of constructing transistors which produces microchips that run with relatively low power consumption compared to other methods. Most of the chips found in a modern PC are built using CMOS technology. C/N (also CNR, Carrier-to-Noise Ratio): A measurement of the received carrier power relative to the power of background noise at the receiver input. In effect, a measurement of the quality of a received radio (electromagnetic) transmission, dependent on the bandwidth, transmission power, path loss and propagation conditions between transmitter and receiver. C/N can be measured under different channel conditions for digital terrestrial TV reception, known as: Gaussian - ideal apart from the addition of white Gaussian noise. Ricean - a channel with a prominent direct path between transmitter and receiver, but with a number of echoes present. Rayleigh - a time-varying, frequency-selective fading model for an RF channel; it could be a channel with no direct path between transmitter and receiver. C/N Threshold: The C/N at the threshold of visibility (TOV) for random noise. Coating: The material surrounding the cladding of a fiber. Generally a soft plastic material that protects the fiber from damage.
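The C/N ratio defined above is simply the ratio of carrier power to noise power, normally quoted in decibels. A minimal sketch (hypothetical function name, powers assumed in the same units):

```python
import math

def cnr_db(carrier_power: float, noise_power: float) -> float:
    """Carrier-to-noise ratio in decibels: 10 * log10(C/N)."""
    return 10 * math.log10(carrier_power / noise_power)

# A carrier 1000x stronger than the noise floor gives a 30 dB C/N.
print(cnr_db(1e-6, 1e-9))   # 30.0
```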

Coaxial Cable: 1) A cable consisting of a center conductor surrounded by an insulating material and a concentric outer conductor and optional protective covering. 2) A cable consisting of multiple tubes under a single protective sheath. This type of cable is typically used for CATV, wideband, video, or RF applications.


COBOL (COmmon Business-Oriented Language): A programming language developed in the '60s by several computer companies and the U.S. Department of Defense. COBOL is still used today for programming business applications, and COBOL programmers were a major source of the Year 2000 headache. In fact, many of them came out of retirement to fix the mess they made, whether voluntarily or by direction, to save a couple of valuable bits of data back when bits cost big money. Codec (Coder-decoder): Short for "compression/decompression", a codec is an algorithm or specialized computer program that encodes or reduces the number of bytes consumed by large files and programs. Files encoded with a specific codec require the same codec for decoding. Some codecs you may encounter in computer video production are DivX, MPEG-1, MPEG-2, Xvid, DV type 1 and type 2 for video, and MP3, AC-3, AAC for audio. Code Generator: A code generator is part of a compiler. It takes intermediate code and translates it into the final workable code in the target language. Coded Representation: A data element as represented in its encoded form. Co-channel Interference: The interference from a signal on the same channel. Coding: Representing each video signal level as a number, usually in binary form. Coding Violation (CV): A transmission error detected by the difference between the transmitted line code and that expected at the receive end by the logical coding rules. COFDM (Coded Orthogonal Frequency Division Multiplexing): Orthogonal Frequency Division Multiplexing (OFDM) is a modulated multi-carrier transmission technique, which splits the available bandwidth into many narrow sub-band channels (typically 2000-8000). Each carrier is modulated by a low-rate data stream. The modulation scheme can vary from a simple QPSK to a more complex 64-QAM (or other) depending on the required binary rate and the expected transmission robustness. 
For those familiar with Frequency Division Multiple Access (FDMA), OFDM is similar. However, OFDM uses the spectrum much more efficiently by providing a closer packing of the sub-band channels. To achieve this, all the carriers are made orthogonal to one another. By providing for orthogonality of carriers, each carrier has a whole number of cycles over a given symbol period. By doing this, the occupied bandwidth of each carrier has a null at the center frequency of each of the other carriers in the system. This results in minimal interference between the carriers, allowing them to be spaced as close together as is possible. Each individual carrier of the OFDM signal has a narrow bandwidth (for example 1 kHz), and the resulting symbol rate is low. This results in the signal having high immunity to multi-path delay spread, as the delay spread must be very long to cause significant inter-symbol interference (> 500 microseconds).


Coded Orthogonal Frequency Division Multiplexing (COFDM) has the same principle as OFDM except that Forward Error Correction (FEC) is applied to the signal prior to transmission. This overcomes errors in the transmission caused by lost carriers from multiple propagation effects, including frequency-selective fading and channel noise. COFDM can transmit many streams of data simultaneously, each one occupying only a small portion of the total available bandwidth. This approach can have many advantages with proper implementation: 1. Because the bandwidth occupied by each sequence of symbols is relatively small, its duration in time is longer. As a result, the immunity against multi-path echoes can be higher. 2. Frequency-selective fades are spread over many carriers. Instead of completely destroying a number of adjacent symbols, many symbols are instead distorted only slightly. 3. By dividing the available bandwidth into multiple narrow sub-bands, the frequency response over each of the individual sub-band channels is essentially flat even with steep multi-path induced fades. This can mean easier equalization requirements. Coherent Communications: In fiber optics, a communication system where the output of a local laser oscillator is mixed optically with a received signal, and the difference frequency is detected and amplified.
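As a concrete illustration of the narrow sub-bands described under COFDM above, the carrier spacing and useful symbol duration of the DVB-T 8K mode in an 8 MHz channel can be computed from the elementary period T = 7/64 microseconds (a sketch using the DVB-T system parameters; guard intervals are ignored):

```python
T = 7 / 64e6       # DVB-T elementary period for an 8 MHz channel, in seconds
fft_size = 8192    # "8K" mode

useful_symbol = fft_size * T          # duration of the useful symbol part
carrier_spacing = 1 / useful_symbol   # orthogonal carriers sit 1/Tu apart

print(round(useful_symbol * 1e6))     # 896 -- microseconds per useful symbol
print(round(carrier_spacing))         # 1116 -- Hz between adjacent carriers
```

The ~1 kHz spacing and ~896 microsecond symbol are what give the mode its tolerance of long multi-path echoes.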

Collision: The result of two devices trying to use a shared transmission medium simultaneously. The interference ruins both signals, requiring both devices to retransmit the data lost due to the collision. Co-location: In transmission, two or more transmitters located on the same antenna mast. Color Depth: The number of bits used to represent the color of a pixel, and thus how many colors can be displayed. Color depth is typically 8-, 16-, or 24-bit; 8-bit gives 256 colors. True color requires at least 24-bit color (about 16.7 million colors). Colour Look-Up Table (CLUT): For DTV on-screen displays, the CLUT is a look-up table of colour values for translating an object's pseudo-colours into screen display colours. It is a way of simplifying the amount of data required to display an object, but has the limitation that only 4, 16 or 256 colour values are allowed, somewhat like early PC EGA graphics systems. Not all decoders may support a CLUT with 256 entries. A palette of four colours would be enough for graphics that are basically monochrome, like subtitles, while a palette of 16 colours allows for cartoon-like coloured objects.
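The colour counts quoted under "Color Depth" and the CLUT palette sizes all follow from 2 raised to the number of bits:

```python
def displayable_colors(bits_per_pixel: int) -> int:
    """Number of distinct colours representable with a given bit depth."""
    return 2 ** bits_per_pixel

print(displayable_colors(2))    # 4        (minimal CLUT, e.g. subtitles)
print(displayable_colors(8))    # 256      (typical CLUT / palette size)
print(displayable_colors(16))   # 65536    ("high color")
print(displayable_colors(24))   # 16777216 (~16.7 million, "true color")
```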


Color Space: The color range between specified references. Typically references are quoted in television: RGB; Y, R-Y, B-Y; YIQ; YUV; and Hue, Saturation and Luminance (HSL). In print, Cyan, Magenta, Yellow and Black (CMYK) are used. Moving pictures between these is possible but requires careful attention to the accuracy of the processing involved. Operating across the media--print, film and TV, as well as between computers and TV equipment--will require conversions in color space. Color Space Conversion: The translation of color values from one color space to another. Since different media types, like video and computer graphics, use different color spaces, color space conversion is often performed on the fly by graphics hardware. Combiner: In digital picture manipulators, a device that controls the way in which two or more channels work together. Under software control, it determines the priority of the channels (which picture appears in front and which in back) and the types of transitions that can take place between them. Combo Drive: A DVD-ROM drive capable of reading and writing CD-R and CD-RW media. May also refer to a DVD-R or DVD-RW or DVD+RW drive with the same capability. Comb Filter: A comb filter's task is to remove residual chrominance (color) information from the luminance (brightness) signal. Comb filtering enhances fine detail, cleans up image outlines, and eliminates most extraneous colors. Comb filters are not required and not used with S-video or component video connections since those connections carry the chrominance and luminance information separately. There are 4 types of comb filters found in today's TVs: Glass - may also be referred to as an "analog" comb filter. 2-Line Digital - compares consecutive scanning lines within one field of video and makes adjustments to reduce cross-color interference. 3-Line Digital - compares 3 scanning lines within a field of video. 
By comparing more picture information, a 3-line filter further reduces color bleeding and dot crawl. 3D Digital - not only analyzes consecutive scanning lines within a field, but also analyzes the preceding and following fields. Results in improved color purity and a more stable video image, and nearly eliminates dot crawl and color bleeding. Also called 3D Y/C.
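The RGB to Y, R-Y, B-Y translation mentioned under "Color Space" uses the standard ITU-R BT.601 luminance weights. A simplified sketch, ignoring quantization and range scaling:

```python
def rgb_to_ydiff(r: float, g: float, b: float):
    """Convert normalized RGB (0..1) to luminance plus colour differences
    using the ITU-R BT.601 luma weights."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y   # (Y, R-Y, B-Y)

# Pure white carries no colour-difference information:
print(rgb_to_ydiff(1.0, 1.0, 1.0))   # approximately (1.0, 0.0, 0.0)
```

Note that the weights sum to 1, so greys map to zero colour difference; this is what lets a monochrome receiver use Y alone.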


Command Prompt: Any blinking cursor waiting, or prompting, for user input. In DOS the C: prompt greets you on most systems--this is a type of command prompt. As well, if you use any version of Windows you can get to a DOS-looking window that allows you to type in commands. UNIX can also greet you with a command prompt. For novice users a command prompt can be confusing, as it's unclear what to do next; but for experts a command prompt is a necessity at times. Common Interface (CI): See CI. Companding: Contraction derived from the opposite processes of compression and expansion. Part of the PCM process whereby analog signal values are logarithmically rounded to discrete scale-step values on a nonlinear scale. The decimal step number is then coded in its binary equivalent prior to transmission. The process is reversed at the receiving terminal using the same nonlinear scale. Compare with compression and expansion. See also a-law and mu-law. Compiler: A compiler translates a computer program from one language into another, catching any errors in syntax along the way. Most commonly, you translate some high-level language, such as C++ or COBOL, into optimized machine language. This form of compilation puts your programs into a form that your computer (specifically your microprocessor) can understand without any translation, thus speeding them up greatly over programs that must be interpreted as they are run. Component (Elementary Stream): One or more entities which together make up an event, e.g. 
video, audio, teletext. Component (video): The normal interpretation of a component video signal is one in which the luminance and chrominance remain as separate components, such as analog components in MII and Betacam VTRs, or digital components Y, B-Y, R-Y (Y, Cr, Cb) in ITU-R 601. RGB is also a component signal. Component video signals retain maximum luminance and chrominance bandwidth. Component (HD) Video Connection: The output of a high definition video device (such as an HDTV set-top box), or the input of an HDTV receiver or monitor, comprised of (3) primary-color signals: red, green, and blue - each on a separate wire. The combination of these three signals conveys all necessary picture information. In consumer video products, these (3) separate component signals refer to: Luminance (Y) - for light; and two Chroma (Color) signals (Pb - blue) and


(Pr - red). HDTV-Component cables and connections are commonly labeled: Y/Pb/Pr. Component Digital: A digital representation of a component analog signal set, most often Y, B-Y, R-Y. The encoding parameters are specified by ITU-R BT.601-2 (CCIR 601). The parallel interface is specified by ITU-R BT.656 (CCIR 656) and SMPTE 125M. Component Digital Post Production: A method of post production that records and processes video completely in the component digital domain. Analog sources are converted only once to the component digital format and then remain in that format throughout the post production process. Composite (video): Luminance and chrominance are combined along with the timing reference "sync" information using one of the coding standards--NTSC, PAL or SECAM--to make composite video. The process, which is an analog form of video compression, restricts the bandwidths (image detail) of the components. In the composite result color is literally added to the monochrome (luminance) information using a visually acceptable technique. As our eyes have far more resolving power for luminance than for color, the color sharpness (bandwidth) of the coded signal is reduced to far below that of the luminance. This provides a good solution for transmission but it becomes difficult, if not impossible, to accurately reverse the process (decode) into pure luminance and chrominance, which limits its use in post production. Composite Digital: A digitally encoded video signal, such as NTSC or PAL video, that includes horizontal and vertical synchronizing information. Compress: A digital picture manipulator effect where the picture is squeezed (made proportionally smaller). Compressed Serial Digital Interface (CSDI): A way of compressing digital video for use on SDI-based equipment proposed by Panasonic. Now incorporated into Serial digital transport interface. See: Serial digital transport interface. 
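The logarithmic rounding described under "Companding" above can be sketched with the continuous mu-law curve used in North American and Japanese PCM telephony (mu = 255). This is a sketch of the curve only; real codecs use a segmented integer approximation of it:

```python
import math

MU = 255  # mu-law parameter for North American / Japanese PCM telephony

def mu_law_compress(x: float) -> float:
    """Map a normalized sample (-1..1) onto the nonlinear mu-law scale."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

# Small signals get most of the coding range: a 10% amplitude input
# already occupies ~59% of the scale, which lowers quantization noise
# on quiet passages at the expense of loud ones.
print(round(mu_law_compress(0.1), 2))   # 0.59
print(mu_law_compress(1.0))             # 1.0
```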
Compression: Reduction of the size of digital data files by removing redundant information (lossless) or removing non-critical data (lossy). Pictures are analyzed for redundancy and repetition so that unnecessary data can be discarded. The techniques were primarily developed for digital transmission but have been adopted as a means of handling digital video in computers and reducing the storage demands for digital VTRs. Compression can be at either a set rate or a variable rate. Also known as Bit Rate Reduction (BRR). To reduce in size. Various methods of encoding digital information can be used to reduce the size of an original digital recording by eliminating redundant information or information that is viewed as unnecessary or not critical. PCM is not a


compressed digital format. Common encoding methods and their approximate compression ratios compared to PCM are Dolby Digital (12:1) and DTS (3:1). Compression Artifacts: Compacting of a digital signal, particularly when a high compression ratio is used, may result in small errors when the signal is decompressed. These errors are known as "artifacts," or unwanted defects. The artifacts may resemble noise (or edge "busyness") or may cause parts of the picture, particularly fast moving portions, to be displayed with the movement distorted or missing. Compressionist: One who controls the compression process to produce results better than would normally be expected from an automated system. Compression Ratio: The ratio of the data in the non-compressed digital video signal to the compressed version. Modern compression techniques start with the ITU-R 601 component digital television signal, so the amount of data in the non-compressed video is well defined: about 76 Gbytes/hour for the 525/60 standard and 75 Gbytes/hour for 625/50. The compression ratio should not be used as the only method to assess the quality of a compressed signal. For a given technique, greater compression can be expected to result in worse quality, but different techniques give widely differing quality of results for the same compression ratio. The only sure method of judgment is to make a very close inspection of the resulting pictures. Concatenation: 1) Linking together (of systems). Block and convolutional codes are frequently combined in concatenated coding schemes in which the convolutional code does most of the work and the block code (usually a Reed-Solomon code) "mops up" any errors made by the convolutional decoder. This has been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. 2) In fiber optics, the process of connecting pieces of fiber together.
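The 76/75 Gbytes-per-hour figures quoted for the non-compressed ITU-R 601 signal can be reproduced with a short calculation, assuming they count the active picture only (8-bit 4:2:2, i.e. an average of 2 bytes per pixel); the frame sizes and rates below are the standard active-picture values, not taken from this entry:

```python
# Sketch: reproduce the ~76/75 GB-per-hour figures for non-compressed
# ITU-R 601 video, assuming active picture only at 8-bit 4:2:2
# (2 bytes per pixel on average: 1 byte luma + 1 byte shared chroma).
def active_video_gbytes_per_hour(width, height, fps, bytes_per_pixel=2):
    return width * height * fps * bytes_per_pixel * 3600 / 1e9

gb_525 = active_video_gbytes_per_hour(720, 486, 30)   # 525/60 system
gb_625 = active_video_gbytes_per_hour(720, 576, 25)   # 625/50 system
print(round(gb_525), round(gb_625))  # → 76 75
```

The two systems come out almost identical because the extra lines of 625/50 are offset by its lower frame rate.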
Concatenated Coding: A concatenated code consists of two codes, an inner code and an outer code, connected serially as shown in the figure above. The inner code is a binary block or convolutional code and the outer code is typically a Reed-Solomon code. In concatenated codes the performance of the inner code has a major impact on the overall performance of the code, which is why convolutional codes with soft-decision decoding using the Viterbi algorithm are commonly employed for the inner code.


Conditional Access (CA): Conditional access refers to methods that scramble or encrypt program video and audio, or private data, so that it may be received only by authorized receivers. A number of different private schemes using authorisation SmartCards have been used by Pay-TV operators working primarily via satellite. The arrangements for terrestrial TV services generally require free-to-air services to be receivable without CA authorization, but it is possible to also include additional encrypted services in the same transmission. In Europe, a Common Scrambling Algorithm was developed to be used in conjunction with the standard MPEG data transport and selection mechanisms. Thus it is possible in a single DVB transmission to have a scrambled program that is accessible to viewers subscribing to different Pay-TV operators who have agreed to share the transmission channel. The data stream can carry multiple authorisation messages, generated by a number of different CA systems, which enable access control of the same scrambled broadcast. This Simulcrypt technique allows both the delivery of one program to a number of different decoder populations that contain different CA systems, and also the transition between different CA systems in any decoder population, for example, to recover from piracy. The price to pay is that extra data bandwidth must be allocated to carry the subscriber-enabling messages (ECMs & EMMs) for each CA system; this will of course vary with the number of subscribers. A Multicrypt option is also available, facilitated by the Common Interface (DVB-CI) specification proposed for standardization by CENELEC (European Committee for Electrotechnical Standardization). The CA module may operate in conjunction with a smartcard or with a PCMCIA card as popularly used with laptop PCs.


Connectionless: Term used to describe data transfer without the existence of a virtual circuit. Compare with connection-oriented. See also virtual circuit. Connection-Oriented: Term used to describe data transfer that requires the establishment of a virtual circuit. See also connectionless and virtual circuit. Console: DTE through which commands are entered into a host. Constraint Length: The number of delay elements + 1 in a convolutional coder. Content Protection (CP): Cryptographic and design techniques used to limit how data flows within a receiving device and between devices. This is generally used to restrict copying of copyright-protected material. Contouring: A digital video picture defect caused by quantizing at too coarse a level. Contribution Quality: The level of quality of a television signal from the network to its affiliates. For digital television this is approximately 45 Mbps. Contrast Ratio: Measures the difference between the brightest whites and the darkest blacks a display can show. The higher the contrast ratio, the greater the ability of a display to show subtle color details and tolerate ambient room light. Contrast ratio is an important spec for all types of TV display, but especially for front projectors. Constructive Interference: Any interference that increases the amplitude of the resultant signal. For example, when the waveforms are in phase, they can create a resultant wave equal to the sum of multiple light waves.
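The contouring defect described above can be sketched numerically: quantizing a smooth brightness ramp at too coarse a level collapses it into a small number of visible steps (false contours). The 3-bit depth below is an illustrative choice, not one taken from the entry:

```python
# Sketch of contouring: a smooth 0..1 ramp quantized to only 3 bits
# (8 levels) turns into visible staircase steps (false contours).
ramp = [i / 255 for i in range(0, 256, 16)]          # smooth ramp samples
coarse = [round(v * 7) / 7 for v in ramp]            # 3-bit quantization
print(len(set(ramp)), "levels reduced to", len(set(coarse)))
```

With only eight reconstruction levels, neighboring pixels that differed slightly now share a level, which the eye perceives as banding in smooth gradients.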


Convergence: In the context of mobile communications, convergence means many things. There is convergence of industry sectors, including telecommunications, information, media and entertainment; convergence of technologies, for example, of fixed and mobile communications and of telecommunications and computing; and there is convergence between mobile communications standards themselves. Convert: To change from one form into another. In video, obviously, it is to change one form of video into another; for example, many people like to convert DivX to MPEG, QuickTime to AVI, etc. Conversion to a final format is called encoding; an example is AVI to VCD MPEG-1. Convolution Code: In digital broadcasting or telecommunication, a convolution code is a type of error-correcting code in which (a) each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n >= m), and (b) the transformation is a function of the last k information symbols, where k is the constraint length of the code. Several algorithms exist for decoding convolution codes. For relatively small values of k, the Viterbi algorithm is universally used as it provides maximum likelihood performance and is highly parallelizable. Viterbi decoders are thus easy to implement in VLSI hardware and in software on CPUs with SIMD instruction sets. Cookie: Websites send these to your browser so that the site is customized based on your previous actions on that site. For example, a website such as Geek.com could send a cookie to your browser to keep your user ID available so that you don't have to log in on your next visit. If you want to view your cookies, look for a file called "cookie.txt" on your hard drive, or check out the temporary files stored by your browser.
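The Convolution Code entry above can be made concrete with a minimal encoder. This is a sketch, not a standardized code from the glossary: it uses rate 1/2 (m = 1, n = 2), constraint length k = 3 (two delay elements + 1), and the commonly cited generator polynomials 7 and 5 in octal:

```python
# Minimal rate-1/2 convolutional encoder, constraint length k = 3.
# Each input bit produces n = 2 output bits, each a parity (XOR) of
# taps on the current bit plus the last k-1 bits in the shift register.
def conv_encode(bits, g=(0b111, 0b101)):   # generators 7, 5 (octal)
    state = 0                              # the k-1 = 2 delayed bits
    out = []
    for b in bits:
        reg = (b << 2) | state             # current bit + delayed bits
        for poly in g:
            out.append(bin(reg & poly).count("1") % 2)  # tap parity
        state = reg >> 1                   # advance the shift register
    return out

print(conv_encode([1, 0, 1, 1]))  # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Because each output depends on the last k inputs, a Viterbi decoder can later track the 2^(k-1) = 4 possible register states to recover the most likely input sequence.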
Copyright Protection Technical Working Group (CPTWG): An ad-hoc cross-industry body organized to evaluate content protection technologies, in particular technology that can protect digital content transmitted across the i.LINK (IEEE 1394) digital interface. http://www.cptwg.org/

Core: In fiber optic cable, the light-conducting central portion of an optical fiber, composed of material with a higher index of refraction than the cladding. The portion of the fiber that transmits light.

Coring: 1) For noise reduction: Coring is used to remove fine detail information that does not contribute significantly to the detail of the picture but which adds noise to the image. Imagine the detail information viewed on a scope. About the baseline you'd see primarily the noise information, with the detail extending beyond that. Now imagine that you sliced (or cored) this signal so that only the information above the noise on the baseline came through. You would be left with most of the detail information intact but with much of the noise information removed. The coring adjustment determines how far from the baseline the detail information is removed. You want to use just enough coring to reduce the noise in the picture but not so much that the fine detail in the image is affected. 2) For black level processing: Coring is the process where pixels are evaluated against a threshold value and set to pure black if below it. This is a "poor man's" noise reduction in the black areas of an image and is implemented in many digital video cameras and capture chips (like the BT848 family). http://neuron2.net/coring.html Coarse Wavelength-division Multiplexing (CWDM): CWDM allows eight or fewer channels to be stacked in the 1550 nm region of optical fiber, the C-Band.
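The black-level coring described in sense 2) above reduces to a single comparison per pixel; a minimal sketch (the threshold value here is an arbitrary illustration):

```python
# Sketch of black-level coring: any pixel value below the threshold
# is forced to pure black (0); everything else passes unchanged.
def core_black(pixels, threshold):
    return [0 if p < threshold else p for p in pixels]

print(core_black([2, 5, 16, 40, 7, 235], threshold=8))
# → [0, 0, 16, 40, 0, 235]
```

Setting the threshold too high would start clipping genuine shadow detail, which is the same trade-off the noise-reduction sense of coring describes.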

Co-siting: Relates to SMPTE 125M component digital video, in which the luminance component (Y) is sampled four times for every two samples of the two chrominance components (Cb and Cr). Co-siting refers to delaying transmission of


the Cr component to occur at the same time as the second sample of luminance data. This produces a sampling order as follows: Cb1/Y1/Cr1, /Y2/, Cb2/Y3/Cr2, /Y4/, and so on: 4:2:2 with the chroma samples co-sited with the first, third, and fifth luminance samples. Co-siting reduces the required bus width from 30 bits to 20 bits. Counter-Rotating: An arrangement whereby two signal paths, one in each direction, exist in a ring topology.

Coupler (Optical): An optical device that combines or splits power from optical fibers.

Coverage Area: Coverage area is the area within an NTSC station's Grade B contour without regard to interference from other television stations which may be present. For an ATV station, coverage area is the area contained within the station's noise-limited contour without regard to interference which may be present.

Coupling Ratio/Loss (CR, CL): The ratio/loss of optical power from one output port to the total output power, expressed as a percent (CR) or in decibels (CL). For a 1 x 2 WDM or coupler with output powers O1 and O2, and Oi representing either output power: CR(%) = (Oi/(O1 + O2)) x 100% and CL(dB) = -10 log10(Oi/(O1 + O2)). Customer Premises Equipment (CPE): Terminal, associated equipment, and inside wiring located at a subscriber's premises and connected with a carrier's communication channel(s) at the demarcation point (demarc), a point established


in a building or complex to separate customer equipment from telephone company equipment.
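The coupling ratio and loss formulas given under "Coupling Ratio/Loss" above can be checked with a worked example; the 50/50 splitter values are an illustration, not data from the entry:

```python
# Worked example of the coupling formulas for a 1x2 coupler with
# output powers O1 and O2 (any consistent linear power units).
import math

def coupling_ratio_percent(o_i, o1, o2):
    return o_i / (o1 + o2) * 100.0            # CR(%)

def coupling_loss_db(o_i, o1, o2):
    return -10.0 * math.log10(o_i / (o1 + o2))  # CL(dB)

# An ideal 50/50 splitter: each port carries half the total power.
print(coupling_ratio_percent(1.0, 1.0, 1.0))        # → 50.0
print(round(coupling_loss_db(1.0, 1.0, 1.0), 2))    # → 3.01
```

The 3.01 dB result is why an even 1x2 splitter is commonly called a "3 dB coupler".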

CPU: Central Processing Unit. Think of this as the brains of the computer. When most people think of processors, they think of Intel, AMD, Motorola, or IBM. The Pentium 4 and Athlon are popular CPUs. The CPU's first instruction is to check the system BIOS and do what the BIOS tells it. The CPU is designed to run a group of instructions, or instruction set. CPU instructions can consist of adding together numbers, subtracting, fetching information from memory, and numerous other simple functions. The computer's operating system feeds the CPU instructions after the BIOS locates the operating system and puts it into main memory. Cracker: This is the common term used to describe a malicious hacker, though it also can refer to code breakers. Crackers get into all kinds of mischief, including breaking or "cracking" copy protection on software programs, breaking into systems and causing harm, changing data, or stealing. Hackers largely regard crackers as a less educated group of individuals who cannot truly create their own work, and simply steal other people's work to cause mischief or for personal gain, not to promote understanding. CRC (Cyclic Redundancy Check): Used in data transfer to check if the data has been corrupted. It is a check value calculated for a data stream by feeding it through a shifter with feedback terms "EXORed" back in. It performs the same function as a checksum but is considerably harder to fool. A CRC can detect errors but not repair them, unlike an ECC. A CRC is attached to almost any burst of data that might possibly be corrupted: CRCs are used on disks, ITU-R 601 data, Ethernet packets, etc. Critical Angle: In geometric optics, at a refractive boundary, the smallest angle of incidence at which total internal reflection occurs.

Cross-Connect: Connections between terminal blocks on the two sides of a distribution frame or between terminals on a terminal block (also called straps). Also called cross-connection or jumper.

Cross-correlation: A measure of the similarity of two different signals. Cross Polarization: In an antenna, polarization orthogonal to a specified reference. Crossover Cable: An Ethernet cable using RJ-45 connectors, where one end of the cable has two of the wire pairs (green and orange) swapped. Instead of wires 1, 2, 3, and 6 going straight through, you have 1 going to 3, 2 going to 6, 3 going to 1, and 6 going to 2. You can use a crossover cable to directly connect two 10BaseT or 100BaseT network cards, basically making a network of two computers for easy file transfer or configuration of network printers or other devices. Crossover cables are also often used to connect 10BaseT and 100BaseT hubs together. Cross-phase Modulation (XPM): A fiber nonlinearity caused by the nonlinear index of refraction of glass. The index of refraction varies with optical power level, which causes different optical signals to interact. Cross Platform: A term given to software that can be used on multiple hardware platforms, such as the x86 PC, the Macintosh, or Sun's Solaris systems. One such piece of software is Java. CRT (Cathode-Ray Tube): A CRT ("picture tube") is a specialized vacuum tube in which images are created when an electron beam scans back and forth across the back side of a phosphor-coated screen. Each time the beam makes a pass across the screen, it lights up a horizontal line of phosphor dots on the inside of the glass tube. By rapidly drawing hundreds of these lines from the top to the bottom of the screen, images are created. The regular "direct-view" TVs that most people watch have a single large picture tube, while CRT-based rear-projection and front-projection TVs use three CRTs: one each for red, green, and blue.
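The crossover wiring described above is just a pin permutation; writing it out as a table makes the symmetry obvious (the cable works plugged in either way around):

```python
# The crossover pin map from the entry: transmit pins 1, 2 at one end
# land on receive pins 3, 6 at the other, and vice versa.
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}

for near, far in sorted(CROSSOVER.items()):
    print(f"pin {near} -> pin {far}")

# Applying the swap twice gets back to the original pin, which is why
# the cable is direction-agnostic.
assert all(CROSSOVER[CROSSOVER[p]] == p for p in CROSSOVER)
```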
CSU/DSU (Channel Service Unit/Data Service Unit): A piece of hardware used to translate the digital data frames of a T1 line into a 10BaseT connection where Internet connectivity is concerned. If the T1 lines are used for voice connections, a CSU/DSU is required to translate the digital frames into signals that your office phone switch can interpret. Basically, the phone network/Internet consists of a bunch of CSU/DSUs talking to one another. When you lease a T1 line, your provider typically supplies you with a CSU/DSU, often with a setup cost. A synonym for CSU/DSU is DSU/CSU. CTI (Computer Telephony Integration): Simply put, this represents the integration of a computer and telephone. Its serious uses include phone registration, fax-back systems, and other systems that record or supply information by simple touch-tone telephone access. A basic use of this technology would be using your computer/modem as an auto-dialer so you don't have to push the buttons on your phone. CTO (Chief Technology Officer): An executive title related to CIO, usually at a

medium or large-sized company. The CTO is, however, more focused on the use of technology in products developed by the company and technology delivered to external customers. CTOs are typically more technical than CIOs. Cursor: This is often represented by a blinking line or square on your computer screen. The cursor is there to let you know where information will be displayed when you type on a keyboard. Many program manuals make reference to the cursor and describe where it should be placed so that you can enter information into software properly. You can use the pointer to place the cursor. Cyborg: A person who is partially flesh and bone, but has one or more robotic appendages electronically linked to his or her nerves. Often, a cyborg is said to be half human and half machine. For example, the "Terminator" character is a robot covered with human tissue--this is not a true cyborg. As well, a human with an artificial limb that is removable is not a cyborg. For a true cyborg it is hard to tell internally where the human ends and the robotic parts begin, and hard to separate one from the other. The Borg characters from Star Trek could be said to be true cyborgs, as they have a series of implants that truly merge flesh and machine (Robocop as well).


-DD1: A format for component digital video tape recording working to the ITU-R BT.601, 4:2:2 standard using 8-bit sampling. The tape is 19 mm wide and allows up to 94 minutes to be recorded on a cassette. Being a component recording system it is ideal for studio or post production work with its high chrominance bandwidth allowing excellent chroma keying. Also multiple generations are possible with very little degradation and D1 equipment can integrate without transcoding to most digital effects systems, telecines, graphics devices, disk recorders, etc. Being component there are no color framing requirements. Despite the advantages, D1 equipment is not extensively used in general areas of TV production, at least partly due to its high cost. (Often used incorrectly to indicate component digital video.) D2: The VTR standard for digital composite (coded) NTSC or PAL signals that uses data conforming to SMPTE 244M. It uses 19 mm tape and records up to 208 minutes on a single cassette. Neither cassettes nor recording formats are compatible with D1. D2 has often been used as a direct replacement for 1-inch analog VTRs. Although offering good stunt modes and multiple generations with low losses, being a coded system means coded characteristics are present. The user must be aware of cross-color, transcoding footprints, low chrominance bandwidths and color framing sequences. Employing an 8-bit format to sample the whole coded signal results in reduced amplitude resolution making D2 more susceptible to contouring artifacts. (Often used incorrectly to indicate composite digital video.) D3: A composite digital video recording format that uses data conforming to SMPTE 244M. Uses 1/2-inch tape cassettes for recording digitized composite (coded) PAL or NTSC signals sampled at 8 bits. Cassettes are available for 50 to 245 minutes. 
Since this uses a composite signal the characteristics are generally as for D2, except that the 1/2-inch cassette size has allowed a full family of VTR equipment to be realized in one format, including a camcorder. D4: A format designation never used, because the number four is considered unlucky (being synonymous with death in some Asian languages). D5: A VTR format using the same cassette as D3 but recording component signals conforming to the ITU-R BT.601-5 Recommendations at 10-bit resolution. With internal decoding, D5 VTRs can play back D3 tapes and provide component outputs. Being a non-compressed component digital video recorder means D5 enjoys all the performance benefits of D1, making it suitable for high-end post production as well as more general studio use. Besides servicing the current 625 and 525 line TV standards, the format also has provision for HDTV recording by use of about 4:1 compression (HD D5). D6: A digital tape format which uses a 19mm helical-scan cassette tape to record uncompressed high definition television material at about 1.2 Gbps. D6 is currently the only high definition recording format defined by a recognized standard.

D6 accepts both the European 1250/50 interlaced format and the Japanese 260M version of the 1125/60 interlaced format, which uses 1035 active lines. It does not accept the ITU format of 1080 active lines. ANSI/SMPTE 277M and 278M are D6 standards. D7: DVCPRO. Panasonic's development of the native DV component format, which records an 18 micron (18x10-6 m, eighteen thousandths of a millimeter) track on 6.35 mm (0.25-inch) metal particle tape. DVCPRO uses native DCT-based DV compression at 5:1 from a 4:1:1, 8-bit sampled source. It uses 10 tracks per frame for 525/60 sources and 12 tracks per frame for 625/50 sources; both use 4:1:1 sampling. Tape speed is 33.813 mm/s. It includes two 16-bit digital audio channels sampled at 48 kHz and an analog cue track. Both Linear (LTC) and Vertical Interval Time Code (VITC) are supported. There is a 4:2:2 (DVCPRO50) and progressive scan 4:2:0 (DVCPRO P) version of the format, as well as a high definition version (DVCPROHD). D8: There is no D8. The Television Recording and Reproduction Technology Committee of SMPTE decided to skip D8 because of the possibility of confusion with similarly named digital audio or data recorders (DA-88). D9 (Formerly Digital-S): A 1/2-inch digital tape format developed by JVC which uses a high-density metal particle tape running at 57.8 mm/s to record a video data rate of 50 Mbps. The tape can be shuttled and searched at up to 32x speed. Video sampled at 4:2:2 is compressed at 3.3:1 using DCT-based intra-frame compression (DV). Two or four audio channels are recorded at 16-bit, 48 kHz sampling; each is individually editable. The format also includes two cue tracks. Some machines can play back analog S-VHS. D9 HD is the high definition version, recording at 100 Mbps. D9 HD: A high definition digital component format based on D9. Records on 1/2-inch tape with 100 Mbps video. D16: A recording format for digital film images making use of standard D1 recorders.
The scheme was developed specifically to handle Quantel's Domino (Digital Opticals for Movies) pictures and record them over the space that sixteen 625 line digital pictures would occupy. This way three film frames can be recorded or played every two seconds. Playing the recorder allows the film images to be viewed on a standard monitor; running at 16x speed shows full motion direct from the tape. DA-88: A Tascam-brand eight track digital audio tape machine using the 8 mm video format of Sony. It has become the de facto standard for audio post production though there are numerous other formats, ranging from swappable hard drives to analog tape formats and everything in between. DAB (Digital Audio Broadcasting): Until now, analogue radio signals such as FM or MW have been subject to numerous kinds of interference on their way from the transmitter to your radio. These problems were caused by mountains, high-rise buildings and weather conditions.

DAB, however, uses these effects as reflectors creating multipath reception conditions to optimise receiver sensitivity. Since DAB always selects the strongest regional transmitter automatically, you'll always be at the focal point of incoming radio signals. DAB is broadcast on terrestrial networks, and you are able to receive it using only a tiny non-directional stub antenna. You receive CD-like quality radio programmes even in the car, without any annoying interference and signal distortion. Aside from distortion-free reception and CD-quality sound, DAB offers further advantages, as it has been designed for the multimedia age. DAB can carry not only audio, but also text, pictures, data and even video - all on your radio. *Bands and Modes for DAB 1. Band III: DAB frequency band 174-240 MHz 2. L-Band: DAB frequency band 1452-1492 MHz 3. DAB Modes I, II, III and IV: country-specific transmission modes. For worldwide operation a receiver must support all 4 modes: Mode I for Band III, Earth; Mode II for L-Band, Earth and satellite; Mode III for frequencies below 3 GHz, Earth and satellite; Mode IV for L-Band, Earth and satellite. Refer to: 1. ITU-R BS.1114-5: Systems for terrestrial digital sound broadcasting to vehicular, portable and fixed receivers in the frequency range 30-3 000 MHz 2. ITU-R BS.1660: Technical basis for planning of terrestrial digital sound broadcasting in the VHF band 3. ETS 300 401: Radio broadcasting systems; Digital Audio Broadcasting (DAB) to mobile, portable and fixed receivers DAC (D-A, D/A, D-to-A): Digital-to-Analog Converter. Conversion of digital to analog signals. The device is also referred to as a DAC (D/A converter). In order for conventional television technology to display digitally transmitted TV data, the data must be decoded first and then converted back to an analog signal. See also: ADC. Dark Current: The induced current that exists in a reverse-biased photodiode in the absence of incident optical power.
It is better understood to be caused by the shunt resistance of the photodiode. A bias voltage across the diode (and the shunt resistance) causes current to flow in the absence of light.


Dark Fiber: Fiber optic wiring that has been installed but has not been turned on yet. Often, companies that lay fiber optic cabling will lay extra cable, since the laying of miles of cable is a very costly and time consuming procedure. This dark fiber will lie dormant until the company needs extra capacity, or until it leases it to another company that needs the capacity. Then the company "lights it up" by sending optical data over the fiber. DAT: Digital Audio Tape. This type of magnetic tape, introduced by Sony, at one point threatened to supplant the normal audio cassette with a better quality alternative. Unfortunately, it never really took off, due to foolish licensing issues that kept its price very high. However, the small tapes took off in a big way in the computer industry. You can get DAT drives today that hold up to 40 GB of data compressed with DDS-4. Audio DAT tapes are not compatible with data drives. Data Carousel: Data is transmitted repetitively. It can be used for downloading various data in broadcasting, where if some data is lost on the first pass it may be acquired on subsequent retransmissions. Teletext pages are an example. See DSM-CC: Digital Storage Media-Command & Control, item 33, page 20. Datacasting: While broadcasting in HDTV or multicasting in SDTV, digital technology allows broadcasters to use leftover bandwidth to transmit additional program material or non-program related resources, such as video, audio, text, graphics, maps and services, to specially equipped computers, cache boxes, set-top boxes, or DTV receivers. This is called datacasting. DTV's broader bandwidth channel allows information to be downloaded at a transmission rate currently 600 times that of a personal computer modem. Data Center: Any computing environment where there is a service agreement between the people managing the computing resources and the users. A company computer network is a data center, but so is a network at an ISP.
Additionally, data center has come to mean a website or network with its resources dedicated to providing information on a particular subject. Data Compression: A technique that provides for the transmission or storage, without noticeable information loss, of fewer data bits than were originally used when the data was created. Data Communications Channel (DCC): Data channels in SDH that enable OAM communications between intelligent controllers and individual network nodes as


well as internode communications. Data Dependent Jitter (DDJ): Also called data dependent distortion. Jitter related to the transmitted symbol sequence. DDJ is caused by the limited bandwidth characteristics, non-ideal individual pulse responses, and imperfections in the optical channel components. Data Link Layer: Layer 2 of the OSI reference model. Provides reliable transit of data across a physical link. The data-link layer is concerned with physical addressing, network topology, line discipline, error notification, ordered delivery of frames, and flow control. The IEEE divided this layer into two sublayers: the MAC sublayer and the LLC sublayer. Sometimes simply called the link layer. Roughly corresponds to the data-link control layer of the SNA model. Data Recorders: Machines designed to record and replay data. They usually include a high degree of error correction to ensure that the output data is absolutely correct and, due to their recording format, the data is not easily editable. This compares with video recorders, which will conceal missing or incorrect data by repeating adjacent areas of picture and which are designed to allow direct access to every frame for editing. Where data recorders are used for recording video there has to be an attendant "workstation" to see the pictures or hear the sound, whereas VTRs produce the signals directly. Although many data recorders are based on VTRs' original designs, and vice versa, VTRs are more efficient for pictures and sound while data recorders are most appropriate for data. DBA: DataBase Administrator. A person whose job it is to manage databases.
A DBA's tasks may include assigning security privileges to the databases, creating and designing databases, and controlling the importing and exporting of data between databases and external sources. The creation and design of databases is a science: you can greatly increase or decrease performance by designing a database properly or improperly. dB (Decibel): 1) A measure of voltage, current, or power gain equal to 1/10 of a bel. Given by the equations 20 log Vout/Vin, 20 log Iout/Iin, or 10 log Pout/Pin. 2) A scale to measure the loudness of sound. A difference of 1 dB is usually the minimum perceptible change in volume, 3 dB is a moderate change in volume, and 10 dB is an apparent doubling of volume. Decibels are a logarithmic scale of relative loudness. In audio equipment, decibels are used to indicate the range of volume from the quietest to the loudest sound that can be handled. The volumes of common sounds in decibels are:


0 dB is the threshold of hearing; 130 dB is the threshold of pain. Whisper: 15 - 25 dB. Quiet background: about 35 dB. Normal home or office background: 40 - 60 dB. Normal speaking voice: 65 - 70 dB. Orchestral climax: 105 dB. Live rock music: 120 dB+. Jet aircraft: 140 - 180 dB.
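The decibel formulas given under dB above can be checked with two familiar worked examples: doubling power gives about 3 dB, while doubling voltage gives about 6 dB (because power goes as voltage squared):

```python
# Worked examples of the dB formulas: 10 log(Pout/Pin) for power,
# 20 log(Vout/Vin) for voltage (or current).
import math

def power_db(p_out, p_in):
    return 10 * math.log10(p_out / p_in)

def voltage_db(v_out, v_in):
    return 20 * math.log10(v_out / v_in)

print(round(power_db(2, 1), 2))    # doubling power   → 3.01 dB
print(round(voltage_db(2, 1), 2))  # doubling voltage → 6.02 dB
```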

dBc: Abbreviation for decibel relative to a carrier level. dBi (Decibel, Isotropic): Decibel referenced to the gain of a theoretical isotropic radiator. dBm (Decibel, Milliwatt): Decibel referenced to one milliwatt into 50 ohms (typical for RF systems). DBS: Digital Broadcast System. An alternative to cable and analog satellite reception, initially utilizing a fixed 18-inch dish focused on one or more geostationary satellites. DBS units are able to receive multiple channels of multiplexed video and audio signals as well as programming information, Email, and related data. DBS typically uses MPEG-2 encoding with QPSK satellite transmission. Also known as digital satellite system. DC (Direct Current): A type of electrical current that moves in one direction at a constant rate, such as that from a standard 9 volt battery. Batteries provide direct current. See also alternating current. D-Cinema (also E-Cinema): Digital cinema. Typically the process of using video at 1080/24p instead of film for production, post production and presentation. DC Coefficient: The DCT coefficient for which the frequency is zero in both dimensions. DCE (Data Communication Equipment): Refer to LAPB (Link Access Protocol, Balanced). DCT (Discrete Cosine Transform): 1) The Discrete Cosine Transform is performed at the Block layer. For MPEG-2 (ISO/IEC 13818-2), it is specified in Annex A of the Recommendation. The input


of the DCT and the output of the inverse transform (IDCT) are 8x8 matrices of 9 bits/element, while the DCT coefficients are represented in 12 bits (-2048:+2047). DCT is used to remove the spatial correlation existing among adjacent pixels in order to allow more efficient entropy coding. As the DCT is performed on 8x8 blocks, only the correlation inside a block can be removed. Moreover, the subdivision of the image into 8x8 blocks for the DCT process is the cause of the typical block artifacts of all DCT-based compression algorithms. If the DCT and IDCT processes were performed with infinite precision they would be lossless; unfortunately they are performed with limited integer precision, so that even without quantisation they can produce some impairment. The DCT coefficients have a relationship with spatial frequencies and, given that the different components have different subjective importance, the DCT also provides an important tool for removing subjective redundancy. The first coefficient (0, 0) is related to the DC component; the coefficients of the first line are related to purely horizontal spatial frequencies, those of the first column to purely vertical ones, and the other coefficients to diagonal components. 2) A representation of DCT coefficients: in the original illustration, the image on the left is the original, divided into blocks of 8x8 elements for the DCT; the image on the right consists of 8x8 sub-images, each representing one DCT coefficient and built from the values that coefficient takes in all the blocks the DCT has been performed on. The dominance of the DC value and of the low-frequency values can be noted. 3) As MPEG-1 and MPEG-2 adopt a hybrid DCT compression algorithm, the DCT may be applied not only to sample values (Intra blocks), but also to prediction-error values (Non-Intra blocks). For the latter the DC coefficients are no longer as important as for the former; their relevance is similar to that of the other low-frequency coefficients. 4) A widely used method of data compression of digital video pictures, basically by resolving blocks of the picture (usually 8 x 8 pixels) into frequencies, amplitudes, and colors. JPEG and DV depend on DCT. 5) Also an Ampex data videotape format using discrete cosine transform. 6) Either the forward discrete cosine transform or the inverse discrete cosine transform. The DCT is an invertible, discrete orthogonal transformation. The inverse DCT is defined in Annex A of ITU-T Rec. H.262 | ISO/IEC 13818-2.
DCT Coefficient: The amplitude of a specific cosine basis function.
DCT Coding: DCT coding consists of the following steps:
a) Separation of the image (I-frame) into blocks. Typically a block consists of 8 x 8 (or 16 x 16) pixels (8x8 is optimum for trade-off between compression efficiency and

computational complexity). Note: a larger block size leads to more efficient coding but requires more computational power.
b) Applying the DCT. The DCT converts a block of pixels into a block of transform coefficients. The coefficients represent the spatial frequency components which make up the original block. Each coefficient can be thought of as a weight applied to an appropriate basis function.
c) Quantization. For a typical block, most of the DCT coefficients will be almost zero. The larger DCT coefficients will be clustered around the DC value (the top-left basis function), i.e. they will be low spatial frequency coefficients. The DCT coefficients are quantized so that the near-zero coefficients are set to zero and the remaining coefficients are represented with reduced precision. This can be achieved by dividing each coefficient by a positive integer, which results in loss of information as well as compression.
d) Encoding. Further compression is gained by exploiting the statistical redundancies within the quantized DCT coefficient data: non-zero coefficients are encoded using a variable-length coding scheme, and zero coefficients are encoded using run-length encoding (i.e. transmitting a number representing the current run of zeroes).
DD2: Using D2 tape, data recorders have been developed offering (by computer standards) vast storage of data (which may be images). A choice of data transfer rates is available to suit computer interfaces. Like other computer storage media, images are not directly viewable, and editing is difficult.
DDR: Digital disk recorder. See: Disk recorder.
DDS (Digital Data Storage): A storage standard used with medium-cost tape media and tape drives, used mainly for small businesses and departmental backups. DDS tapes are the same size and form factor as the DAT tapes used to store music digitally, but DDS media is more robust and more expensive. There are four DDS standards: DDS-1 through DDS-4. The four standards allow for backup of 2 GB, 4 GB, 12 GB, and 20 GB of uncompressed (native) storage respectively. Storage sizes are typically listed as double for compressed data. DDS media is good for up to 10 years, but should not be used for more than 100 backups or it may become unreliable. DDS tapes are falling by the wayside as DLT (Digital Linear Tape), AIT, and other standards offer higher reliability. Sony and HP have both announced that they will not be supporting DDS-5, which would have pushed DDS data storage up to 40 GB of native storage per tape.
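The transform, quantize, encode steps described under DCT Coding above can be sketched in a few lines; this is a minimal illustration using a floating-point orthonormal 1-D DCT (not the integer-precision transform of Annex A), with arbitrary sample values:

```python
import math

N = 8

def dct(samples):
    """1-D orthonormal DCT-II of N samples (step b of DCT coding)."""
    out = []
    for k in range(N):
        s = sum(x * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n, x in enumerate(samples))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse 1-D orthonormal DCT (reconstruction)."""
    out = []
    for n in range(N):
        s = coeffs[0] * math.sqrt(1.0 / N)
        s += sum(c * math.sqrt(2.0 / N) *
                 math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                 for k, c in enumerate(coeffs) if k > 0)
        out.append(s)
    return out

# A smooth ramp of pixel values: energy concentrates in low-frequency coefficients
pixels = [100, 104, 108, 112, 116, 120, 124, 128]
coeffs = dct(pixels)

# Step c) quantization: divide by a step, round, multiply back (near-zero -> 0)
step = 16
quantized = [round(c / step) * step for c in coeffs]

nonzero = sum(1 for q in quantized if q != 0)
error = max(abs(a - b) for a, b in zip(idct(quantized), pixels))
print(nonzero, "nonzero coefficients of", N)
print("max reconstruction error:", round(error, 1))
```

Only a couple of low-frequency coefficients survive quantization, yet the reconstructed block stays close to the original, which is exactly the trade-off step c) describes.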


Debug: The act of diagnosing, fixing, or removing bugs from a computer program. It can also describe the fixing of bugs in HTML or other computer-based texts that are not necessarily programs. The first computer "bug" was an actual moth, and debugging that computer involved physically removing the moth from the system, as it was impeding electrical flow.
Debugger: A program that searches other programs for bugs. In addition to identifying definite or possible bugs, debuggers can usually step through programs one operation at a time. Debuggers are associated with a particular programming language, and are typically packaged as part of a programming language or set of programming tools.
Decoding Time-Stamp; DTS (system): A field that may be present in a PES packet header that indicates the time that an access unit is decoded in the system target decoder.
Decoding (process): The process defined in ITU-T Rec. H.222.0 | ISO/IEC 13818-1 that reads an input coded bit stream and outputs decoded pictures or audio samples.
Default: The fallback value. If nothing else is specified, the default value will be used. If you install a program, it generally installs with the default settings unless you override the pre-programmed settings with your preferences.
de facto standard: A standard which is widely used and accepted even though it is not official. Compare with de jure standard. See also standard.
de jure standard: A standard which has been made official by a recognized standards body or by law, whether or not it is widely used. Compare with de facto standard. See also standard.
Delimited: The separation of data elements in a text file by a character or combination of characters. The character that separates the elements is the delimiter. The most common form of delimited file is tab-delimited, as tab characters are not often used within data elements, and are easily understood by most operating systems and character formats.
DeMilitarized Zone (DMZ): A part of a network that is protected by a firewall, but may be accessed by external Internet clients. The DMZ generally contains servers such as SMTP servers, remote access machines, or web servers. Client machines and internal servers that do not need to be accessed by Internet clients are kept in a more protected segment of the network than the DMZ. Alternately, DMZ can be used to refer to the media layer where route peering is done among multiple administrative regions with their own traffic policies.
Demultiplexing: Separating elementary streams or individual channels of data from a single multichannel stream. For example, video and audio streams must be demultiplexed before they are decoded. This is true for multiplexed digital television transmissions.
Denial of Service (DoS): A type of network attack that attempts to render a network or Internet resource useless to users, typically by sending large amounts of repeated requests for data. The target may be e-mail services or an IRC server,

or it could be access to a particular website. The methods of attack vary, but the end result is that a resource is artificially slowed down or made unavailable to legitimate users.
Dense Wavelength-division Multiplexing (DWDM): The transmission of many closely spaced wavelengths in the 1550 nm region over a single optical fiber. Wavelength spacings are usually 100 GHz or 200 GHz, which correspond to 0.8 nm or 1.6 nm. DWDM bands include the C-Band, the S-Band, and the L-Band.
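The GHz-to-nm correspondence quoted for DWDM follows from the small-spacing relation delta_lambda = lambda^2 * delta_f / c; a quick check in Python (assuming a 1550 nm center wavelength):

```python
# Convert DWDM channel spacing from frequency (Hz) to wavelength (nm)
# using delta_lambda = lambda^2 * delta_f / c, valid for small spacings.
C_LIGHT = 299_792_458.0          # speed of light, m/s
CENTER = 1550e-9                 # center wavelength, m

def spacing_nm(delta_f_hz):
    return CENTER**2 * delta_f_hz / C_LIGHT * 1e9   # result in nm

print(round(spacing_nm(100e9), 2))  # 100 GHz spacing -> 0.8 nm
print(round(spacing_nm(200e9), 2))  # 200 GHz spacing -> 1.6 nm
```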

De-interlacing (also called line-doubling): The process of converting an interlaced-scan video signal (where each frame is split into two sequential fields) to a progressive-scan signal (where each frame remains whole). De-interlacers are found in digital TVs and progressive-scan DVD players. More advanced de-interlacers include a feature called 3-2 pulldown processing. For TVs, de-interlacing is often referred to as "line-doubling" or "upconversion."
Delivery System: The physical medium by which one or more multiplexes are transmitted, e.g. a satellite system, wide-band coaxial cable, fibre optics, or the terrestrial channel of one emitting point.
DEMUX (Demultiplexer): See: Demultiplexing.
Dense Wavelength Division Multiplexing (DWDM): DWDM is the higher-capacity version of WDM, which is a means of increasing the capacity of fibre-optic data transmission systems through the multiplexing of multiple wavelengths of light. Commercially available DWDM systems support the multiplexing of from 8 to 40 wavelengths of light.
DES (Data Encryption Standard): An encryption method developed by IBM and adopted as a standard in 1977. It uses a private 56-bit key that is applied to each 64-bit block of data. The sender and receiver must each know the private key. Anything encrypted by DES encryption has 72,000,000,000,000,000 (or 72 quadrillion) possible keys. DES encryption has been broken, but it took over 14,000 computers operating in parallel to crank through codes until the proper key was found. See also Triple DES encryption.
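The keyspace figure quoted for DES can be verified directly, since a 56-bit key allows exactly 2^56 combinations:

```python
# A 56-bit DES key gives 2**56 possible keys: about 72 quadrillion
keys = 2 ** 56
print(keys)                      # 72057594037927936
print(round(keys / 1e15, 1))     # in quadrillions
```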

Deserializer: A device that converts serial digital information to parallel digital form.
Desktop Video: Video editing and production done using standard desktop computing platforms running add-on video hardware and software.
Descriptor: A data structure of the format: descriptor_tag, descriptor_length, and a variable amount of data. The tag and length fields are each 8 bits. The length specifies the length of the data that begins immediately following the descriptor_length field itself. A descriptor whose descriptor_tag identifies a type not recognized by a particular decoder shall be ignored by that decoder. Descriptors can be included in certain specified places within PSIP tables, subject to certain restrictions. Descriptors may be used to extend data represented as fixed fields within the tables. They make the protocol very flexible since they can be included only as needed. New descriptor types can be standardized and included without affecting receivers that have not been designed to recognize and process the new types.
Destructive Interference: Any interference that decreases the desired signal. For example, two light waves that are equal in amplitude and frequency, and out of phase by 180°, will negate one another.
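The 180° example can be verified numerically: two equal-amplitude waves half a cycle out of phase sum to zero at every sample point.

```python
import math

# Two equal-amplitude sinusoids, the second shifted by 180 degrees (pi radians);
# their sum is zero (to floating-point precision) everywhere.
residual = max(abs(math.sin(2 * math.pi * t / 64) +
                   math.sin(2 * math.pi * t / 64 + math.pi))
               for t in range(64))
print(residual < 1e-12)  # True: complete cancellation
```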

Detector: An opto-electric transducer used to convert optical power to electrical current. Usually referred to as a photodiode.

D/I: Drop and Insert. A point in the transmission where portions of the digital signal can be dropped out and/or inserted.
Diameter-mismatch Loss: The loss of power at a joint that occurs when the transmitting fiber has a diameter greater than the diameter of the receiving fiber. The loss occurs when coupling light from a source to fiber, from fiber to fiber, or from fiber to detector.
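For multimode fiber, this loss is commonly estimated as 20 log10 of the diameter ratio when the receiving core is the smaller of the two; a sketch under that assumption (the 62.5/50 µm values are illustrative, not from the text):

```python
import math

def diameter_mismatch_loss_db(d_transmit_um, d_receive_um):
    """Approximate coupling loss (dB) when the receiving core is smaller."""
    if d_receive_um >= d_transmit_um:
        return 0.0  # no diameter-mismatch loss going from small into large
    return 20 * math.log10(d_transmit_um / d_receive_um)

# e.g. launching from 62.5 um multimode fiber into 50 um fiber
print(round(diameter_mismatch_loss_db(62.5, 50.0), 2))  # ~1.94 dB
```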

Diagnostics: Tests to check the correct operation of hardware and software. As digital systems continue to become more complex, built-in automated testing becomes an essential part of the equipment. Some extra hardware and software


has to be added to make the tests operate. Digital systems with such provisions can often be quickly assessed by a trained service engineer, thus speeding repair.
DIB (Dual Independent Bus): The bus architecture between Intel's Pentium II processor, memory, and L2 cache. One bus connects the processor to the L2 cache and a second connects the processor to main memory. Having two buses instead of one increases performance over single-bus architectures. In addition, the speed of the external L2 cache can scale up independently of the speed of the system bus, allowing faster cache access. The final feature of the DIB architecture is a pipeline on the cache-to-processor bus that allows multiple simultaneous cache requests.
Differential Gain (DG): A type of distortion in a video signal that causes the brightness information to be incorrectly interpreted.
Differential Phase (DP): A type of distortion in a video signal that causes the color information to be incorrectly interpreted.
Diffraction Grating: An array of fine, parallel, equally spaced reflecting or transmitting lines that mutually enhance the effects of diffraction to concentrate the diffracted light in a few directions determined by the spacing of the lines and by the wavelength of the light.
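The directions a grating concentrates light into follow the grating equation d·sin(theta) = m·lambda; a small worked example (the 600 lines/mm pitch and 1550 nm wavelength are illustrative values, not from the text):

```python
import math

LINES_PER_MM = 600
d = 1e-3 / LINES_PER_MM          # line spacing in meters (~1.67 um)
wavelength = 1550e-9             # 1550 nm, in the DWDM region

# First-order (m = 1) diffraction angle from d * sin(theta) = m * lambda
m = 1
theta = math.degrees(math.asin(m * wavelength / d))
print(round(theta, 1), "degrees for the first-order beam")
```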

Diffraction Loss: The loss between two antennas caused by the scattering of energy from a knife-edged or rounded obstruction in the path between them.
Digital: Circuitry in which data-carrying signals are restricted to either of two voltage levels, corresponding to logic 1 or 0; a circuit that has two stable states: high or low, on or off.
Digital8: Camcorder format which allows you to record digital-quality video onto standard Hi8 or 8mm tape. Most Digital8 camcorders also play back analog Hi8 and 8mm recordings, although they do not record in Hi8 or 8mm. A 120-minute Hi8/8mm tape yields one hour of recording when used with a Digital8 camcorder, giving essentially the same picture quality as Mini DV (500 lines of horizontal resolution).
Digital Audio Compression: Originally most audio was recorded and distributed on CDs in the PCM format. The PCM format consumes about 10 megabytes of file space for each minute of audio program length. Digital compression reduces

the file size, and therefore the transmission time, of audio program material over a communications network such as the Internet. There are two general types of digital compression, lossy and lossless:
1. Lossy Compression: In a lossy compression scheme, as the name implies, some of the original information is discarded when it is compressed. Therefore, it is impossible to produce an exact replica of the original audio signal when the audio is played. There are many different schemes of lossy compression available, generally providing varying compression ratios. The most popular of these, the MPEG Layer 3 (MP3) format, is commonly employed with compression ratios of up to ten to one. All lossy compression schemes add artifacts to the compressed audio: small imperfections created by the loss of the actual audio data. Although its quality may not seem that poor, audio that has been processed by a lossy digital compressor is no longer "CD quality".
2. Lossless Compression: Lossless compressors produce an exact replica of the original source. For example, the zip file format produced by the Winzip program on a PC produces exact copies of the original material that was encoded within the zip file. Fortunately, there is at least one lossless compression scheme available for Windows PCs: Shorten, an audio format that creates an exact clone of the original source. Lossless compression is the only way to preserve the "CD quality" level of your program material. Do we still need digital compression in this high-speed world? For the time being, the audio you seek may only be available in compressed formats like MP3, but we are rapidly approaching the time when we won't need to compress audio in order to save transmission time or conserve disk storage space.
Digital Betacam: A development of the original analog Betacam VTR which records digitally on a Betacam-style cassette.
It uses mild intra-field compression to reduce the ITU-R 601 sampled video data by about 2:1. Some models can replay both digital and analog Betacam cassettes.
DiBEG (Digital Broadcasting Experts Group): The Digital Broadcasting Experts Group (DiBEG) was founded in September 1997 to promote ISDB-T, the Japanese Digital Terrestrial Broadcasting system, in the world. http://www.dibeg.org/
Digital Channel: A set of one or more digital elementary streams.
Digital Cross-connect (DCS): An electronic cross-connect which has access to lower-rate channels in higher-rate multiplexed signals and can electronically rearrange (cross-connect) those channels.
Digital Chromakeying: Digital chromakeying differs from its analog equivalent in that it can key uniquely on any one of the 16 million colors representable in the component digital domain. It is then possible to key from relatively subdued colors,
81

rather than relying on highly saturated colors that can cause color-spill problems on the foreground. A high-quality digital chromakeyer examines each of the three components of the picture and generates a linear key for each. These are then combined into a composite linear key for the final keying operation. The use of three keys allows much greater subtlety of selection than does a chrominance-only key.
Digital Components: Component video signals that have been digitized.
Digital Disk Recorder (DDR): A video recording device that uses a hard disk drive or optical disk drive mechanism. Disk recorders offer nearly instantaneous access to recorded material.
Digital effects: Special effects created using a digital video effects (DVE) unit.
Digital parallel distribution amplifier: A distribution amplifier designed to amplify and fan out parallel digital signals.
Digital Linear Tape (DLT): A technology designed by DEC and sold to Quantum, used for backing up huge amounts of data (up to 35 GB per tape without compression, 70 GB with compression). The drives are very expensive and so is the media, but they are bulletproof.
Digital Millennium Copyright Act (DMCA): A controversial reform of the U.S. copyright laws that is the first attempt to update those laws for the age of digital technology. It covers circumvention of copyright protections, fair use of copyrighted materials, and protection of ISPs from liability, as long as they follow certain practices. However, the DMCA also prohibits circumventing access controls and copyrights, and bans devices which perform such circumvention. The problems with this arise when computer scientists, security analysts, users, etc. are unable to test supposedly secure products for lax security; and if they do find lax security, they are unable to tell anyone or they could face prosecution. The first major DMCA case involved a Russian programmer whose company created software that allowed the bypassing of Adobe's eBook security. This allowed legitimate users to back up their eBooks and use them in different formats, which is legal by fair-use standards; but it also allowed illegitimate users to potentially make pirated copies of the eBooks, which the DMCA prohibits.
Digital Phase Lock Loop (DPLL): A digital PLL (DPLL) circuit may consist of a serial shift register which receives digital input samples (extracted from the received signal), a stable local clock signal which supplies clock pulses to drive the shift register, and a phase corrector circuit which takes the local clock and regenerates a stable clock in phase with the received signal by slowly adjusting the phase of the regenerated clock to match the received signal. This circuit is useful when the data and clock are sent together over a common cable (as in Manchester encoding), since it allows the receiver to separate (regenerate) the clock signal from the received data. The regenerated clock signal is then used to sample the received data and determine the value of each received bit.


When a signal is first received, the regenerated clock and the received signal will not be aligned. (This happens at the start of an Ethernet frame, since the receiver has no knowledge of which transmitter sent the frame, and therefore no knowledge of the frequency and phase of the received clock signal.) The loop starts to track the received signal, and eventually locks in to the required signal, allowing it to find the center of each received data bit and reliably decode the received information. Because this process takes time, many systems (including Ethernet) employ a preamble containing a well-known, simple data pattern, which increases the speed at which the DPLL gains lock on the clock signal.
Digital-S: See: D9.
Digital Signal level 0 (DS-0): The signal used to carry a standard analog or digital phone line connection. 24 DS-0 connections can be carried on a T1 line. The speed of the line is either 64 Kbps, or 56 Kbps if the eighth bit is used for signaling information.
Digital Signal level 1 (DS-1): Synonym for T1.
Digital Signal level 2 (DS-2): Synonym for T2.
Digital Signal level 3 (DS-3): Synonym for T3.
Digital Signal level 4 (DS-4): Synonym for T4.
Digital Signal level 5 (DS-5): Synonym for T5.
Digital Signature: A form of electronic signature that works with a public- and private-key encryption system and a certificate authority. To sign an electronic document with a digital signature, you use digital signature software to select the document and enter an authorization code that is unique to your digital signature. The signature consists of a string of characters and the signer's name, title, company, certificate serial number, and the name of the certificate authority.
Digital Storage Media (DSM): A digital storage or transmission device or system.
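The DS-0/T1 relationship above is simple arithmetic: 24 channels of 8 bits sampled 8,000 times per second, plus one framing bit per frame.

```python
CHANNELS = 24          # DS-0 channels per T1
BITS_PER_SAMPLE = 8
FRAMES_PER_SEC = 8000  # one 8-bit sample per channel per frame

payload = CHANNELS * BITS_PER_SAMPLE * FRAMES_PER_SEC   # 1,536,000 b/s
framing = 1 * FRAMES_PER_SEC                            # 8,000 b/s overhead
print(payload + framing)  # 1544000: the familiar 1.544 Mbps T1 rate
```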

Digital Television (DTV): Transmitting a broadcast signal by encoding it as zeroes and ones, the digital code used in computers. DTV can be compressed to provide four, five or more channels in the same bandwidth required for one channel of the current standard television, or a single channel with better sound and about five times more picture information (picture elements, or pixels) than conventional television. Calculators, computers, compact discs and high-definition television are examples of digital technology.
Digital Terrestrial TV Action Group (DigiTAG): DigiTAG promotes the widespread introduction of Digital Terrestrial Television (DTT) and seeks to ensure a thriving terrestrial television market within Europe and elsewhere, wherever the DVB-T standard is used. http://www.digitag.org/
Digital TV Group: The DTG was formed to set technical standards for the implementation of digital terrestrial television (DTT) in the UK and now encompasses all digital TV platforms and convergence issues on a worldwide basis. http://www.dtg.org.uk/
Digital Transmission Content Protection (DTCP), 5C & 4C Entity: The DTCP standard (also known as 5C) is intended to control copying of video and audio entertainment content in consumer digital equipment, such as digital TV receivers and recorders, that uses interconnection via i.Link/FireWire (IEEE 1394) links. The program material is subject to an encryption protocol that protects it from illegal copying, theft or tampering when being passed between equipment via high-speed serial buses using IEEE 1394. This specification is considered to be one of the safest copy-protection systems for current home appliances. DTCP specifies a mechanism that enables mutual authentication by content transmission equipment. It was proposed by five companies (hence 5C): Hitachi, Ltd., Intel Corporation, Matsushita Electric Industrial Co., Ltd., Sony Corporation, and Toshiba Corporation. http://www.dtcp.com. DTLA (Digital Transmission Licensing Administrator) is DTCP's licensing administrator. See also Copyright Protection Technical Working Group, Item 22. IBM, Intel, Matsushita and Toshiba have formed the 4C Entity for Content Protection for Recordable Media and Pre-Recorded Media (CPRM & CPPM). http://www.4centity.com/
Digital Word: The number of bits treated as a single entity by the system.
Digitizing Time: Time taken to record footage into a disk-based editing system, usually from a tape-based analog system, but also from newer digital tape formats without direct digital connections.
DIMM (Dual In-Line Memory Module): A circuit board with memory chips on it, very much like a SIMM except that it is larger and contains more pins. DIMMs are 64-bit memory devices, so you need just a single DIMM for a processor with a 64-bit memory path to work properly; or you can potentially double up DIMMs for a

128-bit memory interface if your motherboard supports it. Most memory today is sold on DIMMs.
Dipole Antenna: A dipole antenna is a straight electrical conductor measuring 1/2 wavelength from end to end and connected at the center to a radio-frequency (RF) feed line. This antenna, also called a doublet, is one of the simplest types of antenna, and constitutes the main RF radiating and receiving element in various sophisticated types of antennas. The dipole is inherently a balanced antenna, because it is bilaterally symmetrical. Ideally, a dipole antenna is fed with a balanced, parallel-wire RF transmission line. However, this type of line is not common. An unbalanced feed line, such as coaxial cable, can be used, but to ensure optimum RF current distribution on the antenna element and in the feed line, an RF transformer called a balun (a contraction of the words "balanced" and "unbalanced") should be inserted in the system at the point where the feed line joins the antenna. For best performance, a dipole antenna should be more than 1/2 wavelength above the ground, the surface of a body of water, or other horizontal conducting medium such as sheet-metal roofing. The element should also be at least several wavelengths away from electrically conducting obstructions such as supporting towers, utility wires, guy wires, and other antennas. Dipole antennas can be oriented horizontally, vertically, or at a slant. The polarization of the electromagnetic (EM) field radiated by a dipole transmitting antenna corresponds to the orientation of the element. When the antenna is used to receive RF signals, it is most sensitive to EM fields whose polarization is parallel to the orientation of the element. The RF current in a dipole is maximum at the center (the point where the feed line joins the element) and minimum at the ends of the element. The RF voltage is maximum at the ends and minimum at the center.
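The half-wavelength dimension above follows directly from the free-space wavelength; a sketch (the 95% shortening factor is a common rule of thumb for wire antennas, and the 100 MHz example frequency is an assumption, not from the text):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def half_wave_dipole_m(freq_hz, velocity_factor=0.95):
    """Approximate end-to-end length of a half-wave dipole in meters."""
    wavelength = C_LIGHT / freq_hz
    return (wavelength / 2) * velocity_factor

# e.g. an FM-broadcast frequency of 100 MHz
print(round(half_wave_dipole_m(100e6), 2), "m end to end")
```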

Dip Switch: One or more switches that are housed in a rectangular box on a circuit board. The switches are binary in nature, either on or off for each switch. Dip switches were more common on old ISA cards, and are often used in place of groups of jumpers. Nowadays Plug-and-Play and the abundance of Flash memory for saving settings have all but eliminated these nasty things in consumer PCs.
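Since each switch is a single on/off bit, a bank of DIP switches reads out as a binary number; a minimal sketch:

```python
# Four DIP switches read most-significant first: ON = 1, OFF = 0
switches = [True, False, True, True]   # e.g. ON OFF ON ON

value = 0
for s in switches:
    value = (value << 1) | int(s)      # shift in each switch bit

print(value)  # binary 1011 -> 11
```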


Directional Coupler: A coupling device for separately sampling (through a known coupling loss) either the forward (incident) or the backward (reflected) wave in a transmission line.

Directory: The name for a logical container for files. Directories were devised to organize files; without them, all the files on your hard drive would be in one big listing. When you request a list of files from a computer, you generally only see the files within one directory. Directories can contain files and/or other directories. Nowadays, most operating systems call directories "folders," but we know what they really are.
Direct-view TV: The conventional and most common type of TV, which uses a single large (up to 36") CRT to display images. Other TV types include flat-panel, rear-projection and front-projection.
Discrete Cosine Transform (DCT): A step used in the MPEG coding process to convert data from the spatial domain to the frequency domain.
Disk Drive: Often, this is a synonym for hard drive, but it can also refer to a floppy drive or any type of removable drive that uses magnetic media.
Diskette: This term is synonymous with floppy disk. You may also hear the long version, floppy diskette. Nowadays most people just say "disk."
Dispersion: The temporal spreading of a light signal in an optical waveguide caused by light signals traveling at different speeds through a fiber, due to either modal or chromatic effects.

Dispersion-compensating Fiber (DCF): A fiber that has the opposite dispersion of the fiber being used in a transmission system. It is used to nullify the dispersion caused by that fiber.
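Nullifying dispersion is a matter of cancelling the accumulated D x L product of the transmission fiber; a sketch with illustrative coefficients (the 17 and -100 ps/nm/km values are typical assumptions, not from the text):

```python
# Accumulated dispersion is D (ps/nm/km) times length (km); the DCF is
# sized so that the link total is (near) zero.
D_LINE = 17.0     # standard single-mode fiber near 1550 nm, ps/nm/km
L_LINE = 80.0     # km of transmission fiber
D_DCF = -100.0    # dispersion-compensating fiber, ps/nm/km (illustrative)

accumulated = D_LINE * L_LINE                 # 1360 ps/nm to cancel
l_dcf = -accumulated / D_DCF                  # required DCF length, km
print(l_dcf, "km of DCF")
print(round(D_LINE * L_LINE + D_DCF * l_dcf, 6), "ps/nm residual")
```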


Dispersion Penalty: The result of dispersion in which pulses and edges smear, making it difficult for the receiver to distinguish between ones and zeros. This results in a loss of receiver sensitivity compared to a short fiber, and is measured in dB. The dispersion penalty dBL is calculated from the following quantities:
Laser spectral width (nm)
D = Fiber dispersion (ps/nm/km)
System dispersion (ps/km)
f = Bandwidth-distance product of the fiber (Hz km)
L = Fiber length (km)
FF = Fiber bandwidth (Hz)
C = A constant equal to 0.5
FR = Receiver data rate (b/s)
dBL = Dispersion penalty (dB)
Dispersion Management: A technique used in fiber optic system design to cope with the dispersion introduced by the optical fiber. A dispersion slope compensator is one dispersion management technique.

Dispersion-shifted Fiber (DSF): A type of single-mode fiber designed to have zero dispersion near 1550 nm. This fiber type works very poorly for DWDM applications because of high fiber nonlinearity at the zero-dispersion wavelength.
Display Order: The order in which the decoded pictures are displayed. Normally this is the same order in which they were presented at the input of the encoder.
Distributed Control Systems (DCS): An industrial measurement and control system routinely seen in factories and treatment plants, consisting of a central host or master (usually called a master station, master terminal unit or MTU); one or

more field data-gathering and control units; and a collection of standard and/or custom software used to monitor and control field data elements. Unlike similar SCADA systems, the field data-gathering or control units are usually located within a more confined area. Communications may be via a local area network (LAN), and will normally be reliable and high speed. A DCS system usually employs significant amounts of closed-loop control.
Distributed Feedback Laser (DFB): An injection laser diode which has a Bragg reflection grating in the active region in order to suppress multiple longitudinal modes and enhance a single longitudinal mode.

Distribution quality: The level of quality of a television signal from the station to its viewers. For digital television this is approximately 19.39 Mbps.
Dither: A form of smart conversion from a higher bit depth to a lower bit depth, used in the conversion of audio and graphic files. In the conversion from 24-bit color to 8-bit color (millions of colors reduced to 256), the process attempts to improve the quality of on-screen graphics with reduced color palettes by adding patterns of different-colored pixels to simulate the original color. The technique is also known as "error diffusion," and is applied to audio bit-rate reduction as well as graphics.
DLP (Digital Light Processing): A projection TV technology developed by Texas Instruments, based on their Digital Micromirror Device (DMD) microchip. Each DMD chip has hundreds of thousands of tiny swiveling mirrors which are used to create the image. The popular HD2 DLP chip has an array of 1280 by 720 mirrors, where each mirror represents a single pixel, providing 720p resolution. DLP technology is used in both front- and rear-projection systems. There are two basic types of DLP projector: "single-chip" projectors use a single DMD chip along with a spinning color wheel, while much more expensive "3-chip" projectors dedicate a chip to each basic color: red, green, and blue.
DLT (Digital Linear Tape): See: Digital Linear Tape.
DNG: Digital news gathering. Electronic news gathering (ENG) using digital equipment and/or transmission.
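The "error diffusion" idea behind Dither can be shown in one dimension: quantize each sample, then carry the quantization error into the next sample so the average level is preserved. A minimal sketch (not any specific product's algorithm):

```python
# Quantize 8-bit samples to 1 bit (0 or 255) with 1-D error diffusion
samples = [96] * 16          # a flat mid-gray run
output, error = [], 0.0
for s in samples:
    adjusted = s + error
    q = 255 if adjusted >= 128 else 0
    output.append(q)
    error = adjusted - q     # carry the quantization error forward

# Plain thresholding would output all zeros; diffusion keeps the average
print(sum(output) / len(output), "average, close to the original 96")
```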

88

Dolby Digital (formerly Dolby AC-3): The approved 5.1-channel (surround-sound) audio standard for ATSC digital television, using approximately 13:1 compression. Six discrete audio channels are used: Left, Center, Right, Left Rear (or side) Surround, Right Rear (or side) Surround, and a subwoofer (considered the ".1" as it is limited in bandwidth). The bit rate can range from 56 kbps to 640 kbps: typically 64 kbps for mono, 192 kbps for two-channel, 320 kbps for 35 mm cinema 5.1, 384 kbps for laserdisc/DVD 5.1 and ATSC, and 448 kbps for 5.1. When moving from analog recording to a digital recording medium, the digital audio coding used yields an amount of data often too immense to store or transmit economically, especially when multiple channels are required. As a result, new forms of digital audio coding--often known as "perceptual coding"--have been developed to allow the use of lower data rates with a minimum of perceived degradation of sound quality. Dolby's third-generation audio coding algorithm (originally called AC-3) is such a coder. It has been designed to take maximum advantage of human auditory masking: it divides the audio spectrum of each channel into narrow frequency bands of different sizes, optimized with respect to the frequency selectivity of human hearing. This makes it possible to sharply filter coding noise so that it is forced to stay very close in frequency to the frequency components of the audio signal being coded. By reducing or eliminating coding noise wherever there are no audio signals to mask it, the sound quality of the original signal can be subjectively preserved. In this key respect, a coding system like Dolby Digital is essentially a form of very selective and powerful noise reduction.

Dolby E: A coding system from Dolby Laboratories designed specifically for use with video.
The audio framing is matched to the video framing, which allows synchronous and seamless switching or editing of audio and video without the introduction of gaps or A/V sync slips. All of the common video frame rates, including 30/29.97, 25, and 24/23.976, can be supported with matched Dolby E audio frame sizes. The Dolby E coding technology is intended to provide approximately 4:1 reduction in bit rate. The reduction ratio is intentionally limited so that the quality of the audio may be kept very high even after a number of encode-decode generations. The fact that operations such as editing and switching can be performed seamlessly in the coded domain allows many coding generations to be avoided, further increasing quality. A primary carrier for the Dolby E data will be the AES/EBU signal. The Dolby E coding will allow the two PCM audio channels to be replaced with eight encoded audio channels. A VTR PCM track pair will become capable of carrying eight independent audio channels, plus the accompanying metadata. The system is also intended to be applied on servers and satellite links. A time delay when encoding or decoding Dolby E is unavoidable. In order to facilitate the provision of a


compensating video delay, the audio encoding and decoding delays have been fixed at exactly one frame. When applied with video recording formats which incorporate frame-based video encoding, it can be relatively easy to provide equal video and audio coding delays. When applied with uncoded video, it may be necessary to provide a compensating one-frame video delay.

Dolby Surround (Dolby Stereo, Dolby 4:2:4 Matrix): Analog matrix coding of four audio channels--Left, Center, Right, Surround (LCRS)--into two channels referred to as Right-total and Left-total (Rt, Lt). On playback, a Dolby Surround Pro Logic decoder converts the two channels to LCRS and, optionally, a subwoofer channel. The Pro Logic circuits are used to steer the audio and increase channel separation. The Dolby Surround system, originally developed for the cinema, is a method of carrying more audio channels, but suffers from poor channel separation, a mono limited-bandwidth surround channel, and other limitations. A Dolby Surround track can be carried by analog audio or linear PCM, Dolby Digital and MPEG compression systems.

DOM (Document Object Model): The Document Object Model is specified by the W3C in three levels. It defines how the objects in a (Web or DVB-HTML) page (text, images, headers, links, etc.) are represented: what attributes are associated with each object, and how the objects and attributes can be manipulated. It is a platform- and language-neutral interface that allows programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. DVB-HTML uses up to DOM Level 2 with some variants, and seeks compatibility with DOM Level 3 key events; see Section 8 of the MHP v1.1 specification, ETSI TR 102 812.
Note: Dynamic HTML (DHTML) is a term used by some vendors to describe the combination of HTML, style sheets and scripts that allows documents to be animated. It relies on the DOM to dynamically change the appearance of Web pages after they have been downloaded. The W3C's DOM specification supports both HTML and XML.

Domain: This term describes the Internet's addressing scheme, and also a security construct in Windows operating systems. For the Internet, domains are represented by domain names such as Geek.com or UGeek.org. These domains are mapped to TCP/IP addresses by DNS servers so that browsers can find websites. Under Windows, domains are groups of server and client machines that exist in the same security structure. You can also define security relationships based on how domains interact with each other. Windows NT 4 domains include a primary domain controller and optionally one or more backup domain controllers to manage security. Windows 2000 domains have domain controllers that share responsibilities more equally, eliminating the primary and backup distinction.

Domain name: A friendlier representation of a complex TCP/IP address. For example, we purchased the Geek.com domain name so we could use it to represent our server's address. You can now purchase domain names through several providers, although you used to have to go through InterNIC. The

fee for owning a domain name typically ranges from US$10-$35 per year. Domain name purchasing is first come, first served.

Domain Name Service (DNS): This service maps easily remembered names, such as www.geek.com, to TCP/IP numbers, such as 123.12.4.245. When you type www.geek.com into your browser, it goes out to the DNS server specified by your ISP and asks for a matching TCP/IP address. If it finds a DNS entry for the name you typed in, you see the appropriate website. If not, it lets you know. Every domain name that is actually being used for a website has a corresponding TCP/IP address. When you set up a site, you have your ISP add a DNS entry to its DNS servers (or manage it yourself). This entry gets replicated across the Internet in a matter of hours, and, once fully replicated, you can reach your website from any Internet connection.

Down converting: The process which changes the number of pixels and/or frame rate and/or scanning format used to represent an image by removing pixels. Down converting is done from high definition to standard definition.

Dot Matrix Printer: This type of printer prints out little dots that can form graphics or characters. It was popular a while back because the only other choice was a daisy-wheel printer that didn't print any graphics. These printers are generally loud, producing a high-pitched buzzing sound, and they don't produce very good graphics. You'll still find them in bank machines, cash registers, and applications that require a physical imprint on paper (to create a carbon copy), but most people don't want anything to do with them.

Dot Pitch: The smaller the better, as it relates to CRT monitors. The dot pitch is a measure of the distance between phosphor dots of the same color on a CRT monitor. A high dot pitch generally produces a blurred and unclear picture. Smaller dot pitches produce a sharper, crisper image that stands up to close examination.
There is contention over how the dot pitch should be measured--horizontally, vertically, or diagonally--resulting in some monitor makers reporting the lowest rating as their official dot pitch. Take the measure of dot pitch with a grain of salt.

Dots Per Inch (DPI): Most often this term is used to describe printer or scanner resolution. If a printer is said to print at 300 dpi, it is capable of printing 300 dots horizontally and 300 dots vertically over a square inch. Thus, if you have a printer with a higher dpi value, you should have a crisper image. The same holds true for scanners--a higher dpi theoretically relates to a higher-quality scan. Besides scanners and printers, dpi is often discussed when referring to flat panel displays. As flat panel technology improves you get a higher dpi value, and thus can display a larger image in a smaller space without losing any detail. The goal in printers, scanners, and displays is to keep dpi values high enough that the user sees a high-quality image and cannot see the individual dots (or pixels) that make up the image.

Double-window Fiber: 1) Multimode fibers optimized for 850 nm and 1310 nm operation. 2) Single-mode fibers optimized for 1310 nm and 1550 nm operation.
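The name-to-address lookup described under Domain Name Service (DNS) above is exposed directly by most languages' standard libraries. This Python sketch asks the system resolver, which in turn consults the configured DNS servers:

```python
import socket

def resolve(hostname):
    """Return an IPv4 address for `hostname`, as found by the system
    resolver (hosts file first, then the configured DNS servers)."""
    return socket.gethostbyname(hostname)

# "localhost" is answered from the local hosts file without a network query;
# a name like www.geek.com would return whatever A record DNS currently holds.
print(resolve("localhost"))
```

Because the answer comes from whatever record is currently published, the same call made later, or from a different network, may return a different address.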

Download: The process of transferring files from the Internet to your computer.

Downstream: The downloading (receiving) of data from the Internet to a client machine. Downstream speeds are typically much greater than upstream speeds in high-speed consumer Internet connections such as cable modems and ADSL.

DQPSK (Differential Quadrature Phase Shift Keying): DQPSK is a digital modulation technique commonly used with cellular systems. Motorola's CyberSurfr cable modem uses DQPSK to carry data upstream from the subscriber's computer to the Internet on a narrower frequency band than standard QPSK. Narrower bands allow more upstream channels, so the CyberSurfr has additional noise-free channels to choose from when it's installed.

DRAM (Dynamic RAM, Random Access Memory): High-density, cost-effective memory chips (integrated circuits). Their importance is such that the Japanese call them the "rice of electronics." DRAMs are used extensively in computers and generally in digital circuit design, but also for building framestores and animation stores. Being solid state, there are no moving parts and they offer the densest available method for accessing or storing data. Each bit is stored on a single transistor, and the chip must be powered and clocked to retain data.

Driver: A driver is software that communicates between an operating system and a peripheral. Think of it as a translator. If you use a crappy driver, your OS won't understand your video card and may become unstable and crash. Hardware manufacturers constantly update drivers to make them faster and more stable. Operating systems typically come with a set of drivers, but peripherals newer than your operating system typically require new drivers, which must be installed from CD-ROM or floppy, or downloaded from the Web.

DRM (Digital Radio Mondiale): DRM is the non-proprietary digital radio system for short-wave, AM/medium-wave and long-wave broadcasting. It has been endorsed by the ITU, IEC and ETSI. While DRM currently covers the broadcasting bands below 30 MHz, the DRM consortium voted in March 2005 to begin the process of extending the system to the broadcasting bands up to 120 MHz. The design, development and testing phases are expected to be completed by 2007-2009. The quality of DRM audio is excellent, and the improvement upon analogue AM is immediately noticeable. DRM can be used for a range of audio content, including multi-lingual speech and music. Besides providing near-FM quality audio, the DRM system has the capacity to integrate data and text. This additional content can be displayed on DRM receivers to enhance the listening experience. Unlike digital systems that require a new frequency allocation, DRM uses existing AM broadcast frequency bands. The DRM signal is designed to fit in with the existing AM broadcast band plan, based on signals of 9 kHz or 10 kHz bandwidth. It has modes requiring as little as 4.5 kHz or 5 kHz bandwidth, plus modes that can take advantage of wider


bandwidths, such as 18 or 20 kHz. Many existing AM transmitters can be easily modified to carry DRM signals.

DRM mode   Propagation conditions            Data rate   Error protection
A          minor fading                      highest     lowest
B          as A, plus selective fading       middle      low
C          as B, plus severe fading          low         middle
D          as C, plus severe Doppler shift   lowest      highest

DRM has four transmission modes, referred to as Mode A, B, C, and D. Mode A is most likely to be used on medium and long wave, using ground-wave propagation for local broadcasts. Mode B is the most common mode for single-hop short-wave broadcasts, and all broadcasts within Europe tend to use this mode. Night-time or international medium-wave broadcasts (sky wave) may use Mode B instead of Mode A. Mode C is used for long-path, multi-hop broadcasts on short wave--for example, direct broadcasts from New Zealand or Australia beamed into Europe. Mode D is the most robust and could be used for Near Vertical Incidence Skywave (NVIS) broadcasts, where the radio signal is broadcast upward and the ionosphere reflects it back down. This mode is little used for broadcasting in Europe, but radio amateurs have recently been allowed to broadcast in the 5 MHz band to assess whether NVIS could be used for national coverage. NVIS is common in the tropical short-wave bands as it gives considerable area coverage with a single transmitter located in the centre of the broadcast area.

Modes A and B: At the bit rates available for Mode A and Mode B, using AAC with SBR frequency enhancement gives 15.2 kHz audio bandwidth (equivalent to VHF-FM radio) at the DRM receiver, yet thanks to aacPlus compression requires only 9/10 kHz of radio spectrum. For most DRM broadcasts in Mode A and Mode B, protection level 1 is typical. Mode B is the most common mode used for short-wave broadcasts. The lower the protection level number, the more robust the signal is against selective fading and multi-path interference. Only the higher bit rates available in Mode A and Mode B could be described as near-FM quality--the main selling point of DRM. The following tables show the maximum aacPlus bit rate available for the given


mode and protection level. There is overlap between the available bit rates on different modes. It is up to the broadcaster to decide which mode/bit rate best suits the propagation path from the transmitter to the target area. If the radio station is also broadcasting a data service, then the bit rate allocated to the audio is reduced accordingly. All figures given assume EEP (Equal Error Protection).

Typical bit rates available for DRM broadcasting in a 9 kHz channel (medium wave), using 64-QAM for the MSC:

Protection level   Mode A (64-QAM)   Mode B (64-QAM)
0                  19.6 kbps         15.2 kbps
1                  23.5 kbps         18.3 kbps
2                  27.8 kbps         21.6 kbps
3                  30.8 kbps         24.0 kbps

Typical bit rates available for Mode B DRM broadcasting in a 10 kHz channel (short wave). For a more robust Mode B short-wave broadcast using 16-QAM for the MSC, only the AAC codec can be used--no SBR:

Protection level   Mode B (16-QAM)   Mode B (64-QAM)
0                  11.6 kbps         17.4 kbps
1                  14.5 kbps         20.9 kbps
2                  ----              24.7 kbps
3                  ----              27.4 kbps

Modes C and D: DRM Mode C and Mode D are only used on short wave, and demonstrate the trade-off between making the signal robust against fading and multi-path signals and the actual data-carrying capacity. Audio quality can be poor and comparable with AM broadcasts, as many broadcasts can only use the AAC codec: there is insufficient data rate to support both the AAC and SBR codecs. With Mode C and Mode D broadcasts using a high protection level, the audio quality (bandwidth) can sound worse than AM, and obviously there is no possibility of including multimedia with these broadcasts. Although the audio may be poor quality, provided there is sufficient signal-to-noise ratio at the receiver, the audio will not have any of the characteristic fading or distortion of an equivalent AM signal.


Mode C transmitting in a 10 kHz channel:

Protection level   MSC 16-QAM   MSC 64-QAM
0                  9.1 kbps     13.7 kbps
1                  11.4 kbps    16.4 kbps
2                  ----         19.4 kbps
3                  ----         21.5 kbps

Mode D transmitting in a 10 kHz channel:

Protection level   MSC 16-QAM   MSC 64-QAM
0                  6.0 kbps     9.1 kbps
1                  7.5 kbps     10.9 kbps
2                  ----         12.9 kbps
3                  ----         14.3 kbps
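The DRM bit-rate tables above can be collapsed into a single lookup. This is an illustrative sketch (the table name and function are my own, not part of the DRM specification); it also shows the point made earlier that any data service is subtracted from the audio budget:

```python
# Maximum aacPlus bit rate (kbps) by (mode, MSC constellation, channel kHz),
# indexed by protection level 0-3; None marks unavailable combinations.
DRM_BITRATES = {
    ("A", 64, 9):  [19.6, 23.5, 27.8, 30.8],
    ("B", 64, 9):  [15.2, 18.3, 21.6, 24.0],
    ("B", 16, 10): [11.6, 14.5, None, None],
    ("B", 64, 10): [17.4, 20.9, 24.7, 27.4],
    ("C", 16, 10): [9.1, 11.4, None, None],
    ("C", 64, 10): [13.7, 16.4, 19.4, 21.5],
    ("D", 16, 10): [6.0, 7.5, None, None],
    ("D", 64, 10): [9.1, 10.9, 12.9, 14.3],
}

def audio_bitrate(mode, qam, channel_khz, protection, data_kbps=0.0):
    """Bit rate left for audio after a data service takes its share."""
    total = DRM_BITRATES[(mode, qam, channel_khz)][protection]
    return None if total is None else total - data_kbps
```

For example, a Mode B, 64-QAM short-wave broadcast at protection level 1 carrying an 8 kbps data service leaves about 12.9 kbps for the audio coder.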

DRM applications will include fixed and portable radios, car receivers, software receivers and PDAs. Several early prototype DRM receivers have been produced, including a software receiver. The DRM system uses a type of transmission called COFDM (Coded Orthogonal Frequency Division Multiplex). This means that all the data, produced from the digitally encoded audio and associated data signals, is shared out for transmission across a large number of closely spaced carriers. All of these carriers are contained within the allotted transmission channel. The DRM system is designed so that the number of carriers can be varied, depending on factors such as the allotted channel bandwidth and the degree of robustness required. The DRM system can use three different types of audio coding, depending on broadcasters' preferences. MPEG-4 AAC audio coding, augmented by SBR bandwidth extension, is used as a general-purpose audio coder and provides the highest quality. MPEG-4 CELP speech coding is used for high-quality speech coding where there is no musical content. HVXC speech coding can be used to provide a very low bit-rate speech coder. The robustness of the DRM signal can be chosen to match different propagation conditions. See the list of related specification numbers attached here.

Dropdown Menu: A type of menu that appears as a text box with an arrow pointing down in part of the box. It allows a user to click on it, and a list of choices

appears below the menu. After the information drops down, one of the items can be selected. The list may appear above the menu if there is not enough space below it.

Drop Frame: Colour video was introduced into broadcasting gradually. It therefore had to be compatible with black-and-white receivers, and colour televisions had to be able to receive black-and-white programming as well. To accommodate the extra information needed for colour, the black-and-white rate of 30 frames/second was slowed to 29.97 frames/second for colour. Although usually not an issue for non-broadcast applications, in broadcast the small difference between real time (the wall clock) and the time registered on the video can be problematic. Over a period of one hour, 30 fps SMPTE timecode runs 3.6 seconds, or 108 frames, ahead of the wall clock. To overcome this discrepancy, drop-frame timecode is used.

DS0 (Digital Signal level zero): 64 kbps. The framing specification used in transmitting digital signals over a single channel at 64 kbps on a T1 facility.

DS1: 1) A telephone company format for transmitting information digitally. DS1 has a capacity of 24 voice circuits at a transmission speed of 1.544 megabits per second (in the US). 2) The framing specification used in transmitting digital signals at 1.544 Mbps on a T1 facility (in the United States) or at 2.048 Mbps on an E1 facility (in Europe).

DS3: A terrestrial and satellite format for transmitting information digitally. DS3 has a capacity of 672 voice circuits at a transmission speed of 44.736 Mbps (commonly referred to as 45 Mbps). DS3 is used for digital television distribution using mezzanine-level compression--typically MPEG-2 in nature, decompressed at the local station to full-bandwidth signals (such as HDTV) and then re-compressed to the ATSC's 19.39 Mbps transmission standard.

DSI (Data Search Information): Information for fast forward/fast backward and seamless playback.
This is real-time control data spread throughout the DVD-Video data stream. Along with PGCI, these packets are part of the 1.0 Mbit/s overhead in video applications (Book B). These packets contain navigation information which makes it possible to search and maintain seamless playback of the Video Object Unit (VOBU). The most important field in this packet is the sector address where the first reference frame of the video object begins. Advanced angle change and presentation timing are included to assist seamless playback.

DSL (Digital Subscriber Line): The ability to use a standard telephone line to transport data. xDSL is the generic term for each of two varieties: ADSL (asymmetric), where the upstream and downstream data rates are different, and SDSL (symmetric), where the upstream and downstream data rates are the same.
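The 3.6-second/108-frame figure given under Drop Frame above is easy to verify, as is the standard compensation rule (skip frame numbers 00 and 01 at the start of every minute, except each tenth minute):

```python
# NTSC colour video actually runs at 30000/1001 ≈ 29.97 frames/s, so a 30 fps
# timecode counter drifts ahead of the wall clock over an hour:
real_fps = 30000 / 1001
drift_frames = 30 * 3600 - real_fps * 3600    # ≈ 108 frames of drift
drift_seconds = drift_frames / 30             # ≈ 3.6 seconds

# Drop-frame timecode skips frame numbers 00 and 01 at the start of every
# minute, except minutes divisible by 10: 2 x (60 - 6) numbers per hour.
dropped_per_hour = 2 * (60 - 6)               # = 108, cancelling the drift

print(round(drift_frames, 1), round(drift_seconds, 2), dropped_per_hour)
```

Note that drop-frame timecode skips frame *numbers*, not actual frames; no picture content is discarded.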


DSM-CC (Digital Storage Media - Command and Control): A cyclic data transmission protocol. The data is split into smaller modules, which are then transmitted in a repetitive sequence, i.e., a carousel. If several modules are needed to complete a single entity (or file) but a module is received with errors, the receiver may wait for subsequent re-sends to complete the entity. The DSM-CC download server is defined in ISO/IEC 13818-6, part of the MPEG-2 family of standards. The most common use is for interactive applications or receiver software upgrades. It is similar to the carousel transmission of teletext pages.

DSP (Digital Signal Processor): A DSP is a microprocessor designed to work with analog signals, such as video or audio, that have been digitally encoded. The DSP takes these digital representations and performs operations on them. DSPs are used in video, sound, and modem technology. Intel's MMX instruction set was the first attempt to make x86 processors (specifically the Pentium processor line) more capable of DSP operations. This follows Intel's theory of putting all the processing work onto its processors. However, DSP chips are still used in many devices for PCs, as they can keep the CPU from being bogged down.

DSU (Digital Service Unit): A device used to connect a V.35 serial interface to a digital circuit. Generally, any CPE that terminates a digital circuit is referred to as a CSU/DSU.

DSS (Digital Satellite System): Due to trademark issues, the abbreviation is no longer used. See DBS.

DTCP (Digital Transmission Copy Protection): HDTV copy-protection scheme more commonly called 5C.

DTE (Data Terminal Equipment): Refer to LAPB (Link Access Protocol, Balanced).

DTLA (Digital Transmission Licensing Administrator): The licensing organization for the 5C DTCP HDTV copy-protection technology.
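The carousel behaviour described under DSM-CC above--wait for the next rotation to repair an errored module--can be simulated in a few lines. Everything here (the module names, the random error model, the function name) is invented for illustration:

```python
import random

def receive_file(modules, error_rate=0.3, seed=1):
    """Collect all modules of a carousel-delivered file, picking up errored
    modules on later rotations, as a DSM-CC carousel receiver does."""
    rng = random.Random(seed)
    have, rotations = {}, 0
    while len(have) < len(modules):
        rotations += 1
        for name, payload in modules.items():     # one full carousel rotation
            if name not in have and rng.random() > error_rate:
                have[name] = payload              # module received error-free
    return b"".join(have[n] for n in modules), rotations

data, rotations = receive_file({"m0": b"boot", "m1": b"loader"})
print(data, rotations)
```

The receiver never requests a retransmission; it simply keeps listening until every module has arrived intact, which is why carousels suit one-way broadcast channels.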

DTS (Digital Theater Systems Sound): A product of DTS, Inc., DTS is a multichannel audio compression format, similar to Dolby Digital, used on DVD-Video discs, DVD-Audio, 5.1-channel audio CDs, and in some movie theaters. DTS differs from Dolby Digital in that it generally uses higher data rates, and many hold the opinion that DTS is better quality. A DTS track can only appear on a DVD-Video disc if accompanied by a Dolby Digital or LPCM track (in North America) or an MPEG audio or LPCM track (in the European Community) to ensure compatibility, because DVD players are only required to decode those standards in those regions.

DTT (Digital Terrestrial Television): The term used in Europe to describe the broadcast of digital television services using terrestrial frequencies.


DTV Team: Originally Compaq, Microsoft and Intel, later joined by Lucent Technologies. The DTV Team promotes the computer industry's views on digital television--namely, that DTV should not have interlace scanning formats but progressive scanning formats only. (Intel, however, now supports all the ATSC Table 3 formats, including those that are interlace, such as 1080i.) Internet: www.dtv.org.

Dub: A "dub" is a duplicate copy of an existing tape.

Dual boot: A system that can boot to two different operating systems. Some OSes, such as Windows NT/2000/XP and versions of Linux, allow for dual booting when installed. Of course, you can also use other methods, such as commercial programs that install a special boot partition that is capable of launching operating systems from other partitions. You can also boot to many more than two operating systems on the same machine.

Duplex: A telecommunications term that describes part of the communications between a local modem and a remote computer. In full duplex mode, the remote computer is set up to return the characters that are sent to it so that they can be displayed on your screen. In half duplex mode, the remote computer does not return the characters sent to it. Also see full duplex and half duplex for descriptions of those terms in other contexts.

Duplex Cable: A two-fiber cable suitable for duplex transmission.

Duplex Transmission: Transmission in both directions, either one direction at a time (half-duplex) or both directions simultaneously (full-duplex).

Dual-prime Prediction: A prediction mode in which two forward field-based predictions are averaged. The predicted block size is 16 x 16 luminance samples.

Duty Cycle: In a digital transmission, the fraction of time a signal is at the high level.


DV (Digital Video): This digital VCR format is a cooperation between Hitachi, JVC, Sony, Matsushita, Mitsubishi, Philips, Sanyo, Sharp, Thomson and Toshiba. It uses 6.35 mm (0.25-inch) wide tape in a range of products to record 525/60 or 625/50 video for the consumer (DV) and professional markets (Panasonic's DVCPRO, Sony's DVCAM and Digital-8). All models use digital intra-field DCT-based "DV" compression (about 5:1) to record 8-bit component digital video based on 13.5 MHz luminance sampling. The consumer versions, DVCAM, and Digital-8 sample video at 4:1:1 (525/60) or 4:2:0 (625/50) (DVCPRO is 4:1:1 in both 525/60 and 625/50) and provide two 16-bit/48 or 44.1 kHz, or four 12-bit/32 kHz, audio channels onto a 4-hour-30-minute standard cassette or smaller 1-hour "mini" cassette. The video recording rate is 25 Mbps.

DVB (Digital Video Broadcasting): DVB was set up by the EBU (European Broadcast Union) to set the standards for digital video transmission. These standards have been published via ETSI (European Telecommunications Standards Institute), which also sets standards for devices such as GSM telephones. There are several DVB standards for different transmission media, including:

DVB-S: Satellite
DVB-C: Cable
DVB-T: Terrestrial
DVB-SI: Specification for Service Information
DVB-CI: Common Interface for conditional access

Internet: www.dvb.org.

DVB-C (Digital Video Broadcasting Cable system): The DVB-C cable system is based on DVB-S, but the modulation scheme used is Quadrature Amplitude Modulation (QAM) instead of QPSK. For cable networks, no inner-code forward error correction (FEC) is needed. The system is centered on 64-QAM, but lower-level systems such as 16-QAM and 32-QAM, and higher-level systems such as 128-QAM and 256-QAM, can also be used. In each case, the data capacity of the system is traded against robustness of the data.
In terms of capacity, an 8 MHz channel can accommodate a payload capacity of 38.5 Mbit/s if 64-QAM is used, without spill-over into adjacent channels. Higher-level systems, such as 128-QAM and 256-QAM, are also possible, but their use depends on the capacity of the cable network to cope with the reduced decoding margin.

DVB-H (DVB Handheld Terminals): The DVB Project has an established terrestrial transmission system in the form of DVB-T, but until recently lacked a standard supporting handheld terminals. DVB-H was formulated to fill that gap, providing the transmission system features important to handheld receivers such as low power consumption, cell-to-cell handover, and tolerance of severe mobile multipath reception.


The DVB-H physical layer specification is based on DVB-T, and reuses existing DVB concepts for service location and data broadcasting. As a result, DVB-H signals can either share existing DVB-T infrastructure, which is helpful in the crowded UHF spectrum, or use their own dedicated network. The key additional features of DVB-H are time slicing and additional Forward Error Correction (FEC) protection. These are implemented above the physical layer, in the link layer.

(1) Time slicing

The time slicing mechanism transmits services in high data rate bursts, enabling the receiver to be powered down for up to 95% of the time, reducing its average power consumption. Each burst contains notification to the receiver of when to wake for the next burst. Note that the transmitter is always on, providing a continuous transmission. Time slicing also enables receivers to support efficient cell-to-cell transfer, as neighbouring cells can be monitored during the off-time between bursts.

In summary, DVB-H is based on DVB-T and is fully backwards compatible. It adds features to support handheld, portable and mobile reception: battery saving; mobility with high data rates; single-antenna reception; SFN networks; impulse noise tolerance; increased general robustness; and support for seamless handover. These have been achieved by adding options:

- time slicing for power saving;
- MPE-FEC for additional robustness and mobility;
- the 4K mode for mobility and network design flexibility;

plus additional minor changes, e.g., in signalling. DVB-H is meant for IP-based services via MPE insertion, and can share a DVB-T multiplex with MPEG-2 services. Portable/indoor coverage should be built into the network to fully exploit the DVB-H possibilities commercially.
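The "up to 95%" power saving quoted above is simply the burst duty cycle. A sketch of the arithmetic (the burst and cycle figures are illustrative, not taken from the specification):

```python
def power_saving(burst_ms, cycle_ms, wakeup_ms=0.0):
    """Fraction of time the receiver front end can be powered down, given a
    burst length, the interval between bursts, and any wake-up overhead."""
    return 1.0 - (burst_ms + wakeup_ms) / cycle_ms

# e.g. a 200 ms burst delivering one service every 4 s:
print(f"{power_saving(200, 4000):.0%}")   # 95%
```

Longer gaps between bursts save more power but increase the delay before a newly tuned service starts playing, so the broadcaster trades battery life against channel-change time.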


(2) Forward Error Correction (FEC)


The additional forward error correction (FEC) is supported for multi-protocol encapsulated (MPE) data, in a way that enables MPE-FEC-ignorant receivers to still receive the service. It is important to note that DVB-H only supports IP datagrams or other network-layer datagrams as its payload, not MPEG-2 transport stream. The objective of MPE-FEC is to improve the Doppler performance and C/N in mobile channels; it also improves impulse noise tolerance. Studies have demonstrated that the gain from MPE-FEC is broadly equivalent to the gain made from using antenna diversity with maximum ratio combining. The buffer memory already required for time slicing is used for MPE-FEC frame storage. Datagrams are arranged in a table. Received data is written into columns, with Reed-Solomon parity bytes arranged in rows, providing virtual

time-interleaving of data during error correction. By either padding some broadcast data or omitting parity columns, the additional error correction may be flexibly strengthened or weakened according to each broadcaster's individual requirements.
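The padding/puncturing flexibility just described amounts to choosing an effective code rate for the MPE-FEC frame, whose rows are RS(255,191) codewords: 191 application-data columns plus 64 parity columns. A sketch of the arithmetic (the function name is my own):

```python
def mpe_fec_code_rate(padding_cols=0, parity_cols_sent=64):
    """Effective MPE-FEC code rate: IP datagrams fill (191 - padding_cols)
    columns of the frame; puncturing transmits fewer of the 64 parity columns."""
    data_cols = 191 - padding_cols
    return data_cols / (data_cols + parity_cols_sent)

print(round(mpe_fec_code_rate(), 3))                      # 0.749, full strength
print(round(mpe_fec_code_rate(parity_cols_sent=32), 3))   # 0.857, weaker protection
```

Padding data columns lowers the rate (stronger protection for the data actually sent), while puncturing parity columns raises it (more capacity, weaker protection).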

These items have driven the standardization of the DVB-H system, defined in Annex F of the ETSI final draft ETSI EN 300 744 V1.5.1 (2004-06). This annex provides additional features to DVB-T to support the transmission of DVB-H services to handheld terminals.


The annex covers:

- Orthogonal Frequency Division Multiplexing (OFDM) transmission: an additional 4K mode is provided, with the implied reference signals and Transmission Parameter Signalling (TPS).
- Inner interleaving: a native inner interleaver for the 4K transmission mode is provided, as well as an in-depth symbol interleaver to be used with the 2K or 4K modes. The implied Transmission Parameter Signalling (TPS) is also defined.
- Signalling information: Transmission Parameter Signalling bits are defined to signal to receivers the use of time slicing and/or MPE-FEC in the DVB-H transmission.

The additional "4K" transmission mode is an interpolation of the parameters defined for the "2K" and "8K" transmission modes. It aims to offer an additional trade-off between transmission cell size and mobile reception capabilities, providing an additional degree of flexibility for network planning. The terms of the trade-off can be expressed as follows:

- The DVB-T "8K mode" can be used both for single-transmitter operation and for small, medium and large SFNs. It provides a Doppler tolerance allowing high-speed reception.
- The DVB-T "4K mode" can be used both for single-transmitter operation and for small and medium SFNs. It provides a Doppler tolerance allowing very high-speed reception.
- The DVB-T "2K mode" is suitable for single-transmitter operation and for small SFNs with limited transmitter distances. It provides a Doppler tolerance allowing extremely high-speed reception.

The additional features affect the blocks shaded in figure F.1 of the annex.

DVB-S (Digital Video Broadcasting - Satellite): The DVB-S system is designed to cope with the full range of satellite transponder bandwidths. DVB-S is the oldest and most established of the DVB standards family, and arguably forms the core of the DVB's success. Services using DVB-S are on air on six continents. DVB-S is a single-carrier system. One way of picturing it is to consider it as a kind of 'onion'. At the centre, the onion's core is the payload, which is the useful bit rate. Surrounding this are a series of layers to make the signal less sensitive to errors, and to arrange
the payload in a form suitable for broadcasting. The video, audio, and other data are inserted into fixed-length MPEG Transport Stream packets. The packetized data constitutes the payload. A number of processing stages follow:

The data are formed into a regular structure by inverting the synchronization byte of every eighth packet header. The contents are randomised. A Reed-Solomon Forward Error Correction (FEC) overhead is then added to the packet data. This is a very efficient system, adding only around 8% overhead to the signal. It is called the Outer Code, and is common to all the DVB delivery systems. Next, convolutional interleaving is applied to the packet contents. Following this, a further error-correction system is added, using a punctured Convolutional Code. This second error-correction system, the Inner Code, can be adjusted in the amount of overhead to suit the needs of the service provider. Finally, the signal is used to modulate the satellite broadcast carrier using quadrature phase-shift keying (QPSK).
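The overheads in this chain determine the useful payload. As an illustrative sketch (the 27.5 Mbaud symbol rate is an assumed example value, not a figure from this entry): QPSK delivers 2 bits per symbol, a rate-3/4 inner code keeps three quarters of them, and the Reed-Solomon (204,188) outer code keeps 188/204 of the remainder.

```python
# Useful MPEG Transport Stream bit rate after the DVB-S coding chain:
# symbol rate x bits/symbol x inner code rate x RS(204,188) ratio.

def dvb_s_payload(symbol_rate, inner_rate, bits_per_symbol=2):
    """Useful (payload) bit rate in bit/s; QPSK gives 2 bits per symbol."""
    return symbol_rate * bits_per_symbol * inner_rate * 188 / 204

rate = dvb_s_payload(27.5e6, 3 / 4)  # assumed 27.5 Mbaud, rate-3/4 inner code
print(f"useful bit rate: {rate / 1e6:.2f} Mbit/s")  # ~38 Mbit/s
```

A slightly higher symbol rate, as would fit a 36 MHz transponder, yields the "about 39 Mbit/s" quoted below for the same 3/4 inner code.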

In essence, between the multiplexing and the physical transmission, the system is tailored to the specific channel properties. The system is arranged to adapt to the error characteristics of the channel. Burst errors are randomized, and two layers of forward error correction are added. The second level, or Inner Code, can be adjusted to suit the circumstances (power, dish size, bit rate available). There are thus two variables for the service provider: the total size of the 'onion' and the thickness of the second error-correction 'outer skin'. In each case, in the home, the receiver will discover the right combination to use by very rapid trial and error on the received signal. An appropriate combination of payload size and Inner Code can be chosen to suit the service operator's environment. One example of a parameter set would be for a 36 MHz (-1 dB) transponder to use a 3/4 Convolutional Code, in which case a useful bit rate of about 39 Mbit/s will be available as the payload. The 39 Mbit/s (or other bit rates allowed by parameter sets for a given satellite transponder) can be used to carry any combination of MPEG-2 video and audio. Thus, service providers are free to deliver anything from multiple-channel SDTV, 16:9 widescreen EDTV or single-channel HDTV, to Multimedia Data Broadcast Network services and Internet over the air. DVB-T (DVB - Terrestrial): A digital television broadcasting specification based on a radio transmission system using Coded Orthogonal Frequency Division Multiplexing (COFDM), which has many (thousands of) closely and precisely spaced radio carriers, each separately modulated. Together these carry packets of data that include program material digitized to the MPEG specification, and other data (SI) relating to channel tuning, time and date, and program guide information. The DVB-T system was based on a set of user requirements produced by the Terrestrial Commercial Module of the DVB Project.
DVB members contributed to the technical development of DVB-T through the DTTV-SA (Digital Terrestrial
Television Systems Aspects) group of the Technical Module. The European projects SPECTRE, STERNE, HD-DIVINE, HDTVT, dTTb, and several other organizations developed system hardware and produced results that were fed back to DTTV-SA. The DVB-T system specification for terrestrial digital television was approved by ETSI [European Telecommunications Standards Institute, document EN 300 744; http://www.etsi.org/broadcast/dvb.html ]. MPEG-2 sound and vision coding forms the basis of DVB-T. Elements of the DVB-T specification include: COFDM transmission, which allows for the use of either 1705 carriers (usually known as '2k') or 6817 carriers ('8k'). 1) The '2k' mode is suitable for single-transmitter operation and for relatively small single-frequency networks with limited transmitter power. 2) The '8k' mode can be used both for single-transmitter operation and for large-area single-frequency networks. 3) To reduce the corruption from signal reflections or echoes (ghosting), the broadcaster can select and set in the transmission a guard interval, a period of time for the receiver to wait before demodulating each sample. 4) The '8k' system can provide a longer guard interval than the '2k' system. Concatenated error correction is used: Reed-Solomon outer coding and outer convolutional interleaving are used, in common with the other DVB standards. The inner coding (punctured Convolutional Code) is the same as that used for DVB-S. The data carriers in the COFDM frame can use QPSK and different levels of QAM modulation and code rates, in order to trade bit rate against ruggedness. Two-level hierarchical channel coding and modulation can be used, but hierarchical source coding for video and audio is not used, since its benefits do not justify the extra receiver complexity involved. The modulation system uses OFDM (Orthogonal Frequency Division Multiplexing). OFDM uses a large number of carriers over which the information content of the signal is spread.
Used very successfully in DAB (Digital Audio Broadcasting), OFDM's major advantage is that it maintains good reception characteristics in a very strong multipath environment. See the attached list of related specifications. DVCAM: Sony's development of native DV, which records a 15-micron (15x10^-6 m, fifteen thousandths of a millimeter) track on a metal evaporated (ME) tape. DVCAM uses DV compression of a 4:1:1 signal for 525/60 (NTSC) sources and 4:2:0 for 625/50 (PAL). Audio is recorded in one of two forms--four 12-bit channels sampled at 32 kHz, or two 16-bit channels sampled at 48 kHz. DVCPRO: See: D7. DVCPRO50: This variant of DV uses a video data rate of 50 Mbps--double that of other DV systems--and is aimed at the higher-quality end of the market. Sampling is 4:2:2, to give enhanced chroma resolution, useful in post-production processes (such as chromakeying). Four 16-bit audio tracks are provided. The format is similar to Digital-S (D9).

DVCPRO HD: This variant of DV uses a video data rate of 100 Mbps--four times that of other DV systems--and is aimed at the high-definition EFP end of the market. Eight audio channels are supported. The format is similar to D9 HD. DVCPRO P: This variant of DV uses a video data rate of 50 Mbps--double that of other DV systems--to produce a 480-line progressive picture. Sampling is 4:2:0. DVD: Digital Versatile/Video Disc. This is much like a CD-ROM except that it stores over 7 times as much data in its simplest form. DVD is the successor to CD-ROM technology. DVD discs are the same physical size as CD-ROM discs, but hold between 4.7 and 18 GB of data using dual-layer and double-sided discs. The first wave of DVD drives were read-only devices, but newer versions (such as DVD-R/+R/-RW/+RW/-RAM) are beginning to work with write-once and rewriteable media. DVD+R - This standard writes to DVD+R media and is write once, read many (WORM). It is being pushed by HP, Dell, and others as the next de facto DVD writing standard, along with DVD+RW. DVD+RW - This standard reads standard DVD-ROM discs, and reads and writes to DVD+RW media. DVD-R - This standard is to DVD-ROM as CD-R is to CD-ROM. It uses 4.7 GB discs that can only be written to once, and which can then be read by standard DVD-ROM drives. Apple embraced this standard early, and it is already an entrenched standard that Gateway is also supporting. DVD-RAM - The DVD-RAM standard uses media that can be written and read multiple times, like RAM chips. The first DVD-RAM media held 2.6 GB worth of data per side, and had to be manually flipped over to access the other side. Updates to the standard allowed the full 4.7 GB DVD capacity. DVD-RAM drives can read standard DVD discs, but need a special caddy to hold the discs. DVD-RAM media is shaped like the caddy and cannot be inserted into standard DVD drives. DVD-RAM is on the way out. DVD-RW - The rewriteable form of DVD-R.
It is being added on to some DVD-R drives to add functionality. DVD-RW discs can hold 4.7 GB of data. DVD Video Filesystem: A new file system was chosen for DVD video to suit both read-only and writable versions. This file system is a subset of UDF 1.02 called micro UDF (M-UDF). The main characteristics of UDF are: - Robust file exchange - System & vendor independent - Writable & read-only media - Based on ISO 13346. DVE: Digital Video Effects. A registered trademark of Nippon Electric Company (NEC). Refers to video equipment that performs digital effects such as compression and transformation. D-VHS: Digital VHS. Version of the VHS (Video Home System) videocassette standard capable of recording HDTV via a FireWire connection; D-VHS decks are made by JVC and Mitsubishi.

DVI (Digital Visual Interface): DVI is a new form of video interface technology made to maximize the quality of flat-panel LCD monitors and high-end video graphics cards. It is a replacement for the P&D Plug & Display standard, and a step up from the digital-only DFP format for older flat panels. DVI is becoming increasingly popular with video card manufacturers, and most cards purchased include both a VGA and a DVI output port. In addition to being used as the new computer interface, DVI is also emerging as the digital transfer method of choice for HDTV, EDTV, plasma displays, and other ultra-high-end video displays for TV, movies, and DVDs. Likewise, even a few of the top-end DVD players now feature DVI outputs in addition to high-quality analog component video. Don't expect to throw away all your old video cables just yet, but keep an eye out for DVI availability in the future. DVR: Digital Video Recorder. A television recorder, such as Replay or TiVo, that uses a hard drive, an EPG (electronic program guide), and internal processing to drastically simplify programmed recording and playback of recorded programs. A DVR vastly increases recording time compared to VCRs or DVD-recording decks; often enables smart programming, in which the device records an entire series or programming defined by keywords, genre, or personnel; and offers pause control over "live" broadcasts. Also called personal video recorder (PVR) or hard disk video recorder. DVTR: Digital Video Tape Recorder. DWDM (Dense Wavelength Division Multiplexing): A technique for high-speed data transmission over optical fiber. Dynamic Range: The ratio between the greatest signal power that may be transmitted over a multichannel analog transmission system without exceeding distortion or other performance limits, and the least signal power that may be utilized without exceeding noise, error rate or other performance limits. Dynamic Rounding: The intelligent truncation of digital signals.
Some image processing requires that two signals be multiplied, for example in digital mixing, producing a 16-bit result from two original 8-bit numbers (see: Byte). This has to be truncated, or rounded, back to 8 bits. Simply dropping the lower bits can result in visible contouring artifacts, especially when handling pure computer-generated pictures. Dynamic Rounding is a mathematical technique for truncating the word length of pixels--usually to their normal 8 bits. This effectively removes the visible artifacts and is non-cumulative over any number of passes. Other attempts at a solution have involved increasing the number of bits, usually to 10, making the LSBs (least significant bits) smaller, but this only masks the problem for a few generations. Dynamic Rounding is a licensable technique, available from Quantel, and is used in a growing number of digital products both from Quantel and other manufacturers.
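Quantel's exact Dynamic Rounding algorithm is proprietary, so the following is only a generic sketch of the underlying idea: when a 16-bit mix result is brought back to 8 bits, dithering the truncation decision using the discarded low-order bits avoids the hard contouring that plain truncation produces.

```python
# Plain truncation vs. dithered truncation of a 16-bit intermediate to 8 bits.
# The dithered version rounds up with a probability proportional to the
# discarded low byte, so the *average* output preserves the lost precision.

import random

def truncate(value16):
    """Plain truncation: drop the low 8 bits (can cause visible contouring)."""
    return value16 >> 8

def dithered_round(value16, rng=random):
    """Round up when the discarded low byte beats a pseudo-random threshold."""
    low = value16 & 0xFF
    result = (value16 >> 8) + (1 if low > rng.randrange(256) else 0)
    return min(result, 255)  # clamp to the 8-bit range

# Mixing two 8-bit values gives a 16-bit product that must come back to 8 bits.
a, b = 200, 128
product = a * b                               # 25600
print(truncate(product), dithered_round(product))  # 100 100 (low byte is 0)
```

Over many pixels the dithered decision averages out the truncation error, which is why such techniques are non-cumulative over multiple passes.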

-E
E1: Wide-area digital transmission scheme used predominantly in Europe that carries data at a rate of 2.048 Mbps. E1 lines can be leased for private use from common carriers. Compare with T1. See also DS-1. E3: Wide-area digital transmission scheme used predominantly in Europe that carries data at a rate of 34.368 Mbps. E3 lines can be leased for private use from common carriers. Compare with T3. See also DS-3. EAV: End of Active Video in component digital systems. EBU: European Broadcasting Union. An organization of European broadcasters that, among other activities, produces technical statements and recommendations for the 625/50-line television system. CP 67, CH-1218 Grand-Saconnex GE, Switzerland. Tel: 011-41-22-717-2221. Fax: 011-41-22-717-2481. Email: ebu@ebu.ch. Internet: www.ebu.ch. EBCDIC: Extended Binary Coded Decimal Interchange Code. A way of encoding 256 characters in binary, much like ASCII, but used mainly on mainframes. Most of the time EBCDIC is mentioned only in translations between EBCDIC and ASCII. EBIOS: Enhanced BIOS. This translates between the partition-table limitations of a standard computer BIOS and the IDE limitations to provide up to 8 GB of storage space over the IDE interface. A computer's BIOS has maximums of 1024 cylinders, 256 heads, and 63 sectors (8 GB). The IDE interface has maximums of 65,536 cylinders, 16 heads, and 256 sectors (128 GB). Put these maximums together (1024 cylinders, 16 heads, and 63 sectors) and you've got a measly 504 MB of data to work with. The EBIOS translates these limitations in such a way that you can actually achieve the BIOS maximum of 8 GB on one IDE device. Newer IDE standards have since been developed to raise the top hard drive size to 128 GB and beyond.
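The CHS arithmetic in the EBIOS entry above is easy to verify: capacity is cylinders x heads x sectors x 512 bytes per sector (the familiar "504 MB", "8 GB" and "128 GB" figures are the binary-prefix readings of the same byte counts).

```python
# Capacities implied by the BIOS, IDE and combined CHS addressing limits.

def chs_capacity(cylinders, heads, sectors, bytes_per_sector=512):
    """Total addressable bytes for a cylinder/head/sector geometry."""
    return cylinders * heads * sectors * bytes_per_sector

bios_limit = chs_capacity(1024, 256, 63)  # ~8.46 GB (7.875 GiB, the "8 GB" limit)
ide_limit = chs_capacity(65536, 16, 256)  # ~137.4 GB (128 GiB, the "128 GB" limit)
combined = chs_capacity(1024, 16, 63)     # ~528 MB (504 MiB, the "504 MB" barrier)

for name, size in [("BIOS", bios_limit), ("IDE", ide_limit), ("combined", combined)]:
    print(f"{name}: {size / 1e9:.2f} GB")
```

Taking the per-field minimum of the two geometries is exactly what produces the infamous 504 MiB barrier.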
ECC (Error Check and Correct): A block of check data, usually appended to a data packet in a communications channel or to a data block on a disk, which allows the receiving or reading system both to detect small errors in the data stream (caused by line noise or disk defects) and, provided they are not too long, to correct them. ECC Constraint Length: The number of sectors that are interleaved to combat bursty error characteristics of discs. 16 sectors are interleaved in DVD. Interleaving takes advantage of typical disc defects such as scratch marks by spreading the error over a larger data area, thereby increasing the chance that the error correction codes can conceal the error. E-Cinema (also D-Cinema): Electronic cinema. Typically the process of using video at 1080/24p instead of film for production, post production and presentation. EDFA (Erbium-doped Fiber Amplifier): Optical fibers doped with the rare earth element, erbium, which can amplify light in the 1550 nm region when pumped by an external light source.

Edge-emitting Diode: An LED that emits light from its edge, producing more directional output than surface-emitting LEDs, which emit from their top surface.

Editing: The process by which one or more compressed bit streams are manipulated to produce a new compressed bit stream. Edited bit streams meet the same requirements as streams which are not edited. EDH: Error detection and handling for recognizing inaccuracies in the serial digital signal. It may be incorporated into serial digital equipment and employ a simple LED error indicator. Effective Area: The area of a single-mode fiber that carries the light.

EIA: Electronics Industries Association. http://www.eia.org/tech/index.cfm Encoding: Encoding is the process of changing data from one form into another according to a set of rules specified by a codec. The data is usually a file containing audio, video or a still image. Often the encoding is done to make a file compatible with specific hardware (such as a DVD player) or to compress or reduce the space the data occupies. Common video encoding methods are DivX, MPEG-1, MPEG-2 and MPEG-4. A common audio encoding method is MP3, although many others exist, including MPEG-1 audio, DTS, and Dolby Digital. Entropy Coding: In entropy coding, also called variable-length coding or Huffman coding, the more likely values are associated with shorter codewords, while less likely values are associated with longer codewords. Consequently, if the statistics of the events to be coded are known and representative enough, the events can be coded with a lower average number of bits than in fixed-length coding. A variable number of bits is thus produced per unit time, but many applications (usually transmission) need a constant bit rate. In this case the decoder needs a buffer, where it stores the received bits at a constant bit rate and from which it takes the bits to decode at a variable bit rate. The encoder must have a similar buffer for transmission and must take into account the decoder buffer size and buffer fullness so that neither overflow nor underflow occurs at the decoder buffer. This is the purpose of
Annex C of Specification ISO/IEC 13818-2, entitled "Video buffering verifier". The tool available to control the bit rate is changing the quantiser_scale, but the rate-control algorithm is not specified and is the responsibility of the encoder. Event Information Table (EIT) (for ATSC): The ATSC PSIP table that carries event information, including titles and start times, for events on all the virtual channels within the transport stream. ATSC requires that each system contain at least 4 EIT tables, each representing a different 3-hour time block. EIT (Event Information Table for DVB): The DVB SI table that supplies the decoder with a list of events corresponding to each service and identifies the characteristics of these events. Four types of EITs are defined by DVB: 1) The EIT Actual Present/Following supplies information for the present event and the next or following event of the transport stream currently being accessed by the decoder. 2) The EIT Other Present/Following defines the present event and the next or following events on transport streams that are not currently being accessed by the decoder. This table is optional. 3) The EIT Actual Event Schedule gives the decoder a detailed list of events in the form of a schedule that goes beyond what is currently or next available. This optional table references events for the transport stream currently being accessed by the decoder. 4) The EIT Other Event Schedule gives the decoder a detailed schedule of events that goes beyond what is currently or next available. This optional table references events for transport streams that are not currently being accessed by the decoder. Electromagnetic Field: EMF. A form of radiation given off by all electrical devices. Most notably for computer users, CRT computer monitors used to give off potentially dangerous amounts of EMF radiation, especially from the sides and rear. This radiation was blamed for causing miscarriages and even cancer.
Newer monitors are heavily shielded and give off much lower levels of radiation, especially through the front, where users typically sit. Electromigration: The wandering of metal atoms into the dividing layers on a microprocessor, caused by the combination of electricity and heat. Processors are designed to run within certain heat and electrical specifications, and if run beyond them, electromigration may occur. If this occurs to a great degree and enough metal atoms wander off the lines in a processor, they may permanently ruin the processor by thinning a connection so that it no longer works effectively, or even by making an electrical connection where one is not intended to be. Overclocking and raising the voltage supplied to a processor increase the risk of electromigration. Electronic Program Guide (EPG): An EPG implies a degree of electronic interactivity and navigation capability that is not possible in a static program guide. Basic program schedule information is usually included as data in digital television broadcast formats. Broadcasters using DVB transmit basic information in Event Information Tables (EIT). These tables may take several different forms containing immediate information - EIT now/next (also known as EITpf, present/following) - and,
110

in limited cases, future program information, known as EITschedule, which may also include other broadcasters' program details. EITschedule is not generally used, however, because it can involve a significant amount of data, and the DVB specification provides no means of compressing this data to lessen the bandwidth requirements. Receivers usually display the EIT now/next data when a viewer changes program, and some receivers include an enhanced program guide where information is compiled from EITschedule or captured when the viewer 'surfs' to other broadcasters' transmissions. An EPG that provides interactivity and navigation facilities requires the broadcaster(s) and receiver makers to nominate a suitable API/middleware - e.g. MHP, OpenTV, Liberate, etc. The EPG data is continually updated in the broadcast and displayed in the receiver by an application that may either be downloaded (also received in the broadcast) or embedded in the non-volatile receiver memory, usually at time of manufacture. The disadvantage of an embedded application is the inability to change it later. The advantage of a downloadable EPG application is the broadcaster's ability to change the look and feel to suit changing requirements. Other suppliers may provide EPGs, either through unrelated data streams or, perhaps, via modem or over the Internet, as is provided to TiVo subscribers in the US and UK. Note that in Australia, the Broadcasting Services Act imposes certain obligations on broadcasters and datacasters in regard to EPGs. Elementary Stream (ES): The output of an MPEG video encoder is a video elementary stream, and the output of an audio encoder is an audio elementary stream. Before being multiplexed, video and audio elementary streams are packetized to form the Video PES and the Audio PES. A generic term for one of the coded video, coded audio or other coded bit streams in PES packets. One elementary stream is carried in a sequence of PES packets with one and only one stream_id.
A programme comprises one or more elementary streams. An elementary stream is the name given to a single, digitally-coded and possibly MPEG-compressed component of a programme; for example, coded video or audio. The simplest type of programme is a radio service comprising a single audio elementary stream. A traditional television broadcast comprises three elementary streams; one carrying coded video, another carrying coded stereo audio and another carrying teletext data. Digital television broadcasts are certain to comprise many more elementary streams. For example, there may be additional video elementary streams carrying the same picture at alternative resolutions and there may be a choice of audio and teletext elementary streams each in a different language. The MPEG-2 Systems layer describes how elementary streams comprising one or more programmes are multiplexed together to form a single data stream suitable for digital transmission or storage. The output of an MPEG-2 multiplexer (see Fig. 1) is a contiguous stream of 8-bit-wide data bytes. There are no constraints on the data rate but clearly it must at least equal the total of the combined data rates of the contributing elementary streams. The multiplex may be of fixed or variable data rate and may contain fixed or variable data rate elementary streams.
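The stream_id mentioned above is how a demultiplexer tells elementary streams apart. A minimal sketch (the packet bytes below are invented for illustration; the start-code prefix and stream_id ranges are the standard ISO/IEC 13818-1 assignments):

```python
# Every PES packet begins with the 0x00 0x00 0x01 start-code prefix followed
# by a one-byte stream_id: 0xC0-0xDF denote MPEG audio elementary streams,
# 0xE0-0xEF denote video elementary streams.

def classify_pes(packet: bytes) -> str:
    """Classify a PES packet by its stream_id byte."""
    if packet[:3] != b"\x00\x00\x01":
        return "not a PES packet"
    stream_id = packet[3]
    if 0xC0 <= stream_id <= 0xDF:
        return "audio elementary stream"
    if 0xE0 <= stream_id <= 0xEF:
        return "video elementary stream"
    return f"other stream_id 0x{stream_id:02X}"

print(classify_pes(b"\x00\x00\x01\xE0" + b"\x00" * 8))  # video elementary stream
print(classify_pes(b"\x00\x00\x01\xC0" + b"\x00" * 8))  # audio elementary stream
```

This is how a single multiplex can carry, say, one video stream and several audio streams in different languages, each under its own stream_id.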

Elementary Stream Clock Reference (ESCR): A time stamp in the PES Stream from which decoders of PES streams may derive timing. Ellipticity: Describes the fact that the core or cladding may be elliptical rather than circular.

Embedded Audio: Digital audio that is multiplexed and carried within an SDI connection--so simplifying cabling and routing. The standard (ANSI/SMPTE 272M-1994) allows up to four groups each of four mono audio channels. Generally VTRs only support Group 1 but other equipment may use more, for example Quantel's Clipbox server connection to an edit seat uses groups 1-3 (12 channels). 48 kHz synchronous audio sampling is pretty well universal in TV but the standard also includes 44.1 and 32 kHz synchronous and asynchronous sampling. Synchronous means that the audio sampling clock is genlocked to the associated video (8,008 samples per five frames in 525/60, 1,920 samples per frame in 625/50). Up to 24-bit samples are allowed but mostly only up to 20 are currently used. 48 kHz sampling means an average of just over three samples per line, so three samples per channel are sent on most lines and four occasionally--the pattern is not specified in the standard. Four channels are packed into an Ancillary Data Packet and sent once per line (hence a total of 4 x 3 = 12 or 4 x 4 = 16 audio samples per packet per line). Embedded Memory: This is memory that is built directly onto a processor. For example, a graphics chip may have embedded memory instead of using separate memory chips. Use of embedded memory in PCs and PC components nowadays is fairly rare, as attaching a large amount of memory to a chip reduces yields and increases costs. Embedded Processor: A microprocessor used in an embedded system. Typically these processors are smaller, consume less power, and utilize a surface mount form factor, as opposed to more standard consumer processors. Embedded processors are only sold to consumers pre-built into embedded systems, not separately.
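The embedded-audio sample counts quoted above can be checked with exact rational arithmetic: at 48 kHz genlocked to the 30000/1001 Hz frame rate of 525/60 there are exactly 8008 samples per five frames, and at 625/50 exactly 1920 samples per frame, i.e. just over three samples per line.

```python
# Exact sample-per-frame arithmetic for 48 kHz audio embedded in SDI video.

from fractions import Fraction

fs = 48000
ntsc_frame_rate = Fraction(30000, 1001)          # 525/60 frame rate in Hz

samples_per_5_frames = 5 * fs / ntsc_frame_rate  # exact with Fraction: 8008
samples_per_line_525 = samples_per_5_frames / (5 * 525)  # ~3.05 per line

pal_samples_per_frame = fs // 25                 # 625/50: exactly 1920

print(samples_per_5_frames, pal_samples_per_frame, float(samples_per_line_525))
```

The non-integer samples-per-line figure is why three samples per channel go on most lines and four occasionally, as the entry describes.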

Embedded System: A system that is located entirely on a processor. All logic is contained in a single chip and has a single purpose. New cars have many embedded systems working to keep emissions low and performance high. E-mail: Electronic mail. The exchange of messages between two parties using the internet. Usually text, but can include pictures (video, photographs, computer graphics) and sound. The sender and receiver must both have an e-mail address or mailbox (space on the internet). E-mail Address: Space on the internet allocated for the exchange and storage of messages. EMI (Electromagnetic Interference): Any electromagnetic disturbance that interrupts, obstructs, or otherwise degrades or limits the effective performance of electronic/electrical equipment. It can be induced intentionally, as in some forms of electronic warfare, or unintentionally, as a result of spurious emissions and responses, intermodulation products, and the like. EMI is also an engineering term used to designate interference in a piece of electronic equipment caused by another piece of electronic or other equipment. EMI sometimes refers to interference caused by a nuclear explosion. Synonym: radio frequency interference. Optical fibers neither emit nor receive EMI. Emulate (v. to emulate): The act of mimicking the behavior of one computer program, operating system, or piece of hardware with a computer program. Emulation is almost always slower than using hardware designed for a particular purpose. Emulation: The process of using a computer program to mimic the behaviors of another computer program, operating system, or piece of hardware. Emulator: This is usually a program that performs the same operation as another program or a piece of hardware. For example, there are programs that allow a PC to act like a Commodore 64, a Nintendo Entertainment System, or even a Macintosh. These emulators are often developed by talented programmers just to prove that something can be done.

Encoding Schemes:
NRZ-L (Non-Return-to-Zero Level): "One" is represented by one level; "Zero" is represented by another, lower level (but not zero).
NRZ-M (Non-Return-to-Zero Mark): "One" is represented by a change in level; "Zero" is represented by no change in level.
NRZ-S (Non-Return-to-Zero Space): "One" is represented by no change in level; "Zero" is represented by a change in level.
NRZ-I (Non-Return-to-Zero Inverse): "One" is represented by no change in level; "Zero" is represented by a change in level.
Bi-Phase-L (Bi-Phase Level, or Split Phase): A level change occurs at the beginning of every bit period. "One" is represented by a "One" level with transition to the "Zero" level; "Zero" is represented by a "Zero" level with transition to the "One" level.
Bi-Phase-M (Bi-Phase Mark): A level change occurs at the beginning of every bit period. "One" is represented by a midbit level change; "Zero" is represented by no midbit level change.
Bi-Phase-S (Bi-Phase Space): A level change occurs at the beginning of every bit period. "One" is represented by no midbit level change; "Zero" is represented by a midbit level change.
DBi-Phase-M (Differential Bi-Phase Mark): A level change occurs at the center of every bit period. "One" is represented by no level change at the beginning of the bit period; "Zero" is represented by a level change at the beginning of the bit period.
DBi-Phase-S (Differential Bi-Phase Space): A level change occurs at the center of every bit period. "One" is represented by a level change at the beginning of the bit period; "Zero" is represented by no level change at the beginning of the bit period.
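The scheme definitions above translate directly into code. A sketch of two of them, assuming an initial LOW level (which the definitions leave unspecified): NRZ-M produces one level per bit, Bi-Phase-M two half-bit levels per bit.

```python
# Two line-coding schemes as level sequences.

def nrz_m(bits, initial=0):
    """NRZ-M: 'one' = change in level, 'zero' = no change in level."""
    level, out = initial, []
    for b in bits:
        if b:
            level ^= 1          # toggle on "one"
        out.append(level)
    return out

def biphase_m(bits, initial=0):
    """Bi-Phase-M: level change at every bit boundary; midbit change on 'one'."""
    level, out = initial, []
    for b in bits:
        level ^= 1              # level change at the beginning of every bit period
        first_half = level
        if b:
            level ^= 1          # midbit level change represents "one"
        out.append((first_half, level))
    return out

print(nrz_m([1, 0, 1, 1]))      # [1, 1, 0, 1]
print(biphase_m([1, 0]))        # [(1, 0), (1, 1)]
```

Because both schemes encode data in transitions rather than absolute levels, they survive signal inversion, which is why such codes are popular on tape and transmission channels.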

Encrypt (v. to encrypt): The act of making data unreadable in an orderly fashion so that it can be decrypted later. Encryption: The act of altering data to make it unreadable unless you know how to decrypt it. Endian (Big vs. Little Endian): Big endian is the order of bytes in a word in which the most significant byte or digits are placed leftmost in the structure. Little endian places the least significant digits on the left. A bi-endian machine, such as the PowerPC, is built to handle both types of byte ordering. Energy Dispersal: An operation involving deterministic, selective complementing of bits in the logical frame, intended to reduce the possibility that systematic patterns result in unwanted regularity in the transmitted signal. Energy-per-bit to noise density ratio (Eb/No): A common SNR-like figure of merit for digital communication systems, particularly those obeying Nyquist criteria. Also understood as SNR per bit; it relates to BER for a given modulation type. Energy-per-symbol to noise density ratio (Es/No): A common SNR-like figure of merit for digital communication systems. Equivalent to SNR for systems obeying Nyquist criteria, it relates to BER for a given modulation type, and relates to Eb/No through the number of bits per symbol.
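The Eb/No-to-Es/No relation mentioned above is simply a bits-per-symbol scaling, which in decibels becomes an additive term. A sketch using QPSK (2 bits per symbol) as the worked example:

```python
# Es/No = Eb/No x (bits per symbol), i.e. in dB:
# Es/No [dB] = Eb/No [dB] + 10*log10(bits per symbol).

import math

def es_no_from_eb_no(eb_no_db, bits_per_symbol):
    """Convert an Eb/No figure in dB to the corresponding Es/No in dB."""
    return eb_no_db + 10 * math.log10(bits_per_symbol)

qpsk_es_no = es_no_from_eb_no(6.0, 2)   # QPSK: 2 bits/symbol
print(f"{qpsk_es_no:.2f} dB")           # 9.01 dB
```

For BPSK (1 bit per symbol) the two figures coincide, since 10*log10(1) = 0.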
Enhancements: Producers add these to interactive and digital television, as well as other digital content, to enhance program material. Examples are supplementary text and graphics that add more depth and richness, or links to reach a Web site, as is done using TV Crossover Links. In analog, the vertical blanking interval (VBI) is used to broadcast enhancements, while in digital, the enhancements are part of the ATSC MPEG-2 stream. Enhancements can be created using industry-standard tools and technologies, like HTML and ECMA internet scripting (ECMAScript). Encryption: The process of coding data so that a specific code or key is required to restore the original data. In broadcast, this is used to make transmissions secure from unauthorized reception, as is often found on satellite or cable systems. Enhancements: Producers add these options to some digital programming to enhance program material -- allowing viewers the ability to download related program resources to specially equipped computers, cache boxes, set-top boxes, or DTV receivers. Enhanced TV: Term used by PBS for certain digital on-air programming (usually educational) that includes additional resources downloaded to viewers. Some forms of enhanced TV allow live interaction; other forms are not visible on-screen until later recalled by viewers. Also known as "datacasting." Entitlement Control Message (ECM): Entitlement Control Messages are private conditional access information which specify control words and possibly other, typically stream-specific, scrambling and/or control parameters. Entitlement Management Message (EMM): Entitlement Management Messages are private conditional access information which specify the authorization levels or the services of specific decoders. They may be addressed to single decoders or to groups of decoders. Entropy Coding: Variable-length lossless coding of the digital representation of a signal to reduce redundancy.
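The entropy-coding idea (frequent symbols get short codewords) can be illustrated with a minimal Huffman coder using only the Python standard library; the sample string is invented for illustration.

```python
# A minimal Huffman coder: repeatedly merge the two lowest-weight subtrees,
# prefixing '0' to one branch's codewords and '1' to the other's.

import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)
    # each heap entry: (weight, tiebreak, {symbol: code_so_far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        (_, _, only), = heap
        return {s: "0" for s in only}
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(codes)  # 'a' (most frequent) gets the shortest codeword
```

For this input, coding "aaaabbc" takes 4x1 + 2x2 + 1x2 = 10 bits versus 14 bits with a fixed 2-bit code, which is exactly the gain the entry describes.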
Entry Point: Refers to a point in a coded bit stream after which a decoder can become properly initialized and commence syntactically correct decoding. The first transmitted picture after an entry point is either an I-picture or a P-picture. If the first transmitted picture is not an I-picture, the decoder may produce one or more pictures during acquisition. Environment: Normally this is your surroundings. Inside your computer, the environment is the settings of a group of variables. Think of it as the surroundings or boundaries set for a process or program running inside your computer. Instead of having a wall and a door, a program in your PC may have x=4 and name="Tara" as its environment. E/O: Abbreviation for electrical-to-optical converter. A device that converts electrical signals to optical signals, such as a laser diode.
EPROM: Erasable Programmable Read Only Memory. This is normally read-only memory that retains its information until it is exposed to ultraviolet light. You can often tell a chip is an EPROM by the small window on it that lets ultraviolet light through to program the EPROM. Equilibrium Mode Distribution (EMD): The steady modal state of a multimode fiber in which the relative power distribution among modes is independent of fiber length.

Error Concealment: In digital video recording systems, a technique used when error correction fails. Erroneous data is replaced by data synthesized from surrounding pixels. Error Correction: Errors are inevitable, but by means of robust error correction systems, CD and DVD can have uncorrectable error rates as low as that specified for computers, i.e., 10^-12 (one uncorrectable error in one trillion). Audio applications do not require this degree of accuracy. 1) Sources of error: Include dropouts from the media (oxide wear, fingerprints, scratches), signal degradation (reflection, intersymbol interference, impedance mismatches, RF interference). 2) Measures of error: The burst length is the maximum number of adjacent erroneous bits that can be fully corrected. The bit-error rate BER is the number of error bits per total bits. Optical disk systems can handle BERs of 1:100000 to 1:10000. The block error rate BLER is the rate of blocks or frames per second having at least one incorrect bit. The burst error length BEL is the number of consecutive blocks in error. 3) Methods of correction: The goal is to introduce redundancy to permit validity checking and error detection, error correction code ECC to replace errors with calculated valid data, and error concealment to substitute approximate data for uncorrectable invalid data. Redundancy includes repeating the data, adding single-bit parity bits (to check if odd or even), checksums (e.g., weighted checksums computed modulo 11), and cyclic redundancy check code CRCC. CRCC uses a parity check word obtained by dividing a k-bit data block by a fixed number (generation polynomial g) and appended to the data block to create the transmission polynomial v. When the data u is received, it is divided by the same g, and the result subtracted from the original checksum to yield the syndrome c: a zero syndrome indicates no error. Error correction can be accomplished using mathematical manipulation and modulo arithmetic ... 
Polynomial notation is the standard terminology in the field: e.g., the fixed number 1001011 (MSB leading) is represented as 1×2^6 + 0×2^5 + 0×2^4 + 1×2^3 + 0×2^2 + 1×2^1 + 1×2^0, or 2^6 + 2^3 + 2^1 + 2^0. CRCC is typically used as an error pointer and other methods are used for
correction. Error correction techniques employ block codes having row and column parity (CRCC are a subclass of linear block codes), convolutional or recurrent codes (which introduce a delay), and interleaving including cross-interleaving. In digital video recording systems, a scheme that adds overhead to the data to permit a certain level of errors to be detected and corrected. Error Detection: Checking for errors in data transmission. A calculation is made on the data being sent and the results are sent along with it. The receiver then performs the same calculation and compares its results with those sent. If an error is detected the affected data can be deleted and retransmitted, the error can be corrected or concealed, or it can simply be reported. Error Correction and Detection: In computer science and information theory, the issue of error correction and detection has great practical importance. Given some data, error detection methods enable one to check whether the data has been corrupted by the introduction of errors, but they do not tell us where the errors were introduced. Error-correction schemes go further, making it possible to locate and correct the errors that have been introduced. Error correction and detection schemes find use in implementations of reliable data transfer over noisy transmission links, data storage media (including dynamic RAM, compact discs), and other applications where the integrity of data is important. In digital telecommunications, channel coding is a pre-transmission mapping applied to a digital signal or data file, usually with an error-detection or an error-correction code. Error Detection and Handling: See: EDH. Error reporting: Protocol at the receive side reports error conditions to the H.222.1 user. Essence: The actual program (audio, video and/or data) without metadata. 
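The CRCC division described under Error Correction above can be sketched as polynomial long division over GF(2). A toy illustration (the generator shown in the test is arbitrary, not any particular standard's polynomial):

```python
def crc_remainder(bits, generator, append_zeros=True):
    # Divide the message polynomial by the generator polynomial g over
    # GF(2); the remainder is the parity check word (CRCC).
    r = len(generator) - 1
    work = list(bits) + ([0] * r if append_zeros else [])
    for i in range(len(work) - r):
        if work[i]:                      # leading bit set: subtract (XOR) g
            for j, g in enumerate(generator):
                work[i + j] ^= g
    return work[-r:]
```

Appending the remainder to the message yields the transmission polynomial v; the receiver divides the received codeword by the same g, and a zero syndrome indicates no error, exactly as the entry describes.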
Ethernet: The term Ethernet refers to the family of local-area network (LAN) products covered by the IEEE 802.3 standard that defines what is commonly known as the CSMA/CD protocol. Three data rates are currently defined for operation over optical fiber and twisted-pair cables:
- 10 Mbps: 10Base-T Ethernet
- 100 Mbps: Fast Ethernet
- 1000 Mbps: Gigabit Ethernet

10-Gigabit Ethernet is under development and will likely be published as the IEEE 802.3ae supplement to the IEEE 802.3 base standard in late 2001 or early 2002. Other technologies and protocols have been touted as likely replacements, but the market has spoken. Ethernet has survived as the major LAN technology (it is currently used for approximately 85 percent of the world's LAN-connected PCs and workstations) because its protocol has the following characteristics:
- Is easy to understand, implement, manage, and maintain
- Allows low-cost network implementations
- Provides extensive topological flexibility for network installation

- Guarantees successful interconnection and operation of standards-compliant products, regardless of manufacturer.

The CSMA/CD Access Method
The CSMA/CD (Carrier Sense, Multiple Access and Collision Detect) protocol was originally developed as a means by which two or more stations could share a common medium in a switch-less environment. The protocol requires no central arbitration, access tokens, or assigned time slots to indicate when a station will be allowed to transmit. Each Ethernet MAC (Media Access Control) determines for itself when it will be allowed to send a frame. The CSMA/CD access rules are summarized by the protocol's acronym:
- Carrier sense: Each station continuously listens for traffic on the medium to determine when gaps between frame transmissions occur.
- Multiple access: Stations may begin transmitting any time they detect that the network is quiet (there is no traffic).
- Collision detect: If two or more stations in the same CSMA/CD network (collision domain) begin transmitting at approximately the same time, the bit streams from the transmitting stations will interfere (collide) with each other, and both transmissions will be unreadable. If that happens, each transmitting station must be capable of detecting that a collision has occurred before it has finished sending its frame. Each must stop transmitting as soon as it has detected the collision and then must wait a quasi-random length of time (determined by a back-off algorithm) before attempting to retransmit the frame.

The basic MAC responsibilities were defined:
- Data encapsulation: Assembling the frame into the defined format before transmission begins, and disassembling the frame after it has been received and checked for transmission errors.
- Media access control (MAC): In the required CSMA/CD half-duplex mode, and in the optional full-duplex mode.

Two optional MAC capability extensions and their associated frame formats were discussed. The VLAN tagging option allows network nodes to be defined with logical as well as physical addresses, and provides a means to assign transmission priorities on a frame-by-frame basis. A specific format for the pause frame, which is used for
short-term link flow control, is defined in the standard but was not covered here because it is an automatic MAC capability that is invoked as needed to prevent input buffer overrun. The PHY (Physical) layer discussions included descriptions of the signaling procedures and media requirements/limitations for the following:
- 10Base-T
- 100Base-TX, 100Base-T4, and 100Base-T2
- 1000Base-T, 1000Base-CX, 1000Base-LX, and 1000Base-SX

Although 100Base-FX was not specifically discussed, it uses the same signaling procedure as 100Base-TX, but over optical fiber media rather than UTP copper. ETSI (European Telecommunications Standards Institute): The European Telecommunications Standards Institute (ETSI) is an independent, non-profit organization, whose mission is to produce telecommunications standards for today and for the future. http://www.etsi.org/ Evanescent Wave: Light guided in the inner part of an optical fiber's cladding rather than in the core, i.e. the portion of the light wave in the core that penetrates into the cladding.
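The back-off algorithm mentioned in the CSMA/CD description above is, in standard Ethernet, a truncated binary exponential back-off. A rough sketch (slot-time handling simplified; the function name is illustrative):

```python
import random

def backoff_slots(collision_count):
    # Truncated binary exponential back-off: after n collisions on the
    # same frame, wait a random number of slot times in the range
    # [0, 2^min(n, 10) - 1] before retransmitting.
    exponent = min(collision_count, 10)
    return random.randint(0, 2 ** exponent - 1)
```

The doubling range is what makes repeated collisions progressively less likely: each failed attempt spreads the contending stations over a window twice as wide.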

Event: 1) An event is defined as a collection of elementary streams with a common time base, an associated start time, and an associated end time. Refer to SI (System Information). 2) Network message indicating operational irregularities in physical elements of a network or a response to the occurrence of a significant task, typically the completion of a request for information. Event Scheduling Information: Set of events for a defined period of time for one or more services controlled by the CAS. Execution Engine: The part of a computing system that executes code such as machine language (machine code), the native language a particular computer processor can follow directly. The source code of a programming language is either compiled directly into executable code or into an intermediate language. For example, C++ is compiled into executable machine code. Java source code is compiled into an intermediate bytecode language that must be turned into executable code at runtime by the Java Virtual Machine (JVM; see Item 70, Java). Technically, the compiler creates object code, which is the machine language representation of the source code, and the link editor creates the executable code. The link editor combines program object code, object code from library routines and any other
required system code into one addressable machine language file. Execution Unit: The part of a microprocessor pipeline that actually follows and runs the instructions that are sent to the CPU after the instructions are decoded. Excess Loss: In a fiber optic coupler, the optical loss from that portion of light that does not emerge from the nominal operation ports of the device. Expansion Card: This is generally a printed circuit board that can be plugged into a computer, and is designed to increase the functionality of that computer. Expansion Slot: Any type of slot in a computer into which you can plug an expansion card. Examples include ISA, EISA, PCI, and PCMCIA, but there are other types and there will be more in the future. Exploit (n. exploit): A means of gaining access to a computer system, typically through a known bug in a program or operating system. Many webservers on the Internet that are not up to date with security patches are vulnerable to exploits, and the effects of these exploits are seen when malicious worms run rampant and spread to unpatched systems. Export: When you export data you are taking that data from a program, database, or file and saving it in another format that is generally easier to manipulate or pull into a different program. An example would be pulling data from a SQL database and saving it as text so that you can use it in a mailmerge. Thus, the exporting frees the mailmerge program from having to understand the complex SQL format--it just needs to understand the exported text file. Extended Studio PAL: A 625-line video standard that allows processing of component video quality digital signals by composite PAL equipment. The signal can be distributed and recorded in a composite digital form using D2 or D3 VTRs. Extended Text Table (ETT): The optional ATSC PSIP table that carries long descriptions of events and channels. 
There are two types of ETTs: Channel ETTs, which carry channel descriptions, and Event ETTs, which carry event descriptions. Extensible Markup Language (XML): A standard created by the W3C. It is a language with many similarities to HTML. What XML adds is the ability to define custom tags and to define the meaning of those tags within the XML document itself--thus the term "extensible." You can extend the XML language easily. XML is becoming more and more common as more browsers and webservers support it. It is also a very flexible way to exchange data over the Web and interpret and use data from other websites. External Modulation: Modulation of a light source by an external device that acts like an electronic shutter. Extremely High Frequency (EHF): A signal in the frequency range of 30 to 300 GHz. Extinction Ratio: The ratio of the low, or OFF optical power level (PL) to the high, or ON optical power level (PH): Extinction Ratio (%) = (PL/PH) x 100
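The Extinction Ratio formula above translates directly into code (the function name is illustrative):

```python
def extinction_ratio_percent(p_low, p_high):
    # Extinction Ratio (%) = (PL / PH) x 100, per the definition above.
    return (p_low / p_high) * 100.0
```

For example, an OFF level of 0.5 mW against an ON level of 50 mW gives an extinction ratio of 1%.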

Extrinsic Loss: In a fiber interconnection, that portion of loss not intrinsic to the fiber but related to imperfect joining of a connector or splice. Eye Pattern: A diagram that shows the proper function of a digital system. The "openness" of the eye relates to the BER that can be achieved.

-F-
Fab: This is short for fabrication plant. A fab is a factory that takes raw silicon wafers and creates chips with them. Often, fabs are categorized by what process they use. For example, the Intel Pentium chip with MMX was produced in a fab with a 0.35 micron process. Fabrication plants cost billions of dollars to build and are outdated within years. However, they can have a decent life producing volume products after they are no longer considered high-end. Fabless: This term refers to a company that produces chips but doesn't own a fabrication plant, or fab. These companies are starting to become more and more successful at creating chips and renting out other companies' excess fab space to produce their chips. In fact, some fabrication plant owners specifically target such fabless companies to offer their fab space for rent. Fabrication Plant: A fab is a factory that takes raw silicon wafers and creates chips with them. Often, fabs are categorized by what process they use. For example, the Intel Pentium chip with MMX was produced in a fab with a 0.35 micron process. Fabrication plants cost billions of dollars to build and are outdated within years. However, they can have a decent life producing volume products after they are no longer considered high-end. Fading: In telecommunications, fading is a change in the attenuation of a communications channel. Examples of fading (in alphabetical order): Rayleigh fading: In electromagnetic wave propagation, phase-interference fading caused by multipath, and which may be approximated by the Rayleigh distribution. Rician fading. Fall Time: Also called turn-off time. The time required for the trailing edge of a pulse to fall from 90% to 10% of its amplitude; the time required for a component to produce such a result. Typically measured between the 90% and 10% points or alternately the 80% and 20% points.

FAQ (Frequently Asked Questions): A document that lists the most common questions about something (with the answers, of course). A simple way to find information on a complex topic is to do an Internet search for _topic_ FAQ, where _topic_ represents the topic for which you are looking. Fast Access Channel (FAC): Channel of the multiplex data stream which contains the information necessary to find services and begin to decode the multiplex.
Failure: A termination of the ability of an item to perform a required function. A failure is caused by the persistence of a defect. Faraday Effect: A phenomenon that causes some materials to rotate the polarization of light in the presence of a magnetic field parallel to the direction of propagation. Also called magneto-optic effect. Far-end Crosstalk: A WDM's isolation of a light signal in the desired optical channel from the unwanted optical channels. Also called Wavelength Isolation. Fast Forward Playback: The process of displaying a sequence, or parts of a sequence, of pictures in display-order faster than real-time. Fast Fourier Transform (FFT): A hardware implementable algorithm for spectral analysis in signal processing applications, including basic OFDM systems. Fast SCSI 2: This version of SCSI transfers data at 10 megabytes per second. The connections all contain 50 pins. See also Fast-Wide SCSI 2. Fast-SCSI: Plain vanilla fast-SCSI never really existed, but was sometimes used as slang for Fast SCSI 2. This version of SCSI transfers data at 10 megabytes per second. The connections all contain 50 pins. See also Fast-Wide SCSI 2. Fast-Wide SCSI 2: This version of SCSI upped the pin count to 68, effectively doubling the signal speed of Fast-SCSI 2 to 20 megabytes per second. FAT (File Allocation Table): This is one way to index the contents of storage media, such as your hard drive. The operating system looks here to know where on the drive files are located. There are different flavors of FAT: the standard DOS flavor is called FAT-16; Windows 95 OSR2 and newer versions additionally supported FAT-32, which allows for larger hard drive partitions. Fault Management: One of five categories of network management defined by ISO for management of OSI networks. Fault management attempts to ensure that network faults are detected and controlled. FCC: Federal Communications Commission. 
These are the people in the government who decide what's legal and illegal to broadcast, including what frequencies are allowed to be used by whom. FC/PC: A threaded optical connector that uses a special curved polish on the connector for very low backreflection. Good for single-mode or multimode fiber. See PC.
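The Fast Fourier Transform entry above can be illustrated with a minimal radix-2 Cooley-Tukey implementation (input length must be a power of two; a sketch, not a production DSP routine):

```python
import cmath

def fft(x):
    # Recursively split into even- and odd-indexed samples, then combine
    # with twiddle factors; O(n log n) versus O(n^2) for the direct DFT.
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out
```

A constant input of [1, 1, 1, 1] transforms to a single DC bin of magnitude 4 with all other bins zero, which is the kind of spectral analysis the entry refers to; OFDM systems use this transform pair to move between subcarrier values and the time-domain signal.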

FDD (Floppy Disk Drive): This commonly refers to a 3.5 inch disk drive that uses 1.44 MB 3.5 inch floppy disks. However, the term can be used to apply to any drive that uses floppy disk media.

FDDI: Fiber Distributed Data Interface. Standards for a 100 Mbps local area network, based upon fiber optic or wired media configured as dual counter rotating token rings. This configuration provides a high level of fault tolerance by creating multiple connection paths between nodes--connections can be established even if a ring is broken. Abbreviation for fiber distributed data interface. 1) A dual counter-rotating ring local area network. 2) A connector used in a dual counter-rotating ring local area network.

FDMA (Frequency Division Multiple Access): This multiple access technique allows many cell phone users to communicate with one base station by assigning each user a different frequency channel. An AMPS network, for example, has 832 channels spaced about 30 kHz apart. In digital networks, FDMA is used in conjunction with CDMA or TDMA. Feature: A feature is something that a piece of hardware or software is designed to do. Many things that appear to be bugs are actually features. Often, a hardware or software developer will have to make a trade-off in functionality that causes some undesirable effects. If the developer is aware of this and accepts it, it is not a bug but a feature. Feeder Link: A radio link between an earth station and a satellite, conveying information for a space radio communications service other than fixed satellite service. In the broadcasting-satellite service, all feeder links are uplinks (from the earth to the satellite), but in the mobile-satellite service, feeder links can be both uplinks and downlinks. Fiber Bundle: A group of parallel optical fibers contained within a common jacket. A bundle may contain from just a few to several hundred fibers. Fibre Channel: A high-speed data link planned to run up to 2 Gbps on a fiber optic cable. A number of manufacturers are developing products to utilize the Fiber Channel--Arbitrated Loop (FC-AL) serial storage interface at 1 Gbps so that storage devices such as hard disks can be connected. Supports signaling rates from 132.8 Mbps to 1,062.5 Mbps, over a mixture of physical media including optical fiber, video coax, miniature coax, and shielded twisted pair wiring. The standard supports data transmission and framing protocols for the most popular channel and network standards including SCSI, HIPPI, Ethernet, Internet Protocol, and ATM. Fiber Channel-Arbitrated Loop: FC-AL. A fast, serial-based standard meant to replace the parallel SCSI standard. 
It is primarily used in high cost server systems, and to connect storage devices to servers. It is software-compatible with SCSI and can handle 126 devices per port. FC-AL connections can run as far as 10 kilometers using optical cabling, or 30 meters with standard copper cabling.

Fiber Optics: Thin glass filaments within a jacket that optically transmit images or signals in the form of light around corners and over distances with extremely low losses. Field: In interlaced video, a field is 1/2 of a complete picture (frame), consisting of the even or odd scan lines in the frame. Usually each field is labeled, e.g., Field A and Field B. When working with interlaced video it is important to note the Field Order (whether Field A comes before Field B in the video stream), especially when encoding. If you notice flicker after encoding you will want to change the field order in the encoding template and re-encode. Field-based Prediction: A prediction mode using only one field of the reference frame. The predicted block size is 16 × 16 luminance samples. Fiber Distributed Data Interface: FDDI. This is a fiber optic interface that allows data to travel extreme distances (many miles/kilometers) without signal loss. It is far superior to copper wire for data integrity as well. FDDI is often used to connect Internet, LAN, or WAN backbones together due to these properties. Field Period: The reciprocal of twice the frame rate. Field Picture; Field Structure Picture: A field structure picture is a coded picture with picture_structure equal to "Top field" or "Bottom field". FIFO: First-In First-Out. FIFO Buffer: First In First Out Buffer. An area of memory that holds information in the order in which it was received until the computer has time to use it. File Format: Applications save files in a certain way. They organize the data in a way that makes sense for the information they are saving and to programs that work with that type of file. There are many standard file formats, such as GIF and JPG. These are graphical formats and are in common use. Filter: To manipulate a video stream to achieve a desired effect. This can include, but is not limited to, re-sizing, noise reduction, de-interlacing, softening, and sharpening. 
Final Cut Pro: Final Cut Pro 3 is a comprehensive and innovative editing solution with unprecedented flexibility featuring award-winning editing capabilities, precise color correction tools, and built-in compositing and effects. Revolutionary out-of-the-box G4 real-time effects deliver real-time playback without rendering or additional PCI hardware. The new OfflineRT format yields over 40 minutes of footage per gigabyte, providing a highly efficient editing workflow. Designed for ease of use, with tools that can expertly handle virtually any video format, Final Cut Pro 3 is the complete solution for professional video editing. With a comprehensive set of award-winning features that rival expensive proprietary editing systems, it's no wonder that Final Cut Pro is quickly becoming the digital editing choice for professional editors and filmmakers worldwide. FTP (File Transfer Protocol): Application protocol, part of the TCP/IP protocol stack, used for transferring files between network nodes.
FIR (Finite impulse response): Well-known linear phase digital filter type commonly used in DSP, which performs spectral modification in the discrete domain similar to the function of analog filters. Firewall: A security device that stands between a private network and the Internet. It is like a wall in that it can prevent unwanted traffic from passing either way. Some firewalls have proxy functions built-in. In fact, the distinction between a firewall and a proxy is often blurry. Add in the differences and similarities between a firewall and a packet-filtering router and you've got one big ball of confusion. True firewalls generally support packet-filtering, proprietary application filtering, and some proxy functions, combining the features of other devices or software into one unified package. FireWire: A low-cost digital interface originated by Apple Computer and further developed by engineers and adopted by CEMA. It can transport data at 100, 200, or 400 Mbps. This is widely viewed as one key solution to connect digital-related TV components with each other. Also known as IEEE 1394; FireWire is Apple Computer's trademark for IEEE 1394. See: IEEE 1394. Firmware: The software that is embedded onto a piece of hardware in order to control that hardware. Generally, firmware can be upgraded and is placed on an EEPROM. Sometimes if a new driver for a piece of hardware is released, new firmware will also be released that is required to get the full functionality or performance of the driver. In other cases, firmware will have some bugs or undesirable features, and can be upgraded to work out the problems. Of course, the rest of the hardware cannot be upgraded without replacing pieces of it, so manufacturers try to store as many critical function controls as they can in the firmware in case they need to change them. For them, if a bug is found it is the difference between recalling a product and simply telling their affected customers to upgrade the firmware. 
Fixed Data Rate Compression: Techniques designed to produce a data stream with a constant data rate. Such techniques may vary the quality of quantization to match the allocated bandwidth. Fixed Stuff: A bit or byte whose function is reserved. Fixed stuff locations, sometimes called reserved locations, do not carry overhead or payload. Flag: A one bit integer variable which may take one of only two values (zero and one). Flash Memory: A type of non-volatile memory that holds onto its contents even when an electrical charge is not applied. Contrast this to DRAM, which must continually be refreshed even when a charge is applied. Flash memory is used in many applications including PDAs, hardware MP3 players, and some digital cameras. The downsides to Flash memory are that it is slower than DRAM, and much more expensive. Flat Panel Displays: Flat panel displays encompass a growing number of technologies enabling video displays that are lighter and much thinner than traditional television and video displays using cathode ray tubes, usually less than 10 cm (4 inches) thick. These include: Flat panel displays requiring continuous refresh:
- Plasma displays
- Liquid crystal displays (LCDs)
- Digital light processing (DLP) displays
- Organic light-emitting diode displays (OLEDs)
- Field emission displays (FEDs)
- Liquid crystal on silicon (LCoS)
- Surface-conduction electron-emitter displays (SEDs)
- Nano-emissive displays (NEDs)

Only the first three of these displays are commercially viable today. OLED displays are beginning deployment in small sizes, while the NED is (as of May 2005) in the prototype stage. Bistable flat panel displays (or electronic paper):
- e-ink displays
- Gyricon displays
- Iridigm displays
- magink displays

Bistable displays are beginning deployment in niche markets (magink displays in outdoor advertising, e-ink and Gyricon displays in in-store advertising). Flat panel displays balance their smaller footprint and trendy modern look against high costs and, in many cases, inferior images compared with traditional CRTs. In many applications, specifically modern portable devices such as laptops, cellular phones, and digital cameras, these disadvantages are outweighed by the portability requirements. Flip Chip-Pin Grid Array: FC-PGA. This is Intel's newer packaging of the Socket 370 design. It features a different electrical setup than Socket 370, but is physically compatible. Thus, old Socket 370 motherboards will not be compatible with new FC-PGA chips, but new FC-PGA motherboards may be able to handle Socket 370 processors. It is not clear why Intel made the electrical change. The physical changes put the core closer to the surface, allowing better cooling as the processor core comes in closer contact to the heatsink. Floating Mode: A tributary mode that allows the synchronous payload to begin anywhere in the VC. Pointers identify the starting location of the LO-VC. LO-VCs in different multiframes may begin at different locations. Floating Point: A three-part representation of a number that contains a decimal point. The number is represented first by the sign, then the number itself, then decimal position. 
Some examples of floating point numbers are 4.23423412, 1234.1234234, or 4.00. Floating point numbers offer a specific amount of precision, often 8-bit, 16-bit, 32-bit, or 64-bit. This precision controls how accurately floating
point results are represented and calculated during arithmetic operations between floating point numbers. For a simple example, if you have a low level of precision and you divide 1 by 3, you will get 0.33. With a higher level of precision you would see that it is 0.333333333333, and so forth. Thus, your calculation is inaccurate by 0.003333333333 (which is 0.333333333333 - 0.33). A small inaccuracy such as that may not matter if you are pumping out frame rates on a 3D game, but if you are building an airplane's engine you might want to make sure your design program handles a proper amount of precision for the job you are doing. When writing software, larger floating point precision takes more space to store, and may be slower depending on the hardware you are running on. FLOP: Floating Point Operation. This describes a single manipulation of a floating point number in a microprocessor. One measure of the speed of a microprocessor is how many FLOPs can be accomplished in a second. Floating Point Unit: FPU. The part of a microprocessor that is designed to handle floating point calculations. Often the efficiency of this part of the processor will decide whether a processor is successful or not. Cyrix M-II chips and AMD K6-2 chips had weak floating point units when compared to Intel Pentium III processors, thus limiting their success. AMD's Athlon chip had a strong floating point unit, and it was very successful against the Pentium III. Floating point performance matters a lot in games and complex data manipulations, and thus matters to the "pro" or "geek" segment of the market that drives system purchases and recommends components. FM: Frequency Modulation. A method of sending and distinguishing radio signals by modifying the frequency of the radio wave. See also AM. Folder: A term coined to be synonymous with, but more accessible than, "directory." Now the terms are basically synonymous, but folder tends to imply a more graphical interface for managing files. 
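The 1/3 precision example above can be reproduced directly (illustrative only; Python floats are IEEE-754 doubles, and the two-digit rounding stands in for a low-precision format):

```python
third_low = round(1 / 3, 2)      # low precision: two decimal digits -> 0.33
third_high = 1 / 3               # double precision: ~16 significant digits
error = third_high - third_low   # the inaccuracy described above, ~0.00333...
```

The residual error is exactly the 0.00333... gap the entry describes; a wider floating point format shrinks this gap but never eliminates it for values like 1/3 that have no finite binary expansion.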
Footprint: This refers to the general size of something, whether physical or virtual. The footprints of small Internet appliances are compared against those of larger PCs, as are smaller LCD screens vs. CRT screens. Different operating systems and programs have their own footprints in the amount of bytes they typically take up on a hard drive, or the amount of physical memory they consume while running. Font: Refers to a set of characters (e.g., a-z, plus numbers, punctuation marks, etc.) in a particular aesthetic style. Derived from printing and typeface heritage, the modern definition is a member of a typeface family. Times New Roman Italic is a font or font file in the Times New Roman typeface family. This Font is 12 point Arial. Point size is a typographic unit of measure. Traditionally there are 72.27 points to one inch (or 28.45 points to the cm). In the font formats of PostScript and TrueType there are 72.0 points to the inch. In video, a font is a mechanism that allows the specific rendering of a particular character to be specified. See also Tiresias (typeface/fonts) Item 141, page 42 and Portable Font Resource (PFR) Item 106, page 36. Format (v. to format): After partitioning your hard drive, it must be formatted so that
it can be used by the operating system. Formatting basically makes your hard disk ready for initial use. The same procedure must be done to removable media such as floppy disks. Some types of formatting wipe out the whole disk, while others just set up the framework for data to be written, using a type of quick format. Another type of formatting is a SCSI low-level format. This type of format makes a drive ready to be used by a SCSI controller. When you format a drive you typically lose all information on that drive. Format Conversion: The process of both encoding/decoding and resampling digital rates to change a digital signal from one format to another. Forbidden: This term, when used in clauses defining the coded bit stream, indicates that the value shall never be used. This is usually to avoid emulation of start codes. Fortran: A high-level programming language, a bit more advanced than BASIC but not quite as complex as C. This language refuses to die because it is so entrenched in the scientific research community. It's not a tough language to learn, and it's fairly powerful. A large share of scientific programming is still done in Fortran. Forward Error Correction (FEC): Mainly applicable to data systems where the data is available only once, without any capability to access the original source. FEC methods improve the ability to recover error-free data, usually by adding extra data (about the payload data) before transmission or storage. The cost is a reduced payload in a given data bandwidth system. The two main categories of FEC are block coding and convolutional coding. Block codes work on fixed-size blocks of bits or symbols of predetermined size, while convolutional codes work on bit or symbol streams of arbitrary length. A convolutional code can be turned into a block code, if desired. Convolutional codes are most often decoded with the Viterbi algorithm, though other algorithms are sometimes used. 
There are many types of block codes, but the most important by far is Reed-Solomon coding, because of its widespread use on the Compact Disc, the DVD, and in computer hard drives. Golay, BCH and Hamming codes are other examples of block codes. Nearly all block codes apply the algebraic properties of finite fields. Block and convolutional codes are frequently combined in concatenated coding schemes, in which the convolutional code does most of the work and the block code (usually a Reed-Solomon code) "mops up" any errors made by the convolutional decoder. This has been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. For example, in DVB-T a system of FEC referred to as Viterbi or inner coding can be set to different levels from 7/8 to 1/2, with 1/2 providing the most error protection but effectively reducing the payload by half. Another widely used error protection system (including in DVB-T and some ASI systems) is known as Reed-Solomon (RS). This adds a further 16 bytes to the 188-byte MPEG-2 transport stream packet, making a new packet size of 204 bytes. This differs from simple error detection, which, for example in teletext, works by adding a single (odd) parity bit to each 7-bit word. In DVB, each descriptor's data parcel is concluded with a 32-bit CRC. FourCC: A four character code that uniquely identifies a video data stream format. A
video player will look up the FourCC code, then look for the codec associated with the code in order to play the associated video stream. This idea was used in the IFF multimedia format developed by Electronic Arts for the Amiga in the mid-1980s. This file format was copied by Apple (who called it AIFF) and Microsoft (RIFF). Foundry: A synonym for semiconductor fabrication plant. A foundry is the actual location that produces microprocessors. The term foundry initially referred to a location where metal was poured into molds, but in technology speak it refers to the place where microprocessors are made. Four Wave Mixing (FWM): A nonlinearity common in DWDM systems where multiple wavelengths mix together to form new wavelengths, called interfering products. Interfering products that fall on the original signal wavelength become mixed with the signal, muddying the signal and causing attenuation. Interfering products on either side of the original wavelength can be filtered out. FWM is most prevalent near the zero-dispersion wavelength and at close wavelength spacings.
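Because a FourCC is exactly four ASCII characters, it is commonly stored as a 32-bit integer. This Python sketch packs and unpacks one; the 'DIVX' code and the little-endian byte order (as used in RIFF/AVI-style headers) are illustrative:

```python
def fourcc(code: str) -> int:
    """Pack a four-character code into a little-endian 32-bit integer."""
    if len(code) != 4:
        raise ValueError("FourCC must be exactly four characters")
    return int.from_bytes(code.encode('ascii'), 'little')

def fourcc_to_str(value: int) -> str:
    """Recover the four characters from the packed integer."""
    return value.to_bytes(4, 'little').decode('ascii')

packed = fourcc('DIVX')
print(hex(packed), fourcc_to_str(packed))  # 0x58564944 DIVX
```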

fps (Frames Per Second): the number of still frames (pictures) that appear in a single second of time, giving the illusion of motion. FPGA (Field Programmable Gate Array): A microchip that can be made with thousands of programmable logic gates. Good features of FPGAs include short development times and low production costs. FPGAs are often used for prototype or custom designs, including DSP and logic emulation. Fractal Compression: A technique for compressing images that uses fractals. It can produce high quality and high compression ratios. The drawback to fractal compression is that it is computationally expensive and therefore slow.

Fragment: A piece of a file. When a file is written to a hard drive it is sometimes written in multiple fragments because there is no contiguous space available that is large enough to store the file. See fragmentation. Fragmentation: This occurs when a hard drive writes a file in multiple segments instead of in a physically contiguous area. A higher level of fragmentation means that most files are fragmented, and many files contain lots of fragments. A low level of fragmentation implies that more files are in one piece, and that if files are fragmented
they are only in a few fragments. For example, say you have two files, file A and file B, and you write them both to a hard drive. A low level of fragmentation may be represented by: AAAABBBB. A high level of fragmentation may be represented by: ABABABAB. Fragmentation is caused by deletion of small files, and then trying to fit larger files into the leftover spaces. Over time it is a common occurrence that is unavoidable in most file systems. Frame: A frame is one complete image in a sequence of images. In video, the frame captures and displays all pixels and lines of an image. In a progressive-scanning format, there is no decomposition into fields. In an interlaced-scanning format, the frame consists of odd and even line fields, captured or displayed at different times, which in combination contain all pixels and lines of an image. For a comparable field rate, the frame rate of a progressive-scan format is twice that of an interlaced-scan format. A frame is a set of scanlines in video that make a complete picture. If the video is interlaced, the frame consists of both of the interlaced fields (half frames). If the video is progressive, the frame is made up of one continuous scan from top to bottom. The number of scanlines in a frame varies depending on the TV system used: PAL50 uses 625 scan lines, NTSC60 (US) 525. Frame buffer: Memory used to store a complete frame of video. Frame Rate: The rate at which frames are displayed. The frame rate for movies is 24 frames per second (24 fps). In regular NTSC video, the frame rate is 30 fps (more precisely, 29.97). The frame rate of a progressive-scan format is twice that of a comparable interlaced-scan format. Example: the frame rate for 480i DVD is 30 fps (or 60 interlaced fields per second); for progressive-scan DVD at 480p, it's 60 fps. Frame Relay: A DTE-DCE interface specification based on LAPD (Q.921; an ITU-T recommended standard), the Integrated Services Digital Network (ISDN) version of LAPB (X.25 data link layer). 
Frame Relay is the result of wide area networking requirements for speed; LAN-WAN and LAN-LAN internetworking; "bursty" data communications; and a multiplicity of protocols with protocol transparency. These requirements can be met with (1) technology such as optical fibre lines, allowing higher speeds and fewer transmission errors; (2) intelligent network end devices (personal computers, workstations, and servers); and (3) standardisation and adoption of ISDN protocols. Frame Relay can connect dedicated lines and X.25 to ATM, SMDS, B-ISDN and other "fast packet" technologies. Frame Synchronizer: A digital buffer that, by storage, comparison of sync information to a reference, and timed release of video signals, can continuously adjust the signal for any timing errors. Freeware: Software that is free for use and does not require a fee to be paid to access its full functionality. See also shareware and Open Source. Free-Space Path Loss: In an antenna, the loss between two isotropic radiators in free space resulting from the decrease in power density with the square of the separation distance. Free-To-Air (FTA-broadcaster): It is possible that a service is free (i.e. no payment required) and clear (not scrambled/encrypted) on one platform (eg. when carried on
a terrestrial transmission system), but encryption may be involved, and a payment may be required, when carried by another system (eg. satellite). Free-to-View is a term used to distinguish services that are broadcast in a scrambled/encrypted mode, but may be made accessible to viewers, listeners or other users without payment of a subscription charge. For "free-to-view" in the UK see http://www.freetoview.co.uk Freeze: In digital picture manipulators, the ability to stop or hold a frame of video so that the picture is frozen like a snapshot. Freeze Frame: The storing of a single frame of video. Frequency: The number of times per second that a signal fluctuates. The international unit for frequency is the hertz (Hz). One thousand hertz equals 1 kHz (kilohertz). One million hertz equals 1 MHz (megahertz). One billion hertz equals 1 GHz (gigahertz). Television is broadcast in frequencies ranging from 54 MHz to 216 MHz (VHF) and 470 MHz to 806 MHz (UHF). Frequency Allocation: A band of radio frequencies identified by an upper and lower frequency limit earmarked for use by one or more of the 38 terrestrial and space radio communications services defined by the International Telecommunication Union under specified conditions. Frequency Allotment: The designation of portions of an allocated frequency band to individual countries or geographical areas for a particular radio communication service; for a satellite service, specific orbital positions may also be allotted to individual countries. Frequency Assignment: Authorization given by a nation's government for a station or an operator in that country to use a specific radio frequency channel under specified conditions. Frequency-division Multiplexing (FDM): A method of deriving two or more simultaneous, continuous channels from a transmission medium by assigning separate portions of the available frequency spectrum to each of the individual channels. 
Frequency Domain: Frequency domain is a term used to describe the analysis of mathematical functions with respect to frequency. Speaking non-technically, a time domain graph shows how a signal changes over time, whereas a frequency domain graph shows how much of the signal lies within each given frequency band over a range of frequencies. A frequency domain representation also includes information on the phase shift that must be applied to each frequency in order to be able to recombine the frequency components to recover the original time signal. The frequency domain relates to the Fourier series by decomposing a signal into an infinite or finite number of frequencies. Frequency-hopping spread spectrum (FH-SS): Technique of spectrum spreading performed to secure the data transmission by selecting among multiple possible tones based on a pseudorandom sequence.
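The time-domain/frequency-domain relationship described above can be sketched with a naive discrete Fourier transform in standard-library Python (a real application would use a fast Fourier transform instead):

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform: time domain -> frequency domain."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A cosine completing one cycle over 8 samples concentrates its energy
# in frequency bins 1 and n-1 (the positive and negative frequency).
tone = [math.cos(2 * math.pi * t / 8) for t in range(8)]
magnitudes = [round(abs(x), 6) for x in dft(tone)]
print(magnitudes)  # [0.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 4.0]
```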
Frequency Reuse Factor (K): A number based on frequency reuse that determines how many channels per cell. Frequency-shift Keying (FSK): Frequency modulation in which the modulating signal shifts the output frequency between predetermined values. Also called frequency-shift modulation, frequency-shift signaling. Frequency Stacking: The process that allows two identical frequency bands to be sent over a single cable by up-converting one of the frequencies and "stacking" it with the other. Fresnel Reflection Loss: Reflection losses at the ends of fibers caused by differences in the refractive index between glass and air. The maximum reflection caused by a perpendicular air-glass interface is about 4% or about -14 dB. Front-end: The part of a program or process that the user interfaces with and controls. See also back-end. Front-projection TV: A 2-piece display system consisting of a separate front projector (typically placed on a table or ceiling-mounted) and screen. Front-projection systems can display images up to 20 feet across, or larger. The traditional large, expensive CRT-based front projectors are rapidly giving way to much more compact and lightweight digital projectors using DLP or LCD technology. These digital projectors are usually much more affordable, too. Full Duplex Transmission: Originally this referred to a communication between a modem and a remote system, where characters were sent both ways over the phone line so that they could be accurately displayed on a terminal. Now full duplex has taken on the meaning that signals can be sent in both directions at the same time, such as in network communications. This requires either twice the amount of wires or differing frequencies for each type of signal so they do not interfere when on the same wire. Full duplex network connections are preferred, especially for servers, which must send and receive a lot of data.
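Frequency-shift keying, defined above, can be sketched by generating one of two tones per bit. This Python example is a simplification; the tone pair, sample rate and baud rate are illustrative assumptions, not values from any particular standard:

```python
import math

def fsk_modulate(bits, f_mark=2200.0, f_space=1200.0,
                 sample_rate=9600, baud=1200):
    """Binary FSK: a 1 bit selects the mark tone, a 0 bit the space tone."""
    samples_per_bit = sample_rate // baud
    wave = []
    for i, bit in enumerate(bits):
        freq = f_mark if bit else f_space
        for n in range(samples_per_bit):
            t = (i * samples_per_bit + n) / sample_rate
            wave.append(math.sin(2 * math.pi * freq * t))
    return wave

wave = fsk_modulate([1, 0, 1, 1])
print(len(wave))  # 32 samples: 4 bits x 8 samples per bit
```

A receiver distinguishes the bits by detecting which of the two predetermined frequencies is present during each bit period.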

Full Screen Anti-Aliasing: FSAA. A method used by 3D graphics cards to provide anti-aliasing to all objects in a 3D environment. This acts to smooth out the jagged appearance of edges on some 3D objects. Different graphics cards have different implementations of FSAA, and image quality varies greatly. Function key (F1, F2, etc.): One of the set of 12 keys at the top of a standard computer keyboard. These keys are labelled F1 through F12. The keys are basically
general purpose extra keys so that programmers can assign the keys to special functions in their programs. One handy and common use of F3 in applications is to "Find again," or find the value again for which you most recently searched. Fused Coupler: A method of making a multimode or single-mode coupler by wrapping fibers together, heating them, and pulling them to form a central unified mass so that light on any input fiber is coupled to all output fibers.

Fusion Splicer: An instrument that permanently bonds two fibers together by heating and fusing them.

Fuzzy Logic: Logic without an absolute true or false. Instead, you have gradients of true and false. This is necessary for solving some problems, especially those involving artificial intelligence. For example, the question, "Do I get some food now?" isn't always yes or no, and varies due to environmental factors and degrees of hunger. FWHM: Abbreviation for full width half maximum. Used to describe the width of a spectral emission at the 50% amplitude points. Also known as FWHP (full width half power).
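The hunger example in the Fuzzy Logic entry above can be sketched as a membership function returning a degree of truth between 0 and 1 rather than a strict True/False (a Python sketch; the 2- and 6-hour breakpoints are illustrative assumptions):

```python
def hungry(hours_since_meal: float) -> float:
    """Fuzzy truth value for "I am hungry": 0.0 is definitely false,
    1.0 is definitely true, with a linear gradient in between."""
    if hours_since_meal <= 2.0:
        return 0.0
    if hours_since_meal >= 6.0:
        return 1.0
    return (hours_since_meal - 2.0) / 4.0

print(hungry(1.0), hungry(4.0), hungry(8.0))  # 0.0 0.5 1.0
```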

-G-
G703: ITU-T Data Standard Recommendation. G.703 General aspects of digital transmission systems - Terminal Equipment. Physical/electrical characteristics of hierarchical digital interfaces using 75 ohm unbalanced or 120 ohm balanced interconnects. GaAlAs: Abbreviation for gallium aluminum arsenide. Generally used for short wavelength light emitters. GaAs: Abbreviation for gallium arsenide. Used in light emitters. GaInAsP: Abbreviation for gallium indium arsenide phosphide. Generally used for long wavelength light emitters. Gain: Measures the light-reflecting ability of a projection screen. The higher the number, the greater the amount of light reflected back to the viewer(s). Gain, dB: A ratio of output divided by input, expressed in decibels. In antennas, the ratio of the radiation intensity, in a given direction, to the radiation intensity that would be obtained if the power accepted by the antenna were radiated equally in all directions (isotropically). Gain, dBd: Antenna gain, expressed in decibels referenced to a half wave dipole. Gain, dBi: Antenna gain, expressed in decibels referenced to a theoretical isotropic radiator. Gain, dBic: Antenna gain, expressed in decibels referenced to a theoretical isotropic radiator that is circularly polarized. Game Console: A consumer gaming system that is usually sold in retail stores and toy stores. It differs from a standard computer in that it is designed from the ground up to provide a certain amount of game playing power at a specific price point, and typically is very limited in upgradeability and expansion. Game Port: A 15-pin analog port on a PC used specifically for game controllers like a joystick. It also doubles as a MIDI connector. Usually you find one on the back of your sound card. Gamma: In computer graphics and digital video, this refers to a numerical parameter that describes the nonlinearity of intensity reproduction. 
Basically, as colors get lighter the human eye has more trouble discerning them, and a gamma setting is used to compensate for this so that shades of color on an object, such as those caused by shadows, can be discerned properly. Incorrect gamma settings can cause colors to look too dark or too light, losing detail to the viewer. Gamma Correction: The process of altering the gamma of computer graphics, pictures, or video in order to make the picture show up properly and enable the proper recognition of shades of color. Some graphics accelerators have gamma correction features, as early 3D games often had issues with colors appearing too dark.
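As a sketch of the correction described above, a normalized intensity can be raised to the power 1/gamma. The 2.2 default below is a commonly assumed display gamma, used here as an illustrative assumption:

```python
def gamma_correct(intensity: float, gamma: float = 2.2) -> float:
    """Gamma-correct a normalized intensity in [0, 1]: dark values are
    lifted so shadow detail remains visible on a nonlinear display."""
    if not 0.0 <= intensity <= 1.0:
        raise ValueError("intensity must be in [0, 1]")
    return intensity ** (1.0 / gamma)

print(round(gamma_correct(0.5), 3))  # mid-gray comes out around 0.73
```

Black (0.0) and white (1.0) are unchanged; it is the mid-tones and shadows where an incorrect gamma setting loses detail.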
Gap Loss: Loss resulting from the end separation of two axially aligned fibers.

Garbage In Garbage Out (GIGO): Garbage In, Garbage Out or sometimes called Crap In, Crap Out is a computer term describing the fact that the output data is only as good as the input data. It means basically the same as a video term, the output video and audio quality can only be as good as the source video and audio quality. Gate: 1) A device having one output channel and one or more input channels, such that the output channel state is completely determined by the input channel states, except during switching transients. 2) One of the many types of combinational logic elements having at least two inputs. Gateway: In the IP community, an older term referring to a routing device. Today, the term router is used to describe nodes that perform this function, and gateway refers to a special-purpose device that performs an application layer conversion of information from one protocol stack to another. Compare with router.

Gaussian Beam: A beam pattern used to approximate the distribution of energy in a fiber core. It can also be used to describe emission patterns from surface-emitting LEDs. Most people would recognize it as the bell curve (illustrated). The Gaussian beam is defined by the equation: E(x) = E(0)·e^(-x²/w₀²)
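The profile equation above can be evaluated directly; this Python sketch uses unit peak amplitude and unit beam radius as illustrative values:

```python
import math

def gaussian_beam(x: float, e0: float = 1.0, w0: float = 1.0) -> float:
    """Gaussian beam amplitude E(x) = E(0) * exp(-x^2 / w0^2).
    At x = w0 the amplitude falls to 1/e of its peak value."""
    return e0 * math.exp(-(x ** 2) / (w0 ** 2))

print(gaussian_beam(0.0))            # peak amplitude: 1.0
print(round(gaussian_beam(1.0), 4))  # 1/e of the peak: ~0.3679
```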

Generation (loss): The signal degradation caused by successive recordings. Freshly recorded material is first generation, one re-recording, or copy, makes the second, etc. This is of major concern in analog linear editing but much less so using a digital suite. Non-compressed component DVTRs should provide at least twenty generations before any artifacts become noticeable, but the very best multi-generation results are possible with disk-based systems. Generations are effectively limitless. Besides the limitations of recording, the action of processors such as decoders and coders will make a significant contribution to generation loss. The decode/recode cycle of NTSC and PAL is well known for its limitations but equal caution is needed for digital video compression systems, especially those using MPEG, and the color space conversions that typically occur between computers handling RGB and video equipment using Y, Cr, Cb. Genlock: A process of sync generator locking. This is usually performed by introducing a composite video signal from a master source to the subject sync generator. The generator to be locked has circuits to isolate vertical drive, horizontal
drive and subcarrier. The process then involves locking the subject generator to the master subcarrier, horizontal, and vertical drives so that the result is that both sync generators are running at the same frequency and phase. Geosynchronous Orbit: A direct, circular, low-inclination orbit around Earth having a period of 23 hours 56 minutes 4 seconds and a corresponding altitude of 35,784 km (22,240 miles, or 6.6 Earth radii). In such an orbit, a satellite maintains a position above Earth that has the same longitude. However, if the orbit's inclination is not exactly zero, the satellite's ground-track describes a figure eight. In most cases, the orbit is chosen to have a zero inclination, and station-keeping procedures are carried out so that the spacecraft hangs motionless with respect to a point on the planet below. In this case, the orbit is said to be a geostationary orbit. Geostationary Orbit (GEO): A direct, circular geosynchronous orbit at an altitude of 35,784 km that lies in the plane of Earth's equator. A satellite in this orbit always appears at the same position in the sky and its ground-track is a point. Such an arrangement is ideal for some communication satellites and weather satellites since it allows one satellite to provide continuous coverage of a given area of Earth's surface.

GIF (pronounced jif): Graphics interchange format. A computer graphics file format developed by CompuServe for use in compressing graphic images, now commonly used on the Internet. GIF compression is lossless and supports transparency, but allows a maximum of only 256 colors. Images that will gain the most from GIF compression are those which have large areas (especially horizontal areas) with no changes in color. Gigabit Ethernet: Abbreviated GbE, a version of Ethernet, which supports data transfer rates of 1 Gigabit (1,000 megabits) per second. The first Gigabit Ethernet standard (802.3z) was ratified by the IEEE 802.3 Committee in 1998. GIS (Geographic Information System): A system for capturing and manipulating data relating to the Earth. A common use of GIS is to overlay several types of maps (for example, train routes, elevation data, street maps) to determine useful data about a given geographic area. GIX (Global Internet eXchange): Common routing exchange point which allows pairs of networks to implement agreed-upon routing policies. The GIX is intended to allow maximum connectivity to the Internet for networks all over the world. Glitch: This often refers to a bug in a program that is somewhat different than a freeze or a crash, usually causing erroneous or garbage results to be displayed. The term glitch sounds like a mess, and that's what programs with glitches create. Global Positioning System (GPS): A system of satellites around the Earth that broadcast the time via radio signals based on an internal atomic clock. GPS devices can receive the signals from multiple satellites, and by measuring the time it took the signal to arrive they can determine your current position on the Earth. Global System for Mobile Communications (GSM): A 2G digital standard for cellular phone communications that is used in many countries. GSM communications
bands range from 900-1800 MHz. The GSM initials were originally derived from the French "Groupe de travail Spéciale pour les services Mobiles." GMSK (Gaussian minimum shift keying): A form of MSK used in wireless applications (see MSK), which uses a filter with Gaussian impulse response, resulting in narrower spectral containment at the expense of adding dispersion (ISI). GOP (Group of Pictures): A Group Of Pictures (GOP) consists of all the pictures that follow a GOP header before another GOP header. The GOP layer allows random access because the first picture after the GOP header is an Intra picture (I picture), which means it doesn't need any reference to any other picture. The GOP layer is optional, i.e. it's not mandatory to put any GOP header in the bitstream. The header also carries the timecode of the first picture of the GOP to be displayed. As the GOP header is immediately followed by an Intra picture, the decoding process can begin at that point of the bitstream. However, it's possible that some B pictures following such an I picture in the bitstream have references coming from the previous GOP and can't be correctly decoded. In this case the GOP is called an Open GOP because some references from the previous GOP exist; if a random access to such a GOP is performed, some B pictures shouldn't be displayed. A GOP is called a Closed GOP when either there are no B pictures immediately following the first I picture or such B pictures haven't any references coming from the previous GOP (in this case a GOP header flag must be set). In "coding people" language, the GOP length is the period (often expressed in frames) at which an Intra frame occurs. Note that such a value cannot be found in the bitstream and is unnecessary to the decoding process. Furthermore, no fixed period is specified for the Intra frame. 
As the presence of Intra frames is quite important for many applications, it is the encoder that has to provide them, while the decoder only has to work with all valid bitstreams. A GOP of 1 would be I frames only (similar to digital tape formats such as DVCPro50); IB, IB, IB... is a GOP of 2 (similar to the Betacam SX format). For best quality at low data rates (eg. for broadcast), longer GOP lengths of 12 and 15 are used. MPEG-1 and MPEG-2 digital video compression schemes achieve their high degree of compression by relying on any sequence of video frames being very similar, and thus signaling only the changes from frame to frame - eg. an object that has moved, or entered the scene as the camera is panned. Refer to the MPEG Data Hierarchy shown in the figure below.
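The GOP patterns mentioned above (I-only, IB, or a 12-frame GOP) can be sketched by generating the display-order frame types. This Python sketch is a simplification: it ignores transmission order, in which B frames follow the anchor frames they reference:

```python
def gop_pattern(gop_length: int, b_frames: int = 2) -> str:
    """Display-order frame types for one GOP: an I frame, then P frames
    with b_frames B frames between successive anchor frames."""
    pattern = ['I']
    while len(pattern) < gop_length:
        pattern.extend(['B'] * b_frames)
        pattern.append('P')
    return ''.join(pattern[:gop_length])

print(gop_pattern(12))             # IBBPBBPBBPBB
print(gop_pattern(2, b_frames=1))  # IB
```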

Graded-index Fiber: Optical fiber in which the refractive index of the core is in the form of a parabolic curve, decreasing toward the cladding.
GPS: Global Positioning System. A system of satellites around the Earth that broadcast the time via radio signals based on an internal atomic clock. GPS devices can receive the signals from multiple satellites, and by measuring the time it took the signal to arrive they can determine your current position on the Earth. Grand Alliance: The United States grouping, formed in May 1993, to produce "the best of the best" of the initially proposed HDTV systems. The participants are: AT&T, General Instrument Corporation, Massachusetts Institute of Technology, Philips Consumer Electronics, David Sarnoff Research Center, Thomson Consumer Electronics and Zenith Electronics Corporation. The format proposed is known as the ATSC format. See also: ATSC. Graphics Processing Unit (GPU): A microprocessor specifically designed for processing 3D graphics data. This term was first coined by NVIDIA to describe its GeForce 256 chip. NVIDIA justified the name by stating that its graphics chip had a similar number of transistors as then-current CPU chips. Now GPU has become a more widely used term to describe the complex chips that power 3D graphics cards. Gray Market: The market where goods are sold in an unauthorized manner, or before their expected release date. Sometimes gray market goods have been stolen from the manufacturer, or have been dumped by the manufacturer, flooding the market. Gray market products are more likely to be remarked as different products, or unsupported by their manufacturers, than products purchased through proper channels. However, gray market goods are sometimes cheaper or available before their time, making them attractive to potential purchasers. One of the largest sources of gray market products is computer shows, where many gray market goods are sold "as is." Ground: Electrically speaking, this is a neutral point that can absorb excess electricity. 
Often it is the ground itself (as in the ground you stand on), but it is also commonly a large piece of metal, such as the chassis on a car. Electrical devices have to be connected to a ground so that the ground can disperse any excess electrical charges. Ground Loop Noise: Noise that results when equipment is grounded at points having different potentials thereby creating an unintended current path. The dielectric properties of optical fiber provide electrical isolation that eliminates ground loops. Group Index: Also called group refractive index. In fiber optics, for a given mode propagating in a medium of refractive index (n), the group index (N), is the velocity of light in a vacuum (c), divided by the group velocity of the mode. Group Velocity: 1) The velocity of propagation of an envelope produced when an electromagnetic wave is modulated by, or mixed with, other waves of different frequencies. 2) For a particular mode, the reciprocal of the rate of change of the
phase constant with respect to angular frequency. 3) The velocity of the modulated optical power. GroupWare: This term is used to describe any form of software designed to allow a group of people to easily share ideas and data. Examples include Lotus Notes, Novell GroupWise, and Microsoft Exchange. GSM (Global System for Mobile Communications): A 2G digital standard for cellular phone communications that is used in many countries. GSM communications bands range from 900-1800 MHz. The GSM initials were originally derived from the French "Groupe de travail Spéciale pour les services Mobiles." GUI (Graphical User Interface): A GUI (usually pronounced GOO-ee) is a Graphical User Interface to a computer. As you read this, you are looking at the GUI or graphical user interface of your particular Web browser. The term came into existence because the first interactive user interfaces to computers were not graphical; they were text-and-keyboard oriented and usually consisted of commands you had to remember and computer responses that were infamously brief. The command interface of the DOS operating system (which you can still get to from your Windows operating system) is an example of the typical user-computer interface before GUIs arrived. An intermediate step in user interfaces between the command line interface and the GUI was the non-graphical menu-based interface, which let you interact by using a mouse rather than by having to type in keyboard commands.

-H-
H.222.0: This Recommendation | International Standard specifies the system layer of the coding. It was developed principally to support the combination of the video and audio coding methods defined in Parts 2 and 3 of ISO/IEC 13818. The system layer supports five basic functions: 1) the synchronization of multiple compressed streams on decoding; 2) the interleaving of multiple compressed streams into a single stream; 3) the initialization of buffering for decoding start up; 4) continuous buffer management; and 5) time identification. An ITU-T Rec. H.222.0 | ISO/IEC 13818-1 multiplexed bit stream is either a Transport Stream or a Program Stream. Both streams are constructed from PES packets and packets containing other necessary information. Both stream types support multiplexing of video and audio compressed streams from one program with a common time base. The Transport Stream additionally supports the multiplexing of video and audio compressed streams from multiple programs with independent time bases. For almost error-free environments the Program Stream is generally more appropriate, supporting software processing of program information. The Transport Stream is more suitable for use in environments where errors are likely. An ITU-T Rec. H.222.0 | ISO/IEC 13818-1 multiplexed bit stream, whether a Transport Stream or a Program Stream, is constructed in two layers: the outermost layer is the system layer, and the innermost is the compression layer. The system layer provides the functions necessary for using one or more compressed data streams in a system. The video and audio parts of this Specification define the compression coding layer for audio and video data. Coding of other types of data is not defined by this Specification, but is supported by the system layer provided that the other types of data adhere to the constraints defined in 2.7 "Restrictions on the multiplexed stream semantics" in H.222.0. 
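A Transport Stream is carried in fixed 188-byte packets beginning with the sync byte 0x47. This Python sketch parses the main fields of the 4-byte packet header defined by ISO/IEC 13818-1; the sample packet bytes are fabricated for illustration:

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes) -> dict:
    """Extract the main fields of an MPEG-2 Transport Stream packet header."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid 188-byte TS packet")
    return {
        'transport_error':    bool(packet[1] & 0x80),
        'payload_unit_start': bool(packet[1] & 0x40),
        'pid':                ((packet[1] & 0x1F) << 8) | packet[2],
        'scrambling_control': (packet[3] >> 6) & 0x03,
        'continuity_counter': packet[3] & 0x0F,
    }

# A fabricated packet: payload_unit_start set, PID 0x11, counter 7.
pkt = bytes([0x47, 0x40, 0x11, 0x17]) + bytes(184)
print(parse_ts_header(pkt)['pid'])  # 17
```

The 13-bit PID is what lets a demultiplexer pick one program's video, audio, or table packets out of the multi-program stream.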
H.263: A standard for variable low bit-rate coding of video. H.263 performs better than MPEG-1/MPEG-2 at low resolutions and low bit rates. H.263 is less flexible than MPEG, but consequently requires much less overhead. H.264/AVC: A newer video coding layer of MPEG-4, now officially called AVC. MPEG-4 itself contains various encoding methods, called "subsets" or "layers", and H.264 is the latest standardized layer in the MPEG-4 family, launched in late 2002. 1) Technical overview of H.264/AVC: The H.264/AVC design [2] supports the coding of video (in 4:2:0 chroma format) that contains either progressive or interlaced frames, which may be mixed together in the same sequence. Generally, a frame of video contains two interleaved fields, the top and the bottom field. The two fields of an interlaced frame, which are separated in time by a field period (half the time of a frame period), may be coded separately as two field pictures or together as a frame picture. A progressive frame should always be coded as a single frame picture; however, it is still considered to consist of two fields at the same instant in time. 2) Network abstraction layer: The VCL, which is described in the following section, is specified to efficiently represent the content of the video data. The NAL is specified to format that data and

provide header information in a manner appropriate for conveyance by the transport layers or storage media. All data are contained in NAL units, each of which contains an integer number of bytes. The NAL unit specifies a generic format for use in both packet-oriented and bitstream systems. The format of NAL units for both packet-oriented transport and bitstream delivery is identical, except that each NAL unit can be preceded by a start-code prefix in a bitstream-oriented transport layer. 3) Video coding layer: The video coding layer of H.264/AVC is similar in spirit to other standards such as MPEG-2 Video. It consists of a hybrid of temporal and spatial prediction, in conjunction with transform coding. Fig. 1 of the source article shows a block diagram of the video coding layer for a macroblock.
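The start-code prefix mechanism described above can be sketched as follows. This is an illustrative Annex-B byte-stream splitter, not the normative parsing procedure; in particular, the `rstrip` is a simplification that removes the extra zero of a following 4-byte prefix but could also strip legitimate trailing zero bytes:

```python
def split_nal_units(stream: bytes):
    """Split an H.264 Annex-B stream at its 0x000001 start-code prefixes and
    report each NAL unit's type (the low 5 bits of the first header byte)."""
    units = []
    pos = stream.find(b"\x00\x00\x01")
    while pos != -1:
        nxt = stream.find(b"\x00\x00\x01", pos + 3)
        body = stream[pos + 3: nxt if nxt != -1 else len(stream)]
        body = body.rstrip(b"\x00")  # drop the leading zero of a following 4-byte prefix
        if body:
            units.append((body[0] & 0x1F, body))  # (nal_unit_type, payload)
        pos = nxt
    return units

# Two NAL units with dummy payloads: an SPS (type 7) and an IDR slice (type 5)
stream = b"\x00\x00\x00\x01\x67\xAA" + b"\x00\x00\x01\x65\xBB"
nals = split_nal_units(stream)
```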

In summary, the picture is split into blocks. The first picture of a sequence or a random access point is typically Intra coded, i.e., without using information other than that contained in the picture itself. Each sample of a block in an Intra frame is predicted using spatially neighbouring samples of previously coded blocks. The encoding process chooses which neighbouring samples are used for Intra prediction, and how; this prediction is conducted simultaneously at the encoder and decoder using the transmitted Intra prediction side information. For all remaining pictures of a sequence, or between random access points, Inter coding is typically used. Inter coding employs prediction (motion compensation) from other previously decoded pictures. The encoding process for Inter prediction (motion estimation) consists of choosing motion data, comprising the reference picture and a spatial displacement that is applied to all samples of the block. The motion data, which are transmitted as side information, are used by the encoder and decoder to simultaneously provide the Inter prediction signal. The residual of the prediction (either Intra or Inter), which is the difference between the original and the predicted block, is transformed. The transform coefficients are scaled and quantized. The quantized transform coefficients are entropy coded and transmitted together with the side information for either Intra-frame or Inter-frame prediction. The encoder contains the decoder to conduct prediction for the next blocks or the next picture. Therefore, the quantized transform coefficients are inverse scaled and

inverse transformed in the same way as at the decoder side, resulting in the decoded prediction residual. The decoded prediction residual is added to the prediction. The result of that addition is fed into a deblocking filter which provides the decoded video as its output. The average bit-rate savings provided by each encoder, relative to all other tested encoders over the entire set of sequences and bit-rates, are depicted in Table 1. It can be seen that H.264/AVC significantly outperforms all other standards. The highly flexible motion model and the very efficient context-based arithmetic-coding scheme are the two primary factors that enable the superior rate-distortion performance of H.264/AVC.

Source of this item: "The emerging H.264/AVC standard" by Ralf Schäfer, Thomas Wiegand and Heiko Schwarz, Heinrich Hertz Institute, Berlin, Germany. <Reference> [1] ITU-T Recommendation H.262 | ISO/IEC 13818-2 (MPEG-2): Generic coding of moving pictures and associated audio information, Part 2: Video. ITU-T and ISO/IEC JTC1, November 1994. [2] T. Wiegand: Joint Final Committee Draft, Doc. JVT-E146d37ncm, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), November 2002. [3] ITU-T Recommendation H.263: Video coding for low bit-rate communication. Version 1, November 1995; Version 2 (H.263+), January 1998; Version 3 (H.263++), November 2000. [4] ISO/IEC 14496-2: Coding of audio-visual objects, Part 2: Visual. ISO/IEC JTC1. MPEG-4 Visual version 1, April 1999; Amendment 1 (Version 2), February 2000. [5] T. Wiegand and B.D. Andrews: An Improved H.263 Coder Using Rate-Distortion Optimization. ITU-T/SG16/Q15-D-13, April 1998, Tampere, Finland. H.323: An ITU-T standard for transferring multimedia videoconferencing data over packet-switched networks, such as TCP/IP. There is a LAN standard for high-quality video, and an Internet standard for lower-bandwidth video over lines as slow as 28.8 kbit/s. Hacker: Someone who seeks to understand computer, phone, or other systems strictly for the satisfaction of having that knowledge. Hackers wonder how things work and have an incredible curiosity. Hackers will sometimes do legally questionable things, such as breaking into systems, but they generally will not cause harm once they break in. Contrast the term hacker with cracker or malicious hacker.

Half Duplex Transmission: Originally a modem communications term, half duplex now mainly refers to network communications that transmit in one direction at a time.

Handheld Devices Markup Language (Wireless Markup Language): Part of the Wireless Application Protocol (WAP), allowing text portions of Web content to be separated from graphical content for display on wireless devices. Handoff: The process by which a call's frequency channel is changed to a new frequency channel as the vehicle moves from one cell to another, without the user's intervention. Hard Copy: This refers to information printed out on paper, as contrasted to information stored in electronic formats. Hard Disk (HD): This is commonly the slang term for a hard drive. It refers to the type of re-writeable media that is inflexible, as opposed to a floppy disk. A hard drive typically contains multiple hard disk platters. Hardware: Any physical pieces of a computer. For example, the computer chassis itself, a monitor, a printer, DRAM, and video cards are all hardware. See also software. HAVi (Home Audio Video interoperability): HAVi is a home networking standard for interoperability between digital audio and video consumer devices. It is an initiative from eight major consumer electronics companies: Grundig AG, Hitachi, Ltd., Matsushita Electric Industrial Co., Ltd. (Panasonic), Royal Philips Electronics, Sharp Corp., Sony Corp., Thomson Multimedia and Toshiba Corp. Parts of the HAVi specifications, including the user interface, have been included in the MHP. www.havi.org HBT: Abbreviation for Heterojunction Bipolar Transistor. A very high performance transistor structure built using more than one semiconductor material. Used in high performance wireless telecommunications circuits such as those used in digital cell phone handsets and high-bandwidth fiber optic systems.


HD-0: A set of formats based partially on ATSC Table 3, suggested by The DTV Team as the initial stage of the digital television rollout. Pixel values represent full aperture for ITU-R 601. The DTV Team's HD-0 Compression Format Constraints. HD-1: A set of formats partially based on ATSC Table 3, suggested by The DTV Team as the second stage of the digital television rollout, expected to be formalized in the year 2000. Pixel values represent full aperture for ITU-R 601. (Items in bold have been added to HD-0.) The DTV Team's HD-1 Compression Format Constraints. HD-2: A set of formats partially based on ATSC Table 3, suggested by The DTV Team as the third stage of the digital television rollout, contingent on some extreme advances in video compression over the next five years. Pixel values represent full aperture for ITU-R 601. (Items in bold have been added to HD-1.) The DTV Team's HD-2 Compression Format Constraints. HDB3: High Density Bipolar 3. A bipolar coding method that does not allow more than three consecutive zeros. HD D5: A compressed recording system developed by Panasonic which uses compression at about 4:1 to record HD material on standard D5 cassettes. HD D5 supports the 1080 and the 1035 interlaced line standards at both 60 Hz and 59.94 Hz field rates, all 720 progressive line standards, and the 1080 progressive line standard at 24, 25 and 30 frame rates. Four uncompressed audio channels sampled at 48 kHz, 20 bits per sample, are also supported. HDCAM: Sometimes called HD Betacam, this is a means of recording compressed high-definition video on a tape format (1/2-inch) which uses the same cassette shell as Digital Betacam, although with a different tape formulation. The technology is aimed specifically at the USA and Japanese 1125/60 markets and supports both 1080 and 1035 active line standards. Quantization from 10 bits to 8 bits and DCT intra-frame compression are used to reduce the data rate.
Four uncompressed audio channels sampled at 48 kHz, 20 bits per sample, are also supported. HDCP (High-bandwidth Digital Content Protection): Copy-protection scheme developed by Intel to be used in conjunction with DVI and HDMI connections. HDCP encryption is used with high-resolution signals over DVI and HDMI connections and on D-Theater D-VHS recordings to prevent unauthorized duplication of copyrighted material. HD-DVD (High-definition digital video disc): HD-DVD (HD stands for both high-density and high-definition) was under development before DVD came out. It finally emerged in 2003. Some high-definition versions of HD-DVD use the original DVD physical format but depend on new video encoding technology such as H.264 to fit high-definition video in the space that used to hold only standard-definition video. High-density formats use blue or violet lasers to read smaller pits, increasing data

capacity to around 15 to 30 GB per layer. High-density formats use high-definition MPEG-2 video (for compatibility with ATSC and DVB HD broadcasts) and may also use advanced encoding formats, probably supporting 1080p24 video. HD discs will not play on existing players. As of early 2003 there are five proposals for HD-DVD, with the possibility of others: HD-DVD-9 (aka HD-9), Advanced Optical Disc (AOD), Blu-ray Disc (BD), Blue-HD-DVD-1, and Blue-HD-DVD-2. HDL (Hardware Description Language): A kind of language used for the conceptual design of integrated circuits. Examples are VHDL and Verilog. HDLC (High-Level Data Link Control): Bit-oriented synchronous data link layer protocol developed by ISO. Derived from SDLC, HDLC specifies a data encapsulation method on synchronous serial links using frame characters and checksums. HDLC shares the frame format of SDLC, and HDLC fields provide the same functionality as those in SDLC. Also, as in SDLC, HDLC supports synchronous, full-duplex operation. HDLC differs from SDLC in several minor ways, however. First, HDLC has an option for a 32-bit checksum. Also, unlike SDLC, HDLC does not support the loop or hub go-ahead configurations. The major difference between HDLC and SDLC is that SDLC supports only one transfer mode, whereas HDLC supports three:

Normal response mode (NRM): This transfer mode is also used by SDLC. In this mode, secondaries cannot communicate with a primary until the primary has given permission. Asynchronous response mode (ARM): This transfer mode enables secondaries to initiate communication with a primary without receiving permission. Asynchronous balanced mode (ABM): ABM introduces the combined node, which can act as a primary or a secondary, depending on the situation. All ABM communication occurs between multiple combined nodes. In ABM environments, any combined station can initiate data transmission without permission from any other station.
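The entry above does not detail HDLC's framing, but a well-known part of its synchronous operation is zero-bit insertion: after five consecutive 1s in the payload a 0 is stuffed, so the data can never imitate the 01111110 flag that delimits frames. A sketch (bits represented as a character string for clarity):

```python
def bit_stuff(bits: str) -> str:
    """HDLC zero-bit insertion: append a 0 after every run of five 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                out.append("0")  # stuffed bit, removed again by the receiver
                run = 0
        else:
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Receiver side: discard the 0 that follows each run of five 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        if b == "1":
            run += 1
            if run == 5:
                i += 1  # skip the stuffed 0
                run = 0
        else:
            run = 0
        i += 1
    return "".join(out)

payload = "0111111101111110"
roundtrip = bit_unstuff(bit_stuff(payload))
```

Because no stuffed sequence ever contains six 1s in a row, the 01111110 flag remains unambiguous on the line.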

HDMI (High-Definition Multimedia Interface): HDMI is the first industry-supported, uncompressed, all-digital audio/video interface. HDMI provides an interface between any audio/video source, such as a set-top box, DVD player, and A/V receiver and an audio and/or video monitor, such as a digital television (DTV). HDMI supports standard, enhanced, or high-definition video, plus multi-channel digital audio on a single cable. It transmits all ATSC HDTV standards and supports 8-channel digital audio, with bandwidth to spare to accommodate future enhancements and requirements. Using an adapter, HDMI is backward-compatible with most current DVI connections. HDTV: High definition television. High definition television has a resolution of

approximately twice that of standard television in both the horizontal (H) and vertical (V) dimensions and a wide-screen picture aspect ratio (H:V) of 16:9. Rec. ITU-R BT.1125 further defines HDTV quality as the delivery of a television picture that is subjectively identical to the interlaced HDTV studio standard. For digital TV transmissions using MPEG-2, a receiver with a Main Profile at High Level (MP@HL) MPEG decoder is required. Note that the number of TV lines alone does not give an indication of the perceived picture sharpness; different types of display will affect the result. For example, it is generally agreed that a 720-line progressive format display will have a similar degree of sharpness in the vertical direction to a 1080-line interlace format display. The higher resolution picture is the main selling point for HDTV. Imagine 720 or 1080 lines of resolution compared to the 525 lines people are used to in the United States (or the 625 lines in Europe). Of the 18 DTV formats, six are HDTV formats, five of which are based on progressive scanning and one on interlaced scanning. Of the remaining formats, eight are SDTV (four wide-screen formats with 16:9 aspect ratios, and four conventional formats with 4:3 aspect ratios), and the remaining four are video graphics array (VGA) formats. Stations are free to choose which formats to broadcast. The formats used in HDTV are: 720p (1280x720 pixels, progressive); 1080i (1920x1080 pixels, interlaced); 1080p (1920x1080 pixels, progressive). HDTV-ready: Term used to describe TVs which can display digital high-definition TV formats when connected to a separate HDTV tuner. These TVs generally have built-in tuners for receiving regular NTSC broadcasts, but not digital. An HDTV-ready TV may also be referred to as an "HDTV monitor."
Headend: 1) A central control device required within some LAN and MAN systems to provide such centralized functions as remodulation, retiming, message accountability, contention control, diagnostic control, and access to a gateway. 2) A central control device within CATV systems to provide such centralized functions as remodulation.

Header: In data transmission, the header is protocol control information located at the beginning of a protocol data unit. Heatsink: A device that makes contact with a microprocessor (or other object in need of cooling) and removes heat by exchanging it with air in a more efficient

manner than the flat surface the heatsink is attached to. The heatsink does this by providing more surface area than a flat surface. This is usually accomplished by having a group of fins built into the shape of the heatsink. The longer the fins, and the more of them, the higher the surface area, and the better the efficiency. Most heatsinks are aluminum for cost and weight reasons. Copper heatsinks are more efficient, but are seldom used because they are much more expensive, heavier, and harder to build. It is common for a fan to be mounted onto a heatsink to increase the efficiency of heat exchange by blowing cooler air through the fins on the heatsink. Help Desk: If you've ever worked in an office environment, you may have called the help desk. This is the support organization designed to take care of your computer and phone problems. Help Desk staff work long hours, have a never-ending workload, get harassed by everyone, and get crappy pay. Hertz: A rental company, formerly represented by O.J. Simpson. It is also a measure of frequency: one hertz means one cycle per second, so one megahertz (MHz) means one million cycles per second. This is the common measure of clock speed for processors and electronic activity inside a computer. Heterogeneous Data Sources: A data-warehousing term that describes the idea of drawing data from several different (heterogeneous) data sources on different platforms and computers. Hexadecimal (Hex): A base-16 number system where 16 digits are used instead of 10. In addition to the standard base-10 system numbers 0-9, letters A-F are used to represent the numbers 10-15. Thus, a one-digit hexadecimal number is a value between 0 and 15 (where F is 15). A two-digit hex number is a value between 0 and 255 (represented as 00 and FF in hex, where 00 is 0*16 + 0*1, and FF is 15*16 + 15*1), and so forth.
Hexadecimal numbers are often used to represent data because they are easily translatable to base-2 numbers, which map directly onto the 0 and 1 (off and on) conditions present in electronics, while base-10 numbers do not. HFC: Hybrid fiber coax. A type of network that contains both fiber-optic cables and copper coaxial cables. The fiber-optic cables carry TV signals from the head-end office to the neighborhood; there the signals are converted to electrical signals and carried onward over coaxial cables. HFC (Hybrid Fiber Coax): A transmission system or cable construction that incorporates both fiber-optic transmission components and copper coax transmission components.
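The base-16 arithmetic in the Hexadecimal entry above can be checked in a few lines; the `to_hex` helper is purely illustrative:

```python
# FF = 15*16 + 15*1 = 255, exactly as the entry states
assert int("FF", 16) == 15 * 16 + 15 * 1 == 255
assert int("00", 16) == 0

def to_hex(n: int) -> str:
    """Render a non-negative integer in base 16, digits 0-9 and A-F."""
    digits = "0123456789ABCDEF"
    if n == 0:
        return "0"
    out = ""
    while n:
        out = digits[n % 16] + out  # least-significant hex digit first
        n //= 16
    return out
```

Each hex digit corresponds to exactly four bits, which is why hex is so convenient for displaying binary data.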


HFC Network: A telecommunication technology in which optical fiber and coaxial cable are used in different sections of the network to carry broadband content. The network allows a CATV company to install fiber from the cable headend to serve nodes located close to businesses and homes, and then run coaxial cable from these fiber nodes to individual businesses and homes.

Hierarchical Coding: Not used in DVB implementations. This is a technique intended to give various grades of receivers (decoders) the opportunity to decode only the resolution they require. For example, a high-resolution picture could be transmitted, and a small hand-held PDA device could receive and display a low-resolution picture. It is a possible feature of MPEG-2, where the video and/or audio information can be coded and scaled spatially or temporally; that is, a base or coarse layer and a fine-detail layer may be sent separately. For example, the Simple, Main, SNR Scalable, Spatially Scalable and High profiles have a hierarchical relationship. Therefore the syntax supported by a higher profile includes all the syntactic elements of lower profiles (e.g., for a given level, a Main profile decoder shall be able to decode a bitstream conforming to Simple profile restrictions). Refer to ISO/IEC 13818, Parts 1 to 3. Hierarchical COFDM Modulation: A feature of the DVB-T COFDM system is that radio-frequency transmissions may be configured in any one of several hundred possible modes to optimise the transmission for ghosting interference and/or extended distance coverage. In general, the transmission can be made less prone to interference and receivable in a wider range of locations, but at the expense of a reduction of the data-carrying capacity. However a group of transmission modes known as "hierarchical modulation" (HM) may provide much improved receivability for a proportion of the data with only marginal loss of receivability of the remaining data transmission. Hierarchical modulation may offer viewers who don't have access to a good antenna system or are located in difficult or marginal reception areas the ability to receive some of the services offered by a broadcaster, whereas the alternative may be none at all.
The only alternative for a broadcaster is the installation of some remedial infrastructure such as translators or gap-fillers, but these may not offer much help for the restricted reception encountered in urban high-rise areas. When operating in a hierarchical modulation mode, the data-carrying capacity is effectively split into two parts, known as high and low priority. The high priority part can be received at greater distances or in more difficult reception locations than the low priority part, which has similar receivability characteristics to a standard non-hierarchical broadcast. Each part carries its own transport data stream, and the sum of these two data streams is close to the capacity of a non-hierarchical transmission. Because there are two separate data streams there is a small data loss

due to the need for each stream to have its own System Information (SI) tables. In technical terms, it is best understood by considering the COFDM constellation of possible modulated states in each of the four quadrants. The simplest and most robust modulation is 4QAM (QPSK), with 16QAM and 64QAM as other, less robust possibilities. If QPSK is taken and, say, a lower-amplitude 16QAM signal (at a synchronous clock rate and the same guard interval) is added on top, then the constellation would have the appearance of 64QAM, but the QPSK and 16QAM would each be carrying their own data (transport stream). The high priority stream is carried on the base QPSK modulation and usually protected by stronger FEC and/or a shift of the modulation pattern (alpha factor) by increasing the amplitude of the QPSK component. Another possibility is QPSK on QPSK, the resultant constellation then appearing as 16QAM. The high priority part of the COFDM transmission is capable of being more reliably received under difficult reception conditions such as in mobile or portable situations, whereas the low priority stream (if 16QAM) could have comparable receivability to standard 64QAM. Receivers with a single COFDM demodulator will need to select either the high priority stream or, if reception conditions allow, the low priority stream. With the 16QAM-on-QPSK case, the demodulated high priority stream typically has a data capacity of between one third and one quarter of the non-hierarchical signal, while the low priority is about two thirds to one half. Refer to ETSI EN 300 744 and AS 4599. This implies that in a 7 MHz channel, typically 6 Mb/s and 14 Mb/s may be carried in each transport stream, allowing SD in the high priority stream and a progressive format HD or SD and other combinations in the low priority stream. Hierarchical COFDM transmissions may be identified by the COFDM low data rate Transmission Parameter Signalling (TPS) or in the DVB-T NIT of the high priority QPSK stream.
To assist receiver operation, each stream should also carry NIT-other and SDT-other with details about the other stream. Hierarchical Relationship: A relationship in which elements at lower levels are subordinate to elements at higher levels. Just think of the military hierarchy, where a General is above a Captain, who is above a Private. HIPPI: High performance parallel interface. A parallel data channel used in mainframe computing that supports data transfer rates of 100 megabytes per second. HIPPI is defined by the ANSI X3T9.3 document, a standard technology for physically connecting devices at short distances and high speeds, used primarily to connect supercomputers and to provide high-speed backbones for Local Area Networks (LANs). High Frequency (HF): A signal in the frequency range of 3 to 30 MHz. Hit: When a user requests an HTML document, image, or external object on the World Wide Web, the server records that request as a "hit." The problem with measuring popularity based on "hits" is that webservers, depending on the logging level, count each graphic on that page as a hit. For example, if you look at a page with 5 images on it, some servers count that as 6 hits: one for the HTML page and 5 for the graphics. For this reason "hits" are a questionable measurement of how popular a website actually is.
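The 16QAM-on-QPSK superposition described in the Hierarchical COFDM Modulation entry above can be sketched numerically. The amplitudes chosen here (±4 for the QPSK layer, unit-spaced 16QAM offsets) are illustrative only, not the normative DVB-T alpha spacing:

```python
# High-priority layer: QPSK, one large-amplitude point per quadrant
qpsk = [complex(i, q) for i in (-4, 4) for q in (-4, 4)]

# Low-priority layer: a lower-amplitude 16QAM symbol added on top
qam16 = [complex(i, q) for i in (-3, -1, 1, 3) for q in (-3, -1, 1, 3)]

# Adding a 16QAM offset to each QPSK point yields 4 x 16 distinct points,
# i.e. a constellation with the appearance of 64QAM, as the entry describes.
composite = {hp + lp for hp in qpsk for lp in qam16}
```

A robust receiver decodes only the quadrant (the QPSK layer); a receiver with good signal conditions also resolves the 16QAM offset within the quadrant.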


Homepage: A page on the World Wide Web (WWW) that is the first page of a website. Usually it is called index.html or index.htm. When you go to a Web address, this is the first page your browser looks for. Honeypot: A system left open and unprotected to entice hackers to break into it. Usually this is done so that system administrators can monitor the methods used to break in, the frequency of attack, or just to throw off attackers from the real goodies. Horizontal Polarization: In an antenna, a linearly polarized electric field vector whose direction is horizontal relative to ground or some arbitrary coordinate system. Host: A generic term used to describe a computer or program that makes a resource available, usually over a network. It is often used as a prefix along with other terms, such as host IP or host computer. HTML (HyperText Markup Language (RFC 1866)): HTML defines a simple method of giving layout to documents. It can be created with almost anything, from a simple plain-text editor (you type it in from scratch) to sophisticated WYSIWYG authoring tools. HTML uses tags such as <h1> and </h1> to structure text into headings, paragraphs, lists, hypertext links, graphics and multimedia. HTML is already overburdened with dozens of interesting but incompatible inventions from different manufacturers, because it provides only one way of describing your information. HTML is considered to be at the limit of its usefulness as a way of describing information, and while it will continue to play an important role for the content it currently represents, many new applications require a more robust and flexible infrastructure and are better served by an XML format. Three versions of HTML have stabilized the explosion in functionalities of the Web's primary markup language. HTML 3.2 was published in January 1997, followed by HTML 4 (first published December 1997, revised April 1998, revised as HTML 4.01 December 1999).
XHTML 1.0, which features the semantics of HTML 4 using the syntax of XML, became a Recommendation in January 2000. HTTP (HyperText Transfer Protocol): The way the data in an HTML document is transferred. A document coming in over the HTTP protocol, usually TCP/IP port 80, is read as an HTML document. You may notice in your Internet browser's address bar that the address begins with "HTTP://" in order to tell the browser to expect HTML files. HTTPS (Secure HyperText Transfer Protocol): A secure means of transferring data using the HTTP protocol. Typically HTTP data is sent over TCP/IP port 80, but HTTPS data is sent over port 443. This standard was developed by Netscape for secure transactions, and uses 40-bit encryption ("weak" encryption) or 128-bit ("strong" encryption). If you are at a secure site, you will notice that there is a closed lock icon in the bottom area of your Navigator or IE browser. The HTTPS standard supports certificates. A webserver operator must get a digital certificate from a third-party certificate provider that ensures that the webserver in question is valid. This certificate gets installed on the webserver, and verifies for a period of a year that the server is a proper secure server. Hub: A central connection point. This is standard terminology for a device that connects multiple computers in a network.

Huffman Coding: Huffman coding is a common technique for compressing a set of data by statistical methods. The technique is to use short codes to represent high-probability symbols, and longer codes for lower-probability ones. Huffman encoding has the following properties:

o No two code words consist of an identical sequence of code bits.
o No information beyond the code itself is required to specify the beginning and end of any code value.
o For a given number of symbols arranged in descending order of probability, the length of the code associated with any given symbol will be less than or equal to the length of the code associated with the next less probable symbol.
o No more than two code words of the same length are alike in all but their final bits.
It is widely used in video compression systems, where it often contributes a 2:1 reduction in data. Human Visual System (HVS): The Human Visual System is less sensitive to colour information than it is to brightness information. The chrominance signals can therefore be represented with a lower resolution than the luminance. In short, the luminance component (Y) has a high sampling rate while the colour difference components (Cb and Cr) have a low sampling rate. A colour image is represented in the RGB component system (Image = R-component + G-component + B-component). This system contains information about the brightness (luminance) as well as the colour (chrominance): Red = luminance (Y) + red colour difference (Cr); Blue = Y + Cb; Green = Y + Cg. Because Cr + Cb + Cg = constant, we need only encode and store two chrominance components, usually Cr and Cb. Hybrid Fibre/Coax (HFC) system: A broadband bi-directional shared-media transmission system using fibre trunks between the head-end and the fibre nodes, and coaxial distribution from the fibre nodes to the customer locations. Hyper Link: Pointer within a hypertext document that points (links) to another document, which may or may not also be a hypertext document. Hyper Text: Electronically-stored text that allows direct access to other texts by way of encoded links. Hypertext documents can be created using HTML, and often integrate images, sound, and other media that are commonly viewed using a browser. See also HTML and browser. Hyper Text Transfer Protocol (HTTP): The way the data in an HTML document is transferred. A document coming in over the HTTP protocol, usually TCP/IP port 80, is

read as an HTML document. You may notice in your Internet browser's address bar that the address begins with "HTTP://" in order to tell the browser to expect HTML files. Hyperlink (link): Part of an HTML document that points to another resource. When you view an HTML document using a browser, it is common practice to display hyperlinks in blue with an underlined font. When you click on a hyperlink you will jump, or link, to another area in that document or a different document. The linked document or item may be on the same page, the same server, or a server hundreds of miles away. The work all goes on behind the scenes as long as you are connected to the Internet. Hypertext: A text format that allows for links from keywords in a document to other sections of the document or to other documents.
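The Huffman Coding entry above can be illustrated with a small sketch. The symbol frequencies below are invented for the example; real codecs derive them from measured statistics:

```python
import heapq

def huffman_codes(freqs: dict) -> dict:
    """Build a Huffman code table: frequent symbols receive the shorter codes."""
    # Heap entries: (weight, tiebreaker, {symbol: code-so-far})
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)   # two least-probable subtrees...
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # ...are merged, prefixing
        merged.update({s: "1" + c for s, c in c2.items()})  # 0/1 to their codes
        heapq.heappush(heap, (w1 + w2, n, merged))
        n += 1
    return heap[0][2]

codes = huffman_codes({"e": 45, "t": 20, "a": 15, "x": 5})
```

The resulting table is prefix-free (no code word is the prefix of another), which is exactly the self-delimiting property listed in the entry.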


-I-

IBOC (IN-BAND ON-CHANNEL): iBiquity Digital Corporation's digital audio broadcasting system is designed to permit a smooth evolution from current analog Amplitude Modulation (AM) and Frequency Modulation (FM) radio to a fully digital in-band on-channel (IBOC) system. This system delivers digital audio and data services to mobile, portable, and fixed receivers from terrestrial transmitters in the existing Medium Frequency (MF) and Very High Frequency (VHF) radio bands. Broadcasters may continue to transmit analog AM and FM simultaneously with the new, higher-quality and more robust digital signals, allowing themselves and their listeners to convert from analog to digital radio while maintaining their current frequency allocations. The In-Band On-Channel digital audio system developed by iBiquity Digital Corporation in the USA supports simulcasting of digital audio along with analogue broadcasts in the same frequency band, to overcome the problem of spectrum availability. Successful experiments have been conducted for simulcasting the analogue and digital signals in a common channel bandwidth by keeping the total digital signal power 20 dB below the total analogue FM signal power. The IBOC system also employs OFDM, convolutional coding and QPSK modulation. As a result the IBOC system is able to provide robust reception in a multipath environment. With a guard interval of 150 microseconds, it not only avoids inter-symbol interference but also facilitates the deployment of on-channel repeaters for the coverage of shadow regions. The 400 kHz IBOC channel uses 138 kHz of bandwidth to accommodate multiple OFDM carriers of digital signals adjacent to the analogue sidebands. With this bandwidth, the IBOC system is able to provide a 100 kbps bit rate at the source, out of which 96 kbps can be used for providing high-quality audio. The system uses the MPEG-2 AAC digital audio compression scheme.
A bit rate of 3 to 4 kbps is available for auxiliary data services, of which 1 kbps is dedicated and 2 to 3 kbps is opportunistic and can be used to carry Programme Associated Data (PAD). The dedicated bit rate of 1 kbps can be extended further for data services by reducing the audio signal bit rate. See the list of related specification numbers attached here.
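Two of the figures quoted above can be sanity-checked with a little arithmetic. The snippet below is illustrative only; none of it comes from the iBiquity specification text.

```python
# Back-of-the-envelope checks on two figures quoted in the IBOC entry.

C = 3.0e8                  # speed of light, m/s (free-space approximation)
guard_interval = 150e-6    # guard interval quoted above, in seconds

# An echo (multipath reflection or an on-channel repeater) is harmless while
# its extra path delay fits inside the guard interval:
max_extra_path_km = C * guard_interval / 1000
print(round(max_extra_path_km), "km")   # → 45 km

# "20 dB below the total analogue FM signal power" as a linear power ratio:
digital_vs_analog = 10 ** (-20 / 10)
print(digital_vs_analog)                # → 0.01
```

A 45 km excess path comfortably covers gap-filler repeaters, which is why the entry notes that the guard interval facilitates on-channel repeaters.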

IC (Integrated Circuit): A combination of multiple circuits into a single integrated device. Today the common microprocessor uses many millions of transistors, with each transistor counting as a single circuit. This combination is an integrated circuit. Icon: A graphic, usually a small one, that represents something, usually a file on the

hard drive. Icons can also represent directories, folders, devices, or programs. In a GUI environment, icons can represent just about everything. IDE: Stands for Integrated Drive Electronics when talking about hard drives and storage devices, and for Integrated Development Environment when talking about development tools and programming. IDCT (Inverse Discrete Cosine Transform): A step in the MPEG decoding process that converts data from the frequency domain back to the spatial domain. IEC (International Electrotechnical Commission): The leading international organization, based in Switzerland, that prepares and publishes international standards for all electrical, electronic and related technologies. These serve as a basis for national standardization and as references when drafting international tenders and contracts. http://www.iec.ch/ IEEE (Institute of Electrical & Electronic Engineers): A professional organization that helps set transmission system standards. An international non-profit, technical professional association that supports the publication of academic and professional journals and proceedings in more than 20 disciplines, from computer engineering, biomedical technology and telecommunications to electric power, aerospace and consumer electronics. www.ieee.org/ It also sponsors Standards committees and is accredited by the American National Standards Institute. Several of its standards have become widely used, such as IEEE 802 (Local area network Ethernet and Wi-Fi wireless Ethernet 802.11a & b) and IEEE 1394 (FireWire/i.Link). IEEE 1394: Also known as FireWire or i.LinkTM. A high-speed interconnect standard. Mainly used for consumer digital television applications. The so-called multimedia bus is unique in its ability to carry video and audio with assured quality, based on both its high bandwidth and isochronous ("time-sensitive") services. Products based on P1394a can communicate in a peer-to-peer fashion at 100, 200, or 400 Mbits/sec.
4-wire and 6-wire versions are in common use, with the 6-wire version also carrying DC power to the peripheral device. www.1394ta.org Plug-&-Play is supported, and the standard is widely used for domestic high-speed digital interlinking of DV camcorders with PC editing computers, and of DTV receivers with recorders such as D-VHS. Program content protection and copy management have been added; see DTCP, Item 26. I Frames or I Pictures (Intra-Coded Pictures): One of the three types of frames that are used in MPEG-2 coded signals. These contain data to construct a whole picture as they are composed of information from only one frame (intra frame). The original information is compressed using DCT. An I frame is encoded as a single image, with no reference to any past or future frames. Often video editing programs can only cut MPEG-1 or MPEG-2 encoded video on an I frame since B frames and P frames depend on other frames for encoding information. See also B Frames, P Frames, MPEG.


I Frame Picture Coding:

The MPEG transform coding algorithm includes the following steps: 1. Discrete Cosine Transform (DCT) 2. Quantization 3. Run-length encoding. Both image blocks and prediction-error blocks have high spatial redundancy. To reduce this redundancy, the MPEG algorithm transforms 8x8 blocks of pixels or 8x8 blocks of error terms from the spatial domain to the frequency domain with the Discrete Cosine Transform (DCT). The combination of DCT and quantization results in many of the frequency coefficients being zero, especially the coefficients for high spatial frequencies. To take maximum advantage of this, the coefficients are organized in a zigzag order to produce long runs of zeros. The coefficients are then converted to a series of run-amplitude pairs, each pair indicating a number of zero coefficients and the amplitude of a non-zero coefficient. These run-amplitude pairs are then coded with a variable-length code (Huffman encoding), which uses shorter codes for commonly occurring pairs and longer codes for less common pairs. Some blocks of pixels need to be coded more accurately than others. For example, blocks with smooth intensity gradients need accurate coding to avoid visible block boundaries. To deal with this inequality between blocks, the MPEG algorithm allows the amount of quantization to be modified for each macroblock of pixels. This mechanism can also be used to provide smooth adaptation to a particular bit rate.
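The zigzag scan and run-amplitude pairing described above can be sketched in a few lines. The 8x8 block of "quantized DCT coefficients" below is invented for demonstration; it is not output from any real encoder.

```python
# Sketch of the zigzag scan and run-amplitude coding used after quantization.

def zigzag_order(n=8):
    """Return (row, col) pairs in zigzag order for an n x n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_amplitude_pairs(block):
    """Convert a quantized block into (zero-run, amplitude) pairs."""
    pairs, run = [], 0
    for r, c in zigzag_order(len(block)):
        coeff = block[r][c]
        if coeff == 0:
            run += 1          # extend the current run of zeros
        else:
            pairs.append((run, coeff))
            run = 0
    return pairs              # trailing zeros are dropped (end-of-block code)

# Energy concentrated in the low spatial frequencies, as is typical:
block = [[12, 4, 0, 0, 0, 0, 0, 0],
         [6, 0, 0, 0, 0, 0, 0, 0],
         [0, 2, 0, 0, 0, 0, 0, 0]] + [[0] * 8 for _ in range(5)]

print(run_amplitude_pairs(block))  # → [(0, 12), (0, 4), (0, 6), (5, 2)]
```

In a real encoder each pair would then be replaced by its Huffman code word, with common pairs getting the shortest codes.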


IGP (Interior Gateway Protocol): Internet protocol used to exchange routing information within an autonomous system. Examples of common Internet IGPs include IGRP, OSPF, and RIP. i.LINK: Sony's name for the IEEE 1394, or FireWire, interface. For some reason Sony decided to pick its own name for the IEEE 1394 interface when it is used on Sony's devices. There are no differences between "i.LINK" and IEEE 1394. IIS (Internet Information Server): The name for Microsoft's webserver. It works with server versions of Microsoft's operating systems, and was first developed for Windows NT Server. Starting with Windows 2000 Server, IIS ships on the CD. With Windows NT 4 Server you had to install additional software to get IIS installed. Illegal: Within the context of a computer program, this refers to any operation that attempts to access an area of memory or perform an instruction that is not within the rules of the computer architecture or computer programming language. IMAP (Internet Message Access Protocol): A standard for retrieving mail that is more advanced than POP3 in that you create folders on the server and those folders will show up in any e-mail client you use. In addition, an IMAP client can do more advanced operations than POP3 when retrieving mail from an IMAP server, such as screening subject lines before downloading messages. Illegal Colors: Colors that force a color system to go outside its normal bounds. Usually these are the result of electronically painted images rather than direct camera outputs. For example, removing the luminance from a high intensity blue or adding luminance to a strong yellow in a paint system may well send a subsequent NTSC or PAL coded signal too high or low--producing at least inferior results and maybe causing technical problems. Out of gamut detectors can be used to warn of possible problems.
IMAP (Internet Message Access Protocol): A protocol allowing a client to access and manipulate electronic mail messages on a server. It permits manipulation of remote message folders (mailboxes) in a way that is functionally equivalent to local mailboxes. IMAP includes operations for creating, deleting, and renaming mailboxes; checking for new messages; permanently removing messages; searching; and selective fetching of message attributes, texts, and portions thereof. It does not specify a means of posting mail; this function is handled by a mail transfer protocol such as SMTP.

iMode (i-Mode): A packet-based means of wireless data transfer used widely in Japan and offered by Japanese mobile phone company NTT DoCoMo. iMode uses cHTML (Compact HTML) rather than WAP's WML for data display. i-Mode was introduced in 1999 and was the first method available to browse the Web from a cellular phone. Impedance: A measure of opposition to electrical current flow when a voltage is applied across something, such as a resistor. Impedance is measured in ohms, and is the ratio of voltage to the current flow it allows.


Impedance Matching: The connection of an additional impedance to an existing one in order to achieve a specific effect, such as to balance a circuit or to reduce reflection in a transmission line. Import: The process of pulling data into a program. Normally it refers to taking a plain text file and pulling it into a database format so that you can work with it using a database program. For example, you call a company you need some data from and ask it to send you a file to add to your database. The company sends the data as a text file, and you import it into your database. Now you can run queries on the data in your database program. However, you could import any data format your database program supports. There are hundreds out there. Impression: When a user looks at a single page on the World Wide Web, that visit is counted as one impression regardless of how many images are on that page. Most access log analysis tools remove server requests that reference ".gif" or ".jpg" (image tags) from their analysis. This is the most accurate measurement yet developed. An impression could almost be called a "page look." Infinite Impulse Response (IIR): A form of filter often used in digital signal processing that performs functions similar to those of an analog filter, and is implemented as a recursive difference equation.
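The recursive difference equation behind an IIR filter can be illustrated with a minimal first-order example; the coefficient value here is arbitrary, chosen only for demonstration.

```python
# A minimal first-order recursive (IIR) filter:
#     y[n] = a * x[n] + (1 - a) * y[n-1]

def iir_lowpass(samples, a=0.25):
    out, y = [], 0.0
    for x in samples:
        y = a * x + (1 - a) * y   # each output feeds back into the next
        out.append(y)
    return out

# Step response: the output creeps toward 1.0 and, because of the feedback,
# the impulse response is (in theory) infinitely long -- hence the name.
print([round(v, 3) for v in iir_lowpass([1.0] * 6)])
# → [0.25, 0.438, 0.578, 0.684, 0.763, 0.822]
```

The feedback term `(1 - a) * y` is what distinguishes this from a finite impulse response (FIR) filter, which depends only on past inputs.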

Information Services (IS): This refers to the field of computer technology, but has been replaced by the newer and sexier term "IT." Information Technology (IT): The field of work dealing with computers and technology, or more specifically, the organization within a company that takes care of all of the computers, telephones, webservers, and Internet connectivity that keeps a company able to communicate with the outside world by electronic means. Infrared: A form of radiation that has a wavelength above that of visible light and below that of microwaves. Infrared radiation has wavelengths between 750 and 100,000 nanometers. Infrared sensors are used in night-vision goggles and sensors. Infrared light can also be used to send signals wirelessly back and forth between computing devices, but is limited to short line-of-sight communications. InfraRed Data Association (IRDA): These people developed the IRDA port standard that transfers data through the use of infrared light. Of course, you must have two IRDA devices to get any real use out of this technology. Most notebooks today come standard with this port, as do PDAs and some printers as well. It's handy if you road warriors want to print a document and you've got all the right equipment. I2O: The I2O standard was designed to simplify and speed up I/O operations on servers. It was going to eliminate the need for different drivers for each OS, and for each SCSI card and network card. The speed-up was achieved by using an Intel 960 chip on the server's motherboard to handle a lot of the I/O processing that would normally be handled by the processor or I/O subsystem. The I2O SIG disbanded in late 2000, and it's no longer in existence, much like the technology.

In-band Signalling: The multiplexing service provides multiple connection end points at the user/H.222.1 service boundary. A protocol is provided that signals to the receive side the association between a PDU and a connection end point. The nature of the information carried by the connection is also described. Injection Laser Diode (ILD): A laser employing a forward-biased semiconductor junction as the active medium. Stimulated emission of coherent light occurs at a PIN junction where electrons and holes are driven into the junction. In-line Amplifier: An EDFA or other type of amplifier placed in a transmission line to strengthen the attenuated signal for transmission onto the next, distant site. In-line amplifiers are all-optical devices.

InP: Indium Phosphide. A semiconductor material used to make optical amplifiers and HBTs. Insertion Loss: The loss of power that results from inserting a component, such as a connector, coupler, or splice, into a previously continuous path.
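Insertion loss is normally quoted in decibels. The power-ratio formula below is the standard definition rather than something stated in this entry; the 0.5 mW example figure is invented.

```python
import math

def insertion_loss_db(p_in, p_out):
    """Loss in dB, given power before and after the inserted component."""
    return 10 * math.log10(p_in / p_out)

# A splice that lets 0.5 mW of a 1.0 mW input through costs about 3 dB:
print(round(insertion_loss_db(1.0, 0.5), 2))  # → 3.01
```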

Intellectual Property (IP): Any base of knowledge that was developed for a particular company or entity. Usually, if you work for a technical company you have to sign some sort of agreement that states that any work you do or ideas you have while on company time are that company's intellectual property. As you can imagine, the possession and retention of intellectual property is a lawsuit-laden endeavor. For example, the intellectual property of a software company is not only its software, but also the ideas behind the software, the methods used to program it, and just about anything else you can imagine about the software. Intensity: The square of the electric field strength of an electromagnetic wave. Intensity is proportional to irradiance and may be used in place of the term "irradiance" when only relative values are important. Intensity Modulation (IM): In optical communications, a form of modulation in which the optical power output of a source varies in accordance with some characteristic of the modulating signal. Interface: A connection point that allows for interaction between some hardware or software and other hardware or software (or a person). Interlaced: This means that the graphic data is split (usually into two parts), and is displayed alternately line by line. The first pass draws every even line, and then another pass is made to draw every odd line. So for video output, if you have 480 pixel rows shown vertically, an interlaced image will show 240 pixel rows (every other

pixel row) at once. It will alternate fast enough that the image looks whole and complete. This is a trick used to increase resolution. However, it makes text and small shapes shimmer in a disturbing way. Standard television signals are interlaced, and that's one reason that text looks awful on a television screen. See also non-interlaced. Interleave: This typically refers to the numbering scheme of sectors of data on hard drives. With an interleave of one, sectors are numbered sequentially. With an interleave of two, sequential data is stored on every second sector, and so forth. The reason for interleaves higher than one is that hard drives cannot always keep up with the spin speed of the disk and sometimes need a break before getting to the next region of data. For example, if the data were listed as 11111 xxxxx 22222 yyyyy (interleave of two) then while the hard drive spins over xxxxx it can take some time and get ready to start reading 22222. If a hard drive read head is quick enough to have an interleave of one, data is simply put sequentially onto the media. Interactive Television: A combination of television with interactive content and enhancements. Interactive television provides better, richer entertainment and information, blending traditional TV-watching with the interactivity of a personal computer. Programming can include richer graphics, one-click access to Web sites through TV Crossover Links, electronic mail and chats, and online commerce through a back channel. See also: Back channel. Interference: Any extraneous energy, from natural or manmade sources, that impedes the reception of desired signals. The interference may be constructive or destructive, resulting in increased or decreased amplitude, respectively. Interferometric Intensity Noise (IIN): Noise generated in optical fiber caused by the distributed backreflection that all fiber generates mainly due to Rayleigh scattering. 
OTDRs make use of this scattering power to deduce the fiber loss over distance. Interframe Coding: Data reduction based on coding the differences between a prediction of the data and the actual data. Motion compensated prediction is typically used, based on reference frames in the past and the future. Inter-frame prediction: A compression technique that periodically encodes a complete reference frame and then uses that frame to predict the preceding and following frames. Interlaced: Short for interlaced scanning. Also called line interlace. A system of video scanning whereby the odd- and even-numbered lines of a picture are transmitted consecutively as two separate interleaved fields. Interlace is a form of compression. I/O (Input/Output): This abbreviation refers to any operation in a computer where data is transferred in or out of the computer. I/O may seem like a vague concept, but it refers to the basic power of a computer to communicate with the world outside of the box. Interlaced Scanning: In a television display, interlaced scanning refers to the process of re-assembling a picture from a series of electrical (video) signals. The "standard" NTSC system uses 525 scanning lines to create a picture (frame). The frame/picture is made up of two fields: The first field has 262.5 odd lines (1,3,5...) and

the second field has 262.5 even lines (2,4,6...). The odd lines are scanned (or painted on the screen) in 1/60th of a second and the even lines follow in the next 1/60th of a second. This presents an entire frame/picture of 525 lines in 1/30th of a second. I/O: Input/output. Typically refers to sending information or data signals to and from devices. Intermediate frequency (IF): The carrier center frequency that often follows a frequency conversion stage operating on an RF input. Chosen for ease of subsequent processing, functionality, and standardization. Intermodulation (Mixing): A fiber nonlinearity mechanism caused by the power dependant refractive index of glass. Causes signals to beat together and generate interfering components at different frequencies. Very similar to four wave mixing. Internet Protocol (IP): A connectionless communications protocol that forms part of the basis for the TCP/IP protocol suite. It is a fast protocol, but it has no mechanism for sequencing or error conditions--error packets are simply lost. IP will basically just move datagrams. TCP and UDP run on top of the Internet Protocol, where TCP is the more robust protocol and UDP is intended for applications that don't require exact quality of service--like voice transmission. Internet Relay Chat (IRC): A client/server protocol where IRC clients can connect to an IRC server and "chat." There are several networks of these computers, and there are thousands of channels (or rooms) and hundreds of thousands of people. There is a channel for any interest you could have--it is simply a matter of finding the name. You need to use an IRC client to get to the channels, but as long as you have an Internet connection you can get hooked in. There are also methods of directly transferring files between IRC clients. Internet Server API (ISAPI): An API proposed by Microsoft to replace CGI. 
Programs written to ISAPI are compiled as DLLs and stored in memory so they can be run faster than CGI scripts--or at least CGI scripts not using FastCGI or Mod-Perl, which both have similar caching features. Internet Service Provider (ISP): A company that provides Internet access to people or corporations. Early ISPs generally had pools of modems awaiting dial-up connections, but many ISPs nowadays only deal in high-end business communications. Smaller ISPs buy bandwidth from larger ISPs. Interoperability: Ability of computing equipment manufactured by different vendors to communicate with one another successfully over a network. Interpolation (spatial): When re-positioning or re-sizing a digital image, more, fewer or different pixels are inevitably required from those in the original image. Simply replicating or removing pixels causes unwanted artifacts. For far better results the new pixels have to be interpolated--calculated by making suitably weighted averages of adjacent pixels--to produce a more transparent result. The quality of the results will depend on the techniques used and the number of pixels (points--hence 16-point interpolation), or area of original picture, used to calculate the result.
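The weighted averaging described under Interpolation (spatial) can be sketched as a simple 4-point (bilinear) interpolation; broadcast equipment typically uses more neighbours, such as the 16-point interpolation mentioned above. The tiny image here is invented for demonstration.

```python
# Bilinear (4-point) spatial interpolation of a single pixel value.

def bilinear(img, x, y):
    """Sample image `img` (list of rows) at fractional position (x, y)."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    p00, p01 = img[y0][x0], img[y0][x0 + 1]
    p10, p11 = img[y0 + 1][x0], img[y0 + 1][x0 + 1]
    top = p00 * (1 - fx) + p01 * fx        # blend along the upper row
    bottom = p10 * (1 - fx) + p11 * fx     # blend along the lower row
    return top * (1 - fy) + bottom * fy    # then blend between the rows

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # → 100.0
```

A re-sizer simply evaluates this at every output pixel's fractional source position; higher-point kernels weight a larger neighbourhood for a more transparent result.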

Interpolation (temporal): Interpolation between the same point in space on successive frames. It can be used to provide motion smoothing and is extensively used in standards converters to reduce the judder caused by the 50/60 Hz field rate difference. The technique can also be adapted to create frame averaging for special effects. Intersymbol Interference (ISI): Digital communication system impairment where adjacent symbols in a sequence are distorted by frequency response non-idealities, creating dispersion that interferes in the time domain with neighboring symbols. At a certain threshold, intersymbol interference will compromise the integrity of the received data. Intersymbol interference may be measured by eye patterns. Intranet: A local network of computers using TCP/IP as the standard communications protocol. Usually an intranet features some sort of HTML content that you can use a browser to look at. Think of it as a mini, private Internet. Many companies have intranets that contain information only of use to their employees. Intrinsic Losses: Splice losses arising from differences in the fibers being spliced. Intro: A video clip, often short, used at the start of a disc. Some common intros are THX, DTS, and Dolby Digital sound bites with graphics, or a countdown like the old movies. On a VCD you should ensure the length of the clip is at least 4 seconds for compliance with the specification. Inverse fast Fourier transform (IFFT): Analytical or digital signal processing step that converts frequency domain information into a time domain sequence. Inverse Telecine: Inverse telecine (IVTC) is when a codec takes a 29.97 frames per second interlaced NTSC video that has gone through the telecine process and reconstructs the original 24 frames per second progressive film video. IP (Internet Protocol): 32-bit address assigned to hosts using TCP/IP. An IP address belongs to one of five classes (A, B, C, D, or E) and is written as 4 octets separated by periods (dotted decimal format). Each address consists of a network number, an optional subnetwork number, and a host number. The network and subnetwork numbers together are used for routing, while the host number is used to address an individual host within the network or subnetwork. A subnet mask is used to extract network and subnetwork information from the IP address. Classless interdomain routing (CIDR) provides a new way of representing IP addresses and subnet masks. Also called an Internet address. Class A IP. A group of IP addresses where the first number remains the same, and the last three can vary. It could be represented by w.x.y.z, where the x, y, and z can be any number from 0-255, and the w represents the first static part of the IP address (e.g., 10.x.y.z). Thus, the number of possible combinations within the Class A address is 256*256*256 = 16,777,216. Of course, some of the addresses, like those ending in .0 and .255, are not used, so the actual

number of usable addresses in a Class A is somewhat less than that. The subnet mask of a class A IP address is 255.0.0.0. Thus, only the last three octets of the IP address are used to determine where traffic gets routed within the Class A. There are 126 usable Class A networks in existence (first octet 1-126). Class B IP. A group of IP addresses where the first two numbers remain the same and the last two can vary. It could be represented by w.x.y.z, where the y and z can be any number from 0-255, and the w and x represent the first static part of the IP address (e.g., 10.251.y.z). Thus, the number of possible combinations within the Class B address is 256*256 = 65,536. Of course, some of the addresses, like those ending in .0 and .255, are not used, so the actual number of usable addresses in a Class B is somewhat less than that. The subnet mask of a class B IP address is 255.255.0.0. Thus, only the last two octets of the IP address are used to determine where traffic gets routed within the Class B. There are 16,384 Class B networks in existence. Class C IP. A group of IP addresses where the first three numbers remain the same and the last one can vary. It could be represented by w.x.y.z, where the z can be any number from 0-255, and the w, x, and y represent the first static part of the IP address (e.g., 10.251.37.z). Thus, the number of possible combinations is 256. Of course, some of the addresses, like those ending in .0 and .255, are not used, so the actual number of usable addresses in a Class C is 254. The subnet mask of a class C IP address is 255.255.255.0. Thus, only the last octet of the IP address is used to determine where traffic gets routed within the Class C. There are about 2.1 million Class C networks in existence. IP Address: The specific network address of a computer on a network using TCP/IP as its networking protocol. IP Datagram: Fundamental unit of information passed across the Internet.
Contains source and destination addresses along with data and a number of fields that define such things as the length of the datagram, the header checksum, and flags to indicate whether the datagram can be (or was) fragmented. IPIP: IP Encapsulation within IP IPv4 (Internet Protocol version 4): An outdated version of the IP protocol that is still in use on the Internet. It uses a 32-bit addressing scheme, represented by four 8-bit (0-255) numbers separated by periods, such as 123.3.12.255. The addressing scheme allows for a maximum of about 4.3 billion numbers (256*256*256*256). This gets to be a problem as more and more devices are connected to the Internet. ISPs have taken to using Network Address Translation to get around the problem for now, but IPv6 is the ultimate solution. IPv4 may be with us for a long time, even though it is technically outdated. IPv6 (Internet Protocol version 6): The current version of the IP protocol that features a 128-bit addressing scheme, as opposed to the 32-bit addressing scheme of IPv4, supporting a much higher number of addresses. It also features other improvements over IPv4, such as support for multicast and anycast addressing.
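The role of the subnet mask described in the IP entry above, separating the network part of an address from the host part, can be illustrated as follows; the addresses used are invented examples.

```python
# Applying a subnet mask to split an IPv4 address into network and host parts.

def split_address(addr, mask):
    a = [int(octet) for octet in addr.split(".")]
    m = [int(octet) for octet in mask.split(".")]
    network = ".".join(str(x & y) for x, y in zip(a, m))          # masked bits
    host = ".".join(str(x & (255 ^ y)) for x, y in zip(a, m))     # the rest
    return network, host

# A class C mask keeps the first three octets as the network number:
print(split_address("192.168.37.42", "255.255.255.0"))
# → ('192.168.37.0', '0.0.0.42')
```

A router compares only the network part against its routing table; the host part matters only on the destination subnetwork.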

IPX/SPX: A network protocol made famous by Novell's NetWare operating system. It is much more scalable than NetBEUI, but it is not as easy to route as TCP/IP. Thus, IPX/SPX did not become the protocol of choice for the Internet. Most companies have eliminated IPX/SPX from their networks, as TCP/IP has become the network protocol of choice, and current versions of NetWare now support TCP/IP as the default protocol. IR (Internet Registry): The IR was delegated responsibility for network addresses and autonomous system identifiers by the IANA, which has discretionary authority to delegate portions of its responsibility.

IRE Unit: An arbitrary unit created by the Institute of Radio Engineers to describe the amplitude characteristic of a video signal. In NTSC, pure white is defined as 100 IRE, 0.714 Volts above the blanking level (0 IRE), which in turn sits 0.286 Volts above the sync tip (-40 IRE).
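The IRE-to-voltage relationship is a simple linear conversion; the snippet below uses the NTSC levels from the entry above (100 IRE = 0.714 V of picture above blanking).

```python
# Converting IRE units to volts relative to the blanking level.

IRE_VOLTS = 0.714 / 100    # volts per IRE unit (7.14 mV)

def ire_to_volts(ire):
    """Voltage relative to the blanking level (0 IRE)."""
    return ire * IRE_VOLTS

print(round(ire_to_volts(100), 3))  # white level → 0.714
print(round(ire_to_volts(-40), 3))  # sync tip    → -0.286
```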

IRQ (Interrupt request): A means for devices to request time from the processor to do their jobs. For instance, every time you hit a key on your keyboard an interrupt is generated on the keyboard IRQ. This is mainly only a concern for PC users, and IRQs were the bane of existence for many computer builders when ISA cards were more popular. Nowadays IRQ problems can still be a hassle, as you only get 16 of them and many are used already, but devices share IRQs more intelligently. ISDB-T: Integrated Services Digital Broadcasting-Terrestrial. The Integrated Services Digital Broadcasting (ISDB) system has been developed in Japan and is suitable for both television (ISDB-T) and radio broadcasting (ISDB-TSB). The system is being promoted by the Digital Broadcasting Expert Group (DiBEG). ISDB has two versions: a narrowband version, ISDB-TSB, for digital terrestrial sound and data broadcasting, and a wideband version, ISDB-T, for digital terrestrial television broadcasting. The system deploys Band Segmented Transmission-OFDM (BST-OFDM) and a concatenated error correction code with convolutional and Reed-Solomon codes. In ISDB, the entire transmission band is divided into 13 segments. For sound and data broadcasting with ISDB-TSB, one to three segments can be used.


The basic approach of BST-OFDM is that the transmitted signal consists of the required number of narrowband OFDM blocks called BST-segments. Each segment occupies 429 kHz, 500 kHz or 571 kHz of bandwidth for 6 MHz, 7 MHz or 8 MHz channel bandwidths respectively. For TV broadcasts, 6 to 13 segments are used depending on the quality of TV picture required. ISDB employs two-dimensional interleaving, in both time and frequency. It is a hierarchical system and can use a combination of different modulation schemes, from DQPSK and 16 QAM to 64 QAM, and different code rates for different segments, depending upon the propagation channel and the particular service requirement. Data broadcasting with ISDB-T is possible by assigning one to three segments for data. The system can also use the embedded data in the audio frames of the digital compression schemes MPEG-2 Layer II, MPEG-2 AAC and AC-3 for carrying PAD. See the list of related specification numbers attached here. ISDN: Integrated Services Digital Network. Allows data to be transmitted at high speed over the public telephone network. ISDN operates from the Basic Rate of 64 kbits/sec to the Primary Rate of 2 Mbps (usually called ISDN-30 as it comprises 30 Basic Rate channels). Most of the Western world currently has the capability to install ISDN-2 with 128 kbps, and very rapid growth is predicted for ISDN generally. In the television and film industries, audio facilities are already using it. The cost of a call is usually similar to using a normal telephone. Nominally ISDN operates internationally, but there are variations in standards, service and ISDN adapter technologies. Some operators in the USA use a similar system, Switched 56 (56 kbits/sec and upwards), although the availability of ISDN is becoming wider. IS-IS (Intermediate System-to-Intermediate System): OSI link-state hierarchical routing protocol based on DECnet Phase V routing, whereby ISs (routers) exchange routing information based on a single metric to determine network topology. Compare with Integrated IS-IS.
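The per-segment bandwidths quoted in the ISDB-T entry follow from dividing the channel bandwidth by 14 (13 BST-segments plus one segment-width left as a guard band); that framing of the arithmetic is a common way of stating it, not text from this glossary.

```python
# Reproducing the quoted ISDB-T segment bandwidths from the channel widths.

for channel_mhz in (6, 7, 8):
    segment_khz = channel_mhz * 1000 / 14   # 13 segments + 1 guard width
    print(f"{channel_mhz} MHz channel -> {segment_khz:.0f} kHz per segment")
# → 6 MHz channel -> 429 kHz per segment
#   7 MHz channel -> 500 kHz per segment
#   8 MHz channel -> 571 kHz per segment
```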

ISO: International Organization for Standardization. A worldwide federation of national standards bodies organized to promote the development of standardization to facilitate the international exchange of goods and services, and to develop cooperation in the spheres of intellectual, scientific, technological and economic activity. The best known ISO standards are the ISO 9000 series on quality management. www.iso.org/ ISO 9000: This is a certification granted to companies that manufacture goods. It specifies that they have properly recorded all of their procedures and they meet with certain specifications. ISO 9660: An ISO 9660 file system is a standard CD-ROM file system that allows you to read the same CD-ROM whether you're on a PC, Mac, or other major computer platform. The standard, issued in 1988, was written by an industry group named High Sierra. Almost all computers with CD-ROM drives can read files from an ISO 9660 file system. There are several specification levels. In Level 1, file names must be in the 8.3 format (no more than eight characters in the name, no more than three characters in the suffix) and in capital letters. Directory names can be no longer than eight characters. There can be no more than eight nested directory levels. Level 2 and 3 specifications allow file names up to 32 characters long. Joliet, an extension to ISO 9660 from Microsoft, allows the use of Unicode characters in file names (needed for international users) and file names up to 64 characters in length. ISO/IEC JTC1: This joint technical committee of the ISO and IEC covers international standardization in the field of Information Technology, including the specification, design and development of systems and tools dealing with the capture, representation, processing, security, transfer, interchange, presentation, management, organization, storage and retrieval of information. The MPEG development is within Sub-Committee 29, WG11 of this body.
http://www.jtc1.org/ ISP (Internet Service Provider): A company that provides Internet access to people or corporations. Early ISPs generally had pools of modems awaiting dial-up connections, but many ISPs nowadays only deal in high-end business communications. Smaller ISPs buy bandwidth from larger ISPs. IT (Information Technology): The field of work dealing with computers and technology, or more specifically, the organization within a company that takes care of all of the computers, telephones, webservers, and Internet connectivity that keeps a company able to communicate with the outside world by electronic means. ITU (International Telecommunications Union): The International Telecommunication Union is unique among international organizations in that it was founded on the principle of cooperation between governments and the private sector. With a membership encompassing telecommunication policy-makers and regulators, network operators, equipment manufacturers, hardware and software developers, regional standards-making organizations and financing institutions, ITUs activities, policies and strategic direction are determined and shaped by the industry it serves.
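The Level 1 naming rules in the ISO 9660 entry above (8.3 names, capital letters only) can be checked mechanically. A minimal Python sketch; the function name and the exact permitted character set (letters, digits and underscore) are assumptions for illustration:

```python
import re

# ISO 9660 Level 1 names: at most 8 characters, optional dot plus an
# extension of at most 3 characters, uppercase letters/digits/underscore.
LEVEL1_NAME = re.compile(r"^[A-Z0-9_]{1,8}(\.[A-Z0-9_]{1,3})?$")

def is_level1_name(name: str) -> bool:
    """Return True if `name` satisfies the ISO 9660 Level 1 (8.3) rules."""
    return LEVEL1_NAME.match(name) is not None
```

For example, `is_level1_name("README.TXT")` passes, while a lowercase or over-long name such as `"longname.html"` does not.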

The three Sectors of the Union, Radiocommunication (ITU-R), Telecommunication Standardization (ITU-T) and Telecommunication Development (ITU-D), work today to build and shape tomorrow's networks and services. ITU-R draws up the technical characteristics of terrestrial and space-based wireless services and systems, and develops operational procedures. It also undertakes the important technical studies which serve as a basis for the regulatory decisions made at radiocommunication conferences. ITU-T experts prepare the technical specifications for telecommunication systems, networks and services, including their operation, performance and maintenance. Their work also covers the tariff principles and accounting methods used to provide international service. ITU-D experts focus their work on the preparation of recommendations, opinions, guidelines, handbooks, manuals and reports, which provide decision-makers in developing countries with best business practices relating to a host of issues ranging from development strategies and policies to network management. Email: itumail@itu.int. Internet: www.itu.int.

ITU-R (ITU Radiocommunication Sector): An international standards body and a part of the ITU, based in Geneva, which is the recognized publisher of broadcast radio and television transmission standards and of standards on international program exchange. (Formerly known as the CCIR.) http://www.itu.int/home/index.html

ITU-R BT.601-5: Studio Encoding Parameters of Digital Television for Standard 4:3 and Wide-Screen 16:9 Aspect Ratios. This international standard defines the encoding parameters of digital television for studios. It is the international standard for digitizing component television video in both 525 and 625 line systems and is derived from the SMPTE RP125. ITU-R 601 deals with both color difference (Y, R-Y, B-Y) and RGB video, and defines sampling systems, RGB/Y, R-Y, B-Y matrix values and filter characteristics. It does not actually define the electro-mechanical interface--see ITU-R BT.656. ITU-R 601 is normally taken to refer to color difference component digital video (rather than RGB), for which it defines 4:2:2 sampling at 13.5 MHz with 720 luminance samples per active line and 8 or 10-bit digitizing. Some headroom is allowed, with black at level 16 (not 0) and white at level 235 (not 255), to minimize clipping of noise and overshoots. Using 8-bit digitizing, approximately 16 million unique colors are possible: 2^8 levels each for Y (luminance), Cr and Cb (the digitized color difference signals) gives (2^8)^3 = 2^24 = 16,777,216 possible combinations. The sampling frequency of 13.5 MHz was chosen to provide a politically acceptable common sampling standard between 525/60 and 625/50 systems, being a multiple of 2.25 MHz, the lowest common frequency to provide a static sampling pattern for both.

ITU-R BT.656-4: Interfaces for Digital Component Video Signals in 525-line and 625-line Television Systems Operating at the 4:2:2 Level of Recommendation ITU-R BT.601-5 (Part A). The physical parallel and serial interconnect scheme for ITU-R BT.601-5. ITU-R BT.656-4 defines the parallel connector pin-outs as well as the blanking, sync, and multiplexing schemes used in both parallel and serial interfaces. Reflects definitions in EBU Tech
3267 (for 625-line signals) and in SMPTE 125M (parallel 525) and SMPTE 259M (serial 525).

ITU-R BT.709-5: Parameter values for the HDTV standards for production and international programme exchange. Two HDTV scanning standards, 1125/60/2:1 and 1250/50/2:1, were previously developed for that purpose, having a significant number of parameters which have been agreed on a worldwide basis, and for which some equipment remains in use. An HDTV common image format of 1920 pixels by 1080 lines, providing square pixel sampling and a number of interlace and progressive picture rates, has been designed for digital television, computer imagery and other applications (in this Recommendation, the term pixel is used to describe a picture element in the digital domain). Ratified by the International Telecommunications Union (ITU) in June 1999, the 1920x1080 digital sampling structure is a world format. All supporting technical parameters relating to scanning, colorimetry, transfer characteristics, etc. are universal. The CIF can be used with a variety of picture capture rates: 60p, 50p, 30p, 25p, 24p, as well as 60i and 50i.

Refer to the following ITU Recommendations related to digital broadcasting:
ITU-R BT.798-1: Digital Television Terrestrial Broadcasting in the VHF/UHF Bands.
ITU-R BT.1125: Basic Objectives for the Planning and Implementation of Digital Terrestrial Television Broadcasting Systems.
ITU-R BT.1206: Spectrum Shaping Limits for Digital Terrestrial Television Broadcasting.
ITU-R BT.1207-1: Data Access Methods for Digital Terrestrial Television Broadcasting.
ITU-R BT.1208-1: Video Coding for Digital Terrestrial Television Broadcasting.
ITU-R BT.1209-1: Service Multiplex Methods for Digital Terrestrial Television Broadcasting.
ITU-R BT.1299: The Basic Elements of a Worldwide Common Family of Systems for Digital Terrestrial Television Broadcasting.
ITU-R BT.1301: Data Services in Digital Terrestrial Television Broadcasting.
ITU-R BT.1300-2: Service Multiplex, Transport, and Identification Methods for Digital Terrestrial Television Broadcasting.
ITU-R BT.1306-1: Error-correction, Data Framing, Modulation and Emission Methods for Digital Terrestrial Television Broadcasting.
ITU-R BT.1368-4: Planning Criteria for Digital Terrestrial Television Services in the VHF/UHF Bands.
ITU-R SM.1682: Methods for Measurements on Digital Broadcasting Signals.
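The headroom scheme described in the ITU-R BT.601 entry above (black at code 16, white at 235 in 8-bit coding) can be sketched as a simple mapping. A minimal illustration, with the 10-bit case assumed to scale the 8-bit codes by four and the function name my own:

```python
def luma_to_code(y: float, bits: int = 8) -> int:
    """Map normalized luminance (0.0 = black, 1.0 = white) to a BT.601 code.

    8-bit coding puts black at 16 and white at 235, leaving headroom
    below black and above white for noise and overshoots.
    """
    scale = 1 << (bits - 8)          # 1 for 8-bit, 4 for 10-bit
    return round((16 + 219 * y) * scale)
```

Black maps to code 16 and white to 235 in 8 bits, or 64 and 940 in 10 bits.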


ITU-R BS.775: An international recommendation for multichannel stereophonic sound systems with and without accompanying picture. This recommendation gives speaker placements for various types of sound systems.

ITU-T (ITU Telecommunication Standardization Sector): ITU-T is one of the three Sectors of the International Telecommunication Union (ITU). ITU-T's mission is to ensure an efficient and on-time production of high quality standards (Recommendations) covering all fields of telecommunications. ITU-T Recommendations related to digital broadcasting are listed below.

ITU-T H.262: INFORMATION TECHNOLOGY - GENERIC CODING OF MOVING PICTURES AND ASSOCIATED AUDIO INFORMATION: VIDEO (MPEG-2)
ITU-T H.264: ADVANCED VIDEO CODING FOR GENERIC AUDIOVISUAL SERVICES (H.264 AVC)
These Recommendations specify the coded representation of video data and the decoding process required to reconstruct pictures. They provide a generic video coding scheme which serves a wide range of applications, bit rates, picture resolutions and qualities. The basic coding algorithm is a hybrid of motion compensated prediction and DCT. Pictures to be coded can be either interlaced or progressive. Necessary algorithmic elements are integrated into a single syntax, and a limited number of subsets are defined in terms of Profile (functionalities) and Level (parameters) to facilitate practical use of this generic video coding standard. The second edition of this Recommendation | International Standard consists of ITU-T Rec. H.262 (1995) | ISO/IEC 13818-2:1996, subsequently altered by two corrigenda and six amendments:
1. A first corrigendum adding a slice picture identifier, allowing an application to define default colour description parameters, removing a prohibition of field-structured DCT coding in progressive frames, clarifying ambiguity on the restricted range of reconstructed motion vectors, clarifying ambiguity of VBV at boundaries of sequences, and making various minor corrections.
2. A second corrigendum altering the inverse discrete cosine transform requirements, altering temporal_reference for low delay, and making various minor corrections.
3. A first amendment providing a method of obtaining and registering copyright identifiers.
4. A second amendment defining a 4:2:2 profile.
5. A third amendment adding a camera parameters extension and a multi-view profile.
6. A fourth amendment adding an ITU-T extension.
7. A fifth amendment adding a high level to the 4:2:2 profile.
8. A sixth amendment reducing the upper bound for the number of lines per
frame in the high level of all profiles from 1152 to 1088.

ITU-T H.222.0: INFORMATION TECHNOLOGY - GENERIC CODING OF MOVING PICTURES AND ASSOCIATED AUDIO INFORMATION: SYSTEMS (MPEG-2 SYSTEMS)
This Recommendation | International Standard specifies generic methods for multimedia multiplexing, synchronization and timebase recovery. The specifications provide packet-based multimedia multiplexing in which each elementary bit stream is segmented into Packetized Elementary Stream (PES) packets, which are then multiplexed into either of two streams: the Program Stream (PS), a multiplex of variable-length PES packets designed for use in error-free environments, and the Transport Stream (TS), which consists of 188-byte fixed-length packets, supports multiplexing of multiple programs as well as of various PES packets, and is designed for use in error-prone environments. Multimedia synchronization and timebase recovery are achieved by time-stamps for the system time clock and for presentation/decoding.

ITU-T H.222.1: MULTIMEDIA MULTIPLEX AND SYNCHRONIZATION FOR AUDIOVISUAL COMMUNICATION IN ATM ENVIRONMENTS
This Recommendation describes the multiplexing and synchronization of multimedia information for audiovisual communications in ATM environments. It specifies the H.222.1 Program Stream and the H.222.1 Transport Stream by choosing the necessary coding elements from the generic H.222.0 specifications and by adding items for use in ATM environments. It also covers methods for accommodating ITU-T defined elementary streams.
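The fixed 188-byte Transport Stream packet described above begins with a 4-byte header whose first byte is the 0x47 sync byte. A minimal Python sketch of pulling out the commonly used header fields (the function name is my own; the bit layout follows the MPEG-2 Systems header):

```python
def parse_ts_header(packet: bytes) -> dict:
    """Extract the main fields of a 188-byte MPEG-2 TS packet header."""
    if len(packet) != 188 or packet[0] != 0x47:  # 0x47 = TS sync byte
        raise ValueError("not a valid 188-byte TS packet")
    return {
        "transport_error":    bool(packet[1] & 0x80),   # error indicator
        "payload_unit_start": bool(packet[1] & 0x40),   # PES/PSI start flag
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],
        "continuity_counter": packet[3] & 0x0F,
    }
```

A demultiplexer uses the 13-bit PID extracted here to route each packet to the right elementary stream.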


-J-

Java: A programming language developed by Sun Microsystems (http://java.sun.com/), designed to generate applications that can run, without modification, on many different hardware platforms. Java is an interpreted language. That is, the source code of a Java program is compiled into an intermediate language called bytecode, which cannot run by itself. The bytecode must be converted (interpreted) into machine code at runtime. Upon finding a Java applet, a browser invokes a Java interpreter (Java Virtual Machine, JVM), which translates the bytecode into machine code and runs it. This means Java programs are not dependent on any specific hardware and will run on any computer with the Java Virtual Machine software. While being modeled after the C++ computer language, Java was designed to run in small amounts of memory and provides enhanced features for the programmer, including the ability to release memory when no longer required. This automatic "tidying up" feature has been lacking in C and C++ and has been the bane of programmers for years. Java's "write once, run anywhere" model has been one of the Holy Grails of computing for decades. However, a problem for consumer equipment is that an extra layer of translation is required to execute an interpreted language. That requires more time for a result, or a faster processor. Java TV is divided into packages such as javax.tv.service.navigation, which contain classes such as javax.tv.service.navigation.ServiceType. Like other programming languages, Java is royalty free to developers for writing applications. However, the Java Virtual Machine, which executes Java applications, is licensed to the companies that incorporate it in their browsers, Web servers and DTVs (for MHP).

Java applet: A Java program that is downloaded and run from the browser. The Java Virtual Machine built into the browser interprets the instructions. Contrast with Java application.
JavaScript: A scripting language whose syntax is similar to Java's. It is not compiled into bytecode, but remains as source code contained in an HTML document. When received, it must be translated a line at a time into machine code by the JavaScript interpreter, a software program running in the browser. JavaScript is very popular and is supported by all Web browsers. JavaScript has a more limited scope than Java and primarily deals with the elements on the displayed page itself. JavaScript is endorsed by a number of software companies and is an open language that anyone can use without purchasing a license.

Jitter: Timing jitter is the short-term variation of a digital signal's significant instants from their ideal positions in time, where short term implies phase oscillations of frequency greater than or equal to 10 Hz. Significant instants include, for instance, optimum sampling instants. Long-term variations, where the variations are of frequency less than 10 Hz, are called wander.

Jitter Generation: The process whereby jitter appears at the output port of an individual piece of digital equipment in the absence of applied jitter at the input. When looped back at the high speed rate, whether or not a standard interface exists at the
higher rate, Category I equipment must produce less than 0.3 Unit Intervals (UI) of rms jitter and less than 1.0 UI of peak-to-peak timing jitter at the output of the terminal receiver, as specified in TR-499. In TR-253 for SONET, a DS-3 interface shall generate jitter of less than 0.4 UI peak-to-peak.

Jitter Tolerance: For STS-N electrical interfaces, input jitter tolerance is the maximum amplitude of sinusoidal jitter at a given jitter frequency which, when modulating the signal at an equipment input port, results in no more than two errored seconds cumulative, where these errored seconds are integrated over successive 30-second measurement intervals. Requirements on input jitter tolerance, as just stated, are specified in terms of compliance with a jitter mask, which represents a combination of points. Each point corresponds to a minimum amplitude of sinusoidal jitter at a given jitter frequency which, when modulating the signal at the equipment input port, results in two or fewer errored seconds in a 30-second measurement interval. For the OC-N optical interface it is defined as the amplitude of the peak-to-peak sinusoidal jitter applied at the input of an OC-N interface that causes a 1 dB power penalty.

Jitter Transfer: The relationship between jitter applied at the input port and the jitter appearing at the output port.

Joliet: An extension to ISO 9660, the specification for the file system (including file names) for the content on a compact disc (CD); it allows file names up to 64 characters in length (including spaces) and the use of Unicode characters in file names (sometimes needed for internationalization). Written by Microsoft, Joliet is fully supported in Windows 95 and later Windows operating systems (except Windows NT prior to its version 4). In operating systems (such as Windows 3.1) that support only eight-character file names, a longer file name on the CD is truncated into an eight-character name using a tilde (~) followed by a unique number as the last characters in the name.

Joystick: An input device first found on arcade game machines, then home game systems, and finally on computers. It consists of any stick-like object attached to a base that can be pushed in four or more directions. Usually there's a button in the vicinity of the joystick. Joysticks come in many shapes and sizes, with the typical arcade joystick having a large ball at the top that can be easily gripped; but lately, joysticks have been replaced by cheap button pads in many home gaming systems. However, smaller joysticks that can be pushed around with a single finger have been added to some of the pads as well.

JPEG (Joint Photographic Experts Group): ISO/ITU-T. JPEG is a standard for the data compression of still pictures (intrafield). In particular its work has involved pictures coded to the ITU-R 601 standard. JPEG uses DCT and offers data compression of between two and 100 times; three levels of processing are defined: baseline, extended and "lossless" encoding. See also: Motion-JPEG.
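The DCT at the heart of JPEG, mentioned in the entry above, concentrates a block's energy into a few coefficients. A minimal Python sketch of an orthonormal 8x8 2-D DCT-II, for illustration only (a real JPEG codec uses a fast factorized transform):

```python
import math

def dct2_8x8(block):
    """Orthonormal 2-D DCT-II of an 8x8 block (the transform JPEG builds on)."""
    def a(k):  # normalization factors for the orthonormal form
        return math.sqrt(1 / 8) if k == 0 else math.sqrt(2 / 8)
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            s = 0.0
            for x in range(8):
                for y in range(8):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = a(u) * a(v) * s
    return out
```

For a flat block, all the energy lands in the single DC coefficient, which is why smooth image areas compress so well.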


-K-

K (Kelvin): Measure of temperature on which pure water freezes at 273 K and boils at 373 K.

Karnaugh Mapping: A means of showing the relationship between logic inputs and desired output. Generally a truth table is mapped to a smaller, more workable grid of output values (1s and 0s). Karnaugh Mappings are often used when working with electronic circuits and trying to predict their output.

Kernel: The guts of any operating system. The kernel is loaded into main memory and stays there, while other pieces of the OS are loaded in and out of memory. The kernel controls all requests for disk, processor, or other resources. Generally, the smaller and faster the kernel, the faster the operating system will operate. However, larger kernels can provide more functionality.

Keyboard: The main input device on most PCs. It consists of a "board" with a set of buttons on it that represent all the letters in the alphabet, the numbers 0 through 9, and any extra keys, like cursor keys and function keys, that enable some keys to represent additional characters.

Keyframe: A set of parameters defining a point in a transition, such as a DVE effect. For example, a keyframe may define a picture size, position and rotation. Any digital effect must have a minimum of two keyframes, start and finish, although more complex moves will use more--maybe as many as 100. Increasingly, more parameters are becoming "keyframeable," meaning they can be programmed to transition between two, or more, states. Examples are color correction to make a steady change of color, and keyer settings, perhaps to make an object slowly appear or disappear.

Keying: Generating signals by the interruption or modulation of a steady signal or carrier.

Keystone Correction: "Keystoning" is a form of video image distortion that occurs with front projectors if the centerline of the projector's lens is not perpendicular to the screen. Keystoning results in an image which is shaped like a trapezoid rather than a rectangle: the top of the picture is wider than the bottom, or the left side is taller than the right, or vice versa. Most front projectors include "keystone correction" to correct this distortion. Some models have vertical keystone correction, while others include both vertical and horizontal correction. Although keystone correction allows greater mounting flexibility, it is a form of processing which usually has a slight softening and dimming effect on the picture.

Keyword: A term most often used to describe content on a Web page so that search engines can properly index the page. Keywords are no longer used by most search engines, as they have been abused too many times by people listing keywords that have nothing to do with their pages in an attempt to get extra traffic.
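The Keyframe entry above describes parameters transitioning between two states; the simplest scheme is linear interpolation between a start and an end keyframe. A minimal sketch (the parameter names are hypothetical):

```python
def interpolate(start_kf, end_kf, t):
    """Linearly interpolate every parameter between two keyframes.

    start_kf and end_kf map parameter names (size, position, rotation...)
    to values; t runs from 0.0 (start keyframe) to 1.0 (end keyframe).
    """
    return {name: start_kf[name] + (end_kf[name] - start_kf[name]) * t
            for name in start_kf}
```

Real DVE systems typically use smoother (e.g. spline) interpolation between more than two keyframes, but the per-parameter principle is the same.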


-L-

LAPB (Link Access Procedure, Balanced): A data link layer protocol used to manage communication and packet framing between data terminal equipment (DTE) and data circuit-terminating equipment (DCE) devices in the X.25 protocol stack. LAPB, a bit-oriented protocol derived from HDLC, is in effect HDLC in BAC (Balanced Asynchronous Class) mode. LAPB makes sure that frames are error free and properly sequenced. LAPB shares the same frame format, frame types, and field functions as SDLC and HDLC. Unlike either of these, however, LAPB is restricted to the Asynchronous Balanced Mode (ABM) transfer mode and is appropriate only for combined stations. Also, LAPB circuits can be established by either the DTE or the DCE. The station initiating the call is determined to be the primary, and the responding station is the secondary. Finally, LAPB's use of the P/F bit is somewhat different from that of the other protocols. In LAPB, since there is no master/slave relationship, the sender uses the Poll bit to insist on an immediate response. In the response frame this same bit becomes the receiver's Final bit. The receiver always turns on the Final bit in its response to a command from the sender with the Poll bit set. The P/F bit is generally used when either end becomes unsure about proper frame sequencing because of a possible missing acknowledgement, and it is necessary to re-establish a point of reference. LAPB's frame types:

I-Frames (Information frames): Carry upper-layer information and some control information. I-frame functions include sequencing, flow control, and error detection and recovery. I-frames carry send and receive sequence numbers.

S-Frames (Supervisory frames): Carry control information. S-frame functions include requesting and suspending transmissions, reporting on status, and acknowledging the receipt of I-frames. S-frames carry only receive sequence numbers.

U-Frames (Unnumbered frames): Carry control information. U-frame functions include link setup and disconnection, as well as error reporting. U-frames carry no sequence numbers.
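The three frame types above are distinguished by the low-order bits of the HDLC-style control octet: bit 0 clear marks an I-frame, bits 1..0 = 01 an S-frame, and 11 a U-frame. A minimal sketch (modulo-8 control field assumed):

```python
def lapb_frame_type(control: int) -> str:
    """Classify a LAPB/HDLC frame from its control octet.

    Bit 0 = 0       -> I-frame (information)
    Bits 1..0 = 01  -> S-frame (supervisory)
    Bits 1..0 = 11  -> U-frame (unnumbered)
    """
    if control & 0x01 == 0:
        return "I"
    return "S" if control & 0x03 == 0x01 else "U"
```

For example, a Receive Ready supervisory frame (0x21, N(R)=1) classifies as "S" and an SABM setup frame (0x2F) as "U".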

LAPD (Link Access Procedure on the D channel): ISDN data link layer protocol for the D channel. LAPD was derived from the LAPB protocol and is designed primarily to satisfy the signaling requirements of ISDN basic access. Defined by ITU-T Recommendations Q.920 and Q.921.

Laptop: A computer small enough to fit completely on your lap. See also notebook.

Large Core Fiber: Usually, a fiber with a core of 200 µm or more.


Large Scale Integration (LSI): This refers to chips containing thousands of transistors--but less than a million. See also ULSI, VLSI, MSI, and SSI.

Laser: Acronym for light amplification by stimulated emission of radiation. A light source that produces, through stimulated emission, coherent, near-monochromatic light.

Laser Diode (LD): A semiconductor that emits coherent light when forward biased.

Laser Printer: A printer that uses a laser to heat up a drum and etch out what is to be printed. Toner goes on this etching and then the toner is heated to bond with the print material.

LATA (Local Access and Transport Area): LATA is a term in the U.S. for a geographic area covered by one or more local telephone companies, which are legally referred to as local exchange carriers (LECs). A connection between two local exchanges within the LATA is referred to as intra-LATA. A connection between a carrier in one LATA and a carrier in another LATA is referred to as inter-LATA; inter-LATA is long-distance service. The current rules for permitting a company to provide intra-LATA or inter-LATA service (or both) are based on the Telecommunications Act of 1996.

Latency: The factor of data access time due to disk rotation. The faster a disk spins, the quicker it reaches the position where the required data can start to be read. As disk diameters have decreased, rotational speeds have tended to increase, but there is still much variation. Modern 3.5-inch drives typically have spindle speeds of between 3,600 and 7,200 revolutions per minute, so one revolution is completed in roughly 16 or 8 milliseconds (ms) respectively. This is represented in the disk specification as an average latency of 8 or 4 ms.

Lateral Displacement Loss: The loss of power that results from lateral displacement of optimum alignment between two fibers or between a fiber and an active device.
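The average latency figures quoted in the Latency entry above follow directly from the spindle speed: on average the head must wait half a revolution for the data to come around. A minimal Python sketch:

```python
def average_latency_ms(rpm: float) -> float:
    """Average rotational latency: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute
    return ms_per_revolution / 2
```

At 3,600 rpm this gives about 8.3 ms and at 7,200 rpm about 4.2 ms, matching the 8/4 ms specification figures above.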


Launch Fiber: An optical fiber used to couple and condition light from an optical source into an optical fiber. Often the launch fiber is used to create an equilibrium mode distribution in multimode fiber. Also called launching fiber.

Layer: One of the levels in the data hierarchy of the video and system specifications defined in Parts 1 and 2 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1.

Layered Embedded Encoding: The process of compressing data in layers such that successive layers provide more information and thus higher quality reconstruction of the original. That is, a single stream of data can supply a range of compression and thus, in the case of video, a scalable range of video resolution and picture quality. This is particularly useful for a multicast where a single stream is sent out and people are connecting over varying bandwidths. The low-bandwidth connection can take just the lower layers while the high-bandwidth connection can take all of the layers for the highest quality.

L-Band: The wavelength range between 1570 nm and 1610 nm used in some CWDM and DWDM applications.

LCD Panel (Liquid Crystal Display Panel): A slab of specially treated glass which is used to sandwich liquid crystal. You can then send electricity through the treated glass to change the phase of the liquid, which then changes color. This technology requires a separate light source, usually a fluorescent light, to be easily visible to the user.

LCD Projection Panel: This is physically similar to a standard LCD panel except it does not have any back/backlight, so you can put it on an overhead projector and shine a light through it.

LCD Projector: A projector designed to display an image from your computer (or consumer video device) onto a large wall or movie screen. LCD projectors use three small, separate LCD screens (red, green, and blue) and shine a bright light through them to generate the image. See also DLP projector.

LED (Light Emitting Diode): This piece of electronics emits light when a current is passed through it. It does not work the same way as a light bulb, so it does not have the problem of burning out. It also only emits a few frequencies of light, so it can be a specific pure color.

Legacy device: A type of device or peripheral that is not Plug-and-Play-compatible. Such devices often contain jumpers that must be set manually.

Legacy System: Any old computer system that was set up before your time and now continues to work and need support. Often legacy systems are problematic to upgrade because the people that put them together aren't around any more. One great example was the Year 2000 problem. Legacy systems were driving everyone nuts because no one programmed in COBOL anymore, and lots of legacy code was written in COBOL.

176

LEO (Low Earth Orbit): LEO is used to describe satellites orbiting between 600 and 1,000 miles above the Earth. A low-earth-orbit (LEO) satellite system employs a large fleet of "birds," each in a circular orbit at a constant altitude of a few hundred miles. The orbits take the satellites over, or nearly over, the geographic poles. Each revolution takes approximately 90 minutes to a few hours. The fleet is arranged in such a way that, from any point on the surface at any time, at least one satellite is on a line of sight. The entire system operates in a manner similar to the way a cellular telephone network functions. The main difference is that the transponders, or wireless receiver/transmitters, are moving rather than fixed, and are in space rather than on the earth. A well-designed LEO system makes it possible for anyone to access the Internet via wireless from any point on the planet, using an antenna no more sophisticated than old-fashioned television "rabbit ears." Some satellites revolve around the earth in elliptical orbits. These satellites move rapidly when they are near perigee, their lowest altitude, and slowly when they are near apogee, their highest altitude. Such "birds" are used by amateur radio operators, and by some commercial and government services. They require directional antennas whose orientation must be constantly adjusted to follow the satellite's path across the sky.
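The roughly 90-minute revolution quoted above follows from Kepler's third law for a circular orbit. A minimal sketch; the Earth constants are standard textbook values, not from the glossary:

```python
import math

MU_EARTH = 398_600.4418      # Earth's gravitational parameter, km^3/s^2
EARTH_RADIUS_KM = 6_371.0    # mean Earth radius, km

def orbital_period_minutes(altitude_km: float) -> float:
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    r = EARTH_RADIUS_KM + altitude_km
    return 2 * math.pi * math.sqrt(r ** 3 / MU_EARTH) / 60
```

A 500 km LEO orbit comes out at roughly 95 minutes, while geostationary altitude (about 35,786 km) gives very nearly one sidereal day.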

Letterbox: Image of a widescreen picture on a standard 4:3 aspect ratio television screen, typically with black bars above and below. Used to maintain the original aspect ratio of the source material. See also: Side panels and pillarbox.

L-I Curve: The plot of optical output (L) as a function of drive current (I) which characterizes an electrical-to-optical converter.

License: Software license. Most corporations need multiple copies of software, but do not need the media in which they come, either because they already have it or because they allow users to install software from a server on the network. Companies still need to purchase a copy for each user, however, so they need a way to prove they have actually purchased a copy of each. These companies purchase software licenses with no associated media. Such licenses are typically just sheets of paper that cost a lot of money, but allow you to legally use additional copies of the software.

Lifetime: The lifetime of an (MHP) application characterizes the time from which the application is Loaded to the time the application is Destroyed (from MHP).

Light: In a strict sense, the region of the electromagnetic spectrum that can be perceived by human vision, designated the visible spectrum, and nominally covering the wavelength range of 0.4 µm to 0.7 µm. In the laser and optical communication fields, custom and practice have extended usage of the term to include the much broader portion of the electromagnetic spectrum that can be handled by the basic optical techniques used for the visible spectrum. This region has not been clearly defined but, as employed by most workers in the field, may be considered to extend from the near-ultraviolet region of approximately 0.3 µm, through the visible region, and into the mid-infrared region to 30 µm.

Light-emitting Diode (LED): A semiconductor that emits incoherent light when forward biased. Two types of LEDs are edge-emitting LEDs and surface-emitting LEDs.

Lightguide: Synonym for optical fiber.

Lightwave: The path of a point on a wavefront. The direction of the lightwave is generally normal (perpendicular) to the wavefront.

Linear Device: A device for which the output is, within a given dynamic range, linearly proportional to the input.

Linearity: The basic measurement of how well analog-to-digital and digital-to-analog conversions are performed. To test for linearity, a mathematically perfect diagonal line is converted and then compared to a copy of itself. The difference between the two lines is calculated to show the linearity of the system and is given as a percentage or range of least significant bits.

Line in: An analog I/O port on a sound device that allows the device to receive a line-level audio signal. A line-level signal is a non-amplified sound signal meant to be sent to a device that has a line-in port. Since the line-level signal isn't amplified, it is typically cleaner.

Line out: An analog I/O port on a sound device that allows the device to send a line-level audio signal. This is opposed to an amplified signal on a speaker-out port.

Link (hyperlink): Part of an HTML document that points to another resource. When you view an HTML document using a browser, it is common practice to display hyperlinks in blue with an underlined font. When you click on a hyperlink you will jump, or link, to another area in that document or a different document. The linked document or item may be on the same page, the same server, or a server hundreds of miles away. The work all goes on behind the scenes as long as you are connected
to the Internet. Link Budget (for Radio System): A link budget is the accounting of all of the gains and losses from the transmitter, through the medium (free space, cable, waveguide, fiber, etc.), to the receiver. A simple link budget equation looks like this: Received Power (dBm) = Transmitted Power (dBm) - Losses (dB). For a line-of-sight radio system, a link budget equation might look like this: RxP = TxP + TxG - TxL - FSL - ML + RxG - RxL, where RxP = received power (dBm); TxP = transmitter output power (dBm); TxG = transmitter antenna gain (dBi); TxL = transmitter losses (coax, connectors...) (dB); FSL = free space loss or path loss (dB); ML = miscellaneous losses (fading, body loss, polarization mismatch, other losses...) (dB); RxG = receiver antenna gain (dBi); RxL = receiver losses (coax, connectors...) (dB). Line-of-sight deployments, for example, have path losses proportional to the square of the distance. The Free Space Loss equation can be written in several ways depending on the units of measure. Here are three examples: FSL (dB) = 32.45 dB + 20*log[frequency(MHz)] + 20*log[distance(km)]; also FSL (dB) = 20*log[frequency(MHz)] + 20*log[distance(m)] - 27.55 dB; and FSL (dB) = 36.6 dB + 20*log[frequency(MHz)] + 20*log[distance(miles)]. Linux: An Open Source, UNIX-like operating system originally developed by Linus Torvalds. Linux is freeware by default, but may be sold for the cost of packaging, bundling, and technical support. Companies such as Red Hat, SuSE, and Caldera sell Linux packages; however, they also allow you to download them for free. Linux was first developed for x86 computers, but now runs on a wide variety of platforms. Liquid Crystal Display (LCD): An LCD television or monitor uses liquid crystals that act as "shutters" within the television screen. An LCD television has thousands of small light sources at the rear of the display. A layer of cells containing the liquid crystals is placed between the light sources and the display screen.
When the liquid crystal cells are electrified with current, the crystals align and block any light from shining through, or scatter, allowing the light to shine through to the screen. LCD monitors typically only display video signals in a progressive scan format. LCD monitors do not use phosphors and are not susceptible to screen burn. Liquid Crystal on Silicon (LCoS): A projection TV display technology that sandwiches a layer of liquid crystal between a cover glass and a highly reflective,
mirror-like surface patterned with pixels that sits on top of a silicon chip. These layers form a microdisplay that can be used in rear-projection and front-projection TVs. Manufacturers use different names for their LCoS-based technologies. JVC uses D-ILA or HD-ILA, while Sony uses SXRD. Line Doubling: A method, through special circuitry, to modify an NTSC interlaced picture to create an effect similar to a progressively scanned picture. The first field of 262.5 odd-numbered lines is stored in digital memory and combined with the even-numbered lines. Then all 525 lines are scanned in 1/30th of a second. The result is improved detail enhancement from an NTSC source. Light Output: Measures the amount of light produced by a front projector. Expressed in "lumens" or "ANSI lumens," with a higher number indicating greater light output. Live-streaming: Streaming media that is broadcast to many people at a set time. LLC (Logical Link Control): Higher of the two data link layer sublayers defined by the IEEE. The LLC sublayer handles error control, flow control, framing, and MAC-sublayer addressing. The most prevalent LLC protocol is IEEE 802.2, which includes both connectionless and connection-oriented variants. LNB: Abbreviation for Low Noise Block downconverter. A device within a satellite dish which converts X-Band frequencies to lower L-Band intermediate frequencies (IF).
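The link-budget and free-space-loss formulas given under "Link Budget (for Radio System)" above can be sketched in a few lines of Python. The function and variable names are illustrative only, not part of any standard:

```python
import math

def free_space_loss_db(freq_mhz, dist_km):
    # FSL (dB) = 32.45 + 20*log10(f in MHz) + 20*log10(d in km)
    return 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(dist_km)

def received_power_dbm(tx_dbm, tx_gain_dbi, tx_loss_db,
                       fsl_db, misc_loss_db, rx_gain_dbi, rx_loss_db):
    # RxP = TxP + TxG - TxL - FSL - ML + RxG - RxL
    return (tx_dbm + tx_gain_dbi - tx_loss_db
            - fsl_db - misc_loss_db + rx_gain_dbi - rx_loss_db)

# Example: a hypothetical 2.4 GHz (2400 MHz) line-of-sight link over 2 km
fsl = free_space_loss_db(2400, 2)          # about 106.07 dB
rxp = received_power_dbm(tx_dbm=20, tx_gain_dbi=6, tx_loss_db=2,
                         fsl_db=fsl, misc_loss_db=3,
                         rx_gain_dbi=6, rx_loss_db=2)
```

Because all quantities are in decibels, the budget reduces to simple addition and subtraction; doubling the distance adds 6 dB of free-space loss, the "inverse square" behaviour mentioned above.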

Local Area Network (LAN) & Wi-Fi: A non-public data network in which serial transmission is used for direct data communication among data stations located on the user's premises. A number of wired LAN systems have been developed for computer networks, including IBM's Token Ring, ARCNET, and Apple's AppleTalk, but IEEE 802.3 (Ethernet) has become the de facto preference and the most widely used, with backward-compatible advances resulting in a suite of standards such as 10BASE-T (10 Mbps), 100BASE-T (100 Mbps) and now Gigabit Ethernet for wired systems.

A wireless LAN link does not require lining up devices as line-of-sight infra-red links do (IrDA, commonly found on notebook PCs), but allows roaming within a given range, using omnidirectional radio waves that can transmit through walls and other non-metal
barriers. Two-way radio-linked data LANs have been developed, with the most common using variants of the IEEE 802 standard. 802.11b operates at 2.4GHz with throughput of typically 5Mbps. This has become known as "Wi-Fi" and is now expanded to include the 5GHz IEEE 802.11a standard with raw data rates of up to 54Mbps. 802.11g also uses 2.4GHz but with higher data rates (also up to 54Mbps). The range of Wi-Fi is typically 100 metres but can be extended by high-gain directional antennas, with ranges reported in excess of 2km. The very low range (<10m) Bluetooth standard (developed by Ericsson; see below) is different, as it is a low data rate standard intended for wireless connection applications such as mobile phone to personal earpiece/microphone set. The Wi-Fi Alliance (http://www.weca.net/OpenSection/index.asp) is a nonprofit international association formed in 1999 to certify interoperability of wireless Local Area Network products based on the IEEE 802.11x specifications. (See also http://www.wi-fizone.org/ ) Short distance wireless networks, a.k.a. WPAN or Wireless Personal Area Network, serve only small workgroups, down to a single person, with limited range and do not support roaming. These include Bluetooth and IEEE 802.15, which is concerned with interoperability between wireless LANs (802.11) and WPANs. They are typically used to transfer data between a laptop or PDA and a desktop machine or server, as well as to a printer. Similar to the way a cordless phone works with its base station, technologies such as Bluetooth and HomeRF are expected to be deployed in dual-mode smart phones that can download e-mail and Web data while on the road and then exchange that data with a laptop or desktop machine in the office. The Bluetooth Special Interest Group (www.bluetooth.com ) was founded in 1998 by Ericsson, IBM, Intel, Nokia and Toshiba. Bluetooth is an open standard for short-range transmission of digital voice and data between mobile devices (laptops, PDAs, phones) and desktop devices.
It supports point-to-point and multipoint applications. Bluetooth provides up to 720 Kbps data transfer within a range of 10 meters, and up to 100 meters with a power boost. Bluetooth transmits in the unlicensed 2.4GHz band and uses a frequency hopping spread spectrum technique that changes its signal 1600 times per second. If there is interference from other devices, the transmission does not stop, but its speed is downgraded. Ericsson (a Scandinavian company) was the first to develop Bluetooth. The name comes from the Danish King Harald Blåtand (Bluetooth), who in the 10th century began to Christianize the country. Local Program Insertion: In the H.222.1 Transport Stream, mechanisms are provided to assist in the replacement of one elementary stream with another elementary stream. This service only has meaning at a network element. This service is not explicitly supported in the H.222.1 Program Stream. Log: A record of events. It is also an abbreviation for logarithm, which is a mathematical operation. Log File: A file that records events. Many programs produce log files. Often tech support will ask you to look at a log file to determine what is happening when problems occur. Log files usually record much "grittier" events than are shown to the user, like pieces of Windows 95 starting. You probably don't want to see each part of
Windows start, but it's good to have a log of it if something goes wrong. Often log files will have the extension ".log". Logical channel: See virtual channel. Long-haul Telecommunications: 1. In public switched networks, regarding circuits that span long distances, such as the circuits in inter-LATA, interstate, and international communications. 2. In military use, communications among users on a national or worldwide basis. Long-haul communications are characterized by a higher level of users, more rigorous performance requirements, longer distances between users, including worldwide distances, higher traffic volumes and densities, larger switches and trunk cross sections, and fixed and recoverable assets. Usually pertains to the U.S. Defense Communications System. Longitudinal Mode: An optical waveguide mode with boundary condition determined along the length of the optical cavity. Loop: 1) A communication channel from a switching center or an individual message distribution point to the user terminal (illustrated). 2) In telephone systems, a pair of wires from a central office to a subscriber's telephone. 3) A type of antenna used extensively in direction-finding equipment and in UHF reception.

Loss: The amount of a signal's power, expressed in dB, that is lost in connectors, splices, or fiber defects. Loss Budget: An accounting of overall attenuation in a system. See optical link loss budget. Lossy Audio Compression: As opposed to lossless compression, where information redundancy is reduced, most lossy compression reduces perceptual redundancy; sounds which are considered perceptually irrelevant are coded with decreased accuracy or not coded at all. 1. Transform-domain methods: In order to determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain. Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how audible they are. Audibility of spectral components is determined by first calculating a masking threshold, below which it is estimated that sounds will be beyond the limits of human perception.

The masking threshold is calculated using the absolute threshold of hearing and the principles of simultaneous masking - the phenomenon wherein a signal is masked by another signal separated from it in frequency - and, in some cases, temporal masking - where a signal is masked by another signal separated from it in time. Equal-loudness contours may also be used to weight the perceptual importance of different components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models. 2. Time-domain methods: Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. These coders use a model of the sound's generator (such as the human vocal tract with LPC) to whiten the audio signal (i.e., flatten its spectrum) prior to quantization. LPC may also be thought of as a basic perceptual coding technique; reconstruction of an audio signal using a linear predictor shapes the coder's quantization noise into the spectrum of the target signal, partially masking it. Lossless Compression: 1) Reduction of the storage size of digital data by employing one or more appropriate algorithms in such a way that the data can be recovered without losing integrity. 2) Reduction of the amount of data that needs to be transmitted per unit time through an analogous real-time process that does not compromise the ability to completely restore the data. Lossy Compression: Reduction of the bit-rate for an image signal by using algorithms that achieve a higher compression than lossless compression. Lossy compression introduces loss of information and artifacts that can be ignored when comparing to the original image. Lossy compression takes advantage of the subtended viewing angle for the intended display, the perceptual characteristics of the human eye, the statistics of image populations, and the objective of the display.
2) Removal of redundant bits from an image in video technology, producing a minor loss of image quality. Low Noise Amplifier (LNA): An RF gain device designed specifically to add very little noise power of its own. Used to amplify very low-level signals without contributing significant SNR degradation. LSB: Least Significant Bit. The bit that has the least value in a binary number or data byte. In written form, this would be the bit on the right. For example: Binary 1101 = Decimal 13. In this example the right-most binary digit, 1, is the least significant bit--here representing 1. If the LSB in this example were corrupt, the decimal would not be 13 but 12. See also MSB. Lumen: The unit of measure for light output of a projector. Different manufacturers may rate their projectors' light output differently. "Peak lumens" is measured by illuminating an area of about 10% of the screen size in the center of the display. This measurement ignores the reduction in brightness at the sides and corners of the screen. The more conservative "ANSI lumens" (American National Standards Institute)
specification is made by dividing the screen into 9 blocks, taking a reading in the center of each, and averaging the readings. This number is usually 20-25% lower than the peak lumen measurement. Luminance: The component of a video signal that includes information about its brightness. See also: Chrominance.
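The LSB arithmetic described in the LSB entry above (binary 1101 = decimal 13; corrupting the LSB gives 12) can be verified with a couple of bitwise operations. This Python sketch is purely illustrative:

```python
value = 0b1101         # binary 1101 = decimal 13
lsb = value & 1        # isolate the least significant bit -> 1
corrupted = value ^ 1  # flip the LSB: 1101 -> 1100 = decimal 12
```

Flipping the LSB changes the value by exactly 1, which is why it is the least significant bit; flipping the MSB of the same number would change it by 8.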

-M-
MAC (Multiplexed Analog Components): A video standard developed by the European community. An enhanced version, HD-MAC, delivers 1250 lines at 50 Hz, HDTV quality. MAC (Medium Access Control): 1) A service feature or technique used to permit or deny use of the components of a communication system. 2) A technique used to define or restrict the rights of individuals or application programs to obtain data from, or place data onto, a storage device, or the definition derived from that technique. MAC Address (Media Access Control Address): Standardized data link layer address that is required for every port or device that connects to a LAN. Other devices in the network use these addresses to locate specific ports in the network and to create and update routing tables and data structures. MAC addresses are 6 bytes long and are controlled by the IEEE. A unique 48-bit address of a network card or device. The first part of the address is unique to the company that produced the device, and beyond that it is a sequence of digits unique to a single device manufactured by a company. Macro: A means of executing a group of instructions within a program. Many programs offer the capability to put together macros so that you don't have to do the same group of repetitive instructions one by one. You can program the macro to do any number of instructions. Many programs allow you to extend the simple macro languages that they provide with other, more complex programming languages. Macrobending: In a fiber, all macroscopic deviations of the fiber's axis from a straight line, that will cause light to leak out of the fiber, causing signal attenuation.

Macroblock / Macroblocking: In the typical 4:2:0 picture representation used by MPEG-2, a macroblock consists of four eight-by-eight blocks of luminance data (arranged in a 16 by 16 sample array) and two eight-by-eight blocks of color difference data which correspond to the area covered by the 16 by 16 luminance section of the picture. The macroblock is the basic unit used for motion compensated prediction. Macroblocking refers to 'blockiness' seen in MPEG and JPEG encoded pictures, usually due to too much compression and too low a data rate. When macroblocking gets severe enough that chroma information starts to get lost, it is called "pixelization". Also, the veil-like shimmer that can appear around a slowly moving object is "mosquitoing" or "netting"; either way it looks like the object is wearing a mosquito net (see Mosquito Noise). Bright objects placed against dark backgrounds (such as a face) can lose some edge definition (hence "bearding" or "tearing"). Mainframe: Basically a large and powerful computer designed to be very fault tolerant. Historically, mainframes with lots of memory and disk space are hooked to a bunch of dumb terminals that can be used to access data and run programs on the mainframe, but can do nothing without the mainframe. See also Client/Server. Main Level: A range of allowed picture parameters defined by the MPEG-2 video
coding specification with maximum resolution equivalent to ITU-R Recommendation 601. Main Profile: A subset of the syntax of the MPEG-2 video coding specification that is expected to be supported over a large range of applications. MAMA: The Media Asset Management Association. MAMA serves as an advanced user group and independent international industry consortium, created by and for media producers, content publishers, technology providers, and value-chain partners to develop open content and metadata exchange protocols for digital media creation and asset management. Internet: www.mamgroup.org. MAN (Metropolitan Area Network): A network covering an area larger than a local area network. A series of local area networks, usually two or more, that cover a metropolitan area.

Manchester Code: Manchester encoding is a synchronous clock encoding technique used by the OSI physical layer to encode the clock and data of a synchronous bit stream. In this technique, the actual binary data to be transmitted over the cable are not sent as a sequence of logic 1s and 0s (known technically as Non Return to Zero (NRZ)). Instead, the bits are translated into a slightly different format that has a number of advantages over using straight binary encoding (i.e. NRZ). Manchester encoding follows the rules shown below:

Original Data - Value Sent
Logic 0 - 0 to 1 (upward transition at bit centre)
Logic 1 - 1 to 0 (downward transition at bit centre)

Note that in some cases you will see the encoding reversed, with a logic 0 being represented by a 1 to 0 transition. This occurs because when an inverting line driver is used to convert the binary digits into an electrical signal, the signal on the wire is the exact opposite of that output by the encoder. Differential physical layer transmission (e.g. 10BASE-T) does not suffer this inversion. The following diagram shows a typical Manchester encoded signal with the corresponding binary representation of the data (1,1,0,1,0,0) being sent.

The waveform for a Manchester encoded bit stream carrying the sequence of bits 110100.

In the Manchester encoding shown, a logic 0 is indicated by a 0 to 1 transition at the centre of the bit and a logic 1 is indicated by a 1 to 0 transition at the centre of the bit. Note that signal transitions do not always occur at the 'bit boundaries' (the division between one bit and another), but that there is always a transition at the centre of each bit. (N.B. since most line driver electronics actually inverts the bits prior to transmission, you may observe the opposite coding on an oscilloscope connected to a cable). A Manchester encoded signal contains frequent level transitions which allow the receiver to extract the clock signal using a Digital Phase Locked Loop (DPLL) and correctly decode the value and timing of each bit. To allow reliable operation using a DPLL, the transmitted bit stream must contain a high density of bit transitions. Manchester encoding ensures this, allowing the receiving DPLL to correctly extract the clock signal. The penalty for introducing frequent transitions is that the Manchester coded signal consumes more bandwidth than the original signal (in NRZ). For a 10 Mbps LAN, the signal spectrum lies between 5 and 10 MHz. Manchester encoding is used as the physical layer of an Ethernet LAN, where the additional bandwidth is not a significant issue. MAP: Abbreviation for Manufacturing Automation Protocol. Computer programs that run manufacturing automation systems. Map/Demap: A term for multiplexing, implying more visibility inside the resultant multiplexed bit stream than available with conventional asynchronous techniques. Mapping: The process of associating each bit transmitted by a service into the SDH payload structure that carries the service. For example, mapping an E1 service into an SDH VC-12 associates each bit of the E1 with a location in the VC-12. Margin: Allowance for attenuation in addition to that explicitly accounted for in system design.
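The Manchester encoding rules described above (logic 0 sent as a 0-to-1 transition, logic 1 as a 1-to-0 transition, each data bit occupying two half-bit periods) can be sketched as follows. This is an illustrative model of the coding rule only, not a driver for any particular PHY:

```python
def manchester_encode(bits):
    """Encode each data bit as two half-bit levels:
    logic 0 -> (0, 1), upward transition at bit centre
    logic 1 -> (1, 0), downward transition at bit centre"""
    out = []
    for b in bits:
        out.extend((0, 1) if b == 0 else (1, 0))
    return out

def manchester_decode(halfbits):
    """Recover data bits from pairs of half-bit levels."""
    bits = []
    for i in range(0, len(halfbits), 2):
        pair = (halfbits[i], halfbits[i + 1])
        if pair == (0, 1):
            bits.append(0)
        elif pair == (1, 0):
            bits.append(1)
        else:
            raise ValueError("invalid Manchester pair: no mid-bit transition")
    return bits

# The document's example sequence 1,1,0,1,0,0
encoded = manchester_encode([1, 1, 0, 1, 0, 0])
```

Every bit produces a mid-bit transition, which is what lets a DPLL recover the clock; the cost is that the line signalling rate is twice the data rate, matching the bandwidth penalty noted above.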
Master Guide Table (MGT): The ATSC PSIP table that identifies the size, type, PID value, and version number for all other PSIP tables in the transport stream. Material Dispersion: Dispersion resulting from the different velocities of each wavelength in a material.

Maximum Transmission Unit (MTU): The largest size of a data packet that can be sent over a TCP/IP or other packet- or frame-based network. Ethernet uses an MTU of 1,500 bytes, while the standard Internet MTU is 576 bytes. Using a higher MTU is recommended for fast networks, while slower or more congested networks need a smaller MTU, as there is a greater chance that not all of the packet will make it through in one shot. Of course, using a small MTU on a fast network causes a lot of extra traffic to be generated, as the packet has to be split into smaller pieces with more conversations for data to be transmitted. Mbone: Multicast backbone. A virtual network consisting of portions of the Internet in which multicasting has been enabled. The Mbone originated from IETF experiments in which live audio and video were transmitted around the world. The Mbone is a network of hosts connected to the Internet communicating through IP multicast protocols, multicast-enabled routers, and the point-to-point tunnels that interconnect them. MDCT: Modified Discrete Cosine Transform. The modified discrete cosine transform (MDCT) is a Fourier-related transform based on the type-IV discrete cosine transform (DCT-IV), with the additional property of being lapped: it is designed to be performed on consecutive blocks of a larger dataset, where subsequent blocks are overlapped so that the last half of one block coincides with the first half of the next block. This overlapping, in addition to the energy-compaction qualities of the DCT, makes the MDCT especially attractive for signal compression applications, since it helps to avoid artifacts stemming from the block boundaries. Thus, an MDCT is employed in MP3, AC-3, Ogg Vorbis, and AAC for audio compression, for example. (There also exists an analogous transform, the MDST, based on the discrete sine transform, as well as other, rarely used, forms of the MDCT based on different types of DCT.)
In MP3, the MDCT is not applied to the audio signal directly, but rather to the output of a 32-band polyphase quadrature filter (PQF) bank. The output of this MDCT is postprocessed by an alias reduction formula to reduce the typical aliasing of the PQF filter bank. Such a combination of a filter bank with an MDCT is called a hybrid filter bank or a subband MDCT. AAC, on the other hand, normally uses a pure MDCT; only the (rarely used) MPEG-4 AAC-SSR variant (by Sony) uses a four-band PQF bank followed by an MDCT. ATRAC uses stacked quadrature mirror filters (QMF) followed by an MDCT. Mean Launched Power: The average power for a continuous valid symbol sequence coupled into a fiber.
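As a concrete illustration of the lapped MDCT described above, the sketch below implements the transform directly from one common definition, X[k] = Σ x[n]·cos[(π/N)(n + 1/2 + N/2)(k + 1/2)] for n = 0..2N-1, applies a sine window satisfying the Princen-Bradley condition, and demonstrates that 50%-overlapped blocks reconstruct the input exactly (time-domain alias cancellation). All function names and the 2/N inverse scaling are assumptions tied to this particular normalization; real codecs use fast FFT-based MDCTs rather than this direct O(N²) form:

```python
import math
import random

def sine_window(N):
    # Satisfies the Princen-Bradley condition w[n]^2 + w[n+N]^2 = 1
    return [math.sin(math.pi / (2 * N) * (n + 0.5)) for n in range(2 * N)]

def mdct(block, win):
    """Windowed forward MDCT: 2N time samples -> N coefficients."""
    N = len(block) // 2
    return [sum(win[n] * block[n] *
                math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for n in range(2 * N))
            for k in range(N)]

def imdct(coeffs, win):
    """Windowed inverse MDCT: N coefficients -> 2N aliased time samples."""
    N = len(coeffs)
    return [win[n] * (2.0 / N) *
            sum(coeffs[k] *
                math.cos(math.pi / N * (n + 0.5 + N / 2) * (k + 0.5))
                for k in range(N))
            for n in range(2 * N)]

def analyze_synthesize(signal, N):
    """MDCT/IMDCT each 50%-overlapped block, then overlap-add."""
    # Pad with N zeros each side so every real sample lies in two blocks
    padded = [0.0] * N + list(signal) + [0.0] * N
    win = sine_window(N)
    out = [0.0] * len(padded)
    for start in range(0, len(padded) - 2 * N + 1, N):
        y = imdct(mdct(padded[start:start + 2 * N], win), win)
        for n in range(2 * N):
            out[start + n] += y[n]
    return out[N:N + len(signal)]

random.seed(1)
sig = [random.uniform(-1, 1) for _ in range(32)]
rec = analyze_synthesize(sig, 8)
err = max(abs(a - b) for a, b in zip(sig, rec))
```

The reconstruction error is at floating-point noise level even though each block's IMDCT output is aliased: the aliasing terms of adjacent windowed blocks cancel in the overlap-add, which is the TDAC property that makes the MDCT critically sampled despite the 50% overlap.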

Mean Time Between Failures (MTBF): A time, normally given in hours, that predicts the failure rate of a device. The larger the number, the better. Mechanical Splice: An optical fiber splice accomplished by fixtures or materials, rather than by thermal fusion. The capillary splice, illustrated, is one example of a mechanical splice.

Megabyte (Mbyte): One million bytes (actually 1,048,576); one thousand kilobytes. Megaframe Initialization Packet (MIP): A transport stream packet used by DVB-T to synchronize the transmitters in a multi-frequency network. Megapixel: One million pixels. This term is most often used when talking about how fast 3D graphics cards can pump data to the display device, or how much data a digital camera can capture. As a general rule higher megapixel ratings typically equate to higher maximum performance for graphics, or better quality pictures for digital cameras. See also gigapixel. Memory: Chips in a computer that remember data. Also commonly referred to as RAM. Message: A sequence of adjacent data packets transferred in a given direction. The end of such a message is defined to be the transfer of one or more adjacent data packets in the other direction. Message Jitter: Starts with the abstraction of a communicated message. It measures the inter-packet gaps only for data packets within messages. It does not measure the packets between the messages themselves. Metadata (side information): Informational data about the data itself. Typically information about the audio and video data included in the signal's data stream. Mezzanine Compression: Contribution-level quality encoded high definition television signals. Typically split into two levels: High Level at approximately 140 Mbps and Low Level at approximately 39 Mbps (for high definition within the studio, 270 Mbps is being considered). These levels of compression are necessary for signal routing and are easily re-encoded without additional compression artifacts (concatenation) to allow for picture manipulation after decoding. DS3 at 44.736 Mbps will be used in both terrestrial and satellite program distribution.
MHEG (Multimedia and Hypermedia information coding Expert Group): The

Standard known as MHEG-5 is ISO/IEC IS 13522-5 (1996): "Information technology - Coding of Multimedia and Hypermedia Information - Part 5: Support for Base-Level Interactive Applications". This standard has been adopted by UK terrestrial broadcasters. MHP (Multimedia Home Platform): MHP defines a generic interface between interactive digital applications and the terminals (e.g. STBs, iDTV receivers) on which
those applications execute. This interface decouples different providers' applications from the specific hardware and software details of different MHP terminal implementations. It enables digital content providers to address all types of terminals, ranging from low-end to high-end set-top boxes, integrated digital TV sets and multimedia PCs. The MHP extends the existing DVB open standards for broadcast and interactive services in all transmission networks, including satellite, cable, terrestrial and microwave systems. Elements of the MHP specification include the DVB-J platform with DVB-defined APIs and selected parts from existing Java APIs, JavaTV, HAVi (user interface) and DAVIC APIs. MHP standards are published by ETSI - MHP Specification 1.0 is TS 101 812; including corrigenda it is actually Specification 1.0.2. MHP Specification 1.1, which adds a number of features such as access to SmartCards and Internet access with optional DVB-HTML support, is found in TS 102 812 V1.1.1 (2001-11). http://www.mhp.org/ MHz (Megahertz): Equal to one million Hz. Video signal bandwidth is typically expressed in megahertz. MIB (Management Information Base): Defined for structured monitoring in SNMP applications with an Object Identifier (OID) so that a browser may go to the appropriate level. An organizational system that defines operational processes or information into a tree structure with layers, for separation of function or application. Refer to ETSI TR 101 290. Microbending: Mechanical stress on a fiber that introduces local discontinuities, which results in light leaking from the core to the cladding by a process called mode coupling.

Microchip: Synonymous with microprocessor. This term is commonly used to describe the CPU. More specifically, it refers to the part of the CPU that actually does the work, since many CPUs now contain L1 and L2 caches on-chip. Microcomputer: The older term for a common home computer, or single processor computer. The next step up is a workstation. Microdisplay: A general term covering several different technologies used in digital rear-projection and front-projection TVs. These displays produce large images; the "micro" refers to the postage stamp-sized image chips that create the images. Microdisplay types include DLP, LCD, and LCoS. Microstrip Antenna: One consisting of a thin metallic conductor (patch) bonded to a thin grounded dielectric substrate and fed by a coaxial probe or a microstrip transmission line. Middleware: In an iDTV receiver or STB, 'middleware' is software that runs on 'top' of the receiver's basic operating system (OS), similar to the familiar MS Windows that operates on top of a PC's start-up OS. It provides an environment to run the various application software. Examples of set-top middleware are MHP, OpenTV and Liberate.
MIDI (Musical Instrument Digital Interface): The way to connect musical instruments (traditionally an electronic piano keyboard) to your computer. To connect them to your computer you need a MIDI cable and a MIDI port. The MIDI port usually doubles as a game controller port on your sound card. Instead of recording the sound that comes from the instrument, it records the notes that are played. This can then be played back in a number of ways: its speed can be changed or its tone can be changed. The most interesting aspect is that it can be applied to different instruments. MIL-SPEC (Military Specification): Performance specifications issued by the Department of Defense that must be met in order to pass a MIL-STD. MIL-STD (Military Standard): Standards issued by the Department of Defense. Minimum Bend Radius: The smallest radius an optical fiber or fiber cable can bend before increased attenuation or breakage occurs. Minimum Shift Keying (MSK): A digital communication modulation type with very low adjacent sidelobe regrowth following power amplification due to its phase continuous properties. Can be shown to be a form of offset QPSK with symbol weighting applied. Mirror: This can refer to many things in the land of technology. Most often it is used to describe a method of redundancy where data is mirrored across two devices, whether they are physically separate devices or not. Basically the same data is written to both devices, so if one device fails the other can be used and the data will be intact. As far as disk drive redundancy is concerned, when drives are mirrored they are in a RAID 1 configuration. See RAID 1 for more info. Misalignment Loss: The loss of power resulting from angular misalignment, lateral displacement, and fiber end separation. Mission Critical Application: Any application that is critical to the proper running of a business. If this application fails for any length of time you may be out of business. 
For example, an order-entry system may be considered mission critical if your business relies on taking lots of orders. You don't want your mission critical apps running on junky hardware ... or software for that matter. MJD (Modified Julian Date): The Modified Julian Day is an abbreviated version of the old Julian Day (JD) dating method which has been in use for centuries by astronomers, geophysicists, chronologers, and others who needed to have an unambiguous dating system based on continuing day counts. 1) Start of the JD count is from 0 at 12 noon 1 JAN -4712 (4713 BC), Julian proleptic calendar. Note that this day count conforms with the astronomical convention starting the day at noon, in contrast with the civil practice where the day starts with midnight (in popular use the belief is widespread that the day ends with midnight, but this is not the proper scientific use). 2) The Modified Julian Day, on the other hand, was introduced by space scientists in the late 1950's. It is defined as MJD = JD - 2400000.5
The half day is subtracted so that the day starts at midnight in conformance with civil time reckoning. 3) This MJD has been sanctioned by various international commissions such as IAU, ITU, and others who recommend it as a decimal day count which is independent of the civil calendar in use. Giving dates in this system is convenient in all cases where data are collected over long periods of time. Examples are double star and variable star observations, the computation of time differences over long periods of time such as in the computation of small rate differences of atomic clocks, etc. The MJD is a convenient dating system with only 5 digits, sufficient for most modern purposes. The days of the week can easily be computed because the same weekday is obtained for the same remainder of the MJD after division by 7. EXAMPLE: MJD 49987 = WED., 27 SEPT, 1995 Mode: A single electromagnetic wave traveling in a fiber. Modal Dispersion: See Multimode Dispersion. Modal Noise: Noise that occurs whenever the optical power propagates through mode-selective devices. It is usually only a factor with laser light sources. Mode Filter: A device that removes higher-order modes to simulate equilibrium mode distribution. A mode filter is most easily constructed by wrapping the fiber several times around a small-radius mandrel, which strips out the higher-order modes.
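The MJD day count and weekday rule in the MJD entry above can be checked with Python's built-in date arithmetic; the only assumption is that the MJD epoch, 17 November 1858, maps onto Python's proleptic Gregorian ordinal (678576) via `datetime.date.toordinal`:

```python
from datetime import date

MJD_EPOCH = date(1858, 11, 17)  # MJD 0 begins at midnight on this date
# MJD 0 was a Wednesday, so MJD % 7 indexes this list directly
WEEKDAYS = ["Wednesday", "Thursday", "Friday", "Saturday",
            "Sunday", "Monday", "Tuesday"]

def to_mjd(d: date) -> int:
    """Modified Julian Day number for a calendar date."""
    return d.toordinal() - MJD_EPOCH.toordinal()

def weekday_of(mjd: int) -> str:
    """Weekday from the remainder of the MJD after division by 7."""
    return WEEKDAYS[mjd % 7]

mjd = to_mjd(date(1995, 9, 27))
print(mjd, weekday_of(mjd))  # reproduces the glossary's example: 49987 Wednesday
```

Running this reproduces the worked example given above: 27 September 1995 is MJD 49987, and 49987 leaves remainder 0 when divided by 7, giving Wednesday.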

Mode Scrambler: A device that mixes modes to uniform power distribution.

Mode Stripper: A device that removes cladding modes.


MODEM: Modulator/Demodulator. A device that serves as a bridge between your digital computer and some form of analog line used to transmit data, such as a phone line (standard modem) or analog cable connection (cable modem). The modem can receive the analog signals from the line and turn them digital, or convert your digital signals into analog signals that are capable of being decoded digitally. Modified Julian Date (MJD): Used in DVB transmissions. A day numbering system (based on the Western Julian Calendar), used for computers and in DVB as a way of conveying dates as a single, relatively unambiguous number. The starting date is midnight on 17 November 1858. The earlier Julian Day system started at noon on 1 January 4713 B.C. (Julian Day zero), which is 2,400,000.5 days earlier. Annex C of ETSI document EN 300 468 Specification for Service Information in DVB Systems shows how to convert between MJD and Year, Month, Day values. See also Item 140 Time and Date Notation. Modulation: The process by which one wave (the signal) varies a characteristic of another wave (the carrier). Examples include amplitude modulation (AM), frequency modulation (FM), and pulse-code modulation (PCM). Modulation Index: In an intensity-based system, the modulation index is a measure of how much the modulation signal affects the light output. It is defined as follows: m = (highlevel - lowlevel) / (highlevel + lowlevel) Modulation Error Ratio (MER): MER is intended to provide a single "figure of merit" analysis of the DTV received signal. It's computed to include the total signal degradation likely to be present at the input of a commercial receiver's data decoding decision circuits to give an indication of the ability of that receiver to correctly decode the signal. For terrestrial DVB applications the 'technical report' from DVB/ETSI TR 101 290 (V1.2.1 May 2001) describes the basis of this and other measurements of importance to DVB and MPEG systems.
TR 101 290 also notes that MER is closely related to another "figure of merit" calculation known as Error Vector Magnitude (EVM). Mole Technology: A seamless MPEG-2 concatenation technology in which an MPEG-2 bit stream enters a Mole-equipped decoder, and the decoder not only decodes the video, but the information on how that video was first encoded (motion vectors and coding mode decisions). This "side information" or "metadata" in an information bus is synchronized to the video and sent to the Mole-equipped encoder. The encoder looks at the metadata and knows exactly how to encode the video. The video is encoded in exactly the same way (so theoretically it has only been encoded once) and maintains quality. If an opaque bug is inserted in the picture, the encoder only has to decide how the bug should be encoded (and then both the bug and the video have been theoretically encoded only once). Problems arise with transparent or translucent bugs, because the video underneath the bug must be encoded, and therefore that video will have to be encoded twice, while the surrounding video and the bug itself have only been encoded once theoretically. What Mole can not do is make the encoding any better. Therefore the highest quality of initial encoding is suggested. Moore's Law: A prediction for the rate of development of modern electronics. This has been expressed in a number of ways but in general states that the density of information storable in silicon roughly doubles every year. Or, the performance of
silicon will double every eighteen months, with proportional decreases in cost. For more than two decades this prediction has held true. Moore's law initially talked about silicon but it could be applied to disk drives: the capacity of disk drives doubles every two years. That has been true since 1980, and will continue well beyond 2000. Named after Gordon E. Moore, physicist, co-founder and chairman emeritus of Intel Corporation. Mosaic: This program is credited with being the first graphical browser for the Web. Mosaic recognizes the HTTP protocol and will display HTML graphically. It was created by Marc Andreessen in 1993 while he was at the NCSA (National Center for Supercomputing Applications) and released as freeware. After Mosaic was introduced to the Internet the Web flourished, as the number of websites and Web users grew very quickly. Mosaic Display: An array of small pictures to make a total full-screen picture. Generally provided to give an indication of the content of a number of video programs. Mosquito Noise: Visible effect in an MPEG-encoded picture around the edge of higher contrast parts of the image. So called because it looks like swarming mosquitoes. Usually due to poor encoding and insufficient data rate. Motherboard: The large circuit board into which your CPU, memory boards, and peripheral cards are plugged. Motion compensation: The use of motion vectors to improve the efficiency of the prediction of pixel values. The prediction uses motion vectors to provide offsets into past and/or future reference frames containing previously decoded pixels that are used to form the prediction and the error difference signal. Regarding P-frame and B-frame coding using Motion Compensation, motion compensation is done at macroblock level. The temporal redundancy is eliminated by using motion vectors. Instead of sending the entire frame, only the motion vectors and the content difference between the macroblocks (error) are sent in a compressed form.
This is the coding technique used for P-frames and B-frames to eliminate the temporal redundancy. Motion estimation: An image compression technique that achieves compression by describing only the motion differences between adjacent frames, thus eliminating the need to convey redundant static picture information from frame to frame. Used in the MPEG standards. Motion-JPEG: Using JPEG compressed images as individual frames for motion. For example, 30 Motion-JPEG frames viewed in one second will approximate 30-frame per second video. Motion Representation Macroblocks: As in ISO/IEC 11172-2, the choice of 16 by 16 macroblocks for the motion-compensation unit is a result of the trade-off between the coding gain provided by using motion information and the overhead needed to represent it. Each macroblock can be temporally predicted in one of a number of different ways. For example, in frame encoding, the prediction from the previous reference frame can itself be either frame-based or field-based. Depending on the type of the macroblock, motion vector information and other side information is encoded with the compressed prediction error in each macroblock. The motion
vectors are encoded differentially with respect to the last encoded motion vectors using variable length codes. The maximum length of the motion vectors that may be represented can be programmed, on a picture-by-picture basis, so that the most demanding applications can be met without compromising the performance of the system in more normal situations. It is the responsibility of the encoder to calculate appropriate motion vectors. MOV: The file extension used by MooV format video files on Windows. These MOV files are generated with Apple Computer's QuickTime and played on Windows systems via QuickTime for Windows. MP3: MP3 is the audio coding format defined by the International Organization for Standardization (ISO) as part of the MPEG-1 Audio Layer-3 specification. MP3 has become a popular audio compression format on the Internet and computers. MPEG (Moving Picture Experts Group): Moving Picture Experts Group (MPEG) is a working group of ISO/IEC in charge of the development of standards for coded representation of digital audio and video. A voluntary body known as ISO/IEC Joint Technical Committee 1, Sub Committee 29/Working Group 11 (ISO/IEC JTC1/SC29/WG11), which has developed, and continues to develop, standards for digitally compressed moving pictures and associated audio. Forum: www.prompeg.org/ MPEG (pronounced EM-peg) has standardized the following compression formats and ancillary standards: MPEG-1: Initial video and audio compression standard. Later used as the standard for Video CD, and includes the popular Layer 3 (MP3) audio compression format. MPEG-2: Transport, video and audio standards for broadcast-quality television. Used for over-the-air digital television ATSC, DVB and ISDB, digital satellite TV services like DirecTV, digital cable television signals, and (with slight modifications) for DVD video discs. MPEG-4: Expands MPEG-1 to support video/audio "objects", 3D content, low bitrate encoding and support for Digital Rights Management.
Several new (newer than MPEG-2 Video) higher efficiency video standards are included (an alternative to MPEG-2 Video), notably, Advanced Simple Profile and H.264/MPEG-4 AVC. MPEG-7: A formal system for describing multimedia content. <MPEG-2 Specifications> ISO/IEC 13818-1 - Information Technology - Generic Coding of Moving Pictures and Associated Audio: Systems. Recommendation H.222.0, 15 April 1996. ISO/IEC 13818-2 - Information Technology - Generic Coding of Moving Pictures and Associated Audio: Video, 15 May 1996. ISO/IEC 13818-3 - Information Technology - Generic Coding of Moving Pictures and Associated Audio: Audio, 15 May 1995. ISO/IEC 13818-4 - Information Technology - Generic Coding of Moving Pictures and Associated Audio: Compliance, 24 March 1995. ISO/IEC 13818-9 - Information Technology - Generic Coding of Moving Pictures and Associated Audio: Real-time jitter, 12 December 1996.

MPEG-1 and MPEG-2: Also refer to entry under Moving Pictures Experts Group (MPEG). http://www.cselt.it/mpeg/ These are the most common video and audio compression schemes now in use. MPEG-1 was used for lower data rate video on early CDi and VideoCDs. MPEG-2 provides for better quality (at higher data rates) and is used in a variety of professional and consumer applications from SDTV to HDTV on terrestrial, satellite and cable broadcast and Digital Versatile Discs (DVD). MPEG's success has been in that the standard defines the decoder, not the encoder. This has enabled encoder designers to achieve improvements over time and experience. For example, for a given quality of picture compressed by MPEG-2, the bit rate needed has been reduced to about 2/3 since the specification was finalised in 1994/5. MPEG video coding is defined in various "profiles" and "levels" of quality with a range of line and frame rates. Three aspect ratios can be used. Distribution quality colour coding of 4:2:0 is normal but contribution quality of 4:2:2 has been added in addenda. Source Input Format (SIF), with 352 pixels x 240 lines x 30 frames/sec, is also known as Low Level (LL). (1) MPEG-1: ISO/IEC 11172 (all parts), Information technology Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s. (2) MPEG-2: ISO/IEC 13818 (also known as ITU-T H.262), Information technology Generic coding of moving pictures and associated audio information in 10 sections and addenda and corrigenda: Part 1 - Systems; Part 2 - Video; Part 3 - Audio; Part 4 - Compliance testing; Part 5 - Simulation software; Part 6 - Digital storage media command and control (DSMCC); Part 7 - Non-backwards compatible audio; Part 8 - 10-bit video extension; Part 9 - Real-time interface. MPEG-1 Layer II audio: The compressed audio standards of MPEG-1 were published in the 1993 International Standard ISO/IEC 11172-3.
This dealt with mono and two-channel stereo sound coding, at sampling frequencies commonly used for high quality audio (48, 44.1 and 32 kHz). Compared to Layer I, Layer II is able to remove more of the signal redundancy and to apply the psychoacoustic threshold more efficiently. There is also a Layer III, known as MP3, which is again more complex and is directed towards lower bit rate applications due to the additional redundancy and irrelevancy extraction from enhanced frequency resolution in its filterbank. A full Layer II decoder accepts Layer I and II bitstreams. Likewise an MPEG-2 decoder should be capable of fully compatible decoding of an MPEG-1 bitstream. The development of audio standards for MPEG-2 resulted in the 1995 International Standard ISO/IEC 13818-3. For more details on Stereo modes see Item 136 Stereo & Joint Stereo (for MPEG), and for audio compression see also Item 3 Advanced Audio Coding (AAC), as used in MPEG-4.
MPEG-4: A more advanced way of encoding video, audio and other material
principally using object coding techniques. MPEG-4's language for describing and dynamically changing the scene is named the Binary Format for Scenes (BIFS). http://www.cselt.it/ufv/leonardo/mpeg/index.htm MPEG-4 can encode SD and HD television program material at compression ratios typically 2 to 6 times better than MPEG-2 for comparable picture quality. Early examples of MPEG-4-like compression are used in Apple QuickTime, DivX, Real and Microsoft Windows Media-9 codecs. The MPEG-4 Standard is ISO/IEC 14496 (obtainable from www.iso.ch/ittf) while the ITU version is known as H.264 (similar to the MPEG-2/H.262 relationship). Information is available from the MPEG-4 Industry Forum, M4IF, http://www.m4if.org/ The ITU/ISO/IEC Joint Video Team (JVT), working with ISO/IEC JTC1/SC29/WG11 (MPEG), produced the JVT coding known as ITU-T H.26L and as ISO MPEG-4 Part 10. In December 2001 the Joint Video Team (JVT) was formed from VCEG and MPEG to finalize H.26L as a joint project. The video coding is generally teamed with an improved Advanced Audio Coding (AAC). MPEG-7: MPEG-7 "Multimedia Content Description Interface" has standardized description schemes for content description, management, and organization, as well as navigation, access, user preferences and usage history. (http://www.mpeg-industry.com/) These multimedia descriptions could be used in the following types of applications: (1) Search Engines, Digital Libraries, Broadcast Networks, Entertainment and News Distributors, Streaming Businesses (2) Dynamic start-up companies, searching for cutting edge technologies. (3) Governmental, Educational, Law, Medical & Remedial Services, and Non-profit organizations looking for digital media solutions. (For example, the U.S. Library of Congress receives over 10,000 multimedia items each week. There is a commitment to the long term preservation of these items in digital format, and making much of their collection more accessible.)
(4) XML, Metadata, Modelling/Simulation, & Surveillance Industries (5) AI Practitioners, Content Creators and Providers. Standard: ISO/IEC 15938-1, Information technology - Multimedia content description interface (1) Systems: specifies the tools for preparing descriptions for efficient transport and storage, compressing descriptions, and allowing synchronization between content and descriptions. (2) Description definition language: specifies the language for defining the standard set of description tools (DSs, Ds, and data types) and for defining new description tools. (3) Visual: specifies the description tools related to visual content. (4) Audio: specifies the description tools related to audio content. (5) Multimedia description schemes: specifies the generic description tools related to multimedia including audio and visual content. (6) Reference software: provides a software implementation of the standard. (7) Conformance testing: specifies the guidelines and procedures for testing conformance of implementations of the standard. (8) Extraction and use of MPEG-7 descriptions: provides guidelines and examples of the extraction and use of descriptions.


MPEG LA: MPEG LA (Licence Administration) provides one-stop technology standards licensing, starting with a portfolio of essential patents for the international digital video compression standard known as MPEG-2, which it began licensing in 1997. This licensing method enables widespread technological implementation, interoperability and use of fundamental broad-based technologies covered by many patents owned by many different patent holders. MPEG LA provides users with fair, reasonable, nondiscriminatory access to these essential patents on a worldwide basis under a single license. The MPEG-2 Patent Portfolio License now has more than 290 licensees and includes more than 300 MPEG-2 essential patents in 29 countries owned by 17 patent holders. As the legal and business template for one-stop technology standards licensing, MPEG LA also provides an innovative way to achieve fair, reasonable, nondiscriminatory access to patent rights for other technology standards - the high-speed transfer digital interconnect standard known as IEEE 1394 and now the terrestrial digital television standard used in Europe and Asia known as DVB-T. In addition, MPEG LA has been asked to facilitate the development of joint licenses for MPEG-4 and other emerging technologies. The company is based in Denver, CO and has offices in Washington DC, San Francisco and London. Also refer to DVB's Licence Administrator http://www.dvbla.com MPEG splicing: The ability to cut into an MPEG bit stream for switching and editing, regardless of type of frames (I, B, P). See profile & level. MP@HL: Main profile at high level. MP@ML: Main profile at main level. MQW: See Multi-Quantum Well laser. MQW (Multi-quantum Well Laser): A laser structure with a very thin (about 10 nm thick) layer of bulk semiconductor material sandwiched between the two barrier regions of a higher bandgap material.
This restricts the motion of the electrons and holes and forces energies for motion to be quantized and only occur at discrete energies. MSB (Most significant bit): The bit that has the most value in a binary number or data byte. In written form, this would be the bit on the left. For example: Binary 1110 = Decimal 14. In this example, the left-most binary digit, 1, is the most significant bit--here representing 8. If the MSB in this example were corrupt, the decimal would not be 14 but 6. See also: LSB. MSO (Multiple Service Operator): A telecommunications company that offers more than one service, e.g. telephone service, Internet access, satellite service, etc. MTBF (Mean Time Between Failures): The time after which 50% of the units of interest will have failed. Also called MTTF (mean time to failure). MTS (Multichannel Television Sound): The method of broadcasting stereo sound over ordinary analog TV channels. MTS reception capability is built into virtually all stereo TVs and HiFi VCRs.
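The MSB corruption example in the MSB entry above can be verified in a couple of lines of Python: flipping bit 3 (the most significant bit of a 4-bit value, worth 8) changes 1110 (decimal 14) into 0110 (decimal 6):

```python
value = 0b1110                 # binary 1110 = decimal 14
msb_mask = 1 << 3              # bit 3 is the MSB of a 4-bit value (worth 8)
corrupted = value ^ msb_mask   # flip the MSB, as if it were corrupted in transit
print(value, corrupted)        # 14 6
```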


Multicast: 1. Data flow from a single source to multiple destinations; a multicast may be distinguished from a broadcast in that the number of destinations may be limited. 2. A term often used incorrectly to describe digital television program multiplexing. Multicast IP: A form of TCP/IP being proposed that will allow for high-bandwidth transmissions (like television channels) to be broadcast over the Internet to all the routers in the world (possibly) that are connected to someone watching that channel. To see the benefit, think of 1,000 people making separate connections to the USA network (if it were an Internet TV channel--give us some leeway!). Each packet of data sent out would have to be sent to 1,000 people. Thus, you have 1,000 conversations active among the USA network server and people's machines, each one saying the same thing. That's poor scalability. Now imagine that in this fantasy network we only have 10 routers with these 1,000 people connected to them. In simplistic terms, you only have to send each packet to 10 places, and each machine downloads packets directly from the router it is connected to. Thus the USA network server doesn't have to do much work to keep Gary Busey movies going to 1,000 people at once, and the rest of the network isn't clogged with all 1,000 conversations. However, there are lots of minute details that must be considered, such as how many channels are offered and how much router memory that takes. How about if no one is watching? That's wasted traffic. Multicasting: Because digital television allows you to pack much more information into the allotted signal, we can transmit multiple channels on the same bandwidth instead of just one. Think of a broadcasting bandwidth as a multi-lane freeway. You can run a big, flashy, wide-load truck carrying HDTV and take up all of the lanes, or you can send multiple compact cars down the same freeway, each carrying specialized programming.
The option to multicast was made possible by digital technology to allow each digital broadcast station to split its bit stream into 2, 3, 4 or more individual channels of programming and/or data services. (For example, on channel 7, you could watch 7-1, 7-2, 7-3 or 7-4.) Multiframe: Any structure made up of multiple frames. SDH has facilities to recognize multiframes at the E1 level and at the VC-N level. Multilongitudinal Mode (MLM) Laser: An injection laser diode which has a number of longitudinal modes.
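As a minimal sketch of the receiver side of the Multicast IP entry above, a host subscribes to a group by joining it with the IP_ADD_MEMBERSHIP socket option, which prompts the kernel (and, via IGMP, upstream routers) to deliver that group's traffic. The group address 224.1.1.1 and port 5007 below are arbitrary examples, not values from the glossary, and the join is guarded because it requires a multicast-capable interface:

```python
import socket
import struct

GROUP, PORT = "224.1.1.1", 5007   # example group (224.0.0.0/4 range) and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface
# (INADDR_ANY here); the kernel then signals interest upstream via IGMP.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    joined = True    # sock.recvfrom(1500) would now see the group's datagrams
except OSError:
    joined = False   # e.g. no multicast-capable network interface available
sock.close()
```

The key point matching the entry: the server sends each packet once to the group address, and the network fans it out only toward joined receivers, instead of the server repeating every packet per viewer.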

Multimedia content description interface: See: MPEG-7. Multimedia and Hypermedia information coding Expert Group (MHEG): International Organization For Standardization ISO/IEC JTC1/SC29/WG12 http://www.mheg.org

Multimode (MM) Fiber: An optical fiber that has a core large enough to propagate more than one mode of light. The typical diameter is 62.5 micrometers.

Multimode Dispersion: Dispersion resulting from the different transit lengths of different propagating modes in a multimode optical fiber. Also called modal dispersion. Multipath interference: The signal variation caused when two RF signals take multiple paths from transmitter to receiver. In analog television, this creates ghosting. In digital television, this can cause the receiver not to output a signal because it cannot differentiate between signals. Multipoint: A term used by network designers to describe network links that have many possible endpoints. Multiplexer: 1) Device for combining two or more electrical signals into a single, composite signal. A physical device that is capable of inserting/extracting video, audio, data into/out of an MPEG-2 transport stream. 2) A device that combines two or more optical signals into one output.
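A toy illustration of the Multiplexer entry's first sense (combining two signals into one composite) is byte-interleaved time-division multiplexing. This is a sketch only, assuming two equal-length, fixed-rate tributaries; the function names are mine, not from any standard:

```python
def tdm_mux(a: bytes, b: bytes) -> bytes:
    """Interleave two equal-length streams byte by byte into one composite."""
    assert len(a) == len(b), "fixed-rate TDM sketch assumes equal-length inputs"
    out = bytearray()
    for x, y in zip(a, b):
        out += bytes([x, y])   # one timeslot per tributary, repeated
    return bytes(out)

def tdm_demux(mux: bytes) -> tuple[bytes, bytes]:
    """Recover the two tributaries by taking alternate timeslots."""
    return mux[0::2], mux[1::2]

composite = tdm_mux(b"ABCD", b"1234")
print(composite)              # b'A1B2C3D4'
print(tdm_demux(composite))   # (b'ABCD', b'1234')
```

Demultiplexing is the exact inverse: knowing the timeslot pattern, the receiver separates the composite back into the original channels.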

Multiplexing: 1) Multiplexing is based on a sequence of PDUs, each of which carry consecutive data from one and only one media source type, i.e. audio, video or other data signals. In the H.222.1 Program Stream these PDUs may be of variable length and relatively large in size. In the H.222.1 Transport Stream these PDUs are of fixed length and of relatively small size. The H.222.1 Transport Stream has a large multiplex capacity. (1) To transmit two or more signals at the same time or on the same carrier frequency. (2) To combine two or more electrical signals into a single, composite signal, such as ATSC multicasting. 2) The process by which two or more signals are transmitted over a single communications channel. Examples include time-division multiplexing (TDM) and wavelength-division multiplexing (WDM). Multiplex/Demultiplex: Multiplex (MUX) allows the transmission of two or more signals over a single channel. Demultiplex (DEMUX) is the process of separating two or more signals previously combined by compatible multiplexing equipment to recover signals combined within it and for restoring the distinct individual channels of the signals. Multiplex Section Alarm Indication Signal (MS-AIS): MS-AIS is generated by
Section Terminating Equipment (STE) upon the detection of a Loss of Signal or Loss of Frame defect, or an equipment failure. MS-AIS maintains operation of the downstream regenerators, and therefore prevents generation of unnecessary alarms. At the same time, data and orderwire communication is retained with the downstream equipment. Multiplex Section Remote Defect Indication (MS-RDI): A signal returned to the transmitting equipment upon detecting a Loss of Signal, Loss of Frame, or MS-AIS defect. MS-RDI was previously known as Multiplex Section FERF. Multiplex Section Overhead (MSOH): 18 bytes of overhead accessed, generated, and processed by MS terminating equipment. This overhead supports functions such as locating the payload in the frame, multiplexing or concatenating signals, performance monitoring, automatic protection switching, and line maintenance. (Multiplexed) Stream: A bit stream composed of 0 or more elementary streams combined in a manner that conforms to ITU-T Rec. H.222.0 | ISO/IEC 13818-1. Multiple Reflection Noise (MRN): The fiber optic receiver noise resulting from the interference of delayed signals from two or more reflection points in a fiber optic span. Also known as multipath interference. Multitasking: The ability of an operating system to run two or more tasks at once. With one processor you will not normally have more than one task using the processor at a given moment in time, but the tasks will be scheduled so that they can all appear to be running at the same time and do not interfere with one another. A task can be a program (e.g., the Windows Calculator) or an instance of a program (e.g., opening the Windows Calculator multiple times). MUD (Multi User Domain): This is also known as Multi-User Dungeon, but that is a misnomer. A MUD is a world created and existing solely for the interaction of people within it.
MUDs often have themes, but the basis is simply that there is a world detailed by words that you, as a character, can navigate through and interact with other characters. For many early Internet users this is the most addictive part of the Internet. Nowadays most people prefer the graphical interface of commercial programs that provide a world where less reading--and imagination--is required. Multiview display: Additional video programs relating to a main program. The additional video may be from other (fixed or switched) point-of-view cameras or simply graphical and text information such as sporting event statistics. These additional picture sources may be in similar or lesser formats than the main program and may be displayed full screen or, in advanced receivers, in a PIP-type window. Must-carry: This refers to the legal obligation of cable companies to carry analog or digital signals of over-the-air local broadcasters. M-way Phase Shift Keying (MPSK): Digital communication system that uses one of M phases to represent log2(M) bits, where each symbol point in the constellation rests along the circumference of a circle. QPSK is 4-PSK.
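The MPSK geometry described above (M equally spaced phases on a circle, log2(M) bits per symbol) can be sketched in a few lines; the function name is mine, used only for illustration:

```python
import cmath
import math

def mpsk_constellation(M: int) -> list[complex]:
    """M symbol points, equally spaced in phase on the unit circle."""
    return [cmath.exp(2j * math.pi * k / M) for k in range(M)]

M = 4                                   # QPSK is 4-PSK
points = mpsk_constellation(M)
bits_per_symbol = math.log2(M)          # each symbol carries log2(M) bits
print(bits_per_symbol)                  # 2.0
print(all(abs(abs(p) - 1.0) < 1e-12 for p in points))  # all on the circle: True
```

Because every point has the same magnitude, MPSK carries information purely in phase, which is what distinguishes it from QAM in the next entry.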


M-way Quadrature Amplitude Modulation (M-QAM): Digital communication which uses one of M symbols, each representing log2(M) bits, where both amplitude and phase of the waveform carry information. Commonly, QAM implies a square or near-square constellation carrying information in I- and Q-data carriers.
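The square M-QAM constellation mentioned above can be generated as a grid of I and Q amplitude levels; for 16-QAM the levels are -3, -1, +1, +3 on each axis, giving 16 symbols of 4 bits each. The function name is mine, for illustration only:

```python
import math

def square_qam_constellation(M: int) -> list[complex]:
    """Symbol points of a square M-QAM grid (M must be a perfect square)."""
    side = int(math.isqrt(M))
    assert side * side == M, "square QAM sketch needs M to be a perfect square"
    levels = [2 * i - (side - 1) for i in range(side)]  # [-3, -1, 1, 3] for 16-QAM
    return [complex(i, q) for i in levels for q in levels]

points = square_qam_constellation(16)
print(len(points), math.log2(16))   # 16 symbols, 4.0 bits per symbol
```

Unlike MPSK, the points have differing magnitudes as well as phases, which is how QAM packs information into both amplitude and phase.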


-N-
NA (Numerical Aperture): The light-gathering ability of a fiber; the maximum angle to the fiber axis at which light will be accepted and propagated through the fiber. NA = sin θ, where θ is the acceptance angle. NA also describes the angular spread of light from a central axis, as in exiting a fiber, emitting from a source, or entering a detector.
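For a step-index fiber, the NA in the entry above can also be computed from the core and cladding refractive indices as NA = sqrt(n1² - n2²), and the acceptance angle follows from NA = sin θ. The index values below are illustrative only, not taken from the glossary:

```python
import math

def numerical_aperture(n_core: float, n_cladding: float) -> float:
    """NA of a step-index fiber from its refractive indices."""
    return math.sqrt(n_core**2 - n_cladding**2)

def acceptance_angle_deg(na: float) -> float:
    """Maximum acceptance angle (degrees) from NA = sin(theta)."""
    return math.degrees(math.asin(na))

na = numerical_aperture(1.48, 1.46)   # illustrative core/cladding indices
print(round(na, 3), round(acceptance_angle_deg(na), 1))
```

With these example indices the NA comes out around 0.24, i.e. an acceptance half-angle of roughly 14 degrees.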

NAB (National Association of Broadcasters): A trade association that promotes and protects the interests of radio and television broadcasters before Congress, federal agencies and the Courts. http://www.nab.org/ NAK (Negative AcKnowledgment): The opposite of ACK. NAK is a slang term that means that you disagree, or do not acknowledge something. This also refers to the 21st ASCII character. NA Mismatch Loss: The loss of power at a joint that occurs when the transmitting half has a numerical aperture greater than the NA of the receiving half. The loss occurs when coupling light from a source to fiber, from fiber to fiber, or from fiber to detector.

Nanotechnology: The purposeful manipulation of matter at the atomic level to achieve a defined goal. Atomic constructs can be measured in nanometers. Someday nanotechnology may be used to send a group of tiny machines into your bloodstream and free up clogged arteries, or a bunch of tiny nano-war machines could be built and unintentionally change the entire earth into a gray blob of nano-devices. NAP (Network Access Point): A public network exchange facility where Internet Service Providers (ISPs) can connect with one another in peering arrangements. The NAPs are a key component of the Internet backbone because the connections within them determine how traffic is routed. They are also the points of most Internet congestion. Napster: The most infamous file-swapping utility/company. The Napster client allows you to connect to Napster servers and download MP3 files, or allows your own MP3 files to be downloaded by others. The popularity of Napster erupted into lawsuits brought by the RIAA and various recording artists. The actual Napster company had its assets sold off to the highest bidder, but the client lives on through Open Source projects like OpenNap. Narrowband: Services requiring up to 2 Mbit/s transport capacity.

Narrow SCSI: The original form of SCSI, using 50 pins and transmitting data at 5 MBps. See also Wide SCSI. Drives and adapters that support Narrow SCSI usually have an "N" in the part number. National Electrical Code (NEC): A standard governing the use of electrical wire, cable and fixtures installed in buildings; developed by the American National Standards Institute (ANSI), sponsored by the National Fire Protection Association (NFPA), identified by the description ANSI/NFPA 70-1990. http://www.ansi.org NCTA (National Cable Television Association): The major trade association for the cable television industry. http://www.ncta.com Near-End Crosstalk (NEXT): Impairment typically associated with twisted-pair transmission, where a local transmitter interferes with a local receiver. NEMA: Abbreviation for National Electrical Manufacturers Association. Organization responsible for the standardization of electrical equipment, enabling consumers to select from a range of safe, effective, and compatible electrical products. http://www.nema.org NetBIOS (Network Basic Input/Output System): An IBM-developed (initially for IBM's Sytek PC LAN program) standard software interface (API) to a network adapter. Has become a standard interface supported (either natively or with an additional layer of software) by most network operating systems to permit PC application programs to communicate directly with other PCs at the transport layer. When the NetBIOS software is loaded, it broadcasts its proposed (up to 15-character) name to all other stations to ensure that it will have a network-unique name. NetBIOS:

Does not support windowing (it has a window size of 1)
Has a small packet size (under 678 bytes)
Does not have a network layer (so it is not routable)

These are among the reasons why NetBIOS is not a desirable protocol (especially for WAN links). Netshow: Microsoft NetShow is a service that runs on Windows NT servers, delivering high-quality streaming multimedia to users on corporate intranets and the Internet. It consists of server and tools components for delivering audio, video, illustrated audio, and other multimedia types over the network. NetShow provides the foundation for building rich, interactive multimedia applications for commerce, distance learning, news and entertainment delivery, and corporate communications. Network: 1) Collection of MPEG-2 Transport Stream (TS) multiplexes transmitted on a single delivery system, e.g. all digital channels on a specific cable system. 2) A group of interconnected computers. The computers must be capable of transferring data to form a true network--you can't just weld a bunch of computers together. Put that torch down!

Network Address: Network layer address referring to a logical, rather than a physical, network device. Also called a protocol address. Network Element (NE): Any device which is part of an SDH transmission path and serves one or more of the section, line and path-terminating functions. In SDH, the five basic network elements are: add/drop multiplexer; broadband digital cross-connect; wideband digital cross-connect; flexible multiplexer; and regenerator. Network Information Table (NIT): The Network Information Table (NIT) contains network characteristics and shows the physical organization of the transport streams carried on the network. The decoder uses the tuning parameters provided in this table to change channels at the viewer's request when the desired program is not on the current transport stream. Tuning parameters are specific to the type of transmission assigned to the network, whether it be terrestrial, cable or satellite.

Figure: The NIT defines the network organization and provides tuning information for the channels in the network.

Network Layer: Layer 3 of the OSI reference model. This layer provides connectivity and path selection between two end systems. The network layer is the layer at which routing occurs. Corresponds roughly with the path control layer of the SNA model. Network Operating System (NOS): An operating system designed to run across a network. It refers to the operating system that runs on a server, not the client. Network OSes are typically designed to provide access to server resources to clients, making the server function as a file server, print server, or other type of server. Network Topology: The specific physical, i.e., real, logical, or virtual, arrangement of the elements of a network. Common network topologies include a bus (or linear) topology, a ring topology, and a hybrid topology, which can be a combination of any two or more network topologies. Illustrated to the right is a bus topology utilizing tee couplers to connect a series of stations that listen to a single backbone of cable.

Nibble: Four bits or half a byte. NMS (Network Management System): System responsible for managing at least part of a network. An NMS is generally a reasonably powerful and well-equipped computer such as an engineering workstation. NMSs communicate with agents to help keep track of network statistics and resources. NN (Network Node): 1) SNA intermediate node that provides connectivity, directory services, route selection, intermediate session routing, data transport, and network management services to LEN nodes and ENs. The NN contains a CP that manages the resources of both the NN itself and those of the ENs and LEN nodes in its domain. NNs provide intermediate routing services by implementing the APPN PU 2.1 extensions. 2) Neural Network. A network of many simple processors that imitates a biological neural network. Neural networks have some ability to "learn" from experience, and are used in applications such as speech recognition, robotics, medical diagnosis, signal processing, and weather forecasting. Also called an artificial neural network. Node: One computer/machine or address on a network. 1) A terminal of any branch in network topology or an interconnection common to two or more branches in a network. 2) One of the switches forming the network backbone in a switch network. 3) A point in a standing or stationary wave at which the amplitude is a minimum. Nonlinear: A term used for editing and the storage of audio, video and data. Information (footage) is available anywhere on the media (computer disk or laser disc) almost immediately, without having to locate the desired information in a time-linear format. Nonlinear editing: Nonlinear distinguishes the editing operation from the "linear" methods used with tape. Nonlinear refers to not having to edit material in the sequence of the final program, and does not involve copying to make edits.
It allows any part of the edit to be accessed and modified without having to re-edit or re-copy the material that is already edited and follows that point. Nonlinear editing is also non-destructive--the video is not changed, but the list of how that video is played back is modified during editing. Nonlinearity: The deviation from linearity in an electronic circuit, an electro-optic device or a fiber that generates undesired components in a signal. Examples of fiber nonlinearities include SBS (Stimulated Brillouin Scattering), SRS (Stimulated Raman
Scattering), FWM (Four Wave Mixing), SPM (Self-Phase Modulation), XPM (Cross-Phase Modulation), and Intermodulation.

Normalize: A verb used to describe what can be done to data to remove useless or extraneous entries. For example, if you set up a survey with choices A, B, and No Response, and then wanted to report the % of respondents that picked A or B, you could cut out the "No Responses" and thus normalize the data. Another example would be changing the loudness of an MP3 file by analyzing the file for the loudest part and then setting the loudness of the entire file to a percentage of that. That way, if you are putting together a mix file from many separate sources, you get a more uniform loudness. Noise: 1) An undesired disturbance within the frequency band of interest; the summation of unwanted or disturbing energy introduced into a communications system from man-made and natural sources. 2) A disturbance that affects a signal and that may distort the information carried by the signal. 3) Random variations of one or more characteristics of any entity such as voltage, current, or data. Noise Equivalent Power (NEP): The noise of optical receivers, or of an entire transmission system, is often expressed in terms of noise equivalent optical power. Noise Figure (NF): Parameter describing amount of excess noise added by a component that contributes to SNR degradation from device input to device output. Non Dispersion-shifted Fiber (NDSF): The most popular type of single-mode fiber deployed. It is designed to have a zero-dispersion wavelength near 1310 nm. NOS (Network Operating System): An operating system which makes it possible for computers to be on a network, and manages the different aspects of the network. Some examples are Novell NetWare, VINES, Windows for Workgroups, AppleTalk, DECnet, and LANtastic. NOT: An operation that can be executed on a binary string. It returns the opposite of that string. (NOT 1) = 0, (NOT 0) = 1. Thus, (NOT 0011) = 1100. 
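The NOT operation above can be sketched in Python. Because Python integers are arbitrary-precision, `~` alone yields a negative number, so the sketch masks the result to a fixed word width (the function name and the 4-bit width are illustrative choices, not from the glossary):

```python
def bitwise_not(value: int, width: int) -> int:
    # Invert every bit, then keep only `width` bits so the result matches
    # what a fixed-width register would hold.
    mask = (1 << width) - 1
    return ~value & mask

print(bin(bitwise_not(0b0011, 4)))  # 0b1100, matching the glossary's example
```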
Non-return-to-zero (NRZ): Data encoding format where each bit is represented for the entire duration of the bit period as a logic high or a logic low. NTSC: National Television System Committee. A US committee formed in the late 1940s and 1950s, which defined the analog color television broadcast standard used today in North America, Mexico, Japan and South Korea. Standard published by EIA as RS-170A. A total of 525 lines, nominally 60 fields/s interlaced (30 frames per second); total active picture lines 483, reduced to 480 for digital. 60 fields/s was originally selected to match the US mains power frequency, but after color was introduced the rate was reduced by 0.1% to 59.94 fields/s to reduce interference between the 3.58 MHz color subcarrier and the sound FM subcarrier at 4.5 MHz on over-the-air broadcasts. 59.94 fields/s causes problems with editing time-code, which requires some frame numbers to be dropped to keep proper time (drop-frame time-code).
In digital systems, the number of lines in the active picture is 480. Nulls: In an antenna, near zero-level signals of sharp angular width seen in a radiation pattern. The opposite of lobes. NVOD (Near Video On Demand): Rapid access to program material on demand, achieved by providing the same program on a number of channels with staggered start times. Many of the hundreds of TV channels soon to be on offer will be made up of NVOD services. These are delivered by a disk-based transmission server. Nyquist Frequency (Nyquist Rate): The lowest sampling frequency that can be used for analog-to-digital conversion of a signal without resulting in significant aliasing. This frequency is twice the highest frequency contained in the signal being sampled.
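The Nyquist relationship above can be sketched in Python. The function names are illustrative, and the aliasing formula assumes an ideal sampler:

```python
def nyquist_rate(f_max_hz):
    # Minimum sampling rate needed to capture content up to f_max_hz
    return 2 * f_max_hz

def apparent_frequency(f_signal_hz, f_sample_hz):
    # Frequency observed after sampling; it equals the true frequency only
    # when the signal is below the Nyquist frequency (f_sample / 2).
    f = f_signal_hz % f_sample_hz
    return min(f, f_sample_hz - f)

print(nyquist_rate(20000))              # 40000 -- audio to 20 kHz needs >= 40 kHz sampling
print(apparent_frequency(2000, 8000))   # 2000 -- below Nyquist, reproduced faithfully
print(apparent_frequency(5000, 8000))   # 3000 -- above Nyquist, aliased down
```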

-O
OAM (Operations, Administration, and Maintenance): Also called OAM&P. OAM&P (Operations, Administration, Maintenance, and Provisioning): Provides the facilities and personnel required to manage a network. OAN (Optical Access Network): A network technology, based on passive optical networks (PONs), that includes an optical switch at the central office, an intelligent optical terminal at the customer's premises, and a passive optical network between the two, allowing service providers to deliver fiber-to-the-home while eliminating the expensive electronics located outside the central office.

Object: This can refer to the objects in object-oriented programming or the objects in OLE (Object Linking and Embedding). In OLE an object is a piece of a document, a graphic, or some multimedia. In object-oriented programming an object can be a spell checker or a piece of a graphics program used to draw squares or circles. Do you remember the crazy story people used to try to tell about a word processor where you could pick all of your favorite pieces (favorite spell checker, grammar checker, text editor, font manager, etc.) and piece them together to form the ultimate customizable word processor? Well, those pieces are objects. Object Linking and Embedding (OLE): A standard for sharing data between applications. It has been around since Windows 3.1 and continues to improve. For example, if you cut a picture out of Paint and paste it into a word processor document, you are using OLE to properly put the data into your document. Of course, if it doesn't work quite right you can blame OLE, or the program's use of it. OLE allows objects to be linked to and embedded in other documents. Linking creates a link to the actual object; embedding puts a copy of the object into the document. You can usually access the program an object was created with in order to edit the linked or embedded object just by clicking on the object. This is much more advanced than just taking a screenshot of the data you want and pasting it into another program as a graphic that has no relation to the original data. Object-oriented Programming (OOP): This term refers to programming languages that allow you to work with objects. These objects can contain not only data type and data structure information, but also information about how the object can be used by procedures. OC-1 (Optical Carrier Level 1): The optical equivalent of an STS-1 signal. STS-1 (Synchronous Transport Signal Level 1) is the basic SONET building block signal transmitted at 51.84 Mb/s data rate. Refer to SONET and SDH.

OC-N (Optical Carrier Level N): The optical equivalent of an STS-N signal. STS-N (Synchronous Transport Signal Level N) is the signal obtained by multiplexing integer multiples (N) of STS-1 signals together. Refer to SONET and SDH. OC-3 (Optical Carrier 3) - A fiber optic connection that can handle 155 Mbps connection speeds. OC-9 (Optical Carrier 9) - A fiber optic connection that can handle 466 Mbps connection speeds. OC-12 (Optical Carrier 12) - A fiber optic connection that can handle 622 Mbps connection speeds. OC-18 (Optical Carrier 18) - A fiber optic connection that can handle 933 Mbps connection speeds. OC-24 (Optical Carrier 24) - A fiber optic connection that can handle 1.244 Gbps connection speeds. OC-48 (Optical Carrier 48) - A fiber optic connection that can handle 2.488 Gbps connection speeds. OC-96 (Optical Carrier 96) - A fiber optic connection that can handle 4.976 Gbps connection speeds. OC-192 (Optical Carrier 192) - A fiber optic connection that can handle 10 Gbps connection speeds. OC-255 (Optical Carrier 255) - A fiber optic connection that can handle 13.21 Gbps connection speeds. Octet: A group of eight bits, also called a byte. ODBC (Open DataBase Connectivity): A standard API for communicating with database servers. There are different ODBC drivers supporting most of the major database servers, such as Oracle and Microsoft SQL Server. If you program to ODBC you get the advantage of (theoretically) being able to easily use your application on different databases without reprogramming. However, ODBC drivers are not always perfect. ODI (Open Data-link Interface): A protocol-independent structure used by early Novell NetWare clients. It provides support for simultaneous connection to multiple network protocols, much like Microsoft's NDIS did back in the 16-bit client days. O/E (Optical-to-Electrical Converter): A device used to convert optical signals to electrical signals. Also known as OEC.
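The OC-N rates listed above all derive from the 51.84 Mb/s STS-1 base rate, as a quick Python sketch shows (the function name is an illustrative choice):

```python
STS1_MBPS = 51.84  # SONET base (STS-1/OC-1) rate in Mb/s

def oc_rate_mbps(n):
    # OC-N line rate is simply N times the STS-1 rate
    return n * STS1_MBPS

print(round(oc_rate_mbps(3), 2))    # 155.52 -- the "155 Mbps" OC-3 above
print(round(oc_rate_mbps(48), 2))   # 2488.32 -- OC-48
print(round(oc_rate_mbps(192), 2))  # 9953.28 -- the ~10 Gbps OC-192
```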

OEIC (Opto-Electronic Integrated Circuit): An integrated circuit that includes both optical and electrical elements. OEL (Organic Electroluminescent): A type of display technology that allows for flat screens that are very bright and offer wider viewing angles than LCD technology. As well, no backlight is required. OEL technology may one day replace LCD technology. See OLED for a similar, competing technology. OEM (Original Equipment Manufacturer): This acronym is used to denote equipment that is sold to other companies or resellers for integration into systems.
For example, a hard drive manufacturer may sell an OEM hard drive in bulk quantities with no manual or cables with the promise that it will go into full systems--or maybe external enclosures--and be resold. What often happens is that you can pick up OEM products at computer shows for low prices. Large computer companies use OEM monitors with their brand name on them instead of actually building their own monitors. OFDM (Orthogonal Frequency Division Multiplexing): A modulation system which uses a very large number of separate radio frequency carriers, each of which carries a small proportion of the total information content to be sent. Also used in digital audio broadcasting such as DRM, DAB, ISDB-TSB and IBOC, OFDM has good performance in a very strong multipath (ghosting) environment. Terrestrial Digital Television Broadcasting (DTTB) systems such as DVB-T and ISDB-T use Coded OFDM. OFDM Symbol: The transmitted signal for that portion of time when the modulating amplitude and phase state is held constant on each of the equally-spaced carriers in the signal. Off-line (editing): A decision-making process using low-cost equipment, usually to produce an EDL or a rough cut which can then be conformed or referred to in a high-quality on-line suite--so reducing decision-making time in the more expensive on-line environment. While most off-line suites enable shot selection and the defining of transitions such as cuts and dissolves, very few allow settings for the DVEs, color correctors, keyers and layering that are increasingly a part of the on-line editing process. ONC (Open Network Computing): Distributed applications architecture designed by Sun Microsystems, currently controlled by a consortium led by Sun. The NFS protocols are part of ONC. On-chip (on-die): This term is most often used when referring to the L2 cache on a microprocessor. It implies that the L2 cache itself is on the same single piece of silicon as the microprocessor.
On-Demand Streaming: Streaming media content that is transmitted to the client upon request. Ones Density: Scheme that allows a CSU/DSU to recover the data clock reliably. The CSU/DSU derives the data clock from the data that passes through it. In order to recover the clock, the CSU/DSU hardware must receive at least one 1 bit value for every 8 bits of data that pass through it. Also called pulse density. On-line (editing): Production of the complete, final edit performed at full program quality--the buck stops here! Being higher quality than off-line, time costs more, but the difference is shrinking. Preparation in an off-line suite will help save time and money in the on-line. To produce the finished edit, on-line has to include a wide range of tools, offer flexibility to try ideas and accommodate late changes, and work fast to maintain the creative flow and handle pressured situations. OLT (Optical Line Termination): Optical network elements that terminate a line signal. On-Screen Display (OSD): On-screen display of menus for the user's setup of the receiver's
operating parameters. Also refers to display of choices and interaction with the receiver and/or program. Open Architecture: A hardware or software design whose specifications are public. This includes any system architecture whose specifications are published by a standards organization, and even privately designed architectures whose specifications are made public by the designers. The great advantage of open architectures is that anyone can design add-on products for them. By making an architecture public, however, a manufacturer allows others to duplicate its product. Linux, for example, is considered open architecture because its source code is available to the public for free. The opposite of open is closed or proprietary. The fact that a system architecture is open does not imply, however, that it, or parts of it, is free of intellectual property claims or royalties. Open Cable: A project aimed at obtaining a new generation of set-top boxes that are interoperable. These new devices will enable a new range of interactive services to be provided to cable customers. Open Source: Software that can be freely distributed, and must be distributed along with its source code. Thus the source can be changed easily, and the program can be altered to fix bugs or add features. Depending on the Open Source license (see FreeBSD license and GNU Public License), you may be unable to redistribute altered code or charge money for the distribution of the software. Some popular examples of Open Source software are Linux and Mozilla. OS (Operating System): The base program that manages a computer and gives control of functions designed for general-purpose usage--not for specific applications. Common examples are MS-DOS and Windows for PCs, Mac OS 8 for the Apple Macintosh, and UNIX (and its variants IRIX and Linux). For actual use, for example as a word processor, specific application software is run on top of the operating system.
Optical Add/Drop Multiplexer (OADM): A device which adds or drops individual wavelengths from a DWDM system.

Optical Amplifier: A device to amplify an optical signal without converting the signal from optical to electrical and back again to optical energy. The two most common optical amplifiers are Erbium-Doped Fibre Amplifiers (EDFAs), which amplify with a laser pump diode and a section of erbium-doped fibre, and semiconductor laser amplifiers.

Optical Channel Spacing: The wavelength separation between adjacent WDM channels.

Optical Connector: A mechanical or optical device that provides a demountable connection between two fibers or a fiber and a source or detector.

Optical Directional Coupler (ODC): A component used to combine and separate optical power.

Optical disks: Disks using optical techniques for recording and replay of material. These offer large storage capacities in a small area, the most common being the 5-1/4-inch compact disc, which is removable and has rather slower data rates than fixed magnetic disks--but faster than floppies. Write Once, Read Many or "WORM" optical disks first appeared with 2 GB capacity on each side of a 12-inch platter--useful for archiving images. In 1989 the read/write magneto-optical (MO) disk was introduced, which can be re-written around a million times. With its modest size, just 5-1/4 inches in diameter, the ISO standard cartridge can store 325 MB per side--offering low-priced removable storage for over 700 TV pictures per disk. A variant on the technology is the phase-change disk, but this is not compatible with the ISO standard. An uprated MO disk system introduced in 1994 has a capacity of 650 MB per side, 1.3 GB per disk. In 1996 a second doubling of capacity was introduced, offering 2.6 GB on a removable disk. Besides the obvious advantages for storing TV pictures, this is particularly useful where large-format images are used, in print and in film for example. The NEC DiskCam system uses optical disks for storage. Optical Fall Time: The time interval for the falling edge of an optical pulse to transition from 90% to 10% of the pulse amplitude. Alternatively, values of 80% and 20% may be used.

Optical Fiber: A glass or plastic fiber that has the ability to guide light along its axis. The three parts of an optical fiber are the core, the cladding, and the coating or buffer.

Optical Isolator: A component used to block out reflected and unwanted light. Also called an isolator.

Optical Link Loss Budget: The range of optical loss over which a fiber optic link will operate and meet all specifications. The loss is relative to the transmitter output power and affects the required receiver input power.

Optical Path Power Penalty: The additional loss budget required to account for degradations due to reflections, and the combined effects of dispersion resulting from intersymbol interference, mode-partition noise, and laser chirp.

Optical Pump Laser: A shorter wavelength laser used to pump a length of fiber with energy to provide amplification at one or more longer wavelengths. See also EDFA. Optical Return Loss (ORL): The ratio (expressed in dB) of optical power reflected by a component or an assembly to the optical power incident on a component port when that component or assembly is introduced into a link or system. Optical Rise Time: The time interval for the rising edge of an optical pulse to transition from 10% to 90% of the pulse amplitude. Alternatively, values of 20% and 80% may be used.

Optical Signal-to-Noise-Ratio (OSNR): The optical equivalent of SNR. Optical Waveguide: Another name for optical fiber. Opportunistic Data: Data inserted into the remaining available bandwidth in a given transport stream after all necessary bits have been allocated for video and audio services. The data is sent in the first available space, but it is nondeterministic; that is, there's no guarantee when the data will be sent or how long it will take to send a complete file. The SMPTE standard SMPTE 325M could be used to govern the implementation of flow-control mechanisms for opportunistic data within MPEG-2 broadcast systems. This is a request-and-wait protocol, which typically could be implemented in a transmission multiplexer. Normally this would insert null-packet data if required to maintain its internal buffer. But if it sends a SMPTE 325M packet request to the opportunistic data server, those null packets could be replaced by the opportunistic data. The number of packets required is conveyed within the packet request message. These flow-control messages are formatted as DSM-CC sections encapsulated within MPEG-2 transport packets. Some of the ways that SMPTE 325M packets may be linked from the data server are: for high-bandwidth and low-latency messaging, an MPEG-2 transport stream on DVB-ASI or SDTI; or either TCP or UDP on Ethernet. Such systems would also probably need to modify the transport stream's PAT to include the PIDs of the data packets to advise receivers.

OQPSK (Offset Quadrature Phase Shift Keying): QPSK system in which the two bits that make up a QPSK symbol are offset in time by a half-bit period for nonlinear amplification (power amps) of a bandpass spectrum with fewer spectral regrowth concerns. OR: An operation that can be executed on two or more binary strings. OR returns true, or "1", if either of the compared strings contains a 1 at a particular bit position, and a false, or "0", if neither string contains a 1 at that position. For example (0 OR 0) = 0, (0 OR 1) = 1, (1 OR 0) = 1, (1 OR 1) = 1. Thus: (0011 OR 1001) = 1011. Orderwire: A dedicated voice channel used by installers to expedite the provisioning of lines. OSI Reference Model (Open Systems Interconnection Model): Network architectural model developed by ISO and ITU-T. The model consists of seven layers, each of which specifies particular network functions such as addressing, flow control, error control, encapsulation, and reliable message transfer. The lowest layer (the physical layer) is closest to the media technology. The lower two layers are implemented in hardware and software, while the upper five layers are implemented only in software. The highest layer (the application layer) is closest to the user. The OSI reference model is used universally as a method for teaching and understanding network functionality. A way of representing the complexities of computer networking in a 7-layer model, ranging from the physical hardware of networking all the way up to how application programs talk to the network. The 7 layers are: Physical layer, Data link layer, Network layer, Transport layer, Session layer, Presentation layer, and Application layer. The 7-layer OSI model can be used to help diagnose network problems. It is also used as a measurement of how well people know their networking. If you're looking for a job in networking, you should familiarize yourself with the OSI model. 
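The OR operation defined a few entries above maps directly onto Python's built-in `|` operator; the wrapper function here is purely illustrative:

```python
def bitwise_or(a: int, b: int) -> int:
    # Each output bit is 1 if the corresponding bit of either input is 1
    return a | b

print(bin(bitwise_or(0b0011, 0b1001)))  # 0b1011, matching the glossary's example
```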
Orthogonal: This term is used to describe two or more things that are independent of one another. Also, it is a mathematical term that refers to vectors that meet at right angles. OTDR (Optical Time Domain Reflectometer): An instrument that locates faults in optical fibers or infers attenuation by backscattered light measurements.

Output: Anything that comes out of a computer or system, either electronically or physically, as in, "Watch out for that pile of OUTPUT." Overburning: The process of recording more than 74 minutes of audio or more than 650 MB of data onto a CD-R disk. There are software programs that allow you to do this, and now there is media that supports larger storage than the standard CD-R normally holds. Overclocking: The act of running a chip at a higher clock speed than it was specified for. Very often, chips are capable of running faster than they are specified for, and thus can be safely overclocked. You overclock a chip by setting it at a higher bus speed, a higher multiplier, or both. Sometimes you need to set the chip to run at a higher voltage to accomplish this, further increasing the heat output of the chip. Overdrive: A processor upgrade sold by Intel that is used to upgrade an older processor to a newer, faster processor. Overflow: This is when there is more information than can be properly handled. It is often used to describe the production of a number larger than a variable can handle. For instance, if you are expecting 10 digits but receive 11, that's overflow. Overhead: Extra bits in a digital stream used to carry information besides traffic signals. Orderwire, for example, would be considered overhead information. Oversampling: Sampling data at a higher rate than normal to obtain more accurate results or to make it easier to sample.
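The Overflow entry above can be illustrated by simulating a fixed-width register in Python. Real overflow behavior depends on the hardware or language, so this is only a sketch with an assumed 8-bit unsigned variable:

```python
def add_uint8(a: int, b: int) -> int:
    # Simulate an 8-bit unsigned register: results wrap modulo 256,
    # silently discarding any bits that no longer fit.
    return (a + b) & 0xFF

print(add_uint8(250, 10))  # 4, not 260 -- the high bits overflowed
```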

-P
P2P (Peer to Peer): A method of distributing files over a network. Using P2P client software, a client can advertise, send, or receive files with another client. Some P2P file distribution systems require a centralized database of available files (such as Napster), while other distribution systems (like Gnutella) are more decentralized. PABX: Abbreviation for private automatic branch exchange. See PBX. Pack: A pack consists of a pack header followed by zero or more packets. It is a layer in the system coding syntax described in 2.5.3.3 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1. Packet: In data communications, a sequence of binary digits, including data and control signals, that is transmitted and switched as a composite whole. The packet contains data, control signals, and possibly error control information, arranged in a specific format. Packet Data: Contiguous bytes of data from an elementary stream present in a packet. Packet Filter: Anything that filters out network traffic based on a sender's address, receiver's address, and the type of protocol being sent. Some routers support packet filtering, all firewalls do, and some proxy servers do as well. Packet Identifier (PID): A unique integer value used to identify elementary streams of a program in a single- or multi-program MPEG-2 Transport Stream, as described in 2.4.3 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1. Carried within each 4-byte packet header (13 bits). Packet Switching: An efficient method for breaking down and handling high-volume traffic in a network. A transmission technique that segments and routes information into discrete units. Packet switching allows for efficient sharing of network resources, as packets from different sources can all be sent over the same channel in the same bitstream. Padding (audio): A method to adjust the average length of an audio frame in time to the duration of the corresponding PCM samples, by conditionally adding a slot to the audio frame.
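The 13-bit PID described above straddles the second and third bytes of the 4-byte transport packet header. A minimal Python sketch of extracting it (the function name is an illustrative choice):

```python
def parse_pid(packet: bytes) -> int:
    # MPEG-2 TS packets start with sync byte 0x47; the 13-bit PID is the
    # low 5 bits of byte 1 followed by all 8 bits of byte 2.
    if packet[0] != 0x47:
        raise ValueError("missing sync byte")
    return ((packet[1] & 0x1F) << 8) | packet[2]

print(hex(parse_pid(bytes([0x47, 0x1F, 0xFF, 0x10]))))  # 0x1fff (the null-packet PID)
```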
Page: A fixed-size block of memory that is accessed as a unit by a program or hardware. Memory is split into pages to be dealt with more easily. For example, a database program may deal with 2 KB pages of memory. If it needs 2.1 KB of space to store some data, it will use two pages, or 4 KB of data, since you can't split pages up into smaller groupings of memory. Page Fault: This is not an error, as "fault" would usually indicate. It simply means that the computer had to resort to using the swap file as memory. If you are getting a lot of page faults, you should upgrade your system's memory.

Pages Per Minute (PPM): This term usually refers to the number of printed pages a printer can output in a minute. The time is based on some average amount of ink or toner coverage on a page. If you are printing complex graphics on inkjet color printers, expect your printer to function more slowly than its PPM rating, as it has to make up to four passes over each area of a page instead of a single pass for black ink only. Paging: The act of moving pages of memory from RAM to virtual memory on a hard drive. Excessive paging is caused by a lack of actual system memory. In this case the system has to use the hard drive as memory frequently, and performance is degraded. PAL (Phase Alternate Line): An analog encoding system for color television, compatible with analog monochrome systems of the same scanning rates. It was developed by AEG Telefunken Laboratories in Hanover, Germany, principally by Dr Walter Bruch. Used in Europe, parts of Asia and Australia at 625-line/50 and in Brazil at 525-line/60. Broadcasting commenced in the UK (PAL-I on UHF) in September 1967 and in Australia (PAL-B) in March 1975. For 625-line systems (50 fields per second interlaced, 25 pictures per second), only 576 lines compose the actual picture information, the remainder being included in the Vertical Blanking Interval (VBI) (see Item 149). PAL uses a subcarrier modulated by two colour component signals (U and V), similar to the principles used in NTSC, but alternates the phase of one of the colour signals (V) so that the receiver may correct colour errors by adding and subtracting the colour signals on subsequent lines with a delay line. The vector combination of the U and V subcarriers results in a single subcarrier signal where the phase of the subcarrier relays the hue (spectral colour) and the amplitude (length of the vector) the depth or saturation of the colour.
The colour subcarrier at 4,433,618.75 Hz (4.43 MHz for 625/50) has a fixed relationship to the picture line frequency to reduce the visibility of the subcarrier in the picture. The carrier modulation is double-sideband, suppressed carrier, so if there is no colour in a particular part of a picture, e.g. black, grey or white, then there is no subcarrier, further reducing visible artefacts. The system includes a short burst of colour subcarrier in the back-porch of the horizontal blanking period to synchronise a receiver's colour decoder. This colour burst is shifted in phase on alternate lines from 135° to 225°, which also tells the receiver the phase of the V signal. Palette: In 8-bit images or displays, only 256 different colors can be displayed at any one time. This collection of 256 colors is called the palette. In 8-bit environments, all screen elements must be painted with the colors contained in the palette. The 256-color combination is not fixed--palettes can and do change frequently. But at any one time, only 256 colors can be used to describe all the objects on the screen or image. Palmtop: A form factor of a computer that can easily fit in your palm. This usually refers to very small PDAs with no keyboards, but can also refer to larger fold-open PDAs, or micro-laptops with full keyboards.
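The "fixed relationship" between the PAL colour subcarrier and the line frequency can be checked numerically. The quarter-line-offset-plus-25 Hz formula below is the standard one for 625-line PAL; this is a verification sketch, not part of the glossary text itself:

```python
# PAL 625/50 timing: 625 lines at 25 frames/s gives a line frequency of 15625 Hz.
FH = 625 * 25      # line (horizontal) frequency, Hz
FV = 50            # field (vertical) frequency, Hz

# The subcarrier sits at 283.75 line frequencies plus half the field frequency:
# fsc = (1135/4) * fH + fV/2. The quarter-line offset spreads the subcarrier
# dot pattern over successive fields, reducing its visibility in the picture.
fsc = (1135 / 4) * FH + FV / 2
print(fsc)   # 4433618.75, i.e. the 4,433,618.75 Hz figure quoted above
```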


Pan and Scan: The technique used to crop a widescreen picture to conventional 4:3 television ratio, while panning the original image to follow the on-screen action. The process of transferring a movie or other source material to videocassette, DVD, or broadcast so that it fits the 4:3 aspect ratio of the NTSC system, as well as most current TVs. This results in a significant amount of lost picture information, particularly in the width of the image. At the beginning of a movie on videocassette, you'll usually see a disclaimer about the movie having been "...formatted to fit your TV." That means it's been converted to pan-and-scan. Pan and Scanner: One who pans and scans, typically during a live event originating in a widescreen format (16:9) but simulcast in 4:3. PAP (Password Authentication Protocol): Authentication protocol that allows PPP peers to authenticate one another. The remote router attempting to connect to the local router is required to send an authentication request. Unlike CHAP, PAP passes the password and host name or username in the clear (unencrypted). PAP does not itself prevent unauthorized access, but merely identifies the remote end. The router or access server then determines if that user is allowed access. PAP is supported only on PPP lines. Parallel: One transmission path for each bit; that is, many things happening in unison at the same time. It most commonly refers to a computer with multiple processors that can all function independently at the same time. It also refers to cabling or connection standards that use several wires to transmit data. See also serial. Parallel cable: A multi-conductor cable carrying simultaneous transmission of digital data bits. Analogous to the rows of a marching band passing a review point. Parallel data: Transmission of data bits in groups along a collection of wires (called a bus). Analogous to the rows of a marching band passing a review point.
A typical parallel bus may accommodate transmission of one 8-, 16-, or 32-bit word at a time. Parallel Digital: A digital video interface which uses twisted pair wiring and 25-pin D connectors to convey the bits of a digital video signal in parallel. There are various component and composite parallel digital video formats. Parallel Port: The parallel port is found on just about all PCs. It's a 25-pin interface (also called DB-25) that is designed for connection to a printer. Normally the parallel port takes up IRQ 7. In addition to printers, you can connect many other devices, such as scanners and storage. Of course, the parallel port is slow; it can transfer data at a maximum speed of 512 Kbps, or 24 times slower than USB 1.x. Parity: A method of verifying the accuracy of transmitted or recorded data. An extra bit appended to an array of data as an accuracy check during transmission. Parity may be even or odd. For odd parity, if the number of 1s in the array is even, a 1 is added in the parity bit to make the total odd. For even parity, if the number of 1s in the array is odd, a 1 is added in the parity bit to make the total even. The receiving computer checks the parity bit and indicates a data error if the number of 1s does not add up to the proper even or odd total.
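The odd/even parity rule described above can be sketched in a few lines of Python (the function name and the bit-list representation are illustrative):

```python
def parity_bit(bits, even=True):
    """Return the parity bit to append to a sequence of 0/1 data bits.

    For even parity, the total number of 1s (data plus parity) must be even;
    for odd parity, it must be odd.
    """
    ones_mod_2 = sum(bits) % 2
    return ones_mod_2 if even else ones_mod_2 ^ 1

data = [1, 0, 1, 1, 0, 1, 1]         # five 1s: an odd count
print(parity_bit(data, even=True))   # 1 (total becomes six, an even count)
print(parity_bit(data, even=False))  # 0 (total stays five, an odd count)
```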


Parity Check: An error-checking scheme which examines the number of transmitted bits in a block which hold the value of one. For even parity, an overhead parity bit is set to either one or zero to make the total number of transmitted ones in the data block plus parity bit an even number. For odd parity, the parity bit is set to make the total number of ones in the block an odd number. Partition: A section of a hard drive. You must create at least one partition to begin using a new hard drive, and you can create multiple partitions and keep chunks of your data separate. It is also possible to install multiple OSes on these multiple partitions. Usually it's just one OS per partition, but some OSes, like Windows 95/98/Me and Windows NT/2000/XP, for example, are designed to run together in the same partition as long as you use a compatible file system to format the partition. Partitioning: The act of breaking a hard drive up into one or more pieces, or "partitions." Passive Branching Device: A device which divides an optical input into two or more optical outputs.

Passive Device: Any device that does not require a source of energy for its operation. Examples include electrical resistors or capacitors, diodes, optical fiber (photo), cable, wires, glass, lenses, and filters.

Patch: Minor updates to programs that are distributed with only the changes and not the whole program. Imagine an instruction manual that has an extra page stapled in it with a correction of some text for that page. That is what would be considered a patch. Patch cable: The common name for any network cable that is used to connect, or "patch," any two network ports. Patch panel: A group of network ports stuck together for easy accessibility. Usually this panel resides in a wiring closet or server room. Connections are made between this panel and ports on a hub to enable a network connection at a remote port. Each port on the patch panel is connected by a longer cable (often run through the ceiling) to a remote port anywhere in an office that needs a computer hooked up to the network. Path: A logical connection between a point where a service in a VC is multiplexed to
the point where it is demultiplexed. Pathological Test Code: A special test pattern used with DTV and HDTV signals to create the longest strings of zeros and ones over the serial link. This requires the serial transport link to handle much lower frequency components than is typical in a normal data link. Path Overhead (POH): Overhead accessed, generated, and processed by path-terminating equipment. Path Terminating Equipment (PTE): Network elements such as fibre optic terminating systems which can access, generate, and process Path Overhead. Payload: 1. Payload refers to the bytes which follow the header bytes in a packet. For example, the payload of some Transport Stream packets includes a PES_packet_header and its PES_packet_data_bytes, or pointer_field and PSI sections, or private data; but a PES_packet_payload consists of only PES_packet_data_bytes. The Transport Stream packet header and adaptation fields are not payload. 2. The portion of the SDH signal available to carry service signals such as E1 and E3. The contents of a VC. Payload Pointer: Indicates the beginning of a Virtual Container. Payload Capacity: The number of bytes the payload of a single frame can carry. Pay-Per-View (PPV): An event that has an associated viewing cost, and which may be purchased separately from any package or subscription. The ordered events could include movies, special events, such as sporting, or adult programming. The event could be purchased by either impulse PPV by using a television remote (this application requires a continuous land line phone based connection), or over the phone PPV (this application may have additional costs for processing). PBX: Private Branch Exchange. A private phone switch used within a company that allows inter-company phone calls without using outside lines. It also connects to one or more outside POTS lines, which are often partitioned off into outgoing lines, incoming lines, or lines that can be used for both purposes. 
PBXs today use digital connections for inter-company calls, thus you must use digital phones as opposed to standard analog phones (and modems). A PBX is typically a large and costly piece of machinery maintained and/or supported by the company that sold the PBX. There are various options that can be added onto PBXs for call reporting, call management, and voicemail. PC: Personal Computer. This is slang for IBM Personal Computer, or IBM-PC. This is the class of computers that works (so far) on the x86 instruction set, and was first developed by IBM as a means to put a computer in your home. Before that IBM computers were only used in business. After the PC was developed, many clone PC makers began developing them as well, and that has led to the large number of
components that are PC-compatible; but it has also caused some problems when cheap components try to work properly with one another. PC (Physical Contact): Refers to an optical connector that allows the fiber ends to physically touch. Used to minimize backreflection and insertion loss.

PCM: Pulse code modulation. A method by which sound is digitally recorded and reproduced: the analog signal is sampled at regular intervals and each sample's amplitude is quantized and stored as a digital code. On playback, the stored samples are converted back to an analog waveform; varying the playback rate and amplitude allows the PCM sound to be reproduced with a different pitch and loudness. PDF (Portable Document Format): A format developed by Adobe. Adobe makes a freeware Adobe Acrobat Reader program that is available for a variety of platforms. Thus, using a variety of operating systems, you can download and read a PDF document with very little hassle. In addition, the PDF format strives for easy printing, viewing, and compact storage. On PCs .pdf is the file extension for this type of file. PDL (Protocol Definition Language): Special purpose language used to describe network protocols and all the fields within the header. PDU (Protocol data unit): OSI term for packet. Peak Power Output: The output power averaged over that cycle of an electromagnetic wave having the maximum peak value that can occur under any combination of signals transmitted. Peak Wavelength: In optical emitters, the spectral line having the greatest output power. Also called peak emission wavelength. Perceptual Audio Coding: "Perceptual coding" is a method of audio compression that removes portions of the audio signal that cannot be easily perceived by the human ear. Forms of "perceptual coding" are used in popular audio coding formats such as Dolby Digital, DTS, MP3 and AAC. Performance Rating (PR): A measure of processing power. The term was coined by Cyrix to compare its 6x86 processor to Intel's Pentium processor that ran at faster processor speeds. The 6x86 was a more complex chip that could perform as well as Pentium chips at faster clock speeds in some operations. Cyrix continued to use its Performance Rating on future chips until it was bought out by Via. AMD more recently used PR Ratings that were more accepted by the industry for rating its Athlon XP/64 and Opteron chips.
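The sample-and-quantize step behind PCM can be sketched as follows. The 8 kHz rate and 8-bit depth are illustrative choices (telephony-grade), not drawn from any standard cited in this entry:

```python
import math

SAMPLE_RATE = 8000   # samples per second (an assumed, telephony-grade rate)
BITS = 8             # 8-bit quantization gives 256 levels per sample

def pcm_encode(seconds, freq):
    """Sample a sine wave and quantize each sample to an unsigned 8-bit code."""
    n = int(SAMPLE_RATE * seconds)
    codes = []
    for i in range(n):
        x = math.sin(2 * math.pi * freq * i / SAMPLE_RATE)  # amplitude -1.0 .. 1.0
        codes.append(round((x + 1) / 2 * (2 ** BITS - 1)))  # map to 0 .. 255
    return codes

codes = pcm_encode(seconds=0.001, freq=440.0)  # 1 ms of a 440 Hz tone
print(len(codes), min(codes), max(codes))      # 8 samples, all within 0..255
```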
Performance Management: One of five categories of network management defined by ISO for management of OSI networks. Performance management subsystems are
responsible for analyzing and controlling network performance including network throughput and error rates. Peripheral: Any device that is not part of the motherboard, aside from memory and the CPU. For example, video cards, sound cards, modems, and hard drives are peripherals. When speaking about the exterior of a PC, peripheral refers to anything that can be connected to the exterior of a PC, like an LCD monitor or FireWire hard drive. Peripheral Component Interconnect (PCI): This interface was designed to supplant the VL-Bus architecture and provide a standard slot with a reduced size for high-speed peripherals. It normally runs at 33 MHz on a 32-bit bus in a PC, but the specification allows for 64-bit and 66 MHz operation that provides up to 532 MB of data transfer per second. Permanent Virtual Circuit (PVC): A path through a network from one fixed point to another that appears to be a dedicated circuit; however, the path can be changed and rerouted if necessary. The PVC typically travels over routers and connections that are used by other traffic, but appears to be private for all intents and purposes. Also, packets are sent and arrive in order, and do not need to be resequenced. Personal Computer Memory Card International Association (PCMCIA): Also known as PC card. A card/socket with 68 pins commonly found on notebook computers, digital cameras and in digital TVs for expanded facilities. Three types are defined: Type I, Type II and Type III. The thinnest (3.3 mm thick) are Type I, which are primarily used for memory devices like flash RAM. Type II cards are the type normally used in digital TV applications, and the card may have a SmartCard slot inbuilt. Type II cards are 5 mm thick. Common examples are modems and LAN adapters in PCs.
This is also used by DVB's Common Interface (CI) socket on DTV receivers to add additional facilities such as a Conditional Access (CA) system or memory upgrade, or for access to upgrade the receiver's embedded (flash) software. Type III cards are 10.5 mm thick and are used for the devices that require more space, such as disk drives and wireless communication devices. PES (Packetized Elementary Stream): The next stage in the creation of either of the MPEG-2 multiplexes is to convert each elementary stream into a Packetised Elementary Stream (PES). A Packetised Elementary Stream consists entirely of PES-packets, as shown in the figure below.


Figure: Conversion of an elementary stream to a Packetised Elementary Stream. A PES-packet consists of a header and a payload. The payload simply consists of data bytes taken sequentially from the original elementary stream. There is no requirement to align the start of access units and the start of PES-packet payloads. Thus, a new access unit may start at any point in the payload of a PES-packet and it is possible for several small access units to be contained in a single PES-packet. PES-packets may be of variable length subject to a maximum length of 64 kBytes. (There is one exception to this: in the case of a video packetised elementary stream carried in a transport stream, PES-packets may be of unbounded length.) The designer of an MPEG-2 multiplexer may choose to exploit this flexibility in a number of ways. The possibilities include simply using a fixed PES-packet length or varying the length in order to align the start of access units with the start of PES-packet payloads. PES packet: The data structure used to carry elementary stream data. A PES packet consists of a PES packet header followed by a number of contiguous bytes from an elementary data stream. It is a layer in the system coding syntax described in 2.4.3.6 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1. PES packet structure (the packet length is variable):
- packet start-code prefix (3 bytes)
- stream identifier (1 byte)
- PES packet length (2 bytes)
- optional PES header (variable length)
- stuffing bytes (0xFF) (variable length)
- PES packet data bytes
The PES packets are the input of the Program Stream (PS) and the Transport Stream (TS). PES Packet Header: The figure shows the fields comprising a PES-packet header. The first four bytes (i.e. the PES-packet start code prefix plus the stream_id) comprise the PES-packet start code. This combination of 32 bits is guaranteed not to arise in the packetised elementary stream other than at the start of a PES-packet.
(There is a general exception to this; the contents of private data are not constrained by MPEG, and as such they may contain emulations of the PES-packet start code.)
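The fixed part of the header described above (start code prefix, stream_id, PES packet length) is straightforward to parse. A hedged sketch, with a fabricated six-byte header as input:

```python
def parse_pes_header(data: bytes):
    """Parse the fixed fields of a PES-packet header.

    Returns (stream_id, pes_packet_length). Raises ValueError if the
    0x000001 start-code prefix is missing. A sketch only; a real
    demultiplexer also parses the optional PES header that may follow.
    """
    if data[0:3] != b"\x00\x00\x01":
        raise ValueError("not a PES-packet start code")
    stream_id = data[3]
    # A length of 0 signals an unbounded video PES-packet in a transport stream.
    pes_packet_length = int.from_bytes(data[4:6], "big")
    return stream_id, pes_packet_length

# 0xE0 is the stream_id commonly used for the first video elementary stream.
print(parse_pes_header(b"\x00\x00\x01\xe0\x00\x10"))   # (224, 16)
```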


Figure: A PES-packet header.

PFM: Abbreviation for pulse-frequency modulation. Also referred to as square wave FM. Phased Array: An antenna composed of multiple identical radiating elements in a regular arrangement and fed to obtain a prescribed radiation pattern. Phase Modulation (PM): Encoding information onto a carrier waveform by varying the phase of the carrier. Mathematically related and similar to the FM waveform structure because of the relationship between frequency and phase. Generally describes analog modulation. Phase Noise: Rapid, short-term, random fluctuations in the phase of a wave caused by time-domain instabilities in an oscillator. Phase-shift Keying (PSK): 1) In digital transmission, angle modulation in which the phase of the carrier discretely varies in relation either to a reference phase or to the phase of the immediately preceding signal element, in accordance with data being transmitted. 2) In a communications system, the representation of characters, such as bits or quaternary digits, by a shift in the phase of an electromagnetic carrier wave with respect to a reference, by an amount corresponding to the symbol being encoded. Also called biphase modulation, phase-shift signaling. Photodetector: An optoelectronic transducer such as a PIN photodiode or avalanche photodiode. In the case of the PIN diode, it is so named because it is constructed from materials layered by their positive, intrinsic, and negative electron regions.


Photodiode (PD): A semiconductor device that converts light to electrical current. Physical Layer: Layer 1 of the OSI reference model. The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between end systems. Corresponds with the physical control layer in the SNA model. PI: Protocol Identifier. Picture: A source image or reconstructed data for a single frame or two interlaced fields. A picture consists of three rectangular matrices of eight-bit numbers representing the luminance and two color difference signals. Picture Types: There are three types of pictures that use different coding methods: an Intra-coded (I) picture is coded using information only from itself; a Predictive-coded (P) picture is a picture which is coded using motion compensated prediction from a past reference frame or past reference field; a Bidirectionally predictive-coded (B) picture is a picture which is coded using motion compensated prediction from a past and/or future reference frame(s). These three types of pictures are combined to form a group of pictures (GOP).

Picture-in-Picture (PIP): There are two flavors of picture-in-picture: 1-tuner PIP models require that you connect a VCR or other video component to provide the source for your second picture. 2-tuner PIP models have two built-in TV tuners, so you can watch two shows at once using only the TV. Originally, PIP allowed viewing of multiple channels or sources by creating a small inset image overlaid on the main image. With the shift to widescreen displays, the
inset type of PIP is gradually being replaced by "split screen" designs that are sometimes referred to as POP (picture-outside-picture) or PAP (picture-and-picture). PID (Packet Identifier): The identifier for transport packets in MPEG-2 Transport Streams. The PID is a 13-bit field, indicating the type of the data stored in the packet payload. PID value 0x0000 is reserved for the Program Association Table (see Table 2-25). PID value 0x0001 is reserved for the Conditional Access Table (see Table 2-27). PID values 0x0002 to 0x000F are reserved. PID value 0x1FFF is reserved for null packets (see Table 2-3).
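Extracting the 13-bit PID from a transport packet header is a two-byte mask-and-shift. A minimal sketch (the example header bytes are fabricated):

```python
SYNC_BYTE = 0x47   # every 188-byte transport packet starts with 0x47

def transport_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from an MPEG-2 transport stream packet header.

    The PID occupies the low 5 bits of byte 1 and all 8 bits of byte 2.
    """
    if packet[0] != SYNC_BYTE:
        raise ValueError("lost sync: packet does not start with 0x47")
    return ((packet[1] & 0x1F) << 8) | packet[2]

# Header bytes of a null packet: PID 0x1FFF.
print(hex(transport_pid(bytes([0x47, 0x1F, 0xFF, 0x10]))))   # 0x1fff
```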

Pigtail: A short optical fiber permanently attached to a source, detector, or other fiber optic device at one end and an optical connector at the other.

PIN Photodiode: See photodiode. Pipeline: The technique of processing multiple parts of an instruction at the same time. Many processors have two or more instruction pipelines--think of them as automobile assembly lines. As one instruction is executed, the next instruction is being decoded, and the one after that is being fetched from memory (i.e., Ellen welds the frame of one car, while Frank paints another one, and Jimmy puts the tires onto the finished frame of another car). The pipeline is only as fast as its slowest member (i.e., if it takes Jimmy an hour to put on some tires then it doesn't matter that Ellen can weld the frame in 20 minutes; either the line will slow down, or you need three Jimmys for optimal performance). The speed-up you get from having a pipeline is about equal to the number of pipeline stages, though it's slightly less because the first instruction executed must first fill the empty pipeline (i.e., Jimmy can't put on any tires if Ellen is still welding the frame of the first car through the line!). If something happens that empties the pipeline then performance suffers--the penalty
is greater with larger pipelines. For some practical examples, the original PowerPC G4 has a 7-stage pipeline, while the Pentium 4 has a 20-stage pipeline. Processors with smaller pipelines are typically seen as more efficient, but at higher speeds the short pipeline can be limiting. Pillarbox: Describes a frame that the image fails to fill horizontally (a 4:3 image on a 16:9 screen), in the same way that a letterbox describes a frame that the image fails to fill vertically (a 16:9 image on a 4:3 screen). See also: Letterbox and side panels. Pixel: A shortened version of "Picture cell" or "Picture element." The name given to one sample of picture information. Pixel can refer to an individual sample of R, G, B luminance or chrominance, or sometimes to a collection of such samples if they are co-sited and together produce one picture element. Digital TV Pixels are rectangular-shaped, while HDTV Pixels are virtually square- shaped and significantly smaller in size. This allows High-Definition pictures to contain many more horizontal and vertical colored-dots than standard definition pictures. Pixel response time: Response time refers to the amount of time it takes for a single pixel in a video display to switch from active to non-active; it is measured in milliseconds (ms). If a display's response time is too slow, faint motion trails may be visible following fast-moving onscreen objects. For smooth, accurate playback of high-quality video material, look for a response time of 16 ms or less. PK (Public Key): A very secure means to encrypt data. Very computations burdensome, so typically systems use PK to transmit the SK, and use SK for the bulk of the data encryption. Plane Earth Reflection Loss: the loss between two antennas above ground due to reflection from the ground between them. Plane Earth Reflection Loss : the loss between two antennas above ground due to reflection from the ground between them. 
Plant native format: A physical plant's highest video resolution. Plasma: Gas-plasma technology is one of the methods used to create flat-panel TVs. Besides enabling thin, lightweight TVs that can be hung on the wall, plasma offers other advantages. The display consists of two transparent glass panels with a thin layer of pixels sandwiched in between (think of this layer as containing around one million tiny fluorescent bulbs: the pixels). Each pixel is composed of three gas-filled cells or sub-pixels (one each for red, green and blue). A grid of tiny electrodes applies an electric current to the individual cells, causing the gas to ionize. This ionized gas (plasma) emits high-frequency UV rays which stimulate the cells' phosphors, causing them to glow, which creates the TV image. Plasma display: A plasma TV display uses hundreds of thousands of miniature, embedded cells to produce a picture. Each cell equals one pixel (picture element) and has three sub-cells. The three sub-cells are filled with a plasma gas which will 'glow' red, blue, or green (depending on the phosphor coating) when charged electrically. Light from the three "RGB" sub-cells combines to form a single colored pixel on the screen. Development of plasma display technology is ongoing, with
specific areas of concern remaining. Plasma displays use phosphor-coated screens, placing them at high risk of screen 'burn-in'. Plasma displays are not able to reproduce the color black; various shades of gray are the best that can be achieved, which can negatively affect picture quality. The plasma pixel-cells deteriorate over time, causing the picture quality to diminish (fade). Platform: A means of generically grouping like computers. Macintosh computers are a platform; so are PCs running Windows. It's not very specific, and multi-platform support can mean many things. If someone says to you "this application supports multiple platforms," ask that person which ones he or she is talking about. Platter: One of the rigid disks inside a hard drive used to store information. Hard drives typically contain between one and five platters apiece, but can contain more. Platters have historically been made out of aluminum, and more recently glass. PLD (Programmable Logic Device): An integrated circuit that consists of an array of AND and OR gates whose operation can be modified. The programming of the devices is done by blowing fuses on the PLD. Plesiochronous: A network with nodes timed by separate clock sources with almost the same timing. PLL (Phase-Locked Loop): A phase-locked loop (PLL) is an electronic circuit that controls an oscillator so that it maintains a constant phase angle relative to a reference signal. In communications, the oscillator is usually at the receiver, and the reference signal is extracted from the signal received from the remote transmitter. Phase-locked loops are widely used in space communications for coherent carrier tracking and threshold extension, bit synchronization, and symbol synchronization. Plug and Play (PNP): A standard that was supposed to make adding peripherals to your system as easy as plugging them in and using them.
Also referred to as Plug-and-Pray, its biggest contribution, aside from the endless cycle where Windows keeps finding and installing drivers you don't want installed, is the removal of jumpers from many devices. Seriously, though, Plug-and-Play has actually made configuration easier, even if it took a while to get there. Plug-in: A program that can be called within a Web browser when a particular type of data is encountered on an HTML page. If you don't have the proper plug-in program, the data will not be represented properly. PMD (Polarization Mode Dispersion): Light transmitted down a single mode fiber can be decomposed into two perpendicular polarization components. Distortion results due to each polarization propagating at a different velocity. PMD causes pulse spreading as the polarizations arrive at different times. The longer the span the worse the PMD. Total PMD = PMDc x (L)^1/2, where PMDc is the PMD coefficient and L is the length of the fiber. PMDc has the units of ps/(Km)^1/2, that is pico-seconds per root Km. PMD is generally not a factor at OC-48 but will be a factor at OC-192. Corning has stated that they have conducted field measurements on various installed SMF-28 fibers and have typical installed link centered at less than 0.1ps/(Km)^1/2. Beginning in 1994 Corning also implemented a fiber PMD
230

specification of <0.5ps/(Km)^1/2 for SMF-28 and Titan single mode fibers. For OC-192 this level of PMD probably will meet most common span engineering requirements. PN (Pseudorandom Noise): Digital noise generated using a feedback shift register sequence. The data stream is typically used in direct-sequence, spread-spectrum (DS-SS) systems and diagnostic link testing. It looks like a random data pattern, but is actually a periodic one with well defined properties. Point to Point Protocol (PPP): The mode of transport used to connect a computer to the Internet via a dial-up adapter (a modem). Point-to-Point Transmission: Transmission between two designated stations

Point to Point Tunneling Protocol (PPTP): A remote access protocol that allows people to connect easily to their local network through the Internet or some other large network. Conversations are kept private through encryption. See also Virtual Private Network. Pointer: (1) In programming, think of a pointer as an address. The address can point to just about anything, including another pointer. Ultimately, if you follow the trail of pointers you'll probably find some data--pointers are most often used to point to data. The purpose of pointers is so that when you are programming you can pass around a small address that points to some data, instead of passing the actual data around. (2) A part of the SDH overhead that locates a floating payload structure. AU-N pointers locate the payload. TU-M pointers locate floating mode tributaries. All SDH frames use AU pointers; only floating mode virtual containers use TU pointers. Polarization: The direction of the electric field in the lightwave. If the electric field of the lightwave is in the Y axis, the light is said to be vertically polarized. If the electric field of the lightwave is in the X axis, the light is said to be horizontally polarized.

Polarization Maintaining Fiber: Fiber designed to propagate only one polarization of light that enters it.

Polarization Mode Dispersion (PMD): Polarization mode dispersion is an inherent property of all optical media. It is caused by the difference in the propagation velocities of light in the orthogonal principal polarization states of the transmission medium. The net effect is that if an optical pulse contains both polarization
components, then the different polarization components will travel at different speeds and arrive at different times, smearing the received optical signal.
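The square-root-of-length growth given under the PMD entry (Total PMD = PMDc x (L)^1/2) makes span budgeting a one-line calculation. A sketch, using the 0.1 ps/sqrt(km) typical installed coefficient quoted in that entry:

```python
import math

def total_pmd(pmd_coeff, length_km):
    """Total PMD in ps, given a coefficient in ps/sqrt(km) and span length in km.

    Implements Total PMD = PMDc * sqrt(L) from the PMD entry: differential
    group delay grows with the square root of span length.
    """
    return pmd_coeff * math.sqrt(length_km)

# A 400 km span at a 0.1 ps/sqrt(km) coefficient:
print(total_pmd(0.1, 400))   # 2.0 ps of pulse spreading
```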

PON (Passive Optical Network): A broadband fiber optic access network that shares fiber to the home without running individual fiber optic lines between an exchange point, telco CO, or CATV headend and the subscriber's home.

POP (Point-of-Presence): A point in the U.S. network where inter-exchange carrier facilities meet with access facilities managed by telephone companies or other service providers. POP3: Post Office Protocol 3. A standard for client/server transmission of e-mail. An e-mail server holds the e-mail, and you use a POP3 client to fetch the mail from a server. IMAP is a newer e-mail client/server protocol with more options. Point-to-multipoint: An arrangement, either permanent or temporary, in which the same data flows or is transferred from a single origin to multiple destinations; the arrival of the data at all the destinations is expected to occur at the same time or nominally the same time. Portable Font Resource (PFR): A compact, platform-independent format for representing scalable outline fonts, as used in MHP (see also Fonts - Item 47, page 23). The PFR format can provide a very memory-efficient font download mechanism. Tests have shown that the combination of ASCII data plus the scalable PFR data can be more memory-efficient than an equivalent GIF or PNG image with the text coded as a graphic. Bitstream owns the intellectual property for the PFR technology, and the technology has been further commercialised as the TrueDoc format. This is available for use on the Internet (although its adoption is not widespread). The decoding of PFR fonts is built into Netscape Navigator and a free plugin is available for Internet Explorer. The original specification was published in DAVIC Version 1.4.1, Part 9, Annex A. The current specification is published on the Bitstream web site, and is Version 1.1, dated 27th November 2000.

Portable Network Graphics (PNG): An extensible file format for the loss-less, portable, well-compressed storage of raster images. PNG provides a patent-free replacement for GIF and can also replace many common uses of TIFF. Indexed-color, grayscale, and true color images are supported, plus an optional alpha channel for transparency. Sample depths range from 1 to 16 bits. The PNG specification was issued as a W3C Recommendation on 1st October, 1996: http://www.w3.org/TR/REC-png.html

POTS (Plain Old Telephone Service): The regular telephone service that people have in their homes. Newer technologies such as ISDN, digital phones, cellular phones, or DSL are not referred to as POTS.

PPP (Point-to-Point Protocol): Successor to SLIP that provides router-to-router and host-to-network connections over synchronous and asynchronous circuits. Whereas SLIP was designed to work with IP, PPP was designed to work with several network layer protocols, such as IP, IPX, and ARA. PPP also has built-in security mechanisms, such as CHAP and PAP. PPP relies on two protocols: LCP and NCP.

PRC (Primary Reference Clock): In a synchronous network, all the clocks are traceable to one highly stable reference supply, the Primary Reference Clock (PRC). The accuracy of the PRC is better than 1 in 10^11 and is derived from a cesium atomic standard.

P-Frames or P-Pictures (Predicted Pictures): A P-frame is a video frame encoded relative to the past reference frame. A reference frame is a P- or I-frame. The past reference frame is the closest preceding reference frame. This technique is termed forward prediction. P-pictures provide more compression than I-pictures and serve as a reference for future P-pictures or B-pictures. P-pictures can propagate coding errors when they are predicted from prior P-pictures in which the prediction is flawed.
These contain only predictive information (not a whole picture) generated by looking at the difference between the present frame and the previous one. They contain much less data than the I frames and so help towards the low data rates that can be achieved with the MPEG signal. To see the original picture corresponding to a P frame, a whole MPEG-2 GOP has to be decoded. Each macroblock in a P-frame can be encoded either as an I-macroblock or as a P-macroblock. An I-macroblock is encoded just like a macroblock in an I-frame. A P-macroblock is encoded as a 16x16 area of the past reference frame, plus an error term. To specify the 16x16 area of the reference frame, a motion vector is included. A motion vector of (0, 0) means that the 16x16 area is in the same position as the macroblock being encoded. Other motion vectors are relative to that position. Motion vectors may include half-pixel values, in which case pixels are averaged. The error term is encoded using the DCT, quantization, and run-length encoding. A macroblock may also be skipped, which is equivalent to a (0, 0) vector and an all-zero error term. The search for a good motion vector (one that gives a small error term and good compression) is the heart of any MPEG video encoder, and it is the primary reason why encoders are slow.


Preform: The glass rod from which optical fiber is drawn.

Presentation engine: A part of a receiver's hardware and/or software that processes and presents declarative applications consisting of content such as audio, video, graphics, and text, primarily based on presentation rules defined in the presentation engine. It may also respond to formatting information associated with the content, or to script statements, which control presentation behaviour and initiate other processes in response to user input. For example, an HTML browser is a type of presentation engine (it's actually a software application) that displays text and graphic content formatted in HTML.

Presentation Time Stamp (PTS): A field that may be present in a PES packet header that indicates the time that a presentation unit is presented in the system target decoder. Used to synchronise the presentation (display) of decoded material. In MPEG, depending on the GOP length, there can be nearly a second of decoding delay; DTSs and PTSs should be used by a receiver's decoder to align, say, picture and sound at the output. The field size for PTS (33 bits) was chosen so they would not wrap (i.e. return to the same number) in a 24-hour period. The DTS (decoding time stamp) is similar. See also PCR.

Presentation Unit (PU): A decoded Audio Access Unit or a decoded picture.

Principal Planes: In an antenna, the azimuth and elevation plane radiation pattern cuts usually taken through the peak of the beam.

Priority: In the H.222.1 Transport Stream, one of two priorities may be indicated for each PDU. This service only has meaning at a network element. This service is not explicitly supported in the H.222.1 Program Stream. It is not stated here as to which
services are mandatory. An application decides what services are appropriate.

Probability Density Function (PDF): A function that statistically describes the variation of a parameter of interest (called a random variable in stochastic terminology). Its information content is comparable to a continuous-function histogram.

Profile: Two possible meanings of "profile" within the context of IDTV have been discussed within international forums: a) A "design point", i.e. a collection of system attributes that characterizes a legitimate market segment of receiver. A design point indicates that a compliant receiver is capable of running a particular application, if resources permit. b) A guarantee of receiver resources and capabilities available to an application. Such a profile indicates that a compliant receiver will run a particular application. Within the standards community, meaning a) appears to be the more useful, although qualifying alternatives to the use of "profile" are preferred. The MHP definition is: a description of a series of minimum configurations, defined as part of the specification, providing different capabilities of the MHP. It maps a set of functions that characterise the scope of service options. The number of profiles is small. The mapping of functions into resources, and subsequently into hardware entities, is out of the scope of the specification and is left to manufacturers.

Profiles and Levels (MPEG-2): MPEG is applicable to a wide range of applications requiring different performance and complexity. Using all of the encoding tools defined in MPEG, there are millions of combinations possible. For practical purposes, the MPEG-2 standard is divided into profiles, and each profile is subdivided into levels (see Figure below). A profile is basically a subset of the entire coding repertoire requiring a certain complexity. A level is a parameter such as the size of the picture or bit rate used with that profile.
In principle, there are 24 combinations, but not all of these have been defined. An MPEG decoder having a given profile and level must also be able to decode lower profiles and levels. From this information, any conforming MPEG-2 decoder can decide immediately whether it will be able to process the bitstream. Within certain application domains, specific 'profile@level' configurations have been established as mandatory, e.g. 'Main Profile@Main Level' is typically required for digital TV broadcast or DVD storage applications. Four levels are defined: 'Low', 'Main' (SD), 'High-1440' and 'High' (HD), although not every level can be combined with every profile. The profiles defined by MPEG-2 video are as follows: Simple profile: This is for low-cost and low-delay applications, allowing frame sizes up to SD resolution (ITU-R Rec. BT.601) and frame rates up to 30 Hz. Usage of B frames is not allowed. Main profile: This is the most widely used MPEG-2 profile, defined for SD and HD resolution applications in storage and transmission, without providing compatible decoding of different resolutions. All interlaced-specific prediction modes as well as B frames are supported. SNR scalable profile: Similar to Main profile, but additionally allows SNR scalability, at the cost of drift at the base layer; resolutions up to SD are supported. Spatial scalable profile: Allows usage of (drift-free) spatial scalability, also in combination with SNR scalability. High profile: Similar to Spatial scalable profile, but supporting a wider range of levels, and additionally allowing 4:2:2 chrominance sampling. This profile was
primarily defined for upward/downward compatible coding between SD and HD resolutions, allowing embedded bitstreams for up to 3 layers with a maximum of two different spatial resolutions. 4:2:2 profile: This profile extends the Main profile by allowing encoding of 4:2:2 chrominance sampling, and allows larger picture sizes and higher data rates. Multiview profile: This allows encoding of stereoscopic sequences provided in a left-right interleaved multiplex, using the tool of temporal scalability for exploitation of left-right redundancies. The displacement between left and right pictures is estimated and used for disparity compensated prediction, which follows the same principles as motion-compensated prediction. In addition, camera parameters can be conveyed in the stream.
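The profile@level idea above amounts to a table of conformance points a decoder can check a stream against. A hedged sketch: the numeric limits below are the values commonly cited for a few well-known points and should be verified against ISO/IEC 13818-2 before being relied on; the `can_decode` ordering is a simplification of the standard's "must also decode lower profiles and levels" rule.

```python
# Illustrative MPEG-2 profile@level conformance points
# (max luminance frame size, max frame rate, max bit rate in Mbit/s).
# Values are from common secondary references, not the normative tables.
CONFORMANCE = {
    ("Simple", "Main"):     {"size": (720, 576),   "fps": 30, "mbps": 15},
    ("Main", "Low"):        {"size": (352, 288),   "fps": 30, "mbps": 4},
    ("Main", "Main"):       {"size": (720, 576),   "fps": 30, "mbps": 15},
    ("Main", "High-1440"):  {"size": (1440, 1152), "fps": 60, "mbps": 60},
    ("Main", "High"):       {"size": (1920, 1152), "fps": 60, "mbps": 80},
    ("4:2:2", "Main"):      {"size": (720, 608),   "fps": 30, "mbps": 50},
}

def can_decode(decoder_point, stream_point):
    """Rough check that a decoder at one conformance point can handle a
    stream at another, by comparing the tabulated limits."""
    d, s = CONFORMANCE[decoder_point], CONFORMANCE[stream_point]
    return (d["mbps"] >= s["mbps"]
            and d["size"][0] >= s["size"][0]
            and d["size"][1] >= s["size"][1])
```

So a Main Profile@High Level decoder passes the check for a Main Profile@Main Level stream, but not the other way round.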

Program: A program is a collection of program elements. Program elements may be elementary streams. Program elements need not have any defined time base; those that do have a common time base and are intended for synchronized presentation. A program is the collection of coded audio and/or coded video and perhaps other data objects, such as caption data. Such objects need not have a defined time base; however, those that do refer to a common time base for presentation synchronization. In DVB and MPEG terminology such a thing is described as a service, which is a sequence of events each having a start time and duration.

Program Association Table (PAT): A part of a DVB/MPEG transport stream. The PAT lists all the Program Map Table (PMT) information for services that are contained within a single digital MPEG transport stream. It is like a master list of services/programs.

Program Clock Reference (PCR): A time stamp in the Transport Stream from which decoder timing is derived. It is transmitted typically 25 times a second to allow synchronisation of a decoder's system clock (usually 27 MHz) to the original MPEG encoder clock. Decoders may also use it with Presentation Time Stamps (PTS) included with the video and audio to ensure the correct presentation of picture and sound (lip-sync).
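The PCR and PTS arithmetic behind these entries is simple enough to show directly. In MPEG-2 Systems, a PCR is carried as a 33-bit base counting 90 kHz ticks plus a 9-bit extension counting 27 MHz ticks, while PTS/DTS carry only the 90 kHz base; the sketch below works that out, including why a 33-bit 90 kHz counter comfortably outlasts 24 hours before wrapping.

```python
# PCR/PTS time arithmetic for MPEG-2 transport streams.
PCR_BASE_HZ = 90_000        # PTS/DTS and PCR base tick rate
PCR_EXT_HZ = 27_000_000     # system clock / PCR extension tick rate

def pcr_to_seconds(base: int, extension: int) -> float:
    """Convert a PCR (33-bit base, 9-bit extension) pair to seconds.
    One base tick is exactly 300 extension ticks (27e6 / 90e3)."""
    ticks_27mhz = base * 300 + extension
    return ticks_27mhz / PCR_EXT_HZ

def pts_wrap_hours() -> float:
    """Hours before a 33-bit counter of 90 kHz ticks wraps around."""
    return (2 ** 33) / PCR_BASE_HZ / 3600
```

The wrap period comes out at roughly 26.5 hours, which is why the glossary can say the 33-bit field "would not wrap in a 24-hour period".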


Program Element: A generic term for one of the elementary streams or other data streams that may be included in a program.

Program Map Table (PMT): A part of a DVB/MPEG transport stream. For each service in a transport stream there is an associated Program Map Table (PMT). The PMT has to be associated with a program number - known as service_id in the DVB SI. The PMT holds information about the content of a specific service. This includes pointers to the elementary streams that contain the audio, video and other parts included in that service, including, for MHP applications, the Application Information Table (AIT).

Program Stream (PS): In MPEG-2, a multiplex consisting of a concatenation of variable-length PES packets, plus additional system information. Each PES packet carries coded audio, video, or data of one elementary stream. The program stream carries elementary streams of one program having a common time base. The Program Stream is a stream definition which is tailored for communicating or storing one program of coded data and other data in environments where errors are very unlikely, and where processing of system coding, e.g. by software, is a major consideration. Program Streams may be either fixed or variable rate. In either case, the constituent elementary streams may be either fixed or variable rate. The syntax and semantics constraints on the stream are identical in each case. The Program Stream rate is defined by the values and locations of the System Clock Reference (SCR) and mux_rate fields. A prototypical audio/video Program Stream decoder system is depicted in Figure Intro. 5. The architecture is not unique: system decoder functions, including decoder timing control, might equally well be distributed among elementary stream decoders and the channel-specific decoder, but this figure is useful for discussion. The prototypical decoder design does not imply any normative requirement for the design of a Program Stream decoder.
Indeed, non-audio/video data is also allowed, but is not shown.

The prototypical decoder for Program Streams shown in Figure Intro. 5 is composed of System, Video, and Audio decoders conforming to Parts 1, 2, and 3, respectively, of ISO/IEC 13818. In this decoder, the multiplexed coded representation of one or more audio and/or video streams is assumed to be stored or communicated on some
channel in some channel-specific format. The channel-specific format is not governed by this Recommendation | International Standard, nor is the channel-specific decoding part of the prototypical decoder. The prototypical decoder accepts as input a Program Stream and relies on a Program Stream Decoder to extract timing information from the stream. The Program Stream Decoder demultiplexes the stream, and the elementary streams so produced serve as inputs to Video and Audio decoders, whose outputs are decoded video and audio signals. Included in the design, but not shown in the figure, is the flow of timing information among the Program Stream decoder, the Video and Audio decoders, and the channel-specific decoder. The Video and Audio decoders are synchronized with each other and with the channel using this timing information. Program Streams are constructed in two layers: a system layer and a compression layer. The input stream to the Program Stream Decoder has a system layer wrapped about a compression layer. Input streams to the Video and Audio decoders have only the compression layer. Operations performed by the prototypical decoder either apply to the entire Program Stream ("multiplex-wide operations"), or to individual elementary streams ("stream-specific operations"). The Program Stream system layer is divided into two sub-layers, one for multiplex-wide operations (the pack layer), and one for stream-specific operations (the PES packet layer).

Program Specific Information (PSI): In MPEG-2, normative data necessary for the demultiplexing of transport streams and the successful regeneration of programs. PSI defines four tables: the PAT (Program Association Table), PMT (Program Map Table), NIT (Network Information Table), and CAT (Conditional Access Table). PSI consists of normative data which is necessary for the demultiplexing of Transport Streams and the successful regeneration of programs, and is described in 2.4.4 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1.
1) Program Association Table (PAT): for each service in the multiplex, the PAT indicates the location (the Packet Identifier (PID) values of the Transport Stream (TS) packets) of the corresponding Program Map Table (PMT). It also gives the location of the Network Information Table (NIT).
2) Conditional Access Table (CAT): the CAT provides information on the CA systems used in the multiplex; the information is private (not defined within the present document) and dependent on the CA system, but includes the location of the EMM stream, when applicable.
3) Program Map Table (PMT): the PMT identifies and indicates the locations of the streams that make up each service, and the location of the Program Clock Reference fields for a service.
4) Network Information Table (NIT): the location of the NIT is defined in the present document in compliance with the ISO/IEC 13818-1 [1] specification, but the data format is outside the scope of ISO/IEC 13818-1 [1]. It is intended to provide information about the physical network. The syntax and semantics of the NIT are defined in the present document. The NIT conveys information relating to the physical organization of the multiplexes/TSs carried via a given network, and the characteristics of the
network itself.
5) MPEG-2 PSI Tables: Because viewers may choose from multiple programs on a single transport stream, a decoder must be able to quickly sort and access video, audio and data for the various programs. Program Specific Information (PSI) tables act as a table of contents for the transport stream, providing the decoder with the data it needs to find each program and present it to the viewer. PSI tables help the decoder locate audio and video for each program in the transport stream and verify Conditional Access (CA) rights. The tables are repeated frequently (for example, 10 times/second) in the stream to support the random access required by a decoder turning on or switching channels. The following table gives a basic overview of the PSI tables.
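The PAT's role as a "master list" can be made concrete with a small parser. This is a sketch, not a robust demultiplexer: it assumes it is handed one complete PAT section (pointer_field already skipped), does not verify the CRC, and the sample section bytes in the usage check are synthetic.

```python
# Sketch of parsing the program_number -> PID loop of an MPEG-2 PAT
# section (table_id 0x00). program_number 0 points at the NIT PID;
# any other program_number points at that service's PMT PID.

def parse_pat(section: bytes) -> dict:
    """Return {program_number: PID} from one complete PAT section.
    CRC_32 is not checked in this sketch."""
    # section_length is the low 12 bits of bytes 1-2, counting every
    # byte after itself up to and including the CRC.
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    start = 8                          # skip the 8-byte fixed header
    end = 3 + section_length - 4       # stop before the 4-byte CRC_32
    programs = {}
    for i in range(start, end, 4):     # each loop entry is 4 bytes
        program_number = (section[i] << 8) | section[i + 1]
        pid = ((section[i + 2] & 0x1F) << 8) | section[i + 3]  # 13-bit PID
        programs[program_number] = pid
    return programs
```

A real decoder would collect PAT sections from PID 0x0000, then fetch each listed PMT PID to find the audio and video PIDs for the chosen service.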

Progressive: Short for progressive scanning. A system of video scanning whereby the lines of a picture are transmitted consecutively, as in the computer world.

Progressive Scan: In progressive scanning, all the horizontal scan lines are scanned onto the screen in a single pass. The digital TV and HDTV standards accept both interlaced-scan and progressive-scan broadcast and display methods. Progressive scan has long been used in computer monitors.

Protocol: A set of formal rules describing how to transmit data, especially across a network. Low-level protocols define the electrical and physical standards to be observed, bit- and byte-ordering, and the transmission and error detection and correction of the bit stream. High-level protocols deal with the data formatting, including the syntax of messages, the terminal-to-computer dialogue, character sets, sequencing of messages, etc.

PSIP (Program and System Information Protocol): A part of the ATSC digital television specification that enables a DTV receiver to identify program information
from the station and use it to create easy-to-recognize electronic program guides for the viewer at home. The PSIP generator inserts data related to channel selection and electronic program guides into the ATSC MPEG transport stream. See also: Electronic Program Guide (EPG)

PSIP Table for ATSC: ATSC's Program and System Information Protocol (PSIP) tables provide the decoder with the necessary information to access all channels and events available on an MPEG-2/ATSC network. They provide tuning information that allows the decoder to quickly change channels at the click of the remote control. In addition, they include provisions for viewer-defined program ratings, and they provide event descriptions to support the creation of the Electronic Program Guide (EPG). The following table gives a basic overview of the PSIP tables.

PSTN (Public Switched Telephone Network): General term referring to the variety of telephone networks and services in place worldwide. Sometimes called POTS.

Pulse: A current or voltage which changes abruptly from one value to another and back to the original value in a finite length of time. Used to describe one particular variation in a series of wave motions. The parts of a pulse include the rise time, fall time, pulse width, and pulse amplitude. The period of a pulse refers to the amount of time between pulses.

Pulse-Code Modulation (PCM): A technique in which an analog signal, such as a voice, is converted into a digital signal by sampling the signal's amplitude and expressing the different amplitudes as binary numbers. The sampling rate must be at least twice the highest frequency in the signal.

Push & Pull: Pull refers to delivering content to a user upon request; web browsing is considered "pull". Push refers to delivering content to a user without an explicit request, either regardless of user interest or within a previously indicated general field of interest; the user chooses what to view but does not control what is sent. Broadcast television is considered "push".

PVC (Permanent Virtual Circuit or Connection): A virtual circuit that is permanently established. PVCs save the bandwidth associated with circuit establishment and tear-down in situations where certain virtual circuits must exist all the time. In ATM terminology, called a permanent virtual connection. Compare with SVC.
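The PCM definition above, sampling plus expressing amplitudes as binary numbers, can be sketched directly. This toy encoder generates a sine tone; the tone frequency, sample rate, and 8-bit depth are illustrative choices, and the uniform mid-rise mapping is one of several possible quantizer designs.

```python
# Toy PCM encoder: sample a sine tone and quantize each amplitude
# into an n-bit binary code. Enforces the Nyquist condition stated
# in the glossary entry (sample rate >= 2 x highest frequency).
import math

def pcm_encode(freq_hz: float, sample_rate_hz: float,
               n_samples: int, bits: int = 8):
    if sample_rate_hz < 2 * freq_hz:
        raise ValueError("sampling below the Nyquist rate causes aliasing")
    levels = 2 ** bits
    codes = []
    for n in range(n_samples):
        x = math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)  # -1..+1
        code = min(levels - 1, int((x + 1) / 2 * levels))          # 0..levels-1
        codes.append(code)
    return codes
```

With 8-bit codes at an 8 kHz sample rate this is essentially the format of classic telephone PCM, minus the companding (mu-law/A-law) real telephony applies.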


-Q-

Quadrature-phase (Q): The relative quadrature-phase (90°) component of a quadrature modulation system.

QA (Quality Assurance): The practice of checking hardware, software, or systems for defects, identifying such defects, and then checking to make sure that such defects are corrected when future revisions of software or hardware are ready for testing. QA workers typically work closely with the people who develop hardware and software, and often program exhaustive scripts to automate checking and identify problems.

QAM (Quadrature Amplitude Modulation): A downstream digital modulation technique that conforms to the International Telecommunications Union (ITU) standard ITU-T J.83 Annex B, which calls for 64 and 256 quadrature amplitude modulation (QAM) with concatenated trellis coded modulation, plus enhancements such as variable interleaving depth for low latency in delay-sensitive applications such as data and voice. Using 64-QAM, a cable channel that today carries one analog video channel could carry 27 Mbps of information, or enough for multiple video programs. Using 256-QAM, the standard 6 MHz cable channel would carry 40 Mbps.

QPSK (Quadrature Phase Shift Keying): QPSK is a digital phase modulation technique used for sending data over coaxial cable networks. Since it's both easy to implement and fairly resistant to noise, QPSK is used primarily for sending data from the cable subscriber upstream to the Internet.

Quality of Service (QoS): In a fiercely competitive market like digital television, customers expect the highest level of service quality. To meet customers' expectations, operators must provide uninterrupted access to error-free programming on what could be several hundred services. In addition, providers advertising special services such as pay-per-view programming must be sure they can deliver these services flawlessly.
Issues that seriously affect customer satisfaction can be divided into two categories:

Service Disruption
1) Service interruption: Service interruptions occur when errors in the transport stream prevent the set-top box from decoding the stream. When this happens, the set-top box either stops decoding certain programs or crashes completely. In place of programming, the viewer sees only a blank screen. This type of error produces an immediate increase in the volume of calls to service centers.
2) Unavailable pay-per-view services: Customers who have paid a premium to view a certain event, such as a movie or a boxing match, are especially unforgiving of service disruptions. For pay-per-view services, disruptions most often occur when there are errors in the Entitlement Control Messages (ECMs). These carry the keys used by the decoder to descramble the audio and video for a pay-per-view event.

Poor Service Quality
1. Poor picture or sound quality: (1) Digital television customers usually complain of poor picture or sound quality in terms of: jumbled pictures, where macroblocks from multiple video frames are mixed into a single frame; blockiness, where the edges of macroblocks become visible; freezing, where a single image remains on the screen for longer than the appropriate frame interval; lip sync, where images, such as moving lips, are not aligned with the associated sound; and clicks or gaps in the audio. (2) These problems, all related to the transport stream structure, can be caused by: bit errors in the transport stream; dropped video or audio packets; overflow of buffers in the set-top box; incorrect PCR/PTS/DTS time stamps; invalid encoding of data in the audio or video stream; or inadequate compression of the video or audio signal.
2. EPG not navigable
3. Ineffective rating blockouts
4. Delayed channel changing
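The 27 Mbps and 40 Mbps figures quoted in the QAM entry can be sanity-checked with a little arithmetic: an M-point constellation carries log2(M) bits per symbol, multiplied by the channel symbol rate. The symbol rates below are the commonly cited ITU-T J.83 Annex B values and should be treated as illustrative; the raw rates computed here shrink to roughly the quoted payload figures once Reed-Solomon and trellis FEC overhead is removed.

```python
# Rough raw bit-rate check for J.83 Annex B cable QAM.
import math

# Commonly cited Annex B symbol rates (symbols/second) -- illustrative.
SYMBOL_RATES = {64: 5.056941e6, 256: 5.360537e6}

def qam_raw_bitrate(order: int) -> float:
    """Raw channel bit rate before FEC overhead:
    log2(constellation size) bits/symbol x symbol rate."""
    bits_per_symbol = math.log2(order)
    return bits_per_symbol * SYMBOL_RATES[order]
```

64-QAM comes out near 30.3 Mbps raw (about 27 Mbps payload after FEC) and 256-QAM near 42.9 Mbps raw (about 40 Mbps payload), matching the glossary's figures.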

Quake: A revolutionary 3D, first-person perspective game full of blood and gore. Designed by id Software, it is now perhaps the most well-known and most user-modified line of 3D shooter games. Other competitors include the Unreal series and Doom (also from id Software).

Quantization: The process of sampling an analog waveform to convert its voltage levels into digital data. As part of the spatial encoding process in video compression, it reduces the precision of DCT coefficients according to their visual importance.

Quantizing: The process of converting the voltage level of a signal into digital data before or after the signal has been sampled.

Quantizing Error: Inaccuracies in the digital representation of an analog signal. These errors occur because of limitations in the resolution of the digitizing process.

Quantizing Noise: The noise (deviation of a signal from its original or correct value) which results from the quantization process. In serial digital video, a granular type of noise that occurs only in the presence of a signal.

Queue: A data construct that is first-in, first-out (FIFO). Think of a check-out line at a supermarket, or any type of line formed by people in society. Queues are used throughout the architecture of computers and are necessary in programming languages to accomplish certain tasks.

QuickTime: Apple Computer's system-level software architecture supporting time-based media, giving a seamless integration of video, sound, and animation. For Macintosh and Windows computers.
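The quantization and quantizing-error entries can be illustrated with the DCT-coefficient case mentioned under Quantization. This is a minimal sketch: the 2x2 block and the weighting matrix are made-up illustrative values, standing in for the 8x8 blocks and standardized weighting matrices of MPEG-2.

```python
# Sketch of visually weighted quantization of DCT coefficients:
# divide each coefficient by its step size and round. The difference
# between the original and the dequantized value is the quantizing error.

def quantize_block(coeffs, quant_matrix, scale):
    """Map coefficients to small integer levels; larger steps for
    visually less important (higher-frequency) coefficients."""
    return [[round(c / (q * scale)) for c, q in zip(crow, qrow)]
            for crow, qrow in zip(coeffs, quant_matrix)]

def dequantize_block(levels, quant_matrix, scale):
    """Reconstruct approximate coefficients from the integer levels."""
    return [[lv * q * scale for lv, q in zip(lrow, qrow)]
            for lrow, qrow in zip(levels, quant_matrix)]
```

Note how the small high-frequency coefficients quantize to zero: that discarded detail is exactly the quantizing error, and runs of zeros are what run-length coding later exploits.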


-R-

Rack: A metal frame used for holding server computers and networking equipment. A standard rack is 19" wide. There are wider racks that are 23" wide, built to hold wider equipment. Racks range in height, but are typically either 24 or 42 rack units, with each unit being 1.75".

Rack Units (U): This refers to the distance of 1.75" between screw holes in a rack used to hold server equipment. Most computer equipment racks are between 24 and 42 rack units in height. If you are shopping for rack equipment, keep in mind the height of the rack you have to work with and the "U" rating of the equipment. A height of 1U indicates equipment that can fit into a 1.75" tall rack space, 2U indicates twice that at 3.5", and so forth with 3U, 4U, and up.

Radiation Pattern: A graphical representation in either polar or rectangular coordinates of the spatial energy distribution of an antenna.

Radio Button: A GUI term denoting that the user has a group of selections to make, and that he or she can only make one selection at a time. As it relates loosely to a radio, you can only listen to one station at a time. In the computer world, an example of where a radio button may be used is on a website where you need to pick which type of payment you're using: Visa, Mastercard, Discover, or AMEX. Usually you'll have radio buttons, and you can pick only one method of payment. Radio buttons are represented by a group of small circles. When you click on one of them you get a dot on your selection. There is always a default selection.

Radio cell: The area served by a radio base station in a cellular or cordless communication system. This is where the term "cellular" came from. Cell sizes range from a few tens of metres to several kilometres. See Cell.

Radio Frequency (RF): The range of frequencies from 10 kilocycles per second to 300,000 megacycles per second in which radio waves can be transmitted. It can also refer to a frequency used for a specific radio station.
Radio in the Loop (RITL): A telephone system where subscribers are connected to the Public Switched Telephone Network using radio signals rather than copper wire for part or all of the connection between the subscriber and the switch. Includes cordless access systems, proprietary fixed radio access, and fixed cellular systems. Synonymous with Fixed Radio Access and Wireless Local Loop systems.

RAID (Redundant Array of Independent (or Inexpensive) Disks): A category of disk drives that employ two or more drives in combination for fault tolerance and performance. RAID disk drives are used frequently on servers but aren't generally necessary for personal computers. There are a number of different RAID levels:
Level 0 -- Striped Disk Array without Fault Tolerance: Provides data striping (spreading out blocks of each file across multiple disk drives) but no redundancy. This improves performance but does not deliver fault tolerance. If one drive fails, all data in the array is lost.
Level 1 -- Mirroring and Duplexing: Provides disk mirroring. Level 1 provides twice the read transaction rate of single disks and the same write transaction rate as single disks.
Level 2 -- Error-Correcting Coding: Not a typical implementation and rarely used, Level 2 stripes data at the bit level rather than the block level.
Level 3 -- Bit-Interleaved Parity: Provides byte-level striping with a dedicated parity disk. Level 3, which cannot service simultaneous multiple requests, is also rarely used.
Level 4 -- Dedicated Parity Drive: A commonly used implementation of RAID, Level 4 provides block-level striping (like Level 0) with a parity disk. If a data disk fails, the parity data is used to create a replacement disk. A disadvantage of Level 4 is that the parity disk can create write bottlenecks.
Level 5 -- Block-Interleaved Distributed Parity: Provides data striping at the block level, with the parity information distributed across all disks. This results in excellent performance and good fault tolerance. Level 5 is one of the most popular implementations of RAID.
Level 6 -- Independent Data Disks with Double Parity: Provides block-level striping with two independent sets of parity data distributed across all disks.
Level 0+1 -- A Mirror of Stripes: Not one of the original RAID levels; two RAID 0 stripes are created, and a RAID 1 mirror is created over them. Used for both replicating and sharing data among disks.
Level 10 -- A Stripe of Mirrors: Not one of the original RAID levels; multiple RAID 1 mirrors are created, and a RAID 0 stripe is created over these.
Level 7 -- A trademark of Storage Computer Corporation that adds caching to Levels 3 or 4.
RAID S -- EMC Corporation's proprietary striped-parity RAID system used in its Symmetrix storage systems.
Soft RAID -- A RAID system implemented by low-level software in the host system instead of a dedicated RAID controller. While saving on hardware, operation consumes some of the host's power.

RAM (Random Access Memory): A temporary, volatile memory into which data can be written or from which data can be read by specifying an address.
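The parity-based RAID levels described above (3, 4, 5, 6) all rest on the same XOR property: the parity of a stripe equals the XOR of its data blocks, so any single missing block can be rebuilt by XORing the parity with the surviving blocks. A minimal sketch of that core idea, ignoring striping layout, controllers, and double parity:

```python
# XOR parity as used by parity RAID levels, on byte strings.
from functools import reduce

def parity_block(blocks):
    """XOR parity across the data blocks of one stripe
    (all blocks must be the same length)."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

def rebuild(surviving_blocks, parity):
    """Reconstruct a single missing block of a stripe: XOR the parity
    with every surviving data block."""
    return parity_block(surviving_blocks + [parity])
```

This is also why RAID 5 survives exactly one failed drive per stripe: with two blocks missing, a single XOR equation is no longer solvable, which is what RAID 6's second, independent parity addresses.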
Raman Amplifier: An optical amplifier based on Raman scattering, which generates many different wavelengths of light from a nominally single-wavelength source by means of lasing action or by the beating together of two frequencies. The optical signal can be amplified by collecting the Raman-scattered light.

Random Access: The process of beginning to read and decode the coded bit stream at an arbitrary point.

Random Jitter (RJ): Random jitter is due to thermal noise and may be modeled as a Gaussian process. The peak-to-peak value of RJ is of a probabilistic nature, and thus any specific value requires an associated probability.


Rainbow Effect: A visual artifact associated with single-chip DLP-based rear- and front-projection displays. Fortunately, only a few people see these momentary flashes of color, and fewer still find these "rainbows" to be distracting. For those unlucky few, rainbows typically occur when the viewer's eyes dart away from the screen. Rainbows result from DLP's use of a color wheel, which causes the three primary colors (red, green, and blue) to be projected sequentially, rather than continuously. The latest DLP-based displays incorporate improved color-wheel technology, which minimizes this effect.

Rate Conversion: 1. The process of converting from one digital sample rate to another. The digital sample rate for the component digital video format is 13.5 MHz. For the composite digital video format, it is either 14.3 MHz for NTSC or 17.7 MHz for PAL. 2. Often used incorrectly to indicate both resampling of digital rates and encoding/decoding.

Rating Region Table (RRT): An ATSC PSIP table that defines ratings systems for different regions or countries. The table includes parental guidelines based on Content Advisory descriptors within the transport stream.

Raw: Any data that has not been translated. It also refers to sending print data directly to a printer instead of translating it first to an EMF file. Printing in raw format uses less CPU power for translation, but skips the print spooler and makes printing more dependent on the application and less on the OS.

Rayleigh Scattering: The scattering of light that results from small inhomogeneities of material density or composition.

Read Before Write: A feature of some videotape recorders that plays back the video or audio signal off of tape before it reaches the record heads, sends the signal to an external device for modification, and then applies the modified signal to the record heads so that it can be re-recorded onto the tape in its original position.
Read Only: This means that the object cannot be written to, which means that you can't save any modifications you make to it. An operating system can have a file set to read-only for security purposes, and certain media, like a CD-ROM or DVD-ROM, are read-only by design and cannot be altered. Read Only Memory (ROM): Memory containing a program, data, or information about the device that has been programmed onto the chip at the factory and cannot normally be changed. There is no easy mechanism for writing to ROM; it is usually possible, but deliberately difficult, because ROM is not intended to be written to. Real-time: Tasks that are time-critical and must happen in our time (as opposed to the much faster computer time). Real-time is the highest priority you can give to a thread or process in the Windows environment. Even though this is accounted for in the operating system, Windows has historically been a poor real-time OS. The user interface should always be real-time. If you move the mouse, your pointer should move on screen immediately. Unfortunately, Windows can bog down enough so that this doesn't happen. Other real-time applications can include medical applications and control of real-world systems. For example, you want to make sure that the train switches tracks on time and not too late because your server is bogged down.


Real-time Communications: A communication service (usually two-way) in which the information sent is received instantly by the other party in a continuous stream. As an example, telephone calls and videoconferencing are real-time, while database access and e-mails are not. RealVideo: Popular software for streaming audio and video over the Internet, made by RealNetworks of Seattle, Washington. Rear-projection TV: Typically referred to as "big-screen" TVs, these large-cabinet TVs generally have built-in screens measuring at least 40". Up until a few years ago, all rear-projection TVs used three CRTs to create images. Using CRTs resulted in cabinets that were relatively heavy and bulky, and nearly always designed as floorstanding TVs. Newer digital microdisplay rear-projection technologies, like DLP, LCD, and LCoS, make possible the more compact, lightweight "tabletop" big-screen TVs. Reboot: To restart a computer. It comes from "boot," which is a term that means starting the operating system on the computer. When you are using your computer and have weird problems, the tech guy comes over and says, "Time to reboot." If you are the tech guy you'll find yourself saying this far too often. Reception (RX): The act of downloading or receiving data. Often, the term "RX" is used on indicator lights on modems or network cards to indicate that data is flowing into the device. Receiver Sensitivity: The minimum acceptable value of received power needed to achieve an acceptable BER or performance. It takes into account power penalties caused by use of a transmitter with worst-case values of extinction ratio, jitter, pulse rise times and fall times, optical return loss, receiver connector degradations, and measurement tolerances. The receiver sensitivity does not include power penalties associated with dispersion, or backreflections from the optical path; these effects are specified separately in the allocation of maximum optical path penalty.
Sensitivity usually takes into account worst-case operating and end-of-life (EOL) conditions. Reclocking: The process of clocking digital data with a regenerated clock. Recursion (recursive): A programming term describing a process that defers its operations as it runs, growing in memory size. Once all operations are deferred, there is a computation process that runs through the operations and figures out the values. If a recursive process is interrupted, there is no easy way to recover it, as the operation is constructed on the fly with no fixed amount of variables. Contrast this with an iterative process. Redundancy: In a redundant system, if you lose part of the system, you can continue to operate. For example, if you have two power supplies and one takes over if the other one dies, that is a form of redundancy. You can take redundancy to extreme levels, but you spend more money. Redundancy is one reason that high-end server machines cost 10 times more than desktop PCs. Reduced Instruction Set Computing (RISC): This type of chip uses a simpler instruction set than CISC chips to get its work done. This results in more instructions
that need to be processed by the processor, but they are easier to process and regular in size, so the chips can process more instructions per clock cycle than a CISC chip. Chip philosophers argue the benefits of RISC vs. CISC, but there is no clear-cut winner. See also CISC for additional info. Reed-Solomon Coding (RS Coding): Powerful nonbinary block FEC (Forward Error Correction) approach used to achieve high coding gains, with unique properties well-suited to QAM systems and burst-error tolerance. Reed-Solomon coding is a coding scheme which works by first constructing a polynomial from the data symbols to be transmitted and then sending an over-sampled plot of the polynomial instead of the original symbols themselves. Because of the redundant information contained in the over-sampled data, it is possible to reconstruct the original polynomial and thus the data symbols even in the face of transmission errors, up to a certain degree of error. Reed-Solomon R-S codes (Irving Reed and Gustave Solomon, 1960) employ polynomials derived from Galois fields to encode and decode block data. They are a subclass of q-ary BCH codes, which in turn generalize Hamming codes. They are especially effective in correcting burst errors and are widely used in audio, CD, DAT, DVD, direct broadcast satellite, terrestrial digital broadcast, and other applications. Reflection: In an antenna, the redirection of an impinging RF wave from a conducting surface. Reflector Element: In an antenna, a parasitic element located in a direction other than forward of the driven element intended to increase the directivity of the antenna in the forward direction. Refraction: 1) The changing of direction of a lightwave in passing through a boundary between two dissimilar media, or in a graded-index medium where refractive index is a continuous function of position. 2) The bending of an RF wave while passing through a non-uniform transmission medium.
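The "over-sampled polynomial" idea in the Reed-Solomon entry above can be sketched without Galois-field arithmetic. Real RS codecs work over a finite field such as GF(2^8); the sketch below substitutes exact rational arithmetic purely as an illustration, and the function names are made up for the example: k data symbols become the coefficients of a degree-(k-1) polynomial, it is sampled at n > k points, and any k surviving samples recover the data by Lagrange interpolation.

```python
from fractions import Fraction

def rs_encode(data, n):
    """Conceptual RS-style encoding: treat the k data symbols as the
    coefficients of a degree k-1 polynomial and oversample it at
    x = 0..n-1.  (Real RS codes do this over a Galois field.)"""
    return [sum(c * x**i for i, c in enumerate(data)) for x in range(n)]

def rs_decode(points, k):
    """Recover the k coefficients from any k (x, y) samples via
    Lagrange interpolation, using exact rational arithmetic."""
    xs, ys = zip(*points)
    coeffs = [Fraction(0)] * k
    for j in range(k):
        # Build the Lagrange basis polynomial for sample j
        # (ascending coefficient list) and its denominator.
        basis = [Fraction(1)]
        denom = Fraction(1)
        for m in range(k):
            if m == j:
                continue
            denom *= xs[j] - xs[m]
            # Multiply basis by (x - xs[m]).
            shifted = [Fraction(0)] + basis
            for i in range(len(basis)):
                shifted[i] -= xs[m] * basis[i]
            basis = shifted
        scale = Fraction(ys[j]) / denom
        for i in range(len(basis)):
            coeffs[i] += scale * basis[i]
    return [int(c) for c in coeffs]

# Three data symbols oversampled at five points; any three points
# (here x = 1, 3, 4 - as if samples 0 and 2 were lost) recover them.
code = rs_encode([3, 1, 4], 5)
print(code)                                        # [3, 8, 21, 42, 71]
print(rs_decode([(1, 8), (3, 42), (4, 71)], 3))    # [3, 1, 4]
```

The redundancy is exactly the n - k extra samples, which is why RS codes tolerate bursts: it does not matter which samples are destroyed, only how many.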

Refractive Index Gradient: The description of the value of the refractive index as a function of distance from the optical axis along an optical fiber diameter. Also called refractive index profile. Refresh Rate: This is how often something is rewritten or updated. With CRT monitors, the refresh rate of the screen is very important to provide an image that appears stable when viewed closely. Any refresh rate under 75Hz is considered inadequate. Most CRT monitors today can work at rates of 75Hz and over in all of their supported resolutions, but that was not always the case. The refresh rate of the monitor is also controlled by your video card, which can be another limiting factor.
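The refraction definitions above reduce, at a single boundary, to Snell's law: n1·sin(theta1) = n2·sin(theta2). A quick sketch (the function name and the glass index of 1.5 are illustrative choices):

```python
import math

def refraction_angle(n1: float, n2: float, theta1_deg: float):
    """Snell's law: n1*sin(t1) = n2*sin(t2).  Returns the refracted
    angle in degrees, or None beyond the critical angle, where total
    internal reflection occurs instead of refraction."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

# Light entering glass (n ~ 1.5) from air at 30 degrees bends
# toward the normal:
print(round(refraction_angle(1.0, 1.5, 30.0), 1))   # 19.5
```

Total internal reflection in the dense-to-sparse direction is the same mechanism that keeps light guided inside an optical fiber core.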

Regenerative Repeater: A repeater, designed for digital transmission, in which digital signals are amplified, reshaped, retimed, and retransmitted.

Regenerator: Device that restores a degraded digital signal for continued transmission; also called a repeater. Register: A CPU contains registers that it uses for temporary storage of data. You can think of a register as a kind of L0 cache. If the processor has to add the values of two memory locations, it may first fetch each from memory and place them into its registers, and then do an operation that adds the two registers together. Registry: This was first introduced in Microsoft's Windows NT, and then to consumers with Windows 95, as a centralized repository for all the miscellaneous settings that were stored in .ini files in Windows 3.x. Now, all current versions of Windows contain a Registry. Managing the Registry is better than managing a group of .ini files, but still has many problems. Relay: OSI terminology for a device that connects two or more networks or network systems. A data-link layer (Layer 2) relay is a bridge; a network layer (Layer 3) relay is a router. Remote Access: A means of contacting a remote network and becoming a node on that network. The problem with remote access is that the connection to the network is generally slow, such as over the Internet, so copying or working with large files is slower than when you are directly connected. The advantage is that the minimum hardware requirement is one server or remote access piece of hardware that can control many remote access lines. See also remote control. Remote Alarm Indication (RAI): A code sent upstream on an En circuit as a notification that a failure condition has been declared downstream. (RAI signals were previously referred to as Yellow signals.) Remote Control: A means of taking control of a remote computer, usually on a remote network. Programs like Symantec's pcAnywhere and a host of others offer this ability. 
Usually you connect to the remote computer via the Internet or a phone line, and control the computer by interacting with your computer while seeing on your screen a representation of what is actually happening on the remote computer. The downside to remote control is that each remote connection requires a separate computer. You can't easily have multiple people controlling one computer remotely and seeing a useful representation of what is on the remote display. See also remote access. Remote Defect Indication (RDI): A signal returned to the transmitting Terminating Equipment when the receiving Terminating Equipment detects a Loss of Signal, Loss of Frame, or AIS defect. RDI was previously known as Far End Receive Failure (FERF).

Remote Error Indication (REI): An indication returned to a transmitting node (source) that an errored block has been detected at the receiving node (sink). REI was previously known as Far End Block Error (FEBE). Remote Failure Indication (RFI): A failure is a defect that persists beyond the maximum time allocated to the transmission system protection mechanisms. When this situation occurs, an RFI is sent to the far end and will initiate a protection switch if this function has been enabled. Remultiplex Support: In the H.222.1 Transport Stream, mechanisms to assist in the addition and removal of individual elementary streams are provided. This service only has meaning at a network element. This service is not explicitly supported in the H.222.1 Program Stream. Rendering: The process of non-realtime drawing of a picture, relying on computer processing speed for graphics and compositing. Repeater: 1) Equipment which receives and re-transmits a DVB-T signal. It cannot change the TPS bits, and thus the cell_id. 2) A device used to repeat a signal to send it further away or to many more devices. The earliest network hubs were also called repeaters, as they had no switching capability and simply repeated the same signal to all ports.

Request For Comments (RFC): A document that was created to define accepted or proposed Internet standards or standards of practice. The acceptance of a document as an RFC is governed by the IETF (Internet Engineering Task Force). RFCs begin life as Internet-Drafts, which may be written by anyone. The IETF decides which Internet-Drafts become RFCs. RFCs are most often created by groups of interested individuals hoping to create a standard for interoperability over the Internet; however, they also describe suggested practices, such as how to set up a mail server so that it cannot be used to relay spam. Reserved: The term "reserved", when used in the clauses defining the coded bit stream, indicates that the value may be used in the future for ISO-defined extensions. Unless otherwise specified within ITU-T Rec. H.222.0 | ISO/IEC 13818-1, all reserved bits shall be set to '1'. Reserved_future_use: When used in the clause defining the coded bit stream, this term indicates that the value may be used in the future for ETSI-defined extensions. NOTE: Unless otherwise specified within the present document, all "reserved_future_use" bits shall be set to "1". Resolution: 1) Detail. In digital video and audio, the number of bits (four, eight, 10, 12, etc.) determines the resolution of the digital signal. Four bits yields a resolution of one in
16. Eight bits yields a resolution of one in 256. Ten bits yields a resolution of one in 1,024. Eight bits is the minimum acceptable for broadcast television. 2) A measure of the finest detail that can be seen, or resolved, in a reproduced image. While influenced by the number of pixels in an image (for high definition approximately 2,000 x 1,000, broadcast NTSC TV 720 x 487, broadcast PAL TV 720 x 576), the pixel numbers do not define ultimate resolution but merely the resolution of that part of the equipment. The quality of the lenses, display tubes, film processes and film scanners, etc., used to produce the image on the screen must all be taken into account, as must the measurement methods used. This is why a live broadcast of the Super Bowl looks better than the same broadcast recorded and played back from VHS, even though both are NTSC or PAL. HDTV:

720p - The picture is 1280x720 pixels, sent at 60 complete frames per second. 1080i - The picture is 1920x1080 pixels, sent at 60 interlaced frames per second (30 complete frames per second). 1080p - The picture is 1920x1080 pixels, sent at 60 complete frames per second.
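The bit-depth figures in sense 1 of the Resolution entry (4 bits gives one in 16, 8 bits one in 256, 10 bits one in 1,024) are simply powers of two:

```python
# Quantization levels for common video/audio bit depths: 2**bits.
for bits in (4, 8, 10, 12, 16):
    print(f"{bits:2d} bits -> 1 in {2 ** bits}")
```

Each extra bit doubles the number of distinguishable levels, which is why the step from 8-bit to 10-bit video is a fourfold gain in quantization resolution, not a 25% one.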

The sharpness of a video image, signal or display, generally described either in terms of "lines of resolution" or pixels. The resolution you see depends on two factors: the resolution of your display and the resolution of the video signal. Since video images are rectangular, there is both horizontal resolution and vertical resolution to consider. Vertical resolution: The number of horizontal lines (or pixels) that can be resolved from the top of an image to the bottom. (Think of hundreds of horizontal lines or dots stacked on top of one another.) The vertical resolution of the analog NTSC TV standard is 525 lines. But some lines are used to carry other data like closed-captioning text, test signals, etc., so we end up with about 480 lines in the final image, regardless of the source. So all of the typical NTSC sources (VHS VCRs, cable and over-the-air analog TV broadcasts, non-HD digital satellite TV, DVD players, camcorders, etc.) have a vertical resolution of 480 lines. DTV (Digital Television) signals have vertical resolution that ranges from 480 lines for SDTV, to 720 or 1080 lines for true HDTV. Horizontal resolution: The number of vertical lines (or pixels) that can be resolved from one side of an image to the other. Horizontal resolution is a
trickier concept, because while the vertical resolution of all analog (NTSC) video sources is the same (480 lines), the horizontal resolution varies according to the source. Some examples for typical sources: VHS VCRs (240 lines), analog TV broadcasts (330 lines), non-HDTV digital satellite TV (up to 380 lines), and DVD players (540 lines). DTV signals have horizontal resolution that ranges from 640 lines for SDTV, to 1280 lines (for 720p HDTV) or 1920 lines (for 1080i HDTV). Resolution independent: Term used to describe the notion of equipment that can operate at more than one resolution. Dedicated TV equipment is designed to operate at a single resolution although some modern equipment, especially that using the ITU-R 601 standard, can switch between the specific formats and aspect ratios of 525/60 and 625/50. By their nature, computers can handle files of any size, so when applied to imaging, they are termed resolution independent. As the images get bigger so the amount of processing, storage and data transfer demanded increases--in proportion to the resulting file size. So, for a given platform, the speed of operation slows. Other considerations when changing image resolution may be reformatting disks, checking if the RAM is sufficient to handle the required size of file, allowing extra time for RAM/disk caching and how to show the picture on an appropriate display. Response Time: The elapsed time between the end of an inquiry or demand on a computer system and the beginning of a response; for example, the length of the time between an indication of the end of an inquiry and the display of the first character of the response at a user terminal. Return loss: A measure of the ratio of signal power transmitted into a system to the power reflected or returned. It can be thought of as an echo that is reflected back by impedance changes in the system. Any variation in impedance from the source results in some returned signal. 
Expressed in decibels, Return Loss is a measure of VSWR. Real-life cabling systems do not have perfect impedance structure and matching, and therefore have a measurable return loss. Twisted pairs are not completely uniform in impedance. Changes in twist, distance between conductors, cabling handling, cable structure, length of link, patch cord variation, varying copper diameter, dielectric composition and thickness variations, and other factors all contribute to slight variations in cable impedance. In addition, not all connecting hardware components in a link may have equal impedance. At every connection point there is the potential for a change in impedance. Each change in the impedance of the link causes part of the signal to be reflected back to the source. Return loss is a measure of all the reflected energy caused by variations in impedance of a link relative to a source impedance of 100 ohms. Each impedance change contributes to signal loss (attenuation) and directly causes return loss. Return Path: A communications connection that carries signals from the subscriber back to the operator. The return path allows for interactive television and on-demand services, such as pay-per-view, video on demand, and interactive games.
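The reflection behaviour described in the Return loss entry can be put in numbers for a single impedance step: the reflection coefficient is Gamma = (ZL - Z0)/(ZL + Z0), and return loss in dB is -20*log10(|Gamma|). A sketch, using the 100-ohm twisted-pair reference impedance mentioned above (the function name is an illustrative choice):

```python
import math

def return_loss_db(z_load: float, z0: float = 100.0) -> float:
    """Return loss in dB caused by one impedance discontinuity.

    Gamma = (ZL - Z0)/(ZL + Z0) is the fraction of the incident
    signal voltage reflected back toward the source;
    RL = -20*log10(|Gamma|), so a bigger RL means a better match.
    """
    gamma = (z_load - z0) / (z_load + z0)
    return -20.0 * math.log10(abs(gamma))

# A 110-ohm segment on a nominal 100-ohm link reflects only a
# small fraction of the signal:
print(round(return_loss_db(110.0), 1))   # 26.4
```

Note the sign convention: a perfectly matched link would have an infinite return loss, while a badly mismatched one (say 150 ohms on a 100-ohm link) returns far less.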

Return-to-zero (RZ): Data encoding format where each bit is represented for only a portion of the bit period as a logic high or a logic low, and what remains of the duration of the bit returns to logic zero. Rewriteable: This means that what was written can be erased so that it can be written to again. CD-R media is not rewriteable, but CD-RW media is. It is also more expensive. Floppy disks and hard drives are rewriteable as well. RFC (Request for Comment): The name of the result and the process for creating a standard on the Internet. New standards are proposed and published online as a Request For Comments. The Internet Engineering Task Force (IETF) is a consensus-building body that facilitates discussion, and eventually a new standard is established, but the reference number/name for the standard retains the acronym RFC; e.g., the official standard for e-mail is RFC 822. RGB: Red, Green, and Blue are the additive colour primaries used in colour television. (Subtractive processes, such as printing, use the complementary primaries CMYK: Cyan, Magenta, Yellow, black.) Red, Green, and Blue have been chosen to match the physiology of the human eye, where the colour receptors, the cones, are most sensitive at these three points in the visible spectrum. Consequently the eye can be fooled into seeing yellow by a fine mixture of red and green dots on a TV picture tube. The picture information is linearly represented by red, green, and blue tristimulus values (RGB) lying in the range of 0% (reference black) to 100% (reference white). RGB signals are found internally in cameras and display devices, and in very high grade graphics or electronic cinema work, but rarely otherwise. For all other applications RGB signals are converted (mathematically matrixed) into three other signals which are essentially the monochrome picture (Y) and two colour difference signals (R-Y & B-Y).
This is done for a number of reasons, including legacy, optimizing transmission, and masking noise and distortion effects. The Y part is the luminance, also known as the monochrome or black-and-white component. The Y signal always ranges between 0% (black) and 100% (white), with intermediate values being shades of grey. Nominally for standard definition (PAL), Y = 30% R + 59% G + 11% B. But for HDTV colorimetry, ITU-R BT.709 specifies nominally for high definition (1080-line), Y = 21% R + 72% G + 7% B. The most common occurrence of analog RGB signals is in consumer television equipment with Scart connectors (see item 125) and computer VGA connectors. (See also Y, Pb, Pr connectors, item 156.) Cameras and telecines have red, green and blue receptors, the TV screen has red, green and blue phosphors illuminated
by red, green and blue guns. Much of the picture monitoring in a production center is in RGB. RGB is digitized with 4:4:4 sampling, which occupies 50 percent more data than 4:2:2. RF (Radio Frequency) jack: Sometimes referred to as a "75-ohm coaxial" connection, this kind of jack is commonly used for bringing signals from antennas and other sources outside the home to components with some type of tuner, such as cable boxes, HDTV tuners, VCRs, satellite receivers, TVs, etc. A 75-ohm coaxial cable can carry video and stereo audio information simultaneously. However, as a way of making a video connection between components, RF is inferior to composite, S-video, and component video. RF cable connectors (often called "F-type" connectors) either screw onto the 75-ohm jack, or just push on to connect. There are different types of coaxial cable. Standard coaxial cable is stamped "RG-59"; higher-quality "RG-6" cable features better shielding, and exhibits less high-frequency loss over longer runs. (For connecting DBS satellite systems, it's essential to use RG-6 cable to correctly pass the entirety of the digital signal.) Ringing: An oscillatory transient on a signal occurring as a result of bandwidth restrictions and/or phase distortions. A type of ringing causes ghosting in the video picture. Ring topology: A network whose cabling forms a closed loop, with the client machines attached around the ring. If you break the ring, all computers in the ring lose connectivity.
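The nominal luma weightings quoted in the RGB entry above can be applied directly. A sketch, keeping the source's 0-100% scale (the function name and the `hd` flag are illustrative, not from any standard's API):

```python
def luminance(r, g, b, hd=False):
    """Approximate luma Y from RGB values on a 0..100 (%) scale.

    Coefficients follow the nominal weightings in the RGB entry:
    SD (PAL/BT.601-style): Y = 0.30R + 0.59G + 0.11B
    HD (BT.709, nominal):  Y = 0.21R + 0.72G + 0.07B
    """
    if hd:
        return 0.21 * r + 0.72 * g + 0.07 * b
    return 0.30 * r + 0.59 * g + 0.11 * b

# Reference white (100% on all three primaries) gives 100% Y under
# either matrix, since each coefficient set sums to 1.0:
print(round(luminance(100, 100, 100), 6))           # 100.0
print(round(luminance(100, 100, 100, hd=True), 6))  # 100.0
# Pure green dominates Y, which is why green noise is most visible:
print(round(luminance(0, 100, 0), 6))               # 59.0
```

The heavy green weighting reflects the eye's peak sensitivity, and it is why the colour-difference signals (R-Y, B-Y) rather than G-Y are transmitted: G can be recovered from Y and the other two.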

RLE (Run Length Encoding): A compression scheme. A run of pixels or bytes of the same color or value is coded as a single value recording the color or byte value and the number of duplications in the run. See Run-Length Coding (RLC). RMS (Root Mean Square): Mathematically, this refers to the square root of the average of the squares of a group of numbers. You will most often see RMS referring to the power rating of an audio amplifier. RMS power is different from peak power, and is a more accurate rating of power than peak. If an amplifier lists a power rating in watts and doesn't say whether it is RMS or peak, you can assume it is peak, as peak ratings are typically much higher than RMS power. Manufacturers of high quality audio components will always list the RMS power of their amplifiers. Roaming: Ability of a cordless or mobile phone user to travel from one cell to another, with complete communications continuity. Supported by a cellular network of radio base stations. Roaming is also the term given for inter-network operability, i.e. moving from one network provider to another (internationally).
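The (value, count) pairing behind RLE can be shown in a few lines; the function names are illustrative:

```python
def rle_encode(data: bytes) -> list:
    """Run-length encode: each run of identical bytes becomes a
    (value, count) pair."""
    runs = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs) -> bytes:
    """Expand (value, count) pairs back into the original bytes."""
    return b"".join(bytes([v]) * n for v, n in runs)

# A large flat area (e.g. a solid-colour region of a graphic)
# collapses to a single pair:
flat = bytes([7] * 1000)
print(rle_encode(flat))   # [(7, 1000)]
```

This also demonstrates the failure mode noted in the Run-Length Coding entry: on noisy camera pictures the runs are mostly of length one, so every byte costs a pair and the "compressed" output is larger than the input.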

ROM: Read only memory. A memory device that is programmed only once with a permanent program or data that cannot be erased. Router: Network layer device that uses one or more metrics to determine the optimal path along which network traffic should be forwarded. Routers forward packets from one network to another based on network layer information. Occasionally called a gateway (although this definition of gateway is becoming increasingly outdated). Compare with gateway. RP-125: A SMPTE parallel component digital video recommended practice. Now SMPTE 125M. RS-232 (Recommended Standard 232): This is the de facto standard for communication through PC serial ports. It can refer to cables and ports that support the RS-232 standard. Common usage would include, "Hey, Jimmy! Why don't you take your RS-232 cable and stick it right in your RS-232 port!" RS-422: A medium range (typically up to 300 m/1000 ft or more) balanced serial data transmission standard. Data is sent using an ECL signal on two twisted pairs for bi-directional operation. The full specification includes 9-way D-type connectors and optional additional signal lines. RS-422 is widely used for control links around production and post areas for a range of equipment. RTP (Real-Time Transport Protocol): An IETF protocol designed to provide end-to-end network transport functions for applications transmitting real-time data, such as audio, video, or simulation data, over multicast or unicast network services. RTP provides services such as payload type identification, sequence numbering, time-stamping, and delivery monitoring to real-time applications. RTSP (Real Time Streaming Protocol): Enables the controlled delivery of real-time data, such as audio and video. Sources of data can include both live data feeds, such as live audio and video, and stored content, such as pre-recorded events.
RTSP is designed to work with established protocols, such as RTP and HTTP. Run-Length Coding (RLC): A system for compressing data. The principle is to store a pixel value along with a message detailing the number of adjacent pixels with that same value. This gives a very efficient way of storing large areas of flat color and text but is not so efficient with pictures from a camera, where the random nature of the information, including noise, may actually mean that more data is produced than was needed for the original picture. Running Status Table (RST): The DVB SI table that indicates a change of scheduling information for one or more events. It saves broadcasters from having to retransmit the corresponding EIT when a change occurs. This table is particularly useful if events are running late.

-S-
Sampling: Process by which an analog signal is measured, often millions of times per second for video, in order to convert the analog signal to digital. The official sampling standard for standard definition television is ITU-R 601. For TV pictures eight or 10 bits are normally used; for sound, 16 or 20 bits are common, and 24 bits are being introduced. The ITU-R 601 standard defines the sampling of video components based on 13.5 MHz, and AES/EBU defines sampling of 44.1 and 48 kHz for audio. Sampling Frequency: The number of discrete sample measurements made in a given period of time. Often expressed in megahertz for video. Sampling Rate: The number of discrete sample measurements made in a given period of time. Often expressed in megahertz (MHz) for video. SAV (Start of Active Video): A synchronizing signal used in component digital video. Save (v. to save): This term describes the movement of data from a computer's volatile DRAM to the non-volatile hard disk or other media. Basically, when you save something you are making sure that if your computer loses power you will be able to get back to the information that you have saved. This information may be saved to a floppy disk, a hard drive, or a CD-R. The information also can be saved to these devices connected to your local machine, or to a server on a network. SAW Filter (Surface Acoustic Wave Filter): Filter or oscillator technology characterized by its reliance on acoustic energy and electrical/acoustic transducers, used to take advantage of impressive bandpass filter shape factors that are difficult to achieve with more traditional filter technologies. SBE: Society of Broadcast Engineers. S-Band: The wavelength region between 1485 nm and 1520 nm used in some CWDM and DWDM applications. Scalability: The ability to grow with your needs. A scalable software package means that you only buy the parts you need, and that it has the ability to grow by adding on as you do. Scalable: Applications or systems that are able to scale to large amounts of users. For example, a database that completely locks out every other user when someone is using it is NOT scalable. The computer system that runs ATM and bank transactions must be highly scalable. It is often misspelled as "Scaleable," and is used in many product names. Scalar: This value, in mathematical terms, is any single real (not imaginary) number.
The term superscalar is used by the semiconductor industry and refers to the ability to issue multiple instructions in a single clock cycle. Just think of scalar as a single number and you can see how this term was derived. See also vector and superscalar processor. Scaler: Circuitry that converts a video signal to a resolution other than its original format. Scaling can involve upconversion or downconversion, and may also include a conversion between progressive- and interlaced-scan formats. A scaler can be built into a TV, HDTV tuner, or DVD player, or may be a standalone component. Scalable Coding: The ability to encode a visual sequence so as to enable the decoding of the digital data stream at various spatial and/or temporal resolutions. Scalable compression techniques typically filter the image into separate bands of

spatial and/or temporal data. Appropriate data reduction techniques are then applied to each band to match the response characteristics of human vision. Scalable Video: Refers to video compression that can handle a range of bandwidths, scaling smoothly over them. Scaling: Scaling is the act of changing the resolution of an image. SCADA (Supervisory Control and Data Acquisition System): An industrial measurement and control system consisting of a central host or master (usually called a master station, master terminal unit or MTU); one or more field data gathering and control units or remotes (usually called remote stations, remote terminal units, or RTU's); and a collection of standard and/or custom software used to monitor and control remotely located field data elements. Contemporary SCADA systems exhibit predominantly open-loop control characteristics and utilize predominantly long distance communications, although some elements of closed-loop control and/or short distance communications may also be present. Similar to Distributed Control Systems, but usually the field elements are more geographically dispersed. Scandisk: A Microsoft program that first shipped with DOS version 6, replacing the venerable chkdsk.exe program. Technically the program is scandisk.exe. It is available in MS-DOS version 6.x, and non-NT versions of Windows. It added the ability to do a surface scan for physical defects on drive media, and a nicer UI than chkdsk, which had no graphical UI. Windows NT/2000/XP still uses chkdsk. Scanner: A device used to copy an image from a physical source (e.g., a photograph) into a computer. Scanning: (1) Video compression: the process of transmitting the most significant DCT coefficients first, followed by less-significant coefficients, and finally an indication in the code that the remaining coefficients are all zero. (2) Network monitoring: a monitoring method that uses a single probe to monitor several transport streams. 
The probe rotates through the transport streams, monitoring each stream for a given period of time. SCART Plug: Large flat multi-pin plug-socket usually found on larger domestic TVs and European VCRs for modular interconnect. A mandatory requirement for TV equipment sold in Europe. It includes video and audio inputs and outputs as well as wide-screen signalling - all into a single interconnect cable. There are several configurations depending on whether the socket is intended to go to a VCR or TV set. The video connections are normally high quality RGB component and/or S-Video, although some manufacturers may only provide composite PAL video. Some manufacturers enable users to change the type of output through a set-up menu. Component video interconnection is preferred to bypass the resolution loss and artefacts caused by composite PAL. (Also known as Peritel peripheral television interconnect or Euroconnector. Outside Europe and in the USA, a 3-lead Y, Pb, Pr component video interconnect with separate audio leads, is preferred.) Scattering: 1) The change of direction of light rays or photons after striking small
particles. It may also be regarded as the diffusion of a light beam caused by the inhomogeneity of the transmitting material.

2) The random redirection of RF energy from irregular conducting surfaces. Scrambling: 1. To transpose or invert digital data according to a prearranged scheme in order to break up the low-frequency patterns associated with serial digital signals. 2. The digital signal is shuffled to produce a better spectral distribution. See also: Encryption. Screensaver: A program that displays an image, animation, or just a blank screen on a computer after no input has been received for a certain length of time. Screensavers were originally designed so that images would not be burned into the phosphors of older CRT screens, but soon turned into a source of entertainment. Complex screensavers can use a good deal of CPU power to run, and should be kept off of servers. Screensavers are largely unnecessary today as CRT screens are much more resistant to burn-in. Script: A group of commands usually stored in a file and run one at a time so that you don't have to type them in one at a time. Script is the newer, sexier term for batch. Don't talk about batch files anymore! It's all scripts and scripting languages. We're on the INTERNET, for goodness' sake! SCSI (Small Computer System Interface): SCSI was originally called SASI for "Shugart Associates System Interface" before it became a standard. The original standard is now called "SCSI-1" to distinguish it from SCSI-2 and SCSI-3, which include specifications of Wide SCSI (a 16-bit bus) and Fast SCSI (10 MB/s transfer). SCSI-1 has been standardised as ANSI X3.131-1986 and ISO/IEC 9316. SCSI-2: Small Computer System Interface, version 2. SCSI-2 shares the original SCSI's asynchronous and synchronous modes and adds a "Fast SCSI" mode (<10 MB/s) and "Wide SCSI" (16-bit, <20 MB/s, or rarely 32-bit). SCSI-3: Small Computer System Interface, version 3. An ongoing standardisation effort to extend the capabilities of SCSI-2.
SCSI-3's goals are more devices on a bus (up to 32); faster data transfer; greater distances between devices (longer cables); more device classes and command sets; structured documentation; and a structured protocol model. In SCSI-2, data transmission is parallel (8, 16, or 32 bits wide). This gets increasingly difficult with higher data rates and longer cables because of varying signal delays on different wires. Furthermore, wiring cost and drive power increase with wider data words and higher speed. This has triggered the move to serial interfacing in SCSI-3. By embedding clock information into a serial data stream, signal delay problems are eliminated. Driving a single signal also consumes less driving power and reduces connector cost and size.
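The scrambling described above (transposing data to break up low-frequency patterns on a serial link) can be sketched as a self-synchronizing LFSR scrambler. The polynomial x^9 + x^4 + 1 used here is an assumption for illustration; it is the generator commonly associated with serial digital video interfaces.

```python
def scramble(bits, state=0):
    """Self-synchronizing scrambler (assumed polynomial x^9 + x^4 + 1).

    Each output bit is the input XORed with feedback from the 9-bit
    shift register; with a running non-zero state this tends to break
    up long runs and spread the signal's spectrum.
    """
    out = []
    for b in bits:
        fb = ((state >> 8) ^ (state >> 3)) & 1   # taps at x^9 and x^4
        o = b ^ fb
        state = ((state << 1) | o) & 0x1FF        # shift output bit in
        out.append(o)
    return out

def descramble(bits, state=0):
    """Inverse operation; self-synchronizing, so no seed exchange needed."""
    out = []
    for b in bits:
        fb = ((state >> 8) ^ (state >> 3)) & 1
        out.append(b ^ fb)
        state = ((state << 1) | b) & 0x1FF        # shift received bit in
    return out
```

Because the descrambler's register is fed with the received (scrambled) bits, it converges to the scrambler's state automatically, which is why this class of scrambler is called self-synchronizing.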

SDH (Synchronous Digital Hierarchy): SDH is an international standard for synchronous data transmission over fiber optic cables. The North American equivalent of SDH is SONET. The ITU-defined international networking standard whose base transmission level is 155 Mbit/s (STM-1). SDH standards were first published in 1989 to address interworking between the ITU and ANSI transmission hierarchies. 1. Basic SDH Signal The basic format of an SDH signal allows it to carry many different services in its Virtual Container (VC) because it is bandwidth-flexible. This capability allows for such things as the transmission of high-speed packet-switched services, ATM, contribution video, and distribution video. However, SDH still permits transport and networking at the 2 Mbit/s, 34 Mbit/s, and 140 Mbit/s levels, accommodating the existing digital hierarchy signals. In addition, SDH supports the transport of signals based on the 1.5 Mbit/s hierarchy. 2. Transmission Hierarchies Following ANSI's development of the SONET standard, the ITU-T undertook to define a standard that would address interworking between the 2048 kbit/s and 1544 kbit/s transmission hierarchies. That effort culminated in 1989 with ITU-T's publication of the Synchronous Digital Hierarchy (SDH) standards. Tables 1 and 2 compare the Non-synchronous and Synchronous transmission hierarchies.
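The SDH rate hierarchy scales linearly from the STM-1 base rate (155.52 Mbit/s, which the text rounds to 155 Mbit/s); higher levels are exact multiples. A quick sketch:

```python
STM1_MBPS = 155.52  # SDH base rate: the "155 Mbit/s" STM-1 level, exactly

def stm_rate(n):
    """Line rate of an STM-N signal in Mbit/s (N = 1, 4, 16, 64, ...)."""
    return n * STM1_MBPS

for n in (1, 4, 16, 64):
    print(f"STM-{n}: {stm_rate(n):.2f} Mbit/s")
```

This prints the familiar ladder 155.52, 622.08, 2488.32, and 9953.28 Mbit/s.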

ITU-T Reference Materials on SDH:
G.701 Vocabulary of digital transmission and multiplexing and PCM terms
G.702 Digital Hierarchy bit rates
G.703 Physical/electrical characteristics of hierarchical digital interfaces
G.704 Synchronous frame structures used at 1544, 6312, 2048, 8448, and 44736 kbit/s hierarchical levels
G.706 Frame alignment and cyclic redundancy check (CRC) procedures relating to basic frame structures defined in Recommendation G.704
G.707 Network Node Interface for the SDH
G.772 Protected monitoring points provided on digital transmission systems
G.780 Vocabulary of terms for SDH networks and equipment
G.783 Characteristics of SDH equipment functional blocks
G.784 SDH management
G.803 Architecture of transport networks based on the SDH
G.823 Control of jitter and wander in PDH systems
G.825 Control of jitter and wander in SDH systems
G.826 Error performance parameters and objectives for international, constant bit rate digital paths at or above the primary rate
G.827 Availability parameters and objectives for path elements of international constant bit-rate digital paths at or above the primary rate
G.831 Management capabilities of transport network based on SDH
G.841 Types and characteristics of SDH network protection architectures
G.861 Principles and guidelines for the integration of satellite and radio systems in SDH
G.957 Optical interfaces for equipment and systems relating to the SDH
G.958 Digital line systems based on SDH for use on optical fibre cables
I.432 B-ISDN user-network interface Physical layer specification
M.2100 Performance limits for bringing-into-service and maintenance of international digital paths, sections, and transmission systems
M.2101 Performance limits for BIS and maintenance of SDH paths and multiplex sections
O.150 General requirements for instrumentation for performance measurements on digital transmission equipment
O.172 Jitter and wander measuring equipment for digital systems which are based on the Synchronous Digital Hierarchy (SDH)
O.181 Equipment to assess error performance on STM-N interfaces
F.750 (ITU-R) Architectures and functional aspects of radio-relay systems for SDH-based networks
SDDI (Serial Digital Data
Interface): A way of compressing digital video for use on SDI-based equipment proposed by Sony. Now incorporated into Serial digital transport interface. See: Serial digital transport interface. SDI (Serial Digital Interface): The standard based on a 270 Mbps transfer rate. This is a 10-bit, scrambled, polarity independent interface, with common scrambling for both component ITU-R 601 and composite digital video and four channels of (embedded) digital audio. SDLC (Synchronous Data Link Control): IBM developed the Synchronous Data Link Control (SDLC) protocol in the mid-1970s for use in Systems Network Architecture (SNA) environments. SDLC was the first link layer protocol based on synchronous, bit-oriented operation. This entry provides a summary of SDLC's basic operational characteristics and outlines several derivative protocols. The International Organization for Standardization (ISO) modified SDLC to create the High-Level Data Link Control (HDLC) protocol. The ITU Standardization Sector
(ITU-T) subsequently modified HDLC to create Link Access Procedure (LAP) and then Link Access Procedure, Balanced (LAPB). The Institute of Electrical and Electronic Engineers (IEEE) modified HDLC to create IEEE 802.2. Each of these protocols has become important in its domain, but SDLC remains the primary SNA link layer protocol for WAN links. SDTI (Serial Digital Transport Interface): Allows faster-than-realtime transfers between various servers and between acquisition tapes, disk-based editing systems and servers. SDTI-CP: Serial digital transport interface-content package. Sony's way of formatting MPEG IMX (50 Mbps, I frame MPEG-2 streams) for transport on a serial digital transport interface. SDTV (Standard Definition Television): Digital formats that do not achieve the video quality of HDTV, but are at least equal or superior to NTSC, PAL, and SECAM pictures. SDTV may have either 4:3 or 16:9 aspect ratios, and it includes surround sound. SDTV also incorporates stereo sound plus a wide range of data services. It displays picture and sound without noise, ghosts and interference. This equivalent quality may be achieved from pictures sourced at the 4:2:2 level of ITU-R Recommendation 601 and subjected to processing as part of the bit rate compression. The results should be such that when judged across a representative sample of program material, subjective equivalence with NTSC is achieved. Also called standard digital television. Search Engine: Literally, this is the software used to search for information within a given database, and now often used to mean a website with a database of other websites and the information they contain. Since the Web is such a large place, it is handy to go to search engines when you can't find the information for which you are looking. (Google and Teoma are search engines.)
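The 270 Mbps figure in the SDI entries follows directly from ITU-R 601 4:2:2 sampling: 13.5 MHz luminance plus two 6.75 MHz color-difference channels gives 27 million 10-bit words per second. A quick check:

```python
Y_HZ, C_HZ, BITS = 13.5e6, 6.75e6, 10   # ITU-R 601 4:2:2 sampling, 10-bit words

word_rate = Y_HZ + 2 * C_HZ             # multiplexed Y/Cb/Cr word rate: 27 Mwords/s
sdi_rate = word_rate * BITS             # serial bit rate on the wire

print(sdi_rate / 1e6, "Mbit/s")         # 270.0 Mbit/s
```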
SECAM: The television broadcast standard in France, the Middle East, and most of Eastern Europe, SECAM provides for sequential color transmission and storage in the receiver. The signals used to transmit the color are not transmitted simultaneously but sequentially line for line. SECAM processes 625 lines, a maximum of 833 pixels per line and 50 Hz picture frequency. SECAM is used as a transmission standard and not a production standard (PAL is typically used). SEC (Synchronous Equipment Clock): G.813 slave clock contained within an SDH network element. Section (SDH): The span between two SDH network elements capable of accessing, generating, and processing only SDH Section overhead. Section (PSI/SI Table): A syntactic structure used for mapping service information, such as the PSI/SI/PSIP tables and the service information defined in EN 300 468, into ISO/IEC 13818-1 transport packets of 188 bytes. Section Overhead: Nine columns of SDH overhead accessed, generated, and processed by section terminating equipment. This overhead supports functions such as framing the signal and performance monitoring. Section Terminating Equipment (STE): Equipment that terminates the SDH Section layer. STE interprets and modifies or creates the Section Overhead. Secure HyperText Transfer Protocol (HTTPS): A secure means of transferring data using the HTTP protocol. Typically HTTP data is sent over TCP/IP port 80, but HTTPS data is sent over port 443. This standard was developed by Netscape for
secure transactions, and uses 40-bit encryption ("weak" encryption) or 128-bit ("strong" encryption). If you are at a secure site, you will notice that there is a closed lock icon on the bottom area of your Navigator or IE browser. The HTTPS standard supports certificates. A webserver operator must get a digital certificate from a third-party certificate provider that ensures that the webserver in question is valid. This certificate gets installed on the webserver, and verifies for a period of a year that the server is a proper secure server. Secure Sockets Layer (SSL): A protocol specified by Netscape that allows for "secure" passage of data. It uses public key encryption, including digital certificates and digital signatures, to pass data between a browser and a server. It is an open standard and is supported by most modern browsers. Security Management: One of five categories of network management defined by ISO for management of OSI networks. Security management subsystems are responsible for controlling access to network resources. See also accounting management, configuration management, fault management, and performance management. Self-Phase Modulation (SPM): A fiber nonlinearity caused by the nonlinear index of refraction of glass. The index of refraction varies with optical power level, causing a frequency chirp which interacts with the fiber's dispersion to broaden the pulse.
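The certificate checking that HTTPS and SSL rely on, described above, is the default in modern TLS libraries. A minimal sketch with Python's standard `ssl` module (shown only to illustrate that verification is on by default; no connection is made):

```python
import ssl

# A default client-side context verifies the server's certificate chain
# against trusted CAs and checks the hostname -- the machinery behind
# the browser's closed-lock icon.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate is required
print(ctx.check_hostname)                    # hostname must match the cert
```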

Self-Monitoring Analysis and Reporting Tool (SMART): This technology reports on a variety of hard drive attributes. You need a compliant BIOS and SCSI and/or IDE controller, a hard drive that supports SMART, and some sort of software package that reports on these conditions. Once you have that you should be able to receive system warnings about your hard drive. Many hard drive manufacturers have added onto the SMART technology or changed it around so that it has proprietary features for their drives. The good news about SMART is that having SMART is much better than not having it, and you can be warned of hard drive failure before it happens and back up your drive while it still works. Thus, your data is safer with SMART around. Semiconductor: A substance (usually silicon doped with impurities such as boron, phosphorus, or arsenic) that selectively conducts electricity. The selection usually occurs by running another current through a different axis to turn current flow on or off. Microprocessors are made of semiconductive materials. Serial: A means of operation meaning in series, or one after the other. It refers to connection standards that use a single wire. See also parallel.
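Serial operation, one bit after the other on a single path, is what a serializer (defined later in this section) implements in hardware. A software sketch, with LSB-first bit order chosen arbitrarily for illustration:

```python
def serialize(byte, lsb_first=True):
    """Convert one parallel 8-bit word into a list of serial bits."""
    order = range(8) if lsb_first else range(7, -1, -1)
    return [(byte >> i) & 1 for i in order]

def deserialize(bits, lsb_first=True):
    """Reassemble serial bits back into a parallel 8-bit word."""
    order = range(8) if lsb_first else range(7, -1, -1)
    return sum(b << i for b, i in zip(bits, order))

# Round trip: parallel -> serial -> parallel recovers the word.
assert deserialize(serialize(0xA5)) == 0xA5
```

Real serial links add framing, line coding, and an agreed bit order on top of this; the round trip above is only the core parallel-to-serial idea.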

Sequence: A coded video sequence that commences with a sequence header and is followed by one or more groups of pictures and is ended by a sequence end code. Serial: One bit at a time, along a single transmission path. Serial digital: Digital information that is transmitted in serial form. Often used informally to refer to serial digital television signals. Serial digital data interface (SDDI): A way of compressing digital video for use on SDI-based equipment proposed by Sony. Now incorporated into Serial digital transport interface. See: Serial digital transport interface. Serial digital interface (SDI): The standard based on a 270 Mbps transfer rate. This is a 10-bit, scrambled, polarity independent interface, with common scrambling for both component ITU-R 601 and composite digital video and four channels of (embedded) digital audio. Most new broadcast digital equipment includes SDI, which greatly simplifies its installation and signal distribution. It uses the standard 75 ohm BNC connector and coax cable as is commonly used for analog video, and can transmit the signal over 600 feet (200 meters) depending on cable type. Serial digital transport interface (SDTI): SMPTE 305M. Allows faster-than-realtime transfers between various servers and between acquisition tapes, disk-based editing systems and servers, with both 270 Mbit/s and 360 Mbit/s supported. With typical realtime compressed video transfer rates in the 18 Mbps to 50 Mbps range, SDTI's 200+ Mbps payload can accommodate transfers up to four times normal speed. The SMPTE 305M standard describes the assembly and disassembly of a stream of 10-bit data words that conform to SDI rules. Payload data words can be up to 9 bits. The 10th bit is a complement of the 9th to prevent illegal SDI values from occurring. The basic payload is inserted between SAV and EAV, although an appendix permits additional data in the SDI ancillary data space as well.
A header immediately after EAV provides a series of flags and data IDs to indicate what's coming as well as line counts and CRCs to check data continuity. Serial interface: A digital communications interface in which data is transmitted and received sequentially along a single wire or pair of wires. Common serial interface standards are RS-232 and RS-422. Serializer: A device that converts parallel digital information to serial. Serial Port: A data port/connection standard that is usually used to connect modems and mice. It comes in 9- and 25-pin varieties, both of which effectively function the same way. Serial ports are largely being eliminated in favor of much faster USB and FireWire connection standards. Serial Storage Architecture (SSA): A high speed data interface developed by IBM and used to connect numbers of storage devices (disks) with systems. Three technology generations are planned: 20 Mbps and 40 Mbps are now available, and 100 Mbps is expected to follow. Serial video processing: A video mixing architecture where a series of video multipliers, each combining two video signals, is cascaded or arranged in a serial
fashion. The output of one multiplier feeds the input of the next, and so on, permitting effects to be built up, one on top of the other. Server (file): A storage system that provides data files to all connected users of a local network. Typically the file server is a computer with large disk storage which is able to record or send files as requested by the other connected (client) computers--the file server often appearing as another disk on their systems. The data files are typically around a few kilobytes in size and are expected to be delivered within moments of request. Server (video): A storage system that provides audio and video storage for a network of clients. While there are some analog systems based on optical disks, most used in professional and broadcast applications are based on digital disk storage. Aside from those used for video on demand (VOD), video servers are applied in three areas of television operation: transmission, post production and news. Compared to general purpose file servers, video servers must handle far more data, files are larger and must be continuously delivered. There is no general specification for video servers and so the performance between models varies greatly according to storage capacity, number of channels, compression ratio and degree of access to stored material--the latter having a profound influence. Store sizes are very large, typically up to 500 Gigabytes or more. Operation depends entirely on connected devices, edit suites, automation systems, secondary servers, etc., so the effectiveness of the necessary remote control and video networking is vital to success. Service: A collection of one or more events under the control of a single broadcaster, or sequence of programmes under the control of a broadcaster which can be broadcast as part of a schedule. Service Description Table (SDT): The Service Description Table (SDT) defines the services available on the network and gives the name of the service provider.
A service is a sequence of events that can be broadcast as part of a schedule. Two types of SDT, the SDT Actual and the SDT Other, are required by DVB. The SDT Actual describes the services available on the transport stream currently being accessed by the viewer, while the SDT Other describes services available on all other transport streams in the network.
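The storage arithmetic behind the Server (video) entry above is straightforward: capacity times eight bits per byte, divided by the video bit rate. A sketch using the 500 GB store size and the 18 to 50 Mbps compressed rates cited in this section (decimal units assumed):

```python
def hours_of_storage(capacity_gb, video_mbps):
    """Rough playing time of a video server store (decimal GB and Mbit/s)."""
    bits = capacity_gb * 1e9 * 8          # total capacity in bits
    seconds = bits / (video_mbps * 1e6)   # continuous playout time
    return seconds / 3600

for rate in (18, 25, 50):
    print(f"{rate} Mbit/s -> {hours_of_storage(500, rate):.1f} hours")
```

At 25 Mbit/s a 500 GB store holds roughly 44 hours of material, which is why compression ratio dominates a server's effective capacity.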

Service_id: Unique identifier of a service within a TS. Service Description Channel (SDC): Channel of the multiplex data stream which
gives information to decode the services included in the multiplex. Service Information (SI for DVB-T and ISDB-T): SI is digital data describing the delivery system, content and scheduling/timing of broadcast data streams, etc. ISO/IEC 13818-1 [1] specifies SI which is referred to as PSI. The PSI data provides information to enable automatic configuration of the receiver to demultiplex and decode the various streams of programs within the multiplex. Note that the ATSC specified the PSIP (Program and System Information Protocol) tables (see PSIP) but still uses the PSI. PSI defines four tables: PAT (Program Association Table), PMT (Program Map Table), NIT (Network Information Table), and CAT (Conditional Access Table). In addition to the PSI, data is needed to provide identification of services and events for the user. The coding of this data is defined in EN 300 468. In contrast with the PAT, CAT, and PMT of the PSI, which give information only for the multiplex in which they are contained (the actual multiplex), this additional information can also provide information on services and events carried by different multiplexes, and even on other networks. This data is structured as nine tables for the DVB-T and ISDB-T systems: 1) Bouquet Association Table (BAT): The BAT provides information regarding bouquets. As well as giving the name of the bouquet, it provides a list of services for each bouquet. A bouquet is a collection of services, which may traverse the boundary of a network. The BAT shall be segmented into bouquet_association_sections using the syntax defined. Any sections forming part of a BAT shall be transmitted in TS packets with a PID value of 0x0011. The sections of a BAT sub_table describing a particular bouquet shall have the bouquet_id field taking the value assigned to the bouquet described in ETR 162 [6]. All BAT sections shall take a table_id value of 0x4A.
2) Service Description Table (SDT): The SDT contains data describing the services in the system, e.g. names of services, the service provider, etc. 3) Event Information Table (EIT): The EIT contains data concerning events or programmes, such as event name, start time, duration, etc.; the use of different descriptors allows the transmission of different kinds of event information, e.g. for different service types. 4) Running Status Table (RST): The RST gives the status of an event (running/not running). The RST updates this information and allows timely automatic switching to events. 5) Time and Date Table (TDT): The TDT gives information relating to the present time and date. This information is given in a separate table due to the frequent updating of this information. 6) Time Offset Table (TOT): The TOT gives information relating to the present time and date and the local time offset. This information is given in a separate table due to the frequent updating of the time information. 7) Stuffing Table (ST): The ST is used to invalidate existing sections, for example at delivery system boundaries.
8) Selection Information Table (SIT): The SIT is used only in "partial" (i.e. recorded) bitstreams. It carries a summary of the SI information required to describe the streams in the partial bitstream. 9) Discontinuity Information Table (DIT): The DIT is used only in "partial" (i.e. recorded) bitstreams. It is inserted where the SI information in the partial bitstream may be discontinuous. Where applicable, the use of descriptors allows a flexible approach to the organization of the tables and allows for future compatible extensions.
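All of the SI sections above travel in the 188-byte ISO/IEC 13818-1 transport packets mentioned under Section (PSI/SI Table), identified by PID (the BAT, for instance, on PID 0x0011). Extracting the fields a demultiplexer needs from the 4-byte packet header can be sketched as follows; the synthetic packet is illustrative only:

```python
def parse_ts_header(packet):
    """Parse the 4-byte header of an MPEG-2 transport stream packet."""
    if len(packet) != 188 or packet[0] != 0x47:   # 0x47 is the TS sync byte
        raise ValueError("not a valid 188-byte TS packet")
    return {
        "payload_unit_start": bool(packet[1] & 0x40),       # new section/PES starts here
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],       # 13-bit packet identifier
        "continuity_counter": packet[3] & 0x0F,             # detects lost packets
    }

# A synthetic packet on PID 0x0011 (the PID DVB assigns to SDT/BAT sections):
pkt = bytes([0x47, 0x40, 0x11, 0x10]) + bytes(184)
print(parse_ts_header(pkt)["pid"] == 0x0011)  # True
```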

Session Layer: Layer 5 of the OSI reference model. This layer establishes, manages, and terminates sessions between applications and manages data exchange between presentation layer entities. Corresponds to the data flow control layer of the SNA model.

Set-top box (STB): These receivers (named because they typically sit on top of a television set) convert broadcasts from one frequency or type (analog cable, digital cable, or digital television) to a standard frequency (typically channel 3 or 4) for display on a standard analog television set. Set-top Converter Box: This unit sits on top of the viewer's analog TV, receives the Digital TV signal, converts it to an analog signal, and then sends that signal on to the analog TV. SGML (Standard Generalized Markup Language): The international standard (ISO 8879:1985) for defining descriptions of the structure of different types of electronic document. SGML is very large, powerful, and complex. It has been in heavy industrial and commercial use for over a decade, and there is a significant body of expertise and software to go with it. XML is a lightweight cut-down version of SGML. There is an SGML FAQ at http://lamp.infosys.deakin.edu.au/sgml/sgmlfaq.txt which is posted every month to the comp.text.sgml newsgroup, and the SGML Web pages are at http://xml.coverpages.org/. SFN (Single Frequency Network): An SFN (as opposed to a Multi-Frequency Network, MFN) refers to a number of transmitters and/or repeaters or translators, all operating on the same spectrum channel to extend reception within a service area. This is not possible with analog transmissions such as PAL, where reception may be severely impaired if signals from different transmission points are simultaneously received but offset by small time differences, resulting in severe ghosting. But it is possible with some types of digital transmission such as DVB-T's COFDM. In DVB a time reference packet (PID 0x15) is included which is referenced at each transmission point, where they are synchronised by a GPS time reference. If a receiver receives signals that are time offset outside the transmission's guard interval then the reception reliability falls.
This area of interference (aka mush zone) between transmissions can be controlled by two means: the COFDM guard interval length (i.e. the period when the receiver refrains from sampling the signal after a symbol change), where a larger guard interval allows greater distance between transmission points (some Australian broadcasters use 1/8th (128 µs on 8k) and others use 1/16th (64 µs)); and whether a time offset is introduced at a transmission point, which can be used to push the interference into a low-population area. For example, if the signals from 2 DVB-T SFN transmitters are transmitted in sync, reception locations outside the area enclosed by the 2 transmitters might encounter problems as they would receive from the nearest transmitter with a very short propagation delay (i.e. distance/c seconds), but if the distant transmitter is received at significant level, it will have a significant delay, probably greater than the guard interval, and dubious reception will result. Some SFNs around Sydney use about a 9 µs offset. Shading (rendering): This synonym for hacking up animals for human consumption determines how colors are used on each triangle displayed in a 3D image. Very complex images take a long time to render; less complex 3D images, such as those in a 3D game, can be rendered in real time. When they make computer-animated movies or effects of very high quality, effects companies use many computers to render images. Shadow Mask: A thin sheet of metal with small holes poked through it. It is used to
focus the light from the electron beam on most CRT monitors. See also Slot Mask. Shareware: Software that can be installed and distributed freely. Some shareware is free but requires fees to be paid to the author before all features are available. Other shareware is full-featured, but "nags" you to pay the fee with extra screens that must be bypassed. Most shareware requires you to pay for it within 30 days of installation, or you must uninstall it from your system. See also freeware. Shell: This most commonly refers to the various text-based user interface programs available for UNIX or Linux. The shell is the part of the OS that interacts with the user and accepts typed commands. Different shells have different functionality, so it is important to have the proper shell loaded or you may find yourself lost as things are displayed differently and familiar commands are not supported. SHF (Super High Frequency): A signal in the frequency range of 3 to 30 GHz. Short Message Service (SMS): A method of sending text messages that are 160 characters in length or shorter over a mobile phone. More and more mobile phones are supporting the sending and receiving of SMS messages. Shortcut: A pointer to an actual program or file, as opposed to a full copy of that file. Shortcuts can also point to other shortcuts, and are used mainly because they take up less space. For example, in Windows you can place a shortcut to a 200 KB file on the desktop which will only take up 1 KB, as opposed to copying the entire file to the desktop, which would take up the full 200 KB. Another benefit is that if you need to update a file that has several shortcuts, you don't have to update several copies of it, just one. Signal-to-Interference (S/I): Ratio of desired signal power to the power of an additive disturbance caused by an undesired signal that falls within the passband of the desired signal.
In this definition, the impairing signal (when defined as interference) is narrowband relative to the data signal. Silicon: Not to be confused with "silicone," silicon is the main component of computer chips. It is an element commonly associated with glass. It is called silica when bonded with oxygen. Sand and quartz are forms of silica. Silicon on Insulator (SOI): The practice of placing a thin layer of silicon on top of an insulating material in order to speed up the performance of a microprocessor by reducing the capacitance of the transistors and making them operate faster. Side converting: The process that changes the number of pixels and/or frame rate and/or scanning format used to represent an image by interpolating existing pixels to create new ones at closer spacing or by removing pixels. Side converting is done from standard resolution to standard resolution and high definition to high definition. See also: Down converting, up converting. Side Lobe: In an antenna, a radiation lobe in any direction other than that of the major lobe. Side Panels: Image of a standard 4:3 picture on a widescreen 16:9 aspect ratio television screen, typically with black bars on the side. Used to maintain the original aspect ratio of the source material.
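The pixel interpolation that side converting (and scaling, defined earlier) relies on can be sketched for a single scanline. Linear interpolation is only one choice; broadcast converters typically use higher-order filters, so treat this as a minimal illustration:

```python
def resample_line(line, new_len):
    """Linearly interpolate a scanline of samples to a new pixel count."""
    if new_len == 1:
        return [float(line[0])]
    out = []
    scale = (len(line) - 1) / (new_len - 1)   # source step per output pixel
    for i in range(new_len):
        pos = i * scale                        # fractional source position
        j = min(int(pos), len(line) - 2)       # left neighbour index
        frac = pos - j                         # weight of right neighbour
        out.append(line[j] * (1 - frac) + line[j + 1] * frac)
    return out

print(resample_line([0, 100], 5))  # [0.0, 25.0, 50.0, 75.0, 100.0]
```

Downconverting simply calls the same routine with a smaller `new_len`, discarding detail instead of inventing it.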

SIGGRAPH: The Association of Computing Machinery (ACM)'s Special Interest Group on Computer Graphics (SIGGRAPH). Internet: www.siggraph.org. Signal-to-Noise Ratio (SNR): Fractional relationship between the power of the desired signal to the power of the noise signal. Typically refers to the additive thermal noise impairment. Signal-to-noise-and-distortion (SINAD): Ratio of desired signal power to composite impairment energy, which consists of thermal noise and intermodulation distortion components. Signal-to-quantization noise ratio (SQNR): Fractional relationship between the power of the desired signal to the power of the noise signal, where (in this case) the noise component refers specifically to the artifacts of non-ideal encoding by an A/D converter. Signaling Rate: The bandwidth of a digital transmission system expressed in terms of the maximum number of bits that can be transported over a given period of time. The signaling rate is typically much higher than the average data transfer rate for the system due to software overhead for network control, packet overhead, etc. Signal-to-noise ratio is the magnitude of the signal divided by the amount of unwanted stuff that is interfering with the signal (the noise). SNR is usually described in decibels, or "dB", for short; the bigger the number, the better looking the picture. SIF (Standard Interchange Format): Format for exchanging video images of 240 lines with 352 pixels each for NTSC, and 288 lines by 352 pixels for PAL and SECAM. At the nominal field rates of 60 and 50 fields/s, the two formats have the same data rate. Simple Profile: MPEG image streams using only I and P frames; such coding is less efficient than coding with B frames, but this profile requires less buffer memory for decoding. SIMM (Single In-Line Memory Module): A small memory card used with Fast Page Mode DRAM and EDO DRAM standards. SIMMs are 8-bit memory modules that had to be added in groups of four for processors with 32-bit memory interfaces.
SIMMs were replaced by DIMMs. Stimulated Brillouin Scattering (SBS): The easiest fiber nonlinearity to trigger. When a powerful light wave travels through a fiber it interacts with acoustical vibration modes in the glass. This causes a scattering mechanism to be formed that reflects much of the light back to the source.
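The decibel expression of SNR mentioned in the entries above is ten times the base-10 logarithm of the power ratio. A quick sketch:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels, for powers in the same units."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(1000, 1))  # 30.0 dB: signal power 1000x the noise power
print(snr_db(2, 1))     # ~3.01 dB: doubling the power adds about 3 dB
```

The same formula applies to S/I, SINAD, and SQNR; only the choice of what counts as the impairment in the denominator changes.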

Simulcast: To broadcast the same program over two different transmission systems. Currently, some AM and FM stations simulcast the same program for part of the day, and some radio stations simulcast the audio from televised music events. Although not initially required by the FCC, it is believed that most television stations will simulcast their DTV and NTSC signal. Simulcasting will be required towards the end of the DTV transition period to protect the public interest. Simulcrypt: A process that uses several Conditional Access systems in parallel to control access to pay-TV. Simple Network Management Protocol (SNMP): A protocol used to manage network devices, usually hubs and routers. It operates over UDP, which is part of TCP/IP. Devices that support SNMP send messages to a management console. The management console then stores these messages in the MIB (Management Information Base). SNMP is often used to get traffic reports on hubs and check network utilization on various devices. Slave: This usually refers to an IDE setting on a hard drive or other IDE device. When two devices are used on a single IDE channel, one is set to master and the other to slave. See also master. Sleep mode: The placement of a computing device into an inoperable mode, where less power is consumed by shutting down unnecessary devices, but leaving all data in RAM. Typically you return from sleep mode by using the keyboard or mouse, and devices are switched back on. Sleep mode in its early incarnations was very problematic in some PCs, and would often crash programs and operating systems that were not completely compatible with the sleep mode in the PC's BIOS. Slice: A slice is a series of an arbitrary number of consecutive macroblocks. The first and last macroblocks of a slice shall not be skipped macroblocks. Every slice shall contain at least one macroblock. Slices shall not overlap. The position of slices may change from picture to picture. 
The first and last macroblock of a slice shall be in the same horizontal row of macroblocks. Slices shall occur in the bitstream in the order in which they are encountered, starting at the upper-left of the picture and proceeding by raster-scan order from left to right and top to bottom. A slice is the basic synchronizing unit for reconstruction of the image data and typically consists of all the blocks in one horizontal picture interval--typically 16 lines of the picture. Slip: An overflow (deletion) or underflow (repetition) of one frame of a signal in a receiving buffer. SLIP (Serial Line Internet Protocol): A protocol used to connect your computer to the Internet using a serial connection, such as over a dial-up modem. SLIP isn't used often anymore, as PPP has become a much more popular method of providing dial-up connections. Slocket: A circuit board that most commonly accepts a Socket 370 and/or FC-PGA microprocessor, and in turn plugs into a Slot 1 motherboard connection. Thus, socketed microprocessors can be made to work on motherboards with only slot connections.

Slocket2: Similar to the Slocket1, or just plain Slocket, this is an adapter so that Socket 370/PPGA and also FC-PGA processors can fit into Slot 1 motherboards. Slocket2 adapters add the ability for FC-PGA chips to work in Slot 1 motherboards. Intel made changes to the Socket 370 spec to make the FC-PGA spec, so that normal (non-Slocket2) Slocket adapters do not work with FC-PGA chips. Slot 1: A cartridge slot found on motherboards that accepts an SECC or SECC2 cartridge. It works with Intel's Pentium II and III chips, and some Celerons were also shipped that use Slot 1. Most Celerons used Socket 370 instead. Slot 2: An Intel-designed specification that accepts a slot 2 cartridge. Intel ships its Xeon family of processors in a slot 2 cartridge. Slot 2 motherboards always accept at least two slot 2 cartridges. If you use only one processor in a slot 2 motherboard, you must put a termination card in the second slot. Slot A: This slot developed by AMD is mechanically compatible with Intel's Slot 1, but not electrically compatible. This slot uses the Alpha chip's EV-6 memory bus, capable of transferring data at speeds at and over 200 MHz. Slot B: This slot developed by AMD is similar to Intel's Slot 2. It is mechanically compatible, but not electrically compatible. Just as Intel uses Slot 2 for its higher-end Xeon processors, AMD designed Slot B for K7 processors with high-speed L2 caches for use in servers and workstations. Slot Mask: This form of mask is similar to a shadow mask, but instead of a sheet of metal with holes poked into it, it is a series of fine, vertically-aligned metal wires. Smalltalk: An early object-oriented programming language developed in 1972 by the Software Concepts Group (led by Alan Kay) for the Xerox PARC project. It added onto the Simula-67 programming language and is still in use today by some. SMART (Self-Monitoring Analysis and Reporting Tool): This technology reports on a variety of hard drive attributes.
You need a compliant BIOS and SCSI and/or IDE controller, a hard drive that supports SMART, and some sort of software package that reports on these conditions. Once you have that you should be able to receive system warnings about your hard drive. Many hard drive manufacturers have added onto the SMART technology or changed it around so that it has proprietary features for their drives. The good news is that having SMART is much better than not having it: you can be warned of hard drive failure before it happens and back up your drive while it still works. Thus, your data is safer with SMART around. SmartMedia: A type of Flash memory card that is roughly one-third the size of a PC Card (PCMCIA card) and less than 1 mm thick. It is used to store data from and exchange data among PDAs, digital cameras, and other devices. SmartMedia has largely given way to the even smaller MultiMediaCard (MMC) standard. SMIL: Synchronized Multimedia Integration Language 2.0 (SMIL, pronounced "smile") puts animation on a time line, allows composition of multiple animations, and describes animation elements for any XML-based host language.

SMPTE (Society of Motion Picture and Television Engineers): A professional organization that sets standards for American television. 595 W. Hartsdale Ave., White Plains, NY, 10607-1824. Tel: 914-761-1100. Fax: 914-761-3115. Email: smpte@smpte.org Internet: www.smpte.org. 2. An informal name for a color difference video format that uses a variation of the Y, R-Y, and B-Y signal set. SMPTE 125M-1995: The SMPTE standard for a bit-parallel digital interface for 525-line interlace component video signals. SMPTE 125M defines the parameters required to generate and distribute component video signals on a parallel interface. SMPTE 244M-2003: The SMPTE standard for a bit-parallel digital interface for composite video signals. Television System M/NTSC Composite Video Signals Bit-Parallel Digital Interface. SMPTE 259M-1997: The SMPTE standard for standard definition serial digital component and composite interfaces. Television 10-Bit 4:2:2 Component and 4fsc Composite Digital Signals Serial Digital Interface. SMPTE 272M-2004: The SMPTE standard for formatting AES/EBU audio and auxiliary data into digital video ancillary data space. SMPTE 292M-1998: The SMPTE standard for a bit-serial digital interface for high-definition television systems. SMPTE 293M-2003: The SMPTE standard defining the data representation of the 720x483 progressive signal at 59.94 Hz. Television 720 x 483 Active Line at 59.94 Hz Progressive Scan Production Digital Representation. SMPTE 294M-2001: The SMPTE standard defining the serial interfaces for both 4:2:2P (progressive) on two SMPTE 259M links and 4:2:0P (progressive) on a single SMPTE 259M link (at 360 Mb/s). Television 720 x 483 Active Line at 59.94-Hz Progressive Scan Production Bit-Serial Interfaces. SMPTE 299M-2004: The SMPTE standard for 24-bit digital audio format for HDTV bit-serial interface. Allows eight embedded AES/EBU audio channel pairs. SMPTE 305M.2M-2000: The SMPTE standard for Serial Digital Transport Interface (SDTI).
SMPTE 310M-2004: The SMPTE standard for synchronous serial interface (SSI) for MPEG-2 digital transport streams; used as the "standard" for the output from the ATSC systems multiplexer and the input to DTV transmitters. SNA (Systems Network Architecture): Large, complex, feature-rich network architecture developed in the 1970s by IBM. Similar in some respects to the OSI reference model, but with a number of differences. SNA is essentially composed of seven layers. See data flow control layer, data-link control layer, path control layer, physical control layer, presentation services layer, and transport layer. SNMP (Simple Network Management Protocol): Network management protocol used almost exclusively in TCP/IP networks. SNMP provides a means to monitor and control network devices, and to manage configurations, statistics collection, performance, and security.

SNMP2 (SNMP Version 2): Version 2 of the popular network management protocol. SNMP2 supports centralized as well as distributed network management strategies, and includes improvements in the SMI, protocol operations, management architecture, and security. Soft RAID: A RAID system implemented by low-level software in the host system instead of a dedicated RAID controller. While saving on hardware, operation consumes some of the host's processing power. See also: RAID. Software: The programs that run on computer hardware. This can include operating systems, office suites, games, Web browsers, and more. Software runs on hardware. Software Developer's Kit (SDK): A programming tool that is tailored towards a particular purpose. The kit includes a compiler, linker, and an editor. Most hardware manufacturers have SDKs available that work specifically with their hardware. For example, if you were writing a program to work with a SoundBlaster card, you'd want to get the SoundBlaster SDK from Creative Labs. Software License: Most corporations need multiple copies of software, but do not need the media on which it comes, either because they already have it or because they allow users to install software from a server on the network. Companies still need to purchase a copy for each user, however, so they need a way to prove they have actually purchased each copy. These companies purchase software licenses with no associated media. Such licenses are typically just sheets of paper that cost a lot of money, but allow you to legally use additional copies of the software. SOHO (Small Office/Home Office): A class of equipment purchasers characterized by their requirements for low-cost but functional computers, fax machines, printers, and other office equipment. Most computer makers target this market as a separate market segment between that of standard consumers and small businesses.
Solaris: A UNIX-based operating system developed by Sun Microsystems and used widely for enterprise-class servers. It is designed to work with Sun's own SPARC chips as well as Intel's x86 microprocessors. SONET (Synchronous Optical Network): A standard for connecting fiber-optic
transmission systems. SONET was proposed by Bellcore in the middle 1980s and is now an ANSI standard. SONET defines interface standards at the physical layer of the OSI
seven-layer model. SONET defines a technology for carrying many signals of different capacities through a synchronous, flexible, optical hierarchy. This is accomplished by means of a byte-interleaved multiplexing scheme. Byte-interleaving simplifies multiplexing, and offers end-to-end network management. The first step in the SONET multiplexing process involves the generation of the lowest level or base signal. In SONET, this base signal is referred to as Synchronous Transport Signal level-1, or simply STS-1, which operates at 51.84 Mb/s. Higher-level signals are integer multiples of STS-1, creating the family of STS-N signals in Table 1. An STS-N signal is composed of N byte-interleaved STS-1 signals. This table also includes the optical counterpart for each STS-N signal, designated OC-N (Optical Carrier level-N). Synchronous and Non-synchronous line rates and the relationships between each are shown in Tables 1 and 2. The international equivalent of SONET, standardized by the ITU, is called SDH.
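The STS-N rate arithmetic in this entry can be sketched in a few lines of Python. This is a minimal illustration (the referenced Tables 1 and 2 are not reproduced here); the N values shown are the common levels of the SONET hierarchy.

```python
# Sketch: SONET line rates are integer multiples of the STS-1 base
# rate of 51.84 Mb/s; OC-N is the optical counterpart of STS-N.
STS1_RATE_MBPS = 51.84

def sts_rate_mbps(n: int) -> float:
    """Line rate of an STS-N (or OC-N) signal in Mb/s."""
    return n * STS1_RATE_MBPS

# Common hierarchy levels:
for n in (1, 3, 12, 48, 192):
    print(f"STS-{n} / OC-{n}: {sts_rate_mbps(n):.2f} Mb/s")
```

Note that OC-3 works out to 155.52 Mb/s, the same rate as SDH's STM-1, which is how the two hierarchies line up.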
SONET Reference Materials: Bellcore GR-253-CORE, SONET Transport Systems: Common Generic Criteria. Consult this document for an up-to-date listing of: Generic Requirements (GR), Technical References (TR), Technical Advisories (TA), Special Reports (SR), EIA/TIA documents, American National Standards Institute (ANSI) documents, ITU-T and CCITT Recommendations, ISO documents, and IEEE documents. Source (Source Code): The uncompiled code of a computer program. Before compiling you can look at the instructions and tell what the program does--if you are familiar with the programming language. After compiling, the source code is transformed into machine code that is much more low-level and hard to follow, making optimizations and fixes almost impossible. Thus, some developers package the source code with their software so that if people want to improve it or fix it--or just see how it works--they can do that. Source Stream: A single non-multiplexed stream of samples before compression coding. Sound Pressure Level (SPL): SPL or sound level Lp is a logarithmic measure of the energy of a particular noise relative to a reference noise source.
Lp = 20 log10 (p1 / p0) dB

where p0 is the reference sound pressure and p1 is the sound pressure being measured.
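As an illustration, the formula can be evaluated in Python. The default reference pressure of 20 micropascals is an assumption on my part (it is the usual reference for sound in air); the entry itself does not fix a particular p0.

```python
import math

def spl_db(p1: float, p0: float = 20e-6) -> float:
    """Sound pressure level Lp = 20*log10(p1/p0) in dB.
    p0 defaults to 20 micropascals, the usual reference in air."""
    return 20 * math.log10(p1 / p0)

# A sound pressure of 0.2 Pa is 10,000 times the reference, i.e. 80 dB:
print(spl_db(0.2))
# Doubling the sound pressure adds about 6 dB:
print(spl_db(2 * 20e-6))
```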
It can be useful to express sound pressure in this way when dealing with hearing, as the perceived loudness of a sound correlates roughly logarithmically to its sound pressure. Spatial Encoding: The process of compressing a video signal by eliminating redundancy between adjacent pixels in a frame. Spatial Redundancy Reduction: Both source pictures and prediction errors have high spatial redundancy. This Specification uses a block-based DCT method with visually weighted quantisation and run-length coding. After motion compensated prediction or interpolation, the resulting prediction error is split into 8 by 8 blocks. These are transformed into the DCT domain where they are weighted before being quantised. After quantisation many of the DCT coefficients are zero in value and so two-dimensional run-length and variable length coding is used to encode the remaining DCT coefficients efficiently. Spatial resolution: The number of pixels horizontally and vertically in a digital image. Spectrum (electromagnetic spectrum): A means of charting the order of radio and television signals in terms of their physical wavelength; a continuous range of frequencies in the earth's atmosphere. Called "the invisible highway", spectrum is the collection of radio frequencies used by analog television broadcasters. Other users of spectrum include police and fire department radios, air traffic control, satellite transmissions, microwave ovens, cell phones and baby monitors. Intense overcrowding and competition for spectrum sparked a 10-year effort involving broadcasters, government agencies, scientists and engineers to reinvent television broadcast technology. This effort has brought us digital television, which will not only allow a more efficient use of available spectrum, but offers benefits that will revolutionize the way we watch and enjoy TV. Spectral Band Replication (SBR): SBR is a technology to enhance audio or speech codecs, especially at low bit rates.
It can be combined with any audio compression codec: the codec transmits the lower frequencies of the spectrum and SBR creates associated higher frequency content based on the lower frequencies and transmitted side information. When applicable, it involves reconstruction of a noise-like frequency spectrum by employing a noise generator with some statistical information (level, distribution, ranges), so the decoding result is not deterministic among multiple decoding processes of the same encoded data. Both ideas are based on the principle that the human brain tends to consider high frequencies to be either harmonic phenomena associated with lower frequencies or noise, and is thus less sensitive to the exact content of high frequencies in audio signals. It has been combined with AAC to create MPEG-4 High Efficiency AAC (HE AAC), with MP3 to create Mp3PRO and with MPEG-2. It is used in broadcast systems like Digital Radio Mondiale and XM Satellite Radio. Splicing: The concatenation, performed on the system level, of two different elementary streams. The resulting system stream conforms totally to ITU-T Rec. H.222.0 | ISO/IEC 13818-1. The splice may result in discontinuities in timebase, continuity counter, PSI, and decoding.
Spread Spectrum: The term spread spectrum defines a class of digital radio systems in which the occupied bandwidth is considerably greater than the information rate. The technique was initially proposed for military use, where the difficulties of detecting or jamming such a signal made it an attractive choice for covert communication. The term code-division multiple access (CDMA) is often used in reference to spread spectrum systems and refers to the possibility of transmitting several such signals in the same portion of spectrum by using pseudorandom codes for each one. This can be achieved by either frequency hopping (a series of pulses of carrier at different frequencies, in a predetermined pattern) or direct sequence (a pseudorandom modulating binary waveform whose symbol rate is a large multiple of the bit rate of the original bitstream) spread spectrum, and provides an alternative to frequency-division multiple access (FDMA) or time-division multiple access (TDMA) methods. Sprites: In MPEG-4, static background scenes. Sprites can have dimensions much larger than what will be seen in any single frame. A coordinate system is provided to position objects in relation to each other and the sprites. MPEG-4's scene description capabilities are built on concepts used previously by the Internet community's Virtual Reality Modeling Language (VRML). Spurious Free Dynamic Range (SFDR): Specification, common in A/D conversion systems, describing the number of dB between the desired signal and any undesired discrete spectral lines in the output spectrum. Can serve as an artificial limit to detection sensitivity. SQL (Structured Query Language): This is a means of managing data in a relational database. There is a SQL standard, and there are also many vendor-specific SQL packages which combine relational databases with SQL tools to manage them.
Statements in SQL can be used to read or request data from a database, such as "select * from geek", which would return an entire table of data from the table named "geek". Queries can also be much more complex, such as "select * from geek where name='sam'", which would return records from the database where the field "name" was set to "sam". SQL statements can also be used to delete and update data. SRAM: Static RAM. This type of memory chip in general behaves like dynamic RAM (DRAM) except that static RAMs retain data in a six-transistor cell needing only power to operate (DRAMs also require periodic refresh). Because of this, current available capacity is 4 Mbits--lower than DRAM--and costs are higher, but speed is also greater. Stimulated Raman Scattering (SRS): A fiber nonlinearity similar to SBS but having a much higher threshold. This mechanism can also cause power to be robbed from shorter wavelength signals and provide gain to longer wavelength signals.
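The example queries in the SQL entry can be tried directly with Python's built-in sqlite3 module. The "geek" table and "name" field follow the entry's own example; the extra "role" column and the sample rows are illustrative only.

```python
import sqlite3

# In-memory database reproducing the glossary's "geek" table example.
con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE geek (name TEXT, role TEXT)")
cur.executemany("INSERT INTO geek VALUES (?, ?)",
                [("sam", "dba"), ("pat", "dev")])

# "select * from geek" returns the entire table:
all_rows = cur.execute("SELECT * FROM geek").fetchall()

# "...where name='sam'" returns only the matching records:
sam_rows = cur.execute("SELECT * FROM geek WHERE name='sam'").fetchall()

print(all_rows)   # [('sam', 'dba'), ('pat', 'dev')]
print(sam_rows)   # [('sam', 'dba')]
con.close()
```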

SSA: See: Serial Storage Architecture. SSM (Synchronisation Status Message): Bits 5 to 8 of SDH overhead byte S1 are allocated for Synchronisation Status Messages. For further details on the assignment of bit patterns for byte S1, see the section of this primer on Multiplex Section Overhead. Stack: A data construct that uses first-in, last-out (FILO). Think of a stack of pancakes. The first pancake cooked (first in) is put on a plate and then covered with other pancakes as they are done cooking. The original pancake is the last one that leaves the plate if you eat them one at a time. See also queue. Standalone: A hardware device or piece of software that works with nothing else required. Examples include a hardware-based MP3 player, a RAID server that hooks up directly to the network with no PC required to run it, or an executable program with the proper libraries embedded. Standalone can have many contexts, but it always refers to the ability to function without requiring other components. Standard: Set of rules or procedures that are either widely used or officially specified. See also de facto standard and de jure standard. Statistical multiplexing: Increases the overall efficiency of a multi-channel digital television transmission multiplex by varying the bit-rate of each of its channels to take only that share of the total multiplex bit-rate it needs at any one time. The share apportioned to each channel is predicted statistically with reference to its current and recent-past demands. See also: Multiplex. Star topology: A network topology that has network hubs at the center, with all connected computers linked back to the hub by a single cable. Thus, if one cable goes down, the rest of the computers can still communicate. Start Codes: 32-bit codes embedded in the coded bit stream. They are used for several purposes including identifying some of the layers in the coding syntax. 
Start codes consist of a 24-bit prefix (0x000001) and an 8-bit stream_id as shown in Table 2-18 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1. STB: See Set-top Box. STD Input Buffer: A first-in first-out buffer at the input of a system target decoder for
storage of compressed data from elementary streams before decoding. Stereo & Joint Stereo (for MPEG): MPEG-1 Layer II allows 4 modes: stereo, joint_stereo, dual_channel and single_channel. The audio may be sampled at 44.1, 48 (usually), or 32 kHz. Data rates extend from 32 kb/s up to 384 kb/s (single channel up to 192 kb/s). Stereo mode creates 2 independent channels for left and right. When stereo mode is used, it should be at a bit rate greater than 192 kb/s, about half of which is for each channel. Stereo mode at a sufficient bit rate (e.g. at least 256 kb/s) allows the phase relationships between channels to be maintained so that accurate Dolby Prologic(TM) or Prologic II decoding can add spatial enjoyment to the home listening environment. Layer II also allows 4 mode_extensions in the joint_stereo mode known as intensity_stereo sub-bands. (In MPEG-1 Layer III, MP3, an MS mode is also available.) Joint stereo coding is a way of reducing the data quantity requirement by exploiting any stereophonic redundancies, i.e. when the program material is mainly monaural (L=R). Joint stereo has 2 submodes called IS and MS. Joint_stereo intensity_stereo destroys phase information and shouldn't be used for high-quality encoding. Joint_stereo is best when encoding at 128 kb/s with suitable material. If there is too much stereo information then reproduction may be marred by a 'flanging' or 'swishing' effect. Joint stereo MS is only available in MP3. MS stereo (meaning Middle/Side, but also sometimes called Mono & Stereo-difference) is a method of carrying 2-channel audio that gains efficiency if the program material is predominantly mono or centre with little individual left or right side directionality, i.e. there is redundancy because mostly L=R. It is used in MP3 and analog FM radio as it is compatible with existing monaural receivers. The middle/centre mono signal is left plus right (L+R); the stereo difference is left minus right (L-R).
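The L+R / L-R relationship behind MS stereo can be sketched as a mid/side round trip. This is a minimal illustration of the principle only, not any particular codec's implementation: real encoders differ in scaling and apply the decision per frequency band.

```python
def ms_encode(left, right):
    """Mid/side encode: mid = (L+R)/2, side = (L-R)/2."""
    return ([(l + r) / 2 for l, r in zip(left, right)],
            [(l - r) / 2 for l, r in zip(left, right)])

def ms_decode(mid, side):
    """Recover L = M+S, R = M-S exactly."""
    return ([m + s for m, s in zip(mid, side)],
            [m - s for m, s in zip(mid, side)])

L = [0.5, -0.25, 0.0]
R = [0.5, -0.25, 0.125]    # nearly equal to L, i.e. near-mono material
mid, side = ms_encode(L, R)
L2, R2 = ms_decode(mid, side)
# The side channel is mostly zero, which is what makes near-mono
# program material cheap to code in MS mode.
print(side)
```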
Still Picture: A coded still picture consists of a video sequence containing exactly one coded picture which is intra-coded. This picture has an associated PTS and the presentation time of succeeding pictures, if any, is later than that of the still picture by at least two picture periods. Storage capacity: Using the ITU-R 601 4:2:2 digital coding standard, each picture occupies a large amount of storage space--especially when related to computer storage devices such as DRAM and disks. So much so that the numbers can become confusing unless a few benchmark statistics are remembered. Fortunately, the units of mega, giga, tera and peta make it easy to express the very large numbers involved. The capacities can all be worked out directly from the 601 standard. Bearing in mind that sync words and blanking can be regenerated and added at the output, only the active picture area need be stored. For the 525-line TV standard the line data is: 720(Y) + 360(Cr) + 360(Cb) = 1,440 pixels/line. With 487 active lines/picture there are 1,440 x 487 = 701,280 pixels/picture (sampling at 8 bits, a picture takes 701.3 kbytes). 1 second takes 701.3 x 30 = 21,039 kbytes, or 21 Mbytes. For the 625-line TV standard the active picture is: 720(Y) + 360(Cr) + 360(Cb) = 1,440 pixels/line.
With 576 active lines/picture there are 1,440 x 576 = 829,440 pixels/picture (sampling at 8 bits, a picture takes 830 kbytes). 1 second takes 830 x 25 = 20,750 kbytes, or 21 Mbytes. So both 525 and 625-line systems require approximately the same amount of storage for a given time: 1 minute takes 21 x 60 = 1,260 Mbytes, or 1.26 Gbytes; 1 hour takes 1.26 x 60 = 76 Gbytes. Useful numbers (referred to non-compressed video): 1 Gbyte will hold 47 seconds; 1 hour takes 76 Gbytes. Store and forward packet switching: Packet-switching technique in which frames are completely processed before being forwarded out the appropriate port. This processing includes calculating the CRC and checking the destination address. In addition, frames must be temporarily stored until network resources (such as an unused link) are available to forward the message. Contrast with cut-through packet switching. Stream: To transmit multimedia files that begin playing upon arrival of the first packets, without needing to wait for all the data to arrive. 2. To send data in such a way as to simulate real-time delivery of multimedia. Streaming: This term is often used to describe technology that is capable of playing audio or video while it is still downloading. This saves you some waiting. Without streaming you'd have to wait until the entire file downloaded before viewing or listening, making Internet radio and video fairly useless. Streaming media: Multimedia content--such as video, audio, text, or animation--that is displayed by a client as it is received from the Internet, broadcast network, or local storage. Stuffing Table (ST): An optional DVB SI table that authorizes the replacement of complete tables due to invalidation at a delivery system boundary such as a cable head-end. Subcell: A geographical area that is part of the cell's coverage area and that is covered with DVB-T signals by means of a transposer.
NOTE: In conjunction with the cell_id the cell_id_extension is used to uniquely identify a subcell. Subnet Mask: The 32-bit address mask used in IP to indicate the bits of an IP address that are being used for the subnet address. Sometimes referred to simply as mask. Sub-pixel: A spatial resolution smaller than that of pixels. Although digital images are composed of pixels it can be very useful to resolve image detail to smaller than pixel size, i.e., sub-pixel. For example, the data for generating a smooth curve on television needs to be created to a finer accuracy than the pixel grid itself, otherwise
the curve will look jagged. Again, when tracking an object in a scene or executing a DVE move, the size and position of the manipulated picture must be calculated, and the picture resolved, to a far finer accuracy than the pixels, otherwise the move will appear jerky. Subroutine: A mini program that resides inside another program and is called within that program. Typically you put together a subroutine when you have to do similar repetitive tasks in different areas of your program and you don't want to code the same thing over and over again. Supercluster: A group of computers linked together via a high speed local network. The performance of such a supercluster compares with the performance of a supercomputer. Supercomputer: A computer that is able to operate at a speed that places it at or near the top speed of currently produced computers. Most supercomputers cost millions of dollars, and the traditional model of using one large computer with proprietary hardware is being challenged by using a cluster of cheaper computers with more standard hardware. Superscalar: A processor that is capable of executing more than one instruction during a processor cycle. Processors can do this by fetching multiple instructions in one cycle, deciding which instructions are independent of other instructions, and executing them. To do this the processor must have an instruction fetching unit that can fetch more than one instruction at a time, built-in logic to determine if instructions are independent, and multiple execution units to execute all the independent instructions. Instructions that depend on the results of the previous instruction obviously can't be executed simultaneously with the instructions on which they depend. This type of instruction slows down superscalar processors. If all instructions are of this type you get no benefit from the superscalar architecture. Surround sound: Refers to listening environments where loudspeakers are positioned around the listener(s). 
Besides the front loudspeakers, other loudspeaker(s) are positioned to the rear and sometimes to the sides. The intention is to recreate the ambience and directionality of the original scene being reproduced. Sound systems employing surround sound can recreate such effects by processing the program stereo channels; or better stereo with phase encoded material such as Dolby ProLogic; or best, from a discrete multichannel system for example a 5.1 (six) channel system that includes a low frequency sub-woofer channel. SVG (Scalable Vector Graphics): is an open-standard vector graphics format that allows the design of Web pages and entire Web sites with high-resolution graphics that incorporate real-time data. It delivers two-dimensional vector graphics and mixed vector and raster graphics to the Web in XML, ensuring accessibility, dynamism, reusability, and extensibility. SVG Version 1.0 was progressed to a W3C Proposed Recommendation on 19 July 2001. S-Video (Separated video or Super-Video):
1. An encoded video signal which separates the brightness from color data. S-video can greatly improve the picture when connecting TVs to any high-quality video source such as digital broadcast satellite (DBS) and DVDs. 2. A video cabling standard that splits video information into two separate signals: one for brightness (luminance), and the other for color information (chrominance). In contrast to composite video, S-Video has a sharper picture. Nowadays, DVD players, some VCRs, and many high-end televisions support S-Video. S-Video is also supported by many electronic accessories, such as camcorders, computer video cards with TV functions, and even console games such as the Sony PlayStation. Swap File: An area of your hard drive that the operating system uses for additional memory when main memory is used up. Although slower, it is usually much more abundant. Sweetening: Electronically improving the quality of an audio or video signal, such as by adding sound effects, laugh tracks, and captions. Switch: A hub that directs network packets to the port for which they are intended, without broadcasting them to all connections. At one time switched hubs were a luxury used only on the backbone of a network; nowadays even consumer equipment is switched to allow for faster networking. As an example of the benefits of switches over simple hubs, a 24-port 100BaseT hub can only transfer 100 Mb/s of data at any given time, while a 24-port 100BaseT switch could theoretically be used to transfer 2.4 Gb/s, assuming that the switch is built to handle that much throughput. Symbol Error Rate (SER): Similar to the BER concept, but refers to the likelihood of incorrectly detecting the digital modulation symbols themselves, which may encode multiple bits per symbol. Symmetrical Communications: Two-way communications in which equal volumes of information flow in each direction. For example, a videoconference call is symmetrical; Video-on-Demand is not.
Also see asymmetric communications. Synchronous: A transmission procedure by which the bit and character stream are slaved to accurately synchronized clocks, both at the receiving and sending end. Synchronisation Supply Unit (SSU): A G.812 network equipment clock. Synchronous Transport Module (STM): A structure in the SDH transmission hierarchy. STM-1 is SDH's base-level transmission rate, equal to 155.52 Mbit/s. Higher rates of STM-4, STM-16, and STM-64 are also defined. Synchronize (v. to synchronize): The act of updating one set of data based on another similar set which may be more up to date. Synchronization can go one way or two ways, and follows a set of rules defined by the synchronization procedure. Synchronization is often used to update data such as address book data on PDAs, or e-mail on laptops that may not always be connected to the office e-mail server. Synchronous: This refers to things that happen at the same time. More commonly, it is used in electronics to signify something occurring at the set pace of a clock, much like a metronome. It also can indicate that a DSL connection has the same upload and download speed.
Symmetric Digital Subscriber Line (SDSL): A form of digital subscriber line that has the same transmission speed upstream and downstream. It is most often used for business use, and is more expensive than the consumer version, ADSL. System Clock Reference (SCR): SCR is a time stamp in the program stream, as opposed to the Program Clock Reference (PCR), which appears in the transport stream. In most common cases, the SCR values and PCR values function identically; however, the maximum allowed interval between SCRs is 700 ms, while the maximum between PCRs is 100 ms. The clocks used at the multiplexer and decoder do not measure time in hours and minutes but in units of 27 MHz expressed as a 42-bit binary number, and there is no requirement that the clocks should be related to any national time standard. In a programme stream, which can carry only a single programme, the clock at the multiplexer is called the System Clock. Access units in all the elementary streams of the programme are assigned time stamps based on this system clock. Regular samples of the system clock are carried in the programme stream to enable the decoder clock to maintain synchronism with the system clock. These samples are called System Clock References (SCR) and they are encoded in optional fields in the pack headers of the programme stream. The alternative MPEG-2 multiplex, the transport stream, may contain many independent programmes. Each programme has its own independent clock called a Programme Clock which need not be synchronised with the clocks of other programmes. (It is, though, permitted for programmes to share Programme Clocks.) Access units comprising a programme are assigned time stamps based on the associated programme clock. Samples of each programme clock, called Programme Clock References (PCR), are carried in the transport stream. They enable a decoder to synchronise its clock with the programme clock of the programme to be decoded.
A Programme Clock Reference for each Programme Clock must appear in the transport stream at least every 0.1 seconds. Exactly which transport packets are carrying the programme clock reference values for a particular programme may be found in the programme map table defining the programme. Note the potential confusion here. In a programme stream, the clock is called a System Clock and a sample of its value is called a System Clock Reference. In a transport stream the clocks are called Programme Clocks and samples of their values are called Programme Clock References. Programme clock references are actually carried in the adaptation fields of transport packets. Apart from these differences in naming, there is very little difference between the clocks and their function in either the programme stream or transport stream. But note where the synchronizing references are placed for each type of stream. Time stamps themselves are 33-bit binary values expressed in units of 90 kHz. System Header: The system header is a data structure defined in 2.5.3.5 of ITU-T Rec. H.222.0 | ISO/IEC 13818-1 that carries information summarizing the system characteristics of the ITU-T Rec. H.222.0 | ISO/IEC 13818-1 Program Stream. System Layer: Portion of the MPEG-2 specification that deals with the combination of one or more elementary streams of video, audio or data into one or more transport streams for storage or transmission. System Target Decoder (STD): A hypothetical reference model of a decoding
process used to define the semantics of an ITU-T Rec. H.222.0 | ISO/IEC 13818-1 multiplexed bit stream. System Time Clock (STC): The 27MHz clock of the encoder upon which transport stream timing is based. System Time Table (STT): An ATSC PSIP table that carries the current date and time of day. It provides timing information for any application requiring schedule synchronization.
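The 27 MHz clock samples described under System Clock Reference above are easy to manipulate numerically. A minimal sketch (function names are illustrative; the 42-bit PCR layout — a 33-bit base counting 90 kHz periods plus a 9-bit extension counting 0-299 at 27 MHz — follows MPEG-2 Systems):

```python
def pcr_to_seconds(base, extension):
    # Full PCR in 27 MHz ticks: base counts 90 kHz periods, extension 0-299
    return (base * 300 + extension) / 27_000_000

def seconds_to_pcr(t):
    # Round to the nearest 27 MHz tick, then split into base and extension
    return divmod(round(t * 27_000_000), 300)

# One second of programme time is exactly 90,000 base ticks:
print(seconds_to_pcr(1.0))       # (90000, 0)
print(pcr_to_seconds(90000, 0))  # 1.0
```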

-TT1: Two pairs of copper wire that carry data at a rate of 1.544 Mbps. T1 lines are used to carry 24 DS-0 signals. They can be used to carry 24 phone lines or an Internet connection capable of 1.544 Mbps data transfer. T2: Four T1 lines, which can carry 96 voice channels or up to 6.312 Mbps worth of data. T3: 28 T1 lines together make up a T3, which can carry 672 separate voice channels or up to 44.736 Mbps data throughput. T3D: A massively parallel computer architecture, using between 32 and 2,048 tightly coupled DEC Alpha processors. T4: 6 T3 lines make up a T4, which carries data at 274 Mbps. T5: 240 T1 lines, which can carry 5,760 voice channels or up to 400.352 Mbps worth of data. T.120: ITU-T standard that describes data conferencing. H.323 provides for the ability to establish T.120 data sessions inside of an existing H.323 session. Tab Delimited: A text file where data elements in the text file are separated by the tab character. Table: The transmission medium for system information necessary for the decoder to access and decode programs and services in the transport stream. Tables are divided into subtables, then into sections, before being transmitted. Several different tables are specified by MPEG, DVB, ISDB-T and ATSC. A table comprises a number of sub_tables with the same value of table_id. Tachometer: A gauge that measures how fast a motor is running in revolutions per minute. The gauge can represent the information in analog or digital form, and is critical when driving performance vehicles. Without a tachometer you can judge the RPM of a motor only by listening to how fast it sounds like it is running. Tag Image File Format (TIFF): A bitmap graphics file format. It was developed by Aldus in 1986 to provide a common format for scanners, and is mainly used for that purpose, desktop publishing, and as the data format for scanned faxes. Tag RAM: A bank of SRAM that only holds addresses. 
Tag RAM is used to store addresses so that when the processor makes a call for memory it first checks to see if the data is in the cache by looking for the memory address in the tag RAM. If it's there, it gets the data from cache. If not, it gets the data from main memory. Tag RAM controls the amount of memory that can be cached. For example, the Tag RAM used with most Intel 430TX chipsets only allowed the first 64 MB of RAM to be cached. You could use more RAM in your machine but it won't be cached, and thus performance will be affected when memory over 64 MB is accessed. TAO (Track at Once): A method of writing data to a CD-R or CD-RW disc on a track by track basis. Recording can be paused between writing tracks, unlike Disc at Once,
which requires an uninterrupted full-disc recording. You can create unclosed CDs with TAO recording that you can write to again at a later time. Tape: A storage medium that consists of a long band of magnetic material wound around a couple of reels. Tapes can hold a lot of information, but are typically slow when accessing different parts of the tape, and can be unreliable for long-term storage due to their magnetic nature. Tape Archive (tar): A UNIX/Linux command that was designed to allow the storage of data spread across files and directories to exist in a single tape volume. Another handy use is to squeeze a directory or group of files into a single file on a hard drive, and then compact it with a compression program such as gzip to save some space. Simple usage includes "tar cvf" (cvf = create verbose-mode file), or "tar xvf" (x = extract) to extract a .tar file back to its original contents. Tape Drive: A device that can store data on a tape. The advantage of storing data on a tape is that a tape can hold large amounts of data in a small and inexpensive package. On the downside, a tape cannot store the data indefinitely, and the drive is expensive and slower compared to a hard drive. But tapes themselves are cheap, and are easier to move around than hard drives. TCM (Time Compression Multiplexing): Transmission in opposite directions is achieved through time division multiplexing. The transmission throughput over the channel is faster than the data rate.
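The T-carrier rates listed above all build on the 64 kbit/s DS-0 channel; a quick arithmetic check (a sketch, not part of the glossary):

```python
DS0 = 64_000                  # one digitized voice channel, bit/s

t1 = 24 * DS0 + 8_000         # 24 channels plus 8 kbit/s framing
print(t1)                     # 1544000, i.e. 1.544 Mbit/s

t3_voice_channels = 28 * 24   # a T3 multiplexes 28 T1s
print(t3_voice_channels)      # 672
# The T3 line rate (44.736 Mbit/s) exceeds 28 x 1.544 Mbit/s because
# each multiplexing stage adds its own framing and bit-stuffing overhead.
```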

TCP (Transmission Control Protocol): The part of the TCP/IP suite of protocols that is responsible for forming reliable data connections between nodes, as opposed to UDP, or IP itself (on which TCP is based), which are by default connectionless and potentially unreliable. TCP/IP (Transmission Control Protocol/Internet Protocol): The TCP/IP suite first saw use on the original Department of Defense Internet in 1983. Its first implementation was amazingly successful, and it is still THE protocol of the Internet. In fact, it has grown even more, and is being used in private networks around the world. TCP/IP is a suite of communications protocols that allows communication between groups of dissimilar computer systems from a variety of vendors. It scales better than NetBEUI because NetBEUI is not routable, and beat out IPX/SPX as it was easier to route than that once-dominant protocol. TDM (Time division multiplex): The management of multiple signals on one channel by alternately sending portions of each signal and assigning each portion to particular blocks of time. TDMA (Time-Division Multiple Access): The principle of TDMA is basically simple. Traditionally, voice channels have been created by dividing the radio spectrum into (ever narrower) RF carriers (channels), with one conversation occupying one (duplex) channel. This technique is known as FDMA (frequency division multiple access). TDMA divides the radio carriers into an endlessly repeated sequence of small time slots (channels). Each conversation occupies just one of these time slots.
So instead of just one conversation, each radio carrier carries a number of conversations at once. TDR (Time Domain Reflectometer/Reflectometry): A network analysis tool that sends a signal down a wire and measures the time it takes for various reflections to return along the original signal path. This allows one to judge the quality of the wire or locate damage to it. Tearing: A lateral displacement of the video lines due to sync instability. Visually it appears as though parts of the images have been torn away. Telecom: Refers to the industry and hardware involved with telephones and the transmission of voice data. Telecommunications: Refers to the industry and hardware involved with telephones and the transmission of voice data. Telephony: The science of audio communication through electric devices. It commonly refers to the many pieces of software that will make your $2,000 computer act like a $20 telephone. Of course, you can make this work for you with CTI. With CTI, you can make a $2,000 computer act like a $20,000-a-year employee who works 24 hours a day. Telephony API (TAPI): An API for using telephony functions in Windows. For example, you can include TAPI instructions in your program that can dial numbers, receive calls, and interpret touch-tones. Temporal aliasing: A defect in a video picture that occurs when the image being sampled moves too fast for the sampling rate. A common example occurs when the rapidly rotating spokes of a wagon's wheels appear to rotate backwards because the video scanning is slower than the rotation of the spokes. Temporal Encoding: The process that compresses a video signal by eliminating redundancy between sequential frames. Temporal Masking: The phenomenon that happens when a loud sound drowns out a softer sound that occurs immediately before or after it. Temporal Processing: Because of the conflicting requirements of random access and highly efficient compression, three main picture types are defined. 
Intra Coded Pictures (I-Pictures) are coded without reference to other pictures. They provide access points to the coded sequence where decoding can begin, but are coded with only moderate compression. Predictive Coded Pictures (P-Pictures) are coded more efficiently using motion compensated prediction from a past intra or predictive coded picture and are generally used as a reference for further prediction. Bidirectionally-predictive Coded Pictures (B-Pictures) provide the highest degree of compression but require both past and future reference pictures for motion compensation.
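These dependencies determine transmission order: because a B-picture needs both its references before it can be decoded, an encoder sends the following reference picture ahead of the B-pictures that use it. A toy sketch (the labels and helper function are hypothetical, assuming a simple IBBP pattern):

```python
def to_decode_order(display_order):
    """Reorder a display-order picture sequence so every B-picture
    follows both of its references (the previous and next I/P picture)."""
    out, held_b = [], []
    for pic in display_order:
        if pic.startswith("B"):
            held_b.append(pic)    # wait until the next reference arrives
        else:
            out.append(pic)       # I- or P-picture: a reference
            out.extend(held_b)    # the held B-pictures are now decodable
            held_b = []
    return out + held_b

print(to_decode_order(["I0", "B1", "B2", "P3", "B4", "B5", "P6"]))
# ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```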

[Figure Intro. 1: Example of temporal picture structure, illustrating prediction and bidirectional interpolation among I-, P- and B-pictures]
Bidirectionally-predictive coded pictures are never used as references for prediction (except in the case that the resulting picture is used as a reference in a spatially scalable enhancement layer). The organization of the three picture types in a sequence is very flexible. The choice is left to the encoder and will depend on the requirements of the application. Figure Intro. 1 illustrates an example of the relationship among the three different picture types. Temporal resolution: The ability of the display to reproduce adequate detail to allow the visual system to distinguish the separate parts or components of an object that is moving through the display. Terabit (Tb): Approximately 1 trillion bits. More exactly, it is 2^40, or 1,099,511,627,776, bits. Terabyte: One trillion bytes, or one thousand gigabytes. TeraFlop (Tflop): The ability of a system to compute one trillion floating point operations in one second. Terrestrial: A broadcast signal transmitted "over the air" to an antenna. Terrestrial Virtual Channel Table (TVCT): The ATSC table that identifies a set of one or more channels in a terrestrial broadcast. For each channel, the TVCT indicates major and minor channel numbers, short channel name, and information for navigation and tuning. Text Editor: A class of computer programs that allows the opening, changing, and saving of text files. Text editors can be used to edit HTML files, and any file that is not binary in nature. Text editors are not good for working with graphics files or proprietary formats such as Word documents that contain formatting information that is not translated properly to plain text. Text editors differ from word processors because no formatting data (such as font type, font size, etc.) is added when a file is
saved in a text editor. Texture Element (Texel): The smallest element of a textured 3D surface. Pixels make up 2D surfaces, but texels make up the surfaces that cover textured 3D objects. Higher texel fill-rates generally imply better performance in 3D graphics accelerators. Texture Mapping: This technique pastes saved images, to be used as textures, onto triangle surfaces to improve realism. For example, you could take a picture of a grassy field, code the program to use this picture to fill in your triangles on the floor, and you get what looks like grass on which you can walk. Timebase Recovery: A program is a collection of associated media, all of which refer to a common timebase. The timebase is referred to as the System Time Clock (STC). The ITU-T H.222.1 Program Stream supports one and only one program. The H.222.0 Transport Stream supports multiple programs. H.222.1, however, constrains the number of programs in the Transport Stream to be one. The send side and receive side each have their own timebases. Time stamps attached to specific PDUs identify the intended time of arrival of the PDU at the receive side. Synchronization of the receive side timebase with the send side timebase may be achieved using these time stamps. H.222.1 provides optional additional timebase recovery information, which may be useful in jittered environments. Time code: 1. Vertical interval time code (VITC): SMPTE time code that is recorded as video signals in the vertical interval of the active picture. It has the advantage of being readable by a VTR in still or jog. Multiple lines of VITC can be added to the signal, allowing the encoding of more information than can be stored in normal LTC. 2. Linear time code (LTC): Time code recorded on a linear analog track (typically an audio channel) on a videotape. Also called longitudinal time code. 
Time code can be drop frame (59.94 Hz), which matches actual elapsed time by dropping occasional frame numbers, or non-drop frame (60 Hz), which runs continuously although it does not exactly match actual elapsed time. Time and Date Notation: International Atomic Time (TAI): The timing standard based on measurement of the atomic transition/oscillation of the element cesium, which is defined to have a frequency of 9,192,631,770 Hz. Universal Time (UT) is a time scale based on astronomical observation. Greenwich Mean Time (GMT) is the same as UT. UTC, Coordinated Universal Time, is a time scale with seconds coincident to those of TAI but within a close tolerance (±0.8 sec) of UT. See also Modified Julian Date (MJD). More detailed information on time keeping may be found at web sites such as http://www.boulder.nist.gov/timefreq/general/enc-ch.htm#utc and further time information may be obtained at: http://www.bipm.fr/enus/5_Scientific/c_time/time_server.html Time and Date Table (TDT): A mandatory DVB SI table that supplies the UTC time (Coordinated Universal Time) and date. This table enables joint management of events corresponding to services accessible from a single reception point.
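The drop-frame counting described under Time code follows a well-known adjustment: two frame numbers are skipped at the start of every minute except minutes divisible by ten. A sketch (assuming a nominal 30 fps count running at 29.97 Hz; the function name is illustrative):

```python
def frames_to_dropframe(frame_number):
    """Convert a running frame count to HH:MM:SS;FF drop-frame time code."""
    frames_per_min = 60 * 30 - 2                   # 1798: 2 numbers dropped
    frames_per_10min = 9 * frames_per_min + 1800   # 17982: 10th minute full
    d, m = divmod(frame_number, frames_per_10min)
    if m > 1:
        frame_number += 18 * d + 2 * ((m - 2) // frames_per_min)
    else:
        frame_number += 18 * d
    ff = frame_number % 30
    ss = (frame_number // 30) % 60
    mm = (frame_number // 1800) % 60
    hh = (frame_number // 108000) % 24
    return f"{hh:02d}:{mm:02d}:{ss:02d};{ff:02d}"

print(frames_to_dropframe(1800))   # 00:01:00;02 -- frames 00/01 were skipped
```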

Time Division Multiple Access (TDMA): Approach for allotting single-channel usage amongst many users, by dividing the channel into slots of time during which each user has access to the medium. Time Domain: Time-domain is a term used to describe the analysis of mathematical functions, or real-life signals, with respect to time. In the time domain, the signal or function's value is known at various discrete time points, or for all real numbers in the case of continuous time. Timing Jitter Removal: Recommendation H.222.1 may offer the capability to remove the effects of time delay variation on the encoded bit stream at the receiver. Timing in the Transport Stream: PCR, PTS and DTS: Timing in the transport stream is based on the 27 MHz System Time Clock (STC) of the encoder. To ensure proper synchronization during the decoding process, the decoder's clock must be locked to the encoder's STC. In order to achieve this lock, the encoder inserts into the transport stream a 27 MHz time stamp for each program. This time stamp is called the Program Clock Reference, or PCR. Using the PCR, the decoder generates a local 27 MHz clock that is locked to the encoder's STC. As we mentioned earlier, compressed video frames are often transmitted out of order. This means that an I-frame used to regenerate preceding B-frames must be available in the decoder well before its presentation time arrives. To manage this critical timing process, there are two time stamps in the header of each PES packet, the Decoding Time Stamp (DTS) and the Presentation Time Stamp (PTS). These tell the decoder when a frame must be decoded (DTS) and displayed (PTS). If the DTS for a frame precedes its PTS considerably, the frame is decoded and held in a buffer until its presentation time arrives. The following figure shows the timing sequence in the transport stream. Before the transport stream is created, the encoder adds PTSs and DTSs to each frame in the PES. 
It also places the PCR for each program into the transport stream. Inside the decoder, the PCR goes through a Phase Lock Loop (PLL) algorithm, which locks the decoder's clock to the STC of the encoder. This synchronizes the decoder with the encoder so that data buffers in the decoder do not overflow or underflow. Once the decoder's clock is synchronized, the decoder begins decoding and presenting programs as specified by the PTS and DTS for each audio or video frame.
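The PTS/DTS relationship can be illustrated with a small numeric sketch in 90 kHz units (the values are illustrative, assuming 29.97 Hz frames of 3003 ticks each and a one-frame reordering delay):

```python
FRAME = 3003   # one frame period in 90 kHz units at 29.97 Hz

def stamp(decode_order, display_index):
    """DTS follows decode (transmission) order; PTS follows display order,
    shifted by one frame so no picture is presented before it is decoded."""
    return [(f, i * FRAME, (display_index[f] + 1) * FRAME)
            for i, f in enumerate(decode_order)]

# Display order I0 B1 B2 P3 is transmitted as I0 P3 B1 B2:
stamps = stamp(["I0", "P3", "B1", "B2"],
               {"I0": 0, "B1": 1, "B2": 2, "P3": 3})
for frame, dts, pts in stamps:
    print(frame, dts, pts)
# P3 gets DTS 3003 but PTS 12012: decoded early, buffered until display time.
# The B-pictures have PTS equal to DTS: decoded and presented immediately.
```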

Timeline: In nonlinear editing, the area in which audio and video clips are applied, typically giving duration in frames and seconds. Also seen in animation and composition software. Time Offset Table (TOT): Optional DVB SI table that supplies the UTC time and date and shows the difference between UTC time and the local time for various geographical regions. The PID for this table is 0x0014. Time-Stamp: An indication in the transport stream of the time at which a specific action, such as the arrival of a byte or the presentation of a frame, is to take place. See also PCR, DTS, and PTS. Thin Client: A thin client is similar to a dumb terminal in that it gets all of its information from the network. Some thin clients have their own memory, but lack a hard drive. They're basically stripped-down computers that are supposed to lower the total cost of owning a computer. These computers are generally used in business or commercial applications. Thin Film Transistor (TFT): A synonym for the Active Matrix display. You'll often see screens referred to as "TFT-Active Matrix," or just "TFT" if the manufacturer is low on space. ThinNet: This refers to the type of cabling on which 10Base2 Ethernet runs. It can transfer data at up to 10 Mbps using the 10Base2 Ethernet standard. It is thinner than 10Base5 cabling, hence the name ThinNet. Tiresias (Typeface/fonts): A sans-serif typeface developed by the Royal National Institute for the Blind (RNIB, a UK registered charity). Designed to optimise legibility on television displays for those viewers who rely on subtitles, but also intended for use as a resident font in receivers for enhanced and interactive content. This typeface is usable on both 4:3 and 16:9 widescreen displays; while the characters may be a little squeezed on a 4:3 screen and a little stretched on a 16:9 screen, they remain legible. The typeface may come in several fonts, e.g. normal 12 point, italic 12 point, bold 18 point. 
RNIB retains the rights to the basic font design including other applications, such as printing. BitStream Inc. is the distributor of the font and in the proprietary PFR format it's sufficiently compact to be used in digital television. See also Fonts. Title Bar: The top portion of a window in a GUI that contains the title of the window. Token Ring: A network topology pioneered by IBM and eventually made into the IEEE 802.5 standard. The original version transmitted data at 4Mbits/second, and it was updated to transmit at 16Mbit/second. Token ring networks are wired in a ring topology, and nodes on the network pass a token around. Whichever node has the token is allowed to use the network. Toner: Basically this is ink in dust form. It is specially formulated to be sticky and to melt at a couple hundred degrees so that it bonds with paper when used in a laser printer or copier. Toner Cartridge: When referring to laser printers (or copiers or fax machines) this is a cartridge that contains toner and the electrostatic drum used to transfer that toner. Toolbar: A common user interface term that refers to any rectangular bar of buttons
or icons with a set of related functions. For example, most browsers use a toolbar for navigating forward or backwards through pages. You can often customize toolbars and add more functionality to them. Toolkit Without An Interesting Name (TWAIN): A set of operations that allow scanners to have a standard interface to software. This allows the use of your favorite graphics package with your favorite scanner without worrying if one will support the other. As long as both are TWAIN-compliant they will work together. Check out www.twain.org for much more information. Topology: The general structure of a network. Some examples are star and ring topologies. Total harmonic distortion (THD): Measure of total harmonic power at the output of a nonlinear device, typically given in dBc relative to a desired signal. TOV: Threshold of visibility. The impairment level (or D/U in dB) beyond which a source of impairment or interference may introduce visible deficiencies in more sensitive program material. For all tests, TOV was determined by expert observers. Track: One of the concentric circles of data on disk media such as hard drives, CD-ROM discs, DVD discs, and floppy disks. Track at Once (TAO): A method of writing data to a CD-R or CD-RW disc on a track-by-track basis. Recording can be paused between writing tracks, unlike Disc at Once, which requires an uninterrupted full-disc recording. You can create unclosed CDs with TAO recording that you can write to again at a later time. Trackball: This is basically a mouse turned upside down. Instead of moving the whole pointing device, you simply move the ball on top. It was first seen in arcade games such as Missile Command and Centipede, but is now used to replace mice where space is limited. Transceiver: A device that translates between different network cables but maintains the same network topology. Thus, a transceiver could allow an AUI (Thick-Ethernet) NIC to work with a Cat5 network cable. 
Transfer rate: The rate at which data is transferred, in some amount of bits per second. Transport Layer: Layer 4 of the OSI reference model. This layer is responsible for reliable network communication between end nodes. The transport layer provides mechanisms for the establishment, maintenance, and termination of virtual circuits, transport fault detection and recovery, and information flow control. Corresponds to the transmission control layer of the SNA model. Transistor: An electronic device that acts like an electrically activated switch but has no moving parts, so it can switch millions of times per second. Transistor-Transistor Logic (TTL): A specific method of wiring a digital circuit using bipolar transistors.
Transmission (TX): The act of uploading or sending data. Often the term "TX" is used on indicator lights on modems or network cards to indicate that data is flowing out of the device. Transmission Parameter Signalling (TPS): The TPS carriers are used for the purpose of signalling parameters related to the transmission scheme, i.e. to channel coding and modulation. The TPS is transmitted in parallel on 17 TPS carriers for the 2K mode and on 68 carriers for the 8K mode. Options covering 4K, in-depth inner interleaver, time-slicing and MPE-FEC bits for DVB-H signalling are specified in annex F of ETSI EN 300 744 V1.5.1 (2004-06). Every TPS carrier in the same symbol conveys the same differentially encoded information bit. The TPS is defined over 68 consecutive OFDM symbols, referred to as one OFDM frame. Four consecutive frames correspond to one OFDM super-frame. The reference sequence corresponding to the TPS carriers of the first symbol of each OFDM frame is used to initialize the TPS modulation on each TPS carrier. Each OFDM symbol conveys one TPS bit. Each TPS block (corresponding to one OFDM frame) contains 68 bits, defined as follows: - 1 initialization bit; - 16 synchronization bits; - 37 information bits; - 14 redundancy bits for error protection. Of the 37 information bits, 31 are used. The remaining 6 bits shall be set to zero. Transcode: The process of converting a file or program from one format or resolution to another. Transmission Delay: To control echo and to minimize the effect on digital throughput, the maximum one-way absolute delay for steady-state operation of a 100-mile transport system with no intermediate terminals is 1 ms. This applies for all interface options provided. The required maximum delay for shorter systems is to be decreased in direct proportion to the route mileage. 
Transmitter: Equipment that modulates a baseband transport stream and broadcasts it on one frequency. Transponder: In telecommunication, the term transponder (sometimes abbreviated to XPDR or TPDR) has the following meanings: 1) An automatic device that receives, amplifies, and retransmits a signal on a different frequency (see also broadcast translator). 2) An automatic device that transmits a predetermined message in response to a predefined received signal. 3) A receiver-transmitter that will generate a reply signal upon proper electronic interrogation. In particular, a communications satellite's channels are called transponders, because each is a separate transceiver or repeater. With digital video data compression and multiplexing, several video and audio channels may travel through a single transponder on a single wideband carrier. Original analog video only has one
channel per transponder, with subcarriers for audio and the automatic transmission identification service (ATIS). Non-multiplexed radio stations can also travel in single channel per carrier (SCPC) mode, with multiple carriers (analog or digital) per transponder. This allows each station to transmit directly to the satellite, rather than paying for a whole transponder, or using landlines to send it to an earth station for multiplexing with other stations. Transposer: A type of repeater that receives a DVB-T signal and re-transmits it on a different frequency. The relationships of some of these definitions are illustrated in the service delivery model in figure 1.

Transponder: Trans(mitter) and (res)ponder. The equipment inside a satellite that receives and re-sends information. Transport Stream & ATM Transmission: The packet length is the outcome of a compromise: on one hand, the longer the packet, the lower the overhead; on the other hand, the shorter the packet, the easier the stream synchronization. The 188-byte transport stream packet can be easily divided into 4 ATM cells. Each ATM cell is composed of 5 bytes of header + 1 byte for the adaptation layer and 47 bytes for the payload (188/4 = 47). So a transport stream can be transmitted through ATM with the minimum overhead. Transport Stream (TS): Data stream that includes ancillary data, compressed video, and compressed audio. In MPEG-2, a packet-based method of multiplexing into a single stream one or more packetised elementary streams, each consisting of related digital streams of various material. This material may be video, audio and other information such as teletext, bit-mapped subtitles, and other supporting material in an internet-type format. Other streams may be in a private data format. The streams may have one or more independent time bases. In MPEG-2, the packet length is 188 bytes, including a 4-byte header which contains a PID for identification. The 184-byte
payload may include data from the PES packets (i.e. video, audio, etc.), information on the format of the contents (PSI) and/or adaptation fields or stuffing bytes. The Transport Stream is a stream definition which is tailored for communicating or storing one or more programs of coded data according to ITU-T Rec. H.262 | ISO/IEC 13818-2 and ISO/IEC 13818-3 and other data in environments in which significant errors may occur. Such errors may be manifested as bit value errors or loss of packets. Transport Streams may be either fixed or variable rate. In either case the constituent elementary streams may either be fixed or variable rate. The syntax and semantic constraints on the stream are identical in each of these cases. The Transport Stream rate is defined by the values and locations of Program Clock Reference (PCR) fields, which in general are separate PCR fields for each program. There are some difficulties with constructing and delivering a Transport Stream containing multiple programs with independent time bases such that the overall bit rate is variable. The Transport Stream may be constructed by any method that results in a valid stream. It is possible to construct Transport Streams containing one or more programs from elementary coded data streams, from Program Streams, or from other Transport Streams which may themselves contain one or more programs. The Transport Stream is designed in such a way that several operations on a Transport Stream are possible with minimum effort.
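The 188-byte packet layout described above lends itself to a tiny parser. A sketch of pulling the PID and continuity counter out of the 4-byte header (field positions per MPEG-2 Systems; the sample packet is fabricated for illustration):

```python
SYNC_BYTE = 0x47

def parse_ts_header(packet: bytes):
    """Extract the PID, payload_unit_start flag and continuity counter
    from a 188-byte transport stream packet."""
    if len(packet) != 188 or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid 188-byte transport stream packet")
    payload_unit_start = bool(packet[1] & 0x40)
    pid = ((packet[1] & 0x1F) << 8) | packet[2]    # 13-bit PID
    continuity_counter = packet[3] & 0x0F          # 4-bit counter
    return pid, payload_unit_start, continuity_counter

# 4-byte header plus 184-byte payload; PID 0x0011, continuity counter 3:
pkt = bytes([0x47, 0x40, 0x11, 0x13]) + bytes(184)
print(parse_ts_header(pkt))   # (17, True, 3)
```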

Among these are: 1) Retrieve the coded data from one program within the Transport Stream, decode it and present the decoded results as shown in Figure Intro. 2. 2) Extract the Transport Stream packets from one program within the Transport Stream and produce as output a different Transport Stream with only that one program as shown in Figure Intro. 3.

3) Extract the Transport Stream packets of one or more programs from one or more Transport Streams and produce as output a different Transport Stream (not illustrated). 4) Extract the contents of one program from the Transport Stream and produce as output a Program Stream containing that one program as shown in Figure Intro. 4. 5) Take a Program Stream, convert it into a Transport Stream to carry it over a lossy environment, and then recover a valid, and in certain cases, identical Program Stream.

Figure Intro. 2 and Figure Intro. 3 illustrate prototypical demultiplexing and decoding systems which take as input a Transport Stream. Figure Intro. 2 illustrates the first case, where a Transport Stream is directly demultiplexed and decoded. Transport Streams are constructed in two layers: a system layer and a compression layer. The input stream to the Transport Stream decoder has a system layer wrapped about a compression layer. Input streams to the Video and Audio decoders have only the compression layer. Operations performed by the prototypical decoder which accepts Transport Streams either apply to the entire Transport Stream ("multiplex-wide operations"), or to individual elementary streams ("stream-specific operations"). The Transport Stream system layer is divided into two sub-layers, one for multiplex-wide operations (the Transport Stream packet layer), and one for stream-specific operations (the PES packet layer). A prototypical decoder for Transport Streams, including audio and video, is also depicted in Figure Intro. 2 to illustrate the function of a decoder. The architecture is not unique: some system decoder functions, such as decoder timing control, might equally well be distributed among elementary stream decoders and the channel-specific decoder, but this figure is useful for discussion. Likewise, indication of
errors detected by the channel specific decoder to the individual audio and video decoders may be performed in various ways and such communication paths are not shown in the diagram. The prototypical decoder design does not imply any normative requirement for the design of a Transport Stream decoder. Indeed non-audio/video data is also allowed, but not shown. Figure Intro. 3 illustrates the second case, where a Transport Stream containing multiple programs is converted into a Transport Stream containing a single program. In this case the re-multiplexing operation may necessitate the correction of Program Clock Reference (PCR) values to account for changes in the PCR locations in the bit stream. Figure Intro. 4 illustrates a case in which a multi-program Transport Stream is first demultiplexed and then converted into a Program Stream. Figures Intro. 3 and Intro. 4 indicate that it is possible and reasonable to convert between different types and configurations of Transport Streams. There are specific fields defined in the Transport Stream and Program Stream syntax which facilitate the conversions illustrated. There is no requirement that specific implementations of demultiplexors or decoders include all of these functions. Transport Stream Packet: A way of breaking up the continuous stream of MPEG compressed video, audio and other data for ease of passing through various transmission systems such as broadcast. In MPEG-2, the packet length is 188-bytes including a 4-byte header which contains a PID for identification. The 184 byte payload may include data from the PES packets (i.e. video, audio, etc.), information on the format of the contents (PSI) and/or adaptation fields or stuffing bytes. Transport Stream Packet Header: The leading fields in a Transport Stream packet, up to and including the continuity_counter field. 
Traveling Wave Tube Amplifier (TWTA): Prior to advances in solid-state electronics, the amplifier technology of choice for generating very high-power microwave signals, such as those used in satellite communication applications. Trap: Message sent by an SNMP agent to an NMS, console, or terminal to indicate the occurrence of a significant event, such as a specifically defined condition or a threshold that was reached. See also alarm and event. Tree: A means of organizing data that starts with a single node, or data element, that has any number of child elements. Each of these child elements or nodes can also have its own child elements. Trees are an important part of understanding programming and data structures. Tree Topology: LAN topology similar to a bus topology, except that tree networks can contain branches with multiple nodes. Transmissions from a station propagate the length of the medium and are received by all other stations. Compare with bus topology, ring topology, and star topology. Trellis-Coded Modulation (TCM): Signaling technique that incorporates both digital modulation and error correction concepts into a symbol-mapping format that yields link performance gains approaching the Shannon boundary of theoretical

performance. A combination of a specially selected convolutional code and a bit-to-symbol mapping algorithm for achieving significant coding gain with no bandwidth expansion. Tributary Unit (TU): A Tributary Unit is an information structure which provides adaptation between the Lower-Order path layer and the Higher-Order path layer. It contains the Virtual Container (VC) plus a tributary unit pointer. Tributary Unit Group (TUG): Contains several Tributary Units. Trick Mode: Mechanisms included to support video-recorder-like control functionality, e.g. fast forward, rewind, etc. Truncation: Removal of the lower significant bits of a digital word--as could be necessary when sending a 16-bit word on an 8-bit bus. If not carefully handled it can lead to unpleasant artifacts on video signals. True Color: The name given to 32-bit, 16.7 million color representation. True Parity: This term has come about with the advent of logical parity memory. It simply means that the parity memory actually does something useful instead of just issuing positives over and over. TrueType: This is a font standard developed by Apple and used in Mac OS version 7. Later, Apple licensed the technology to Microsoft, which used it in Windows 3.1 and continues to use it today. However, Apple and Microsoft TrueType fonts are not compatible. Truth table: A Boolean table that describes the way that a circuit reacts to input values by showing a complete set of possible input values with corresponding outputs. TV-Anytime: Standards for PVRs/PDRs. Formed in September 1999, the TV-Anytime Forum is developing open specifications for interoperable and integrated systems that will allow broadcasters and other service providers, consumer electronics manufacturers, content creators and telecommunications companies to most effectively utilize high-volume digital storage in consumer devices. The forum has published four specifications: description, applications, metadata and content referencing. 
These are being extended by further and new work, including work on Rights Management and Protection. Additionally, the MHP is now reviewing this work with the intention of its incorporation into MHP Ver2.0. www.tv-anytime.org TV Crossover Links: A type of enhancement which notifies users that there is enhanced or Web content associated with a program or an advertisement. A TV Crossover Link appears as a small icon in the corner of the TV screen at a point in time determined by content producers. Clicking the link displays a panel, giving the viewer an option to go to the content enhancement (Web site) or continue watching TV. If the viewer chooses to go to the Web site, the receiver connects to the site, while the current program or advertisement remains on-screen. Pressing the View

button on the remote control or keyboard returns to TV viewing. The term is a trademark of the Microsoft Corporation.
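The Truth table entry earlier in this section can be illustrated in a few lines of Python; the `truth_table` helper and the XOR circuit are hypothetical examples, not part of any standard.

```python
from itertools import product

def truth_table(circuit, n_inputs):
    """Enumerate every input combination and the circuit's corresponding output."""
    return [inputs + (circuit(*inputs),)
            for inputs in product((0, 1), repeat=n_inputs)]

# Example circuit: XOR expressed with AND/OR/NOT primitives.
def xor(a, b):
    return int((a or b) and not (a and b))

table = truth_table(xor, 2)
# table → [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
```

Each row lists the inputs followed by the output, which is exactly the "complete set of possible input values with corresponding outputs" the entry describes.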


-U-
UBR (Unspecified Bit Rate): QoS class defined by the ATM Forum for ATM networks. UBR allows any amount of data up to a specified maximum to be sent across the network, but there are no guarantees in terms of cell loss rate and delay. Compare with ABR, CBR, and VBR. UDP (User Datagram Protocol): Connectionless transport layer protocol in the TCP/IP protocol stack. UDP is a simple protocol that exchanges datagrams without acknowledgments or guaranteed delivery, requiring that error processing and retransmission be handled by other protocols. UDP is defined in RFC 768. UHF (Ultra High Frequency): The range used by TV channels 14 through 69 (the band between 300 MHz and 1 GHz). ULSI (Ultra Large Scale Integration): A microchip with over one million transistors. Most popular chips today fit this description. Maybe in 10 years we will have the SDULSI, for Super-Duper Ultra Large Scale Integration. For now this is at the top of the heap. See also VLSI, LSI, MSI, and SSI. Ultra 160M (160M): A form of SCSI that supersedes Ultra2 SCSI. It runs at up to 160 megabytes per second and uses 80-pin connections over copper wires. Ultra 320M (320M): A form of SCSI that doubles the potential performance of Ultra 160M SCSI and offers throughput of up to 320 megabytes per second per channel. Ultra ATA/100: Another extension to the ATA interface that adds a 50% increase in top speed over ATA/66, getting to 100MB/second, up from 66MB/second. This standard also adds some additional error-checking not found in earlier ATA standards. Like Ultra ATA/66, ATA/100 requires an 80-conductor cable to work at full speed. Ultra ATA/133: This refers to what is most probably the final extension to the parallel ATA connection standard. The proposal was created by Maxtor, and allows a top data transfer rate of 133 megabytes per second. Intel didn't support this standard in its chipsets, instead opting to wait for Serial ATA. See Serial ATA for further details. 
Ultra ATA/33: An extension to the ATA interface (IDE) that effectively doubles the top data transfer speed of IDE from 16.6MBytes/second up to 33 MBytes/second. Also known as Ultra-IDE. Ultra ATA/66: An extension to the ATA interface (IDE) proposed by Quantum that effectively doubles the data transfer speed of the Ultra ATA/33 interface to 66MBps. To achieve the increase in speed you must use a special 80-conductor cable with 40 data lines and 40 ground lines to keep the signal stable.
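Returning to the UDP entry above: the protocol's simplicity is visible in its datagram header, which is just four 16-bit fields (RFC 768). A sketch that builds one follows; the port numbers and payload are made up for illustration, and the checksum is left at zero rather than computed over the pseudo-header.

```python
import struct

def udp_header(src_port: int, dst_port: int, payload: bytes) -> bytes:
    """Build the 8-byte UDP header defined in RFC 768: source port,
    destination port, length (header + payload), and checksum (left as 0
    here; a real stack computes it over a pseudo-header plus the datagram)."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

hdr = udp_header(53, 4096, b"query")   # e.g. a small DNS-sized exchange
```

Everything beyond these four fields — acknowledgment, retransmission, ordering — is deliberately absent, which is why the entry says other protocols must handle error processing.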


Ultra SCSI: This is SCSI that communicates twice as fast as standard SCSI-2. Normal Ultra-SCSI transfers data at 20MBps, and Wide Ultra-SCSI transfers data at 40MBps. Similar to Ultra-IDE, Ultra-SCSI works its magic by transferring data on the up AND the down stroke of a clock cycle, doubling throughput. Ultra XGA (UXGA): A display with 1600x1200 pixel resolution. Ultra2 SCSI (LVD): This standard doubles the maximum transmission speed of Ultra SCSI. Almost all devices made to this standard support the 80MBps transfer rate of Ultra2 Wide SCSI. See also LVD. Ultra3 SCSI: Ultra3 SCSI doubles the maximum transfer rate of Ultra2 SCSI up to 160MBps. This is synonymous with 160M SCSI. Ultraviolet: A spectrum of light with wavelengths from about 4 nanometers to 380 nanometers. Light below 4 nm is in the x-ray spectrum, and just above 380 nm is visible violet light, moving through the color spectrum of visible light as wavelengths get larger. UMID: Unique Material Identifier for content essence. Content may be video, audio, data or system, which may represent a group of picture, audio and data. Note: the UMID has a 12-byte SMPTE label. XY defines the Material/Instance number creation and/or usage methods. UML (Unified Modeling Language): Initially created at Rational Software (now part of IBM), this is an industry-standard method of specifying, visualizing, constructing, and documenting the artifacts of object-oriented software systems using a graphical diagram that looks similar to a flowchart. You can use UML to effectively make a blueprint of the software you are developing, thus making additional development easier, as you can refer back to your UML model. Unicast: Message sent to a single network destination. Compare with broadcast and multicast. Unicast Address: Address specifying a single network device. Compare with broadcast address and multicast address.

Unicode: A universal encoding scheme for characters and text. The goal of Unicode is to enable the use of all characters for all languages of the world. Unicode supports a 16-bit character code for a possible 65,536 characters, as well as an extension of the 16-bit code called UTF-16 that allows for over one million characters and is sufficient for all known encoding requirements, including all known past and present written language characters in the world. Contrast that with the 256-character limit of 8-bit extended ASCII. Unicode is the default encoding of HTML and XML. Uninterruptible Power Supply (UPS): A device that contains a battery and some circuitry to supply your computer with power for a limited time (depending on the battery) if there is any sort of interruption in the outlet power. It must switch to battery

power fast enough to keep the computer running during an outage. Large UPSes typically use lead-acid batteries like the one in your car. Universal Data Format (UDF): A superset of data formats used with GIS, imaging, mapping, and CAD products. You can access data in UDF format instead of converting it from one format to another, with full geographic information preserved. Universal Disk Format (UDF): A file system that supports CD-RW, DVD-ROM, and DVD-Video. It also allows for easily writing more data to CD-R and CD-RW disks with the appropriate software. Universal Naming Convention (UNC): The name given to the naming used when one specifies the server, the volume, the path, and then the file name of a file. So a UNC filename would look like this: \\Myserver\Docdrive\Magazine\glossary.doc. Universal Resource Locator (URL): This is what is used to give Web addresses for HTML, VRML, WAV, and other files. It simply contains the Internet name of the machine containing the data and the path to the file. URLs are much like the UNC, except specifically for the Internet. The address also includes what protocol should be used, such as HTTP, HTTPS, or FTP. UNIX: An operating system that was originally developed at Bell Labs in 1969, and is now being developed by many other corporations. Its main use is as a multi-user server environment, which is ironic since its name is a play on the name Multics. UNIX was billed as a simpler OS than Multics. UNIX is often used to run computer systems at universities and large corporations. UNIX market share was starting to dwindle before the Internet, and the need for fast Web servers sparked a revival of sorts, since most of the first webservers were developed in UNIX environments. There is a free, Open Source version of UNIX called FreeBSD. UNIX was initially a command line-only operating system, but now supports many graphical user environments based on X-Windows, and GUI advancements that affect Linux through XFree can also be applied to UNIX. 
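The Unicode entry above mentions the UTF-16 extension beyond 16 bits. Characters outside the Basic Multilingual Plane are encoded as a "surrogate pair" of two 16-bit code units; a short Python sketch follows (the character choice is arbitrary).

```python
# U+1D11E (MUSICAL SYMBOL G CLEF) lies above U+FFFF, so UTF-16
# must encode it as a surrogate pair of two 16-bit code units.
treble_clef = "\U0001D11E"

bmp = "A".encode("utf-16-be")              # one 16-bit unit
astral = treble_clef.encode("utf-16-be")   # two 16-bit units (4 bytes)

# The pair can also be computed by hand from the code point:
cp = ord(treble_clef) - 0x10000
high = 0xD800 + (cp >> 10)     # high (lead) surrogate
low = 0xDC00 + (cp & 0x3FF)    # low (trail) surrogate
```

The hand computation matches the encoder's output, which is how UTF-16 reaches "over one million characters" from 16-bit units.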
Unshielded Twisted Pair (UTP): Cables that consist of pairs of unshielded wire twisted together. These cables are used for data and telephone networks. They are cheap to produce because they are unshielded, and they get their noise immunity by being twisted together in pairs. Upconversion: The term used to describe the conversion of a lower apparent resolution to a higher one. This process increases the number of pixels and/or frame rate and/or scanning format used to represent an image by interpolating existing pixels to create new ones at closer spacing. As an example, Sony TVs with Digital Reality Creation can upconvert 480i video sources to 960i. Often referred to as "line-doubling." Up Converting: The process which increases the number of pixels and/or frame rate and/or scanning format used to represent an image by interpolating existing pixels to create new ones at closer spacing. Despite its name, the process does not increase the resolution of the image. Up converting is done from standard definition to high definition. Upgrade: This normally refers to a newer version of software, or a version with an

enhanced feature set. If you already own a previous version of the software, you can often purchase an upgrade version for a lower price than a new version. In terms of hardware, you can upgrade your system by adding more memory, or any component that makes your system better or faster. Some graphics cards let you upgrade them by plugging more memory onto the board. Uplink: Communication link from earth to a satellite. Uplink Port: A special networking port on a hub or switch that is used to connect it to a larger network. Often you need a special crossover cable to make this connection function properly. If you think of a network as a tree, the hub (or switch) on top connects to the uplink ports of the hubs (or switches) below it, and so on. Upload: The act of transferring data from your computer to a remote system. This is most commonly referred to when sending files to a remote system through FTP. USB (Universal Serial Bus): An interconnect standard for PCs and other networked equipment that allows attachment of multiple peripheral devices simultaneously with automatic device detection and installation. USB 1.1 allows data rates up to 12Mbits/s and the USB 2.0 standard increases the specification up to 480Mbits/s. Two plug/cable versions are used: a straight transmit/receive pair, in 4-wire and 6-wire versions, which can also feed DC power to the peripheral. Peripherals include modems, mice, keyboards, scanners, printers, displays, speakers, etc. The standard supports plug and play and hot swapping. USB Implementers Forum, Inc: USB Implementers Forum, Inc. is a non-profit corporation founded by the group of companies that developed the Universal Serial Bus specification. The USB-IF was formed to provide a support organization and forum for the advancement and adoption of Universal Serial Bus technology. 
The Forum facilitates the development of high-quality compatible USB peripherals (devices), and promotes the benefits of USB and the quality of products that have passed compliance testing. http://www.usb.org UTC (Coordinated Universal Time): Since radio signals can cross multiple time zones and the international date line, some worldwide standard for time and date is needed. This standard is Coordinated Universal Time, abbreviated UTC. This was formerly known as Greenwich Mean Time (GMT). Other terms used to refer to it include "Zulu time" (after the "Z" often used after UTC times), "universal time," and "world time." UTC is used by international shortwave broadcasters in their broadcast and program schedules. Ham radio operators, shortwave listeners, the military, and utility radio services are also big users of UTC. Greenwich Mean Time was based upon the time at the zero degree meridian that crossed through Greenwich, England. GMT became a world time and date standard because it was used by Britain's Royal Navy and merchant fleet during the nineteenth century. Today, UTC uses precise atomic clocks, shortwave time signals, and satellites to ensure that UTC remains a reliable, accurate standard for scientific

and navigational purposes. Despite the improvements in accuracy, however, the same principles used in GMT have been carried over into UTC. UTC uses a 24-hour system of time notation. "1:00 a.m." in UTC is expressed as 0100, pronounced "zero one hundred." Fifteen minutes after 0100 is expressed as 0115; thirty-eight minutes after 0100 is 0138 (usually pronounced "zero one thirty-eight"). The time one minute after 0159 is 0200. The time one minute after 1259 is 1300 (pronounced "thirteen hundred"). This continues until 2359. One minute later is 0000 ("zero hundred"), and the start of a new UTC day. To convert UTC to local time, you have to add or subtract hours from it. For persons west of the zero meridian to the international date line (which includes all of North America), hours are subtracted from UTC to convert to local time.
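The add-or-subtract rule at the end of the UTC entry is exactly what timezone-aware libraries implement. A sketch using Python's datetime module; the UTC-5 offset (US Eastern Standard Time) and the date are example values only.

```python
from datetime import datetime, timedelta, timezone

# Example offset west of the zero meridian: UTC-5 (US Eastern Standard Time).
est = timezone(timedelta(hours=-5), "EST")

# 0138 UTC on 2 January becomes 2038 local time on 1 January at UTC-5 —
# subtracting hours can move the local date back a day.
t_utc = datetime(2005, 1, 2, 1, 38, tzinfo=timezone.utc)
t_local = t_utc.astimezone(est)
```

Note how the conversion crosses midnight: listeners west of Greenwich routinely find that a UTC broadcast time falls on the previous local calendar day.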


-V-
V.32: A standard naming convention used in determining modem communications, all starting with "V." This one is for specifying the Hayes standard of bidirectional 9600 baud transmission. V.32bis: The standard that came after V.32, which increased the speed from 9600 baud to 14.4 Kbits per second. V.34: This was a dramatic improvement for modem communications. It doubled the top speed of the V.32bis standard to 28.8Kbps. V.34+: This standard was made by US Robotics to indicate that its modems are superior to a standard V.34 modem in that they run at 33.6Kbps instead of the slower 28.8Kbps. V.42: The ITU standard for hardware-based error correction between modems (LAPM); despite the "V." name, it is not a speed standard. V.42bis: This is not a speed standard like V.32 and V.34, but an error correction and compression method that is hardware-based. Its major improvement comes from knowing when compression will be beneficial and when it will not be. V.90: The ITU's first standard for 56K modem communications. It superseded X2 and 56kflex to become the ultimate 56K standard. Most X2 and 56kflex modems were able to upgrade to V.90 for free. V.92: An extension to the V.90 modem transmission standard that adds three new features: quick connect, which speeds up the connection handshake; Modem-on-Hold, for receiving incoming calls without breaking your connection; and PCM Upstream, for speeding up upstream data rates to up to 48Kbps. Previously with V.90, upstream rates were limited to 33.6Kbps. V.Everything: US Robotics' designation for its Courier Dual standard modems, which support all types of analog modem communications. V.Fast: This standard was made between the time of V.32bis and V.34. It is also a 28.8Kbps speed, but was not as reliable as the approved V.34 standard. Vaporware: Software or hardware that is promised or talked about but is not yet completed--and may never be released. 
VBR (Variable Bit Rate): For many applications using data compression techniques for video and audio material, variable bit rate (VBR) compression generally gives a better result than constant bit rate (CBR) systems, both in terms of picture quality and the total amount of data generated. 1. VBR systems compress video signals such that if there is little movement or detail in the picture, there is less data required to define the scene and thus less output. Conversely, if there is a lot of fine detail in the picture or there is a lot of movement or a scene change, then the data rate rises to adequately define the picture. That is, VBR encoding devices increase data rates when necessary to preserve

quality, and decrease data rates with easier content to improve efficiency. These VBR devices, therefore, are sometimes referred to as providing constant-quality operation. 2. Conversely, Constant Bit Rate (CBR) encoding devices operate with data rate constrained to a constant value. When data rate is fixed, there will be some picture quality variation, which will be a function of the picture complexity. Only if data rates are sufficiently high can these variations be relatively imperceptible. DVD is the most familiar example of VBR implementation, where total storage efficiency is especially critical. The average picture bit rate on a DVD for 24/25 frame-per-second film material is around 5 Megabit per second (Mbps) while the peak rate may go past 8 Mbps. (Film can also be progressive-scan encoded, which is more efficient than conventional interlace-scan TV pictures.) Professional television equipment mainly uses CBR on tape and in some editing disk recorders. Television digital transmission may be able to use VBR by means of 'statistical multiplexing' if there are many separate unrelated video services in the one transport stream. Regardless of the type of compression (VBR or CBR), all practical systems need some limits on allowable bit rate variations. To address this, MPEG-2 specifies a data buffer (model) for both compression and transport. It is the responsibility of the compression encoder to manage the data rate, through varying quantization granularity, to avoid buffer overflow or underflow. Verilog: An HDL. Verilog-HDL and VHDL are the two dominating "flavors". Verilog has better support for sign-off and is dominant in the US. VHDL has better support for system-level simulation and is dominant in Europe. VAX (Virtual Address eXtension): A line of 32-bit servers sold by the former Digital Equipment Corp. Initially, VAX computers ran only the VMS operating system, but later versions supported UNIX as well as VMS. 
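The VBR/CBR contrast above reduces to a toy model: both schemes spend the same total bit budget, but VBR distributes it in proportion to scene complexity. The complexity numbers and function below are invented purely for illustration.

```python
def allocate_bits(complexities, total_bits):
    """Toy allocators: CBR gives every frame an equal share of the budget;
    VBR gives each frame a share proportional to its complexity."""
    n = len(complexities)
    cbr = [total_bits / n] * n
    total_c = sum(complexities)
    vbr = [total_bits * c / total_c for c in complexities]
    return cbr, vbr

# Four frames: two easy ones, a detailed one, then a scene change.
cbr, vbr = allocate_bits([1, 1, 2, 4], 8000)
```

Both lists sum to 8000 bits, but VBR concentrates bits (here 4000) on the hardest frame — the "constant quality" behaviour described above, versus CBR's constant 2000 bits per frame regardless of content.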
VB (Visual Basic): A software product developed by Microsoft. Its purpose is to bring programming down to a drag-and-drop level to speed up development cycles. In many ways that goal has been achieved. VB's main competitor at one time was Borland's Delphi. Both programs offer similar functionality, with VB based on the BASIC programming language and Delphi based on Pascal. The actual code generated by VB is BASIC, and you can go in and edit the nitty-gritty if you want to. VB was at one time very slow compared to C++, but it has been sped up significantly since those days. Vector Graphics: As opposed to raster graphics, vector graphics are composed of groups of colored lines. If you've ever seen those old Atari arcade games like Tempest, Battlezone, or Asteroids, that's vector graphics. At one point a company even made a home game machine based on vector graphics--it was called the "Vectrex." Nowadays the only time you see things that look like vector graphics is when you are creating 3D models and you haven't rendered them yet. Vertical Blanking Interval (VBI): Present in conventional, uncompressed TV signals, corresponding to the unseen space at the top of a TV picture. Historically, this time period was provided in the sequence of a television signal so that a CRT picture tube scanning circuit had time to return the scanning electron-beam spot from the bottom of the screen to the top, in preparation for the next picture scan. It can be used to carry ancillary data, such as teletext including closed captions on

the assumption that TV displays will not show the data, which might otherwise appear as flashing white dots at the top of the picture. Veronica: Similar to Archie, this is a client/server service for locating files on the Internet. While Archie uses a spider to index files, Veronica uses Gopher servers. Vertical market: An industry or group of companies that can be marketed to in a similar manner because they have similar needs. Common examples of vertical markets include the government, healthcare, and insurance. Vertical Market Application: An application written specifically for a particular vertical market, as opposed to more generic multi-purpose applications such as office suites. One example is a program written for the insurance industry that computes insurance rates. Such an application is useless in any market besides the insurance industry. Vertical Polarization: In an antenna, a linearly polarized electric field vector whose direction is vertical relative to ground or some arbitrary coordinate system. Very high-speed digital subscriber line (VDSL): The next-generation twisted-pair technology (after ADSL) that targets higher data transmission rates than ADSL in exchange for shorter guaranteed distances. Very Large DataBase (VLDB): This refers, unsurprisingly, to a database that is very large in size. How large exactly is not specifically defined, but sizes of around a terabyte or more would fit this moniker. Database technology is driven by the need for fast VLDB solutions, and there are annual VLDB conferences and VLDB journals. Very Large Scale Integration (VLSI): The number of transistors that are incorporated in a chip. A VLSI processor has on the order of 100,000 or more transistors, but not over a million. See also ULSI, LSI, MSI, and SSI. VHDL (VHSIC Hardware Description Language, where VHSIC stands for Very High Speed Integrated Circuit): A large high-level VLSI design language with Ada-like syntax. 
The Department of Defense standard for hardware description, now standardised as IEEE 1076. Virtual Channel Table (VCT): The ATSC table that describes a set of one or more channels or services. For each channel, the table indicates major and minor channel number, short channel name, and information for navigation and tuning. There are two types of VCTs, the TVCT for terrestrial systems and the CVCT for cable systems. Virtual Circuit: A logical circuit established across a packet-switched network to provide connection-oriented communication between two network devices; used, for example, in X.25, Frame Relay, and ATM networks. VGA (Video Graphics Array): A video standard that allows for resolutions up to 640x480 with up to 16 colors at a time. It also allows for 320x200 resolution with 256 colors. Many older games were written to take advantage of the 320x200 resolution

because of the comparatively high color depth. SVGA and XGA replaced VGA, but VGA compatibility remains an important part of most graphics cards. If your video driver is messed up, versions of Windows, starting with 95 and NT, let you go in under VGA mode to fix your graphics driver. VHF (Very High Frequency): A signal in the frequency range of 30 to 300 MHz. Video Buffering Verifier (VBV): A hypothetical decoder that is conceptually connected to the output of an encoder. Its purpose is to provide a constraint on the variability of the data rate that an encoder can produce. Video Card: An add-on device in computers that deals specifically with displaying to a monitor. Without one you cannot see what's going on in your computer, and may have to resort to the ancient method of using a printer as a monitor (please don't do that). Video Cassette Recorder (VCR): A device that can record and play back video to and from videotapes (video cassettes). Typical tapes can hold two to six hours of video, depending on quality. Video CD (VCD): This technology was developed by Sony and Philips in 1993, and allows around 70 minutes of compressed MPEG-1 video/audio to be stored on a CD. Typically VCD movies are shipped on two CDs. VCDs were very popular in Asia, and were available before DVD. Even though the VCD format was extended with SVCD, VCDs will probably eventually succumb to the higher quality of DVD. VCD resolution is 352x240 (NTSC) or 352x288 (PAL), which is fairly comparable to VHS resolution of 300x360. Video Display Terminal (VDT): An older term for a CRT monitor. It is most often used when the ergonomics of computer monitors or EMF radiation are discussed, such as "Sitting in a room full of video display terminals was thought to lead to migraine headaches, among other things." 
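The Video Buffering Verifier above can be sketched as a leaky-bucket simulation: bits arrive at a constant channel rate while the decoder removes one coded picture per tick, and a conforming encoder must never let the buffer underflow or overflow. The rates, frame sizes, and function name below are invented for illustration.

```python
def vbv_ok(frame_bits, rate_per_frame, buffer_size):
    """Toy VBV check: returns False if the decoder buffer would
    overflow or underflow for the given sequence of coded-frame sizes."""
    fullness = 0
    for bits in frame_bits:
        fullness += rate_per_frame    # channel fills the buffer each tick
        if fullness > buffer_size:    # overflow: frames too small for the rate
            return False
        if bits > fullness:           # underflow: picture not fully arrived
            return False
        fullness -= bits              # decoder drains one coded picture
    return True

ok = vbv_ok([800, 1000, 1200, 1000], rate_per_frame=1000, buffer_size=1800)
bad = vbv_ok([100, 100, 100], rate_per_frame=1000, buffer_size=1800)
```

This is the constraint the entry describes: the encoder may vary frame sizes, but only within what the modelled buffer can absorb.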
Video Coder Overload (also buffer overload): Video coder overload is tested using rapid scene cuts, at most only a few frames apart, to stress digital compression systems by presenting them with a video signal that contains little or no temporal redundancy (frame-to-frame correlation). Video compression: Video compression deals with the compression of digital video data. Video compression is necessary for efficient coding of video data in video file formats and streaming video formats. To be accurate, what people call video compression is more accurately referred to as video data rate reduction--the methods used usually require discarding data (lossy compression) rather than just reducing the required bandwidth (lossless compression). While lossless video compression is possible, in practice it is virtually never used, and all standard video data rate reduction involves discarding data.


Video is basically a three-dimensional array of color pixels. Two dimensions serve as spatial (horizontal and vertical) directions of the moving pictures, and one dimension represents the time domain. A frame is a set of all pixels that (approximately) correspond to a single point in time. Basically, a frame is the same as a still picture. However, in interlaced video, the set of horizontal lines with even numbers and the set with odd numbers are grouped together in fields. The term "picture" can refer to a frame or a field. However, video data contains spatial and temporal redundancy. Similarities can thus be encoded by merely registering differences within a frame (spatial) and/or between frames (temporal). Video compression typically reduces this redundancy using lossy compression. Usually this is achieved by image compression techniques to reduce spatial redundancy from frames (this is known as intraframe compression or spatial compression) and motion compensation and other techniques to reduce temporal redundancy (known as interframe compression or temporal compression). Formats such as DV avoid interframe compression to allow easier non-linear editing. Today, nearly all video compression methods in common use (e.g., those in standards approved by the ITU-T or ISO) apply a discrete cosine transform (DCT) for spatial redundancy reduction. Other methods, such as fractal compression, matching pursuits, and the use of a discrete wavelet transform (DWT), have been the subject of some research, but are typically not used in practical products (except for the use of wavelet coding as still-image coders without motion compensation). Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods. The use of most video compression techniques (e.g., DCT- or DWT-based techniques) involves quantization. 
The quantization can either be scalar quantization or vector quantization; however, nearly all practical designs use scalar quantization because of its greater simplicity. In broadcast engineering, digital television (DVB, ATSC and ISDB) is made practical by video compression. TV stations can broadcast not only HDTV, but multiple virtual channels on the same physical channel as well. It also conserves precious bandwidth on the radio spectrum. Nearly all digital video broadcast today uses the MPEG-2 standard video compression format, although H.264 and VC-1 are emerging contenders in that domain. Video for Windows: Microsoft's system-level Windows software architecture that is similar to Apple Computer's QuickTime. Video-On-Demand (VOD): When video can be requested at any time and is available at the discretion of the end-user, it is then video-on-demand. Video Stream Data Hierarchy: The highest syntactic structure of the coded video bitstream is the video sequence.


(1) Video Sequence: Begins with a sequence header (which may be followed by additional sequence headers), includes one or more groups of pictures (GOP), and ends with an end-of-sequence code.
(2) Group of Pictures (GOP): A header and a series of one or more pictures intended to allow random access into the sequence.
(3) Picture: The primary coding unit of a video sequence. A picture consists of three rectangular matrices representing luminance (Y) and two chrominance (Cb and Cr) values. The Y matrix has an even number of rows and columns. The Cb and Cr matrices are one-half the size of the Y matrix in each direction (horizontal and vertical).

(4) Slice: One or more "contiguous" macroblocks. The order of the macroblocks within a slice is from left-to-right and top-to-bottom. Slices are important in the handling of errors. If the bitstream contains an error, the decoder can skip to the start of the next slice. Having more slices in the bitstream allows better error concealment, but uses bits that could otherwise be used to improve picture quality. (5) Macroblock

The basic coding unit in the MPEG algorithm. It is a 16x16 pixel segment in a frame. Since each chrominance component has one-half the vertical and horizontal resolution of the luminance component, a macroblock consists of four Y, one Cr, and one Cb block. (6) Block: The smallest coding unit in the MPEG algorithm. It consists of 8x8 pixels and can be one of three types: luminance (Y), red chrominance (Cr), or blue chrominance (Cb). The block is the basic unit in intra-frame coding. Viewing Angle: Measures a video display's maximum usable viewing range from the center of the screen, with 180 degrees being the theoretical maximum. Most often, the horizontal (side-to-side) viewing angle is listed, but sometimes both horizontal and vertical viewing angles are provided. For most home theaters, horizontal viewing angle is more critical. Virtual Container (VC): A signal designed for transport and switching of sub-SDH payloads. Virtual Channel: A virtual channel is the designation, usually a number, that is recognized by the user as the single entity that will provide access to an analog TV program or a set of one or more digital elementary streams. It is called virtual because its identification (name and number) may be defined independently from its physical location. Examples of virtual channels are: digital radio (audio only), a typical analog TV channel, a typical digital TV channel (composed of one audio and one video stream), multi-visual digital channels (composed of several video streams and one or more audio tracks), or a data broadcast channel (composed of one or more data streams). In the case of an analog TV channel, the virtual channel designation will link to a specific physical transmission channel. In the case of a digital TV channel, the virtual channel designation will link both to the physical transmission channel and to the particular video and audio streams within that physical transmission channel.
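The macroblock and block structure in the video stream hierarchy above can be counted directly for a given frame size. A minimal sketch (the 720x480 frame size is just an illustrative example, and frame dimensions are assumed to be multiples of 16):

```python
def mpeg_counts(width, height):
    """Count 4:2:0 macroblocks and 8x8 blocks for one frame.

    Assumes width and height are multiples of 16, as in common
    broadcast frame sizes such as 720x480.
    """
    macroblocks = (width // 16) * (height // 16)
    # Each 4:2:0 macroblock holds four 8x8 Y blocks, one Cb and one Cr block
    blocks = macroblocks * 6
    return macroblocks, blocks

print(mpeg_counts(720, 480))  # (1350, 8100)
```

For a 525-line broadcast frame this gives 45 x 30 = 1350 macroblocks, hence 8100 blocks per frame to be transformed and quantized.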
Virtual Channel Table (VCT): The ATSC table that describes a set of one or more channels or services. For each channel, the table indicates major and minor channel
number, short channel name, and information for navigation and tuning. There are two types of VCTs: the TVCT for terrestrial systems and the CVCT for cable systems. Virtual Private Network (VPN): A "virtual" network constructed by connecting computers together over the Internet and encrypting their communications so that other people cannot understand them. The benefit is that people can connect to a local LAN from anywhere on the Internet. This allows easier connectivity and lower phone bills for travelling salespeople. They just sign up with a national ISP and call local POPs from their hotels as they travel the country, easily connecting back to their company's local network. Virtual Reality (VR): A world that only exists in a computer, often experienced by looking through 3D goggles that detect which way you are looking and then display what should be there. Another form of virtual reality is a world created in your imagination by stories on the computer, such as a MUD. Someday the computer may be able to plug directly into your brain, giving even more life-like simulations of virtual worlds. Virtual Reality Modeling Language (VRML): An enhancement to the HTML format used to make virtual worlds out of Web pages. Its main uses so far include walk-throughs of real estate over the Web. Virus: A program that makes copies of itself on the same computer without the user's knowledge. Sometimes these copies are added onto executable files or system files, and other times they are part of Word or Excel documents, called macro viruses. The virus will usually have some eventual effect on systems that are infected, known as the payload. Often the intent of a virus is malicious. Sometimes the intent is not specifically malicious, but due to the spreading of the virus and its use of resources it becomes a malicious act as it causes problems for users.
Visitor: When a user arrives on a website, he or she is counted as one visitor regardless of how many pages he or she looks at. If a visitor returns to that website later in the day, he or she may be counted as another visitor by logging programs unless you are looking for "unique" visitors and the visitor is identified by cookies or IP addresses. Visual Basic Script (VBScript): A Microsoft scripting language that is embedded in many Microsoft applications. Although it allows for powerful interoperability and functionality, it also creates many security risks unless it is tightly controlled. Visual C++: A Microsoft product that is basically VB on steroids. It features a similar visual interface with drag-and-drop functionality, but the code is C++, which is much more robust than BASIC. It's also faster when compiled. Viterbi Algorithm: The Viterbi algorithm was originally conceived as an error-correction scheme for noisy digital communication links, finding universal application in decoding the convolutional codes used in CDMA and GSM digital cellular, digital broadcasting, satellite and deep-space communications, and 802.11 wireless LANs. The algorithm is not fully general; it makes a number of assumptions. First, both the observed events and hidden events must be in a sequence. This sequence often corresponds to time. Second, these two sequences need to be aligned, and an
observed event needs to correspond to exactly one hidden event. Third, computing the most likely hidden sequence up to a certain point t must depend only on the observed event at point t, and the most likely sequence at point t - 1. These assumptions are all satisfied in a first-order hidden Markov model. VMEbus: A backplane interconnection bus standard developed by Motorola and others, and now an IEEE standard. It has data bus sizes of 16, 32, or 64 bits, and VMEbus boards can hold CPUs or peripherals. VMS (Virtual Memory System): Designed in 1976, this is the operating system that ran on Digital Equipment Corp.'s VAX computers. Eventually DEC ported VMS to run on the Alpha processor and added POSIX functions into the operating system, renaming it OpenVMS. Voice over IP (VoIP): The practice of using an Internet connection to pass voice data using IP instead of using the standard public switched telephone network. This allows a remote worker to function as if he or she were directly connected to a PBX even while at home or in a remote office. As well, it skips standard long distance charges, as the only connection is through an ISP. VoIP is being used more and more to keep corporate telephone costs down, as you can simply run two network cables to a desk instead of separate network and data cables. VoIP runs right over your standard network infrastructure, but it also demands a very well-configured network to run smoothly. VoIP (Voice Over IP): The practice of using an Internet connection to pass voice data using IP instead of using the standard public switched telephone network. This allows a remote worker to function as if he or she were directly connected to a PBX even while at home or in a remote office. As well, it skips standard long distance charges, as the only connection is through an ISP.
VoIP is being used more and more to keep corporate telephone costs down, as you can simply run two network cables to a desk instead of separate network and data cables. VoIP runs right over your standard network infrastructure, but it also demands a very well-configured network to run smoothly. Volt: The standard unit of electric potential. It is defined as the amount of electrical potential between two points on a conductor carrying a current of one ampere while one watt of power is dissipated between the two points. Voltage: A measure of an amount of volts. Voltage-controlled oscillator (VCO): Frequency-generation component whose output frequency can be varied by changing the voltage to a control port on the device. Volume: A drive or set of drives seen as a single entity by a Novell NetWare file server. It is the highest level of the NetWare directory structure. Volumes must be mounted to be recognized by the NetWare operating system, and can be unmounted to be taken offline. VPN (Virtual Private Network): Virtual Private Network. This is a "virtual" network constructed by connecting computers together over the Internet and encrypting their
communications so that other people cannot understand the communications. The benefit is that people can connect to the LAN from anywhere on the Internet. This allows easier connectivity and lower phone bills for travelling salespeople. They just sign up with a national ISP and call local POPs from their hotels as they travel the country and easily connect back to their company's local network. Microsoft's answer to VPNs is PPTP. VRML (Virtual Reality Modeling Language): An ISO standard for 3-D multimedia and shared virtual worlds on the Internet. VSB (Vestigial Sideband): VSB is a digital amplitude modulation technique used to send data over a coaxial cable network. Used by Hybrid Networks for upstream digital transmissions, VSB is faster than the more commonly used QPSK, but it's also more susceptible to noise. VSAT (Very Small Aperture Terminal): A small (under 2-meter diameter dish) antenna and satellite receiver. VSWR (Voltage Standing Wave Ratio): The ratio of the maximum value of a standing wave to its minimum value; it is related to the return loss by the equation: RL = 20 log[(VSWR + 1)/(VSWR - 1)]. Thus a VSWR of 1.5:1 corresponds to a return loss of 20 log(5) = 13.98 dB. VSWR is a measure of the RF interface quality between adjacent RF circuits, which require adequate impedance matching for proper transfer of electrical energy at high frequencies.
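The VSWR-to-return-loss relationship above can be evaluated directly. A minimal sketch (the function name is illustrative):

```python
import math

def return_loss_db(vswr):
    """Return loss in dB for a given VSWR: RL = 20*log10((VSWR+1)/(VSWR-1))."""
    return 20 * math.log10((vswr + 1) / (vswr - 1))

print(round(return_loss_db(1.5), 2))  # 13.98
```

A perfect match (VSWR approaching 1:1) drives the return loss toward infinity, while a badly mismatched load (large VSWR) gives a return loss near 0 dB.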

-W-
W3C (World Wide Web Consortium): (www.w3.org) Created in October 1994 to lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability. W3C has more than 500 member organizations from around the world and has earned international recognition for its standards and contributions to the growth of the Web. Wafer: A flat, round piece of silicon that is used in the manufacture of microprocessors. Fabrication plants, or fabs, typically will take a wafer and carve many microprocessors into it. Over the years wafer size has been increasing so that more chips can fit on a single wafer, from 6" to 8" to 12" in diameter in some plants today. Wafers are created by taking a cylinder of silicon and slicing it, much like taking a loaf of bread and cutting slices, but to much more precise measurements. Wallpaper: An optional background graphic used in a graphical user interface. WAN (Wide Area Network): Any network that spans more than one location. Typically at least one of the locations is fairly remote. Compare this to a MAN that may encompass several closely located buildings, such as a college campus. WAP (Wireless Application Protocol): A proposed standard that allows for transfer of data securely between wireless devices, such as PDAs, cellphones, pagers, or other combinations of those devices. WAP supports many different wireless networks. Wander: The long-term variations of the significant instants of a digital signal from their ideal position in time (where long term implies that these variations are of frequency less than 10 Hz). Water Cooling: This type of cooling differs from air cooling in that water is used to remove heat from a heatsink that makes contact with a hot microprocessor or other device. Ultimately the water may actually exchange its heat with metal and air, or it could be cooled by a refrigeration unit.
Water cooling is more efficient than air cooling, especially if the water is refrigerated. Watt: The electrical unit of power, which is energy transferred over a unit of time. Often it is used to describe the amount of heat generated by a microprocessor. Wattage: A measure of an amount of watts. WAV (pronounced wave): The Windows-compatible audio file format. The WAV file can be recorded at 11 kHz, 22 kHz, and 44 kHz, and in 8- or 16-bit mono and stereo. Wavelength Division Multiplexing (WDM): WDM is a means of increasing the capacity of fibre-optic data transmission systems through the multiplexing of multiple wavelengths of light. WDM systems support the multiplexing of as many as four wavelengths. Wavelet-based compression: An asymmetrical image compression technique that is scalable and can provide high quality. The drawback is that it becomes more computationally expensive as the picture resolution and frame rates go up. The
encode and decode are asymmetrical in that one side is a lot more expensive computationally than the other. The ImMix Cube and TurboCube used wavelet-based compression. WCDMA (Wideband CDMA): A 3G standard that increases the throughput of data transmission of CDMA by using a wider 5 MHz carrier than standard CDMA, which uses a 200 kHz carrier. WCDMA allows for data transfer rates as high as 2 Mbps. Web: A synonym for the term "World Wide Web," often referred to as "The Web." Web Developer: Grown-up Webmasters are Web Developers. Generally, if you are a Web Developer you have a range of Web skills, from managing a webserver to coding HTML, CGI scripts, and even creating spiffy graphics as needed. Web Portal: A term coined to describe the large search engine sites, such as Yahoo! and Lycos, that have branched off to offer a wide variety of services. The idea is that a Web user would peer at the Web by using only one website: the portal. For example, you go to a portal to do searches, get stock quotes, buy things, etc. It would be your everything site. Each portal site wants to offer one of each type of service so that a user never has to leave the site. Website: This term describes a particular company's, user's, or organization's Web pages served up by a webserver. It may be split across multiple servers or URLs, but it is one group of HTML pages with a particular association. For example, the Geek.com website spans multiple servers and includes the domain names ChipGeek, PDAGeek, and Geek.com. WebTV: A leading manufacturer of set-top boxes used for viewing interactive television and regular television. These receivers let users access the Internet, including use of electronic mail and online chats. WebTV set-top boxes like the WebTV Plus Receiver connect to a standard television and a phone line. The WebTV Plus Receiver supports TV Crossover Links and WebPIP.
WebPIP lets users simultaneously view Web pages and TV programming on the same screen, without a special picture-in-picture TV. WebTV is a trademark and service of the Microsoft Corporation. Weighting: During video compression, it is the process by which degradation, or noise, is strategically placed in more detailed or complex picture areas where the viewer is least likely to notice it. White Noise: This is technically sound with a uniform frequency spectrum over a wide range of frequencies. Non-technically, white noise is any unobtrusive background noise, such as the hum of a fan, the sound of rain, or the sound of a channel that isn't coming in properly on an old television. Often white noise is imitated and generated to drown out other obtrusive sounds. White Paper: A complete description of a particular technology, from overview to the nitty-gritty details. It is produced by the company that created that technology, as opposed to a FAQ, which can be created by anyone.
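White noise of the kind described above is easy to synthesize. A minimal sketch using only the Python standard library; it writes Gaussian noise samples into an in-memory WAV file of the sort defined in the WAV entry (the 11025 Hz rate, amplitude, and function name are arbitrary illustrative choices):

```python
import io
import random
import struct
import wave

def white_noise_wav(n_samples, rate=11025, seed=0):
    """Return the bytes of a mono 16-bit WAV file filled with white noise."""
    rng = random.Random(seed)
    buf = io.BytesIO()
    with wave.open(buf, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        for _ in range(n_samples):
            # Gaussian noise, clipped to the valid sample range
            sample = max(-1.0, min(1.0, rng.gauss(0, 0.3)))
            w.writeframes(struct.pack("<h", int(sample * 32767)))
    # The wave module patches the RIFF header sizes when the file closes
    return buf.getvalue()

data = white_noise_wav(11025)  # roughly one second of noise
```

Because each sample is drawn independently, the spectrum of the result is flat on average, which is exactly the "uniform frequency spectrum" property the entry describes.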

Wide Area Network (WAN): Any network that spans more than one location. Typically at least one of the locations is fairly remote. Compare this to a MAN that may encompass several closely located buildings, such as a college campus. Wideband CDMA (W-CDMA): A 3G standard that increases the throughput of data transmission of CDMA by using a wider 5 MHz carrier than standard CDMA, which uses a 200 kHz carrier. WCDMA allows for data transfer rates as high as 2 Mbps. Widescreen: Term given to picture displays that have a wider aspect ratio than normal. For example, TV's normal aspect ratio is 4:3 and widescreen is 16:9. Although this is the aspect ratio used by HDTV, widescreen is also used with normal definition systems. Wide Screen Signalling (WSS): A wide screen signalling system using line 23 in 625-line systems, originally used for the European analog PAL-Plus system. The wide screen signalling information contains information on the aspect ratio range of the transmitted signal and its position, on the position of the subtitles, and on the camera/film mode. Furthermore, signalling for EDTV and for surround sound is included. The details of the standard are found in ETS 300 294. Wide SCSI: An improvement to the old narrow SCSI that allows for faster throughput by increasing the number of pins used to connect the drive to the controller from 50 to 68. Wide SCSI doubled the throughput of narrow versions of SCSI, but was generally more expensive due to the additional pins and marketing strategies. Of course you were getting twice the performance, so what's a few bucks? Window: 1. Video containing information or allowing information entry, keyed into the video monitor output for viewing on the monitor CRT. A window dub is a copy of a videotape with time code numbers keyed into the picture. 2. A video test signal consisting of a pulse and bar. When viewed on a monitor, the window signal produces a large white square in the center of the picture. 3.
A graphical user interface that presents icons and tools for manipulating a software application. Most applications have multiple windows that serve different purposes. Windows 2000: At first this operating system was called Windows NT 5, until Microsoft renamed it Windows 2000. Windows 2000 was mainly a 32-bit operating system using the NT code base, but 64-bit versions also came out for Intel's Itanium processors. Windows 2000 adds new functionality to Windows NT, such as support for USB and other new devices, built-in DirectX 7.0, and many other features. Windows CE: Microsoft Windows CE is a 32-bit real-time embedded operating system (RTOS) designed from the ground up to empower the development of a new range of emerging computing appliances, including set-top boxes, digital versatile disc (DVD) drives, entertainment consoles, smart phones, highly portable and personal computing devices like handheld computers, and home appliances. Windows CE is modular, allowing use of a minimum set of software components needed to support receiver requirements. This uses less memory and improves operating system performance. Windows CE provides a subset of the Win32 application program interface (API) set, which provides an effective amount of application source-code level portability and compatibility and user interface
consistency with other Microsoft Windows operating systems and Windows applications. Windows Me (Windows Millennium Edition): This Microsoft Windows operating system added many UI enhancements and clutter that actually made it less stable and more hated than Windows 95 and 98. No improvements to the core were made for stability or to enhance the combination 16- and 32-bit mess under the hood. This was Microsoft's last version of Windows containing pieces of 16-bit DOS guts, and good riddance. Windows Media Player: Delivers the most popular streaming and local audio and video formats, including ASF, WAV, AVI, MPEG, QuickTime, and more. Windows Media Player can play anything from low-bandwidth audio to full-screen video. Windows NT (Windows New Technology): A full 32-bit operating system developed by Microsoft to be a very stable operating system to be used on servers and business machines. It was developed from the ground up to be fully 32-bit without much worry about DOS compatibility. The core of Windows NT code has been updated for Windows 2000 and XP. Windows XP: The friendly-faced, updated version of Windows 2000, with an almost cartoonish interface that will surely be looked back upon with a wince. XP started to take real "advantage" of the Internet by including numerous hooks and links to Microsoft's website to improve various functionality, and that is part of what made this operating system so controversial. Also, at long last, it moved the Windows NT code base onto consumer machines, allowing home users to get the benefits of stability that Windows Me didn't offer. Many hold-outs prefer to stick with Windows 2000. Wireless Application Protocol (WAP): A proposed standard that allows for transfer of data securely between wireless devices, such as PDAs, cellphones, pagers, or other combinations of those devices. WAP supports many different wireless networks.
Wireless Local Area Network (Wireless LAN or WLAN): A short-range (nominally 1000 feet or less) computer-to-computer data communications network. Wireless Local Loop (WLL): This is simply a wireless connection of a telephone in a home or office to a fixed telephone network. Using industry technology, it is an implementation of subscriber loops to connect devices over a multiple access radio system to the Public Switched Telephone Network (PSTN). It differs from a standard local loop in that the connection to the telephone system is made wirelessly instead of with cables. Wireless Markup Language (WML, formerly HDML): Part of the Wireless Application Protocol (WAP), allowing text portions of Web content to be separated from graphical content for display on wireless devices. Wizard: An "enhancement" to programs that makes them easier to operate by guiding you through a process, step by step. Often wizards are scoffed at by experienced users, who prefer to do things the hard way.

WLL (Wireless Local Loop): This is simply a wireless connection of a telephone in a home or office to a fixed telephone network. Using industry technology, it is an implementation of subscriber loops to connect devices over a multiple access radio system to the Public Switched Telephone Network (PSTN). It differs from a standard local loop in that the connection to the telephone system is made wirelessly instead of with cables. WML (Wireless Markup Language): Part of the Wireless Application Protocol (WAP), allowing text portions of Web content to be separated from graphical content for display on wireless devices. Word: A group of bits of data regarded as a whole while programming or transferring data. Word length varies with the architecture; an 8-bit unit is also referred to as a byte. This is also the name of Microsoft's word processor. Word Size: The number of bits of data stored in a CPU register. Typically the number is a power of 2, with 8, 16, 32, and 64 being common. You have to deal with word size when doing certain data manipulations while programming. Workstation: A high-powered computer, one step below a minicomputer and a step above a microcomputer. The term often refers to fairly powerful dual-processor computers used to generate 3D images or manipulate 2D images or sound. Often workstations require very powerful graphics setups. World Wide Web (WWW or Web): This is basically a means of communicating text, graphics, and other multimedia objects over the Internet. Web servers on the Internet are set to respond to particular requests sent on TCP/IP port 80 by sending HTML documents to the requester. The requester usually uses a browser to receive this data. Think of the Internet as a 100-lane highway, and the Web as one of those lanes. Of course traffic in the Web lane is probably very high compared to traffic in most other lanes. World Wide Web Consortium (W3C): An industry group created to design and promote standards to increase the functionality of the Web.
The W3C was initially established in collaboration with CERN, the birthplace of the World Wide Web. WORM (Write Once/Read Many): Describes storage devices on which data, once written, cannot be erased or re-written. Being optical, WORMs offer very high recording densities and are removable, making them very useful for archiving. World Wide Web (WWW): Name of the total space for highly graphical and multimedia applications on the Internet. WYSIWYG (What You See Is What You Get): Usually, but not always. Refers to the accuracy of a screen display in showing how the final result will look. For example, a word processor screen showing the final layout and typeface that will appear from the printer.
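The word-size behaviour described in the Word and Word Size entries above can be demonstrated by masking values to a fixed number of bits, the way a CPU register does. A minimal sketch (the helper name is illustrative):

```python
def to_word(value, bits=16):
    """Truncate an integer to an unsigned word of the given size,
    as a fixed-width CPU register would."""
    return value & ((1 << bits) - 1)

print(to_word(0xFFFF + 1, 16))  # 0 -- a 16-bit word wraps around
print(to_word(-1, 8))           # 255 -- the two's-complement view of -1
```

This kind of explicit masking is exactly the "certain data manipulations" the entry alludes to: arithmetic that must match the wrap-around behaviour of an 8-, 16-, or 32-bit register.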

-X-
X.25: Interface between data terminal equipment (DTE) and data circuit-terminating equipment (DCE) for terminals operating in the packet mode on public data networks. X.25 predates the OSI model. X.75: Terminal and transit call control procedures and data-transfer system on international circuits between packet-switched data networks. xDSL: Digital Subscriber Line (of any type). XGA: This started out as IBM's term for a computer monitor resolution standard of 1024x768 pixels and 16-bit color. IBM released XGA monitors and graphics cards for its PS/2 computers, but unfortunately the standard was interlaced and ran on 14" monitors. Nowadays XGA just means 1024x768. Laptop makers often mention that their screens are XGA, meaning that they support that resolution. XHTML: The W3C has released XHTML as "a reformulation of HTML 4 in XML 1.0". This specification defines HTML as an XML application, and provides three DTDs corresponding to the ones defined by HTML 4.0. The semantics of the elements and their attributes are as defined in the W3C Recommendation for HTML 4.0. These semantics provide the foundation for future extensibility of XHTML. Compatibility with existing HTML user agents is possible by following a small set of guidelines. Xlet: Interface used for DVB-J application life cycle control (from MHP). XML (Extensible Markup Language): XML's strength is in its ability to be adapted to almost any conceived application of information storage or transfer. It allows groups of people or organizations to create their own customized markup applications for exchanging information in their domain (e.g. music, chemistry, electronics, hill-walking, finance, surfing, petroleum geology, linguistics, cooking, knitting, stellar cartography, history, engineering, etc.).
That is, XML is not just for Web pages: it can be used to store any kind of structured information, and to enclose or encapsulate information in order to pass it between different computing systems which would otherwise be unable to communicate. It is called extensible because it is not a fixed format like HTML (a single, predefined markup language). Instead, XML is actually a "metalanguage"--a language for describing other languages. This allows design of customized markup languages for limitless different types of documents. XML, in fact, is a lightweight cut-down version of SGML, which keeps enough functionality to make it useful but removes all the optional features that make SGML too complex to program for in a Web environment. W3C recommends XML as a "universal format for structured documents and data on the Web", formatted with field names in <angle brackets> and field values between the names, and defines 24 data types: ui1, ui2, ui4, i1, i2, i4, int, r4, r8, number, fixed.14.4, float, char, string, date, dateTime, dateTime.tz, time, time.tz,
boolean, bin.base64, bin.hex, uri, and uuid. (UPnP also uses XML.) XML is a text language and as such is verbose and can be easily read. To reduce storage or transmission bandwidth/download requirements, lossless compression is required. To protect data privacy/security, encryption is required. See http://www.w3.org/XML/. Xmodem: A protocol for transferring files during direct dial-up communications. Developed by Ward Christensen in 1977, Xmodem has basic error checking to ensure that information isn't lost or corrupted during transfer. It sends data in 128-byte blocks. Xmodem has undergone a couple of enhancements: Xmodem CRC uses a more reliable error-correction scheme, and Xmodem-1K transfers data faster by sending it in 1,024-byte blocks. See also Ymodem and Zmodem. XOR (Exclusive Or): An operation that can be executed on two or more binary strings. XOR returns true, or "1", if exactly one of the two strings contains a 1 at a particular bit position, and false, or "0", if both strings contain the same bit at that position. It is similar to the behavior of OR, but is false when both bits are 1 (thus "exclusive": one or the other, but not both). For example (0 XOR 0) = 0, (0 XOR 1) = 1, (1 XOR 0) = 1, (1 XOR 1) = 0. Thus: (0011 XOR 1001) = 1010. X Terminal: Terminal that allows a user simultaneous access to several different applications and resources in a multivendor environment through implementation of X Windows. See also X Window System. Yagi Antenna: A multiple-element parasitic antenna, originated by Yagi and Uda in Japan; a common VHF and UHF means of achieving high antenna gain in a compact physical size. It is a linear end-fire array consisting of a driven element, a reflector element, and one or more director elements. Ymodem: A protocol for transferring files during direct dial-up communications. It is so named because it builds on the earlier Xmodem protocol.
Ymodem sends data in 1,024-byte blocks and is consequently faster than Xmodem; however, it doesn't work well on noisy phone lines. Ymodem has undergone a few enhancements: Ymodem-Batch can send several files in one session, and Ymodem-G drops software error-correction, which speeds up the process by removing the overhead used by error-correction. This works because all modems since the V.32 era have error-correction built into the hardware. Yottabyte (YB): This is 2^80 bytes, or 1024 zettabytes. See also zettabyte. Y, Pb, Pr and Y, Cb, Cr and Y, U, V: Various standards organisations such as the SMPTE and ITU-R use the Y, Pb, Pr notation as the analog baseband representation of a colour television signal; if this is digitised, the digital representation is notated as Y, Cb, Cr. Y, Pb, Pr are analog signals used for equipment interconnection by three (3) separate leads. The Y lead also carries a composite of the horizontal and vertical
scan synchronisation signals. Y, Cb, Cr are digital signals and are usually present internally in equipment in digital processing, but when used for external interconnects, the three signals are streamed over a single lead in sequence, such as the 270 Mbps Serial Digital Interface used in professional equipment (see ITU-R BT.656, Interfaces for digital component video signals in 625-line television systems operating at the 4:2:2 level of ITU-R BT.601), and for HD as shown in ANSI/SMPTE 295M-1997, 1920x1080 50 Hz Scanning and Interfaces. Colour television requires three signals, which are initially captured, and ultimately displayed, as shades of Red, Green and Blue (RGB). For intermediate processes such as storage on tape, editing, switching, video compression and transmission, it is more convenient to mathematically matrix the RGB signals into Luminance (Y) and two colour-difference (R-Y and B-Y) signals. These have been chosen to match the physiology of the human eye to help mask noise or distortion effects introduced in editing and transmission. The Y part is the luminance, also known as the monochrome or black-and-white component. The Y signal always ranges between 0 (black) and 100% (white), with intermediate values being shades of grey. Nominally for standard definition (PAL), Y = 30% R + 59% G + 11% B. But for HDTV colorimetry, ITU-R BT.709 specifies nominally for high definition (1080-line) Y = 21% R + 72% G + 7% B, which is more suitable for certain types of display. The Pb or Cb component is proportional to the B-Y signal, and similarly Pr or Cr is proportional to the R-Y signal. These signals can have a negative value. For example, a yellow picture made from equal Red and Green is bright (so Y is large), but there is no Blue colour, so B-Y will be negative.
Reference to standards show the notation as R' G' B' or Y' Cb' Cr' the ' pedantically represents that the signal is non-linear as it has been modified to fit the transfer characteristic (gamma) of the display cathode ray tube (CRT). Y, U, V nomenclature is an outcome from the analog PAL TV world. The U and V are symbols for the 4.43MHz colour subcarrier signals carrying B-Y and R-Y information, respectively. When U and V are combined, this combined RF signal is present as a separate signal in an S-Video (Y/C) connection. If the combined U and V is added to baseband Y this becomes the widely used baseband composite CVBS PAL signal. Unfortunately the term Y, U, V is occasional used when Y, Pb, Pr is what is actually meant.
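As a rough illustration of the matrixing described above, the sketch below derives Y and the unscaled colour-difference values from normalized R'G'B', using the nominal coefficients quoted in the entry (ITU-R BT.601 weights for standard definition, BT.709 for high definition). The function name is invented for the example, and the scaling of B-Y and R-Y into the standard Pb/Pr or Cb/Cr ranges is deliberately omitted:

```python
def rgb_to_y_diff(r, g, b, hd=False):
    """Matrix normalized R'G'B' (each 0..1) into luminance Y and the
    unscaled colour-difference values (B-Y, R-Y).

    Coefficients follow the entry above: nominally 30/59/11 percent
    (BT.601, standard definition) or 21/72/7 percent (BT.709,
    1080-line high definition).
    """
    kr, kg, kb = (0.2126, 0.7152, 0.0722) if hd else (0.299, 0.587, 0.114)
    y = kr * r + kg * g + kb * b
    return y, b - y, r - y

# The yellow example from the entry: equal Red and Green, no Blue.
# Y comes out large (a bright picture) while B-Y comes out negative.
y, b_minus_y, r_minus_y = rgb_to_y_diff(1.0, 1.0, 0.0)
```

For the yellow example this gives Y = 0.299 + 0.587 = 0.886 and B-Y = 0 - 0.886 = -0.886, negative exactly as the entry describes.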


-Z-

ZAK (Zero Administration Kit): A software kit from Microsoft, originally for Windows NT, that can prevent various user actions, such as installing software or changing desktop configurations.

ZAW (Zero Administration Windows): A version of the Windows operating system from Microsoft that allows various easy administration and management options, such as storing user software and settings on the network so that people can log into any machine on a network and retain the same personal settings. Basically it turns your PC into an NC.

ZB (Zettabyte): 2^70 bytes, or 1,024 exabytes. See also exabyte.

Z-Buffering (or Hidden Surface Removal): Tracks the depth of each triangle in a 3D graphics scene, from the perspective of the viewer, to ensure that objects behind other objects do not appear until the viewer has them in his or her line of sight. This is why you cannot see through walls to what lies behind them, yet a multiplayer game such as Quake can still keep track of players at many different places in a level.

Zero Administration Kit (ZAK): See ZAK.

Zero Administration Windows (ZAW): See ZAW.

Zero-forcing (ZF): An equalizer concept in which a frequency response is corrected by processing the signal through the inverse of the channel response, thus forcing intersymbol interference to zero and, theoretically, removing dispersion impairment.

Zero Insertion Force socket (ZIF socket): A socket designed to accept a PGA chip, such as most common CPUs produced today. The ZIF socket allows you to plug in a PGA chip with no pressure required. This is a big advantage because older sockets required you to push the chip in with equal force on all sides or risk bending the pins. It's not fun to bend pins on the new processor you just bought, especially when some server chips cost well over US$1,000.

ZIF Socket: See Zero Insertion Force socket.

Zip: A file extension added to files created by the PKZip file compression program written by PKWare. Zip files are the most common compressed file format in existence and are supported by most compression and decompression programs, such as WinZip and WinRAR.

Zmodem: This protocol has many error-correcting properties. It can detect whether a bad block (of 1,024 bytes) has come through and ask to have it resent, and if the transfer is interrupted for any reason, it can resume from where it left off. It is slightly slower than Ymodem-G, but much more reliable.

Zone: 1) A collection of all terminals, gateways, and multipoint control units (MCUs) managed by a single gatekeeper. A zone includes at least one terminal and can include gateways or MCUs; it has only one gatekeeper. A zone can be independent of LAN topology and can comprise multiple LAN segments connected using routers or other devices. 2) In AppleTalk, a logical group of network devices. See also ZIP.

Zone (Zone of Authority): A DNS term. Each delegated domain name, such as Geek.com, is considered a zone, and subdomains such as server1.geek.com or www.geek.com are managed by their own DNS servers. Geek.com's Zone of Authority is all the subdomains under Geek.com.

Zone File: A list of DNS data elements, referred to as resource records, for a Zone of Authority.

Zone Transfer: The act of replicating a zone file from one DNS server to another.

Zoomed Video: A PC Card (PCMCIA) slot that is linked directly to the VGA controller. It allows easier implementation of MPEG video decoding and other video/TV features in laptops.

Zoom: A type of image scaling; zooming makes the picture larger so that you can see more detail. See also Scaling.
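To make the zero-forcing idea concrete, here is a small illustrative sketch (not taken from any standard): a discrete FIR channel is undone by running the received samples through its exact inverse filter, which drives the intersymbol interference to zero. The function name and the toy two-tap channel are invented for the example; note that a practical ZF equalizer also amplifies noise wherever the channel response is small.

```python
def zero_forcing_equalize(received, h):
    """Invert an FIR channel with taps h (h[0] != 0).

    The channel output is r[n] = sum_k h[k] * x[n-k]; solving for x[n]
    gives the inverse (IIR) filter applied below, which forces the
    intersymbol interference to zero, sample by sample.
    """
    out = []
    for n, r in enumerate(received):
        acc = r
        for k in range(1, len(h)):
            if n - k >= 0:
                acc -= h[k] * out[n - k]
        out.append(acc / h[0])
    return out

# Toy example: symbols smeared by a two-tap channel, then recovered.
symbols = [1.0, 0.0, -1.0, 2.0]
channel = [1.0, 0.5]              # each symbol leaks into the next one
received = [1.0, 0.5, -1.0, 1.5]  # convolution of symbols with channel
recovered = zero_forcing_equalize(received, channel)
```

Here `recovered` equals the original symbol sequence, since the inverse filter exactly cancels the dispersion introduced by the channel.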


Sources & References of this Glossary


1. http://www.digitaltelevision.com/dtvbook/glossaryf.shtml
2. http://www.dtv.gov/glossary.html
3. http://www.crutchfieldadvisor.com/ISEO-rgbtcspd/learningcenter/home/tv_glossary.html
4. http://www.pbs.org/digitaltv/glossary.htm
5. http://www.altera.com/end-markets/broadcast/glossary/bro-glossary.html#modulation
6. www.sevendigital.tv/res/pdf/glossary_v6p2.pdf
7. http://www.sonet.com/DOCS/gloss.htm (SONET Glossary)
8. http://www.fiber-optics.info/glossary-c.htm
9. http://www.geek.com/glossary/glossary_search.cgi?h
10. http://www.infoprovider.com/infobase/h.html
11. http://www.sss-mag.com/glossary/
12. ETSI EN 300 744 V1.5.1 (2004-11): DVB; Framing Structure, Channel Coding and Modulation for Digital Terrestrial Television
13. ETSI EN 302 304 V1.1.1 (2004-11): DVB; Transmission System for Handheld Terminals (DVB-H)
14. ETSI EN 300 468 V1.5.1 (2003-05): DVB; Specification for Service Information (SI) in DVB Systems
15. ETSI TR 101 211 V1.6.1 (2004-05): DVB; Guidelines on Implementation and Usage of Service Information (SI)
16. ITU-T H.222.0 (02/00): Information Technology - Generic Coding of Moving Pictures and Associated Audio Information: Systems
17. ITU-R Task Group 11/3: A Guide to Digital Terrestrial Television Broadcasting in the VHF/UHF Bands
18. ITU-T H.262 (02/2000): Audiovisual and Multimedia Systems - Infrastructure of Audiovisual Services - Coding of Moving Video
19. MPEG-2 Digital Broadcast Pocket Guide: Acterna, 20400 Observation Drive, Germantown, Maryland, USA
20. BBC R&D 1996/02: MPEG-2: Overview of the Systems Layer


Table 1: List of Main Standards for Digital Terrestrial Television Broadcasting


Standard Number: Title (Revision Date)

General
Rec. ITU-R BT.798-1: Digital Television Terrestrial Broadcasting in the VHF/UHF Bands (1992-1994)
Rec. ITU-R BT.1125: Basic Objectives for the Planning and Implementation of Digital Terrestrial Television Broadcasting Systems (1994)
ITU Document 11-3/3-E: A Guide to Digital Terrestrial Television Broadcasting in the VHF/UHF Bands (1996-1998)

System
Rec. ITU-R BT.1299: The Basic Elements of a Worldwide Common Family of Systems for Digital Terrestrial Television Broadcasting (1997)
ITU-R Report BT.2035: Guidelines and Techniques for the Evaluation of Digital Terrestrial Television Broadcasting Systems (draft modification, WP 6E, 2004-11-02)

Source Coding & Compression
Rec. ITU-R BT.1208-1: Video Coding for Digital Terrestrial Television Broadcasting (1995-1997)
Rec. ITU-T H.262: Information Technology - Generic Coding of Moving Pictures and Associated Audio Information: Video (MPEG-2) (2000)
Rec. ITU-R BS.1196-1: Audio Coding for Digital Terrestrial Television Broadcasting (1995-2001)
Rec. ITU-R BS.1115: Low Bit-rate Audio Coding (1994)
ISO/IEC IS 13818-3: Information Technology - Generic Coding of Moving Pictures and Associated Audio Information - Part 3: Audio (1994)
ISO/IEC IS 13818-7: Information Technology - Generic Coding of Moving Pictures and Associated Audio Information - Part 7: Advanced Audio Coding (1997)
Rec. ITU-T H.264: Advanced Video Coding for Generic Audiovisual Services (H.264/AVC) (2004)

Service Multiplex & Transport
Rec. ITU-R BT.1209-1: Service Multiplex Methods for Digital Terrestrial Television Broadcasting (1995-1997)
Rec. ITU-R BT.1300-2: Service Multiplex, Transport, and Identification Methods for Digital Terrestrial Television Broadcasting (1997, 2000, 2004)
Rec. ITU-R BT.1301: Data Services in Digital Terrestrial Television Broadcasting (1997)
Rec. ITU-R BT.1207-1: Data Access Methods for Digital Terrestrial Television Broadcasting (1995-1997)
Rec. ITU-T H.222.0: Information Technology - Generic Coding of Moving Pictures and Associated Audio Information: Systems (MPEG-2 Systems) (2000)
Rec. ITU-T H.222.1: Multimedia Multiplex and Synchronization for Audiovisual Communication in ATM Environments (1996)

Physical Layer
Rec. ITU-R BT.1206: Spectrum Shaping Limits for Digital Terrestrial Television Broadcasting (1995)
Rec. ITU-R BT.1306-1: Error-correction, Data Framing, Modulation and Emission Methods for Digital Terrestrial Television Broadcasting (1997, 2000)
Rec. ITU-R BT.1368-4: Planning Criteria for Digital Terrestrial Television Services in the VHF/UHF Bands (1998, 2000, 2002, 2004)
Rec. ITU-R SM.1682: Methods for Measurements on Digital Broadcasting Signals (draft, 2004)


Table 2: LIST OF TECHNICAL STANDARDS RELATED TO DIGITAL RADIO FOR DAB, DRM, ISDB-TSB, IBOC
Specification Number: Title (Revision Date)

Rec. ITU-R BS.774-2: Service Requirements for Digital Sound Broadcasting to Vehicular, Portable and Fixed Receivers Using Terrestrial Transmitters in the VHF/UHF Bands (1995)
Rec. ITU-R BS.1114-5: Systems for Terrestrial Digital Sound Broadcasting to Vehicular, Portable and Fixed Receivers in the Frequency Range 30-3 000 MHz (2004)
Rec. ITU-R BS.1348-1: Service Requirements for Digital Sound Broadcasting at Frequencies Below 30 MHz (2001)
Rec. ITU-R BS.1514-1: System for Digital Sound Broadcasting in the Broadcasting Bands Below 30 MHz (2002)
Rec. ITU-R BS.1615: Planning Parameters for Digital Sound Broadcasting at Frequencies Below 30 MHz (2003)
Rec. ITU-R BS.1660: Technical Basis for Planning of Terrestrial Digital Sound Broadcasting in the VHF Band (2003)
Final draft ETSI ES 201 980 V2.1.1: Digital Radio Mondiale (DRM); System Specification (2004)
ETS 300 401: Radio Broadcasting Systems; Digital Audio Broadcasting (DAB) to Mobile, Portable and Fixed Receivers (1997)
FCC 02-286: First Report and Order (for IBOC System) (2002)
iBiquity Digital Corporation: IBOC FM Transmission Specification (August 2001)
iBiquity Digital Corporation: IBOC AM Transmission Specification (August 2001)
ARIB: Narrow Band ISDB-T (ISDB-TSB) for Digital Terrestrial Sound Broadcasting (November 1999)


AS OF MAY, 2005

TABLE 3: LIST OF ETSI/DVB DIGITAL BROADCASTING SPECIFICATIONS


Reference No.: Title

COOKBOOK
DVB BlueBook A020 Rev. 1 (05/00): Digital Video Broadcasting (DVB); A Guideline for the Use of DVB Specifications and Standards (the "DVB Cookbook")

TRANSMISSION
EN 300 421 V1.1.2 (08/97): Framing structure, channel coding and modulation for 11/12 GHz satellite services
TR 101 198 V1.1.1 (09/97): Implementation of Binary Phase Shift Keying (BPSK) modulation in DVB satellite transmission systems
DVB BlueBook A003 Rev. 1 (05/95): User Requirements for Cable and Satellite delivery of DVB Services, including Comparison with Technical Specification
Draft EN 302 307 V1.1.1 (06/04): Second generation framing structure, channel coding and modulation systems for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications
DVB BlueBook A082 (07/04): Commercial Requirements: Advanced coding and modulation schemes for broadband satellite services
EN 300 429 V1.2.1 (04/98): Framing structure, channel coding and modulation for cable systems
EN 300 473 V1.1.2 (08/97): Satellite Master Antenna Television (SMATV) distribution systems
TS 101 964 V1.1.1 (08/01): Control Channel for SMATV/MATV distribution systems; Baseline Specification
TS 102 252 V1.1.1 (10/03): Guidelines for Implementation and Use of the Control Channel for SMATV/MATV distribution systems
Final Draft EN 300 744 V1.5.1 (06/04): Framing structure, channel coding and modulation for digital terrestrial television
TR 101 190 V1.2.1 (07/04): Implementation guidelines for DVB terrestrial services; Transmission aspects
DVB BlueBook A004 (12/94): User Requirements for Terrestrial Digital Broadcasting Services
TS 101 191 V1.4.1 (06/04): Mega-frame for Single Frequency Network (SFN) synchronization
EN 302 304 V1.1.1 (06/04): Transmission System for Handheld Terminals (DVB-H)
EN 300 748 V1.1.2 (08/97): Multipoint Video Distribution Systems (MVDS) at 10 GHz and above
EN 300 749 V1.1.2 (08/97): Framing structure, channel coding and modulation for MMDS systems below 10 GHz
EN 301 701 V1.1.1 (08/00): OFDM modulation for microwave digital terrestrial television


EN 301 210 V1.1.1 (02/99): Framing structure, channel coding and modulation for Digital Satellite News Gathering (DSNG) and other contribution applications by satellite
TR 101 221 V1.1.1 (03/99): User guideline for Digital Satellite News Gathering (DSNG) and other contribution applications by satellite
EN 301 222 V1.1.1 (07/99): Co-ordination channels associated with Digital Satellite News Gathering (DSNG)
DVB BlueBook A033 (03/99): DSNG Commercial User Requirements

MULTIPLEXING
EN 300 468 V1.5.1 (05/03): Specification for Service Information (SI) in DVB systems
TR 101 211 V1.6.1 (05/04): Guidelines on implementation and usage of Service Information (SI)
Draft TR 101 162 V1.2.1 (02/01): Allocation of Service Information (SI) codes for DVB systems
EN 300 472 V1.3.1 (05/03): Specification for conveying ITU-R System B Teletext in DVB bitstreams
EN 301 775 V1.2.1 (05/03): Specification for the carriage of Vertical Blanking Information (VBI) data in DVB bitstreams
Final Draft EN 301 192 V1.4.1 (06/04): DVB specification for data broadcasting
TR 101 202 V1.2.1 (01/03): Implementation guidelines for Data Broadcasting
TS 102 006 V1.3.1 (05/04): Specification for System Software Update in DVB Systems

SOURCE CODING
TS 101 154 V1.5.1 (05/04): Implementation guidelines for the use of Video and Audio Coding in Broadcasting Applications based on the MPEG-2 Transport Stream
TS 102 154 V1.2.1 (05/04): Implementation Guidelines for the use of MPEG-2 Systems, Video and Audio in Contribution Applications
DVB BlueBook A084 (07/04): Implementation Guidelines for the use of Audio-Visual Content in DVB services delivered over IP (Draft TS 102 005 V1.1.1)

SUBTITLING
EN 300 743 V1.2.1 (10/02): Subtitling systems

INTERACTIVITY
DVB BlueBook A008 (10/95): Commercial Requirements for Asymmetric Interactive Services Supporting Broadcast to the Home with Narrowband Return Channels
ETS 300 802 V1 (11/97): Network-independent protocols for DVB interactive services
TR 101 194 V1.1.1 (06/97): Guidelines for implementation and usage of the specification of network independent protocols for DVB interactive services
ES 200 800 V1.3.1 (11/01): DVB interaction channel for Cable TV distribution systems (CATV)
TR 101 196 V1.1.1 (12/97): Interaction channel for Cable TV distribution systems (CATV); Guidelines for the use of ETS 300 800
ETS 300 801 V1 (08/97): Interaction channel through Public Switched Telecommunications Network (PSTN)/Integrated Services Digital Networks (ISDN)
EN 301 193 V1.1.1 (07/98): Interaction channel through the Digital Enhanced Cordless Telecommunications (DECT)
EN 301 199 V1.2.1 (06/99): Interaction channel for Local Multipoint Distribution System (LMDS) distribution systems
TR 101 205 V1.1.2 (07/01): LMDS Base Station and User Terminal Implementation Guidelines for ETSI EN 301 199
EN 301 195 V1.1.1 (02/99): Interaction channel through the Global System for Mobile Communications (GSM)
TR 101 201 V1.1.1 (10/97): Interaction channel for Satellite Master Antenna TV (SMATV) distribution systems; Guidelines for versions based on satellite and coaxial sections
EN 301 790 V1.3.1 (03/03): Interaction channel for Satellite Distribution Systems
TR 101 790 V1.2.1 (01/03): Interaction channel for Satellite Distribution Systems; Guidelines for the use of EN 301 790
EN 301 958 V1.1.1 (03/02): Interaction channel for Digital Terrestrial Television (RCT) incorporating Multiple Access OFDM
DVB BlueBook A073r1 (07/04): Interaction channel through General Packet Radio System (GPRS)

MHP
ES 201 812 V1.1.1 (12/03): Multimedia Home Platform (MHP) Specification 1.0.3
tam527r31 (JavaDoc 1.0.3): JavaDoc for MHP 1.0.3
TS 102 812 V1.2.1 (06/03): Multimedia Home Platform (MHP) Specification 1.1
A066r1 (07/03): MHP Implementation Arrangements and associated agreements

INTERFACING
ETS 300 813 V1 (12/97): DVB Interfaces to Plesiochronous Digital Hierarchy (PDH) networks
ETS 300 814 V1 (03/98): DVB interfaces to Synchronous Digital Hierarchy (SDH) networks
TR 100 815 V1.1.1 (02/99): Guidelines for the handling of ATM signals in DVB systems


TS 101 224 V1.1.1 (07/98): Home Access Network (HAN) with an active Network Termination (NT)
TS 101 225 V1.1.1 (01/01): In-Home Digital Network (IHDN) Home Local Network (HLN)
DVB BlueBook A029 (05/97): User and Market Requirements for In-Home Digital Networks
EN 50221 V1 (02/97): Common Interface Specification for Conditional Access and other Digital Video Broadcasting Decoder Applications
R 206 001 V1 (03/97): Guidelines for implementation and use of the Common Interface for DVB Decoder Applications
TS 101 699 V1.1.1 (11/99): Extensions to the Common Interface Specification
EN 50083-9 V2 (06/98): Interfaces for CATV/SMATV headends and similar professional equipment for DVB/MPEG-2 transport streams
TR 101 891 V1.1.1 (02/01): Professional Interfaces: Guidelines for the implementation and usage of the DVB Asynchronous Serial Interface (ASI)
TS 102 201 V1.1.1 (03/99): Interfaces for DVB-IRDs
A016 Rev. 3 (07/04): Interfaces for DVB-IRD (Draft TS 102 201 V1.2.1)

INTERNET PROTOCOL
TR 102 033 V1.1.1 (04/02): Architectural Framework for the Delivery of DVB-Services over IP-based Networks
DVB BlueBook A086 (07/04): Transport of MPEG-2 based DVB Services over IP based networks (draft TS 102 034 V1.1.1)
A086 - dtd_xsd: XML description of DVB BlueBook A086
TS 102 813 V1.1.1 (11/02): IEEE 1394 Home Network Segment
TS 102 814 V1.2.1 (04/03): Ethernet Home Network Segment
DVB BlueBook A079 (04/04): DVB Interim Specification: IP Datacast Baseline Specification, PSI/SI Guidelines for IPDC DVB-T/H Systems
DVB BlueBook A080 (04/04): DVB Interim Specification: IP Datacast Baseline Specification; Specification of Interface I_MT

CONDITIONAL ACCESS
EN 50221: Common Interface Specification for Conditional Access and other Digital Video Broadcasting Decoder Applications
ETR 289 V1 (10/96): Support for use of scrambling and Conditional Access (CA) within digital broadcasting systems
DVB BlueBook A011r1 (06/96): DVB Common Scrambling Algorithm: Distribution Agreements


TS 101 197 V1.2.1 (02/02): DVB SimulCrypt; Part 1: Head-end architecture and synchronization
TS 103 197 V1.3.1 (01/03): Head-end implementation of SimulCrypt
DVB BlueBook A045r3 (07/04): Head-end implementation of SimulCrypt (draft TS 103 197 V1.4.1)
TR 102 035 V1.1.1 (02/04): Implementation Guidelines of the DVB Simulcrypt Standard

MEASUREMENT
TR 101 290 V1.2.1 (01/01): Measurement guidelines for DVB systems
TR 101 291 V1.1.1 (06/98): Usage of DVB test and measurement signalling channel (PID 0x001D) embedded in an MPEG-2 Transport Stream (TS)
TS 102 032 V1.1.1: SNMP MIB for test and measurement applications in DVB systems

POLICY STATEMENTS
DVB BlueBook A048 (04/98): DVB response to the Commission Green Paper on the convergence of the Telecommunications, Media and Information Technology sectors and the implications for regulation
DVB BlueBook A002 (11/94): A Fundamental Review of the Policy of European Technical Regulations Applied to the Emerging Digital Broadcasting Technology
DVB BlueBook A006 Rev. 1 (10/95): Recommendations of the European Project Digital Video Broadcasting - Anti-piracy legislation for Digital Video Broadcasting
DVB BlueBook A061 (10/00): Parental Control in a Converged Communications Environment - Self Regulation, Technical Devices and Meta-Information


TABLE 4: GUIDE TO ATSC STANDARDS FOR DIGITAL TELEVISION (DTV)


The ATSC Standard for Digital Television (DTV) encompasses a number of elements, documented in various Standards, Recommended Practices, and Implementation Guidelines. The document map on this web site includes all ATSC documents and selected documents from SMPTE, SCTE, ITU, and other organizations. As time permits, additional resources will be mapped to this service. Select any of the hyperlinks in the following table for additional information and a complete listing of appropriate references. This Web resource has been approved by the ATSC Implementation Subcommittee as an IS Informational Document (IS/254, 14 March 2002), with subsequent updates.

Function or Service: Primary Standard/Document Links
Audio Production: Digital Audio
DTV Audio Coding: A/52: Digital Audio Compression Standard (AC-3)
Video Production: Signals and Interfaces; Digital Video; Digital Recording Formats; Time and Control Code; Test and Measurement
DTV Video Coding: A/53: ATSC Digital Television Standard
DTV Transmission System: A/53: ATSC Digital Television Standard
Program and System Information: A/65: Program and System Information Protocol for Terrestrial Broadcast and Cable
Data Broadcasting: A/90: ATSC Data Broadcast Standard
Satellite Systems: A/80: Modulation and Coding Requirements for Digital TV (DTV) Applications; A/81: Direct-to-Home Satellite Broadcast Standard
Cable Distribution: SCTE Standards
Receivers: Receiver Specifications

Defining Key Terms

Normative reference: A reference that provides material critical to the practice of a standard. Normative references are an integral part of the standard; they can be thought of as carrying the same force as text contained in the standard itself.

Informative reference: A reference that provides helpful, often critical, information related to a standard, but that is not required in order to implement it. For example, a reference that merely aids the reader's understanding of ISO/IEC 13818-2, rather than defining required behaviour, would be an informative reference.

ATSC Glossary

The ATSC Glossary of Terms is available for download. This document is a compilation of acronyms and terms used in the ATSC standards.

Acquiring Standards Documents

The standards documents referenced on this service are available from the following sources:
ATSC Standards: Advanced Television Systems Committee (ATSC), 1750 K Street N.W., Suite 1200, Washington, D.C.
CableLabs Specifications: Cable Television Laboratories, Inc., 400 Centennial Parkway, Louisville, CO 80027-1266
EIA Standards: Electronics Industries Alliance (EIA), 2500 Wilson Boulevard, Arlington, VA 22201
IEEE Standards: Institute of Electrical and Electronics Engineers, Inc., 445 Hoes Lane, PO Box 1331, Piscataway, NJ
IETF Standards: Internet Engineering Task Force (IETF), c/o Corporation for National Research Initiatives
Global Engineering Documents: World Headquarters, 15 Inverness Way East, Englewood, CO
ISO Standards: ISO Central Secretariat, 1, rue de Varembe, Case postale 56, CH-1211 Geneve 20, Switzerland
SCTE Standards: Society of Cable Telecommunications Engineers Inc., 140 Philips Road, Exton, PA 19341
SMPTE Standards: Society of Motion Picture and Television Engineers, 595 W. Hartsdale Avenue, White Plains, NY


TABLE 5: LIST OF ARIB STANDARDS (WRITTEN IN ENGLISH) IN THE FIELD OF DIGITAL BROADCASTING (ASSOCIATION OF RADIO INDUSTRIES AND BUSINESSES IN JAPAN)

Standard Code: Standard Name (Version, Enactment or Revision Date)
STD-B31: Transmission System for Digital Terrestrial Television Broadcasting (Ver.1.5, July 29, 2003)
STD-B32: Video Coding, Audio Coding, and Multiplexing Specifications for Digital Broadcasting (Ver.1.3, February 5, 2004)
STD-B24: Data Coding and Transmission Specifications for Digital Broadcasting; Preface, Volume 1, Volume 2
STD-B10: Service Information for Digital Broadcasting System (Ver.3.8, June 1, 2004)
STD-B21: Receiver for Digital Broadcasting (Ver.4.2, June 1, 2004)
STD-B14: Digital Terrestrial Television Broadcasting; Volume 2, Volume 7, Volume 8 (Ver.1.6, May 17, 2005; Ver.3.2, June 2004)
