# August 2009 Semester 4 BT0046 – Communication Technology – 2 Credits (Book ID: B0025 & B0026) Assignment Set – 1 (30 Marks)

Answer all questions. 5 x 6 = 30

Book ID: B0025
1. What is bandwidth? What is the bandwidth of a) a telephone signal, b) commercial radio broadcasting, c) a TV signal?
2. Define and prove the sampling theorem using the frequency spectrum.

Book ID: B0026
3. Explain the concept of Path Clearance.
4. Explain Tropospheric Forward Scatter Systems.
5. Explain various light sources for Optical Fiber Communication.

1. What is bandwidth? What is the bandwidth of a) a telephone signal, b) commercial radio broadcasting, c) a TV signal?

Bandwidth is the span of frequencies within the spectrum occupied by a signal and used by that signal to convey information. Bandwidth is an important parameter in communication: it depends on the type of signal or application, the amount of information to be communicated, and the time in which the information must be communicated. To convey more information in a short time we need more bandwidth; the same quantity of information can be sent over a longer period using less bandwidth. Similarly, a voice signal needs a modest bandwidth, video requires much more, and so on.

Information transfer rate. The bit is the basic unit of information in a binary digital system. The speed at which information is transferred from one computer or terminal to another in a digital system is called the information transfer rate, or bit rate, and is measured in bits per second. For example, if 10 bits are transferred in 10 ms, the information transfer rate is 10 bits / 10 ms = 1000 bit/s, i.e. 1 kbps.

a) Telephone signal. A POTS line (in the US and Europe) has a bandwidth of about 3 kHz: it passes frequencies between roughly 300 Hz and 3.4 kHz. The frequency response is limited by the telephone transmission system (the actual wire from the central office to the wall can usually carry much more). The signal starts to attenuate below 300 Hz because of the AC coupling of audio signals (the audio passes through capacitors and transformers). The high-frequency response is limited by the transformers and by the available bandwidth of the transmission system; in the digital telephone network the audio is sampled at an 8 kHz sample rate. Nowadays POTS is sharply band-limited because the line is almost always digitally sampled at 8 kHz at some point in the circuit.
The absolute theoretical limit (with perfect filters) would therefore be 4 kHz, but in practice the maximum frequency is 3.4 kHz. The bass response is limited by the telephone system components: transformers and capacitors can be smaller if they do not have to handle the lowest frequencies. Another reason to drop the lowest frequencies is to keep possibly strong mains hum (50 or 60 Hz and its harmonics) out of the audio signal. Most current telephone systems are still restricted to the historically motivated band of 0.3 to 3.4 kHz. This bandwidth ensures sufficient intelligibility of speech, although speech quality suffers from the reduced band; artificial bandwidth-extension systems at the receiver have been proposed to reduce this quality loss.

b) Commercial radio broadcasting. An AM broadcast channel occupies about 10 kHz (audio limited to roughly 5 kHz) in the 535 to 1605 kHz band, while an FM broadcast channel occupies 200 kHz in the 88 to 108 MHz band.
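The information-transfer-rate arithmetic defined earlier (10 bits in 10 ms giving 1 kbps) can be sketched in a few lines; the function name is illustrative, not from the text:

```python
# Information transfer rate: bits transferred divided by the time taken.
def bit_rate(bits, seconds):
    """Return the information transfer rate in bits per second."""
    return bits / seconds

# Example from the text: 10 bits transferred in 10 ms -> 1000 bit/s = 1 kbps.
print(bit_rate(10, 10e-3))  # 1000.0
```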

c) TV signal. In the US, bands of frequencies in the radio spectrum are chopped into 6 MHz slices to accommodate TV channels, so each television channel has a bandwidth of 6 MHz: • 54 to 88 MHz for channels 2 to 6 • 174 to 216 MHz for channels 7 through 13 • 470 to 890 MHz for UHF channels 14 through 83. The ratio of actual to theoretical horizontal resolution is called the Kell factor, after the engineer who defined it; for a range of different line standards it takes values around 0.75, and the figures for the 625-line system calculated above correspond to a Kell factor of 0.746. The Kell factor is less than unity because the picture is effectively sampled in the vertical direction while the process is continuous horizontally. The maximum vertical spatial picture frequency is limited because one spatial cycle requires 2 picture lines (corresponding to the Nyquist cut-off), whereas in the horizontal direction the system can transmit frequencies above the nominal cut-off (5.5 MHz), albeit with amplitude falling as frequency increases. The required bandwidth in television and other image-scanning systems depends upon the rate of change of signal intensity along a line of the scanned image. The scanning rate in conventional systems is uniform, and the bandwidth then depends upon the maximum rate of change needed to achieve an acceptable picture quality. In broadcast television there is a high degree of correlation of the luminance signal from frame to frame; nevertheless, camera movement and rapid changes of scene can reduce the inter-frame correlation appreciably. For teleconferencing and video-telephone scenes, where the camera is stationary and the movement of subjects rather limited, only a small fraction of the video samples change appreciably from frame to frame. Consequently, there is less frame-to-frame correlation in average scenes transmitted in broadcast TV than in video-telephone or videoconference scenes.
Measurements have also indicated that typical variations of signal intensity along a single scan line tend to occur in bunches, with little variation over one interval followed by a jump in level to the next; during a typical interval, which usually exceeds 2% of the line duration, the signal intensity remains substantially unchanged. Television transmission of a single, fixed scene may be achieved using a slow scan rate, in which case the transmission bandwidth requirement is small. However, a slow-scan, narrow-bandwidth system is incapable of transmitting a changing scene without serious degradation of picture quality. The time required to transmit video signal information is inversely proportional to the rate of change of the signal intensity. Thus, various inventors have proposed transmission schemes in which slowly varying information would be transmitted at a rapid scanning rate while rapidly varying information would be transmitted at a slow rate. Several early attempts to implement TV systems incorporating variable-velocity scanning (VVS) produced disappointing results. These schemes were designed on the premise that the rate of change of the signal from a TV camera could be used to control its scanning velocity, thereby reducing the total bandwidth requirements. The bandwidth required to transmit the rate-of-change information, however, was greater than that of the TV camera output signal itself. Consequently, a greater bandwidth was actually required than would have been needed to transmit the TV signal using a uniform scanning velocity.

2. Define and prove the sampling theorem using the frequency spectrum.

The sampling theorem states: "Any signal which is continuous in time (analog) can be completely represented by its samples, and can be recovered from them, if the sampling frequency fs > 2 fm, where fs is the sampling frequency and fm is the maximum frequency present in the signal."

The definition of proper sampling is quite simple. Suppose you sample a continuous signal in some manner. If you can exactly reconstruct the analog signal from the samples, you must have done the sampling properly. Even if the sampled data appears confusing or incomplete, the key information has been captured if you can reverse the process. This result is frequently called the Shannon sampling theorem, or the Nyquist sampling theorem, after the authors of 1940s papers on the topic. The sampling theorem indicates that a continuous signal can be properly sampled only if it does not contain frequency components above one-half of the sampling rate. For instance, a sampling rate of 2,000 samples/second requires the analog signal to be composed of frequencies below 1,000 cycles/second. If frequencies above this limit are present in the signal, they will be aliased to frequencies between 0 and 1,000 cycles/second, combining with whatever information was legitimately there.

[Figure: amplitude-versus-frequency spectrum of the message signal m(t), band-limited to fm]

The figure shows the frequency spectrum of the signal m(t). A frequency spectrum is a plot of signal amplitude versus frequency. For sinusoidal signals the spectrum is a line spectrum, since each signal is represented by a line at the corresponding frequency whose height is the maximum amplitude of that signal. For a band-limited signal, by contrast, the spectrum is continuous, extending from zero up to the maximum frequency fm.

A continuous-time signal x(t) whose spectral content is limited to frequencies smaller than Fb (i.e., it is band-limited to Fb) can be recovered from its sampled version x(n) = x(nT) if the sampling rate Fs = 1/T satisfies Fs > 2Fb. It is also clear how such recovery may be obtained: by a linear reconstruction filter capable of eliminating the periodic images of the baseband introduced by the sampling operation. Ideally, such a filter applies no modification to frequency components below the Nyquist frequency, defined as FN = Fs/2, and eliminates the remaining frequency components completely. The reconstruction filter can be defined in the continuous-time domain by its impulse response, which is the sinc function

h(t) = sinc(t/T) = sin(πt/T) / (πt/T)

Figure 2: sinc function, impulse response of the ideal reconstruction filter.

Practically, the reconstruction of the continuous-time signal from the sampled signal is performed in two steps. First, the signal is converted from discrete to continuous time by holding it constant in the interval between two adjacent sampling instants; this is achieved by a device called a holder, and the cascade of a sampler and a holder constitutes a sample-and-hold device. Second, the resulting staircase waveform is smoothed by the low-pass reconstruction filter described above, which removes the spectral images above the Nyquist frequency.
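The aliasing behaviour described above, where components above fs/2 fold back into the baseband, can be illustrated with a small sketch; the helper name is my own:

```python
# Aliasing sketch: a tone at frequency f, sampled at rate fs, appears at the
# "alias" frequency folded into the band 0 .. fs/2.
def alias_frequency(f, fs):
    """Frequency (Hz) at which a tone of frequency f appears after sampling at fs."""
    return abs(f - fs * round(f / fs))

fs = 2000.0                         # samples/second, so the Nyquist limit is 1000 Hz
print(alias_frequency(700.0, fs))   # 700.0 -> below fs/2, sampled properly
print(alias_frequency(1300.0, fs))  # 700.0 -> above fs/2, aliased down to 700 Hz
```

Note how the legitimate 700 Hz tone and the improperly sampled 1300 Hz tone become indistinguishable after sampling, which is exactly why the input must be band-limited first.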

3. Explain the concept of Path Clearance.

Path clearance, an essential consideration in point-to-point communication systems, involves ensuring that there are no obstructions between the transmitting and receiving antennas or within the first Fresnel zone. In practice the direct ray path from the transmitting antenna to the receiving antenna has to pass above, below, or to the side of elevated structures such as buildings, trees and hills, which are capable of reflecting microwaves and thus producing a reflected wave. If these reflecting objects (including the ground) are not sufficiently removed from the direct ray path, the reflected wave tends to cancel the direct wave at the receiver. It is therefore necessary to ensure that adequate path clearance exists.

The practice of microwave communication path design prior to installation has in the past been based on empirical clearance criteria over surveyed elevation profiles between tower sites, or on actual path testing employing temporary towers with variable antenna heights. An unpublished Bell System practice entitled "Microwave Path-Testing" describes these techniques in detail. Path clearance surveys have employed topographic maps, altimetry, theodolites, optical flashing, low-altitude radar profiling, or high-altitude photogrammetry to determine path elevations for the calculation of static clearance criteria over obstructions. These survey methods each contain inherent limitations and potential hazards in portraying the actual path strip profile, and they all ignore the equally significant performance parameters of terrain reflectivity and atmospheric refractivity variations. Such sporadic elevation surveys and static clearance designs have generally been adequate for non-reflective paths or microwave routes carrying traffic that tolerates moderate fading. Propagation reliability has been improved when necessary by diversity transmission frequencies or by space-diversity reception with empirically separated antennas. Arbitrary diversity design affords limited protection against structured fades produced by multi-surface reflections, which vary with atmospheric refraction, but it is not a complete answer. The usual clearance calculation is based on the radius of the first Fresnel zone, F1 = sqrt(λ d1 d2 / (d1 + d2)), where λ is the wavelength and d1 and d2 are the distances from the two antennas to the point considered; a common criterion is to keep obstructions at least 0.6 F1 below the direct ray.
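As a rough sketch of the clearance calculation, assuming the standard first-Fresnel-zone formula F1 = sqrt(λ d1 d2 / (d1 + d2)) and the common 0.6 F1 clearance rule (the hop parameters below are illustrative, not from the text):

```python
import math

C = 3.0e8  # speed of light, m/s

def fresnel_radius(freq_hz, d1_m, d2_m):
    """Radius of the first Fresnel zone (m) at a point d1 metres from one
    antenna and d2 metres from the other, for a hop of length d1 + d2."""
    wavelength = C / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of an illustrative 40 km, 6 GHz hop:
r1 = fresnel_radius(6e9, 20e3, 20e3)
print(round(r1, 1))        # ~22.4 m  (first Fresnel zone radius)
print(round(0.6 * r1, 1))  # ~13.4 m  (common 0.6*F1 clearance criterion)
```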

4. Explain Tropospheric Forward Scatter Systems.

The troposphere is the lowest layer of the atmosphere; it refracts radio waves slightly, providing communication at distances somewhat beyond the visual line of sight. It also absorbs radiation at some frequencies. This type of propagation is known as tropospheric scatter propagation, also called "troposcatter" or "forward scatter propagation". It is responsible for the propagation of UHF signals beyond the horizon. The prime advantage of tropospheric forward scatter systems, compared with line-of-sight microwave systems, is that they provide reliable communication over distances of up to 1000 km or more without repeater stations. On the other hand, long-range tropospheric scatter systems require very large antennas and very high-power transmitters.

Characteristics of tropospheric forward scatter systems:

Here two directional antennas are pointed so that their beams intersect midway between them, above the horizon. Sufficient radio energy must be directed from the transmitting antenna (Tx) toward the receiving antenna (Rx) to obtain reliable communication, because only a small portion of the transmitted energy is scattered, and only a small fraction of that scattered energy reaches the receiver. There are two theories that explain troposcatter: scattering from turbulent irregularities in the refractive index of the troposphere, and partial reflection from stratified atmospheric layers.

The received signal is the sum of a number of signals reflected from local surfaces, and these signals sum in a constructive or destructive manner depending on their relative phase shifts. The phase relationships depend on the speed of motion, the transmission frequency, and the relative path lengths. To separate fast fading from slow fading, the envelope or magnitude of the received signal is averaged over a distance (e.g. 10 m); alternatively, a sliding window can be used.

Slow fading: slow fading is the result of shadowing by buildings, mountains, hills, and other objects. The average within individual small areas also varies from one small area to the next in an apparently random manner. The variation of the average is frequently described in terms of the average power in decibels (dB):

Ui = 10 log10(V^2(xi))

where V is the voltage amplitude and the subscript i denotes different small areas. For small areas at approximately the same distance from the base station (BS), the distribution observed for Ui about its mean value E{U} is found to be close to the Gaussian distribution

p(Ui − E{U}) = (1 / (√(2π) σSF)) exp(−(Ui − E{U})² / (2 σSF²))

where σSF is the standard deviation, or local variability, of the shadow fading.
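The dB transformation and the Gaussian density of the shadow-fading description above can be sketched as follows; the function names and the 8 dB standard deviation are illustrative:

```python
import math

def shadow_db(v):
    """Local-mean power in dB for voltage amplitude v: U = 10*log10(v^2)."""
    return 10.0 * math.log10(v * v)

def gaussian_pdf(x, sigma):
    """Gaussian density of the dB deviation x = Ui - E{U}, std. dev. sigma (dB)."""
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (math.sqrt(2.0 * math.pi) * sigma)

print(round(shadow_db(10.0), 1))        # 20.0 dB for v = 10
print(round(gaussian_pdf(0.0, 8.0), 4)) # peak of the density for sigma = 8 dB
```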

5. Explain various light sources for Optical Fiber Communication.

The practical light sources for optical fiber communication are the light-emitting diode (LED) and the semiconductor laser diode. Which source is used depends on the kind of fiber. In single-mode fiber, a highly concentrated laser beam travels straight down a very thin glass core; such a source is dangerous to view directly, because the fiber end can have a magnifying effect and the beam, which cannot be seen while on, can permanently scar the retina. In multimode fiber the light propagates by repeated reflection within a wider core, and the source, usually an LED, is far less intense, so looking at it is comparatively harmless. In general, LEDs are cheap, robust and long-lived but have a broad spectral width and limited modulation speed, whereas laser diodes offer higher launched power, a narrow spectral width and fast modulation, making them the choice for long-haul, high-bit-rate single-mode links.

One proposed multi-source arrangement (quoted from patent claims) is an optical fiber bundle comprising: a plurality of optical fiber light sources; a plurality of optical fibers bundled on both an input terminal side thereof and an output terminal side thereof, said optical fibers receiving light from said input terminal side and outputting said light to said output terminal side; and a connecting member provided for said optical fiber bundle; wherein said optical fibers are divided on said input terminal side, individually or into a plurality of groups in accordance with output terminal side positions of said optical fibers, said optical fiber bundle is arranged to adjust said light received from said input terminal side for each of said optical fibers or for each of said groups, and said optical fiber light sources are provided for each of said optical fibers or each of said groups.

Dependent claims add that each of said optical fibers, or each of said groups, includes a connecting member; that each fiber or group includes a light intensity adjusting member; and that said optical fibers may include an optical fiber for detecting said light on said output terminal side of said optical fiber bundle.

A light source device is likewise claimed, comprising: an optical fiber light source; and an optical fiber bundle for receiving light from said optical fiber light source on an input terminal side thereof and outputting said light to an output terminal side thereof, said optical fiber bundle including a plurality of optical fibers bundled in a desired shape on both said input terminal side and said output terminal side, wherein said optical fibers are divided on said input terminal side, individually or into a plurality of groups in accordance with output terminal side positions of said optical fibers, and said optical fiber bundle is arranged to adjust a light intensity of said light received from said optical fiber light source on said input terminal side for each of said optical fibers or for each of said groups. Further claims connect each fiber or group to said optical fiber light source through its connecting member, or through a light intensity adjusting member.

Finally, a method of manufacturing such a light source device is claimed, comprising the steps of: bundling a plurality of optical fibers to form an optical fiber bundle; irradiating light from an input terminal side of said optical fiber bundle; detecting a light intensity and light distribution pattern on an output terminal side of said optical fiber bundle; calculating a light intensity of an optical fiber light source for each of said optical fibers on the basis of the detection result, in order to obtain the desired output on said output terminal side; and connecting said optical fiber light source to said optical fiber bundle on the basis of the calculation result; with the optional further step of adjusting said optical fiber light source to make said light intensity distribution uniform.

August 2009 Semester 4 BT0046 – Communication Technology – 2 Credits (Book ID: B0025 & B0026) Assignment Set – 2 (30 Marks)
Answer all questions. 5 x 6 = 30

Book ID: B0025
1. Briefly explain different layers of digital communication.
2. Explain PCM with a suitable block diagram.
3. What are the different signaling formats? Explain with waveforms.

Book ID: B0026
4. Explain LOS Propagation on Flat Earth.
5. Write notes on Satellite Links.

1. Briefly explain different layers of digital communication.

Communication takes place at multiple levels, or layers. In communication between people there are rules to follow that ensure each person has a chance to speak, to interrupt, and to finish; this is called the protocol of communication. Communication systems also have protocols, which specifically define how the communication is to start, finish, and recover from problems (such as those due to noise or equipment failure), how the receiver is to indicate whether a message was received properly and without error, and what to do if an error is detected. The next level down is coding, which defines how the data will be represented and transmitted as specific signal values. One level below is the format, which is responsible for adding additional information about the message, such as who it is for, how long it is, and where it ends. The format also provides framing and additional information that helps the receiver determine whether the message, as received, contains any errors. At the lowest level are the specific voltages (or currents, or frequencies) used in modulation to represent the digital information. There are thus four layers of digital communication:

Protocol – In telecommunications, a communications protocol is the set of standard rules for data representation, signaling, authentication and error detection required to send information over a communications channel. An example of a simple communications protocol adapted to voice communication is a radio dispatcher talking to mobile stations. Communication protocols for digital computer networks have features intended to ensure reliable interchange of data over an imperfect communication channel. A protocol is basically a set of rules followed so that the system works properly.

Coding – In digital communications, a channel code is a broadly used term mostly referring to forward error correction coding and bit interleaving in communication and storage, where the communication or storage medium is viewed as a channel. The channel code is used to protect data sent over the channel so that it can be recovered even in the presence of noise (errors). Sometimes channel coding also refers to other physical-layer issues such as digital modulation, line coding, clock recovery, pulse shaping, channel equalization, bit synchronization, training sequences, etc. Channel coding is distinguished from source coding, i.e., the digitization of analog message signals and data compression.



Format – The format is responsible for adding additional information about the message, such as who it is for, how long it is, and where it ends. It also provides framing and additional information that helps the receiver determine whether the message, as received, contains any errors.

Modulation – Modulation is the process of transforming a message signal to make it easier to work with. It usually involves varying one waveform


in relation to another waveform. In telecommunications, modulation is used to convey a message; similarly, a musician may modulate the tone from a musical instrument by varying its volume, timing and pitch. In radio communications, for instance, electrical signals are best received when the transmitter and receiver are tuned to resonance, so keeping the frequency content of the message signal as close as possible to the resonant frequency of the two is ideal. Often a high-frequency sinusoidal waveform is used as a carrier signal to convey a lower-frequency signal. The three key parameters of a sine wave are its amplitude ("volume"), its phase ("timing") and its frequency ("pitch"), any of which can be modified in accordance with a low-frequency information signal to obtain the modulated signal. A device that performs modulation is known as a modulator, and a device that performs the inverse operation is known as a demodulator (sometimes detector or demod). A device that can do both operations is a modem (short for "modulator-demodulator").
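As a minimal illustration of varying one of those parameters, here is a sketch of amplitude modulation of a carrier by a low-frequency tone; the carrier and modulating frequencies are arbitrary, and the function name is my own:

```python
import math

def am_sample(t, fc, fm, m_index=0.5):
    """One sample of an amplitude-modulated carrier:
    s(t) = [1 + m*cos(2*pi*fm*t)] * cos(2*pi*fc*t),
    where m is the modulation index."""
    envelope = 1.0 + m_index * math.cos(2.0 * math.pi * fm * t)
    return envelope * math.cos(2.0 * math.pi * fc * t)

# At t = 0 both cosines are 1, so the waveform peaks at 1 + m = 1.5:
print(am_sample(0.0, fc=100e3, fm=1e3))  # 1.5
```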

[Block diagram: Protocol → Coding → Format → Modulation → to physical interface]

2. Explain PCM with a suitable block diagram.

Pulse code modulation (PCM) is a digital scheme for transmitting analog data. The signals in PCM are binary; that is, there are only two possible states, represented by logic 1 (high) and logic 0 (low). This is true no matter how complex the analog waveform happens to be. Using PCM, it is possible to digitize all forms of analog data, including full-motion video, voice, music, telemetry, and virtual reality (VR). Pulse-code modulation is a digital representation of an analog signal in which the magnitude of the signal is sampled regularly at uniform intervals and then quantized to a series of symbols in a numeric (usually binary) code. PCM has been widely used in digital telephone systems.

Consider a sine wave sampled and quantized for PCM. The sine wave is sampled at regular intervals; for each sample, one of the available quantization values is chosen by some algorithm (usually the floor function). This produces a fully discrete representation of the input signal that can easily be encoded as digital data for storage or manipulation; this is the modulation of the input signal. To produce output from the sampled data, the procedure is applied in reverse: after each sampling period has passed, the next value is read and the output signal is shifted to the new value. As a result of these transitions, the signal has a significant amount of high-frequency energy, so to smooth it and remove these undesirable aliasing frequencies the signal is passed through analog filters that suppress energy outside the expected frequency range.

To obtain PCM from an analog waveform at the source (transmitter end) of a communications circuit, the analog signal amplitude is sampled (measured) at regular time intervals. The sampling rate, or number of samples per second, is at least twice, and often several times, the maximum frequency of the analog waveform in cycles per second, or hertz. The instantaneous amplitude of the analog signal at each sampling instant is rounded off to the nearest of several specific, predetermined levels; this process is called quantization. The number of levels is always a power of 2, for example 8, 16, 32, or 64, which can be represented by three, four, five, or six binary digits (bits) respectively.

The output of a pulse code modulator is thus a series of binary numbers, each represented by n bits, where 2^n is the number of quantization levels. At the destination (receiver end) of the communications circuit, a pulse code demodulator converts the binary numbers back into pulses having the same quantum levels as those in the modulator. These pulses are further processed to restore the original analog waveform. As an aside, basic voice capacity can be derived by determining the maximum theoretical number of codec frames that can be transmitted per second: if a data frame can be transmitted in 439 microseconds, then about 2,277 frames can be transmitted per second; if the network uses 50 frames per second for each voice stream and a standard call requires 100 frames per second, the maximum theoretical capacity is about 22 calls.

Pulse code modulation is essentially analog-to-digital conversion of a special type, in which the information contained in the instantaneous samples of an analog signal is represented by digital words or codes in a serial bit stream. A simple PCM system is shown in the figure. The analog signal m(t) is sampled using a sample-and-hold circuit; each sample is held steady for the quantizer until the next sample arrives. A quantizer then quantizes the samples. Therefore the output

of the quantizer is one of the allowed levels, unlike the sampled output, which may take any voltage value within the upper and lower limits. The quantized voltage is then converted by the encoder into a uniquely identifiable binary code representing the quantized value.
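The sample-quantize-encode chain just described can be sketched as follows. The uniform quantizer, and the 10.8 + 20 log10(2^n) peak-signal-to-quantizing-noise relationship behind the familiar "about 35 dB for a 4-bit code" figure, are standard textbook forms; all names and the signal range are illustrative:

```python
import math

def pcm_encode(samples, n_bits, vmin=-1.0, vmax=1.0):
    """Quantize each sample to one of 2**n_bits uniform levels and return the
    level indices (the binary codes, as integers)."""
    levels = 2 ** n_bits
    step = (vmax - vmin) / levels
    codes = []
    for s in samples:
        idx = int((s - vmin) / step)           # floor, as described in the text
        codes.append(min(max(idx, 0), levels - 1))  # clamp to the allowed range
    return codes

def quantizing_snr_db(n_bits):
    """Peak-signal-to-quantizing-noise ratio: 10.8 + 20*log10(2**n) dB."""
    return 10.8 + 20.0 * math.log10(2 ** n_bits)

print(pcm_encode([-1.0, 0.0, 0.999], 4))  # [0, 8, 15]  -> 4-bit codes 0000, 1000, 1111
print(round(quantizing_snr_db(4), 1))     # ~34.9 dB, i.e. "about 35 dB" for 4 bits
```

Adding one bit to the code roughly adds 6 dB to the quantizing signal-to-noise ratio, which is why finer quantization reduces quantizing noise.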

The PCM demodulator will reproduce the correct standard amplitude represented by the pulse-code group, but only if it is able to correctly recognize the presence or absence of a pulse in each position. For this reason, noise introduces no error at all as long as the signal-to-noise ratio is such that the largest peaks of noise are not mistaken for pulses. When the noise is random (circuit and tube noise), the probability of a noise peak comparable in amplitude to the pulses can be determined mathematically for any ratio of signal power to average noise power. When this is done for 10^5 pulses per second, the approximate error rates for three values of signal power to average noise power are:

17 dB - 10 errors per second
20 dB - 1 error every 20 minutes
22 dB - 1 error every 2,000 hours

Above a signal-to-noise threshold of approximately 20 dB, virtually no errors occur. In all other systems of modulation, even with signal-to-noise ratios as high as 60 dB, the noise has some effect. Moreover, the PCM signal can be retransmitted, as in a multiple relay link system, as many times as desired without the introduction of additional noise effects; that is, noise is not cumulative at relay stations as it is with other modulation systems.

The system does, of course, have some distortion introduced by quantizing the signal: both the standard values selected and the sampling intervals tend to make the reconstructed wave depart from the original. This distortion, called quantizing noise, is introduced at the quantizing and coding modulator and remains fixed throughout the transmission and retransmission processes. Its magnitude can be reduced by making the standard quantizing levels closer together. The relationship of the quantizing noise to the number of digits in the binary code is given by the standard relationship

S/N (dB) = 10.8 + 20 log10(2^n)

where n is the number of digits in the binary code. Thus, with the 4-digit code of figures 2-50 and 2-51, the quantizing noise will be about 35 dB weaker than the peak signal which the channel will accommodate. The advantages of PCM are two-fold. First, noise interference is almost completely eliminated when the pulse signals exceed noise levels by 20 dB or more. Second, the signal may be received and retransmitted as many times as desired without introducing distortion into the signal.

3. What are the different signaling formats? Explain with waveforms.

In commercial telephony, along with the speech information, some additional information regarding the initiation and termination of the call, the address of the calling party, etc. also has to be transmitted; this is called signaling. When analog transmission is employed, the signaling information is communicated over a separate channel. In the T1 digital system a process of bit-slot sharing is used to convey the signaling information: in each group of six frames, the first five samples are encoded as eight-bit codes while the sixth is encoded as a seven-bit code, and its eighth bit (the least significant bit) is used for sending the signaling information. This pattern is repeated every six frames.
Thus in six frames the number of bits used for encoding the samples is 5 x 8 + 1 x 7 = 47, so the samples are encoded with an average of 47 / 6 = 7 5/6 bits each. The frequency of the bits used for signaling is 1/6th of the frame rate, that is, fb(T1)signaling = (1/6) x 8000 = 1,333 Hz. This type of signaling is called channel-associated signaling.

Vocoder. A vocoder (a combination of the words voice and encoder) is an analysis/synthesis system, mostly used for speech. In the encoder, the input is passed through a multiband filter, each band is passed through an envelope follower, and the control signals from the envelope followers are communicated to the decoder.

The decoder applies these (amplitude) control signals to corresponding filters in the (re)synthesizer. The vocoder was originally developed as a speech coder for telecommunications applications in the 1930s, the idea being to code speech for transmission. Its primary use in this fashion is for secure radio communication, where voice has to be encrypted and then transmitted; the advantage of this method of "encryption" is that no 'signal' is sent, but rather the envelopes of the bandpass filters. The receiving unit needs to be set up with the same channel configuration to re-synthesize a version of the original signal spectrum. The vocoder, as both hardware and software, has also been used extensively as an electronic musical instrument.

Digital speech coders can be classified into two categories: waveform coders and vocoders (voice coders). Waveform coders use algorithms to encode and decode so that the system output is an approximation of the input waveform; systems like PCM and DPCM are examples of waveform coders. The main advantage of waveform coders is the high quality of the reproduced signal, but they require relatively high bit rates. An alternative encoding scheme, which operates at significantly lower bit rates, is the vocoder. Typical vocoder bit rates are in the range of 1.2 to 2.4 kb/s (in contrast with the 64 kb/s bit rate of voice signals encoded as 8-bit PCM at 8,000 samples per second). Vocoders encode speech signals by extracting a set of parameters; these parameters are digitized and transmitted to the receiver, where they are used to set values for the parameters in function generators and filters which, in turn, synthesize the output speech sound. The people who developed vocoders studied the physiology of the vocal cords, the larynx, the throat, the mouth and the nasal passages, all of which have a bearing on speech generation. They also studied the physiology of the ear and the manner in which the brain interprets the sounds it hears.
Voice Model
Speech can be very well approximated as a sequence of voiced and unvoiced sounds passed through a filter. The voiced sounds are those generated by the vibrations of the vocal cords. The unvoiced sounds are those generated when a speaker pronounces such letters as "s", "f", "p", etc.; they are formed by expelling air through the lips and teeth.

[Block diagram of a generalized vocoder: an impulse generator (representing voiced sounds) and a noise source (representing unvoiced sounds) feed a switch whose position is determined by whether the sound is voiced or unvoiced; the selected excitation passes through a filter, representing the effect of the mouth, throat and nasal passages, to produce a synthesized waveform approximation to speech.]

A generalized representation of a vocoder is as shown above. The filter represents the effect on the generated sounds of the mouth, throat, and nasal passages of the speaker. An impulse generator simulates the voiced sounds, whose frequency is the fundamental frequency of vibration of the vocal cords. The unvoiced sounds are simulated by a noise source. All vocoders employ this scheme to generate a synthesized approximation to speech waveforms; they differ only in the techniques employed to generate the voiced and unvoiced sounds and in the characteristics and design of the filter.
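The source-filter voice model described above can be sketched in a few lines. This is an illustrative toy only: the impulse-train pitch, the one-pole low-pass filter and its coefficient are all assumptions standing in for a real vocal-tract filter design.

```python
import random

SAMPLE_RATE = 8000  # telephone-quality sampling rate

def excitation(voiced: bool, n: int, pitch_hz: float = 100.0):
    """Excitation source: an impulse train for voiced sounds,
    white noise for unvoiced sounds."""
    if voiced:
        period = int(SAMPLE_RATE / pitch_hz)
        return [1.0 if i % period == 0 else 0.0 for i in range(n)]
    return [random.uniform(-1.0, 1.0) for _ in range(n)]

def vocal_tract_filter(x, a: float = 0.9):
    """Toy stand-in for the mouth/throat/nasal-passage filter:
    a one-pole low-pass, y[i] = (1 - a) * x[i] + a * y[i-1]."""
    y, prev = [], 0.0
    for s in x:
        prev = (1.0 - a) * s + a * prev
        y.append(prev)
    return y

# The switch in the block diagram: select voiced or unvoiced excitation,
# then pass it through the same vocal-tract filter.
voiced_out = vocal_tract_filter(excitation(True, 400))
unvoiced_out = vocal_tract_filter(excitation(False, 400))
```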

4. Explain LOS Propagation on Flat Earth? Line-of-sight propagation refers to electromagnetic radiation or acoustic wave propagation traveling in a straight line. The rays or waves may be diffracted, refracted, reflected, or absorbed by the atmosphere and by obstructions, and generally cannot travel over the horizon or behind obstacles. Radio signals, like all electromagnetic radiation including light, travel in straight lines. At low frequencies (below approximately 2 MHz or so) these signals travel as ground waves, which follow the Earth's curvature due to diffraction. This enables AM radio signals in low-noise environments to be received well after the transmitting antenna has dropped below the horizon. Additionally, frequencies between approximately 1 and 30 MHz can be reflected by the F1/F2 layers of the ionosphere, giving radio transmissions in this range a potentially global reach (see shortwave radio), again along multiply deflected straight lines. The effects of multiple diffraction or reflection lead to macroscopically "quasi-curved" paths.

However, at higher frequencies and in the lower levels of the atmosphere, neither of these effects applies. Thus any obstruction between the transmitting antenna and the receiving antenna will block the signal, just like the light that the eye may sense. Since the ability to visually sight a transmitting antenna (within the limits of the eye's resolution) roughly corresponds with the ability to receive a signal from it, the propagation characteristic of high-frequency radio is called "line-of-sight". The farthest possible point of propagation is referred to as the "radio horizon". In practice, the propagation characteristics of these radio waves vary substantially depending on the exact frequency and the strength of the transmitted signal (a function of both the transmitter and the antenna characteristics). Broadcast FM radio, at comparatively low frequencies of around 100 MHz, easily propagates through buildings and forests.
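The radio horizon mentioned above can be estimated from antenna height. A common engineering approximation, assuming the standard 4/3-Earth-radius model that accounts for typical atmospheric refraction, is d ≈ 4.12·√h, with h in metres and d in kilometres:

```python
import math

def radio_horizon_km(antenna_height_m: float) -> float:
    """Approximate radio horizon distance using the 4/3-Earth-radius
    model: d ~= 4.12 * sqrt(h), h in metres, d in kilometres."""
    return 4.12 * math.sqrt(antenna_height_m)

def link_range_km(tx_height_m: float, rx_height_m: float) -> float:
    """Maximum line-of-sight link range: the sum of the two horizons."""
    return radio_horizon_km(tx_height_m) + radio_horizon_km(rx_height_m)

print(round(radio_horizon_km(100), 1))   # ~41.2 km for a 100 m mast
```

Doubling the range thus requires roughly quadrupling the antenna height, which is why high-frequency links rely on tall masts or elevated sites.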

5. Write notes on Satellite Links? A satellite link comprises two parts, the uplink and the downlink. First, consider the uplink. The earth station transmits a signal. This signal comes from the transmitter, which may be a solid state power amplifier (SSPA) or a travelling wave tube amplifier (TWTA). Most commonly, VSAT terminals have solid state power amplifiers mounted at the dish, as close to the feed as possible, to minimise waveguide attenuation losses. These dish-mounted units are often block up converters (BUC) or Transmit Receive Integrated Assemblies (TRIA), which change the frequency of the signals from L band (in the cross-site inter-facility link (IFL) cable) to the microwave frequency for transmission (C band, Ku band or Ka band). BUCs have a rated output power, such as 2 watts for single-carrier operation or 0.5 watts for multi-carrier operation. For ease of calculation the power in watts is converted to dBW using 10 x log10(power in watts), so a 2 watt BUC has a single-carrier output power capability of +3 dBW (2 watts) or, for multi-carrier operation, -6 dBW (0.25 watts) output power per carrier for each of two equal-power carriers. The output power of the BUC is fed to the dish, which concentrates the power in the direction of the satellite rather than allowing the power to be radiated evenly in all directions. This characteristic of the antenna is called gain, measured in dBi, which means gain relative to an isotropic, omnidirectional antenna. The combination of BUC power and satellite dish gain produces the equivalent isotropic radiated power (EIRP); for example, 2 watt BUC power + 40 dBi antenna gain produces 43 dBW EIRP. The transmit EIRP of the earth station may be achieved by a variety of combinations of BUC power and dish size. A large dish with low power
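The dBW and EIRP arithmetic above can be sketched directly (the helper names here are illustrative, not from any particular link-budget library):

```python
import math

def watts_to_dbw(p_watts: float) -> float:
    """Convert transmitter power in watts to dBW: 10 * log10(P)."""
    return 10.0 * math.log10(p_watts)

def eirp_dbw(buc_power_w: float, antenna_gain_dbi: float) -> float:
    """EIRP (dBW) = BUC output power (dBW) + antenna gain (dBi)."""
    return watts_to_dbw(buc_power_w) + antenna_gain_dbi

print(round(watts_to_dbw(2.0), 1))     # 3.0 dBW  (single-carrier 2 W BUC)
print(round(watts_to_dbw(0.25), 1))    # -6.0 dBW (0.25 W per carrier)
print(round(eirp_dbw(2.0, 40.0), 1))   # 43.0 dBW EIRP
```

Because dBW and dBi are both logarithmic, the power and gain contributions simply add, which is why link budgets are worked entirely in decibels.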