- Mahatma Gandhi
(While describing the power of radio, after his first and last live broadcast over AIR on 12 November 1947). The power of radio has further increased with the large-scale digitization taking place across the globe, and even more so in television. Broadcasting in India has seen phenomenal advances in both production and technology, resulting in innovative programming that takes advantage of the digital revolution coupled with mushrooming delivery modes, namely terrestrial, cable, satellite, IPTV etc. More than 400 channels are now available to consumers in India. The power has now shifted to the consumer, with interactivity the next big thing happening in DTH and cable.
The entertainment and media sector in India has been growing at a steady 19% per annum. Barring a minor correction due to the global slowdown, the trend is expected to continue. In a study conducted by PricewaterhouseCoopers, the TV industry in India is expected to grow from Rs 226 billion in 2007 to Rs 600 billion in 2012. During the same period, the radio industry is expected to grow from Rs 6.2 billion to Rs 18 billion. The exponential growth in broadcast channels has resulted in enormous job opportunities in this sector. A lack of skilled manpower is seen as one of the bottlenecks in the industry.
Prasar Bharati training centres are filling this void to a certain extent, and this course on broadcast technology is an attempt to train technical students in the vast area of broadcasting. The contents range from the analog basics of audio and video, clearly explaining the intricacies of signal generation, to modern trends in broadcasting such as DAB and DRM in radio and DVB-H in television, as the world moves to mobile. We hope this course, which includes an overview of all aspects of broadcasting such as production, post-production, transmission and archiving, will fulfil the needs of students and professionals who are new to this industry.
Editor
CHARACTERISTICS OF SOUND AND ACOUSTICS
B.GHOSH – ADE
RSTI (T), BBSR
Nature of Sound
Sound is a longitudinal wave motion consisting of a train of compressions and rarefactions travelling in a medium. When these waves strike the eardrum, they are converted into signals which are carried to the brain by the auditory nerves and are finally interpreted into what we call sound.
It has all the characteristics of a wave, as explained below:
1) Amplitude – Defined as the intensity of compression & rarefaction produced in a medium.
2) Frequency (f) – Defined as the number of successive compressions and rarefactions per second.
(Figure: pressure waveform showing the maximum compression, the zero-pressure line, the amplitude, and the time axis.)
1) Intelligibility – The person under test is made to speak syllables in random order, which are recorded and heard by a group of persons with normal hearing. The articulation efficiency should be about 90% for good speech quality. It is found that intelligibility is mostly contained in the high-frequency components (1.5 to 2.5 kHz) of the speech (Figure-3).
2) Energy – It is found that about 80% of the total energy is transmitted even though all frequencies above 1 kHz are suppressed. Similarly, suppressing all frequencies below 1 kHz reduces the energy transmitted to 15% (Figure-3). The energy in speech is contained mostly in the low frequencies.
(Figure-3: (a) Articulation efficiency as a function of upper cut-off frequency. (b) Articulation efficiency as a function of lower cut-off frequency. (c) Sound energy as a function of upper cut-off frequency. (d) Sound energy as a function of lower cut-off frequency.)
Based on these results 300-3400 Hz for telephone speech and 80-8000 Hz for
entertainment speech have been considered most adequate.
Reverberation Time (R/T): The time taken for the sound energy in a room to drop to 10⁻⁶ times (one millionth) of its initial value, i.e. 60 dB below its original value. Reproduced sound should include some proportion of reverberation.
(Figure-6: Growth & decay of sound in an enclosure.)
Some typical values of reverberation time (R/T) are:
1) Big concert hall: 2.0 second
2) Conference room: 0.5 second
3) Lecture hall: 0.3 second
4) TV studio: 0.5 second
5) Speech studio: 0.3 second
6) Music hall: 0.8 second
Factors affecting reverberation time:
R/T = 55.3 V / (c·a) …………(1)
Where, c = velocity of sound = 344 m/sec or 1120 ft/sec,
V = volume of the room,
a = total absorption.
Therefore, R/T = 0.049 V/a …………(2) (in FPS units), and
R/T = 0.161 V/a …………(3) (in MKS units).
The total absorption 'a' depends upon the area of each surface and its absorption coefficient, given by the equation:
a = Σ αS
or, a = α1S1 + α2S2 + … + αnSn …………(4)
Where α1, α2 … are the absorption coefficients of surface areas S1, S2 …, each defined as the ratio of the energy absorbed by a unit surface area to the total energy received by a unit surface area.
Some typical values of absorption coefficients (at 500 Hz) are:
1) Open window: 1.00
2) Carpet (1 cm thick): 0.25
3) Curtain: 0.15
4) Wooden chair: 0.17
5) Acoustic tiles: 0.55
6) Wooden door: 0.05
7) Wooden floor: 0.09
8) Glass panes: 0.25
9) Audience: 0.84
Based upon the above values of absorption coefficients and the known surface areas of each item present in the room, the total absorption, including the audience, can be calculated.
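The calculation described above can be sketched in Python. The room dimensions and surface areas here are hypothetical; the absorption coefficients are taken from the table, and the Sabine constants follow equations (1)-(3):

```python
def total_absorption(surfaces):
    """Equation (4): a = sum of (absorption coefficient x surface area)."""
    return sum(alpha * area for alpha, area in surfaces)

def reverberation_time(volume, absorption, metric=True):
    """Sabine's formula: R/T = 0.161 V/a in MKS units, 0.049 V/a in FPS units."""
    k = 0.161 if metric else 0.049
    return k * volume / absorption

# Hypothetical small studio, metric units (areas in m^2, volume in m^3):
surfaces = [(0.55, 60.0),   # acoustic tiles on the ceiling
            (0.25, 40.0),   # carpet on the floor
            (0.15, 30.0),   # curtains on the walls
            (0.84, 10.0)]   # audience
a = total_absorption(surfaces)        # 55.9 absorption units
rt = reverberation_time(200.0, a)     # ~0.58 s, close to a TV studio's 0.5 s
```
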
Acoustical Design of Studios & Auditoriums:
A TV camera outputs a video signal that is split into the three primary colours: red, green and blue (RGB). The entire colour spectrum can be represented by varying intensities of these three primary colours. The video signal can be converted into the following three signals:
(Block diagram: the camera R′ and B′ outputs are each fed to an adder together with −Y′ to form the colour-difference signals (R′ − Y′) and (B′ − Y′).)
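The matrixing above can be sketched directly, using the luminance weights quoted in the PAL notes that follow (Y = 0.3R + 0.59G + 0.11B):

```python
def rgb_to_colour_difference(r, g, b):
    """Matrix R, G, B (0..1) into luminance Y and the two
    colour-difference signals (R - Y) and (B - Y)."""
    y = 0.30 * r + 0.59 * g + 0.11 * b   # luminance weights from the text
    return y, r - y, b - y

# For pure white the colour-difference signals vanish and Y = 1:
y, r_minus_y, b_minus_y = rgb_to_colour_difference(1.0, 1.0, 1.0)
```
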
PAL Video
• 625 scan lines per frame, 25 frames per second (40 ms/frame)
• Aspect ratio 4:3
• PAL uses the Y, U, V colour model
• Luminance (Y) = 0.3R + 0.59G + 0.11B
• In the PAL system there is a single subcarrier, but we need two signals, (R–Y) and (B–Y), to modulate it independently.
• Both are of the same frequency but are displaced in phase by 90 degrees. Hence PAL uses quadrature amplitude modulation (QAM). The two modulated signals at 90 degrees to each other produce the resultant chrominance signal, which is added to the luminance signal to form the composite colour video signal (CCVS).
• The (R-Y) and (B-Y) chrominance signals may be recovered at the television
receiver by suitable synchronous demodulation.
• The sub-carrier is to be generated by a local oscillator.
• This generated sub-carrier in the receiver must have same phase and
frequency as that of transmitted sub-carrier.
• This is achieved by transmitting 10 cycles of sub-carrier frequency on the
back porch of H synchronizing pulse and is known as burst or colour burst.
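The quadrature modulation and synchronous demodulation described in these points can be sketched as below. The sine/cosine assignment of U and V and the averaging low-pass filter are illustrative simplifications:

```python
import math

FSC = 4.43361875e6  # PAL colour subcarrier frequency, Hz

def qam_modulate(u, v, t):
    """Two signals share one carrier, displaced in phase by 90 degrees."""
    w = 2 * math.pi * FSC
    return u * math.sin(w * t) + v * math.cos(w * t)

def qam_demodulate(chroma, times):
    """Synchronous demodulation: multiply by a locally regenerated
    subcarrier (sin for U, cos for V) and average, which acts as a
    low-pass filter.  The product halves the amplitude, so scale by 2."""
    w = 2 * math.pi * FSC
    n = len(times)
    u = 2 * sum(c * math.sin(w * t) for c, t in zip(chroma, times)) / n
    v = 2 * sum(c * math.cos(w * t) for c, t in zip(chroma, times)) / n
    return u, v

# Constant U, V sampled 64 times per subcarrier cycle over 100 cycles:
ts = [i / (FSC * 64) for i in range(64 * 100)]
chroma = [qam_modulate(0.3, -0.2, t) for t in ts]
u, v = qam_demodulate(chroma, ts)   # recovers the original 0.3 and -0.2
```

This also shows why the receiver's regenerated subcarrier must match the transmitted one in phase and frequency: demodulating with the wrong phase mixes U into V and vice versa.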
Broadcasting industry in India is fast turning digital because of the following advantages:
• Superior technical quality.
• Lower operating cost through the use of compression technology and improved system reliability.
• No regeneration loss.
• Easy signal storage and processing.
• Less susceptibility to interference and noise, leading to reduced power requirements for digital transmission networks.
• Use of extensive error correction techniques, leading to cheaper receiving equipment.
• Conditional access systems and easy encryption mechanisms.
• Identical method of handling audio, video and data.
• Interactive broadcasting services.
• Data broadcasting.
• More channels in a given bandwidth with extensive use of compression techniques.
• More programming choices, viewing convenience & new services: packaged information delivery, shopping, games, education, banking.
• Flexibility in processing.
• High spectrum efficiency, including greater possibilities for frequency re-use and the ability to support more programme transmissions per RF channel.
• Scalability, which means for example that the bit stream can be devoted to a single High Definition Television (HDTV) quality signal or multiple simultaneous Standard Definition Television (SDTV) feeds featuring independent or inter-related programming.
• Conversion to digital enables convergence of broadcasting, telecommunication and information technology.
• Increased performance diversity: the ability to provide multiple services in an existing single broadcasting service channel.
Digital Broadcasting Drawbacks
• Requires huge investments for Prasar Bharati to change the analogue transmitters and the equipment in the majority of the studios.
• New frequencies are required for digital broadcasting, as the coexistence of analogue and digital broadcasting for some time is a necessity before the complete changeover.
• Users need new receivers, each costing more than Rs. 5000/-.
• The uncompressed digital video data rate is very high, whereas compressed data, at very high compression ratios, is not very good for archiving.
Digital Video Signal: Digital video signals have been used for some time in television studios, based on the original CCIR Standard CCIR 601, designated ITU-R BT.601 today; this data signal is obtained as follows:
To start with, the video camera (Fig.3) supplies the analog Red, Green and Blue
(R, G, B) signals. These signals are matrixed in the camera to form luminance
(Y) and chrominance (colour difference CB and CR) signals. The luminance
bandwidth is then limited to 5.75 MHz using a low-pass filter. The two colour
difference signals are limited to 2.75 MHz, i.e. the colour resolution is clearly
reduced compared with the brightness resolution. In analog television (NTSC,
PAL, SECAM) too, the colour resolution is reduced to about 1.5 MHz. The low
pass filtered Y, CB and CR signals are then sampled and digitized by means of
analog/digital converters.
Digital video is distributed through SDI (Serial Digital Interface), a serialisation of the CCIR 601 signal, which has become the most widely used interface because a conventional 75-ohm BNC cable can be used. SDI carries uncompressed digital video at a data rate of 270 Mbit/s for 10-bit quantisation. The SDI waveform is symmetrical about ground and has an initial amplitude of 800 mV peak-to-peak across a 75-ohm load. Parallel connection of digital equipment is practical only for relatively small installations, and there is a clear need for transmission over a single coaxial cable. This is not simple, as the data rate is high, and if the signal were transmitted serially without modification, reliable recovery would be very difficult. The serial signal must be modified prior to transmission to ensure that there are sufficient edges for reliable clock recovery, to minimize the low-frequency content of the transmitted signal, and to spread the transmitted energy spectrum so that radio-frequency emission problems are minimized. The SDI signal can be fed through 75-ohm coaxial cable with BNC connectors.
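The 270 Mbit/s figure can be checked from the BT.601 4:2:2 sampling structure: luminance is sampled at 13.5 MHz and each colour-difference signal at 6.75 MHz (these sampling rates come from the standard; they are not stated above):

```python
def sdi_data_rate(bits_per_sample=10):
    """Serial data rate of a BT.601 4:2:2 signal."""
    y_rate = 13.5e6    # luminance samples per second
    c_rate = 6.75e6    # samples per second for each of CB and CR
    return (y_rate + 2 * c_rate) * bits_per_sample

rate = sdi_data_rate()   # 270e6 bit/s, i.e. 270 Mbit/s
```
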
Why use serial? In large facilities it is too difficult and expensive to route parallel signals; instead we need to be able to transmit over a single coax. It is also desirable to include audio data with the video, to save on cabling and special audio devices.
Analog audio signals are available in balanced mono or stereo form, while digital audio signals are available in the AES/EBU digital audio format, either as a discrete channel or embedded in serial digital video. AES/EBU is a standard defined by the Audio Engineering Society (AES) and the European Broadcasting Union (EBU). Digital audio is a stream of words carrying amplitude data; in other words, a sample stream, such as that on a CD, is strictly a series of numbers representing successive amplitude values of the waveform. Each AES stream carries two audio channels, which can be either a stereo pair or two independent feeds. The signal is a pulse-code-modulated data stream carrying digitized audio. Each sample is quantized to 20 or 24 bits, creating an audio sample word. Each word is formatted into a sub-frame, which is multiplexed with other sub-frames to form the AES digital stream. The sampling rates range from 32 to 50 kHz. Common rates and applications include the following:
1) 32 kHz – used for radio broadcast links.
2) 44.1 kHz – used for CD players.
3) 48 kHz – used for professional recording & production.
(AES/EBU sub-frame format, 32 bits: bits 0–3 carry the X, Y or Z preamble; V = validity bit, U = user data bit, C = channel status bit, P = parity bit.)
Two sub-frames make up a frame, which contains one sample from each of the two channels. Frames are further grouped into 192-frame blocks. The X and Y preambles indicate the channel identity of each sample; Z indicates the start of the next frame block. The final stream can be embedded into the blanking interval of SDI video.
Auxiliary Data Bits: When a 20 bit audio sample is used, the four least
significant bits (LSB) can be used for auxiliary data. One application for these
auxiliary data bits is for use as a voice-quality audio channel to provide a
talkback channel. Otherwise, these bits can be used to carry the 4 LSB of a 24
bit audio sample.
Audio Sample Data Bits: The audio sample data is placed between bits 4 and 27, with the MSB at bit 27, supporting a maximum sample of 24 bits. If not all 24 bits are used for an audio sample, the LSBs are set to "0". Typically within broadcast facilities a 20-bit audio sample is used. This allows for an auxiliary data channel within the 4 LSBs, bits 4–7.
User Data bit (U): The user data bits can be used to carry additional information
about the audio signal. Each U bit from the 192 sub-frames can be assembled
together to produce a total of 192 bits per block. The operator can use this
information for such purposes as copyright information.
Channel Status Bit (C): The Channel Status bit provides information on various
parameters associated with the audio signal. These parameters are gathered for
each C bit within the 192 sub-frames for each audio channel. The following table
shows the information carried within these bits.
Parity Bit (P): The parity bit is set such that the values of bits 4-31 form an even
parity (even number of ones) used as a simple means of error checking to detect an
error with a sub-frame. Note it is not necessary to include the preambles since they
already have even parity.
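The sub-frame layout and parity rule described above can be sketched as follows. This is a simplification that ignores the preamble bits (consistent with the note that they are excluded from the parity) and the bit-serial ordering:

```python
def make_subframe(sample_20bit, v=0, u=0, c=0):
    """Assemble a 32-bit AES/EBU sub-frame (preamble bits 0-3 left zero).
    A 20-bit audio sample occupies bits 8-27 with its MSB at bit 27,
    leaving bits 4-7 free for auxiliary data.  Bits 28-30 carry the
    V, U and C flags, and bit 31 is the parity bit, set so that
    bits 4-31 contain an even number of ones."""
    word = (sample_20bit & 0xFFFFF) << 8              # audio data in bits 8..27
    word |= (v & 1) << 28 | (u & 1) << 29 | (c & 1) << 30
    if bin(word >> 4).count("1") % 2:                 # odd ones in bits 4..31?
        word |= 1 << 31                               # set P to restore even parity
    return word

sf = make_subframe(0x12345)   # parity over bits 4-31 is now even
```
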
Digital audio is a serial data stream with no separate clock signal. In order to recover the data, the receiver must extract the clock from the data stream by means of a simple coding scheme known as bi-phase mark coding. A transition occurs every bit period, and when the data value is a "1" an additional transition occurs at half the bit period. This ensures easy clock extraction from the data and minimizes the DC component present within the signal. Since transitions, rather than levels, represent the data values, the signal is also polarity insensitive.
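The coding rule above can be sketched as a pair of routines, representing each bit period as two half-bit levels:

```python
def biphase_mark_encode(bits, start_level=0):
    """Bi-phase mark coding: a transition at every bit boundary, plus an
    extra mid-bit transition when the data bit is 1.  Returns two
    half-bit levels (0 or 1) per data bit."""
    level = start_level
    out = []
    for bit in bits:
        level ^= 1            # transition at the start of every bit period
        out.append(level)
        if bit:
            level ^= 1        # extra transition at half period for a '1'
        out.append(level)
    return out

def biphase_mark_decode(halves):
    """A '1' has differing levels in its two halves; a '0' has equal
    levels.  Only transitions matter, so polarity is irrelevant."""
    return [int(halves[i] != halves[i + 1]) for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
coded = biphase_mark_encode(data)
```

Inverting every level of `coded` decodes to the same data, which demonstrates the polarity insensitivity mentioned above.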
Clocking information is derived from the AES/EBU bit stream, and is thus controlled by the transmitter. The standard mandates use of 32 kHz, 44.1 kHz, or 48 kHz sample rates, but some interfaces can be made to work at other sample rates.
XLR I/O: Output – male pins with female shell; Input – female pins with male shell.
TYPES OF MODULATION:
1. Analog Modulation
PHASE MODULATION:
(Figure: a phase-modulated carrier wave.)
Normally the carrier has three parameters: amplitude, frequency & phase.
Concept of Bandwidth:
For a binary bit of duration Tb, the bandwidth of the signal is 1/Tb. (Figure-9-a, Figure-9-b: spectrum of a binary bit of duration Tb.)
A. Amplitude Shift Keying
(Block diagram: FSK generation — a unipolar NRZ signal drives one path directly and another through an inverter circuit; each path gates a carrier, (fc + f) or (fc − f), through a mixer and band-pass filter (BPF), and a summer combines the outputs A cos(ωc + ω)t and A cos(ωc − ω)t to produce the FSK signal.)
Dibit   Bipolar pair   QPSK output
0 1     (−1, 1)        −A cos ωct + A sin ωct
1 0     (1, −1)        +A cos ωct − A sin ωct
It is seen that the QPSK signal requires less bandwidth than a PSK signal, and hence finds wide application for conserving bandwidth in satellite as well as terrestrial communication.
MICROPHONES AND AMPLIFIERS
Introduction
A microphone is an acoustic-to-electric transducer or sensor that
converts sound into an electrical signal. A microphone may be passive or active.
The electrical power output of a passive microphone is derived solely from the
acoustic power it absorbs, while an active microphone controls an external source
of power.
Microphones are further classified according to their directional (polar) patterns:
i) Omni-directional
Omni-directional microphones are sensitive to sound from all directions. They are good for picking up the ambience and reverb of rooms and tend to sound very natural and open even when placed close to instruments. Omni microphones don't exhibit any proximity effect but obviously are not good when separation is needed.
ii) Figure of 8 (bi-directional)
Figure of 8 microphones pick up from the front and rear and have
null points to either side. They are good for recording two vocalists facing each
other or for recording something and still capturing the ambience of the room.
Figure of 8 microphones don’t exhibit any proximity effects.
iii) Cardioid
Cardioid microphones are directional and have a heart-shaped polar pattern. This means they pick up sound mainly from the front and are least sensitive to sound from the rear (the null point).
(Polar patterns: Cardioid — picks up from the front, rejects from behind. Hyper-cardioid — picks up from the front, more focussed than cardioid.)
(Figure: moving-coil microphone construction — a diaphragm on a suspension drives a coil in the field of a permanent magnet (N–S); the output leads are taken from the coil.)
Hence any change in the distance (d) due to variation in sound pressure varies the voltage (V). The two plates of the capacitor are the diaphragm (movable) & the back plate (fixed).
Characteristics
1) Sensitivity: Very low, hence a built-in amplifier is used to raise the output to 3 mV (50 dB below 1 V) at a sound pressure of 0.1 Pa (1 μbar).
2) Signal-to-noise ratio: High, about 40 dB.
3) Frequency response: Excellent, 40 Hz to 15,000 Hz.
4) Directivity: Basically omni-directional.
Applications: For professional hi-fidelity recording. Good for music purposes.
1) ELECTRET MICROPHONE: (Pressure Type)
(Polar pattern: Figure of 8 — picks up in front & behind.)
Characteristics
1) Sensitivity: About 3 μV (110 dB below 1 V) for a sound pressure level of 0.1 Pa.
2) Signal-to-noise ratio: High, about 50 dB.
3) Frequency response: Excellent, 20 Hz to 12,000 Hz.
4) Directivity: Bi-directional (figure of 8).
Applications: a) Suitable for dramas, due to its bi-directional property. b) Good for recording two vocalists facing each other.
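The decibel figures quoted in the microphone characteristics above (3 mV as "50 dB below 1 V", 3 μV as "110 dB below 1 V") follow from the standard voltage-level formula:

```python
import math

def level_db_re_1v(volts):
    """Output level in dB relative to 1 V: 20 * log10(V / 1 V)."""
    return 20 * math.log10(volts)

condenser = level_db_re_1v(3e-3)   # 3 mV output -> about -50 dB
figure_8  = level_db_re_1v(3e-6)   # 3 uV output -> about -110 dB
```
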
5) Wireless Microphones :
A Wireless microphone system consists of a microphone
connected to a miniature radio transmitter, and a receiver designed to receive
only that signal. Some are fixed tuned - that is, they use a quartz crystal for
determination of the operating channel. Most modern products are tuneable --
they add a frequency synthesizer circuit to allow multiple operating channels
from a single crystal. The output is designed for connection directly to the
microphone or line input of a mixer.
Wireless microphone transmitters are available in three basic packages:
1) Handheld transmitters –
These microphones have conventional microphone elements mounted to a handle into which a miniature radio transmitter is built. (Figure: antenna, microphone element, RF insulators, internal PC boards.)
2) Plug-on transmitters –
They have a female XLR connector attached to a compact body that contains the transmitter. Their internal battery provides phantom power to the microphone and power to the transmitter. A plug-on transmitter allows the use of virtually any microphone compatible with its powering circuit.
(Figure: plug-on transmitter — antenna, element, insulator, and microphone XLR connector.)
(Figure: signal flow in a UHF wireless microphone receiver — matching transformer; IF amplifier; second mixer driven by a second local oscillator (233.3 MHz); 10.7 MHz IF filter (10.6–10.8 MHz); detector; audio amplifier.)
The signals picked up by the antenna are sent through a broadband filter that attenuates signals far off frequency, are amplified, and are fed to the first mixer. A local oscillator, followed by a frequency multiplier stage, also feeds the mixer. On the heterodyne principle, the two signals "beat" together to produce new signals at the sum and difference of their original frequencies. The sum frequency is filtered out, and the difference signal, called the "intermediate frequency" or IF, is amplified and band-pass filtered again to remove more interfering signals. The operating frequency may be fixed for crystal-controlled receivers and tuneable for synthesised (PLL-controlled) receivers.
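The heterodyne arithmetic is simple enough to sketch. The 233.3 MHz local oscillator and 10.7 MHz IF come from the receiver described above; the 244.0 MHz input frequency is a hypothetical value chosen to match them:

```python
def mixer_products(f_signal, f_lo):
    """An ideal mixer 'beats' two frequencies together, producing new
    signals at their sum and difference; the difference is kept as the IF."""
    return f_signal + f_lo, abs(f_signal - f_lo)

# Hypothetical 244.0 MHz input mixed with the 233.3 MHz second LO:
f_sum, f_if = mixer_products(244.0e6, 233.3e6)   # difference = 10.7 MHz IF
```
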
Proximity Effect – Defined as the increase in bass with most unidirectional microphones when they are placed close to an instrument or vocalist (within 1 ft.). Remedies: (1) roll off low frequencies at the mixer, (2) use a microphone designed to minimize the proximity effect, (3) use a microphone with a bass roll-off switch, or (4) use an omni-directional microphone (which does not exhibit the proximity effect).
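Remedy (1), rolling off low frequencies, can be sketched as a simple first-order high-pass filter. The 48 kHz sample rate and 100 Hz cut-off are illustrative choices, not values from the text:

```python
import math

def bass_rolloff(samples, sample_rate=48000, cutoff_hz=100.0):
    """First-order high-pass (bass roll-off) filter,
    y[i] = alpha * (y[i-1] + x[i] - x[i-1]),
    attenuating the low frequencies boosted by the proximity effect."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out
```

A constant (DC, i.e. 0 Hz) input decays toward zero at the output, confirming that the lowest frequencies are removed.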
(Figure: the 3-to-1 rule — each microphone is about 1 ft from its vocalist, and Microphone-1 and Microphone-2 are at least 3 ft apart.)
Area Coverage
Application of choir microphones falls into the category known as "area" coverage. Rather than one microphone per sound source, the object is to pick up multiple sound sources (or a "large" sound source) with one (or more) microphone(s). Obviously, this introduces the possibility of interference effects unless certain basic principles (e.g. the "3-to-1 rule") are followed.
• Microphone placement for a typical choir should be a few feet in front of, and a few feet above, the heads of the first row.
• Microphones should be centred in front of the choir and aimed at the last row.
• Spacing between the microphones for each lateral section should be approximately 6 to 9 feet.
(Figure: choir area coverage — microphones placed 2–3 ft from the vocalists and spaced 3–6 ft apart.)
Introduction to Amplifiers
The gain of an amplifier is usually expressed in decibels: voltage gain = 20 log10(V2/V1), current gain = 20 log10(I2/I1), and power gain = 10 log10(P2/P1), where V2/V1, I2/I1 & P2/P1 are the ratios of output to input voltages, currents and powers respectively.
The typical gain of a voltage amplifier is about 60 dB and that of a power amplifier about 20 dB.
Level:
Lighting levels for television are generally set by adjusting the incident light, i.e. the light striking the subject. The unit of measure is the foot-candle, which is the amount of light produced by a standard candle at a distance of one foot. Lighting measurements are made using an incident light meter having a sensing element and a logarithmic scale calibrated in foot-candles. To measure the useful incident light for television, the meter is held near the subject and pointed toward the camera. The minimum acceptable level for colour television depends on the ability of the lens to transmit light to the camera, the sensitivity of the pickup tube or chip, and the amount of depth of field you need.
Contrast:
Contrast refers to the difference in brightness from the darkest parts of a
scene to the brightest. If there's too little contrast many receivers will produce a
flat, greyish picture. If there's too much contrast, details in the brightest and
darkest parts of the picture will be lost and the picture will look too harsh.
Colour-Temperature:
The third consideration is colour temperature. Every source of light has
a characteristic colour. This colour is related to its "temperature." Lower colour
temperatures tend to be red or orange while higher temperatures tend to be green
or blue. Colour temperatures are measured in degrees Kelvin. Some examples are
given below in the table.
Colour Temperature (K)   Source                        Colour
1950                     Candlelight                   Orange
2870                     Normal incandescent           Orange
3200                     Most photo or TV lights       Orange
3400                     Some photo lamps              Orange
3500–4000                Fluorescent lamps             Green
5500–6500                Midday sunlight / HMI lamp    Blue
The eye "remembers" how things are supposed to look and interprets
colour accordingly, regardless of the colour temperature of lighting sources. A
white sheet of paper seems white whether viewed under an incandescent lamp or
sunlight. The eye can even adjust for "correct colour" when two light sources of
different colours are present in the same scene. Sunlight streaming into a room which is also lit by incandescent lamps doesn't make the objects it strikes appear bluish.
Television cameras aren't so versatile. They must be set up to render
colour in a way that's pleasing to the eye. They can do this only if all of the
important lighting sources within a scene have the same colour temperature. A
combination of filters and electronic adjustments is used to adapt colour
cameras to each new lighting situation. Most cameras can adjust automatically
to typical colour temperatures. They cannot resolve conflicts when major
picture elements are lit at different colour temperatures.
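One simple, hypothetical form of the "electronic adjustments" mentioned above is white balancing against a reference white card: the red and blue channels are scaled so that the card produces equal R, G, B values. This sketch is illustrative and not a description of any particular camera's circuit:

```python
def white_balance_gains(r_white, g_white, b_white):
    """Gains that make a reference white surface read neutral,
    taking green as the reference channel."""
    return g_white / r_white, 1.0, g_white / b_white

def apply_gains(rgb, gains):
    return tuple(c * g for c, g in zip(rgb, gains))

# A white card shot under warm (orange-ish) light reads strong in red:
gains = white_balance_gains(0.9, 0.6, 0.4)
balanced = apply_gains((0.9, 0.6, 0.4), gains)   # each channel now ~0.6
```
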
Lighting Instruments
1) Spot light: These lights have a narrow beam that casts well-defined shadows. They are generally hard.
2) Broad light: A rectangular light that has a somewhat wider beam and casts softer shadows.
3) Flood light: A flood light throws a broad, even illumination in a circular pattern with diffuse shadows.
(Figures: components of a hard light — spot light; flood light.)
The intensity and beam spread of spots and some other lights may be
adjusted by moving the lamp forward or back in the lamp housing. When the
beam is narrow and intense the lamp is "spotted down." When the beam is wide
and more diffuse the lamp is "flooded out." Not all lamps have this adjustment.
Most lamps can be fitted with "barn doors," which are black metal flaps
fastened to the front of the lamp housing. These flaps are used to keep light from
falling where it's not wanted. Use of barn doors is most important on backlights,
which can cause objectionable lens flare if their light is allowed to strike the
camera lens directly.
Scrims are special disks of screen wire which can be used to soften lights and reduce their intensity slightly. Scrims can also be used in lamps which don't already have protective covers or lenses, to contain debris in the event the bulb explodes.
One man, One Camera (Three point & Four point Lighting)
The three lights used are called the key light, fill light and back light.
Naturally one will require three lights to utilise the technique fully.
The simplest type of lighting involves one camera shooting one subject.
The subject is placed in the setting far enough away from any walls or backdrops
to avoid casting shadows on the background near the subject. The camera is set
up placing the subject in front of the backdrop.
Back Light (SPOT): It is placed directly behind the subject, in line with the camera. Its main aim is to show the separation between the subject and the background. Because the television screen is a two-dimensional field, it is necessary to imply the third dimension with light. It is set at a forty-five degree angle from vertical. The back light is spotted down and aimed at the subject's neck. It is then flooded until it has about the same intensity as the key light. It should be adjusted to produce a crisp but delicate border around the subject.
Fill Light (FLOOD/SOFT): The fill light is the instrument used to soften the dark, well-defined shadow produced by the key light. It is added on the side of the camera opposite the key light, and it should be about half the intensity of the key and back lights. It should also be softer, producing no harsh shadows. Fill lights are also frequently scrimmed (a wire screen used to cut down the amount of light emanating from an instrument) to soften them and reduce their intensity.
Movement of subject
If the subject moves there are two ways of handling this problem
depending upon the movement.
Movement along a pre-determined path: In such situation
providing full key, back, and fill along the entire path is neither necessary nor
desirable. It is necessary only to provide about the same overall illumination along
the path of travel.
Optical Block-Makes the colour correction and splits the light into
three primary colours.
Camera Lens:
The purpose of the camera lens is to focus the optical energy at the face plate
of a pick up device i.e. to form an optical image.
Different focal lengths on camera lenses are required to get different compositions of pictures from a fixed location. A lens with a variable focal length is called a ZOOM LENS. A typical ENG/EFP camera has a variable-focal-length lens with the focal length varying between 9 mm & 108 mm. The zoom ratio then becomes 108/9 = 12:1. Its objective is to focus the image on the face plates of the camera tubes. The focal length can be varied either manually or by a servo motor to get different compositions of the picture focussed on the camera tubes. Wide-angle shots can be composed by using a shorter focal length, and long (narrow-angle) shots by using a longer one. Different compositions from the camera are thus possible by changing the focal length even though the camera is stationary.
The horizontal viewing angle of the lens is determined by the focal length & the size of the pick-up device. We have different lenses for different sizes of pick-up devices for a particular angle of coverage. The table below can give you an idea why a smaller studio should prefer a smaller pick-up device for wider angles/close-ups as compared to a larger pick-up device.
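The relationship between focal length, pick-up size and viewing angle can be sketched with the standard geometry, angle = 2·atan(w/2f). The 8.8 mm image width assumed here is the conventional width of a 2/3-inch pick-up device; the 9–108 mm focal lengths are the zoom range quoted above:

```python
import math

def horizontal_view_angle(sensor_width_mm, focal_length_mm):
    """Horizontal viewing angle in degrees: 2 * atan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# 12:1 ENG/EFP zoom on an (assumed) 8.8 mm wide 2/3-inch pick-up device:
wide = horizontal_view_angle(8.8, 9.0)     # ~52 degrees at the wide end
tele = horizontal_view_angle(8.8, 108.0)   # ~4.7 degrees at full telephoto
```

The same formula shows why a smaller pick-up device gives a narrower angle for the same lens, so a smaller studio needs shorter focal lengths for comparable coverage.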
Aperture:
This important parameter of a lens is also called the iris. The opening of the lens is controlled by collapsible fins inside the lens. This control, like ZOOM, can be either manual or automatic. Since the camera man has to control focus and zoom with his two hands, the third variable, the iris, is preferred in auto mode most of the time. It is expressed by the f-stop number.
Please note: the higher the f-stop number, the smaller the lens opening. The lowest f-stop number, indicating maximum exposure, is known as the speed of the lens and is usually a number which does not fit the f-stop series marked on the lens aperture ring (1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22).
Here LS is the lens speed, and its typical values can be 1.4 or 1.7 etc.
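The f-stop series itself is not arbitrary: each marked stop multiplies the f-number by √2, which halves the light admitted (light admitted varies as 1/f²). A sketch:

```python
import math

def f_stop_series(n_stops=9):
    """Successive powers of sqrt(2), rounded to one decimal.  These are
    conventionally marked on the ring as 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22."""
    return [round(math.sqrt(2) ** i, 1) for i in range(1, n_stops + 1)]

stops = f_stop_series()
```

Each step down the series (e.g. f/4 to f/5.6) halves the exposure, since (1/5.6²)/(1/4²) ≈ 1/2.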
Optical Block :
Optical assembly is located inside the camera head and has :
1. Colour filters wheel
2. Prism & Dichroic Mirrors
3. Bias light and a suitable lens mount
The lens used for a video camera depends on the size of the pick-up device. Video cameras use 1-inch, 2/3-inch or ½-inch pick-up devices. Lenses meant for ½-inch devices can be used only with cameras having a ½-inch device and not with any other camera.
TYPES OF PICK-UP-DEVICES :
CAMERA ELECTRONICS :
A block diagram of a typical three-tube camera chain is shown in fig. 3. The tube power supply section provides all the voltages required for the various grids of the electron gun. The horizontal and vertical deflection section supplies the sawtooth current to the deflection coils for scanning the positive charge image formed on the target. The built-in sync pulse generator provides all the pulses required for the encoder and the colour bar generator of the camera.
The signal system in most cameras consists of processing the signals from the red, blue and green tubes. Some cameras use white, blue and red tubes instead of the R, G, B system. The processing of the red and blue channels is exactly similar. The green channel, which is also called the reference channel, has slightly different electronics concerning aperture correction. So if we understand one particular channel, the other channels can be followed easily. The signal picked up from the target is amplified at the target itself in a stage called the pre-pre-amplifier. It is then passed to a pre-amplifier board with a provision to insert an external test signal, which most cameras also provide. All three signals R, G and B are then fed to the encoder section of the camera via a colour bar/camera switch. This switch can select R, G and B from the camera or the R, G, B signals from the colour bar generator. In the encoder section these R, G, B signals are modulated with the subcarrier to get the V and U signals. These signals are then mixed with luminance, sync, burst & blanking etc. to provide the colour composite video signal (CCVS signal). The power supply board provides regulated voltages to the various sections.
DIGITAL PROCESSING :
The individual R, G and B signals from the process amplifier pass through
pre-gamma correction and are then sampled at a frequency of 14 MHz (for the
Ikegami HL-57 the exact sampling frequency is 14.3 MHz). The sampled video is
converted to digital video with a 10-bit analog-to-digital converter. After the
10-bit A/D conversion the digital signal passes through several stages of
correction, i.e. gamma (alteration of the transfer function in the camera for the
purpose of complementing the transfer function of the display device, i.e. the TV
receiver), auto white balance (adjustment of the processing circuit ensuring that
the proper mix of colour signals is passed to provide white light at a given colour
temperature), black balance etc. Detail correction of the signal is done in the
green channel at the third stage, and the video matrix chip operates at a clock
frequency of double the sampling frequency, i.e. about 29 MHz. The 10-bit output
signal is then fed to the encoder, in which the R, G and B signals pass through an
LPF circuit for elimination of the 29 MHz clock component and are fed to the
matrix for conversion to component video (luminance and chrominance), i.e. Y,
R-Y and B-Y. The Y signal then passes through the circuit for black stage
correction, and the CCVS signal is obtained after addition of the sync pulses.
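The 10-bit A/D step above maps each analog sample onto one of 1024 levels. A minimal sketch of that quantization (the function name and the normalized 0-to-1 input range are assumptions for illustration, not the actual camera circuit):

```python
def adc_10bit(v):
    """Quantize a normalized analog level (0.0-1.0) to a 10-bit code.

    A 10-bit converter distinguishes 2**10 = 1024 levels (codes 0-1023).
    Input outside the range is clipped, as a real ADC saturates.
    """
    clipped = min(max(v, 0.0), 1.0)
    return round(clipped * 1023)
```

Ten bits give roughly 60 dB of quantization signal-to-noise ratio, which is why broadcast cameras of this class digitize at 10 bits rather than 8.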
Gamma Correction :
The overall transfer characteristic of a television system relates
brightness levels in the final displayed image to the brightness levels in the
televised scene.
[Figure: output (displayed brightness) versus input (scene brightness) curves for gamma > 1, gamma = 1 and gamma < 1, from black to white.]
If Gamma is less than unity whites are compressed (crushed) and blacks
are expanded (stretched). If Gamma is more than unity whites are stretched and
blacks are crushed.
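The complementary pre-correction described above can be put into numbers; the exponent 1/2.2 below is an assumed typical figure for CRT displays, not a value from this text:

```python
def gamma_correct(v, gamma=1 / 2.2):
    """Apply gamma pre-correction to a normalized (0-1) video level.

    With gamma < 1 every mid-tone is lifted: blacks are stretched and
    whites are compressed, complementing the display's gamma of ~2.2
    so the overall system transfer characteristic is close to unity.
    """
    return v ** gamma

# A dark scene level of 0.1 is lifted to about 0.35 before transmission;
# the receiver's ~2.2 display gamma brings it back down to ~0.1.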
Magnetic Principle :
II) A current-carrying conductor wound into a coil acts like a bar magnet.
III) When a current-carrying coil is bent to form a ring, the inner field remains
homogeneous but the outer field vanishes, i.e. the field lines inside are
able to close.
Fig. 1 shows a ferromagnetic material inserted inside the ring with a narrow air
gap, causing a flux bubble because of the magnetic potential difference across the gap.
Magnetic Tape
Introduction :
Octaves = log2 (1000/62.5) = log2 16 = log2 2^4 = 4,
i.e. 1 kHz is the 16th harmonic of 62.5 Hz.
Recording Process :
When the tape moves over the magnetic flux bubble, the
electric signal in the coil causes the magnetic lines of force from the head gap to
pass through the magnetic material of the tape, producing small magnets whose
strength depends on the strength of the current. The polarity of the field which
creates these bar magnets depends on the direction of change of the current: a
decreasing current will produce an N-S magnet and vice versa. The strength of
these magnets follows the B-H curve. Thus the magnetic flux aligns the
unarranged magnetic particles according to the signal, and they stay in that
condition after the tape has passed the magnetic head. The length of the magnet
thus formed is directly proportional to the writing speed v of the head and
inversely proportional to the frequency f of the signal to be recorded, i.e. the
recorded wavelength is λ = v/f.
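The relation λ = v/f can be checked with numbers; the 0.38 m/s tape speed and 15 kHz tone below are illustrative values, not figures from this text:

```python
def recorded_wavelength(writing_speed, frequency):
    """Length of one recorded cycle on the tape: lambda = v / f (metres)."""
    return writing_speed / frequency

# A 15 kHz tone at a 0.38 m/s head-to-tape speed occupies only about
# 25 micrometres of tape per cycle.
wavelength = recorded_wavelength(0.38, 15_000)
```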
Recording Process:
During the initial stages, attempts were made to record the video signal with
stationary video heads and longitudinal tracks using a tape speed of the order of
9 m/s, which was very difficult to control, besides causing very high tape
consumption, i.e. miles of tape for 3 to 4 minutes of recording. This was coupled
with breaking the video signal spectrum into 10 parts recorded by 10 different
video heads, which were then switched during playback to retrieve the signal. The
quality of the reproduced signal was also compromised, with a resolution of only
about 1.7 MHz. Around 1956 the AMPEX company of USA came out with
Quadruplex machines having two revolutionary ideas which laid the foundation of
present-day VTRs/VCRs. These ideas were-
medium is the range over which the extracted signal is stronger than the noise. This
range is only 10 octaves.
Basic physics states that the highest frequency that can be recorded depends on
the tape speed and the head gap:
f max = v tape / (2 × w headgap)
Methods followed for recording high-frequency signals:
The tape speed has to be increased for recording high-frequency signals, OR
this can be attained by rotating the head along with linear movement of the tape.
The tape speed can then be decreased by increasing the head speed, which also
avoids mechanical stresses on the tape.
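The gap-limit formula above makes the case for rotary heads numerically (the 1 µm head gap used here is an assumed representative value):

```python
def f_max(head_to_tape_speed, head_gap):
    """Highest recordable frequency: f_max = v / (2 * w_gap).

    At f_max, one half-wavelength of the recorded signal just spans
    the head gap; anything higher is averaged out inside the gap.
    """
    return head_to_tape_speed / (2 * head_gap)

# Stationary head, 0.38 m/s tape, 1 um gap -> only 190 kHz.
# A rotary head giving 24 m/s writing speed with the same gap -> 12 MHz,
# enough for a frequency-modulated video signal.
```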
a. Quad Format (Segmented)
This format uses a spool of 2” wide tape and 4 heads on a transversely mounted drum,
with a very high writing speed of about 41 m/s. These machines had higher operational
cost and required constant engineering effort to keep them running. They have since
been phased out except for transfer/archival purposes.
b. Type B Format (Segmented helical)
This format was developed by BOSCH/BTS using helical scan with 1” tape as the BCN series
of Video Tape Recorders. It uses a scanner with a head wheel carrying two video heads,
around which the tape is wrapped through about 190°. Each television field is recorded on
six tracks, with each head scanning a 52-line segment. The scanner diameter is 50 mm and it
rotates at 150 rev/sec. The tape moves at 24 cm/s. The 80 mm long tracks are recorded at an
angle of 14.3°. There are four longitudinal tracks, of which two are full-quality audio tracks,
the third is for the time code and the fourth for the control track. The video writing speed
is 24 m/s.
The flying erase head mounted on the same head wheel and the associated electronics
allow roll-free electronic editing. The addition of a digital frame store unit provides
freeze frame and slow motion. A portable version in the same format has also been
marketed by the manufacturers for studio use.
c. Type C Format (Field per scan helical)
This is the combined format of AMPEX and SONY using 1” tape with a full omega wrap
around a helical scanner running at 50 rps. The main head, mounted on a 135 mm dia drum,
records one field, i.e. the video signal containing the useful picture and part of the vertical
interval containing the field synchronizing pulses. An additional head called the sync head,
mounted on the scanner, records the vertical interval. There is thus no missing information,
as is common with older single-head, field-per-scan helical recorders (one-inch IVC 800/900
series and AMPEX 7900 series recorders). There are four longitudinal tracks, i.e. two for
audio, a third for time code and the fourth for the control track.
This type of machine has 411 mm long video tracks recorded on the tape at an angle of 2° 34’.
The video writing speed is about 24 m/s. The Ampex AST (Automatic Scan Tracking) or SONY
Dynamic Tracking, using a piezoceramic (bimorph) transducer and a digital time base corrector,
assures precise tracking and dependable interchange in spite of the 411 mm long tracks. This
also provides freeze frame, slow motion and a recognizable picture in shuttle mode.
Chrominance information around 4.43 MHz is down-converted to 685 kHz for low band
U-matic machines and 924 kHz for U-matic high band. Luminance is limited to 2.5 MHz
(3 MHz in HB). The frequency-modulated luminance and the down-converted chrominance
are mixed and recorded. During playback the chrominance is up-converted back to
4.43 MHz and mixed with the demodulated luminance to get the composite video
signal (CCVS).
There are two heads mounted on a 110 mm scanner rotating at 25 rev/sec, enabling each
head to record one field per scan. This format is therefore a field-per-scan format.
The U-matic format has been further improved by raising the FM frequencies for luminance
recording and using specially developed tape. This format is designated U-matic High
Band SP. There are thus three formats in this category, viz.
The system produces acceptable pictures for ENG and semi-professional use but has limited
resolution and chroma noise at saturation.
Chroma noise further increases after two or three generations. Low Band is out of use, and
U-matic SP offers better quality than Hi-band.
It uses four audio tracks, i.e. 2 x linear plus 2 x AFM audio recording. Used as a
consumer/professional (ENG/EFP) format.
NB : All SP machines, when using oxide tapes, are compatible and reverse compatible with
non-SP machines, except for (g) and (h), which can only play oxide tapes and record on
metal tapes.
Portable Versions
(a) BVW 21P - Non-SP, portable, player only, external TBC, non-dockable.
(b) BVV 1 APS - Non-SP, dockable, recorder only.
BVV 5 PS - SP, dockable, recorder and mono play.
PVV 1 P - SP, dockable, recorder and mono play.
(c) W 35 P - SP, portable, recorder cum player, external TBC, compatible
and reverse compatible with non-SP machines (when using
oxide tapes), 4 audio.
(d) BVW 50 P - SP, portable, recorder cum player, external TBC, compatible,
and SP with built-in TBC; can take 30 min/90 min cassettes.
Type of Betacam tapes
Important
1) A micro switch automatically senses whether a small or large cassette has been
inserted in the machine.
2) Oxide tape is 19 µm thick and metal particle tape 14 µm.
Tape Loading in Betacam System
Head drum
The head drum of the BVW 75P carries as many as 10 video heads, namely two heads for
luminance, Ya and Yb; two heads for chrominance, Ca and Cb; two heads for dynamic
tracking luminance, DT Ya and DT Yb; two heads for dynamic tracking chrominance,
DT Ca and DT Cb; and finally two rotary erase heads, REa and REb. In some models where
slow motion is not available, the DT heads and associated electronics are not required,
which makes those models cheaper than the BVW 75P.
Audio System
Track arrangement
2) Bottom edge of the tape : two longitudinal tracks, for time code
and control track.
3) Middle of the tape : two additional audio channels (AFM) in
SP machines, recorded along with the video by the rotary heads.
The carriers used for the FM audio channels are 310 kHz and
540 kHz for channels 3 and 4 respectively. Insert audio editing
on these channels is not possible.
4) A confidence replay head is provided for off-tape monitoring of the longitudinal
tracks when in record mode.
1. Track layout
The track layout is the same for oxide and metal particle tapes to give replay compatibility,
though the recorded signals are different.
2. The two video recording heads making up a pair are mounted with an azimuth offset of
±15 degrees. This enables their tracks to be laid on the tape with zero guard band
between them. The azimuth offset provides cross-talk protection when tracking errors
cause a Y replay head to wander over the adjacent C track and vice versa.
Frequency-modulated luminance is recorded by the Y heads, and the second head of the
pair, C, follows a distance of 12 recorded lines behind Y and records the two colour
difference signals as a compressed time-division-multiplexed waveform on its own FM
carrier. (The different recorded signal parameters for oxide and SP recordings have already
been given.) The first head pair is called A and the second B. These are often referred to as
the channel A head pair and channel B head pair, though strictly speaking Y and C are the
two information channels being recorded by alternate A and B head pairs.
Recording
Information selected by the input as above is processed with the final objective of getting
Y and CTDM signals. These Y and CTDM signals are then passed through separate
pre-emphasis, modulators and record amplifiers. The record amplifiers for Y and CTDM
chroma feed the heads for channel A and channel B respectively.
AFM audio for channels 3 and 4 is mixed with the modulated CTDM signal and then fed to
the record amplifier for recording on the chroma track. While making copies in CTDM dub
mode, the raw unprocessed demodulated video from the player is passed directly to the
recorder modulators to prevent degradation in quality.
Play Back
RF from the normal R/P heads or DT heads is switched at field rate to select the active head,
then equalized and demodulated. The demodulated signal is passed through noise reduction
(non-linear de-emphasis) and linear de-emphasis. The built-in TBC not only corrects timing
errors using a digital video store but also conceals dropouts (RF loss) in each channel: the
missing information is filled with data from the previous line for Y, R-Y and B-Y.
Compressed Time Division Multiplex (CTDM) - The R-Y and B-Y signals are clocked into
separate one-line-duration stores. Similarly, during the second line, a second pair of stores
receives the next R-Y and B-Y.
Meanwhile, R-Y is clocked out of its first store at twice the clock speed, compressing it to
32 µs. Then B-Y is clocked from its first store to fill the next 32 µs period. This is called
CTDM. The first pair of stores is now empty, ready to receive new R-Y and B-Y from the
input signal. While this is going on, double-speed clocks are used to empty the second pair
of stores in the sequence R-Y first and then B-Y.
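The store-and-double-clock sequence above can be modelled on lists of samples. This is a simplification: real hardware clocks the samples out at twice the rate, which is modelled here by 2:1 decimation, and the function name is hypothetical:

```python
def ctdm_line(r_y, b_y):
    """Build one CTDM line from one line each of R-Y and B-Y samples.

    Each colour-difference line is read out at double speed, so R-Y
    fills the first half (32 us) of the 64 us line period and B-Y
    fills the second half.
    """
    return r_y[::2] + b_y[::2]

# The multiplexed line has the same number of samples as one input line:
# compressed R-Y samples followed by compressed B-Y samples.
line = ctdm_line(list(range(8)), list(range(100, 108)))
```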
Data storage in Betacam-
Digital Video Cassette Recording
DIGITAL TAPE FORMATS-
INTRODUCTION
With the advent of digital signals, a breakthrough came in the field of recording, from
analog to digital recording, around the year 1990. In the course of development of digital
tape recording systems, the need was felt for a system that would be handy for field
recording along with the capability of long-duration recording. A recording format was
developed by a consortium of ten companies as a consumer digital video recording format
called “DV”. DV (also called ”mini DV” in its smallest tape form) is known as DVC
(Digital Video Cassette).
DVCAM is a professional variant of DV developed by Sony, and DVCPRO is a professional
variant of DV developed by Panasonic. These two formats differ from the DV format in
terms of track width, tape speed and tape type. Before the digitized video signal hits the
tape, it is the same in all three formats.
What is DV?
DV is a consumer video recording format, developed by a consortium of 10 companies
(later joined by 60 companies, including Sony, Panasonic, JVC, Philips etc.) and launched
in 1996. In this format, video is encoded onto tape in digital form with intraframe DCT
compression using 4:1:1 chroma subsampling for NTSC (or 4:2:0 for PAL). The intraframe
compression technique makes it straightforward to transfer the video onto a computer for
editing. DV tapes come in two formats: Mini DV (66mm x 48mm x 12.2mm) and DV, the
standard full size (125mm x 78mm x 14.6mm). They record digitally compressed video by a
DCT method at 25 megabits per second. In terms of video quality, it is a step up from
consumer analog formats such as 8mm, VHS-C and Hi-8.
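A back-of-envelope figure for the 25 Mbit/s video rate (video data only; audio, subcode and error-correction overhead on the tape are ignored in this sketch):

```python
video_rate_bps = 25_000_000              # DV intraframe video data rate
mb_per_minute = video_rate_bps * 60 / 8 / 1e6

# 25 Mbit/s works out to about 187.5 MB of video data per minute,
# i.e. over 11 GB per hour for the video alone.
```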
1) SMPTE (Society of Motion Picture and Television Engineers) has identified digital video
recorders with the letter ”D”.
2) D-series machines are divided into uncompressed/compressed types.
3) They are further divided by track layout, signal encoding and modulation technique
before recording.
D1(1987-Sony)
What is DVCPRO?
DVCPRO is a professional variant of DV, developed by Panasonic. In DVCPRO, the
baseband video signal is converted to a 4:1:1 sampled data sequence from the originally
sampled 4:2:2 signal by subsampling, and the resulting data are converted into blocks which
are shuffled before passing through the compression circuitry and reshuffled back to their
original positions after compression. It should be mentioned here that still pictures
containing little or no movement are compressed using intraframe compression, whereas
pictures with large amounts of movement are coded and compressed in intrafield form. An
error correction code is added to the compressed and reshuffled data sequence using a
Reed-Solomon product code before it is sent for recording modulation. The modulated data
sequence, generated by the 24/25 coding method using scrambled NRZI, is recorded onto
the tape via the video heads.
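The 4:2:2 to 4:1:1 conversion mentioned above can be illustrated with a minimal decimation sketch (the function name is hypothetical, and real equipment low-pass filters the chroma before decimating):

```python
def chroma_422_to_411(cb, cr):
    """Horizontal chroma decimation from 4:2:2 to 4:1:1 (sketch).

    4:2:2 carries one Cb and one Cr sample per two luma samples;
    4:1:1 keeps one per four, so every second chroma sample is dropped.
    """
    return cb[::2], cr[::2]

# Each chroma row is halved while the luma row is left untouched.
cb, cr = chroma_422_to_411([10, 11, 12, 13], [20, 21, 22, 23])
```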
[Table residue: video input/output interfaces - analog component per EBU Tech. 3267-E, digital component per SMPTE 259M-C; compression details not recoverable.]
On the other hand, the baseband audio signal is not compressed but is available in two
channels, each sampled at 48 kHz and represented as a 16-bit data sequence, before being
added to the compressed video data. Subcode data is added to the combined video and
audio data, which are error-corrected by the same method as used for video. The combined
error-corrected data sequence is recorded onto the tape by the same heads which record
the video.
DVCPRO Tape Pattern
The orientations of the recording heads on the head drum (azimuth) are different. This
helps each head to pick up the specified tracks which are in line with its orientation, and
it cannot pick up other tracks during playback. So there is no need for spacing between the
tracks on the tape, and bulk data can be recorded. This method of recording is called
rotary-head azimuth recording.
Post Production in TV
Gopal Kumar, DDE
RSTI(T) Bhubaneswar
Post production is the editing process in which video clips available
on different video sources, like tape, CD, DVD and camera storage, are processed
as per the requirements of different kinds of transmission.
In news transmission the field recordings are normally cut to size
to make clips relevant to the news. These clips are sometimes given a voice-over
by adding a voice description. These are the finished clips for transmission, and
multiple such clips are normally arranged on one or two video tapes for playing at
the time of news transmission. Here the requirement is cutting source videos
to size and arranging them on tape - a simple requirement for which a linear video
editing setup is used.
In other entertainment telecasts like dance, drama and music shows, the
requirement of post production is different. Many types of special features may be
required, like effects at the transition from one clip to another, change of the
background of an event, insertion of new objects, changes in light and colour etc.
Some simple transition effects like dissolve are possible through linear edit consoles,
but for advanced effects digital processing of the video is required, for which
computer-based non-linear editing systems are used.
Linear Editing :
Linear editing can be carried out using two VCRs connected directly.
However, in most linear editing setups an edit controller is used, which can
control the VCRs connected to it. Through the edit controller it is possible to edit
the clips more precisely and preview an edit before recording.
Based on recording technique, the Linear Editing is of two types
1. Assemble Editing
2. Insert Editing
Assemble Editing:
Assemble editing is so called because shots are assembled in sequence on
the tape. The VCR records each clip without considering the pre-recorded video
already on the tape and its sync, so some disturbance may be observed at the end
of a clip recorded on a pre-recorded tape.
Insert Editing:
Sometimes it is desired that a new video clip be inserted over existing
footage - for example, over the long shot of some event, clips of graphics display
and expert comment are added within the time span of the shot.
If assemble editing is used, severe picture disturbances will be observed
on the edited tape. To avoid this, insert editing is used. In insert editing a new
video clip is inserted over old video with a clean beginning and end. For this the
VCR first synchronizes itself with the sync of the pre-recorded video. Thus the
sync timing on the tape remains the same for both the newly inserted video clips
and the old pre-recorded video, and therefore the beginning and end will be clean.
This facility is available with semi-professional and professional recorders only.
[Figure: insert editing - sync points between video frames of the pre-recorded video.]
Some digital devices come with a FireWire port (IEEE 1394) with the
capability to transfer digital video from a video device like a camcorder to the
editing computer through this port.
Non-linear editing is carried out as a project consisting
of the following steps.
1. Capturing of source :
a. Transfer of video from VCR & Camera to computer via capture card
b. Transfer of source available on CD & DVD
c. Transfer through fire-wire port from camera & VCRs.
d. Transfer of graphics from graphic station to editing computer
2. Editing of source :
3. Transfer of final product to tape/CD/DVD or to file :
Outputting Video :
The edit software provides facilities in its menu for putting the finished
video on various storage devices:
1. To tape on a VCR, through the capture card
2. Encoding in MPEG-2 format and recording to DVD
3. Saving the video as a file on the hard disk of the computer itself.
Analog Television Transmitters
D.Ranganadham
Even though these transmitters are analog, they incorporate some of the
latest advancements in technology, coupled with microcontroller-based parameter
control permitting remote control of transmitters using RS-485 serial connections.
This helps fault diagnosis from a remote location, because most transmitters are
installed in hilly terrain to get the advantage of line of sight and greater reach. All
transmitters are internally protected against short circuits, drive faults, fan
failures, excessive heat sink temperatures etc.
1) R&S 10 KW Transmitters
The heart of any modern transmitter is the exciter, because the signal which is
going to be radiated through the antenna is generated at the exciter stage at
low power. The exciter stage determines the quality of the transmitter; the stages
following the exciter only amplify the signal. Figure 2 gives the block diagram of
the 10 kW R&S exciter.
The encoder also contains a microcontroller which drives the whole exciter
and handles communication with the CCU (central control unit). A program
memory is provided as a peripheral for the microcontroller. This means that all
the exciter firmware and software is stored at one location and an update can be
performed via the serial interface without replacing any hardware.
By Satyapal, A.D.E.
Video and audio signals are fed to the exciter.
Brief description of the exciter: video is fed to the A/D-D/A block. The AD-DA
unit converts the video signal supplied to the exciter into a PCM signal and sends
the PCM signal to a unit for digital correction. This unit converts the video PCM
signal, after digital correction, back into an analog video signal and supplies it to
the visual modulator unit. The visual modulator unit converts the baseband video
signal into a modulated IF signal. Most of the non-linear distortion caused in the
power amplifier of the transmitter is corrected by the IF corrector unit. The IF
signal applied at the input is converted to an RF signal by a combination of mixer
and local oscillator, and the RF signal is passed through filters (BPF and BEF) to
separate out only the specified band. This is amplified to obtain an RF signal of
+20 dBm. By applying AGC to the IF signal, the output power of the transmitter
is maintained at a constant level.
The DVC block corrects errors introduced in the baseband signal and aims
to deliver a clean signal to the visual modulator. Digital correction of linearity,
phase non-linearity, sync and colour burst separation and other processing
techniques are much easier in the digital domain, which is the reason for using the
A/D and D/A blocks. The balanced audio signal is given to the sound modulator,
where it is FM-modulated at IF and up-converted to the RF frequency by the
aural mixer. The standard IF frequency is 38.9 MHz for vision and 33.5 MHz for
audio. The reference local oscillator frequencies are provided by synthesizers.
Redundancy is provided for the exciters. The final power output is achieved
by using a series of cascaded power amplifier stages based on MOSFET
technology, which has better noise performance and impedance properties.
Two 360 W PA pallets are combined to get 630 W of output power from
the unit. The control board monitors the power, current and temperature of each
PA pallet and indicates the fault/OK status of the PA. It also protects the PA
against overdrive, overcurrent, over-temperature, overvoltage and VSWR. One
3 dB 90° hybrid coupler is used at the output of the driver to drive the two PA
pallets, and the output of these two PA pallets is combined using another 3 dB 90°
hybrid coupler. The combined output power passes through a directional coupler,
which provides forward and reflected power samples to the PA control board.
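The combining arithmetic above can be sketched as follows; the 0.6 dB insertion loss is an assumed figure, chosen only because it makes two 360 W pallets come out near the 630 W quoted in the text:

```python
def hybrid_combined_power(p1_watts, p2_watts, insertion_loss_db=0.6):
    """Output power of a 3 dB hybrid combining two equal, in-phase PAs.

    Ideal combining simply sums the input powers; the insertion-loss
    term accounts for combiner and cabling losses.
    """
    return (p1_watts + p2_watts) * 10 ** (-insertion_loss_db / 10)

# Two 360 W pallets -> 720 W ideal, ~630 W after ~0.6 dB of loss.
output = hybrid_combined_power(360, 360)
```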
3. CONTROL UNIT
The Control Unit is microcontroller-based and allows controlling and
monitoring of various parameters in the exciter and power amplifier. It also
monitors faults, if any, in the transmitter and shuts the transmitter down for
certain major faults. The monitored parameters of each unit can be seen on the
LCD display by using the menu keys.
The Control Unit communicates with other units using RS-232 serial
interfaces, so the transmitter can be connected to a Station Control Unit (SCU)
for facilitating (1+1) operation.
4. FILTER
The output filter is tuned to the specific channel.
Remote Controlled
500 Watt TV Transmitting Station,
(1+1) Mode of Operation.
10) 14” Colour Television Sets - Colour TV sets are provided for local
monitoring of the programs.
11) IRD System - Provides the program content for transmission.
12) Computer (2 Nos.) - Local PC and Remote PC with software.
13) Laser printer, (1No.) –For print out of the transmitter status.
14) Power Supply Distribution and Control - The input power to the
station is supplied from a 400 V phase-neutral system of at least 25 kVA capacity,
and a DG set is provided for standby power generation during mains outages.
A 6 kVA UPS is provided to maintain power to the selected transmitter during
the mains-to-DG-set changeover.
The incoming mains is fed to the Auto Mains Failure (AMF) panel.
The AMF panel switches over to the DG set when the mains fails, and back again
when it returns to normal. The output of the AMF panel is then passed through
the AVR for voltage regulation.
The output of the AVR is fed to the 6 kVA UPS and 1 kVA UPS. The AC mains
to all three air conditioners is connected directly from the AMF panel (via the
PSP’s contactors) and not through the UPS, since the starting current of the
compressor motors is very high and cannot be supplied by the UPS. The 6 kVA
UPS output is fed to the remaining three contactors in the Power Switching Panel
for supplying the two transmitters and the input monitoring rack.
The Station Control Unit uses the Power Switching Panel to switch ON
or OFF the following six pieces of equipment: Transmitter 1, Transmitter 2,
Input Monitoring Rack, Air Conditioner 1, Air Conditioner 2 and Air Conditioner 3.
15) Serial Communication Interface (RS-232C) - A serial communication
standard widely used for data communications in computer terminals, remote
control panels and short-distance communication links, such as modems.
The DVB (Digital Video Broadcasting) [1] [2] [3] project is a
consortium of over 280 broadcasters, manufacturers, network operators, software
developers, regulatory bodies and others in over 35 countries, committed to
designing open, interoperable standards for the global delivery of digital media
services. The group has specified a family of DVB standards, including the following.
Digital Video Broadcasting (DVB) usually means the transmission of digitized
audio, video and auxiliary data signals. The most suitable distribution systems for
DVB transmission are satellite, cable and terrestrial; the corresponding standards
are DVB-S, DVB-C and DVB-T. The processing at different stages of
communication depends on the channel used. A generic DVB broadcasting system
is given in Figure 1.
DVB-S
DVB-S is the satellite DTH (Direct To Home) system for use in the 11/12 GHz
band, configurable to suit a wide range of transponder bandwidths and EIRPs
(the standard is also applied in the C, Ku and Ka FSS bands). The basic transmission
design of DVB-S has proven to be robust and economical in use. In addition to the
inherent technical features of this standard (such as the use of concatenated
coding and QPSK), the ability of MPEG-2 to transfer Internet Protocol data
efficiently and transparently has gained it a large following. Many DVB-S satellite
transponders have a bandwidth of 33 MHz, which with QPSK allows a symbol
rate of 33 MHz / 1.2 = 27.5 Mbaud. With 2 bits/symbol, this results in 55 Mbit/s,
and after the convolutional 3/4 FEC (forward error correction) decoder has
removed 25% of the bits for inner error correction, 41.25 Mbit/s remain. This bit
stream is sent to the second error correction stage (Reed-Solomon), which
transforms 204 bytes into 188 corrected bytes, so the final error-corrected data
rate is 38.015 Mbit/s for the multiplexed MPEG-2 data stream. Around
ten TV programmes can be sent at this bit rate, instead of the one programme
possible in analogue mode.
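The bit-rate chain in the paragraph above can be checked step by step:

```python
# DVB-S payload chain for a 33 MHz transponder, using the figures above.
bandwidth_hz = 33e6
symbol_rate = bandwidth_hz / 1.2       # roll-off margin -> 27.5 Mbaud
gross_rate = symbol_rate * 2           # QPSK, 2 bit/symbol -> 55 Mbit/s
after_viterbi = gross_rate * 3 / 4     # 3/4 inner FEC -> 41.25 Mbit/s
net_rate = after_viterbi * 188 / 204   # RS(204,188) -> ~38.015 Mbit/s
```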
DVB-C
Noise-free reception: if the required bit error rate (BER < 10^-6) is
maintained, one can achieve almost noiseless video reception. Multi-path
fading and ghost images are completely eliminated.
Figure 2 above shows the differences between the DVB systems, focussing on two
important issues, namely 1. modulation technique and 2. error correction codes.
Modulation in DVB System:
Satellite reception (QPSK, Phase modulation)
Can provide both fixed & portable reception in the presence of strong
reflections.
Conclusion
This article has given a brief description of the first-generation DVB standards
used for the transmission of digital signals. Currently there is intense focus on the
implementation of the second-generation standards, namely DVB-C2 for cable,
DVB-S2 for satellite and DVB-T2 for terrestrial delivery. These standards provide
further improvements in modulation techniques and channel coding. They have
been specified around three concepts: best transmission performance
approaching the Shannon limit, total flexibility, and reasonable receiver
complexity. Channel coding and modulation are based on more recent
developments by the scientific community: low-density parity check codes are
adopted, combined with QPSK, 8PSK, 16APSK and 32APSK modulations
depending on the mode of delivery. Overall, the broadcasting industry is witnessing
phenomenal growth in all delivery modes.
References:
[2] W. Fischer, Digital Television: A Practical Guide for Engineers, 1st ed.
Springer-Verlag, 2004.
[3] DVB Project homepage (June 2008). The main website of the DVB Project. [Online].
Available: http://www.dvb.org.
Satellite Communication
D.Ranganadham, DDE
RSTI(T) - BBSR
In the year 1945 Arthur C. Clarke, the British science fiction writer, wrote
an article in the magazine “Wireless World” about possible worldwide coverage
using three satellites in geostationary orbit, about 36,000 km (around 22,300
miles) above the equator. Two important points were made in the article:
There exists an orbit in the sky which can be used for communication
purposes, later called the geostationary orbit (GSO).
The power for the communication equipment can be generated from solar
panels.
The important thing about the GSO is that a satellite placed in this orbit
looks stationary to an observer on the earth, because its period of revolution
about the earth is the same as the period of the earth's rotation. This
synchronous satellite, which would always appear in the same place in the sky,
would be provided with receiving and transmitting equipment and directional
antennas to beam signals to all or parts of the visible portion of the earth. Three
satellites located 120° apart in the GSO can cover the entire world using global
beams (each covers 42.4% of the earth’s surface, and large receiving antennas
must be used to adequately detect the broadcasts).
What is a satellite?
Centrifugal force F2 = mv²/r
(where v is the velocity of the satellite
and r is the distance between the satellite and the centre of the earth).
The gravitational force is F1 = mg, where g = GM/r²,
G = gravitational constant = 6.672 × 10⁻¹¹ N m²/kg²,
M = mass of the earth = 5.974 × 10²⁴ kg.
Hence if F1 = F2:
mv²/r = mg = GMm/r²
v²r = GM = constant
v = √(GM/r)
The orbital period is
p = 2πr/v = 2πr/√(GM/r) = 2πr^(3/2)/√(GM)
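Setting the period p equal to one sidereal day and solving the last expression for r gives the geostationary radius. The sidereal-day value of 86,164 s and the 6,378 km earth radius are assumptions added here; G and M are the values quoted above:

```python
import math

G = 6.672e-11      # gravitational constant, N m^2/kg^2 (value above)
M = 5.974e24       # mass of the earth, kg (value above)
T = 86_164.0       # one sidereal day in seconds

# From p = 2*pi*r**1.5 / sqrt(G*M), solve for the orbital radius r:
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
v = math.sqrt(G * M / r)                 # orbital velocity, m/s
altitude_km = (r - 6.378e6) / 1e3        # height above the equator

# r ~ 42,164 km from the earth's centre, i.e. an altitude of roughly
# 35,800 km, consistent with the ~36,000 km quoted at the start.
```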
About 40% of the earth's area can be covered by the downlink beam. No other
communication system can have this kind of coverage area.
Signals are available even in remote areas and hilly terrain.
The system is easily deployable - flexible to install and dismantle.
Disadvantages:
Propagation delay. Electromagnetic signals from the uplink parabolic dish
antenna (PDA) travel a one-way path (uplink plus downlink) of around
72,000 km, which introduces a delay of 240 ms, and 480 ms in the case of
two-way communication such as telephone signals.
Satellites placed in the GSO cannot cover the north and south poles.
Signals are subject to huge attenuation due to the huge distances involved in
the uplink and downlink, hence the receiving equipment must be able to
handle very weak signals.
Frequency bands used for satellite communication: Three
main frequency bands are allocated for GEO communications satellites: C-band, Ku-band
and Ka-band. Each band is divided into sub-bands, which are allocated to each
transponder on a satellite. All satellites use the same frequencies; directional
antennas provide isolation from interference. This places a minimum size limit on
the antennas used.
The major systems in the satellite can be broadly divided into five
categories.
The attitude control system must keep the solar panels correctly pointed
towards the sun.
Due to the rotation of the earth, all attitude control systems must pitch the
satellite 15°/hour to maintain earth pointing. They must also provide correction
for disturbances due to radiation pressure and torques generated by station-keeping
manoeuvres.
1. Spin stabilization
2. 3 axis or body stabilization
A three-axis stabilized spacecraft can make better use of its solar cell
area, since the cells can be arranged on flat panels that can be rotated to maintain
normal incidence of the sunlight.
1. Satellite management
2. Telemetry
3. Tracking
4. Command
(Table: typical on-board sensors and their functions.)
The data received from on-board sensors are processed by the TT&C block in
the satellite and sent to the TT&C block at the master control centre. Telemetry data
are usually digitized and transmitted as frequency or phase shift keying of a low-power
telemetry carrier using the TDM technique. A low data rate is normally used to
allow the receiver at the earth station to have a narrow bandwidth and thus maintain a high
C/N. The entire TDM frame may contain thousands of bits of data and take several
seconds to transmit. At the controlling earth station a computer can be used to monitor, store
and decode the telemetry data so that the status of any system or sensor on the
spacecraft can be determined immediately by the controller on earth. Alarms can
also be sounded if any vital parameter goes outside allowable limits.
3. Power subsystem:
There are two obvious sources of primary power for spacecraft, namely
nuclear and solar. Due to cost and environmental hazards, nuclear sources are not
generally used in earth orbit. They are, however, used for interplanetary spacecraft,
where the distances from the sun produce very weak solar radiation. All
commercial satellites have used solar energy to derive primary power from
solar cells, which convert incident sunlight into electrical energy. In addition
to the solar array, there are batteries carried on the spacecraft to provide power
for essential services during periods of eclipse (when the sun's radiation does not
fall on the solar panels) and during the launch period. Outside periods of
eclipse the batteries are charged by drawing power from the solar array.
The sun is a powerful source of energy. In the total vacuum of outer
space, at geostationary altitude, the radiation falling on a spacecraft has an
intensity of 1.39 kW/m². Solar cells do not convert all this incident energy into
electrical power; their efficiency is typically 10 to 18 percent, and it falls with time
because of aging of the cells and etching of the surface by micrometeorite impacts.
Since sufficient power must be available at the end of the lifetime of the satellite to
supply all the systems on board the spacecraft, about 15% extra area of solar
cells is usually provided as an allowance for aging. Future-generation satellites
operating in Ku band, which offers more communication capacity and longer
lifespan, require higher power.
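As a rough illustration of the sizing argument above (the 1.39 kW/m² intensity, a mid-range 14% cell efficiency and the 15% aging allowance are taken from the text; the 2 kW end-of-life power requirement is an assumed example value):

```python
S = 1.39e3          # solar intensity at GEO, W/m^2 (from the text)
eff = 0.14          # cell efficiency, mid-range of the 10-18% quoted above
margin = 1.15       # 15% extra area as an aging allowance
p_required = 2000.0 # W, assumed end-of-life power requirement (illustrative)

# Area so that S * eff * area still meets the requirement, with margin
area = p_required * margin / (S * eff)
print(round(area, 1), "m^2")   # about 11.8 m^2
```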
Eclipses occur twice per year, around the spring and fall equinoxes,
when the earth's shadow passes across the spacecraft. The longest eclipses last
about 70 min, occurring around March 21 and September 21 each year. To avoid the
need for large, heavy batteries, part or all of the communications system load may be
shut down during eclipse, but this technique is rarely used when telephony or
data traffic is carried. TV broadcast satellites may not carry sufficient battery capacity
to supply their high-power transmitters during eclipse and must be shut down.
Batteries are usually of the sealed nickel-cadmium type, which do not gas when
charging and have good reliability and long life. Due to advancements in
launching technology, heavier satellites can now be launched, which can include lighter
nickel-hydrogen batteries to take care of power requirements during
eclipse time.
4. Communication subsystems:
A communication satellite in geostationary orbit exists to relay
voice, data and video communications. All other subsystems on the
spacecraft exist solely to support the communication subsystem. Satellites have
become larger, heavier, and more costly, but the rate at which traffic capacity has
increased has been much greater, resulting in a lower cost per telephone circuit
with each succeeding generation of satellite. The introduction of switched-beam
technology and on-board processing in high-capacity satellites will offer a further
increase in capacity. The communication subsystem (or communication payload)
consists of the satellite antennas plus the repeater. The bandwidth handled by the
satellite is broken down (demultiplexed) into manageable segments (40-80 MHz),
each of which is handled by a separate repeater called a transponder; the
transponders are connected by a switching matrix to the various on-board antennas.
Figure 2. C and Ku band transponders block diagram.
Four main types of antennas are used on spacecraft; two of them, horns and
reflectors, are popular in broadcasting. Aperture antennas (horns and reflectors)
have a physical collecting area that can be easily calculated from their dimensions:

Aphy = πD²/4

Therefore, we can obtain the formula for aperture antenna gain as:

Gain = 4πAe/λ² = ηA · 4πAphy/λ² = ηA(πD/λ)²

where D is the diameter of the antenna, λ is the wavelength, and ηA is the
aperture efficiency; typical values of ηA for reflectors are 50-60%.
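The gain formula can be evaluated directly; the 3 m diameter, 4 GHz frequency and 60% efficiency below are assumed example values:

```python
import math

def antenna_gain_db(diameter_m, freq_hz, efficiency):
    """Aperture antenna gain G = eta_A * (pi * D / lambda)^2, returned in dB."""
    lam = 3e8 / freq_hz                                  # wavelength, m
    g = efficiency * (math.pi * diameter_m / lam) ** 2   # linear gain
    return 10 * math.log10(g)

# Assumed example: 3 m reflector at 4 GHz with 60% aperture efficiency
print(round(antenna_gain_db(3.0, 4e9, 0.6), 1), "dB")   # about 39.8 dB
```

Note how the gain scales with (D/λ)²: doubling the diameter, or the frequency, adds about 6 dB.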
Link Design
Link design basically deals with designing both the uplink and
downlink chains for a specific C/N, which depends on the threshold of the
demodulator. The following factors influence link design.
Consider a transmitter radiating a total power Pt watts equally in all
directions (isotropically). The flux density crossing an area of A m² at a
distance R is

F = Pt/(4πR²) W/m²   (1)
All real antennas are directional and radiate more power in some
directions than in others. Any real antenna has a gain G(θ), defined as the ratio of
power per unit solid angle radiated in a direction θ to the average power radiated
per unit solid angle:

G(θ) = P(θ)/(P0/4π)   (2)

where
1. P(θ) is the power radiated per unit solid angle by the antenna.
2. P0 is the total power radiated by the antenna.
3. G(θ) is the gain of the antenna at an angle θ.
The reference for the angle θ is usually taken to be the direction in
which maximum power is radiated, often called the boresight direction of the
antenna. The gain of the antenna is then the value of G(θ) at θ = 0, and is a
measure of the increase in flux density compared with an isotropic antenna
radiating the same total power. For a transmitter with output Pt watts driving a
lossless antenna with gain Gt, the flux density in the direction of the antenna
boresight at a distance R metres is

F = PtGt/(4πR²) W/m²   (3)

The product PtGt is often called the effective isotropically radiated power or
EIRP, and it describes the combination of transmitter power and antenna gain in
terms of an equivalent isotropic source with power PtGt watts radiating uniformly
in all directions.
Figure 7. Power received by an ideal antenna of area A m².

An ideal receiving antenna with area A m², with incident flux density F given
by equation (3), collects a power equal to the product of flux density and area:

Pr = F × A = PtGtA/(4πR²) watts   (4)
A practical antenna with a physical aperture area of Ar m² will not
deliver the power given above. Some of the energy incident on the aperture is
reflected away from the antenna and some is absorbed by lossy components. This
reduction in efficiency is described by using an effective aperture Ae, where

Ae = ηA Ar   (5)

and ηA is the aperture efficiency of the antenna, which accounts for all
the losses between the incident wavefront and the antenna output port; these
include the illumination efficiency or aperture taper efficiency of the antenna, which is
related to the energy distribution produced by the feed across the aperture, and
also other losses due to spillover, blockage, phase errors, diffraction effects,
polarization, and mismatch. For paraboloidal reflector antennas ηA is typically
in the range 50 to 75%, lower for small antennas and higher for large Cassegrain
antennas. Horn antennas have efficiencies approaching 90%. Thus the power
received by a real antenna with a physical receiving area Ar and effective aperture
area Ae m² is:

Pr = PtGtAe/(4πR²) watts   (6)
The gain of the receiving antenna is related to its effective aperture by

Gr = 4πAe/λ²   (7)

Substituting for Ae in equation (6) gives the link equation:

Pr = PtGtGr/(4πR/λ)² watts   (8)

The term (4πR/λ)² is known as the path loss, Lp. It is not a loss in the sense of
power being absorbed; it accounts for the way energy spreads out as an
electromagnetic wave travels away from a transmitting source in three-dimensional
space.
The expression dBW means decibels greater or less than 1 watt (0 dBW).
The units dBW and dBm (decibels relative to 1 W and 1 mW respectively) are widely
used in communications engineering. EIRP, being the product of transmitter power
and antenna gain, is often quoted in dBW.
Note that once a value has been calculated in decibels, it can readily be
scaled if one parameter is changed. For example, suppose we calculated the gain of
an antenna to be 48 dB at a frequency of 4 GHz and wanted to know the gain at 6 GHz.
We could multiply Gr by (6/4)²; using decibels we simply add
20 log(6/4) = 20 log 3 − 20 log 2 = 9.5 − 6 = 3.5 dB. Thus the gain of our antenna
at 6 GHz is 51.5 dB.
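Working the link equation in decibels is the usual practice: in dB form, equation (8) becomes Pr = EIRP + Gr − Lp. The sketch below computes the path-loss term for an assumed 4 GHz downlink over a 38,500 km slant range; the EIRP is an illustrative value, and the 48 dB receive gain is the figure used in the example above:

```python
import math

def path_loss_db(distance_m, freq_hz):
    """Free-space path loss Lp = (4*pi*R/lambda)^2, in dB."""
    lam = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / lam)

lp = path_loss_db(38.5e6, 4e9)   # assumed 38,500 km slant range at 4 GHz

# Pr[dBW] = EIRP[dBW] + Gr[dB] - Lp[dB]  (equation 8 in decibel form)
eirp_dbw = 25.0                  # assumed satellite EIRP
gr_db = 48.0                     # receive antenna gain from the text's example
pr_dbw = eirp_dbw + gr_db - lp
print(round(lp, 1), "dB path loss,", round(pr_dbw, 1), "dBW received")
```

The path loss of nearly 200 dB is why earth stations must handle extremely weak signals.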
The downlink carrier-to-noise ratio, expressed in decibels, is

(C/N)d/l = EIRP of satellite − path loss + Gr/Tn − 10 log k − 10 log B

where Gr/Tn is the figure of merit (G/T) of the receiving earth station, k is
Boltzmann's constant and B is the noise bandwidth.

For a complete link consisting of an uplink and a downlink, the noise powers
add: N = NU + ND. The overall C/N (as a power ratio, not in decibels) is then
obtained from

1/(C/N) = 1/(C/N)U + 1/(C/N)D

When intermodulation noise NIM and interference noise Nint are also present,
N = NU + ND + NIM + Nint and

1/(C/N) = 1/(C/N)U + 1/(C/N)D + 1/(C/N)IM + 1/(C/N)int
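The reciprocal-addition rule can be checked numerically; the 25 dB uplink and 20 dB downlink C/N values here are assumed for illustration:

```python
import math

def combine_cn_db(*cn_db):
    """Overall C/N from individual C/N contributions given in dB, using
    1/(C/N)_total = sum of 1/(C/N)_i in linear power-ratio terms."""
    inv_total = sum(1.0 / (10 ** (cn / 10.0)) for cn in cn_db)
    return 10 * math.log10(1.0 / inv_total)

# Assumed example: uplink C/N = 25 dB, downlink C/N = 20 dB
print(round(combine_cn_db(25.0, 20.0), 2), "dB")   # about 18.81 dB
```

Note that the combined C/N is always a little worse than the weakest contributor, which is why the weakest link dominates the design.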
References:
1. Timothy Pratt, Charles Bostian, Jeremy Allnutt, Satellite Communications,
second edition.
Digital Television Terrestrial Broadcasting
V. Seetharam
ADE RSTI (T), BBSR
Internationally, standards have evolved for DTTB, the three major
standards being ATSC, DVB-T, and ISDB-T. Shown below is a typical block
schematic of a digital terrestrial broadcasting set-up. India has adopted the DVB-T
standard for DTTB.
The key concepts to be learnt are: baseband digital audio, video and
data signals; encoding of the baseband signals (compression formats, e.g. the
MPEG-2 format used in DVB-T); multiplexing of more than one television channel
into a single transport stream; data scrambling and conditional access; channel
coding to improve the ruggedness of the signal when it is transmitted into free
space; and modulation techniques for transmission of the signal at radio frequency.
All of this takes place at the transmitting station, and the reverse process takes
place at the receiving homes through an Integrated Receiver Decoder (IRD), often
referred to as a set-top box, providing the final viewing experience on the television
display device.
The video, audio and other service data are compressed to form
elementary streams. These streams may be multiplexed with the
source data from other programs to form the MPEG-2 Transport Stream (TS). A
transport stream consists of transport packets that are 188 bytes in length.
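Each 188-byte packet begins with a 4-byte header carrying the sync byte 0x47, a 13-bit packet identifier (PID) and a 4-bit continuity counter; the sketch below extracts those fields (the packet contents are invented for illustration):

```python
def parse_ts_header(packet):
    """Extract basic fields from the 4-byte header of a 188-byte MPEG-2 transport packet."""
    assert len(packet) == 188 and packet[0] == 0x47, "bad packet or sync byte"
    pid = ((packet[1] & 0x1F) << 8) | packet[2]    # 13-bit packet identifier
    payload_unit_start = bool(packet[1] & 0x40)    # PUSI: start of a new PES packet/section
    continuity = packet[3] & 0x0F                  # 4-bit continuity counter
    return pid, payload_unit_start, continuity

# Toy packet: sync 0x47, PUSI set, PID 0x0100, continuity counter 7, dummy payload
pkt = bytes([0x47, 0x41, 0x00, 0x17]) + bytes(184)
print(parse_ts_header(pkt))   # (256, True, 7)
```

A demultiplexer filters packets by PID in exactly this way to pull one program's streams out of the multiplex.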
Unlike video, the three current DTV standards use different audio
coding schemes; DVB allows both MPEG audio and Dolby AC-3. The audio standards use a
similar technique called perceptual coding and support up to six channels—right,
left, center, right surround, left surround, and subwoofer—often designated as 5.1
channels. A perceptual audio coder exploits a psycho-acoustic effect known as
masking: when sound is broken into its constituent frequencies, those components
with relatively low energy adjacent to others with significantly higher energy
are masked by the latter and are not audible.
MPEG-2 PSI tables only give information concerning the multiplex. The
DVB standard adds complementary tables (DVB-SI) to allow the user to navigate
the available programs and services by means of an electronic program guide
(EPG). DVB-SI has four basic tables and three optional tables to serve this purpose.
The decoder must perform the following main steps in order to find a program or
a service in an MPEG-2 transport multiplex.
1. The decoder must first acquire the Program Association Table (PAT, carried
on PID 0), which lists every program in the multiplex together with the PID
of its Program Map Table (PMT).
2. Once the user's choice is made, the decoder must filter the PID corresponding
to the PMT of this program and construct the PMT from the relevant
sections. If there is more than one audio or video stream, the user should be
able to make another choice.
The audio/video decoding can now start. The part of this process that is
visible to users is the interactive presentation of the EPG associated with the
network, which can be built by means of the PSI and DVB-SI tables in order to
allow them to easily navigate the available programs and services.
Reed-Solomon Coding
Interleaving
The purpose of data interleaving is to increase the efficiency of the
Reed-Solomon coding by spreading over a longer time the burst errors
introduced by the transmission channel, which could otherwise exceed the
correction capacity of the Reed-Solomon code. Interleaving is normally
implemented using a two-dimensional array buffer, such that the data enters
the buffer in rows and is then read out in columns. The result of the interleaving
process is that a burst of errors in the channel becomes, after deinterleaving, a few
widely spaced single-symbol errors, which are more easily correctable. DVB
uses convolutional interleaving with an interleaving depth of 12.
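The row-in, column-out buffer can be sketched in a few lines. This toy 3 x 4 block interleaver only illustrates the principle; DVB itself uses a convolutional interleaver of depth 12:

```python
def interleave(data, rows, cols):
    """Write row by row into a rows x cols buffer, read out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse operation: interleave with the dimensions swapped."""
    return interleave(data, cols, rows)

symbols = list(range(12))          # 12 symbols fill a 3 x 4 buffer
tx = interleave(symbols, 3, 4)     # order actually sent over the channel
tx[2:5] = ["X", "X", "X"]          # a 3-symbol burst error in the channel
rx = deinterleave(tx, 3, 4)
print(rx)                          # errors are now spread apart, not adjacent
```

After deinterleaving, the three-symbol burst lands on three well-separated positions, each easily handled by the Reed-Solomon decoder.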
Inner Code
The inner coding is a convolutional code in DVB. Inner coding is an efficient
complement to the Reed-Solomon coding and Forney interleaving, as it is designed
to correct random errors.
Until now we have not seen much difference among the three DTV
systems. Differentiation occurs due to the different modulation schemes of the
systems. This section briefly describes the principles behind those modulation
schemes.
Satellite Transponder
The imaginary line drawn equidistant from both the north and south poles is
called the equator. Parallel lines are drawn depicting the angular distance north
or south of the equator; these imaginary lines are called latitudes. The equator is
0 degrees latitude. All other latitudes are circles with different diameters, the
equator being the largest. As they are parallel to each other they are also called
parallels. With the equator as the reference, both the north and south poles cover
an angle of 90 degrees maximum. (Fig. 2 & 3).
The azimuth and elevation are angles which specify the direction of a
satellite from a point on the earth's surface. In layman's terms, the azimuth is the
east-west movement and the elevation the north-south (up-down) movement
of the dish.
Both the azimuth and elevation of a dish can be affected by three factors
for geo-stationary satellites. They are
Elevation = tan⁻¹[(cos D · cos φ − R/r) / √(1 − cos² D · cos² φ)]

where D = λr − λs in degrees,
φ = latitude of the given site in degrees,
λr = longitude of the given site in degrees,
λs = longitude of the satellite,
and R/r is the ratio of the earth's radius to the orbit radius
(6378/42164 ≈ 0.1512).
Polarization
The wave radiated by an antenna consists of an electric field component
and a magnetic field component. These two components are orthogonal to each
other and perpendicular to the direction of propagation of the wave. By convention
the polarisation is defined by the plane of the electric field component: if the
electric field component lies in the vertical plane, the wave is called vertically
polarised. If the wave contains both vertical and horizontal components it is
called circularly or elliptically polarised. The types of polarisation are:
1. Linear polarisation
2. Circular or elliptical polarisation
Example: Find the look angles of an earth station at 20°N 75°E for the satellite
parked at 93.5°E.

(Figure: triangle EOS, with O the centre of the earth, E the earth station and
S the satellite; OE = 6378 km (earth radius), satellite height 35786 km, orbit
radius OS = 42164 km, slant range ES = d, elevation El measured from the local
horizontal at E.)

Assumptions: the sine and cosine rules for a plane triangle,
a/sin A = b/sin B = c/sin C = 2R
cos A = (b² + c² − a²)/2bc
and the relation between the central angle σ, nadir angle β and elevation El:
σ + β + El = 90°

Azimuth (spherical trigonometry):
tan α = tan D/sin φ = tan 18.5°/sin 20° = 0.3346/0.3420 = 0.9784
α = 44.37°
Azimuth = 180° ± α; the plus sign is taken when the longitude of the
satellite is less than the longitude of the earth station, the minus sign when the
longitude of the satellite is greater than the longitude of the earth station. Here
Azimuth = 180° − 44.37° = 135.63°

Central angle:
cos σ = cos D · cos φ = cos 18.5° · cos 20° = 0.8911
σ = 26.98°

Slant range (cosine rule in triangle EOS):
d = √(6378² + 42164² − 2(6378)(42164)(0.8911))
  = √(1818481779 − 479272774)
  = √1339209005 = 36595.2 km

Elevation (sine rule in triangle EOS):
d/sin σ = 42164/sin(90° + El) = 42164/cos El
cos El = 42164 sin σ/d = 42164 × 0.4537/36595.2 = 0.5227
El = 58.49°

Nadir angle:
β = 90° − El − σ = 90° − 58.49° − 26.98° = 4.53°

Answer:
Look angles
Azimuth = 135.63°
Elevation = 58.49°
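The worked example can be reproduced numerically; this sketch follows the same steps (central angle, slant range, elevation, azimuth) and agrees with the figures above to within rounding:

```python
import math

# Earth station at 20 N 75 E, satellite at 93.5 E
R, r = 6378.0, 42164.0                  # earth radius and orbit radius, km
phi = math.radians(20.0)                # station latitude
D = math.radians(93.5 - 75.0)           # longitude difference

cos_sigma = math.cos(D) * math.cos(phi)            # central angle sigma
sigma = math.acos(cos_sigma)
d = math.sqrt(R**2 + r**2 - 2*R*r*cos_sigma)       # slant range (cosine rule), km
el = math.degrees(math.acos(r * math.sin(sigma) / d))   # elevation (sine rule)
alpha = math.degrees(math.atan(math.tan(D) / math.sin(phi)))
az = 180.0 - alpha   # minus sign: satellite longitude > station longitude

print(round(az, 2), round(el, 2), round(d, 1))
```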
Limits of Visibility
There will be east and west limits on the geostationary arc visible
from any given earth station. The geographical co-ordinates of the earth station
and the antenna elevation set the limits of visibility.
(Figure: triangle EOS as before, with SS the sub-satellite point;
OE = 6378 km, satellite height 35786 km.)

The theoretical elevation for the lower limit is zero, but in practice, to avoid
reception with excessive noise from the earth, a value of 5° is chosen as the lower
limit.

S = satellite
SS = sub-satellite point
E = earth station
σ = central angle, β = nadir angle
El = elevation angle
σ + β + El = 90°

From triangle EOS, with El = 5°, the sine rule gives
sin β = (6378/42164) cos El = 0.1507, so β = 8.67°
and the central angle is σ = 90° − 5° − 8.67° = 76.33°.

From spherical trigonometry, cos σ = cos φ · cos D,
or cos D = cos σ/cos φ = cos 76.33°/cos 20° = 0.2515
D = 75.43°
λs = 75° ± 75.43° = 150.43°E or 0.43°W
Thus, satellites within the geo-stationary arc from 150.43oE to 0.43oW
can be viewed from the earth station 20oN 75oE.
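The visibility limits follow directly from the same geometry; this sketch assumes the 5° minimum elevation used above and reproduces the limits to within rounding:

```python
import math

R, r = 6378.0, 42164.0        # earth radius and orbit radius, km
phi = math.radians(20.0)      # station latitude (20 N)
lon = 75.0                    # station longitude (75 E)
el_min = math.radians(5.0)    # practical minimum elevation

beta = math.asin((R / r) * math.cos(el_min))   # nadir angle at the limit
sigma = math.pi / 2 - el_min - beta            # central angle, from sigma+beta+El=90
D = math.degrees(math.acos(math.cos(sigma) / math.cos(phi)))

east, west = lon + D, lon - D                  # visible arc in degrees east
print(round(D, 2), round(east, 2), round(west, 2))
```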
MOBILE TV THROUGH DIGITAL TERRESTRIAL TV NETWORK
Gopal Kumar, DDE
RSTI(T) BBSR
Introduction:
Mobile television is television watched on small handheld
devices. Mobile TV may be pay TV delivered through a mobile telecommunication
network over a mobile phone carrier, or free-to-air television delivered through a
terrestrial television network operating over television carriers. It can also be
IPTV streaming video through a wireless network.
DVB-H specifies the way of carrying multimedia services over a DTT
(Digital Terrestrial Transmission) TV network. As DVB-H uses the basic
infrastructure of DTT with some modifications, a brief overview of DTT is
required before explaining DVB-H.
DTT (DVB-T)
Fig. 1 shows the block diagram of the DVB-T (DTT) system. As shown in the
figure, the DTT transmission system can be seen as a cascade of different functional
blocks, each with a specific function. These blocks are:
1) MPEG-2 Encoding
The programme streams, in either analog or digital video form, are encoded
with the MPEG-2 encoding method to achieve professional quality at high
compression.
2) Multiplexing
The encoded programme streams are multiplexed to form a single
stream carrying the data of multiple programmes in a structured
format specified by the MPEG-2 standard.
5) Outer Interleaver
A convolutional interleaver is used to disperse the bytes of one
packet (204 bytes) over a length of 204 x 12 bytes of the data stream. By this process,
a burst of errors gets distributed over bytes from different packets, so the error
correction capability of the RS coding can be optimally utilized.
Each symbol is generated from the QAM-mapped values of all carriers for a
particular symbol duration, by using the IFFT. A guard interval is added at the
beginning of the symbol generated by the IFFT. The actual symbol time is
Ts = Tu + Tg (useful time plus guard interval).
Guard interval as a fraction of symbol time: 1/4, 1/8, 1/16, 1/32.
In addition to the video data on the 1512 (2K mode) data carriers, a symbol contains:
Scattered pilots: for channel estimation
Continual pilots: for channel estimation and synchronization
TPS carriers: to carry the channel parameters
The IFFT output after guard interval insertion is still in digital form; it is converted
to analog for transmission by passing it through a D/A (digital-to-analog)
converter. Channel converter: up-converts the above OFDM-modulated signal
to the transmission channel frequency.
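The IFFT-plus-guard-interval step can be sketched at toy scale; the 8 carrier values below are invented, and a real 2K-mode symbol would use a 2048-point IFFT with a guard fraction chosen from the list above:

```python
import cmath

def idft(spectrum):
    """Inverse DFT: from QAM-mapped carrier values to time-domain samples."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

# Toy symbol with 8 carriers (invented QPSK-style values)
carriers = [1+1j, -1+1j, 1-1j, -1-1j, 1+1j, 1-1j, -1+1j, -1-1j]
tu = idft(carriers)        # useful part of the symbol, duration Tu
tg = tu[-2:]               # guard interval: a cyclic prefix, here Tg = Tu/4
symbol = tg + tu           # transmitted symbol, Ts = Tg + Tu
print(len(symbol))         # 10 samples
```

Because the guard interval is a copy of the symbol's tail, echoes shorter than Tg fall inside the guard and do not corrupt the useful part.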
DVB-H based Mobile TV
At the Link Layer:
Time slicing, a technique in which the data corresponding to a particular
programme is transmitted in bursts at certain intervals. This gives the handheld
device the opportunity to switch off its receiving chain after receiving a burst of
data, until the arrival of the next burst. However, information about the arrival
time of the next burst must be made available to the receiver through the present
burst of data.
MPE-FEC (FEC for multiprotocol-encapsulated data) provides additional
protection to the data packets for Mobile TV.
At the Physical Layer:
DVB-H signaling in the TPS bits is included to enhance and speed up service
discovery.
An additional 4K mode is adopted as a trade-off between mobility and single-frequency
networking, providing an additional option for the number of carriers.
An in-depth symbol interleaver is used along with the 2K and 4K modes to improve
robustness in the mobile environment.
The signaling corresponding to the 4K mode and the in-depth interleaver is
included in the TPS.
The Site Master is a handheld cable and antenna analyzer designed for
measuring the return loss, SWR, and cable loss of cable and antenna systems from
25 MHz to 4 GHz. Distance-To-Fault (DTF) measurements can be used to locate the
precise location of a fault within the feed line system.
For accurate results, the Site Master must be calibrated before making
any measurements. The Site Master must be re-calibrated whenever the setup
frequency changes, the temperature exceeds the calibration temperature range, or
the test port extension cable is removed or replaced.
There are two methods of calibration –
1) Flex Cal : Flex Cal is a broadband frequency calibration that remains valid
if the frequency is changed.
2) OSL Cal : An OSL calibration is an Open, Short and Load calibration for a
selected frequency range, and is no longer valid if the frequency is changed. The
default calibration mode is OSL.
With either calibration method, the Site Master may be calibrated
manually with Open, Short, Load (OSL) calibration components.
Calibration Verification -
1) When an OPEN is connected, a trace will be displayed between 0-20 dB.
2) When the Site Master is measuring an equivalent OPEN, a trace will be
displayed between 0-20 dB.
(Figure: sweep from 100 MHz to 250 MHz, with markers M1 = 3.29 at 224.97 MHz,
M2 = 3.81 at 194.18 MHz, M3 = 4.12 at 164.53 MHz and M4 = 5.11 at 135.46 MHz.)
Return Loss Measurement: measures the reflected power of the system in
decibels (dB). This measurement can also be taken in Standing Wave Ratio (SWR)
mode; SWR expresses the same reflection as the ratio of maximum to minimum
voltage on the line.
The return loss measurement verifies the performance of the transmission feed
line system with the antenna connected at the end of the transmission line.
Procedure:
Step-1. Press the MODE key.
Step-2. Select Freq-Return Loss using the Up/Down arrow & and press ENTER.
Step-5. Connect the cable to the Site Master. A trace will be displayed on the
screen when the Site Master is in the sweep mode.
Step-6. Set the D1 and D2 values.

(Figure: return loss sweep, Cal ON, 517-point sweep in 1.39 s, 0-20 dB scale,
100 MHz to 250 MHz, with markers M1 = 5.37 dB at 224.97 MHz, M2 = 4.66 dB at
194.18 MHz, M3 = 4.24 dB at 164.53 MHz and M4 = 3.44 dB at 135.46 MHz.)

Return loss is related to VSWR by:

Return loss = −20 log[(VSWR − 1)/(VSWR + 1)]

For example, VSWR = 3.29 gives −20 log(2.29/4.29) ≈ 5.4 dB, and
VSWR = 1.74 gives −20 log(0.74/2.74) ≈ 11.4 dB.
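The VSWR/return-loss conversion is easy to script; the VSWR values 3.29 and 1.74 are the ones used in the text:

```python
import math

def return_loss_db(vswr):
    """Return loss in dB from VSWR: RL = -20 log[(VSWR-1)/(VSWR+1)]."""
    return -20 * math.log10((vswr - 1) / (vswr + 1))

def vswr_from_rl(rl_db):
    """Inverse conversion via the reflection coefficient gamma = 10^(-RL/20)."""
    gamma = 10 ** (-rl_db / 20)
    return (1 + gamma) / (1 - gamma)

print(round(return_loss_db(3.29), 2))   # about 5.45 dB
print(round(return_loss_db(1.74), 2))   # about 11.37 dB
```

A higher return loss means less reflected power, so a well-matched antenna system shows a large return loss figure.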
Resolution:
There are three data point settings (130, 259 and 517) available in the
Site Master; the factory default is 259 data points. Increasing the number of
data points increases both the measurement accuracy and the transmission line
distance that can be measured.

Step size = (1.5 × 10⁸ × Vp)/F

where Vp = relative propagation velocity of the cable and
F = stop frequency minus start frequency (Hz).
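The step-size formula can be checked numerically; the relative propagation velocity Vp = 0.88 and the 100 MHz sweep span below are assumed example values:

```python
def dtf_step_size_m(vp, f_start_hz, f_stop_hz):
    """DTF step size = (1.5e8 * Vp) / F, with F the sweep span in Hz."""
    return 1.5e8 * vp / (f_stop_hz - f_start_hz)

# Assumed example: foam coax with Vp = 0.88, sweeping 100-200 MHz
step = dtf_step_size_m(0.88, 100e6, 200e6)
print(round(step, 2), "m")   # 1.32 m between data points
```

A narrower sweep span therefore gives a coarser step but a longer reachable distance along the line; the exact maximum-distance relation for a given data point count is as specified in the instrument's manual.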
The main keys are MODE, FREQ/DIST, AMPLITUDE and MEAS/DISP.

2) Amplitude: selecting the amplitude range:
Reference Level: the setting of the top line of the display (−120 dBm to +20.0 dBm).
Scale: changes the units per division of amplitude (1 dB/Div to 15 dB/Div);
units change from dB to Watts to Volts.
Auto-Atten: changes attenuation as the Reference Level changes.
Manual: the setting of the input attenuator (0 to 51 dB).
Dynamic: sets the input attenuator so that it is dynamically coupled to the input
signal and turns the preamp on or off as necessary.
Pre-Amp On: improves noise level and sensitivity.
RL-Offset: compensates for external attenuators.
Field Strength: the magnitude of an electric, magnetic, or electromagnetic
field at a given point is known as field strength. It is measured in amplitude
units per unit length (metres), and can be expressed in dBm/m², dBV/m, dBmV/m
or dBµV/m.
Figure 5(a): Percent of Power method (% of power = 99%; measured occupied
BW = 5.85 MHz). Figure 5(b): X dB down method (dB value = 3 dB; measured dB
down = 7.5 dB; measured occupied BW = 75 kHz; measured % of power = 64.7%).
Procedure:
Step-1: Connect the antenna to the Spectrum Analyzer & press the MODE key.
Step-3: Press the AMPLITUDE key & select the Ref. Level.
Step-6: Press the MEAS/DISP key, select Bandwidth, and set the resolution and
video bandwidths (e.g. RBW = 10 kHz, VBW = 3 kHz).
Applications
For field strength measurement
• Investigation to determine service areas.
1. Since the antenna is the MP534B, set the address switches S1 and S2 on the
rear panel as shown in fig. 3-15 (0, 0).
2. Connect the antenna connection cable to the RF INPUT and set the POWER
switch to ON.
3. Press BATT CHECK and check the supply voltage. It is normal if the
pointer of the level meter is within the BATT CHECK scale. At this time, the
light is also activated to illuminate the level display and frequency display.
4. Press the [UNITS] key so that the ► mark (dBµV/m) at the lower right
corner of the level display lights.
5. Set the AM/FM monitor switch to AM or FM according to the signal to be
received.
6. Set the BW (kHz) passband width switch to 15, 120 or 8. (Type C is 8.)
7. Set the receiving frequency using the numeric keys [0] to [9] and the [MHz]
key. The set frequency is displayed on the frequency display.
8. Press the [CAL] key. Level calibration is performed automatically. After the
end of calibration, the instrument enters the measurement state and the field
strength is displayed on the level display.
9. The conversion to field strength is not carried out on the level meter; the
input voltage from the antenna is displayed as it is.
Caution: If the RF ATT indicator at the upper left corner of the level
display blinks during step 8, the input signal level exceeds the maximum
measurable level without the RF ATT. Press the [RF ATT] key in this case. The RF
voltage applied to the RF INPUT is converted to field strength and displayed on
the level display. Since the attenuation of the RF ATT is not added to the level
meter indication, the input level is obtained by adding 20 dB to the level meter
indication while the RF ATT indicator is lit continuously.