
“This (Radio) is a miraculous power.

I see shakti, the miraculous power of God in it”

- Mahatma Gandhi
(While describing the power of radio, after his first and last live broadcast over
AIR on 12th November 1947)

The power of radio has further increased with the large-scale digitization taking place
across the globe, and even more so in television. Broadcasting in India has seen
phenomenal advances in both production and technology, resulting in innovative
programming that takes advantage of the digital revolution coupled with mushrooming
delivery modes, namely terrestrial, cable, satellite, IPTV etc. More than 400 channels are
now available to consumers in India. The power has now shifted to the consumer, with
interactivity the next big thing happening in DTH and cable.

The entertainment and media sector in India has been growing at a steady 19%
per annum. Barring a minor correction due to the global slowdown, the trend is expected
to continue. In a study conducted by PricewaterhouseCoopers, the TV industry in India is
expected to grow from Rs 226 billion in 2007 to Rs 600 billion in 2012. During the same
period, the radio industry is expected to grow from Rs 6.2 billion to Rs 18 billion. The
exponential growth seen in broadcast channels has resulted in enormous job opportunities
in this sector, and the lack of skilled manpower is seen as one of the industry's bottlenecks.

Prasar Bharati training centres are filling this void to a certain extent, and this
course on broadcast technology is an attempt to train technical students in the vast area
of broadcasting. The contents range from the analog basics of audio and video, clearly
explaining the intricacies of signal generation, to modern trends in broadcasting such as
DAB and DRM in radio and DVB-H in television, as the world moves to mobile. We hope
this course, which includes an overview of all aspects of broadcasting such as production,
post production, transmission and archiving, will fulfil the needs of students and
professionals who are new to this industry.

Editor
CHARACTERISTICS OF SOUND AND ACOUSTICS
B.GHOSH – ADE
RSTI (T), BBSR

Nature of Sound
Sound is a longitudinal wave motion consisting of a train of compressions and
rarefactions travelling in a medium. When these waves strike the eardrum they are
converted into signals which are carried to the brain by the auditory nerves and are
finally interpreted into what we call sound.
It has all the characteristics of a wave, as explained below:
1) Amplitude – Defined as the intensity of compression and rarefaction produced in
the medium.
2) Frequency (f) – Defined as the number of successive compressions and rarefactions
occurring in one second, expressed in Hertz (Hz).
3) Time Period (T) – The time taken to complete one cycle, given by T = 1/f second.

(Figure-1: One cycle of a sound wave, showing maximum compression, maximum
rarefaction, the zero-pressure line, the amplitude and the wavelength (λ) against time.)

4) Phase – It indicates the state of motion at a particular instant relative to some
reference, expressed in terms of angle; one complete cycle equals a phase difference of
360 degrees. In terms of wavelength (λ) and time period (T), a phase difference of 90°
can be expressed as λ/4 or T/4.
5) Velocity – The distance travelled by the sound wave in one second, equal to
344 metres/second at 20°C and 332 metres/second at 0°C. The relation between
velocity and temperature is given by:
V2 = V1 × (T2/T1)^½
Where, V1 = velocity at T1 kelvin
V2 = velocity at T2 kelvin
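The temperature relation above can be sketched in a few lines of Python. The reference value of 332 m/s at 0°C is taken from the text; the function name is for illustration only.

```python
import math

def speed_of_sound(temp_celsius, v_ref=332.0, t_ref_celsius=0.0):
    """Speed of sound at a given temperature, scaled from a reference
    value using V2 = V1 * sqrt(T2 / T1), with temperatures in kelvin."""
    t1 = t_ref_celsius + 273.15
    t2 = temp_celsius + 273.15
    return v_ref * math.sqrt(t2 / t1)

# At 20 °C this comes out close to the 344 m/s quoted in the text
print(round(speed_of_sound(20.0), 1))
```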

Pressure and intensity of sound waves – Sound waves produce variations of pressure
in the medium in the form of compressions and rarefactions in quick succession. Sound
pressure variation is therefore expressed in newtons per square metre (N/m²), or
pascals (Pa). In terms of the micro-bar (dyne per sq cm), one Pa is equal to ten
micro-bars. In terms of energy, the intensity of sound waves is defined as the average
rate of flow of sound energy through a cross-sectional area of one square metre at
right angles to the direction of motion. It is expressed in watts per square metre (W/m²).

When the sound pressure is 20×10⁻⁶ Pa it gives a just audible sound, and this level
is called the 'threshold of hearing'. This sound pressure corresponds to a sound
intensity of 1 picowatt/m². The pressure level at which pain is felt is 63 Pa, of
intensity 10 W/m²; this level is called the 'threshold of pain'. All sound pressures
and intensities lie between the threshold of hearing and the threshold of pain.

The Decibel – The decibel (dB) is often used in electrical and acoustic measurements.
It is a number that represents the ratio of two values of a quantity, such as voltage,
on a logarithmic scale, and is used to scale a large measurement range down to a
smaller one. For voltages the relationship is:
dB = 20 × log(V1/V2)
where 20 is a constant, V1 and V2 are voltages, and log is the logarithm to base 10.
For powers P1 and P2, dB = 10 × log(P1/P2).
Examples:
1) The relationship in decibels between 100 volts and 1 volt:
dB = 20 × log(100/1) = 20 × 2 (the log of 100 is 2) = 40.
That is, 100 volts is 40 dB greater than 1 volt.
2) What is the relationship in decibels between 0.001 volt and 1 volt?
dB = 20 × log(0.001/1) = 20 × (-3) = -60.
That is, 0.001 volt is 60 dB less than 1 volt.
Similarly:
# If one voltage is equal to the other, they are 0 dB different.
# If one voltage is twice the other, they are 6 dB different.
# If one voltage is ten times the other, they are 20 dB different.

Examples of typical sound levels:

Type of sound             Pa or N/m²   micro-bar    W/m²        dB over T.O.H.*
1) Rustle of leaves       63×10⁻⁶      630×10⁻⁶     10⁻¹¹       10 dB
2) Whisper                20×10⁻⁵      200×10⁻⁵     10⁻¹⁰       20 dB
3) Ordinary conversation  63×10⁻⁴      630×10⁻⁴     10⁻⁷        50 dB
4) Normal speech          0.1          1.0          0.25×10⁻⁴   74 dB
5) Thunder                2.0          20           10⁻²        100 dB
6) Threshold of pain      63           630          10          130 dB

* (T.O.H. = Threshold of Hearing)
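The decibel relations above can be checked numerically. The sketch below uses the 20 µPa threshold of hearing and the 1 pW/m² reference from the text; the function names are illustrative.

```python
import math

P_REF = 20e-6  # threshold of hearing, Pa

def spl_db(pressure_pa):
    """Sound pressure level in dB over the threshold of hearing.
    Pressure is a voltage-like quantity, so the factor is 20."""
    return 20 * math.log10(pressure_pa / P_REF)

def intensity_db(intensity_w_m2, i_ref=1e-12):
    """Intensity level in dB over 1 pW/m² (power-like, factor 10)."""
    return 10 * math.log10(intensity_w_m2 / i_ref)

# Values from the table: whisper ≈ 20 dB, threshold of pain ≈ 130 dB
print(round(spl_db(20e-5)))     # whisper, from its pressure
print(round(intensity_db(10)))  # threshold of pain, from its intensity
```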


Sound & Sensitivity of the human ear:
a) The human ear is very sensitive to sound intensity and can detect sound as
low as 0.1 pW/m² (i.e. 10 dB below the threshold of hearing).
b) The ear cannot distinguish a difference of intensity of less than 1 dB
between two sounds.
c) The minimum level which can be comfortably detected over the threshold of
hearing is 3 dB for speech or music.
d) The ear possesses the characteristic of masking, that is, a louder sound
reaching the ear can suppress a weaker one.
e) The ear judges the direction of sound from the first sound received, even if
it is weaker.
Loudness & Phon – Loudness is defined as the intensity of sound as judged by the
ear. A higher intensity is needed at low frequencies than at high frequencies to
impart the same sensation of loudness: an intensity of 60 dB at 40 Hz and of 0 dB at
1000 Hz impart the same loudness. The intensity in dB, with reference to the
threshold of hearing, as perceived by the ear at 1000 Hz is called the phon (P). If
it is 0 dB the loudness is 0 phon; if it is 40 dB the loudness is 40 phon.

(Figure-2: Fletcher-Munson curves for loudness vs. frequency.)
Sone: It is found that a 10 dB increase in sound level corresponds
approximately to a perceived doubling of loudness. One sone is defined as the
loudness experienced by a person listening to a 1000 Hz tone at a loudness level of
40 phon. Accordingly, 50 phon corresponds to a loudness of 2 sones, 60 phon to
4 sones, etc. The relation between sone (L) and phon (P) is given by:
10 log L = (P - 40) log 2
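Solving the relation for L gives L = 2^((P − 40)/10), which a short sketch can confirm against the values quoted above (the function name is illustrative):

```python
def phon_to_sone(phon):
    """Loudness in sones from loudness level in phons, using
    10·log(L) = (P − 40)·log(2), i.e. L = 2**((P − 40)/10)."""
    return 2 ** ((phon - 40) / 10)

# 40, 50 and 60 phon give 1, 2 and 4 sones, as stated in the text
for p in (40, 50, 60):
    print(p, "phon ->", phon_to_sone(p), "sone")
```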
Frequency range for Speech: Audible frequencies range from 16 Hz to
20,000 Hz. For satisfactory transmission of speech two factors are very important.

1) Intelligibility – It is defined as the clearness of one's speech, determined
through the test of articulation. The person under test is made to speak syllables
in random order, which are recorded and heard by a group of persons with normal
hearing. The articulation efficiency should be about 90% for broadcast purposes
and 80% for telephone speech. It is found that intelligibility is mostly contained
in the high-frequency components (1.5 to 2.5 kHz) of speech (Figure-3).

2) Energy – It is found that about 80% of the total energy is transmitted even
when all frequencies above 1 kHz are suppressed. Conversely, suppressing all
frequencies below 1 kHz reduces the energy transmitted to 15% (Figure-3). The
energy in speech is thus contained mostly in the low frequencies.

(Figure-3: Articulation efficiency (%) and sound energy (% of total) against
cut-off frequency (1–5 kHz). (a) Articulation efficiency as a function of upper
cut-off frequency; (b) articulation efficiency as a function of lower cut-off
frequency; (c) sound energy as a function of upper cut-off frequency; (d) sound
energy as a function of lower cut-off frequency.)

Based on these results, 300–3400 Hz for telephone speech and 80–8000 Hz for
entertainment speech have been considered most adequate.

Overtones and Timbre: Sound waves produced by speech and musical
instruments are not pure sine waves but complex waves, consisting not only of
the fundamental frequencies (tones) but also of their harmonics and other
frequencies, called 'overtones'. The proportion of tones and overtones present in
the sound, which helps us to identify any particular voice, is called timbre. Some
examples of fundamental frequencies and their overtones are given below:

Sound Source           Range of fundamental    Overall frequency range
(voice/instrument)     frequencies in Hz       including overtones in Hz
1) Men                 110 – 1000              110 – 8000
2) Women               220 – 1500              220 – 10000
3) Harmonium           150 – 1200              150 – 16000
4) Flute               170 – 2200              170 – 15000
5) Violin              180 – 2500              180 – 15000

Intervals: An interval is defined as the ratio of two frequencies.
Example: The interval of 400 Hz and 100 Hz is 4.
Octaves: An interval of 1:2 is called an OCTAVE.
Example: One octave above 200 Hz is 400 Hz, and one octave above 100 Hz is
200 Hz. Two octaves above 100 Hz is 400 Hz. Or mathematically:
Number of octaves between two frequencies f1 and f2 = log2 [f2/f1].
Harmonics: A harmonic is an integer ratio between two frequencies; harmonics are
always integral multiples of the fundamental frequency.
Example: With respect to 100 Hz, a frequency of 200 Hz is the 2nd harmonic, and
a frequency of 400 Hz is the 4th harmonic.
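The three definitions above translate directly into code; a minimal sketch (function names are illustrative):

```python
import math

def interval(f2_hz, f1_hz):
    """Interval: the ratio of two frequencies."""
    return f2_hz / f1_hz

def octaves(f1_hz, f2_hz):
    """Number of octaves between f1 and f2: log2(f2/f1)."""
    return math.log2(f2_hz / f1_hz)

def is_harmonic(f_hz, fundamental_hz):
    """A harmonic is an integral multiple of the fundamental."""
    return f_hz % fundamental_hz == 0

print(interval(400, 100))     # 4.0
print(octaves(100, 400))      # 2.0 (two octaves)
print(is_harmonic(200, 100))  # True: 200 Hz is the 2nd harmonic of 100 Hz
```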
Pitch: It generally represents the perceived fundamental frequency of a
sound. Sound waves with a longer wavelength do not arrive at the ear as often
as shorter waves. The shorter the wavelength, the higher the frequency, and the
higher the pitch of the sound. In other words, short waves sound high; long
waves sound low.

(Figure-4: Wavelength, frequency and pitch.)
Acoustics and Reverberation
The term acoustics has been derived from the Greek word akoustos, meaning
"hearing." It is the area of science devoted to the study of the production,
transmission, reception, and effects of sound. In the field of broadcasting,
auditoriums and broadcast studios are the originating places of the programmes to
be broadcast live or recorded for future use, hence proper care should be taken in
their design and construction. Certain special treatments, called "Acoustic
Treatment", therefore need to be given to broadcast studios and auditoriums to
preserve the originality of the sound while extending maximum pleasure to the
listener. The conditions and design features of a broadcast studio or an
auditorium are discussed below.
Reverberation: As soon as sound waves originate from a source, they suffer
reflection, refraction, diffraction and absorption. In an auditorium or studio the
sound is received directly from the source as well as reflected from the walls,
floor, ceiling, etc. The sound persists for a noticeable time even after the
original sound stops, fading away gradually. This persistence of sound, caused by
repeated reflection, is called reverberation.

(Figure-5: Multiple reflections of sound waves between a source S and a
receiver R.)

Eliminating reverberation completely will result in a lifeless and unnatural
sound. Hence all natural sound in a hall or studio should include some proportion
of reverberation.

Reverberation Time (R/T): The time taken for sound energy in a room to
drop to 10⁻⁶ times (one millionth) of its initial value, i.e. 60 dB below its
original value.

(Figure-6: Growth, steady state and decay of the sound level in an enclosure.)
Some typical values of reverberation time (R/T) are:
1) Big concert hall : 2.0 seconds
2) Conference room : 0.5 second
3) Lecture halls : 0.3 second
4) TV studio : 0.5 second
5) Speech studios : 0.3 second
6) Music hall : 0.8 second
Factors affecting reverberation time:
1) Volume of the room.
2) Surface area of the room.
3) Absorption coefficients of the surfaces.
4) Velocity and wavelength of the sound.

Based upon the above factors, Prof. W. C. Sabine of Harvard University
derived a formula for the reverberation time T in seconds, given by:

R/T = 55.3 V / (c·a) ___________ (1)

Where, c = velocity of sound = 344 m/sec or 1120 ft/sec.
V = Volume of the room.
a = Total absorption.

Therefore, R/T = 0.049 V / a ________ (2) (in FPS units) &,

R/T = 0.161 V / a ________ (3) (in MKS units)

The total absorption 'a' depends upon the area of each surface and its
absorption coefficient, given by the equation:
a = Σ αS
or, a = α1S1 + α2S2 + … + αnSn __________ (4)
Where α1, α2, … are the absorption coefficients of surface areas S1, S2, …, each
defined as the ratio between the energy absorbed by a unit surface area and the
total energy received by that unit surface area.
Some typical values of absorption coefficients (for 500 Hz) are:
1) Open window .......................... 1.00
2) Carpet (1 cm thick) .................. 0.25
3) Curtain .............................. 0.15
4) Wooden chair ......................... 0.17
5) Acoustic tiles ....................... 0.55
6) Wooden door .......................... 0.05
7) Wooden floor ......................... 0.09
8) Glass panes .......................... 0.25
9) Audience ............................. 0.84

Based upon the above values of absorption coefficients and the known
surface areas of the items present in a room, the total absorption, including the
audience, can be calculated.
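Sabine's formula in MKS units can be sketched as below. The room dimensions and the surface mix are hypothetical; the coefficients are taken from the table above.

```python
def sabine_rt(volume_m3, surfaces):
    """Reverberation time R/T = 0.161·V / a (MKS units), where the
    total absorption a = sum of (area × absorption coefficient) over
    the room's surfaces, given as (area_m2, alpha) pairs."""
    a = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / a

# Hypothetical small speech studio, coefficients from the table above
rt = sabine_rt(
    volume_m3=200,
    surfaces=[
        (60, 0.55),   # ceiling: acoustic tiles
        (60, 0.25),   # floor: carpet
        (120, 0.15),  # walls: curtains
    ],
)
print(round(rt, 2))  # roughly the 0.3-0.5 s recommended for speech/TV studios
```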
Acoustical Design of Studios & Auditoriums:

While designing and constructing a broadcast studio or auditorium the
following aspects should be incorporated.
1) Reverberation Time (R/T): The acoustical design of a studio depends
upon its utilization. The reverberation time for a lecture hall or a speech studio
should be very low, while for a concert hall the R/T should be 1 to 2 seconds.
To achieve the proper R/T, the absorption of the studio should be satisfactory,
with walls and ceilings covered with suitable absorbents such as perforated
boards, felts, asbestos etc., and the floor covered with matting and carpets.
Furthermore, the absorbing material should not be concentrated in one area but
should be distributed at random, though not near the speaker or performer.
2) Sound Insulation: Insulation means preventing unwanted sound,
originating elsewhere, from entering the studio. This is done by using proper
absorbing and sound-insulating materials. Unwanted sound or noise in a studio
set-up may originate from:

 Outside the building.
 Inside the studio itself.
 Outside the studio but within the building.

a) Unwanted sound from outside the building: Caused by the presence of an
airport, busy road or railway traffic in the vicinity of the studio. This can be
minimised by providing sufficient set-back distance between the studio and the
noise source.
b) Unwanted sound from inside the studio itself: Caused by airflow in the
air-conditioning set-up, noise from fluorescent lights, cooling fans etc. The
airflow in the air-conditioning set-up should therefore have slow diffusion, and
the fluorescent lights should have their ballast chokes mounted separately
outside the studio.
c) Due to air-conditioning plants, diesel generators and lifts: AC plants
and diesel generators can transfer both structure-borne and airborne noise to the
studios. Structure-borne noise is avoided by locating them in separate blocks
with a structural isolation gap of 75 mm filled with damping materials such as
asphalt. Only flexible connections are used for linking these blocks with the
studio for running electrical cables, ducts etc., and the plants are mounted on
vibration-isolation pads. The water pipes for condenser cooling are also isolated
from the walls with flexible packing materials to avoid transmission of vibration.
The main supply and return ducts from the plants are connected to the studio
ducts through flexible canvas connections and are insulated internally with
sound-absorbing materials, e.g. glass wool, to avoid airborne noise.
Video and Audio Basics
D.Ranganadham, DDE
RSTI(T) - BBSR

Television broadcasting is concerned with the broadcasting of three main
signals: audio, video and data. The broadcasting industry in India is predominantly
analogue, but is fast turning digital because of its many advantages. The Phase
Alternating Line (PAL) standard is followed in India for analogue television, and
India has adopted the Digital Video Broadcasting (DVB) standard for digital
television broadcasting.

A TV camera outputs a video signal that is split into the three primary
colours: red, green and blue (RGB). The entire colour spectrum can be
represented by varying the intensities of these three primary colours. The video
signal can be converted into the following three forms:

1. Component Signal Y, (R-Y), (B-Y)

2. S Video (Two Signals Y & C)

3. Composite video (Only One Signal)

1. Component Signal Y, (R-Y), (B-Y): The analog component video signal
consists of three signals: one luminance signal Y and two colour difference
signals CB and CR, derived from the primary colours R, G, B generated at the
camera pre-amplifier outputs. The component signals are generated from RGB by
using a matrix with the following weights:

Y = 0.3R + 0.59G + 0.11B
CB = 0.56(B - Y)
CR = 0.71(R - Y)

(Fig 2: Generation of luminance and colour difference signals. The
gamma-corrected camera outputs R', G', B' are combined in a simple resistive
matrix with weights 0.3, 0.59 and 0.11 to form the luminance signal
Y' = 0.3R' + 0.59G' + 0.11B'; an inverting amplifier then supplies -Y' to two
adders, which form (R' - Y') and (B' - Y').)
The reasons for converting the primary R, G, B signals into these
component signals are:

Firstly, by converting into the component signals Y, CB and CR, the
bandwidth can be reduced by exploiting the characteristics of the Human Visual
System (HVS). The R, G and B signals from the camera each contain frequencies up
to 5 MHz, which amounts to a total bandwidth requirement of 15 MHz. In the
component representation, Y still occupies 5 MHz, but the bandwidth of the two
colour difference signals can be reduced to 1.5 MHz each. The separation of
luminance from chrominance is important because the human eye is not as
sensitive to chrominance information as it is to luminance; thus we can
band-limit the chrominance signals to about 1.5 MHz and still have enough
chrominance information for a pleasing picture. Band-limiting the chrominance
information reduces the bandwidth needed for the colour portion of the signal and
is thus a type of analog compression.

Secondly, upward compatibility can be maintained with black & white
receivers if component signals are used, since both the luminance signal Y and
the chrominance signals are sent. The PAL encoder at the broadcasting end
converts the RGB signals to the components Y, CB and CR, and the PAL decoder in
the colour receiver converts them back to RGB, while a B&W TV simply displays
the luminance signal Y.
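The matrixing described above can be sketched in a few lines. The luminance weights are those given in the text; the scale factors applied to (B−Y) and (R−Y) to form CB and CR are standard-dependent and omitted here, and the function name is illustrative.

```python
def rgb_to_components(r, g, b):
    """Matrix gamma-corrected R, G, B values (0.0-1.0) into luminance
    and the two unscaled colour-difference signals."""
    y = 0.3 * r + 0.59 * g + 0.11 * b
    return y, r - y, b - y

# White drives all three primaries fully: Y = 1, no colour difference,
# which is why a B&W receiver can display Y alone.
y, r_y, b_y = rgb_to_components(1.0, 1.0, 1.0)
print(y, r_y, b_y)
```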
2. Composite video: Composite video, as its name suggests, is a single
video signal that is a composite of the black-and-white information (Y) and the
colour information (C).

PAL Video

• 625 scan lines per frame, 25 frames per second (40 ms/frame).
• Aspect ratio 4:3.
• PAL uses the Y, U, V colour model.
• Luminance (Y) = 0.3R + 0.59G + 0.11B.
• In the PAL system there is a single colour sub-carrier, which the two signals
(R–Y) and (B–Y) must modulate independently.
• The two carriers are of the same frequency but displaced in phase by 90
degrees; hence PAL uses quadrature amplitude modulation (QAM). The two
modulated signals at 90 degrees to each other produce the resultant chrominance
signal, which is added to the luminance signal to form the Composite Colour
Video Signal (CCVS).
• The (R-Y) and (B-Y) chrominance signals may be recovered at the television
receiver by suitable synchronous demodulation.
• The sub-carrier is regenerated by a local oscillator in the receiver, and this
regenerated sub-carrier must have the same phase and frequency as the
transmitted sub-carrier.
• This is achieved by transmitting 10 cycles of the sub-carrier frequency on the
back porch of the H synchronizing pulse, known as the burst or colour burst.
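The quadrature modulation of the two colour-difference signals can be sketched as below. This is a simplified illustration only: PAL's line-by-line switching of the V axis is omitted, and the function name is an assumption.

```python
import math

F_SC = 4.43361875e6  # PAL colour sub-carrier frequency, Hz

def chroma_sample(u, v, t):
    """One instantaneous value of the chrominance signal: U and V
    modulate two carriers of the same frequency, 90 degrees apart,
    and the sum is added to luminance to form the CCVS."""
    return (u * math.sin(2 * math.pi * F_SC * t)
            + v * math.cos(2 * math.pi * F_SC * t))

# At t = 0 the cosine carrier peaks, so only the V component appears
print(chroma_sample(0.0, 1.0, 0.0))
```

A synchronous demodulator in the receiver multiplies the chrominance signal by locally regenerated sine and cosine carriers (phase-locked to the colour burst) to recover U and V separately.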
The broadcasting industry in India is fast turning digital because of the
following advantages.
 Superior technical quality.
 Lower operating cost through the use of compression technology and
improved system reliability.
 No regeneration loss.
 Easy signal storage and processing.
 Less susceptibility to interference and noise leading to reduction in
power requirements for digital transmission networks.
 Use of extensive error correction techniques leading to use of cheaper
receiving equipment.
 Conditional access system and easy encryption mechanism.
 Identical method of handling audio, video and data.
 Interactive broadcasting services.
 Data broadcasting.
 More channels in a given bandwidth with extensive use of compression
techniques.
 More programming choices, viewing convenience & new services,
packaged information delivery, shopping, games, education, banking.
 Flexibility in processing.
 High spectrum efficiency, including greater possibilities for frequency
re-use and the ability to support more programme transmissions per
RF channel.
 Scalability which means for example that the bit stream can be
devoted to a single High Definition Television (HDTV) quality signal or
multiple simultaneous Standard Definition Television (SDTV) feeds
featuring independent or inter-related programming.
 Conversion to digital enables convergence of broadcasting,
telecommunication and information technology
 Increased performance diversity, the ability to provide multiple
services in an existing single broadcasting service channel.
Digital Broadcasting Drawbacks
 Requires huge investments for Prasar Bharati to change the analogue
transmitters and the equipment in the majority of the studios.
 New frequencies are required for digital broadcasting, as the
coexistence of analogue and digital broadcasting for some time is a
necessity before the complete changeover.
 Users need new receivers, each costing more than Rs. 5000/-.
 The uncompressed digital video data rate is very high, whereas compressed
data, at very high compression ratios, is not very good for archiving.

Digital Video Signal: Digital video signals have been used for some time in
television studios, based on the original CCIR Standard CCIR 601, designated
ITU-R BT.601 today. This data signal is obtained as follows:
To start with, the video camera (Fig. 3) supplies the analog red, green and blue
(R, G, B) signals. These signals are matrixed in the camera to form the luminance
(Y) and chrominance (colour difference CB and CR) signals. The luminance
bandwidth is then limited to 5.75 MHz using a low-pass filter. The two colour
difference signals are limited to 2.75 MHz, i.e. the colour resolution is clearly
reduced compared with the brightness resolution. In analog television (NTSC,
PAL, SECAM), too, the colour resolution is reduced to about 1.5 MHz. The
low-pass filtered Y, CB and CR signals are then sampled and digitized by means of
analog/digital converters.

Fig. 3. Digitization of Luminance


and chrominance
The A/D converter in the luminance branch operates at a sampling
frequency of 13.5 MHz, and the two CB and CR colour difference signals are sampled
at 6.75 MHz each. This meets the requirements of the sampling theorem: there are
no signal components above half the sampling frequency.

(Fig. 4: Sampling of components in accordance with ITU-R BT.601.)

The three A/D converters can have a resolution of 8 or 10 bits.
With a resolution of 10 bits, this results in a gross data rate of 270
Mbit/s, which is suitable for distribution in the studio but much too high for
TV transmission via existing channels (terrestrial, satellite or cable). The
samples of all three A/D converters are multiplexed in the following order: CB
Y CR Y CB Y ….. In this digital video signal (Fig. 4), a luminance value thus
alternates with a CB value or a CR value, and there are twice as many Y values
as there are CB or CR values. This is called 4:2:2 resolution, as compared with
the resolution immediately after the matrixing, which was the same for all
components, namely 4:4:4. Within the data stream the start and the end of
the active video signal are marked by special code words called SAV (start of
active video) and EAV (end of active video). Between EAV and SAV there is the
horizontal blanking interval, which does not contain any information related to
the video signal, i.e. the digital signal does not contain the sync pulse. In
the horizontal blanking interval, supplementary information such as audio
signals or error protection information for the digital signal can be
transmitted.
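The 270 Mbit/s figure follows directly from the sampling rates and the sample resolution; a quick arithmetic check (the function name is illustrative):

```python
def gross_data_rate(y_fs_hz, c_fs_hz, bits):
    """Gross data rate of the multiplexed Y, CB, CR stream:
    (Y sampling rate + 2 × colour-difference sampling rate) × bits/sample."""
    return (y_fs_hz + 2 * c_fs_hz) * bits

# 4:2:2 sampling at 13.5 MHz / 6.75 MHz with 10-bit samples
rate = gross_data_rate(13.5e6, 6.75e6, 10)
print(rate / 1e6, "Mbit/s")  # 270.0 Mbit/s, as stated in the text
```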

Digital video is distributed through the Serial Digital Interface (SDI, the
serial CCIR 601 interface), which has become the most widely used interface
because a conventional 75-ohm BNC cable can be used. SDI carries uncompressed
digital video at a data rate of 270 Mbit/s for 10-bit quantisation. The SDI
waveform is symmetrical about ground and has an initial amplitude of 800 mV
peak-to-peak across a 75-ohm load. Parallel connection of digital equipment is
practical only for relatively small installations, and there is a clear need for
transmission over a single coaxial cable. This is not simple, as the data rate is
high, and if the signal were transmitted serially without modification, reliable
recovery would be very difficult. The serial signal must therefore be modified
prior to transmission to ensure that there are sufficient edges for reliable
clock recovery, to minimize the low-frequency content of the transmitted signal,
and to spread the transmitted energy spectrum so that radio-frequency emission
problems are minimized. The SDI signal is fed through 75-ohm coaxial cable with
BNC connectors.

Why use serial? In large facilities it is too difficult and expensive to route
parallel signals; one needs to be able to transmit over a single coax, and to
include audio data with the video to save on cabling and special audio devices.

Analog audio signals are available in balanced mono or stereo mode, but
digital audio signals are available in the AES/EBU digital audio format, either
as a discrete channel or embedded in serial digital video. It is a standard
defined by the Audio Engineering Society (AES) and the European Broadcasting
Union (EBU). Digital audio is a stream of bytes containing amplitude (volume)
data: when you are talking about samples, or a CD, you are talking strictly
about a series of bytes representing a sequence of amplitude values. Each AES
stream carries two audio channels, which can be either a stereo pair or two
independent feeds. The signals are a pulse-code-modulated data stream carrying
digitized audio. Each sample is quantized to 20 or 24 bits, creating an audio
sample word. Each word is formatted into a sub-frame, which is multiplexed with
other sub-frames to form the AES digital stream. The sampling rates range from
32 to 50 kHz. Common rates and applications include the following:
1) 32 kHz – used for radio broadcast links.
2) 44.1 kHz – used for CD players.
3) 48 kHz – used for professional recording & production.

AES/EBU Data Structure:

The stream is organised as a sequence of blocks, each of 192 frames (Frame 0 …
Frame 191); each frame consists of two sub-frames (Sub-Frame 1 and Sub-Frame 2,
one per channel), and each sub-frame is 32 bits long:

Bits 0–3    Preamble (X, Y or Z)
Bits 4–7    Auxiliary bits
Bits 8–27   Sample data – 20 bits (MSB at bit 27)
Bit 28      V = Validity
Bit 29      U = User data
Bit 30      C = Channel status
Bit 31      P = Parity

Two sub-frames make up a frame, which contains one sample from each of the two
channels, and frames are further grouped into 192-frame blocks. The X and Y
preambles indicate the channel identity for each sample, while Z indicates the
start of the next frame block. The final stream can be embedded into the
blanking interval of SDI video.

The sub-frame comprises the following:

Preamble: The preamble is a 4-bit synchronizing word used to identify the
channels and the start of a block. Channel 1 is identified by preamble X and
channel 2 by preamble Y, while the 192-frame block is identified by preamble Z
(Figure 14). The preambles violate the biphase mark coding scheme and so allow
easy identification of the preamble within the rest of the data.

Auxiliary Data Bits: When a 20-bit audio sample is used, the four least
significant bits (LSBs) can be used for auxiliary data. One application for these
auxiliary data bits is as a voice-quality audio channel to provide talkback.
Otherwise, these bits can carry the 4 LSBs of a 24-bit audio sample.

Audio Sample Data Bits: The audio sample data is placed between bits 4
and 27, with the MSB placed at bit 27, supporting a maximum sample of 24 bits.
If not all 24 bits are used for an audio data sample, the unused LSBs are set to
"0". Typically, within broadcast facilities an audio sample of 20 bits is used,
which allows for an auxiliary data channel within the 4 LSBs (bits 4–7).

The 20-bit audio sample is used for most applications within a broadcast
environment. However, a 24-bit audio sample is supported in AES/EBU by using the
sample bits from 4 to 27 and not providing any auxiliary data bits. The binary
audio data sample is 2's-complement encoded; this simple technique greatly
reduces the complexity of the audio hardware design.

Validity Bit (V): When the validity bit is set to zero, the sub-frame's audio
data is suitable for decoding to analog audio. If the validity bit is set to "1",
the audio sample data is not suitable for decoding to an analog audio signal.
Test equipment can be set up to ignore the validity bit and continue to use the
data for measurement purposes.

User Data Bit (U): The user data bits can be used to carry additional
information about the audio signal. The U bits from the 192 frames of a block can
be assembled to produce a total of 192 bits per block, which the operator can use
for purposes such as copyright information.

Channel Status Bit (C): The channel status bit provides information on various
parameters associated with the audio signal. These parameters are gathered from
the C bits of the 192 frames for each audio channel. The following table shows
the information carried within these bits.

Parity Bit (P): The parity bit is set such that bits 4–31 form even parity (an
even number of ones), used as a simple means of error checking to detect an
error within a sub-frame. Note that it is not necessary to include the
preambles, since they already have even parity.
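The parity rule can be sketched in a few lines. The bit values below are an arbitrary example, and the function name is illustrative.

```python
def even_parity_bit(subframe_bits):
    """Parity bit (bit 31), chosen so that bits 4-31 contain an even
    number of ones. `subframe_bits` holds bits 4-30 of the sub-frame
    (auxiliary + sample data + V, U, C)."""
    return sum(subframe_bits) % 2

# Arbitrary example: three ones among bits 4-30, so P must be 1
bits_4_to_30 = [1, 0, 1, 1] + [0] * 23
p = even_parity_bit(bits_4_to_30)
print(p)
assert (sum(bits_4_to_30) + p) % 2 == 0  # bits 4-31 now have even parity
```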
Digital audio is a serial data stream with no separate clock signal. In order
to recover the data, the receiver must extract the clock from the data stream,
which uses a simple coding scheme known as biphase mark coding. A transition
occurs at every bit period, and when the data value is "1" an additional
transition occurs at half the bit period. This ensures easy clock extraction from
the data and minimizes the DC component present within the signal. Since
transitions, rather than levels, represent the data values, the signal is also
polarity-insensitive.
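The coding rule can be sketched as follows, emitting two half-bit-period levels per data bit (the function name is illustrative):

```python
def biphase_mark_encode(bits, level=0):
    """Biphase mark coding: a transition at every bit boundary, plus an
    extra mid-bit transition when the data bit is 1. Returns two
    half-bit-period line levels per data bit."""
    out = []
    for bit in bits:
        level ^= 1            # transition at the start of every bit cell
        out.append(level)
        if bit:
            level ^= 1        # extra transition at half the bit period
        out.append(level)
    return out

print(biphase_mark_encode([1, 0, 1]))  # [1, 0, 1, 1, 0, 1]
```

Starting from the opposite line level simply inverts every output level, which is why the code is polarity-insensitive: the receiver decodes transitions, not absolute levels.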

AES/EBU is the most popular audio standard:

 It is a bit-serial communications protocol for transmitting digital audio
through a single transmission line.

 It provides up to two channels of 24 bits per sample, in both professional
and consumer modes.

 It provides two channels of audio data, a method for communicating control
and status information, and some error detection capability.

 Clocking information is derived from the AES/EBU bit stream and is thus
controlled by the transmitter. The standard mandates the use of 32 kHz,
44.1 kHz or 48 kHz sample rates, but some interfaces can be made to work at
other sample rates.

 Format: serial transmission of two channels of sampled and linearly
encoded data.

 Physical: shielded twisted pair (100 metres max.).

 XLR I/O: output – male pins with female shell; input – female pins with
male shell.

 XLR wiring: Pin 1 – shield, earth, ground; Pin 2 – signal; Pin 3 – signal.

 Electrical: 110-ohm source; 2 V to 7 V p-p across 110 ohms (balanced);
rise/fall time 5–30 ns; intrinsic jitter 0.025 UI.

 Sampling frequencies: 32 kHz, 38 kHz, 44.1 kHz, 48 kHz, 96 kHz, 192 kHz.


MODULATION
A.C. Subudhi, DDE,
RSTI (T), BBSR

The purposes of modulation are:

a. To allow the signal to be radiated with ease
b. To translate the wide-band baseband signal into a narrow-band signal
c. To reduce the impractically large antenna size required at baseband
d. To enable frequency-division multiplexing
e. To overcome the limitations of the equipment
f. To overcome the noise and interference present in the baseband

TYPES OF MODULATION :-

Normally two types of modulation are used:


1. Analog Modulation
2. Digital Modulation

1. Analog Modulation

Parameter of the    Parameter of the   Definition                             Name of the
Base-Band Signal    Carrier Signal                                            Modulation Scheme

Amplitude           Amplitude          The amplitude of the carrier changes   Amplitude
                                       in accordance with the amplitude of    Modulation
                                       the base-band signal.
Amplitude           Frequency          The frequency of the carrier changes   Frequency
                                       in accordance with the amplitude of    Modulation
                                       the base-band signal.
Amplitude           Phase              The phase of the carrier changes in    Phase
                                       accordance with the amplitude of the   Modulation
                                       base-band signal.
A. Amplitude Modulation:

Whenever the amplitude of the carrier changes in accordance with the
amplitude of the baseband signal, the modulation is known as amplitude
modulation.

(Figure-1) The schematic of an Amplitude Modulator.

Amplitude modulation is of four types:

a) Double sideband full carrier modulation:-

In this type of modulation the spectrum of the AM signal contains a
carrier, one upper sideband and one lower sideband. The presence of the carrier
makes the receiver simpler, since a simple envelope-detection scheme can be used
to recover the baseband signal. This form of amplitude modulation is used for
radio transmissions.

(Fig-2) Spectrum of double sideband full carrier system.
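As a quick numerical check of the DSB full-carrier spectrum just described, the following sketch (using NumPy; the carrier and tone frequencies are arbitrary illustrative choices) generates a tone-modulated AM signal and reads the carrier and the two sidebands directly off its FFT:

```python
import numpy as np

fs = 10000          # sample rate (Hz); 1 s of signal gives 1 Hz FFT bins
fc, fm = 1000, 100  # carrier and baseband tone frequencies (Hz)
m = 0.5             # modulation index
t = np.arange(0, 1, 1 / fs)

# DSB full-carrier AM: s(t) = (1 + m*cos(2*pi*fm*t)) * cos(2*pi*fc*t)
s = (1 + m * np.cos(2 * np.pi * fm * t)) * np.cos(2 * np.pi * fc * t)

# Normalized magnitude spectrum: a cosine of amplitude A shows up as A/2
spectrum = np.abs(np.fft.rfft(s)) / len(t)

for f in (fc - fm, fc, fc + fm):
    print(f, "Hz:", round(spectrum[f], 3))  # bins are 1 Hz wide
```

Running this shows the carrier at fc with amplitude 0.5 and the two sidebands at fc ± fm, each with amplitude m/4 = 0.125, exactly the three spectral lines of Figure 2.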
b) Double sideband suppressed carrier modulation:-

In this type of modulation scheme the spectrum of the DSB-SC signal
contains one upper sideband and one lower sideband. As the carrier is absent,
synchronous (coherent) demodulation must be used to detect the baseband signal.
This modulation scheme was used for defence communications.

(Figure-3) The spectrum of double sideband suppressed carrier system.

c) Single sideband modulation:-

In this type of modulation scheme the spectrum of the SSB signal
contains either the upper sideband or the lower sideband. As the carrier is absent
in this scheme as well, synchronous demodulation is used to detect the baseband
signal. This modulation scheme is used for HAM radio.

(Figure-4) The spectrum of single sideband modulation system.
d) Vestigial sideband modulation:-

In this type of modulation scheme the spectrum of the VSB signal
contains the carrier, one full upper sideband and a vestige of the lower sideband.
Since the carrier is retained, envelope detection can be used to recover the
baseband signal. This modulation is used for the transmission of the video signal
in terrestrial mode.

(Figure-5) The spectrum of vestigial sideband modulation system.

Advantages of Amplitude Modulation:

1. It is the simplest modulation scheme.
2. It requires less bandwidth.

Disadvantages of Amplitude Modulation:

1. It is more prone to noise.
2. Because of its limited bandwidth, high-fidelity sound transmission is
not possible in AM.
3. Day and night coverage is not uniform.
4. Coverage cannot be improved beyond a certain limit.
5. Earth radials are needed for medium-wave transmissions to increase
the coverage area.
6. Coverage depends upon the conductivity of the earth.
B. Frequency Modulation:
Whenever the frequency of the carrier changes in accordance with the
amplitude of the baseband signal, the modulation is known as frequency
modulation.

(Figure-6) The schematic of frequency modulation.

Frequency modulation is divided into two types:

a) Narrow-band frequency modulation:

In this type of frequency modulation the spectrum of the NBFM signal
contains only two sidebands. The modulation index is much less than one. It
is used for low-quality speech transmission over short-haul FM links.

b) Wide-band frequency modulation:

In this type of frequency modulation the spectrum of the WBFM signal
contains an infinite number of sidebands. This modulation is used for audio
transmission in terrestrial mode, and for analog audio and video transmission
in satellite mode.

(Figure-7) Spectrum of FM signal.


Advantages of Frequency Modulation:

1. FM shows immunity to atmospheric and man-made noise.
2. FM transmission is independent of soil conductivity.
3. FM signal coverage is uniform by day as well as by night.
4. FM signal coverage is independent of frequency.
5. The carrier power does not vary with modulation.
6. Coverage depends upon the height of the tower, the ERP and the channel spacing.
7. Multiplex operation is possible in FM.
8. Stereo/RDS/traffic-signal operation and other value-added services are
possible in FM.

Disadvantages of Frequency Modulation:

1. The FM signal requires more bandwidth.
2. The FM signal requires complex circuitry for demodulation.
3. The cost of the receiver is high in comparison to AM.

PHASE MODULATION:

Whenever the phase of the carrier changes in accordance with the
amplitude of the baseband signal, the modulation is known as phase modulation.
This modulation scheme is used for speech transmission in short-distance
communication.

(Figure-8) The schematic of phase modulation, showing the phase-modulated
carrier wave.


2. Digital Modulation:

The requirements of a digital modulation scheme are:

1. It should be able to handle a high bit rate.
2. It should provide a minimum probability of error.
3. It should provide resistance to noise and interference.
4. It should work with low transmitter power.
5. It should provide good spectral efficiency.
6. It should keep the complexity of the circuit low.

Definition of various modulation schemes

Normally the carrier has three parameters: amplitude, frequency and phase.

Binary baseband format: NRZ unipolar; carrier: co-sinusoidal.
Whenever the amplitude of the carrier changes in accordance with the amplitude
of the NRZ unipolar signal, the scheme is Amplitude Shift Keying (ASK), also
called On-Off Keying (OOK) or Interrupted Carrier Wave (ICW).

Binary baseband format: NRZ bipolar; carrier: co-sinusoidal.
Whenever the frequency of the carrier changes in accordance with the amplitude
of the NRZ bipolar signal, the scheme is Frequency Shift Keying (FSK).

Binary baseband format: NRZ bipolar; carrier: co-sinusoidal.
Whenever the phase of the carrier changes in accordance with the amplitude of
the NRZ bipolar signal, the scheme is Phase Shift Keying (PSK).

One symbol = 2 binary bits

Two binary bits provide 4 symbols (00, 01, 11, 10); the carrier is co-sinusoidal.

Whenever the phase of the carrier changes in accordance with the amplitude of
the symbol, the scheme is Quadrature Phase Shift Keying (QPSK).

Whenever both the amplitude and the phase of the carrier change in accordance
with the amplitude of the symbol, the scheme is Quadrature Amplitude
Modulation (QAM).

Concept of Bandwidth:

The bandwidth is defined as the range of positive frequencies over which
the signal energy is concentrated. In a digital communication system the speed at
which data are transmitted is called the data rate or bit rate. For an M-ary
digital signal, the bit rate and bandwidth are related by

Bit rate = 2 x bandwidth x log2 M

where M = number of amplitude states of the digital signal.
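The relation above can be evaluated directly. The sketch below (function name and channel values are illustrative) compares binary and 4-level signalling on the same channel:

```python
from math import log2

def max_bit_rate(bandwidth_hz, m_states):
    """Noiseless-channel capacity: bit rate = 2 x bandwidth x log2(M),
    where M is the number of amplitude states of the digital signal."""
    return 2 * bandwidth_hz * log2(m_states)

# A 4 kHz channel: binary signalling (M = 2) versus 4-level signalling (M = 4)
print(max_bit_rate(4000, 2))   # 8000.0 bit/s
print(max_bit_rate(4000, 4))   # 16000.0 bit/s
```

Doubling the number of states from 2 to 4 doubles the bit rate for the same bandwidth, which is exactly the trade-off QPSK exploits over binary PSK later in this chapter.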

Example: let a binary bit have a time period of Tb seconds.

(Figure-9-a) Spectrum of a binary bit of duration Tb. (Figure-9-b) The
bandwidth of the signal is 1/Tb.
A. Amplitude Shift Keying

(Figure-10) Block diagram of an ASK transmitter.

The block diagram of the ASK transmitter is shown above; the input binary
stream is in 1-or-0 form. When a 1 is given to the mixer, the output of the mixer
is A cos(ωct); when a 0 is given to the mixer, the output of the mixer is 0. Hence
the output of the mixer is either A cos(ωct) or 0. The purpose of the band-pass
filter is to shape the pulses and limit the spectral spreading. The timing diagram
of the ASK waveform is shown below.

(Figure-11) Timing Diagram


Frequency Domain Analysis of the ASK Signal

When a pulse of amplitude 1 unit is multiplied by the carrier, the output
is A cos(ωct). The spectrum of the ASK signal in the frequency domain is:

(Figure-12) Positive spectrum of the ASK signal.

Hence the bandwidth of the ASK signal is 2/Tb. The disadvantage of ASK
is that it is more prone to noise and interference.
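The on-off behaviour of the ASK (OOK) transmitter can be sketched in a few lines. NumPy is assumed, and the carrier frequency and sample count are arbitrary illustrative choices, not values from the text:

```python
import numpy as np

def ask_modulate(bits, fc=4, samples_per_bit=100):
    """On-off keying sketch: a '1' transmits A*cos(wc*t) for one bit
    period, a '0' transmits nothing (A = 1 assumed; fc in cycles/bit)."""
    t = np.arange(samples_per_bit) / samples_per_bit
    carrier = np.cos(2 * np.pi * fc * t)
    # Multiplying the carrier by each bit reproduces the mixer's behaviour
    return np.concatenate([b * carrier for b in bits])

s = ask_modulate([1, 0, 1])
print(s.shape)                            # (300,)
print(float(np.max(np.abs(s[100:200]))))  # 0.0 -- the '0' bit interval is silent
```

The middle 100 samples (the 0 bit) carry no energy at all, which is precisely why a noise burst in that interval is so easily mistaken for a 1.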

B. Frequency Shift Keying

(Figure-13) Block diagram of an FSK transmitter: the NRZ unipolar signal
drives one mixer directly and, through an inverter, a second mixer; the mixers
are fed by carriers (fc + Δf) and (fc - Δf) respectively, and their band-pass-filtered
outputs, A cos((ωc + Δω)t) and A cos((ωc - Δω)t), are combined in a summer
circuit to produce the FSK signal.

Whenever the input to the FSK modulator is 1, the output is
A cos((ωc + Δω)t); when the input is 0, the output is A cos((ωc - Δω)t). Hence
the output of the FSK modulator shifts between A cos((ωc + Δω)t) and
A cos((ωc - Δω)t).

(Figure-14) Timing Diagram of FSK signal

(Figure-15) Spectrum of FSK Signal

It is seen from the spectrum that the FSK signal occupies a
bandwidth of 2(Δf + 1/Tb). Its disadvantage is that it occupies more bandwidth.
C. Phase Shift Keying

(Figure-16) Block diagram of a PSK transmitter.

Whenever the input to the PSK modulator is 1, the outputs of the
comparator and the PSK modulator are 1 and A cos(ωct) respectively. Whenever
the input is 0, the outputs of the comparator and the PSK modulator are -1 and
-A cos(ωct) respectively. It is seen that the PSK signal shifts between A cos(ωct)
and -A cos(ωct). The timing diagram of the PSK signal is shown below.

(Figure-17) Timing Diagram of PSK signal

(Figure-18) Spectrum of PSK signal


It is seen from the spectrum of PSK that the bandwidth of the PSK signal is
2/Tb, whether the transmitted bit is 1 or 0.
D. Quadrature Phase Shift Keying :

(Figure-19) The block Diagram of QPSK signal

As it is required to carry out QPSK modulation, bits are first
converted to symbols: two bits make one symbol. If the bit duration is Tb,
then the bandwidth of the bit is 1/Tb. The symbol duration is 2Tb, hence the
bandwidth of the symbol before modulation is 1/(2Tb) and after modulation the
bandwidth is 1/Tb.

Input Bit Stream (Symbol)    Output QPSK Signal
0 0 (-1 -1)                  -A cos(ωct) - A sin(ωct)
0 1 (-1  1)                  -A cos(ωct) + A sin(ωct)
1 1 ( 1  1)                  +A cos(ωct) + A sin(ωct)
1 0 ( 1 -1)                  +A cos(ωct) - A sin(ωct)

It is seen that the QPSK signal requires less bandwidth than the PSK
signal; hence it finds wide application where bandwidth must be conserved, in
satellite as well as terrestrial communication.
MICROPHONES AND AMPLIFIERS
Introduction
A microphone is an acoustic-to-electric transducer or sensor that
converts sound into an electrical signal. A microphone may be passive or active.
The electrical power output of a passive microphone is derived solely from the
acoustic power it absorbs, while an active microphone controls an external source
of power.
Further microphones have been classified into two categories

1) PRESSURE TYPE: Only one side of the diaphragm is exposed to the
striking sound waves. The microphone consists of a box enclosing a volume of air
held at static atmospheric pressure. The diaphragm forms the 'lid' of the box; it
can move in response to sound waves, and acts as a simple pressure detector that
compares the fixed internal air pressure with the varying external pressure
caused by the sound. Some examples of pressure-type microphones are the
dynamic microphone, the condenser microphone and the carbon microphone.
2) PRESSURE-GRADIENT TYPE: In these microphones the diaphragm
mounting is open to passing sound waves on both sides. The diaphragm will only
move when there is a pressure difference (or pressure gradient) between the
front and rear faces, which depends on the path lengths and phase differences of
the passing sound waves, giving rise to the following conditions:
a) Sound source directly to the side of the diaphragm: the sound waves are
identical at the front and back, the net pressure difference is nil and the
diaphragm does not move. There is then no electrical output either, so the mic is
almost deaf to side sounds.
b) Sound source moved around to the front of the mic: the path length for
sound waves arriving at the back of the mic is longer than for those arriving at
the front. The resulting phase difference creates a pressure difference, which
moves the diaphragm and produces an electrical output signal. The maximum
output is generated when the sound source is directly in front of (or behind) the
diaphragm, where the path difference is greatest. An example of a pressure-
gradient microphone is the ribbon microphone.
Characteristics of Microphones
1) Sensitivity: It is defined as the output of a microphone in millivolts
(or in dB below 1 volt) for a sound pressure of 1 microbar (0.1 Pa) at 1000
Hz. It is given by:
S = 20 log10(1/E0), where S = sensitivity in dB below 1 V, E0 = output voltage
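As a worked example of the sensitivity formula, the sketch below converts an output voltage into dB below 1 V (the function name and the 30 µV example value, which matches the dynamic-microphone figure given later in this chapter, are illustrative):

```python
from math import log10

def sensitivity_db(output_volts):
    """Microphone sensitivity in dB below 1 V for a 0.1 Pa (1 microbar)
    tone at 1 kHz, following S = 20*log10(1/E0)."""
    return 20 * log10(1 / output_volts)

# A dynamic microphone producing 30 microvolts -> about 90 dB below 1 V
print(round(sensitivity_db(30e-6), 1))  # 90.5
```

The same formula gives about 110 dB below 1 V for the 3 µV output of a ribbon microphone, matching the characteristics listed later.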
2) Signal-to-noise ratio (S/N):
It is defined as the ratio, in dB, of the output of the microphone at a sound
pressure level (SPL) of 1 microbar to its output in the absence of sound. It is
given by:

S/N = 20 log10 (output in presence of sound / output in absence of sound)
3) Frequency response:
It is defined as the output level or sensitivity of the microphone over its
operating range, from the lowest to the highest frequency. Although the audible
frequency range of sound is 20 to 20,000 Hz, the frequency response is judged for
flatness from 40 to 15,000 Hz. The graph below has frequency in hertz (Hz) on the
x-axis and relative response in decibels (dB) on the y-axis.

(Left) Flat frequency response. (Right) Shaped frequency response.


A microphone whose output is equal at all frequencies has a flat
frequency response. They reproduce a variety of sound sources without changing
the original sound. Some microphones have peaks or dips in certain frequency
areas known as shaped frequency response. A shaped response is usually designed
to enhance or restrain a specific sound source in a particular application. For
example, a microphone may have a peak in the 2 – 8 kHz range to increase
intelligibility for live vocals which is called a presence peak.
4) Directivity:
The directivity (D) of a microphone is defined as the ratio, in dB, of its actual
output when placed in the direction of maximum response to the output which
an omni-directional microphone would have given in the same direction. It is
given by:

D = 20 log10(E/E0)

where E = actual output in the direction of maximum response, and E0 = output
which an omni-directional microphone would have given in the same direction.
Microphone Polar patterns.
The polar patterns represent the locus of points that produce the same
signal level output in the microphone if a given sound pressure level is generated
from that point. Polar pattern of microphone indicates its sensitivity to sounds
arriving at different angles about its central axis. It also determines the
Directivity of the Microphone
Types of Polar Patterns

i) Omni-directional
Omni-directional microphones are sensitive to sound from all directions. They
are good for picking up the ambience and reverb of rooms and tend to sound very
natural and open even when placed close to instruments. Omni microphones do
not exhibit any proximity effect, but obviously are not good when separation is
needed.

ii) Figure of 8 (bi-directional)
Figure-of-8 microphones pick up from the front and rear and have
null points to either side. They are good for recording two vocalists facing each
other, or for recording a source while still capturing the ambience of the room.

iii) Cardioid
Cardioid microphones are directional and have a heart-shaped polar
pattern. This means they pick up sound mainly from the front and are least
sensitive to sound from the rear (the null point).

They can be positioned to pick up the instruments that you want to
record and "ignore" the unwanted ones. Cardioid mics do, however, suffer from
the proximity effect: the microphone boosts the bass frequencies when it is close
to the object it is recording, resulting in a boomy sound.

iv) Hyper-cardioid
It has a pick-up pattern similar to that of the cardioid mics but is more
directional and does not pick up as much from the sides. Hyper-cardioid
microphones are good at separating specific sounds from multiple sound sources.
They are tightly focussed and are good for drum kits, as they can concentrate on
specific drums and reject spill from other sources.

5) Output Impedance
It is an important parameter of a microphone, used to determine which type of
matching transformer would be required to efficiently transfer power from the
microphone to the transmission line and then ultimately to the amplifier. It is
expressed in Ω (ohms). Example: a dynamic microphone has a low output
impedance and hence uses a built-in step-up transformer to match the line.

DIFFERENT TYPES OF MICROPHONES

1) Dynamic microphone

(Figure: cross-section of a dynamic microphone - diaphragm on a suspension,
coil attached to it, annular permanent magnet (N-S poles), output leads.)

It works on the principle of Faraday's law, which states that an
electrical current is induced in a conductor when it is moved through a
magnetic field. The magnetic field within the microphone is created using
permanent magnets. A coil of wire is attached to a thin diaphragm and placed
within the permanent annular ring magnet. When sound pressure waves arrive
at the diaphragm it vibrates; the coil attached to it also vibrates within the
magnetic field, generating an output voltage proportional to the incoming sound.
This output voltage is very low - typically -70 dBV - and is further amplified.
Characteristics
1) Sensitivity: 30 microvolts (90 dB below 1 volt) for a sound pressure level
of 0.1 Pa.
2) Signal-to-noise ratio: 30 dB.
3) Frequency response: 60 Hz to 8000 Hz.
4) Directivity: omni-directional, or cardioid when used in series with a
ribbon microphone.
Applications: 1) Speech/PA systems and dramas (used in the cardioid pattern
in series with a ribbon microphone).
2) Electrostatic or Condenser Microphones: - (Pressure Type)

Principle & Working.

This type of microphone converts pressure fluctuations into electrical
potentials by changing the capacitance of a capacitor. When the capacitance
changes, the charge on the capacitor tends to remain the same. The voltage and
capacitance are given by

V = Q/C, and C = kA/d ----(1)

where V = voltage across the capacitor, Q = charge in coulombs, C = capacitance
in farads, A = area of the plates, d = distance between the plates, and k = the
dielectric constant.

From expression (1) we have

V = Q/(kA/d) = Qd/(kA) ----(2)

Since Q, k and A are constant, Q/(kA) = K (a constant), therefore

V = K x d ----(3)

Hence any change in the distance (d) due to variation in sound pressure
varies the voltage (V). The two plates of the capacitor are the diaphragm
(movable) and the back plate (fixed).

Phantom powering of condenser microphones: Phantom power is a DC voltage
(12-48 volts) used to power the electronics of a condenser microphone. This DC
voltage is supplied through the microphone cable by a phantom-power mixer or
by some type of in-line external source. For example, Pin 2 is at 48 V DC and
Pin 3 is at 48 V DC, both with respect to Pin 1, which is ground (shield). Since
the voltage is exactly the same on Pins 2 and 3, phantom power has no adverse
effect on the microphone's signal.

Characteristics
1) Sensitivity: very low; hence a built-in amplifier is used to raise the output
to 3 mV (50 dB below 1 V) at a sound pressure of 0.1 Pa (1 microbar).
2) Signal-to-noise ratio: high, about 40 dB.
3) Frequency response: excellent, 40 Hz to 15,000 Hz.
4) Directivity: basically omni-directional.
Applications: professional high-fidelity recording; good for music purposes.
3) ELECTRET MICROPHONE: (Pressure Type)

It is also a capacitor microphone, but with a built-in charge facility. An
insulating material like Teflon can trap and retain a large quantity of fixed
charge. A thin layer of negatively charged Teflon is coated on the microphone
back plate. A positive charge is induced on the diaphragm due to this negative
back-plate charge, which ultimately establishes an electric field across the gap,
resulting in a terminal voltage that varies in accordance with the sound-pressure
variation.

(Figure: an electret microphone - metal diaphragm on supporting fixtures,
Teflon layer over the back plate, insulator.)

Characteristics:
1) Similar to those of a capacitor microphone.
2) No separate bias supply needed.
Applications: being very light, due to the absence of a separate bias supply, it is
used as a tie-clip microphone.

4) RIBBON MICROPHONE (Pressure-Gradient Type)

(Figure: a ribbon microphone - ribbon foil suspended between the poles of a
permanent magnet, with a built-in matching transformer at the output;
figure-of-8 pattern, picking up in front and behind.)

Microphones operated by the gradient of pressure are called pressure-
gradient type, e.g. the ribbon microphone. The output voltage is proportional to
the instantaneous difference in pressure between the two sides of the diaphragm.
These microphones are also known as "velocity microphones". A ribbon
microphone consists of a strip of aluminium foil a few microns thick, a few cm
long and 2 to 4 mm wide, suspended so as to vibrate between the poles of a
permanent magnet. A built-in matching transformer is used to step up the
impedance of the microphone.

Characteristics
1) Sensitivity: about 3 μV (110 dB below 1 V) for a sound pressure level of 0.1 Pa.
2) Signal-to-noise ratio: high, about 50 dB.
3) Frequency response: excellent, 20 Hz to 12,000 Hz.
4) Directivity: bi-directional (figure of 8).
Applications: a) Suitable for dramas, due to its bi-directional property.
b) Good for recording two vocalists facing each other.
5) Wireless Microphones:
A wireless microphone system consists of a microphone
connected to a miniature radio transmitter, and a receiver designed to receive
only that signal. Some are fixed-tuned - that is, they use a quartz crystal to
determine the operating channel. Most modern products are tuneable: they
add a frequency-synthesizer circuit to allow multiple operating channels
from a single crystal. The output is designed for connection directly to the
microphone or line input of a mixer.
Wireless microphone transmitters are available in three basic
packages:

1) Handheld wireless microphones:
These microphones have conventional microphone elements mounted on a
handle into which a miniature radio transmitter and microphone preamp are
built.

(Figure: a handheld transmitter - microphone element, antenna, RF insulators
and internal PC boards.)
2) Plug-on transmitters -
It has a female XLR connector attached to a
compact body that contains the transmitter. Its internal battery provides
phantom power to the microphone and power to the transmitter. A plug-on
transmitter allows the use of virtually any microphone compatible with its
powering circuit.

(Figure: a plug-on transmitter attached to a mic, and a body-pack transmitter
showing antenna, element, insulator and microphone.)
3) Body-pack transmitters-
They allow the connection of any professional electret
or dynamic lavalier microphone (a miniature microphone designed to be
pinned or clipped to an article of clothing worn by the performer). The
transmitters are usually a bit larger and contain the same electronics as the
handheld transmitter. Non-lavalier mics and line-level sources may also be used
with body-pack transmitters with the appropriate wiring adapters - this can be
a good way to send sound outside to an overflow system for special events.
Wireless Microphone Receiver:
The receiver is the most important component of a wireless
microphone system, having an operating range of 300 to 600 ft (100-200
meters) under ideal conditions. The limiting factors in wireless microphone
performance are:
1) Interference, 2) Reflections, and 3) Range. Operating a wireless
microphone beyond these limits will degrade its performance.

(Figure: signal flow in a UHF wireless microphone receiver - antenna,
band-pass filter (828-864 MHz), RF amplifier, first mixer fed by the first
local oscillator (69-72 MHz) via a x12 multiplier, 244 MHz IF amplifier and
filter (243.76-244.26 MHz), second mixer fed by the second local oscillator
(233.3 MHz), 10.7 MHz IF amplifier and filter (10.6-10.8 MHz), detector,
audio amplifier, and matching transformer to the XLR output.)
The signals picked up by the antenna are sent through a
broadband filter that attenuates signals far off frequency, are amplified, and are
fed to the first mixer. A local oscillator, followed by a frequency-multiplier stage,
also feeds the mixer. On the heterodyne principle, the two signals "beat"
together to produce new signals at the sum and difference of their original
frequencies. The sum frequency is filtered out, and the difference signal, called
the "intermediate frequency" or IF, is amplified and band-pass filtered again to
remove more interfering signals. The operating frequency is fixed for
crystal-controlled receivers and tuneable for synthesised (PLL-controlled) receivers.

Microphone Placement Techniques


 Always use a microphone with a frequency response suited to the
frequency range of the sound.
 Adopt a trial-and-error method. Place the microphone at various distances and
positions until you find a spot where you hear, from the studio monitors, the
desired tonal balance and the desired amount of room acoustics. As an
audio engineer you should be satisfied with the sound quality.
 Place the microphone very close to the loudest part of the instrument, or
isolate the instrument, whenever you encounter poor room acoustics or
pick-up of unwanted sounds.
 Always experiment with microphone choice, placement and isolation in
order to minimize undesirable sounds. Microphone technique is a matter of
personal taste.
1) Vocal Recording
Recording a choral group or vocal ensemble: vocalists can
be made to circle around an omni-directional microphone, or two cardioid
microphones positioned back to back can be used for the same application.

Proximity effect: defined as the increase in bass response with most
unidirectional microphones when they are placed close to an instrument
or vocalist (within 1 ft). Remedies: (1) roll off low frequencies at the
mixer, (2) use a microphone designed to minimize the proximity effect,
(3) use a microphone with a bass roll-off switch, or (4) use an
omni-directional microphone (which does not exhibit the proximity effect).

For a single vocalist an omni-directional microphone may be used. If the
singer is in a room whose ambience and reverb add to the desired effect, then the
closer the vocalist is to the microphone, the more direct sound is picked up
relative to the ambience.

The 3-to-1 Rule

When it is necessary to use multiple microphones, or to use them
near reflective surfaces, the resulting interference effects may be minimized by
using the 3-to-1 rule. For multiple microphones the rule states that the distance
between microphones should be at least three times the distance from each
microphone to its intended sound source.

(Figure: two vocalists, each 1 ft from their own microphone, with the two
microphones 3 ft apart.)
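The rule is simple enough to express as a one-line check. The function name and the example distances (matching the two-vocalist figure) are illustrative:

```python
def satisfies_3_to_1(mic_spacing_ft, source_distance_ft):
    """3-to-1 rule: the distance between microphones should be at least
    three times each microphone's distance to its own sound source."""
    return mic_spacing_ft >= 3 * source_distance_ft

print(satisfies_3_to_1(3.0, 1.0))   # True  -- mics 3 ft apart, sources at 1 ft
print(satisfies_3_to_1(2.0, 1.0))   # False -- spacing too small
```

When the check fails, each microphone picks up a significant delayed copy of the other's source, and summing the channels produces the comb-filter interference the rule is meant to avoid.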

 Area Coverage
The application of choir microphones falls into the category known as
"area" coverage. Rather than one microphone per sound source, the object is to
pick up multiple sound sources (or one "large" sound source) with one or more
microphones. Obviously, this introduces the possibility of interference effects
unless certain basic principles (e.g. the "3-to-1 rule") are followed.
 Each microphone for a typical choir should be placed a few feet
in front of, and a few feet above, the heads of the first row.
 Microphones should be centred in front of the choir and aimed at the
last row.
 Spacing between the microphones for each lateral section should be
approximately 6 to 9 feet.

(Figure: choir microphone positions, top and side views, showing 2-3 ft
spacings in front of the vocalists and 3-6 ft microphone heights.)

 Stereo Microphone Techniques
This is the use of two or more microphones to create a stereo image,
which will often give depth and spatial placement to the overall recording.
There are a number of different methods for stereo; three of the most popular are:

1) Spaced pair (A/B): This technique uses two cardioid or omni-directional
microphones spaced 3-10 ft apart, panned in a left/right configuration, to
capture the stereo image of the source. The distance between the two
microphones depends on the physical size of the sound source. The drawback of
A/B stereo is the relatively large distance between the microphones and the
resulting difference in sound-arrival times. (Figure: spaced pair (A/B), top view.)

2) Coincident or near-coincident pair (X-Y configuration): This technique uses
two cardioid microphones of the same type placed either as close together as
possible (coincident) or within 12 inches of each other (near-coincident), facing
each other at an angle ranging from 90 to 135 degrees, depending on the size of
the sound source and the particular sound desired. This technique works well
but may be limited if the sound source is extremely wide. (Figure: X-Y
configuration.)

3) Mid-Side (M-S) stereo technique: This technique uses a cardioid and a
bi-directional microphone element, usually housed in a single case, mounted in a
coincident arrangement. The cardioid (mid) faces directly at the source and
picks up primarily on-axis sound, while the bi-directional (side) faces left and
right and picks up off-axis sound. The two signals are combined via the M-S
matrix, L = M + S and R = M - S, to give a variable, controlled stereo image.
This technique is completely mono-compatible and is widely used in broadcast
and film applications. (Figure: M-S stereo technique.)
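The M-S matrix, L = M + S and R = M - S, is just a per-sample sum and difference, as the sketch below shows (NumPy assumed; the function name and sample values are illustrative):

```python
import numpy as np

def ms_decode(mid, side):
    """Decode coincident M-S microphone signals to left/right channels:
    L = M + S, R = M - S, applied sample by sample."""
    mid, side = np.asarray(mid, float), np.asarray(side, float)
    return mid + side, mid - side

# With the side signal silent, the result collapses to identical left and
# right channels (dual mono) -- this is why M-S is fully mono-compatible.
left, right = ms_decode([0.5, -0.2], [0.0, 0.0])
print(left.tolist(), right.tolist())  # [0.5, -0.2] [0.5, -0.2]
```

Conversely, summing L and R back to mono gives 2M exactly, cancelling the side signal, so a mono listener hears only the on-axis cardioid pickup.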

Introduction to Amplifiers

Amplifiers are devices with the ability to amplify a
relatively small input voltage signal, for example from a microphone,
into a much larger output signal to drive a speaker system or any
other device, called the load. An amplifier can be thought of as a simple
box containing the amplifying device, with two input terminals and two
output terminals, the output signal being greater than the input
signal, i.e. "amplified".

(Figure: an ideal amplifier.)
Characteristics of amplifiers
 Gain – It is defined as the ratio of the output to the
input signal, expressed in decibels. The voltage gain Av, the current gain Ai and
the power gain Ap are given by:
Av = 20 log10(V2/V1) --- (1)
Ai = 20 log10(I2/I1) ---(2)
Ap = 10 log10(P2/P1) ---- (3)

where V2/V1, I2/I1 and P2/P1 are the ratios of output to input voltages,
currents and powers respectively (as a linear ratio, the power gain is the
product of the voltage and current gains).
The typical gain of a voltage amplifier is about 60 dB, and that of a
power amplifier about 20 dB.
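The gain formulas can be checked numerically. The sketch below (function names and signal levels are illustrative) reproduces the two typical figures quoted above:

```python
from math import log10

def voltage_gain_db(v_out, v_in):
    """Voltage gain in dB: Av = 20*log10(V2/V1)."""
    return 20 * log10(v_out / v_in)

def power_gain_db(p_out, p_in):
    """Power gain in dB: Ap = 10*log10(P2/P1)."""
    return 10 * log10(p_out / p_in)

# A 1 mV microphone signal amplified to 1 V is a 60 dB voltage gain
print(round(voltage_gain_db(1.0, 0.001), 1))   # 60.0
# 1 mW in and 100 mW out is a 20 dB power gain
print(round(power_gain_db(0.1, 0.001), 1))     # 20.0
```

Note the different factors: a 1000x voltage ratio and a 100x power ratio both come out in decibels, but via 20·log10 and 10·log10 respectively.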

 Bandwidth – It is defined as the frequency range over which
the amplifier gives a flat response, ideally from 16 Hz to 20 kHz. For a hi-fi
system a flat response from 40 Hz to 15 kHz is acceptable.

 Distortion – An amplifier suffers from the following distortions.
 Frequency distortion - Caused by unequal amplification of different
frequencies.
 Phase distortion - When the relative phase relationship is not maintained
between the input and the output signals.
 Non-linear or amplitude distortion - Caused by passage of the signal through
the non-linear portion of the characteristic curve of the transistor, resulting
in clipping of the output signal at the positive and negative peaks.
 Distortion due to self-oscillation - Caused by positive feedback due to
undesired coupling of the output of one stage to the input of an earlier stage.
These self-oscillations overload a stage, resulting in severe distortion of the
signal.
 Power output - It is the output power that can be taken from an amplifier to
feed a loudspeaker. The required output power varies from a few watts to
several hundred watts. As the output power increases, an adequate heat-sink
should be used to radiate the heat generated by power dissipation.

 Impedance – For maximum power transfer from the amplifier to the load, the
source impedance (amplifier) must match the load impedance. If the impedance of
the source is higher than that of the load, a step-down transformer is used
whose turns ratio of primary to secondary is given by:
np/ns = (Zp/Zs)½
where np and ns are the number of turns in the primary and secondary, and Zp
and Zs are the impedances of the source and load sides.
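The turns-ratio formula is a one-liner; the 800 Ω / 8 Ω figures in the example are illustrative values of ours, not taken from the text:

```python
import math

def turns_ratio(z_primary, z_secondary):
    # np/ns = sqrt(Zp/Zs) for an impedance-matching transformer
    return math.sqrt(z_primary / z_secondary)

# e.g. an 800-ohm source feeding an 8-ohm speaker needs about
# a 10:1 step-down transformer.
```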
Amplifiers can be divided into two distinct types:

1) Small Signal Amplifiers – These are designed to amplify very small signal
voltage levels of only a few micro-volts (μV), such as from microphones;
examples are pre-amplifiers, instrumentation amplifiers etc. Small signal
amplifiers are generally referred to as "voltage" amplifiers as they convert a
small input voltage into a much larger output voltage.

2) Large Signal Amplifiers (Power amplifiers) - These are designed to amplify
large input voltage signals or switch high-current loads, such as audio power
amplifiers or switching amplifiers. Power amplifiers (large signal amplifiers)
are designed to deliver power, which is the product of the voltage and current
applied to the load. The power amplifier works on the basic principle of
converting the DC power drawn from the power supply into an AC voltage signal
delivered to the load.

The percentage efficiency of the power amplifier is given by:

η% = (Pout / Pdc) × 100

where η% is the percentage efficiency of the amplifier, Pout is the amplifier's
output power delivered to the load, and Pdc is the DC power taken from the
supply.
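The efficiency relation reads directly as code; the wattages in the example are arbitrary illustrations:

```python
def efficiency_percent(p_out, p_dc):
    # eta% = (Pout / Pdc) * 100
    return 100.0 * p_out / p_dc

# e.g. 25 W delivered to the load while drawing 100 W of DC power
# gives an efficiency of 25%.
```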

Power Amplifier Classes - These are classified according to their circuit
configurations and mode of operation, being designated different classes of
operation in alphabetical order such as A, B, C, AB, etc.

1) Class-A: In this amplifier 100% of the input signal is used, and the active
element remains conducting (works in its "linear" range) all of the time.
Class-A amplifiers are typically more linear and less complex, but are very
inefficient. This type of amplifier is most commonly used in small-signal
stages or for low-power applications like pre-amplifiers, microphones, etc.
2) Class-B: In this amplifier 50% of the input signal is used, i.e. the active
element (transistor) works in its linear range half of the time and is more or
less turned off for the other half. In most Class B amplifiers there are two
output devices, each of which conducts alternately (push-pull) for exactly half
a cycle (180°) of the input signal. These amplifiers are subject to crossover
distortion if the transition from one active element (transistor) to the other
is not perfect.

Class-B Amplifier / Class-B Amplifier (Push-pull)

3) Class-AB: In Class AB operation, each active device or transistor operates
the same way as in Class B over half the waveform, but also conducts a small
amount on the other half, thus reducing the dead zone (the region where both
devices are simultaneously nearly off). This way the crossover distortion faced
in Class-B is greatly minimised or eliminated. Class AB has an operating point
somewhere between Class A and Class B, and hence is less efficient than Class-B
and much more efficient than Class-A.

Class AB Waveform
4) Class-C: These amplifiers conduct for less than 50% of the input signal and
the distortion at the output is high, but they have high efficiency (up to
90%). The most common application for Class-C amplifiers is in RF transmitters,
where the distortion can be vastly minimised by using tuned loads on the
amplifier stage.

Class-C Amplifier
5) Class-D: In the Class D amplifier the input signal is converted to a
sequence of high voltage output pulses whose averaged-over-time power values
are directly proportional to the instantaneous amplitude of the input signal.
The frequency of the output pulses is typically ten or more times the highest
frequency in the input signal. The output pulses also contain unwanted spectral
components, which are removed by a passive low-pass filter to finally obtain
the amplified output. Class D amplifiers can be controlled by either analog or
digital circuits. Digital control introduces additional distortion called
quantization error. The main advantages of a class D amplifier are its power
efficiency and that it does not need large or heavy power supply transformers
or heat-sinks; it is therefore smaller and more compact than an equivalent
Class AB amplifier. Class D amplifiers are widely used for driving small DC
motors, and also as audio amplifiers.
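The pulse conversion and averaging described above can be sketched with a software comparator against a triangle carrier. This is a toy sketch; the carrier frequency, comparator rails (±1) and sample counts are illustrative assumptions, not from any particular Class D design:

```python
def triangle(t, f):
    """Triangle carrier wave in [-1, 1] at frequency f."""
    x = (t * f) % 1.0
    return 4.0 * x - 1.0 if x < 0.5 else 3.0 - 4.0 * x

def class_d_pulse(level, t, switch_freq):
    """Comparator: the output rail is +1 while the input exceeds the
    carrier, -1 otherwise, giving the high/low pulse train."""
    return 1.0 if level > triangle(t, switch_freq) else -1.0

def averaged_output(level, switch_freq, n=10000):
    """Approximate the low-pass filter by averaging the pulse train over
    one second; the average tracks the input level."""
    dt = 1.0 / n
    return sum(class_d_pulse(level, i * dt, switch_freq) for i in range(n)) / n
```

A constant input of 0.5 produces pulses that are high 75% of the time and low 25%, so their average is 0.5, recovering the input after filtering.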
Audio Amplifiers -
An audio amplifier is a device used to amplify low-power audio signals
(primarily frequencies between 20 and 20,000 hertz) from a microphone, CD
player, or pre-amplifier to a level suitable for driving the next amplifier
stage or the speaker system. The input signal may be only a few milliwatts or
microwatts; the output may be very high, up to tens, hundreds, or thousands of
watts. There are two types of audio amplifiers:

1. Audio-Voltage Amplifier – These are voltage amplifiers (discussed above)
used as pre-amplifiers, buffer amplifiers and driver amplifiers. Their main
function is amplifying the audio signal voltage sufficiently to drive a power
amplifier.

2. Audio-Power Amplifier – These are power amplifiers (discussed above) working
in the final stage of amplification, used to feed sufficient audio power to,
for example, the speaker system to convert the audio electrical signal into
sound.
Television Lighting
B. GHOSH – ADE, RSTI (T), BBSR

In television broadcasting, the variations of light from objects and scenes are
changed into electrical signals for recording, storage or transmission, and
those patterns are then finally recreated on a television screen. For this the
television camera must be presented with properly illuminated scenes, achieved
by lighting. The three important considerations are:
1) Overall Level, 2) Contrast Range, and 3) Colour Temperature.

Level:
Lighting levels for television are generally set by adjusting the incident
light, i.e. the light striking the subject. The unit of measure is the foot
candle, which is the amount of light produced by a standard candle at a
distance of one foot. Lighting measurements are made using an incident light
meter having a sensing element and a logarithmic scale calibrated in foot
candles. To measure the useful incident light for television, the meter is held
near the subject and pointed toward the camera. The minimum acceptable level
for colour television depends on the ability of the lens to transmit light to
the camera, the sensitivity of the pickup tube or chip, and the amount of depth
of field needed.

Contrast:
Contrast refers to the difference in brightness from the darkest parts of a
scene to the brightest. If there is too little contrast, many receivers will
produce a flat, greyish picture. If there is too much contrast, details in the
brightest and darkest parts of the picture will be lost and the picture will
look too harsh.

Colour-Temperature:
The third consideration is colour temperature. Every source of light has
a characteristic colour. This colour is related to its "temperature." Lower colour
temperatures tend to be red or orange while higher temperatures tend to be green
or blue. Colour temperatures are measured in degrees Kelvin. Some examples are
given below in the table.
Colour Temperature (K)   Source                        Colour
1950                     Candlelight                   Orange
2870                     Normal Incandescent           Orange
3200                     Most Photo or TV Lights       Orange
3400                     Some Photo Lamps              Orange
3500-4000                Fluorescent Lamps             Green
5500-6500                Midday Sunlight / HMI lamp    Blue
The eye "remembers" how things are supposed to look and interprets
colour accordingly, regardless of the colour temperature of lighting sources. A
white sheet of paper seems white whether viewed under an incandescent lamp or
sunlight. The eye can even adjust for "correct colour" when two light sources of
different colours are present in the same scene. Sunlight streaming into a room
which is also lit by incandescent lamps doesn't make the objects it strikes
appear bluish.
Television cameras aren't so versatile. They must be set up to render
colour in a way that's pleasing to the eye. They can do this only if all of the
important lighting sources within a scene have the same colour temperature. A
combination of filters and electronic adjustments is used to adapt colour
cameras to each new lighting situation. Most cameras can adjust automatically
to typical colour temperatures. They cannot resolve conflicts when major
picture elements are lit at different colour temperatures.

Lighting Instruments

There are four basic kinds of lights used in television.

1) Spot light: These lights have a narrow beam that casts well-defined shadows.
They are generally hard.

2) Broad light: A rectangular light that has a somewhat wider beam and casts
softer shadows.

3) Flood light: A flood light throws a broad, even illumination in a circular
pattern with diffuse shadows.

4) Soft light (also called a "bathtub"): An array of lights reflected by the
white interior of a large box. Used for general background illumination, the
bathtub creates shadows that are barely noticeable.

Spot light (components of a hard light) / Flood light

The intensity and beam spread of spots and some other lights may be
adjusted by moving the lamp forward or back in the lamp housing. When the
beam is narrow and intense the lamp is "spotted down." When the beam is wide
and more diffuse the lamp is "flooded out." Not all lamps have this adjustment.

Soft light / Diffused soft light / LED soft light

Most lamps can be fitted with "barn doors," which are black metal flaps
fastened to the front of the lamp housing. These flaps are used to keep light from
falling where it's not wanted. Use of barn doors is most important on backlights,
which can cause objectionable lens flare if their light is allowed to strike the
camera lens directly.

Scrims are special disks of screen wire which can be used to soften lights and
reduce their intensity slightly. Scrims can also be used in lamps which don't
already have protective covers or lenses, to contain debris in the event the
bulb explodes.

Tungsten-Halogen bulbs are generally used for television lamps. These


bulbs retain their brightness and correct colour temperature throughout their
lives. Unlike household bulbs, however, they can be damaged if touched with
fingers (depositing oil on the glass) and are more susceptible to damage by shock.
Barring accidents, halogen lamps last from 100 to 300 hours.

Housings may generally be fitted with several different bulbs of different
wattages. Acceptable bulb substitutions are often listed on the housing. Bulbs
are identified by a three-letter code; only those bulbs listed should be used
in a housing. Lamps and housings become extremely hot when they're in use. Hot
lamps should be handled only with protective gloves to prevent burns.
Lighting Techniques

One man, One Camera (Three point & Four point Lighting)

THREE-POINT Lighting FOUR-POINT Lighting

The Three Point Lighting Technique is the most standard method used in video,
film, and still photography. It forms the basis of most lighting. Once three
point lighting is understood, one can understand all lighting.

The three lights used are called the key light, fill light and back light.
Naturally one will require three lights to utilise the technique fully.

The simplest type of lighting involves one camera shooting one subject.
The subject is placed in the setting far enough away from any walls or backdrops
to avoid casting shadows on the background near the subject. The camera is set
up placing the subject in front of the backdrop.

Key Light (SPOT): It is the main source of light, positioned thirty to
forty-five degrees to the side of the camera, and should strike the subject at
an angle of about forty-five degrees from vertical. It is usually the strongest
light and has the most influence on the look of the scene. The light used for
this purpose is a spot light. The key light is focused on the subject by
putting the bulb in the "full spot" position and centring the beam on the
subject. The light is then gradually flooded out until a reasonable overall
level is reached.

Back Light (SPOT): It is placed directly behind the subject, in line with the
camera. Its main aim is to show the separation between the subject and the
background; because the television screen is a two-dimensional field, it is
necessary to imply the third dimension with light. It is set at a forty-five
degree angle from vertical. The back light is spotted down and aimed at the
subject's neck. It is then flooded until it has about the same intensity as the
key light. It should be adjusted to produce a crisp but delicate border around
the subject.
Fill Light (FLOOD/SOFT): The fill light is the instrument used to soften the
dark, well defined shadow produced by the key light. It is added on the side of
the camera opposite the key light and it should be about half the intensity of
the key and back lights. It should also be softer, producing no harsh shadows.
Fill lights are also frequently scrimmed (a wire screen is used to cut down the
amount of light emanating from an instrument) to soften them and reduce their
intensity.

Background Light : Finally, background light is added to bring the


background up to a level in the middle of the overall gray scale of the subject.
Background lighting should be even and unobtrusive. The background shouldn't
be made the center of attention with harsh or uneven lighting.

Movement of subject

If the subject moves there are two ways of handling this problem
depending upon the movement.
Movement along a pre-determined path: In such a situation, providing full key,
back, and fill light along the entire path is neither necessary nor desirable.
It is necessary only to provide about the same overall illumination along the
path of travel.

Movement too large or random: In such a situation it is possible to provide
diffuse fill lighting, called "base light", over the entire area to keep all
shadows within an acceptable contrast range. Key and back lights are then added
for specific areas and camera positions as necessary. This kind of lighting
might be helpful in certain situations, but may result in a flat and dull
overall appearance and the creation of multiple shadows by individual lights.
Cross-Lighting: This technique is useful when a quick and simple lighting plan
is needed. Here adjustable spotlights are placed in the corners of a room or
studio. Since these spot lights "throw" their light some distance, they should
be adjusted for a narrow beam (spotted down) and aimed in a crossing pattern at
the opposite corners. Unfocused light loses its power with the square of the
distance, resulting in foreground subjects that are too bright and background
subjects that are too dark. Hence the amount of light striking foreground
subjects is reduced by narrowing the spread of the beam, and spotting the
lights and aiming them at the corners minimises the loss of light with
distance.
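The inverse-square fall-off mentioned above can be sketched in a couple of lines; the 10 ft / 20 ft distances in the example are illustrative, not from the text:

```python
def relative_intensity(distance, reference_distance):
    """Unfocused light falls off with the square of distance:
    intensity ratio = (d_ref / d) ** 2."""
    return (reference_distance / distance) ** 2

# e.g. a background subject at twice the distance of a foreground
# subject receives only a quarter of the light.
```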

Lighting for Dance: Lighting for dance requires an even illumination of the
entire set, and it is usually desirable to create shadows that show off the
dancers' form. This is done by lighting from greater angles than normal,
placing the lights at about seventy to ninety degrees from the camera position.
While doing this lighting, the mood and artistic objectives of the dance have
to be considered. Standard television lighting would be suitable for some group
dances involving elaborate costumes or an emphasis on story or drama. More
methodical lighting is suggested for specific dances emphasizing the appearance
and movement of the dancer.

High Contrast: It is the technique of eliminating fill lighting, leaving only
key and back light. Such lighting may be suitable for some forms of dance but
usually tends to aggravate technical shortcomings in low-cost cameras and
recorders, resulting in video noise, especially in areas of the picture left
too dark.

Limbo-lighting: In limbo lighting normal key, back, and fill lighting or


high contrast lighting is used, but great care is taken to eliminate any light from
the background or floor behind the subject. The intended effect is to leave the
subject without any visual context. The more likely effect in analog recordings is a
context of video noise, especially if recording or editing for later distribution is
intended. It also poses technical problems for equipment.

Back-lighting: This technique is useful to hide the identity of performers or
people on camera. Here the key and fill lights are eliminated, leaving only the
back and background lights to provide an "interesting" background for program
titles and credit captions. This doesn't completely eliminate "fill" light on
the subject, because of the large amount of stray light bounced off floors and
walls.
FUNDAMENTALS OF VIDEO CAMERA
G. P. Pothal, ADE,
Introduction :
A typical video camera consists of the following sections:

 Lens - Focuses the picture on the optical block.

 Optical Block - Makes the colour correction and splits the light into three
primary colours.

 Transducer or pick-up device - Converts the optical image into an electrical
signal.

 Electronics - To process the output of the transducer to get a Composite
Colour Video Signal. (A video camera can be an analog or a digital camera
depending on whether the signal processing is in analog or digital form.)

Camera Lens:
The purpose of the camera lens is to focus the optical energy at the face plate
of a pick up device i.e. to form an optical image.
Different focal lengths on camera lenses are required to get different
compositions of pictures from a fixed location. A lens with a variable focal
length is called a ZOOM LENS. A typical ENG/EFP camera has a variable focal
length lens with focal length varying between 9 mm and 108 mm; the zoom ratio
then becomes 108/9 = 12:1. Its objective is to focus the image on the face
plates of the camera tubes. The focal length can be varied either manually or
by a servo motor to get different compositions of picture focussed on the
camera tubes. Wide angle or long shots can be composed by using a shorter focal
length, and vice versa. Different compositions from the camera are possible by
changing the focal length even though the camera is stationary.

The horizontal viewing angle of the lens is determined by the focal length and
the size of the pickup device. We have different lenses for different sizes of
pickup devices for a particular angle of coverage. The table below gives an
idea of why a smaller studio should prefer a smaller pickup device for wider
angles/close-ups as compared to a larger pickup device.

H-Angle (degrees)   Distance in feet     Distance in feet      Distance in feet
                    (½-inch camera)      (2/3-inch camera)     (1-inch camera)
50                  7.5                  10                    15
25                  15                   20                    30
10                  37                   50                    75
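The geometry behind the table can be sketched with the usual lens formula, horizontal angle = 2·atan(w / 2f), where w is the image width on the pickup device. The nominal image widths used below are our assumption for the common tube/CCD formats, not values taken from the table:

```python
import math

# Nominal image widths in mm for common pickup-device formats
# (assumed figures, for illustration only).
IMAGE_WIDTH_MM = {'1/2"': 6.4, '2/3"': 8.8, '1"': 12.8}

def horizontal_angle_deg(focal_length_mm, image_width_mm):
    """Horizontal viewing angle = 2 * atan(w / 2f), in degrees."""
    return math.degrees(2 * math.atan(image_width_mm / (2 * focal_length_mm)))
```

For the same focal length, a smaller pickup device gives a narrower angle, which is why a small studio wanting wide shots at short distances benefits from a smaller format, as the table suggests.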
Macro Position:
Adjacent to the back focus adjustment screw is another ring, generally kept
locked, called the macro control. This is unlocked and adjusted while working
with very small objects with the camera very close to them (e.g. small insects
or postage stamps). This control helps to get such images in focus. One has to
be on extreme wide angle, and the zoom action becomes ineffective for macro
shooting. Be careful to return the macro ring to its normal marking for normal
shooting.

Aperture:
This important parameter of a lens is also called the iris. The opening of the
lens is controlled by collapsible fins inside the lens. This control, like
ZOOM, can be either manual or automatic. Since the camera man has to control
focus and zoom with his two hands, the third variable, i.e. the iris, is
preferred on auto mode most of the time. It is expressed by the f-stop number:

f-stop number = focal length / diameter of lens opening through the fins

Why f-stop instead of lens opening diameter:
Because we are interested in the exposure of light to the camera tubes, which
depends not only on the diameter of the lens opening but also on the distance
of the object to be focussed, or indirectly on the focal length. Hence the
f-stop becomes the real measure of light falling on the pickup device.

Please note, the higher the f-stop number, the smaller the lens opening. The
lowest f-stop number, indicating maximum exposure, is known as the speed of the
lens and is usually a number which does not fit the f-stop series marked on the
lens aperture ring:

L.S. 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32

where L.S. is the lens speed, with typical values of 1.4 or 1.7 etc.

The next number in this series can always be found by multiplying the number by
√2. A change of one f-stop, which is linked to light exposure, gives a change
of video level in steps of 6 dB, i.e. either an increase in video level by a
factor of 2 or a reduction by one half.
While performing a zoom operation, though the focal length is varying, the
f-stop remains constant because the diameter of the lens opening is readjusted
automatically. Once focus is set in the zoomed-in (close up) position, the zoom
operation should not affect focusing. If focus is lost at the full zoom out
(long shot) position, it can be corrected by the back focus adjustment on the
lens.
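The √2 progression of the f-stop series and the one-stop halving of exposure can be sketched as follows (the marked ring values 2.8, 5.6, 11, 22 are conventional roundings of the exact √2 multiples):

```python
import math

def f_stop_series(lens_speed=2.0, stops=9):
    """Each full stop multiplies the f-number by sqrt(2)."""
    return [lens_speed * math.sqrt(2) ** i for i in range(stops)]

def relative_exposure(f_number, reference_f_number):
    """Light reaching the pickup device scales as 1/N^2, so one stop
    halves or doubles the exposure (about 6 dB of video level)."""
    return (reference_f_number / f_number) ** 2
```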
Depth of field:
Depth of field in a picture is the distance between the nearest and farthest
objects in focus. One may need a wide depth of field in most pictures. It can
be increased in the following ways:
1. By working at a higher f-stop, provided there is enough light to do so
2. By using wide angle lenses, or
3. By moving away from the objects
While working in news or presentation studios we may not require much depth of
field and can afford to work at f/2.8 or so, but the same is not true of large
studios and huge sets, which may demand working at a higher f-stop, and thus
more lighting.

Optical Block:
The optical assembly is located inside the camera head and has:
1. Colour filter wheel
2. Prism & Dichroic Mirrors
3. Bias light and a suitable lens mount

The lens used for a video camera depends on the size of the pickup device.
Video cameras use 1 inch, 2/3 inch or ½ inch pickup devices. Lenses meant for
½ inch devices can be used only with cameras having a ½ inch device and not
with any other camera.

The colour filter wheel includes the following filters:

1. Opaque filter as a cap
2. Clear filter for indoor lighting of 3200 K
3. Reddish filter to match outdoor light (4800 K) with indoor
4. Extra reddish filter to match outdoor light (6500 K) with indoor; this is
sometimes accompanied by an ND (Neutral Density) filter to cut the excess
light.
5. Effect filter for creating a star effect.

These filters may vary slightly from camera to camera. Fine adjustment for
colour temperature* is done by means of a white balance operation.

*COLOUR TEMPERATURE - The temperature to which a perfect blackbody must be
heated to match the colour of the light emitted from the source, expressed in
degrees Kelvin. Example: a fluorescent bulb has a colour temperature of 6500 K
and emits a bluish white light.
Dichroic Block:
This block is also called the beam splitter. It splits the incoming light into
three beams, i.e. red, green and blue. When the incoming light reaches the
first dichroic mirror DM-1, it reflects only blue and passes the green and red
wavelengths; similarly, DM-2 reflects red and passes green, which is collected
by the G-channel pickup tube. The reflected red and blue are passed on to their
respective pickup tubes via fully reflecting mirrors (M.R.).

Color separation using optical system

TYPES OF PICK-UP-DEVICES :

There are three types of pickup devices, based on:

1. Photo emissive material: These materials emit electrons when light falls on
them; the number of emitted electrons depends on the light. Such cameras are
called image orthicon cameras and are no longer in use at present.
2. Photo conductive material: The conductivity of these materials changes with
the amount of light falling on them. The material with variable conductivity
is made part of an electrical circuit and the signal is thus recovered. The
first cameras based on this principle were Vidicon cameras, used in the
monochrome telecine chains of Doordarshan Kendras. As these cameras had serious
lag and other problems relating to dark currents, further improvement led to
the development of Plumbicon and Saticon cameras.
3. Charge coupled devices: These semiconductor devices convert light into a
charge image which is read out at high speed to form a signal.

FIT ( FRAME INTERLINE TRANSFER) TYPE CCD :

The FIT (Frame Interline Transfer) type CCD consists of a light-receiving CCD,
vertical transfer CCD, storage CCD and horizontal transfer CCD. During vertical
blanking, the charge resulting from the light image converted to a charge image
by the photo diodes (CCD pixels) is transferred to the vertical transfer CCD
(2). This takes place after the residual charge in this CCD - the cause of
smear - has been swept out (1) via the drain. The charges are then transferred
to the storage CCD at high speed (3). It is the high speed of this charge
transfer that is the major factor in reducing smear due to light.

Employing high-density 2/3-inch FIT CCD image sensors with 520,000 pixels,
640,000 pixels or more, the sensitivity of the camera is increased. In studio
cameras, the CCD block is mounted on a bedplate with axis adjustment, assuring
perfect optical alignment between the zoom lens and the CCD sensors.

A pixel is generally thought of as the smallest single component of a


digital image. The definition is highly context-sensitive. For example, there can be
"printed pixels" in a page, or pixels carried by electronic signals, or represented
by digital values, or pixels on a display device, or pixels in a digital camera
(photosensor elements). This list is not exhaustive, and depending on context,
there are several terms that are synonymous in particular contexts, such as pel,
sample, byte, bit, dot, spot, etc. The term "pixels" can be used in the abstract, or as
a unit of measure, in particular when using pixels as a measure of resolution,
such as: 2400 pixels per inch, 640 pixels per line, or spaced 10 pixels apart

Block diagram of three tube/CCD video camera.

CAMERA ELECTRONICS :
A block diagram of a typical three tube camera chain is described in fig. 3.
Tube power supply section provides all the voltages required for various grids of
electron gun. Horizontal and vertical deflection section supplies the saw tooth
current to the deflection coils for scanning the positive charge image formed on
the target. The built in sync pulse generator provides all the pulses required for
the encoder and colour bar generator of the camera.
The signal system in most cameras consists of processing the signals from the
red, blue and green tubes. Some cameras use white, blue and red tubes instead
of the R,G,B system. The processing of the red and blue channels is exactly
similar. The green channel, which is also called the reference channel, has
slightly different electronics concerning aperture correction. So if we
understand a particular channel, the other channels can be followed easily. The
signal picked up from the target is amplified at the target itself in a stage
called the pre-pre amplifier. It is then passed to a pre-amplifier board with a
provision to insert an external test signal. Most cameras also provide gain
settings of 6 dB, 9 dB and 18 dB at the pre-amplifier. The shading compensator
provides H and V shading adjustments in static mode and dynamic mode by
readjusting the gain. After this correction the signal is passed through a
variable gain amplifier which provides adjustment for auto white balance, black
balance and aperture correction. The gamma correction amplifier provides
suitable gain to maintain a gamma of 0.45 for each channel. Further signal
processing includes mixing of blanking level, black clip, white clip and
adjustment for flare correction. The same processing takes place for the blue
and red channels. The green channel has additional electronics which provide
aperture correction to the red and blue channels. Aperture correction improves
the resolution, compensating for the high frequencies lost because of the
finite size of the electron beam. The green channel has a fixed gain amplifier
instead of the variable gain amplifier used in the red and blue channels.

All three signals R, G and B are then fed to the encoder section of the camera
via a colour bar/camera switch. This switch can select R, G and B from the
camera or the R,G,B signals from the colour bar generator. In the encoder
section these R, G, B signals are modulated with the subcarrier (SC) to get the
V and U signals. These signals are then mixed with luminance, sync, burst,
blanking etc. to provide the colour composite video signal (CCVS signal). The
power supply board provides regulated voltages to the various sections.

DIGITAL PROCESSING:

The individual R, G & B signals from the process amplifier pass through
pre-gamma correction and are then sampled at a frequency of about 14 MHz (for
the Ikegami HL-57 the exact sampling frequency is 14.3 MHz). The sampled video
is converted to digital video with a 10-bit analog to digital converter. After
10-bit A/D conversion the digital signal passes through several correction
stages, i.e. gamma (transfer function alteration in the camera for the purpose
of complementing the transfer function of the display device, i.e. the TV
receiver), signal, auto white balance (adjustment of the processing circuit
ensuring that the proper mix of colour signals is passed to provide white light
at a given colour temperature) and black balance etc. Detailed correction of
the signal is done in the green channel at the third stage, and the video
matrix chip is operated at a clock frequency of double the sampling frequency,
i.e. about 29 MHz. The 10-bit output signal is then fed to the encoder, in
which the R, G & B signals pass through an LPF circuit for elimination of the
29 MHz clock component and are fed to the matrix for conversion to component
video (luminance & chrominance), i.e. Y, R-Y & B-Y. The Y signal then passes
through the black stage correction circuit, and the CCVS signal is obtained
after addition of the sync pulse signal.
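The matrix conversion from R, G, B to Y, R-Y, B-Y can be sketched as below. The text does not give the matrix coefficients, so the standard Rec. 601 luminance weights are assumed here for illustration:

```python
def rgb_to_components(r, g, b):
    """Matrix an R,G,B triple into luminance plus the two colour-
    difference signals, using Rec. 601 luma weights (assumed)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, r - y, b - y
```

For white (R = G = B = 1) the weights sum to 1, so Y = 1 and both colour-difference signals are zero, which is why no chrominance is transmitted for neutral greys.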

Gamma Correction:
The overall transfer characteristic of a television system relates brightness
levels in the final displayed image to the brightness levels in the televised
scene.

(Transfer characteristic: reproduced brightness (output) vs scene brightness
(input), from black to peak white, for gamma < 1, = 1 and > 1)

If Gamma is less than unity whites are compressed (crushed) and blacks
are expanded (stretched). If Gamma is more than unity whites are stretched and
blacks are crushed.

A gamma of slightly more than unity (about 1.2) is preferred, to compensate for
the loss of contrast in the system due to optical flare etc.
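The camera-side 0.45 gamma mentioned earlier can be sketched together with an illustrative display-side gamma; the 2.2 display exponent is an assumption of ours (a typical CRT figure), not from the text:

```python
def camera_gamma(v, gamma=0.45):
    """Camera-side gamma correction applied per channel, v in [0, 1]."""
    return v ** gamma

def crt_display(v, gamma=2.2):
    """Approximate display (CRT) transfer; 0.45 * 2.2 is about 1, so the
    end-to-end system comes out nearly linear (values illustrative)."""
    return v ** gamma
```

Passing a mid-grey through both stages returns approximately the original value, showing how the camera's gamma complements the display's.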



FUNDAMENTALS OF VIDEO RECORDING & VCR FORMATS

Magnetic Principle :

The basic magnetic principles are:

I) A current-carrying conductor produces a magnetic field proportional to the
current.

II) A current-carrying conductor wound as a coil acts like a bar magnet.

III) When a current-carrying coil is bent to form a ring, the inner field remains
homogeneous but the outer field vanishes, i.e. the field lines are able to close
inside the ring.

Fig 1 shows a ferromagnetic material inserted inside the ring with a narrow air
gap, causing a flux bubble because of the magnetic potential difference across the gap.

The equations, which we already know, are:

Magnetic field intensity H = NI / L
Magnetic flux density B = µH
Magnetic flux Ø = BA

(µ is of the order of 100 to a few 10,000 for ferromagnetic materials)
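These relations can be checked with a small worked example. All numeric values below (turns, current, path length, relative permeability, core area) are assumed purely for illustration:

```python
from math import pi

# Worked example of the ring-core relations H = NI/L, B = uH, flux = BA.
# All numbers are illustrative assumptions, not from the text.
MU_0 = 4 * pi * 1e-7      # permeability of free space (H/m)

N, I, L = 500, 0.2, 0.1   # turns, amperes, metres of magnetic path
mu_r = 2000               # relative permeability (text: ~100 to a few 10,000)
A = 1e-4                  # core cross-sectional area in m^2 (1 cm^2)

H = N * I / L             # magnetic field intensity (A/m)
B = mu_r * MU_0 * H       # flux density (tesla)
flux = B * A              # magnetic flux (weber)

print(f"H = {H:.0f} A/m, B = {B:.3f} T, flux = {flux:.2e} Wb")
```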


The property of ferromagnetic materials to retain magnetism even
after the current (or H) is removed is called retentivity, and it is used for recording
electrical signals in magnetic form on magnetic tapes. This relationship can also
be represented by a curve called the B-H curve. Magnetic tapes are made of
ferromagnetic materials with a broader B-H curve than the material used for video
heads, as the heads are not required to retain information.

Magnetic Tape

A magnetic coating (5-200 µin thick) is applied on a non-magnetic substrate.
The plastic substrate is typically in the form of a tape or disk (500-2000 µin thick).
Magnetic coatings are made either of ferromagnetic particles embedded in a
binder material or as a thin-film coating of a magnetic metal. The thinnest coated
tapes are made with metal films; metal-particle coatings are thicker but offer higher
recording density and longer life.

Basic principle of tape recording-

Introduction :

The video tape recorder is the most complex piece of studio equipment, with
analog and digital processing, servo systems, microprocessors, memories, logic
circuits, mechanical devices etc. These recorders have also been the main
limitation so far as the quality of the studio output is concerned. Continuous efforts
are being made to improve their performance so as to reproduce camera signals
faithfully by improving S/N ratio and resolution. Designers of video tape recorders
had to consider the following differences between video and audio signals :

Sl. No.  Item                         Audio Recorders    Video Recorders
1        Frequencies involved         20 Hz to 20 kHz    20 Hz to 5 MHz
2        No. of octaves*              10                 18
3        Timing accuracy              Not so important   Very important
4        Recording medium limitation  No                 Yes**

**Requires a higher writing speed and a smaller head gap, along with a reduction
in the number of octaves.
*An interval of 1:2 is called an 'OCTAVE'. One octave above 100 Hz is
200 Hz, or we can say that the frequency range of 100 to 200 Hz is one octave
wide. Two octaves above 100 Hz is 400 Hz. Mathematically, the number of octaves
between two frequencies f1 and f2 is defined by Eq. 1.7:
Number of octaves = log2(f2/f1)
A harmonic is an integer ratio between two frequencies. There is a clear
distinction between octaves and harmonics. For example, with respect to 100 Hz,
a frequency of 200 Hz is the 2nd harmonic, but in terms of octaves it is one
octave wide. Similarly, a frequency of 400 Hz is the 4th harmonic, but it is
two octaves wide. Harmonics are always integer multiples of a fundamental
frequency, but octaves can be fractional as well; for example, 140 Hz relative to
100 Hz is about 0.5 octave (because log2(140/100) ≈ 0.49).

The term overtone describes all frequencies higher than the
fundamental, including harmonics. The ear judges intervals (i.e. frequency ratios)
and not the actual difference in frequency. For example, a change from 500 to
1500 Hz is perceived to be the same as a change from 2000 to 6000 Hz,
although the difference in the first case is only 1000 Hz, while in the second case it
is 4000 Hz.
For Example :
Calculate the interval, octaves and harmonic for the frequency range 62.5 Hz to
1 kHz.

Interval = 62.5 : 1000 = 1 : 16

Octaves = log2 16 = log2 2^4 = 4

1 kHz is the 16th harmonic of 62.5 Hz.
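The octave arithmetic above, including this worked example and the audio/video figures from the comparison table, can be verified directly:

```python
from math import log2

def octaves(f1: float, f2: float) -> float:
    """Number of octaves between two frequencies (text: log2(f2/f1))."""
    return log2(f2 / f1)

# The worked example from the text: 62.5 Hz to 1 kHz
assert octaves(62.5, 1000) == 4       # interval 1:16 -> 4 octaves; 16th harmonic

# Audio vs video recorder ranges from the comparison table
print(round(octaves(20, 20_000)))     # audio 20 Hz..20 kHz: ~10 octaves
print(round(octaves(20, 5_000_000)))  # video 20 Hz..5 MHz: ~18 octaves

# Octaves can be fractional: 140 Hz relative to 100 Hz
print(round(octaves(100, 140), 2))    # ~0.49 octave
```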

WRITING SPEED AND FREQUENCY RESPONSE :

Recording Process :
When a tape is moved over the magnetic flux bubble, the
electric signal in the coil causes the lines of force from the head gap to
pass through the magnetic material of the tape, producing small magnets
whose strength depends upon the strength of the current. The polarity of the
magnetic field, and hence of these bar magnets, depends on the direction of change
of the current: a decreasing current will produce an N-S magnet and vice versa.
The strength of these magnets follows the B-H curve. Thus the magnetic flux
aligns the previously unarranged magnetic particles as per the signal, and they
stay in that condition after the tape has passed the magnetic head. The length
of the magnet thus formed is directly proportional to the writing speed v of the
head, and inversely proportional to the frequency of the signal to be recorded, i.e.

Recorded wavelength for one cycle of signal = speed × time

or, wavelength of the magnetic signal on tape λ = v / f

Thus the problem to be solved in the development of VTRs was how to
provide a high enough writing speed to record very high frequencies.

The other limitation of the recording medium is the range over which
the extracted signal is greater than the noise. This range is only 10 octaves. The
system can no longer be used for recording/reproduction beyond this dynamic range
of 60 dB, because of the 6 dB/octave playback response characteristic. Beyond this
range the low frequencies become inaudible and the higher frequencies become
distorted.
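The 10-octave figure follows directly from the numbers quoted above (a 60 dB usable dynamic range and the 6 dB/octave playback slope):

```python
# The 10-octave limit of the recording medium, from the figures in the text:
# 60 dB usable dynamic range and the 6 dB/octave playback characteristic.
DYNAMIC_RANGE_DB = 60
PLAYBACK_SLOPE_DB_PER_OCTAVE = 6

usable_octaves = DYNAMIC_RANGE_DB / PLAYBACK_SLOPE_DB_PER_OCTAVE
print(usable_octaves)   # enough for audio (10 octaves), not for video (18)
```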
Concept for video recording

Recording Process:
During the initial stages it was attempted to record the video signal with
stationary video heads and longitudinal tracks, using a tape speed of the order of
9 m/s. This was very difficult to control, besides requiring very high tape
consumption (miles of tape for 3 to 4 minutes of recording), and was coupled with
breaking the video signal frequencies into 10 parts recorded by 10 different video
heads, which were then switched during playback to retrieve the signal. The quality
of the reproduced signal was also compromised, with a resolution of only about
1.7 MHz. Around 1956 the AMPEX company of USA came out with Quadruplex
machines having two revolutionary ideas which laid the foundation of present day
VTRs/VCRs. These ideas were:

1. Rotating Video Heads and


2. Frequency Modulation before recording

Increase in writing speed by rotating head :

When a video head mounted on a rotating head wheel writes on a tape
moving across it, it lays a track whose length depends not only on the speed of
the tape but also on the rotating speed of the head. A single head of diameter d
rotating r times per second in a full omega wrap, or two heads in a half omega
wrap (i.e. a little over 180 degrees, which most present day VCRs use), will have
a writing speed of πdr plus or minus the linear tape speed (which is negligible
compared to the rotating speed). This avoids the requirement of miles of tape for
a few minutes of recording, as in the stationary head type of recorders tried
earlier.
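The πdr relation can be sanity-checked against the Type B scanner figures given later in the text (a 50 mm scanner rotating at 150 rev/s, quoted writing speed about 24 m/s):

```python
from math import pi

# Writing speed of a rotating head: v_w = pi * d * r (+/- linear tape speed).
# The 50 mm / 150 rev/s figures are the Type B scanner values quoted later
# in the text; tape speed is neglected as the text says it is negligible.
def writing_speed(drum_diameter_m: float, rev_per_sec: float,
                  tape_speed_m_s: float = 0.0) -> float:
    """Head-to-tape writing speed in m/s."""
    return pi * drum_diameter_m * rev_per_sec + tape_speed_m_s

v = writing_speed(0.050, 150)      # Type B: 50 mm scanner at 150 rev/s
print(round(v, 1))                 # close to the ~24 m/s quoted in the text
```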
Frequency Modulation:
It was found difficult to record/reproduce 18 octaves of video signal
frequencies (up to 5 MHz) on tape even after increasing the writing speed, because
of the dynamic range of 60 dB, i.e. 10 octaves. Thus another problem in recording
video frequencies was to reduce the number of octaves. This was achieved by
modulation. Amplitude modulation is not suitable because variations of output
due to tape-head contact would appear as modulation. Frequency and pulse
modulation systems do not have this drawback. These systems however produce a
large number of sidebands; since VTRs have a limitation on bandwidth, a low
modulation index has to be used to reduce the sidebands. As an example, for an
average picture carrier of 6 MHz and a modulating signal of up to 5 MHz, the
first-order sidebands extend from (6-5) MHz to (6+5) MHz, i.e. 1 MHz to 11 MHz.
The octave range of this modulated signal is now under 4. Hence the octave band
is compressed, though the frequency requirement is raised, with the highest
frequency at 11 MHz; the extinction frequency must now be higher, but the octave
range has been reduced to under 4.
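The octave compression achieved by FM can be computed from the example figures above; note that the exact value comes out a little under 4 octaves:

```python
from math import log2

# Octave compression by FM, using the example figures from the text:
# a 6 MHz carrier modulated by baseband video of 20 Hz .. 5 MHz.
carrier = 6e6
f_low, f_high = 20.0, 5e6

baseband_octaves = log2(f_high / f_low)               # the ~18 octaves of video
fm_low, fm_high = carrier - f_high, carrier + f_high  # first-order sidebands
fm_octaves = log2(fm_high / fm_low)                   # 1 MHz .. 11 MHz

print(round(baseband_octaves, 1))   # baseband: ~18 octaves
print(round(fm_octaves, 2))         # after FM: about 3.5 octaves
```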

SPEED & ACCURACY :

Timing accuracy, as pointed out earlier, is especially important for VTRs, as
our eyes are very sensitive to these errors compared to our ears, which may not
detect such errors in audio tape recorders. In order to reduce these timing
errors it is important to create, at the time of playback, the same conditions for
the capstan and drum motors of the video tape recorder that existed at the time of
recording. To achieve this, the status of these motors during recording is written
on the tape itself along with the signal (the control track) and is used during
playback as one of the inputs to the servo system. Servo systems are employed to
control the various motors, ensure constant tape tension and minimize timing errors.
These timing errors are further reduced to about 5 ns by using additional
electronics called Digital Time Base Correctors (DTBC), to make the output
synchronous with other video signals such as studio cameras.

Monitoring During Recording:

Most video tape recorders provide Electronics-to-Electronics
monitoring (EE mode) at the time of recording: the video signal is monitored
after routing it through all the signal system electronics of the recorder, excluding
the video heads and preamplifiers. Some recorders also provide
simultaneous playback for off-tape monitoring by using additional heads
during recording, called confidence heads. Thus the VTRs could achieve a wider
frequency range with:
a. Faster writing speed
b. Smaller gap, and
c. Octave band compression with frequency modulation
Also, achieving accurate motor speed with the servo system reduces the
timing errors.

PLAY BACK PROCESS:

During playback, when the recorded tape is passed over the head gap at
the same speed at which it was recorded, flux lines emerging from the tape on
crossing the head gap induce a voltage in the coil proportional to the rate of change
of flux, i.e. dØ/dt, which in turn depends on the frequency of the recorded signal.
Doubling the frequency causes the voltage to increase by 6 dB. This accounts for the
well-known 6 dB/octave playback characteristic of the recording medium. This
holds good only up to a certain limit; thereafter, at very high frequencies, heavy
losses take place during playback and recording, causing the noise to exceed the
signal itself. It may be noted that when the gap becomes equal to the
wavelength of the recorded signal, two adjoining bar magnets may produce
opposite currents during playback and the output becomes zero. The same thing
happens when the gap equals 2, 3 … n times the wavelength. The first extinction
frequency occurs when the gap becomes equal to the wavelength. For maximum
output, the head gap has to be one half of the wavelength. The frequency at which
zero output occurs is called the extinction frequency. Thus the maximum usable
frequency becomes half of the extinction frequency. These parameters are
related by :
Maximum usable frequency (MUF) = Fext / 2

where Fext = v / λ, and λ recorded on tape at Fext = head gap

so MUF = writing speed / (2 × head gap)
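A quick check of this relation, using the head gap (0.025 mil) and writing speed (600 ips) quoted in the next paragraph; note that with these exact figures the arithmetic yields an MUF of about 12 MHz:

```python
# MUF from writing speed and head gap, using the figures quoted in the text:
# writing speed ~600 ips (~15 m/s) and a head gap of 0.025 mil.
IPS_TO_M = 0.0254        # inches per second -> metres per second
MIL_TO_M = 2.54e-5       # mil (thousandth of an inch) -> metres

writing_speed = 600 * IPS_TO_M       # ~15.24 m/s
head_gap = 0.025 * MIL_TO_M          # ~0.635 micrometres

f_ext = writing_speed / head_gap     # extinction frequency: gap == wavelength
muf = f_ext / 2                      # maximum usable frequency

print(f"extinction ~{f_ext / 1e6:.0f} MHz, MUF ~{muf / 1e6:.0f} MHz")
```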

So in order to record higher frequencies we must increase the writing
speed for a given minimum value of wavelength recorded on tape, i.e. λ tape. This
minimum value of λ tape is in turn restricted by the minimum practically possible
head gap.
Now, the ratio of the highest video and audio frequencies is approximately 300, so
we must increase the writing speed or reduce the gap by roughly the same factor of
300 to get the desired results; a tape speed of perhaps 60 mph would otherwise be
required to cope with the higher video frequencies. Keeping in mind the practical
limitations, a gap of the order of 0.025 mil and a writing speed of 600 ips, or
approximately 15 m/s, has been found practical for VTRs and tried successfully
(compared to 0.5 mil and 7.5 ips for audio recorders). (The requirement of portable
machines demands a reduction in writing speed to achieve lower tape consumption,
so it is always a compromise between the various parameters involved. For most
present day portable machines, higher performance specifications even at lower
writing speeds have been possible because of the development of better quality
metal tape and improvements in video heads.) If we insert these figures in the
above relationship we get an MUF of the order of 12 MHz, which means that with
these parameters we can obtain a suitable bandwidth for video tape recorders.
Besides the requirement of attaining higher writing speed and reduced gap, the
other limitation of the recording medium remains the range over which the
extracted signal is greater than the noise, which is only 10 octaves.
Basic physics states that the highest frequency which can be recorded depends on
the tape speed and the head gap:

Fmax = V tape / (2 × W headgap)

Methods followed for recording high frequency signals:
- The tape speed has to be increased for recording high frequency signals, or
- The same can be attained by a rotating head along with linear movement of the
tape. The tape speed can then be decreased by increasing the head speed, which
also avoids mechanical stresses on the tape.

Concept for video recording formats-

1. The recording media (magnetic/optical), material used in the media and type of
storage device (reel/tape) etc.
2. The type of signal and its processing (analog/digital/compressed digital).
3. Type of modulation used before recording.
4. Track layout for recording signals and other information (control, time code
signals, etc.).
5. Orientation of the tracks (transverse/helical), head width and pitch of the tracks.
6. Tape/drum speed, etc.

Type of VTR formats-

1. Composite analog formats (reel/spool type) - Quadruplex (2"), 1" A/B/C EBU.

Broadcast quality: 5 MHz bandwidth (400 lines)
Head-to-tape speed: 1500 inch/sec, achieved with a tape speed of 15 inch/sec
passing a rotating drum containing 4 heads placed 90 degrees apart
Drum speed: 14,400 rpm or 240 rev/sec
Composite Analog Formats are :

a. Quad Format (Segmented)
This format uses a spool of 2" wide tape and 4 heads on a transversely mounted drum,
with a very high writing speed of about 41 m/s. These machines had high operational
cost and required constant engineering effort to keep them running. They have since
been phased out except for transfer/archival purposes.
b. Type B Format (Segmented helical)
This format was developed by BOSCH/BTS using helical scan with 1" tape, as the BCN
series of video tape recorders. It uses a scanner with a head wheel carrying two video
heads, around which the tape is wrapped about 190°. Each television field is recorded
on six tracks, with each head scanning a 52-line segment. The scanner diameter is
50 mm and it rotates at 150 rev/sec. The tape moves at 24 cm/sec. The 80 mm long
tracks are recorded at an angle of 14.3°. There are four longitudinal tracks, of which
two are full quality audio tracks, the third is for time code and the fourth for the
control track. Video writing speed is 24 m/s.
The flying erase head mounted on the same head-wheel and the associated electronics
allow roll-free electronic editing. The addition of a digital frame store unit provides
freeze frame and slow motion. A portable version in the same format has also been
marketed by the manufacturers for studio use.
c. Type C Format (Field per scan helical)
This is the combined format of AMPEX and SONY using 1" tape with a full omega wrap
around a helical scanner running at 50 rps. The main head, mounted on a 135 mm dia
drum, records one field, i.e. the video signal containing the useful picture and the part
of the vertical interval containing the field synchronizing pulses. An additional head
called the sync head, mounted on the scanner, records the vertical interval. There is
thus no missing information, as is common with older single head, field per scan
helical recorders (one inch IVC 800/900 series and AMPEX 7900 series recorders).
There are four longitudinal tracks, i.e. two for audio, the third for time code and the
fourth for the control track.

This type of machine has 411 mm long video tracks recorded on the tape at an angle of
2° 34'. Video writing speed is about 24 m/s. The AMPEX AST (Automatic Scan Tracking)
or SONY Dynamic Tracking using a piezoceramic (bimorph) transducer, together with a
digital time base corrector, assures precise tracking and dependable interchange in
spite of the 411 mm long tracks. This also provides freeze frame, slow motion and a
recognizable picture in the shuttle mode.

Tracking pattern in helical system.


Mechanical positioning for helical system(Tape Transport System)

2. Heterodyne composite analog formats (cassette based)-

U-matic LB, HB, SP (for professional use) and VHS, Betamax, Video 8mm, S-VHS, Hi-8
(for domestic use)

¾" tape, composite recording, 280/360 line resolution.

Uses 2 × linear and 2 × AFM audio recording. It was the first helical cassette format.

The domestic variants are used for home/consumer purposes.


The U-matic format using ¾ inch video cassettes is popular for semi-professional
(CCTV), ENG and educational TV. U-matic portable recorders together with portable
colour cameras are used for Electronic News Gathering (ENG) and field coverage (EFP).
Editing and playback machines in this format for studios are also available. In order
to reduce the drum speed and tape usage compared to the 1" and 2" formats, it was
necessary to reduce the specification of the video system by reducing the bandwidth to
2.5 MHz (3 MHz in HB). The luminance could be successfully reproduced at the cost of
fine detail. For colour, a down-converted carrier is used. These recorders employ a
half omega wrap and the special under-colour or heterodyne technique for recording
the chrominance information.

Chrominance information around 4.43 MHz is down-converted to 685 kHz for LB U-matic
machines and 924 kHz for U-matic High Band. Luminance is limited to 2.5 MHz (3 MHz in
HB). The frequency modulated luminance and the down-converted chrominance are mixed
and recorded. During playback the chrominance is up-converted back to 4.43 MHz and
mixed with the demodulated luminance to get a composite video signal (CCVS).
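The colour-under conversion can be sketched as an ideal mixing operation. The 924 kHz High Band carrier figure is taken from the text; the choice of local oscillator frequency is an illustrative assumption (a real machine derives it from a phase-locked reference):

```python
# Heterodyne (colour-under) conversion sketch. Mixing the 4.43 MHz chroma
# with a local oscillator produces sum and difference products; a filter
# keeps the difference (down-conversion) or recovers the original (playback).
F_SC = 4.43e6           # PAL colour subcarrier (from text)
F_UNDER = 924e3         # colour-under carrier, U-matic High Band (from text)

# Local oscillator chosen (assumption) so that the difference product
# lands exactly on the colour-under carrier.
f_lo = F_SC + F_UNDER

def mix(f_in: float, f_osc: float) -> tuple:
    """Ideal mixer: returns (difference, sum) frequency products."""
    return abs(f_osc - f_in), f_osc + f_in

down, _ = mix(F_SC, f_lo)       # record side: keep the difference product
up, _ = mix(F_UNDER, f_lo)      # playback side: same LO recovers 4.43 MHz

print(down == F_UNDER, up == F_SC)   # the conversion round-trips exactly
```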
There are two heads mounted on a 110 mm scanner rotating at 25 rev/sec, enabling each
head to record one field per scan. This format is therefore a field per scan format.
The U-matic format has been further improved by raising the FM frequencies for
luminance recording and using specially developed tape. This format is designated
U-matic High Band SP. There are thus three formats in this category, viz.

U-matic Standard Band

U-matic High Band
U-matic High Band SP

The system produces acceptable pictures for ENG and semi-professional use but has
limited resolution and chroma noise at high saturation.
Chroma noise increases further after two or three generations. Low Band is out of use,
and U-matic SP offers better quality than High Band.

3. Component analog formats (cassette, ½" tape)- Betacam, Betacam SP, M-II.

On ½" tape, with 300/340 line resolution, component video recording is done: the
component signals R-Y and B-Y are recorded in the same track by a time multiplex
system, i.e. CTDM (compressed time division multiplexing), and the luminance Y is
recorded in a separate track.

These formats use four audio tracks, i.e. 2 × linear plus 2 × AFM audio recording,
and are professional (ENG/EFP) formats.

BETACAM VIDEO CASSETTE RECORDERS

Introduction
The Betacam format was introduced in 1982, followed by Betacam SP in 1987, and
squarely replaced the U-matic cassette recorders and the tape recorders of the B and C
formats. Betacam's popularity rests primarily on its compactness and its low capital
and operational cost compared to the reel-based B and C format machines, while at the
same time providing signal quality comparable to the best prevailing tape formats of
the day. Initially these were analog recorders; with the introduction of Digi Beta,
the digital version of the recorders was introduced.

System : Analog Component

Types : Used by Doordarshan with important features.

Studio Version (All with built in TBC and Time code).

(a) BVW 10 P - Non SP, Player only.


(b) BVW 40 P - Non SP, Recorder cum Player.
(c) BVW 60 P - SP Player only, compatible.
(d) BVW 65 P - SP Player with DT head, compatible.
(e) BVW 70 P - SP Recorder cum player, No DT head.
(f) BVW 75 P - SP Audio Recorder cum player, with DT head, 4 Audio track,
Compatible with non SP.
(g) PVW 2800 P - PRO SERIES, Recorder cum Player, SP only.
(h) PVW 2600 P - PRO SERIES, Player only, SP only.

NB : All SP machines when using oxide tapes, are compatible and reverse compatible with Non
SP machines, except for (g) and (h) which can only play oxide tapes and records on metal tapes.

Portable Version

(a) BVW 21 P - Non SP, portable, player only, external TBC, non-dockable.
(b) BVV 1 APS - Non SP, dockable, recorder only.
    BVV 5 PS - SP, dockable, recorder and mono play.
    PVV 1 P - SP, dockable, recorder and mono play.
(c) BVW 35 P - SP, portable, recorder cum player, external TBC, compatible
    and reverse compatible with non SP machines (when using
    oxide tapes), 4 audio.
(d) BVW 50 P - SP, portable, recorder cum player, built-in TBC, compatible,
    can take 30 min/90 min cassettes.
Type of Betacam tapes

a) Oxide tapes, small size, i.e. CT 5G/10G/20G/30G, BCT 5K/10K/20K/30K

b) Oxide tapes, large size, i.e. BCT 5GL/10GL/20GL/60GL/90GL

c) Metal tapes, small size, i.e. BCT 5M/10M/20M/30M, SBT 10M/20M/30M

d) Metal tapes, large size, i.e. BCT 5ML/10ML/20ML/30ML/60ML/90ML, SBT 60ML/90ML

e) Cleaning cassette, i.e. BCT-5 CLN

Important

1) A micro switch senses automatically whether a small or large cassette has been
inserted in the machine.
2) Oxide tape is 19 µm thick and metal particle tape 14 µm.
Tape Loading in Betacam System

Betacam tape path around the head drum.

Fig. 3 Betacam Tape Path around the head drum

Head drum

The head drum for the BVW 75P carries as many as 10 video heads, namely: two heads for
luminance (Ya & Yb); two heads for chrominance (Ca & Cb); two heads for dynamic
tracking luminance (DT Ya & DT Yb); two heads for dynamic tracking chrominance (DT Ca &
DT Cb); and finally two rotary erase heads (REa & REb). In some models where slow
motion is not available, the DT heads and associated electronics are not required,
which makes those models cheaper than the BVW 75P.

Audio System

Track arrangement

1) Top edge of the tape : two longitudinal tracks for audio.

2) Bottom edge of the tape : two longitudinal tracks for time code
and control track.
3) Middle of the tape : two additional audio channels (AFM) in SP machines,
recorded along with the video by the rotary heads. The carriers used for the
FM audio channels are 310 kHz and 540 kHz for channels 3 and 4 respectively.
Insert audio editing on these channels is not possible.

4) A confidence replay head is provided for off-tape monitoring of the longitudinal
tracks when in record mode.

Betacam Audio Recording/Reproduction

1. Track layout
The layout is the same for oxide and metal particle tapes to give replay
compatibility, though the recorded signals are different.

2. The two video recording heads making up a pair are mounted with an azimuth offset
of ±15 degrees. This enables their tracks to be laid on the tape with zero guard band
between them. The azimuth offset provides cross-talk protection when tracking errors
cause a Y replay head to wander over the adjacent C track and vice versa.
Frequency modulated luminance is recorded by the Y heads, and the second head of the
pair, C, follows a distance of 12 recorded lines behind Y and records the two colour
difference signals as a compressed time division multiplexed waveform on its own FM
carrier. (The different recorded signal parameters for oxide and SP recordings have
already been given.) The first head pair is called A and the second B. These are often
referred to as the Channel A head pair and Channel B head pair, though strictly
speaking Y and C are the two information channels being recorded by alternate A & B
head pairs.

Betacam Video Signal Block Diagram


[Figure: Betacam track layout - linear tape speed 101.5 mm/sec; longitudinal audio
tracks A1 and A2 at the top edge; helical Y and C tracks laid by rotary head pairs A
and B at a writing speed of 5.8 m/sec; CTL and time code tracks at the bottom edge.]

Betacam Track Layout

Recording

Input for recording can be any one of the following :

(1) U, V & Y (Three lines).


(2) Dub C input (CTDM)
(3) Composite input

The information selected by the input above is processed with the final objective of
obtaining Y and CTDM signals. These Y and CTDM signals are then passed through
separate pre-emphasis, modulators and record amplifiers. The record amplifiers for Y
and CTDM chroma feed the heads for channel A and channel B respectively.

AFM audio for channels 3 & 4 is mixed with the modulated CTDM signal and then fed to
the record amplifier for recording on the chroma track. While making copies in CTDM
dub mode, the raw unprocessed demodulated video from the player is passed directly to
the recorder modulators to prevent degradation in quality.

Play Back

RF from the normal R/P heads or DT heads is switched at field rate to select the
active head, then equalized and demodulated. The demodulated signal is passed through
the noise reduction (non-linear de-emphasis) and linear de-emphasis stages into the
built-in TBC, which not only corrects timing errors using a digital video store but
also compensates for dropouts (RF loss) in each channel. The missing information is
filled with data from the previous line for Y, R-Y and B-Y alike.

Compressed Time Division Multiplex (CTDM) - The R-Y and B-Y signals are clocked into
separate one-line-duration stores. During the second line, a second pair of stores
receives the next R-Y and B-Y.

Meanwhile, R-Y is clocked out of its first store at twice the clock speed, compressing
it to 32 µs. Then B-Y is clocked out of its first store to fill the next 32 µs period.
This is called CTDM. The first pair of stores is now empty, ready to receive new R-Y
and B-Y from the input signal. While this is going on, double speed clocks are used to
empty the second pair of stores in a sequence of R-Y first and then B-Y.
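The CTDM sequence above can be modelled crudely in a few lines; the sample values are arbitrary, and real machines do this with line stores and double-speed clocks rather than list slicing:

```python
# Toy model of Compressed Time Division Multiplexing (CTDM): each colour
# difference line is read out at twice the write clock, squeezing one line
# each of R-Y and B-Y into the two halves of a single line period.

def compress_2x(samples: list) -> list:
    """Read a line store at double clock: keep every second sample,
    halving the line's duration (a crude model of 2:1 time compression)."""
    return samples[::2]

def ctdm_line(r_minus_y: list, b_minus_y: list) -> list:
    """One CTDM output line: compressed R-Y first, then compressed B-Y."""
    return compress_2x(r_minus_y) + compress_2x(b_minus_y)

r_y = [10, 11, 12, 13, 14, 15, 16, 17]   # one line of R-Y samples (arbitrary)
b_y = [20, 21, 22, 23, 24, 25, 26, 27]   # one line of B-Y samples (arbitrary)

line = ctdm_line(r_y, b_y)
print(line)                              # R-Y half followed by B-Y half
assert len(line) == len(r_y)             # two signals now fit one line period
```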
Digital Video Cassette Recording

DIGITAL TAPE FORMATS

INTRODUCTION

With the advent of digital signals, a breakthrough came in the field of recording,
from analog to digital recording, around the year 1990. In the course of development
of digital tape recording systems, it was felt necessary to have a system that would
be handy for field recording along with the capability of long duration recording. A
recording format was developed by a consortium of ten companies as a consumer digital
video recording format called "DV". DV (also called "Mini DV" in its smallest tape
form) is known as DVC (Digital Video Cassette).
DVCAM is a professional variant of DV developed by Sony, and DVCPRO is a professional
variant of DV developed by Panasonic. These two formats differ from the DV format in
terms of track width, tape speed and tape type. Before the digitized video signal hits
the tape, it is the same in all three formats.

What is DV?
DV is a consumer video recording format, developed by a consortium of 10 companies
(later 60 companies, including Sony, Panasonic, JVC, Philips etc.) and launched in
1996. In this format, video is encoded onto tape digitally with intraframe DCT
compression using 4:1:1 chroma sub-sampling for NTSC (or 4:2:0 for PAL). The
intraframe compression technique makes it straightforward to transfer the video onto a
computer for editing. DV tapes come in two sizes: Mini DV (66 mm x 48 mm x 12.2 mm)
and DV, the standard full size (125 mm x 78 mm x 14.6 mm). They record digitally
compressed video by a DCT method at 25 megabits per second. In terms of video quality,
it is a step up from consumer analog formats such as 8mm, VHS-C and Hi-8.
1) SMPTE (Society of Motion Picture and Television Engineers) has identified the
digital video recorders with the letter "D".
2) D series machines are divided into uncompressed/compressed types.
3) They are further divided by track layout, signal encoding and modulation technique
before recording.

D1 (1987 - Sony)

i) ¾" tape, component uncompressed video (4:2:2/8 bit) in 4 tracks, 4 × 48 kHz
digital audio
ii) Uses metal particle tape
iii) Total heads on drum = 12
iv) No azimuth recording
v) R-NRZI video coding
vi) 460 line resolution, used for graphics applications, etc.
D7 - DVCPRO (Panasonic)

i) ¼" tape, 4:1:1 component compressed digital video (5:1 DCT for DVCPRO25,
3.3:1 DCT for DVCPRO50, 8 bit)
ii) 2 × 48 kHz/16 bit digital audio
iii) Uses metal particle tape
iv) Channels/video = 1
v) Total heads = 6
vi) Helical channel coding = NRZI, azimuth recording

DVC Pro – Digital Video Cassette Recording

What is DVCPRO?

DVCPRO is a professional variant of DV, developed by Panasonic. In DVCPRO, the
baseband video signal is converted to a 4:1:1 sampled data sequence from the originally
sampled 4:2:2 signal by sub-sampling, and the resulting data are converted into blocks
which are shuffled before passing through the compression circuitry and reshuffled
back to their original positions after compression. It is worth mentioning that
pictures containing little or no movement are compressed in intra-frame form, whereas
pictures with large amounts of movement are coded and compressed in intra-field form.
An error correction code is added to the compressed and reshuffled data sequence using
a Reed-Solomon product code before it is sent for recording modulation. The modulated
data sequence, generated by the 24-25 coding method using scrambled NRZI, is recorded
onto the tape via the video heads.
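The first step, 4:2:2 to 4:1:1 chroma sub-sampling, can be sketched as follows; the sample values are arbitrary, and the rate check at the end uses the approximate figures from the block diagram (124 Mbps of 4:1:1 video compressed to 25 Mbps, roughly 5:1):

```python
# Sketch of 4:2:2 -> 4:1:1 chroma sub-sampling as used by DVCPRO:
# luminance keeps full horizontal resolution while the colour-difference
# samples are halved again horizontally. Sample values are illustrative.

def subsample_411(cb_422: list, cr_422: list) -> tuple:
    """Drop every second chroma sample of a 4:2:2 line to get 4:1:1."""
    return cb_422[::2], cr_422[::2]

y = list(range(8))            # 8 luma samples (full rate, unchanged)
cb = [100, 101, 102, 103]     # 4:2:2: one chroma pair per 2 luma samples
cr = [200, 201, 202, 203]

cb_411, cr_411 = subsample_411(cb, cr)
print(cb_411, cr_411)         # now one chroma pair per 4 luma samples

# Approximate data-rate check matching the block diagram figures:
# 4:1:1 active video at ~124 Mbps compressed to ~25 Mbps on tape, i.e. ~5:1.
print(round(124 / 25, 1))
```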

[Figure: Functional block diagram of the DVCPRO recording format. Analog component
video (via A/D) or digital component video (SMPTE 259M-C / EBU Tech. 3267-E) at 4:2:2,
270 Mbps is chrominance half-sampled to 4:1:1 (124 Mbps); after shuffling, DCT,
quantization, variable length coding and weighting, the video is compressed to
25 Mbps; error correction coding raises the rate to 34.4 Mbps and recording coding to
41.85 Mbps at the recording head. Quantization control uses motion detection, class
selection, a quality estimator and a quantization step table. Analog or AES/EBU
digital audio is A/D converted, shuffled and combined with subcode data before
recording.]

Functional Block Diagram of DVCpro Recording Format

On the other hand, the baseband audio signal is not compressed but is available in two
channels, each sampled at 48 kHz and represented by a 16-bit data sequence, before
being added to the compressed video data. Subcode data is added to the combined video
and audio data, which are error corrected by the same method as used for video. The
combined, error-corrected data sequence is recorded onto the tape by the same heads
which record the video.
DVCPRO Tape Pattern

The orientations of the recording heads on the head drum (azimuths) are different.
This helps each head pick up only the tracks in line with its own orientation; it
cannot pick up the other tracks during playback. So there is no need for spacing
between the tracks on the tape, and bulk data can be recorded. This method of
recording is called rotary head azimuth recording.
Post Production in TV
Gopal Kumar, DDE
RSTI(T) Bhubaneswar

The post production is the editing process in which video clips available
on different video sources, like tape, CD, DVD and Camera Storage are processed
as per requirement of different kind of transmission.

In news transmission, the field recordings are normally cut to size
to make clips relevant to the news. These clips are sometimes given a voice-over by
adding a voice description. These are the finished clips for transmission, and
multiple such clips are normally arranged on one or two video tapes for playing at
the time of news transmission. Here the requirement is cutting the source videos
to size and arranging them on tape: a simple requirement for which a linear video
editing setup is used.

In other entertainment telecasts like dance, drama and music shows, the
requirement of post production is different. Many types of special features may be
required, like effects at the transition from one clip to another, change in the
background of an event, insertion of new objects, changes in light and colour, etc.
Some simple transition effects like dissolve are possible through linear edit
consoles, but for advanced effects digital processing of video is required, for which
a computer based non-linear editing system is used.

Thus the post production hardware may be divided into two categories:


1. Linear Editing
2. Non-Linear Editing

Linear Editing :
Linear editing can be carried out using two VCRs connected directly. However, in most linear editing setups an edit controller is used, which can control the VCRs connected to it. Through the edit controller it is possible to edit the clips more precisely and to preview an edit before recording.
Based on recording technique, the Linear Editing is of two types
1. Assemble Editing
2. Insert Editing

Assemble Editing:
Assemble editing is so called because shots are assembled in sequence on the tape. The VCR records each clip without considering the pre-recorded video already on the tape or its sync, so some disturbance may be observed at the end of a clip recorded on a pre-recorded tape.

Insert Editing:
Sometimes a new video clip must be inserted over existing footage; for example, over the long shot of some event, clips of a graphics display and expert comment are added within the time span of the shot.
If assemble editing is used, severe picture disturbances will be observed on the edited tape. To avoid this, insert editing is used. In insert editing a new video clip is inserted over the old video with a clean beginning and end. For this the VCR first synchronizes itself with the sync of the pre-recorded video, so the sync timing on the tape remains unchanged for both the newly inserted clips and the old pre-recorded video. The beginning and end of the insert are therefore clean. This facility is available with semi-professional and professional recorders only.
[Figure: Video frame positions before and after insert editing. The pre-recorded video has sync points between frames at regular intervals; the new video clip is inserted so that its frames align with these sync points.]

Non-Linear (Digital) Editing :


The computer-based editing system, in which all sources are first collected on the computer and editing is then carried out using advanced editing software, is called non-linear editing.
The computers used for non-linear editing normally have an additional card called a capture card. This card converts the video and audio from different source devices into file formats suitable for editing. Sometimes additional very fast storage (SCSI hard disks in a higher RAID configuration) comes along with the editing machine. A third hardware feature which may be added to the computer, to increase processing speed and efficiency, is a video processing card with features suitable for high-speed video processing.

Non-linear edit setup


Some source material may be available in file format on CD, DVD etc., and can easily be transferred to the computer by a simple copy-and-paste. However, the use of these clips depends on whether the file format suits the editing software; if it does not, additional conversion software may be used.

Some digital devices come with a FireWire port (IEEE 1394), which can transfer digital video from a device such as a camcorder to the editing computer.
Non-linear editing is treated as a project consisting of the following steps.

1. Capturing of source:
a. Transfer of video from VCR & camera to computer via capture card
b. Transfer of source available on CD & DVD
c. Transfer through FireWire port from cameras & VCRs
d. Transfer of graphics from graphics station to editing computer

2. Editing of source
3. Transfer of the final product to tape/CD/DVD or to file

Selection criteria for editing software :


1. Compatibility with hardware & operating system
2. The formats and resolutions it can handle
3. Capture, import & export options
4. Advanced features required, like special effects, layering etc.
5. Other software bundled with it (e.g. audio editing)
6. Compatibility with other video/audio software

Using Edit Software :


Capturing Video :
The editing software controls all of the above steps; however, a separate software may be used for capturing. First launch the video capture software, or the capture window in the edit software. Play the VCR connected to the capture card; the video will be visible in the capture window. Click the record button in the capture window to record the footage on the computer.
Capture Window
Editing:
The editing workspace normally consists of the following windows (depending on the software used):
1. Project Window: contains all elements used in the project, like video, audio and graphics.
2. Monitor Windows:
a. Preview window for the source
b. Master edit window
3. Timeline Zone: This represents the flow of the video project; the finished video is the combination of the videos on the different timelines. The timeline zone has two dimensions: a time axis and a layer axis. At a particular point on the time axis, the output video is the combination of the videos in the different layers, together with any special effect added at that point. Thus the making of the output video from the different source clips is visible on the timeline.
The easiest way to arrange clips on the timeline is to drag them from the project window and drop them on the timeline; trimming and positioning in different layers and times can then be carried out. Special effects and other features can be added after visualizing the clips on the timeline.
4. Tool Bar: Using the tool bar, the different features and actions provided by the editing software can be selected.
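The two-dimensional timeline model can be sketched as a small data structure. The clip names and times below are hypothetical; real editing software composites layers per pixel, whereas this sketch only resolves, for a given instant on the time axis, which clip sits on the topmost layer:

```python
from dataclasses import dataclass

@dataclass
class Clip:
    name: str
    start: float   # position on the time axis, in seconds
    end: float
    layer: int     # position on the layer axis; higher layers sit on top

def visible_clip(timeline, t):
    """Return the topmost clip covering time t (None if nothing covers t)."""
    active = [c for c in timeline if c.start <= t < c.end]
    return max(active, key=lambda c: c.layer, default=None)

# Hypothetical project: a graphics overlay inserted over a long shot
timeline = [
    Clip("long shot", 0.0, 60.0, layer=0),
    Clip("graphics overlay", 10.0, 20.0, layer=1),
]
print(visible_clip(timeline, 15.0).name)  # graphics overlay
print(visible_clip(timeline, 30.0).name)  # long shot
```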

Outputting Video :
The edit software provides menu options for putting the finished video onto various storage devices:
1. Tape, on a VCR through the capture card
2. Encoding in MPEG-2 format and recording on DVD
3. Saving the video as a file on the hard disk of the computer itself.
Analog Television Transmitters
D.Ranganadham

Television transmitters used in Doordarshan are predominantly analog, except for four digital transmitters installed in the four metros. Hence they use analog modulation techniques: amplitude modulation (AM) and frequency modulation (FM). In television transmission, vestigial sideband modulation (a form of AM) is used for transmitting the video, and FM is used for transmitting the audio. An analogue transmitter, in both Radio and TV, broadly comprises the major blocks shown in Figure 1.

Exciter → Filter → Chain of power amplifiers → Filter → Antenna

Figure 1. Block Diagram of Analog Transmitter.

High Power Television Transmitters


Over a period of time, transmitter technology has changed from valve-based to solid-state, especially for high power transmitters (above 10 kW), resulting in transmitters that are more compact, more reliable, lower in power consumption and easier to maintain. The NEC and R&S transmitters are examples of solid-state transmitters used in broadcasting.

Even though these transmitters are analog, they incorporate some of the latest advancements in technology, including microcontroller-based parameter control that permits remote control of the transmitters over RS-485 serial connections. This helps fault diagnosis from a remote location, since most of the transmitters are installed in hilly terrain to take advantage of line of sight and greater reach. All transmitters are internally protected against short circuits, drive faults, fan failures, excessive heat-sink temperature, etc.

Block diagrams of some of the high power transmitters used in Doordarshan are given below.

1) R&S 10 kW Transmitter
The heart of any modern transmitter is the exciter, because the signal that will be radiated through the antenna is generated, at low power, in the exciter stage. The exciter stage determines the quality of the transmitter; the following stages only amplify the signal. Figure 2 gives the block diagram of the 10 kW R&S exciter.

Fig. 02: Block diagram of exciter (courtesy R&S)


The exciter comprises the following modules: ATV encoder, ATV/DVB
equalizer, ATV/DVB modulator and ATV/DVB synthesizer. The modules are
interconnected via a motherboard and powered from a switching power supply.
The ATV encoder receives an analog video signal and digitizes it. The digital signal is processed according to the selected standard and converted into two quadrature baseband signals with a Hilbert transform. The analog audio signals are also digitized and then processed according to the standard. The data obtained are used to frequency-modulate one or two sound subcarriers. The two sound subcarriers are then available in one digital signal, which is converted into two baseband signals in preparation for subsequent quadrature modulation. The digital baseband signals in the video and audio paths are applied to the precorrector.

The encoder also contains a microcontroller which drives the whole exciter
and handles communication with the CCU (central control unit). A program
memory is provided as a peripheral for the microcontroller. This means that all
the exciter firmware and software is stored at one location and an update can be
performed via the serial interface without replacing any hardware.

The ATV/DVB equalizer is made up of a group-delay equalizer and a linearity precorrector. Correction is performed at the digital baseband level. The linearity precorrector corrects the video and audio signals separately in split mode; in combined mode, the digital video and audio signals are combined before linearity correction. The digital signals are then converted into analog baseband signals, which are fed to the modulator.
The ATV/DVB modulator generates the RF vision and sound carriers by
direct quadrature modulation. The signals are filtered and amplified and then fed
to the output stages. An RF monitor is provided in the modulator. It outputs
measurement data on the vision and sound carrier at the exciter output and
ahead of the antenna. The exciter output signals are controlled and monitored in
this way. The values measured at the antenna are used for monitoring but also for
displaying the vision and sound carrier power.

The ATV/DVB synthesizer generates the vision carrier frequency required


for modulation. An optional GPS module can be fitted. It is used as a reference for
frequency generation.

The ATV/DVB motherboard interconnects the exciter modules and links


the exciter to the CCU. The motherboard forms the interface to the transmitter
rack (with cooling system, power supplies and amplifiers) for transmitter control.
The modules are powered via the motherboard.

Figure 3: Complete Block Diagram of Transmitter


The input signal is pre-amplified by the preamplifier and the driver. The input signal of each amplifier board passes through a 4-way splitter, is amplified by 4 parallel transistor stages, and is then combined using an 8-way combiner on the output board.
2) NEC 10 kW Transmitter

By Satyapal, A.D.E.

Figure 5. Exciter Block Diagram

Video & audio signals are given to the exciter. The main functions of the exciter are:

 To get a highly linear baseband signal.
 To apply the required IF modulation, i.e. AM at IF for video and FM at IF for audio.
 To translate the signal to the required RF channel.
 To amplify the signal to the level required by the next stage.

Brief description of the exciter: Video is fed to the A/D-D/A block. The AD-DA unit converts the video signal supplied to the exciter into a PCM signal and sends it to a unit for digital correction. After digital correction, the unit converts the video PCM signal back into an analog video signal and supplies it to the visual modulator unit.

The visual modulator unit converts the baseband video signal into a modulated IF signal. Most of the non-linear distortion caused in the power amplifier of the transmitter is corrected by the IF corrector unit. The IF signal applied at the input is converted to an RF signal by a combination of mixer and local oscillator, and the RF signal is passed through filters (BPF & BEF) to separate out only the specified band. This is amplified to obtain an RF signal of +20 dBm. By applying AGC to the IF signal, the output power of the transmitter is maintained at a constant level.
The DVC block corrects errors introduced in the baseband signal and tries to give a clean signal to the visual modulator. Digital correction of linearity, phase non-linearity, sync and colour burst separation and other processing is much easier in the digital domain, which is the reason for using the A/D and D/A block. The balanced audio signal is given to the sound modulator, where it is FM modulated at IF and up-converted to the RF frequency by the aural mixer. The standard IF frequencies are 38.9 MHz for vision and 33.4 MHz for audio. The reference local oscillator frequencies are provided by synthesizers.

The output of the exciter is a modulated TV signal, containing both audio and video, at a much lower power. It therefore has to be amplified to the required power level by a series of power amplifiers before being fed to the antenna through feeder lines for radiation. This method is often referred to as low level modulation and is generally employed in LPTs (low power transmitters).

In the case of high power transmitters, the modulated video and modulated audio are amplified separately and combined in a diplexer before being fed to the antenna through the feeder line. This method is called high level modulation and is generally employed in HPTs (high power transmitters).

Redundancy is provided for the exciters. The final power output is achieved by a series of cascaded power amplifier stages based on MOSFET technology, which has better noise performance and impedance properties.

Power amplification in the NEC transmitter is done in a series of amplifier stages: preamplifier, driver amplifier and final power amplifiers. All amplifiers use MOSFETs in push-pull construction, which offer high output, high gain and high reliability for use in VHF and UHF band transmitters.

Figure 4: Complete Block Diagram of a 10 kW TV Transmitter (Courtesy –NEC)


500 W TV Transmitters (VHF)
GENERAL TECHNICAL DESCRIPTION

The 500 W VHF TV Transmitter is designed to operate in common amplification mode in VHF Band III, covering channels E5 to E12 for the CCIR PAL-B standard. The Exciter and Power Amplifier are designed to operate broadband, covering the whole of Band III; however, the output filter has to be changed for the required channel. The Transmitter is housed in a 19" rack and is designed to operate on 230 V single-phase AC.

The Transmitter is a fully solid-state, combined amplification transmitter delivering 500 W vision (sync peak) and 50 W aural output. The vision and aural drives at the required channel frequency are derived from a solid-state Exciter.

The Power Amplifier consists of two 360 W pallet amplifiers, realizing 500 W peak power at the transmitter output. The output is passed through a band pass filter, to remove unwanted frequencies, and through a directional coupler. The amplifier is protected against temperature and VSWR faults.

The Transmitter has been designed with adequate protective devices/circuits. All module PCBs are connected to facilitate easy inspection and maintenance. The RF power is passed through co-axial switches, directional couplers and a Thru-line meter to the antenna.
Salient feature of 500 Watt TV Transmitter (VHF)
1. SOLID STATE EXCITER
The Exciter unit accepts video/audio inputs and gives a combined vestigial-sideband-shaped amplitude modulated vision and FM modulated aural output at the desired channel frequency. It can generate a (+) or (-) 2/3-line frequency offset of the carrier frequency. The local oscillator is a synthesized oscillator and uses a stable OCXO as reference for accurate frequency setting. The 38.9 MHz IF is generated from the OCXO using a frequency multiplier.

EXCITER UNIT- BLOCK DIAGRAM

The Aural modulator is a linear, low-distortion PLL modulator working at 33.4 MHz, with a 50 μs pre-emphasis circuit. The up-conversion of IF to channel frequency is carried out through double conversion, making the Exciter broadband and requiring no tuning. The Exciter has a white limiter and a built-in AGC circuit, and also contains an ICPM corrector. An Exciter Control board facilitates remote control and monitoring of the Exciter unit; the frequency setting is done through this board.

2. 600W POWER AMPLIFIER


The power amplifier is a self-contained unit containing the power amplifier stages, 3 dB 90° hybrid couplers, a directional coupler, control circuitry and cooling fans. The unit only needs 230 V AC (for the fans), 30 V DC (45 A) for the PA pallets, 30 V DC (8 A) for the driver pallet, 18 V DC (0.5 A) for the PA control board, and an RF input, to deliver up to 630 W (sync peak) in any channel of VHF TV Band III.
POWER AMPLIFIER UNIT

Two 360 W PA pallets are combined to get 630 W of output power from the unit. The control board monitors the power, current and temperature of each PA pallet and indicates the fault and OK status of the PA. It also protects the PA against over-drive, over-current, over-temperature, over-voltage and VSWR faults. One 3 dB 90° hybrid coupler is used at the output of the driver to drive the two PA pallets, and the outputs of the two PA pallets are combined using another 3 dB 90° hybrid coupler. The combined output power passes through a directional coupler, which provides forward and reflected power samples to the PA control board.

3. CONTROL UNIT
The Control Unit is microcontroller based and allows control and monitoring of various parameters in the Exciter and Power Amplifier. It also monitors faults, if any, in the transmitter and shuts the transmitter down on certain major faults. The monitored parameters of each unit can be seen on the LCD display using the menu keys.
The Control Unit communicates with the other units over RS-232 serial interfaces, so the transmitter can be connected to a Station Control Unit (SCU) for (1+1) operation.

4. FILTER
The output filter is tuned to the specific channel.

5. POWER SUPPLY (SMPS)


The Switched Mode Power Supply (Model no. A3375-31) is a three-output, surface-mount design capable of delivering 26-32 V DC / 90 A (output V1), 26-32 V DC / 12 A (output V2) and 18 V DC / 500 mA (output V3) over an input range of 207-264 V AC, 50 Hz, single phase. The outputs V1, V2 and V3 are protected against overload and short circuit; V1 and V2 are also protected against input under-voltage, input over-voltage, output over-voltage and over-temperature. An MCB on the front panel of the power supply protects against input over-current. The unit is cooled by a built-in fan.
6. BASE BAND PROCESSOR UNIT
The input video signal is passed through the Base Band Processor Unit before it is fed to the Exciter. This unit accepts the input video signal and introduces the required sync expansion, to compensate for the sync compression encountered in the Power Amplifier, without introducing any other non-linearity.

Remote Controlled
500 Watt TV Transmitting Station,
(1+1) Mode of Operation.

LAYOUT OF REMOTE CONTROLLED (1+1) MODE

The Remote Controlled (1+1) Station contains the following equipment:

1) 500 W VHF or UHF Transmitters (2 Nos.) - The transmitter gets its video and audio from the National/Regional changeover unit housed in the input rack, which selects the video/audio from either the National IRD or the Regional IRD, depending on the 1 kHz tone on the second audio channel of the Regional IRD. The 1 kHz tone is uplinked from the Regional Kendra whenever the Regional program is to be transmitted by the LPTs. By default, the video from the National IRD is routed to the transmitters.

2) Input Monitoring Rack (common to the two transmitters) (1 No.) - The Input Monitoring Rack houses the Station Control Unit, a 10x2 audio/video switcher, SCU bypass unit, National/Regional changeover unit, TV demodulator, waveform monitor, video generator, audio generator, 8 W monitoring amplifier and space for mounting the IRDs.
3) 25-kVA Diesel Generator set with AMF panel, (1 No) - It is provided
as a standby power source for the station during Mains failures.

4) 1 kVA UPS (1 No.) - Provided to power the SCU and modem; it gives approximately 24 hours of backup at this load. The SCU and modem remain ON at all times, to enable remote connection during both mains and DG set failure and to bring the station up from a complete OFF condition.

5) 6 kVA UPS (1 No.) - Provided to avoid a transmission break during the mains to DG set changeover period; it has 30 minutes' capacity. It supplies the transmitters and the Input Monitoring Rack through the PSP, and the TVRO system directly, so transmission can continue until the DG set starts up. It does not power the air conditioners, because of their large surge currents.

6) 10 kVA Automatic Voltage Regulator (AVR) (1 No.) - The 10 kVA AVR accepts a 140-270 V AC input and regulates the input voltage for the complete station to 230 V +/- 1%.
7) Room Temperature Sensors (2 Nos.) - Two room temperature sensors are provided for sensing the room temperature at the site, to judge air-conditioning efficiency and local temperature conditions.

8) Smoke Detectors (2 Nos.) - To provide warning against fire hazards, two smoke detectors are provided. They should be mounted on the roof, above equipment and fuel storage locations, as appropriate. If smoke is detected, an alarm is switched ON and the station is switched OFF.

9) Power Switching Panel (PSP) (1 No.) - The Power Switching Panel contains six contactors driven by six relays, controlled by the SCU. Mains to Air-Conditioner 1, Air-Conditioner 2, Air-Conditioner 3, Transmitter 1, Transmitter 2 and the Input Monitoring Rack are switched through the PSP; the SCU can switch the mains to these units ON or OFF. Of the three phases, the RN phase is wired to the AVR and UPS, which in turn pass through the PSP for controlling the mains to Transmitter 1, Transmitter 2 and the input rack; the YN phase is connected through the PSP for selecting mains to Air-Conditioner 1 or Air-Conditioner 2; and the BN phase supplies Air-Conditioner 3.

10) 14” Colour Television Sets - Colour TV sets are provided for local monitoring of the programs.
11) IRD System - Provides the program content for transmission.
12) Computers (2 Nos.) - Local PC & Remote PC with software.
13) Laser Printer (1 No.) - For print-outs of the transmitter status.
14) Power Supply Distribution and Control - The input power to the station is supplied from a 400 V phase-neutral system of at least 25 kVA capacity, and a DG set is provided for standby power generation during mains outages. A 6 kVA UPS maintains power to the selected transmitter during the mains to DG set changeover.
The incoming mains is fed to the Auto Mains Failure (AMF) panel, which switches over to the DG set when the mains fail and back again when the mains return to normal. The output of the AMF panel is then passed through the AVR for voltage regulation.

The output of the AVR is fed to the 6 kVA UPS and the 1 kVA UPS. The AC mains to all three air conditioners is connected directly from the AMF panel (via the PSP's contactors) and not through the UPS, since the starting current of the compressor motors is very high and cannot be supplied by the UPS. The 6 kVA UPS output is fed to the remaining three contactors in the Power Switching Panel, which supply the two transmitters and the Input Monitoring Rack.
The Station Control Unit uses the Power Switching Panel to switch ON or OFF the following six loads: Transmitter 1, Transmitter 2, Input Monitoring Rack, Air-Conditioner 1 / Air-Conditioner 2, and Air-Conditioner 3.
15) Serial Communication Interface (RS-232C) - A serial communication interface widely used for data communication with computer terminals, remote control panels and short-distance communication links such as modems.

GENERAL TECHNICAL DESCRIPTION

The Remote Controlled 500 W TV transmitting station consists of two BEL 500 W UHF/VHF solid-state transmitters in passive standby configuration, along with TVRO electronics, input & monitoring equipment, a diesel generator set with AMF panel and other station items.
The operation of the complete station is controlled by a microprocessor-based Station Control Unit (SCU) housed in the Input/Monitoring Rack. The SCU operates the station without operating personnel and switches in redundant systems in case of failures, to maximize uptime.
Remote control and monitoring of the station is provided by a modem connected to the SCU. Users can dial in to the station using the Remote Control software running on a personal computer at a remote place (the Maintenance Centre). All major equipment is electronically controllable and monitorable, enabling the maintenance centre to know the status of all equipment and visit the station periodically to rectify faults. Fault-tolerant design has been incorporated at all stages to increase reliability, so that operation continues until maintenance personnel arrive.
In Local mode, the station status, transmitter status, exciter status and power amplifier status are displayed; no control is possible in this mode. In Remote mode, both monitoring and control are possible.
The Local mode and the Remote control mode use two PCs loaded with software as given below:

a) RCS Local PC Software - for the local PC in the transmitting station.

b) RCS Remote PC Software - for the remote PC, located anywhere, with the additional facility of a Remote Station Telephone Dialing Interface screen and Station Control Commands.
Digital Video Broadcasting
D.Ranganadham, DDE
RSTI(T) - BBSR

The DVB (Digital Video Broadcasting) [1] [2] [3] project is a consortium of over 280 broadcasters, manufacturers, network operators, software developers, regulatory bodies and others in over 35 countries, committed to designing open, interoperable standards for the global delivery of digital media services. The group has specified a family of DVB standards. Digital Video Broadcasting usually means the transmission of digitized audio, video and auxiliary data signals. The most suitable distribution systems for DVB are satellite, cable and terrestrial, and the corresponding standards are DVB-S, DVB-C and DVB-T. The processing at the different stages of the communication chain depends on the channel used. A generic DVB broadcasting system is shown in Figure 1.

Figure 1: A Generic DVB Broadcasting System

Video sources are collected as SDI (serial digital interface) or analogue PAL signals and are given to an MPEG-2 encoder. The encoder can accept the following inputs: analog audio, analog video, digital audio in EBU/AES format, digital video in SDI format, embedded SDI, and data. The program outputs from the different encoders are given to multiplexers, with possible scrambling, to generate the transport stream, which generally contains between 8 and 18 channels. Depending on the mode of delivery of the transport stream, there are certain changes in the transmission modules. All three transmission systems are designed for maximal compatibility, which means they can share common circuit blocks (e.g. the Reed-Solomon decoder and the interleaver) if a single receiver supports several transmission media. Compatibility also means that transmodulation is made easy when the bit rates are selected carefully. However, there are certain differences in modulation methods and channel coding before the transport stream is broadcast. A brief description of the three popular standards is given below.

DVB-S

DVB-S is the satellite DTH (Direct To Home) system for use in the 11/12 GHz band, configurable to suit a wide range of transponder bandwidths and EIRPs (the standard is also applied in the C, Ku and Ka FSS bands). The basic transmission design of DVB-S has proven to be robust and economical in use. In addition to the inherent technical features of this standard (such as the use of concatenated coding and QPSK), the ability of MPEG-2 to transfer internet protocol data efficiently and transparently has gained a large following. Many DVB-S satellite transponders have a bandwidth of 33 MHz, which with QPSK allows a symbol rate of 33 MHz / 1.2 = 27.5 Mbaud. With 2 bits/symbol, this results in 55 Mbit/s; after the convolutional 3/4 FEC (forward error correction) decoder has removed 25% of the bits for inner error correction, 41.25 Mbit/s remain. This bit stream is then processed by the second error correction algorithm (Reed-Solomon), which transforms 204 bytes into 188 corrected bytes, so the final error-corrected data rate is 38.015 Mbit/s for the multiplexed MPEG-2 data stream. Around ten TV programs can be sent with this bit rate, instead of the one program possible in analogue mode.
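The figures above can be checked step by step. A minimal sketch of the DVB-S rate chain (the function name is illustrative, not a standard API):

```python
def dvb_s_net_rate(bandwidth_hz, rolloff_factor=1.2,
                   bits_per_symbol=2, inner_code_rate=3/4):
    """Work through the DVB-S payload calculation described above."""
    symbol_rate = bandwidth_hz / rolloff_factor      # 33 MHz / 1.2 = 27.5 Mbaud
    gross_rate = symbol_rate * bits_per_symbol       # QPSK: 55 Mbit/s
    after_inner = gross_rate * inner_code_rate       # 3/4 FEC: 41.25 Mbit/s
    net_rate = after_inner * 188 / 204               # RS(204,188): 38.015 Mbit/s
    return net_rate

print(round(dvb_s_net_rate(33e6) / 1e6, 3))  # 38.015
```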

DVB-C

DVB-C is the cable delivery system, compatible with DVB-S and normally used with 8 MHz channels (consistent with the 625-line systems common in Europe, Africa and Asia). DVB-C uses MPEG-2 or MPEG-4 with QAM modulation: typically 64-QAM on coaxial cable and 256-QAM on optical fibre. In a typical DVB-C cable system with 8 MHz channels, after the 15% roll-off specified by DVB-C, a theoretical maximum symbol rate of 6.96 Mbaud is possible. For compatibility with the DVB-S example above, let us use 6.875 Mbaud with 64-QAM, which again results in 41.25 Mbit/s including the Reed-Solomon redundancy. As before, after the RS decoder, the final bit rate is 38.015 Mbit/s. Here too, around ten TV programs can be sent with this bit rate, instead of the one program possible in analogue mode.
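The same arithmetic applies to the cable case; note that DVB-C has no inner convolutional code, so the only redundancy removed is the Reed-Solomon overhead (variable names here are illustrative):

```python
channel_bw_hz = 8e6
rolloff = 0.15
max_symbol_rate = channel_bw_hz / (1 + rolloff)   # theoretical maximum
print(round(max_symbol_rate / 1e6, 2))            # 6.96 Mbaud

symbol_rate = 6.875e6                             # chosen for DVB-S compatibility
bits_per_symbol = 6                               # 64-QAM carries 6 bits/symbol
gross_rate = symbol_rate * bits_per_symbol        # 41.25 Mbit/s incl. RS overhead
net_rate = gross_rate * 188 / 204                 # after RS(204,188) decoding
print(round(net_rate / 1e6, 3))                   # 38.015 Mbit/s
```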
DVB-T

DVB-T standards are envisaged for Digital Terrestrial Television Broadcasting (DTTB) in the existing VHF and UHF TV bands, using 6, 7 and 8 MHz channel bandwidths. DVB-T offers a set of advantages that are not achievable with existing analog TV broadcasting. The following are a few advantages of the DVB-T system:

 Bandwidth saving: 4 to 5 digital MPEG-2 video services can be multiplexed and broadcast in a 6/7/8 MHz channel bandwidth.

 Power saving: About 10 dB less power is required in comparison to analog broadcasting. This is achieved by adopting a suitable combination of digital modulation and forward error correction (FEC) techniques.

 Noise-free reception: If the required bit error rate (BER < 10^-6) is maintained, almost noiseless video reception is achieved. Multi-path fading and ghost images are completely eliminated.

 Mobility: DVB-T gives a great degree of mobility. Video can be received in vehicles moving at 100 km/h; Doppler shift is the limiting factor for mobility.

 Single Frequency Network: All TV transmitters in a country can be networked to transmit TV programs on a single frequency, sparing the rest of the VHF/UHF spectrum.

Error Correction and Modulation in DVB System (Transmitter)

Figure 2: channel coding details of DVB systems

Figure 2 above shows the differences between the DVB systems, focusing on two important issues: 1. the modulation technique, and 2. the error correction codes.
Modulation in DVB System:
Satellite reception (QPSK - phase modulation)

 The carrier-to-noise ratio C/N can be very small (10 dB or less).
 There are no reflections, but the transmission chain is non-linear (C-class amplifiers in the satellites), leading to amplitude distortion.
 A constant-amplitude modulation should therefore be used (QPSK).

Cable reception (QAM - amplitude and phase modulation)

 C/N is quite high, generally over 30 dB.
 The signal can be affected by echoes due to impedance mismatches in the network.
 Amplitude modulation can be used, but echo cancellation is necessary.

Terrestrial reception (COFDM - Coded Orthogonal Frequency Division Multiplex)

 Propagation conditions are difficult, especially if mobile reception with simple antennas is required: variable echoes due to multipath, and signal level variations.
 COFDM is the right choice. The advantages of OFDM are:
 It allows flexibility in frequency planning.
 It can provide both fixed and portable reception in the presence of strong reflections.
 It is rugged against interference.

Error Correction in DVB System

The choice of modulation method and error correction technique depends on
the type of channel. Two types of forward error correction (FEC) coding are
used in the DVB-T, DVB-C and DVB-S systems. Each system uses a combination of
block codes (Reed-Solomon (RS) codes) and convolutional codes. Since these
codes are arranged in a cascade or series configuration, they are said to be
concatenated, as shown in figure 2. The block code is the outer code and is
encoded first. The outer code is chosen to have good burst-error-correcting
capability, so that it can deal with the error bursts left over by the other
elements of the FEC. A Reed-Solomon (RS) 204,188 outer code is used: for every
MPEG transport stream packet consisting of 188 bytes, another 16 check bytes
are added to make it 204 bytes. The RS code can detect and correct up to 8
byte errors in each 204-byte packet. The outer code is followed by the inner
code. The inner code is a convolutional code which uses the Viterbi decoding
method. The inner code rates chosen by DVB are 1/2, 2/3, 3/4, 5/6 and 7/8;
the lower the rate, the greater the error-correcting power, at the cost of
useful bit rate. Convolutional codes are good at correcting random errors.
Use of a convolutional code alone results in a coding gain of 3 to 6 dB; this
can be further improved by concatenated coding. In other words, error
correction codes can either be used to extend the coverage area of the
broadcast television signal or to reduce the transmitter power required for a
given coverage area.
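The packet-level arithmetic above can be checked with a short sketch (Python is used here purely for illustration; the code-rate values are the ones quoted from the DVB specification):

```python
# Net payload fraction of the concatenated FEC used in DVB:
# RS(204,188) outer code combined with an inner convolutional code.

RS_N, RS_K = 204, 188          # RS(204,188): 16 check bytes per 188-byte TS packet
rs_rate = RS_K / RS_N          # outer-code rate, about 0.92

for inner_rate in (1/2, 2/3, 3/4, 5/6, 7/8):
    overall = rs_rate * inner_rate
    print(f"inner rate {inner_rate:.3f} -> overall code rate {overall:.3f}")

# An RS(n, k) code corrects up to (n - k) // 2 erroneous symbols (bytes):
t = (RS_N - RS_K) // 2
print("correctable bytes per 204-byte packet:", t)   # 8
```

Note how the lowest inner rate (1/2) leaves barely 46% of the transmitted bits for payload, which is the price paid for the strongest protection.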

The error correcting codes exhibit good performance as long as the
number of errors in a code word remains less than or equal to the number of
errors that the coding system can correct. If the errors (burst errors)
exceed the error correcting capability, the decoder not only fails to correct
the errors but may also generate more errors. Burst errors may occur due to
amplitude fading on a radio link, sporadic burst noise due to interference or
hostile jamming, magnetic or optical disc recordings with surface defects,
and intersymbol interference. By using complementary inner and outer codes,
an almost quasi-error-free condition (BER ≤ 10⁻¹¹) can be achieved. An
interleaver is placed between the outer and inner code stages. It prevents
any lengthy burst of errors emerging from the inner decoder in the receiver
from reaching the outer decoder. Interleaving reorders the data bits at the
transmitter according to a defined pattern; at the receiver, the reordering
is reversed to restore the original data order. Interleaving is a widely
accepted technique applicable to various codes and channel behaviours and is
simple to implement.
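The write-by-rows, read-by-columns idea behind block interleaving can be sketched as follows (the 3×4 dimensions are illustrative only, not the actual DVB interleaver parameters):

```python
# Minimal block interleaver sketch: data is written row-by-row into a
# rows x cols matrix and read out column-by-column. After de-interleaving
# at the receiver, a burst of channel errors is spread into isolated errors.

def interleave(data, rows, cols):
    assert len(data) == rows * cols
    # element at row r, column c of the write matrix is data[r*cols + c];
    # reading column-by-column visits c = 0..cols-1, r = 0..rows-1
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    # reading columns of a (rows x cols) write equals writing rows of a
    # (cols x rows) matrix, so the inverse just swaps the dimensions
    return interleave(data, cols, rows)

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
print(tx)                                   # burst-adjacent symbols separated
assert deinterleave(tx, 3, 4) == data       # round trip restores the order
```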

In DVB-T an additional inner interleaver performing frequency
interleaving across each symbol is used because of the noisy channel. DVB-T
also uses OFDM (Orthogonal Frequency Division Multiplexing) instead of QPSK
modulation; OFDM is a multicarrier solution, unlike the single-carrier
schemes used in the satellite and cable transmission modes. A single-carrier
modulation technique like QPSK is susceptible to multipath reflection. In
multicarrier mode, the total data can be divided equally among the different
carriers, thus increasing the symbol duration by a factor of the number of
carriers used. In OFDM the carriers are orthogonal over a single symbol
period, so that the product of any two carriers integrated over a symbol
period vanishes. This allows the individual symbol on each carrier to be
recovered without interference, even though the carrier spectra overlap.
Frequency division multiplexing among many carriers creates independence
among the data symbol streams by providing frequency diversity. Intersymbol
interference can be completely eliminated by introducing a guard interval
between adjacent symbols.

Conclusion
In this article a brief description of the first generation DVB standards
used for the transmission of digital signals has been given. Currently there
is intense focus on implementation of the second generation standards, namely
DVB-C2 for cable, DVB-S2 for satellite and DVB-T2 for terrestrial delivery.
These standards provide further improvements in modulation techniques and
channel coding. They have been specified around three concepts: best
transmission performance approaching the Shannon limit, total flexibility,
and reasonable receiver complexity. Channel coding and modulation are based
on more recent developments by the scientific community: low-density parity
check (LDPC) codes are adopted, combined with QPSK, 8PSK, 16APSK and 32APSK
modulations depending on the mode of delivery. Overall, the broadcasting
industry is witnessing phenomenal growth in all delivery modes.

References:

[1] U. Reimers, Digital Video Broadcasting: The Family of International
Standards for Digital Television, 2nd edition, Springer-Verlag, 2004.

[2] W. Fischer, Digital Television: A Practical Guide for Engineers, 1st
edition, Springer-Verlag, 2004.

[3] DVB Homepage (June 2008). The main website of the DVB Project. [Online].
Available: http://www.dvb.org.

Satellite Communication
D.Ranganadham, DDE
RSTI(T) - BBSR

In the year 1945 Arthur C. Clarke, the British science fiction writer,
wrote an article in the magazine "Wireless World" about possible worldwide
coverage using three satellites in a geostationary orbit about 36,000 km
(around 22,300 miles) above the equator. Two important points were made in
the article:

 There exists an orbit in the sky which can be used for communication
purposes, later called the geostationary orbit (GSO).
 The power for the communication equipment can be generated from solar
panels.

The important property of the GSO is that a satellite placed in this orbit
looks stationary to an observer on the earth, because its period of
revolution about the earth is the same as the period of the earth's rotation.
Such a synchronous satellite, which would always appear in the same place in
the sky, would be provided with receiving and transmitting equipment and
directional antennas to beam signals to all or parts of the visible portion
of the earth. Three satellites located 120° apart in the GSO can cover the
entire world using global beams (each beam covers 42.4% of the earth's
surface, and large receiving antennas must be used to adequately detect the
broadcasts).

What is a satellite?

From a communications standpoint, a satellite may be considered a distant
microwave repeater that receives an uplink transmission and provides
filtering, amplification, processing and frequency translation to the
downlink band for retransmission.

Why does a satellite stay in orbit?

How does a satellite stay in orbit and keep moving? From the diagram
given in figure 1 it is clear that the satellite is in equilibrium only if
the two forces acting on it are equal: the gravitational force due to the
earth's gravity, and the centrifugal force due to the satellite's motion.

Gravitational force: F1 = mg

where m = mass of the satellite and g = GM/r² is the acceleration due to
gravity at the orbit (G = gravitational constant = 6.672 × 10⁻¹¹ N·m²/kg²,
M = mass of the earth = 5.974 × 10²⁴ kg).

Centrifugal force: F2 = mv²/r

where v = velocity of the satellite and r = distance between the satellite
and the centre of the earth.

For equilibrium, F1 = F2:

    mv²/r = mg = GMm/r²

    v² = GM/r

    v = √(GM/r)

The period of one revolution of the circular orbit is then

    p = 2πr/v = 2πr/√(GM/r) = 2πr^(3/2)/√(GM)

where p = the period of one revolution of the circular orbit, which must
equal one sidereal day (23 h 56 min 4.1 s), and r = R + h = 42,164 km
(R = radius of the earth and h = height of the orbit). From the above
equation it is clear that, for a geostationary orbit, i.e. for the satellite
to remain stationary with respect to an earth-station antenna, p must equal
the time taken for one rotation of the earth on its axis with respect to the
stars, which is 23 h 56 min 4.1 s. Solving the above equation for r gives
r = 42,164 km; hence there is only one geostationary orbit, and all
communication satellites that serve fixed earth-station antennas must be
parked in this orbit.
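The orbit-radius calculation above can be verified numerically by solving the period equation for r, using the constants given in the text:

```python
# Worked check of p = 2*pi*r**1.5 / sqrt(G*M): solve for r with p equal
# to one sidereal day, and recover the geostationary orbit radius.
import math

G = 6.672e-11               # gravitational constant, N m^2 / kg^2
M = 5.974e24                # mass of the earth, kg
p = 23*3600 + 56*60 + 4.1   # sidereal day, seconds

r = (G * M * (p / (2 * math.pi))**2) ** (1/3)
print(f"geostationary orbit radius r = {r/1000:.0f} km")          # ~42164 km
print(f"height above the surface h  = {(r - 6378e3)/1000:.0f} km") # ~35786 km
```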

Advantages of satellite communication:

 About 40% of the earth's surface can be covered by a single downlink
beam; no other communication system offers this kind of coverage area.
 Signals are available even in remote areas and hilly terrain.
 Earth stations are easily deployable, and flexible to install and
dismantle.

Disadvantages:

 Propagation delay: electromagnetic signals from the uplink parabolic
dish antenna (PDA) travel a one-way (uplink plus downlink) distance of
around 72,000 km, which introduces a delay of 240 ms, or 480 ms for
two-way communication such as telephony.
 Satellites placed in the GSO cannot cover the north and south poles.
 Signals are subject to huge attenuation because of the huge distances
involved in the uplink and downlink, so the receiving equipment must be
able to handle very weak signals.
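The delay figures quoted above follow directly from the geometry:

```python
# Propagation delay over a geostationary link: the signal covers uplink
# plus downlink (~2 x 36,000 km) one way, and twice that for a two-way
# telephone exchange.
c = 3e8                        # speed of light, m/s
one_way = 2 * 36_000e3 / c     # earth -> satellite -> earth, seconds
print(f"one-way delay : {one_way * 1e3:.0f} ms")       # 240 ms
print(f"two-way delay : {2 * one_way * 1e3:.0f} ms")   # 480 ms
```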
Frequency bands used for satellite communication: Three
main frequency bands are allocated for GEO communications satellites: C-band,
Ku-band and Ka-band. Each band is divided into sub-bands, which are allocated
to individual transponders on a satellite. All satellites use the same
frequencies; directional antennas provide isolation from interference, which
places a minimum size limit on the antennas used.

Frequency band    Uplink     Downlink

C-band            6 GHz      4 GHz
X-band            8 GHz      7 GHz
Ku-band           14 GHz     11 GHz
Ka-band           30 GHz     20 GHz


C-Band: Uplinks at 6 GHz (5.925-6.425 GHz), downlinks at 4 GHz
(3.700-4.200 GHz); 36 MHz transponder sub-bands with 4 MHz guard bands. It is
easier to make high-power amplifiers, but the antennas must be large. There
is less atmospheric and rain attenuation than in the Ku or Ka bands, but the
band is more crowded, with many users and more interference.

Frequency Allocation Plan: The C-band allocations are 5925-6425 and
3700-4200 MHz. Each band is broken into 12 sub-bands, each 36 MHz wide, with
a 4 MHz guard band between sub-bands. There is one 18 MHz channel at one end
of the band and a 2 MHz gap at the other. Transponders are identified by the
centre frequency of the sub-band they transmit, and each sub-band is
identified by a channel number. Orthogonal polarizations are identified by
either an A or B, or a V or H, after the frequency. Assuming a 36 MHz
transponder, an analog PAL signal FM-modulated into the 36 MHz can carry one
TV channel, or a combination of voice and data signals.

• Ku-Band: Uplinks to the satellite at a nominal frequency of 14 GHz,
downlinks at a nominal 12 GHz. It is more difficult to make high-power
amplifiers, but smaller antennas, higher gains and spot beams are possible.
There is more rain attenuation, which can adversely affect the signal, but
the band is less crowded, with fewer users, higher bandwidth and less
interference.

• Ka-Band: Uplinks to the satellite at a nominal frequency of 30 GHz,
downlinks at a nominal 20 GHz. Very small high-gain antennas are possible, as
are tightly focused local spot beams. It is difficult to make high-power
amplifiers, and attenuation from atmospheric moisture and rain is high, but
this is a relatively unused part of the spectrum, with lots of bandwidth and
low interference potential.

Why uplink frequency is higher than down link frequency?

In order to avoid interference between the uplink and the downlink,
there should be frequency isolation between them.

1. The half-power beamwidth is approximately 75λ/d degrees for circular
aperture antennas (where λ is the wavelength and d is the diameter of
the antenna). In order to avoid adjacent-satellite interference, the
uplink transmitted beam has to be narrow, which is achieved only if the
beamwidth of the antenna's main lobe is narrow. From the above
expression it is clear that the beamwidth is narrow if the frequency is
high or if a larger dish is used. This is the reason all the uplink
teleports of broadcasting stations use large antennas, and the uplink
frequency is always kept higher than the downlink frequency.
2. Since the EIRP (effective isotropically radiated power) of a satellite
is lower than the possible EIRP of the uplink stations, it is better to
use the lower frequency for the downlink, as lower frequencies suffer
lower losses.
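The 75λ/d rule of thumb can be tried out numerically; the dish sizes below are illustrative values, not specific to any particular station:

```python
# Half-power beamwidth estimate for a circular aperture: ~75 * lambda / d
# degrees, where lambda is the wavelength and d the dish diameter.
c = 3e8   # speed of light, m/s

def beamwidth_deg(freq_hz, dish_m):
    return 75 * (c / freq_hz) / dish_m

# a large 9 m uplink dish at 6 GHz versus a small 0.6 m DTH dish at 11 GHz
print(f"6 GHz, 9.0 m dish : {beamwidth_deg(6e9, 9.0):.2f} deg")   # ~0.42 deg
print(f"11 GHz, 0.6 m dish: {beamwidth_deg(11e9, 0.6):.2f} deg")  # ~3.4 deg
```

The large uplink dish produces a beam narrow enough to illuminate a single satellite, while the small receive-only dish has a much broader beam.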

What constitutes the satellite?

The major systems in the satellite can be broadly divided into five
categories.

1. Attitude and orbit control system (AOCS)
2. Telemetry, tracking and command (TT&C)
3. Power subsystem
4. Communication subsystems
5. Spacecraft antennas
1) Attitude and orbit control system (AOCS): The AOCS performs two important
tasks:

 The attitude control system must keep the solar panels correctly pointed
towards the sun.

 The communication antennas must be kept correctly pointed towards the
earth station antennas.

Because of the rotation of the earth, the attitude control system must pitch
the satellite 15°/hour to maintain earth pointing. It must also provide
correction for disturbances due to radiation pressure and for torques
generated by station-keeping manoeuvres.

There are two principal categories of attitude control systems, namely:

1. Spin stabilization
2. Three-axis (body) stabilization

A three-axis stabilized spacecraft can make better use of its solar cell
area, since the cells can be arranged on flat panels that can be rotated to
maintain normal incidence of the sunlight.

2. Telemetry, tracking and command (TT&C):

The TT&C system is essential to the successful operation of a communication
satellite. It includes:

1. Satellite management
2. Telemetry
3. Tracking
4. Command

Typical on-board sensors:

Sensor                  Function
Pressure                Fuel tanks.
Voltages and currents   Critical currents and voltages of the communication
                        subsystem, the power conditioning unit, the current
                        drawn by each system, etc.
Sighting                Angles of the sun and other celestial bodies, needed
                        to calculate attitude.
Temperature             Structural temperatures.
Status                  Operating status of redundant equipment.

The data received from the on-board sensors are processed by the TT&C block
in the satellite and sent to the TT&C block at the master control centre.
Telemetry data are usually digitized and transmitted as frequency- or
phase-shift keying of a low-power telemetry carrier using the TDM technique.
A low data rate is normally used to allow the receiver at the earth station
(ES) to have a narrow bandwidth and thus maintain a high C/N. The entire TDM
frame may contain thousands of bits of data and take several seconds to
transmit. At the controlling ES a computer can be used to monitor, store and
decode the telemetry data so that the status of any system or sensor on the
spacecraft can be determined immediately by the controller on earth. Alarms
can also be sounded if any vital parameter goes outside allowable limits.

3. Power subsystem:

There are two obvious sources of primary power for a spacecraft, namely
nuclear and solar. Because of cost and environmental hazards, nuclear sources
are not generally used in earth orbit; they are, however, used for
interplanetary spacecraft, where the distance from the sun produces very weak
solar radiation. All commercial satellites use solar energy to derive primary
power from solar cells, which convert incident sunlight into electrical
energy. In addition to the solar array, batteries are carried on the
spacecraft to provide power for essential services during periods of eclipse
(when the sun's radiation does not fall on the solar panels) and during the
launch period. Outside periods of eclipse the batteries are charged by
drawing power from the solar array.

The sun is a powerful source of energy: in the total vacuum of outer space,
at geostationary altitude, the radiation falling on a spacecraft has an
intensity of 1.39 kW/m². Solar cells do not convert all this incident energy
into electrical power; their efficiency is typically 10 to 18 percent, and it
falls with time because of aging of the cells and etching of the surface by
micrometeorite impacts. Since sufficient power must be available at the end
of the satellite's lifetime to supply all the systems on board the
spacecraft, about 15% extra solar cell area is usually provided as an
allowance for aging. Newer generation satellites operating in the Ku band,
which offers more communication capacity, and having longer lifespans require
correspondingly higher power.
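The solar-array figures above lead to a simple sizing estimate. The 4 kW payload demand below is an assumed example value, not a figure from the text:

```python
# Rough solar-array sizing: 1.39 kW/m^2 incident flux at GEO, 10-18 %
# cell efficiency, plus ~15 % extra area as the aging allowance.
flux = 1390.0        # incident solar flux at geostationary altitude, W/m^2
eff = 0.14           # assumed mid-range cell efficiency
payload = 4000.0     # assumed end-of-life power requirement, W

area = payload / (flux * eff)
area_with_margin = area * 1.15   # 15 % aging allowance
print(f"array area: {area:.1f} m^2, with aging margin: {area_with_margin:.1f} m^2")
```

Even a modest 4 kW bus needs on the order of 20 m² of cells, which is why high-power Ku-band satellites carry large deployable panels.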

Eclipses occur twice per year, around the spring and autumn equinoxes,
when the earth's shadow passes across the spacecraft. The longest eclipses
last about 70 minutes, occurring around March 21 and September 21 each year.
To avoid the need for large, heavy batteries, part or all of the
communications system load may be shut down during eclipse, but this
technique is rarely used when telephony or data traffic is carried. TV
broadcast satellites may not carry sufficient battery capacity to supply
their high-power transmitters during eclipse and must then be shut down.
Batteries are usually of the sealed nickel-cadmium type, which do not gas
when charging and have good reliability and long life. With the advancements
in launching technology, heavier satellites can now be launched, which can
include lighter nickel-hydrogen batteries that take care of the power
requirements during eclipse.

4. Communication subsystems:

A communication satellite in geostationary orbit exists to relay voice, data
and video communications. All other subsystems on the spacecraft exist solely
to support the communication subsystem. Satellites have become larger,
heavier and more costly, but the rate at which traffic capacity has increased
has been much greater, resulting in a lower cost per telephone circuit with
each succeeding generation of satellite. The introduction of switched-beam
technology and on-board processing in high-capacity satellites will offer a
further increase in capacity. The communication subsystem (the communication
payload) consists of the satellite antennas plus the repeater. The bandwidth
handled by the satellite is broken down (demultiplexed) into manageable
segments (40-80 MHz), each of which is handled by a separate repeater called
a transponder; the transponders are connected by a switching matrix to the
various on-board antennas.
Figure 2. C and Ku band transponders block diagram.

General block schematic for satellite system

The principal functions of the repeater in a communication spacecraft are to:

1. Accept the low-level signals from the receiving antenna, and amplify
them with an acceptable degree of added noise.
2. Change the received signals from the uplink to the downlink frequency.
3. Provide power amplification of the downlink signals with an acceptable
degree of distortion and present them to the transmitting antennas.

All these functions must be performed within the power, mass and
environmental constraints imposed by the overall spacecraft design.

Communication repeaters may be broadly categorised as:

1. 'Transparent' repeaters, in which the uplink signals are translated in
frequency but otherwise unchanged.
2. Processing repeaters, in which the uplink signals are changed in form
by demodulation, correlation or some other complex process.

Most communication satellites use transparent repeaters. The total
channel capacity of a satellite that uses the 500 MHz band at 6/4 GHz can be
increased only if the bandwidth can be increased or reused. The trend in
high-capacity satellites has been to reuse the available bands by employing
several directional beams at the same frequency (spatial frequency reuse) and
orthogonal polarizations at the same frequency (polarization frequency
reuse).

Signals (known as carriers) transmitted by an earth station are received at
the satellite by either a zone-beam or a spot-beam antenna. Zone beams can
receive from transmitters anywhere within the coverage zone, whereas spot
beams have limited coverage. The received signal is often taken to two LNAs
(low noise amplifiers) and recombined at their outputs to provide redundancy:
if either amplifier fails, the other one can still carry all the traffic.
Since all carriers from one antenna must pass through an LNA, a failure at
this point is catastrophic; redundancy is therefore provided wherever the
failure of one component would cause the loss of a significant part of the
satellite's communication capacity.
Intermodulation distortion is likely to occur whenever TWT (travelling
wave tube) amplifiers are used as wideband amplifiers for more than one
signal and are driven into saturation. Control of spurious signal generation
and unwanted image responses is more difficult in multiple-conversion
repeaters. The choice of transponder bandwidth also depends on the nature of
the signal to be carried by the satellite and the multiple access technique
used. Digital modulation using TDMA allows the transponder to be allocated to
only one signal at any instant of time, so nonlinearity in the transponder
output amplifier is not important and the output power amplifier can be
driven into the nonlinear region. Redundancy is provided for the HPA (high
power amplifier) in each transponder by including a spare TWT or GaAs FET
amplifier that can be switched into circuit if the primary power amplifier
fails, since the lifetime of a TWTA is limited. Transponders may carry analog
or digital signals, and may use horizontal or vertical polarization.

Transponders for use in the 14/11 GHz bands normally employ a double
frequency conversion scheme. It is easier to perform filtering, amplification
and equalization at an IF frequency such as 1100 MHz than at 14 or 11 GHz, so
the incoming 14 GHz carrier is translated to an IF around 1 GHz. The
amplification and filtering are performed at 1 GHz, and the relatively
high-level carrier is then translated back to 11 GHz for amplification by the
HPA.

Stringent requirements are placed on the filters used in transponders: they
must provide good rejection of unwanted frequencies, such as intermodulation
products, and must also have very low amplitude and phase ripple in their
passbands.

Frequently a filter is followed by an equalizer that smooths out the
amplitude and phase variations in the passband. Phase variation across the
passband produces group delay distortion, which is particularly troublesome
with wideband FM signals and high-speed phase-shift-keyed data transmission.

5. Spacecraft antennas: The antenna is an important element in a
communication system, especially in terrestrial systems, as it is designed to
radiate and receive electromagnetic waves. An antenna cannot add power; it
can only focus and shape the radiated power in space, i.e. it can increase
the power in one direction and suppress the power in other directions.

The following four main types of antennas are used on spacecraft:

1. Wire antennas: monopoles and dipoles
2. Horn antennas
3. Reflector antennas
4. Arrays

Wire antennas are used primarily at VHF and UHF to provide communication for
the TT&C systems. They are positioned with great care on the body of the
spacecraft in an attempt to provide omnidirectional coverage.

Horn antennas are used at microwave frequencies when relatively wide beams
are required, as for global coverage. A horn is a flared section of waveguide
that provides an aperture several wavelengths wide and a good match between
the waveguide impedance and free space. Horns are also used as feeds for
reflectors, either singly or in clusters. Horns and reflectors are examples
of aperture antennas that launch a wave into free space from a waveguide. It
is difficult to obtain gains much greater than 23 dB or beamwidths narrower
than about 10° with horn antennas; for higher gains or narrower beamwidths a
reflector antenna or array must be used. The paraboloid is the basic shape
for most reflector antennas, although many spacecraft antennas use modified
paraboloidal reflector profiles to tailor the beam pattern to a particular
coverage zone.

Two types of antennas are popular in broadcasting: horns and reflectors.

• Horn antennas are efficient, with low gain and a wide beam.

Figure 4. Horn antennas have efficiency (η) values ranging from 65 to 80%.

• In a parabolic reflector, radiation emitted at the focus emerges in a beam
parallel to the axis, and the beam is narrower the larger the diameter of the
dish. Parabolic reflectors are suitable mainly at microwave frequencies,
because the dish must be large compared with the wavelength.

Figure 5. Parabolic antennas

Aperture antennas (horns and reflectors) have a physical collecting area that
can be easily calculated from their dimensions:

    A_phy = πr² = πD²/4

Therefore, we can obtain the formula for aperture antenna gain as:

    Gain = 4πA_e/λ² = η(4πA_phy/λ²) = η(πD/λ)²

where D is the diameter of the antenna and typical values of the efficiency
η for reflectors are 50-60%.
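The aperture-gain formula can be put to work directly; the 3 m dish and 55% efficiency below are assumed example values:

```python
# Gain of a circular-aperture antenna: G = eta * (pi * D / lambda)^2,
# expressed in dBi.
import math

def antenna_gain_db(diameter_m, freq_hz, eta=0.55):
    lam = 3e8 / freq_hz                        # wavelength, m
    g = eta * (math.pi * diameter_m / lam)**2  # linear gain
    return 10 * math.log10(g)

# a 3 m reflector at 4 GHz (C-band downlink), 55 % efficiency
print(f"{antenna_gain_db(3.0, 4e9):.1f} dBi")   # ~39.4 dBi
```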

Link Design

Link design basically deals with designing a link, in both the uplink and
downlink chains, for a specific C/N, which depends on the threshold of the
demodulator. The following factors influence link design:

a) The weight of the satellite.
b) The DC power that can be generated on board.
c) The frequency bands allocated.
d) The multiple access system used.
e) The size of the antennas that can be used.
Basic transmission theory

The calculation of the power received by an earth station from a
satellite transmitter is fundamental to the understanding of satellite
communications. In this section we discuss two approaches to this
calculation: the use of flux density, and the link equation.

[Fig 6: Flux density F watts/m² produced by an isotropic source, crossing an
area A m²]

Consider a transmitting source in free space radiating a total power Pt
watts uniformly in all directions, as shown in the figure above; such a
source is called isotropic. It is an idealization that cannot be realized
physically, because it could not create transverse electromagnetic waves. At
a distance R meters from the hypothetical isotropic source transmitting RF
power Pt watts, the flux density crossing the surface of a sphere with
radius R is given by

    F = Pt / (4πR²)  W/m²        (1)

All real antennas are directional and radiate more power in some
directions than in others. Any real antenna has a gain G(θ), defined as the
ratio of the power per unit solid angle radiated in a direction θ to the
average power radiated per unit solid angle:

    G(θ) = P(θ) / (P₀/4π)        (2)

where:

1. P(θ) is the power radiated per unit solid angle by the antenna.
2. P₀ is the total power radiated by the antenna.
3. G(θ) is the gain of the antenna at an angle θ.

The reference for the angle θ is usually taken to be the direction in which
maximum power is radiated, often called the boresight direction of the
antenna. The gain of the antenna is then the value of G(θ) at angle θ = 0°,
and is a measure of the increase in flux density over that of an isotropic
antenna radiating the same total power. For a transmitter with output Pt
watts driving a lossless antenna with gain Gt, the flux density in the
direction of the antenna boresight at distance R meters is

    F = Pt Gt / (4πR²)  W/m²     (3)

The product PtGt is often called the effective isotropically radiated power,
or EIRP; it describes the combination of transmitter power and antenna gain
in terms of an equivalent isotropic source with power PtGt watts radiating
uniformly in all directions.

[Figure 7. An isotropic source with EIRP = Pt Gt watts produces an incident
flux density F watts/m² at a receiving antenna with area A m² and gain Gr.]

If we had an ideal receiving antenna with an aperture area of A m², as shown
in figure 7, we would collect power Pr watts given by

    Pr = F × A = Pt Gt A / (4πR²)  watts      (4)
A practical antenna with a physical aperture area of Ar m² will not
deliver the power given above. Some of the energy incident on the aperture is
reflected away from the antenna, and some is absorbed by lossy components.
This reduction in efficiency is described by using an effective aperture Ae,
where

    Ae = ηA Ar                   (5)

and ηA is the aperture efficiency of the antenna, which accounts for all the
losses between the incident wavefront and the antenna output port. These
include the illumination efficiency (aperture taper efficiency) of the
antenna, which is related to the energy distribution produced by the feed
across the aperture, and also other losses due to spillover, blockage, phase
errors, diffraction effects, polarization, and mismatch. For paraboloidal
reflector antennas ηA is typically in the range 50 to 75%, lower for small
antennas and higher for large Cassegrain antennas. Horn antennas have
efficiencies approaching 90%. Thus the power received by a real antenna with
a physical receiving area Ar and effective aperture area Ae m² is:

    Pr = Pt Gt Ae / (4πR²)  watts   (6)

Note that this equation is essentially independent of frequency if Gt and
Ae are constant within a given band; the power received at an earth station
depends only on the EIRP of the satellite, the effective area of the earth
station antenna, and the distance R.

A fundamental relationship in antenna theory is that the gain and area of an
antenna are related by

    G = 4πAe/λ²                  (7)

where λ is the wavelength (in meters) at the frequency of operation.
Substituting for Ae in equation (6) gives the received power

    Pr = Pt Gt Gr / (4πR/λ)²  watts   (8)

This expression is known as the link equation, and it is essential in the
calculation of the power received in any radio link. The frequency
(wavelength λ) appears in the equation for received power because we have
used the receiving antenna gain instead of its effective area. The term
(4πR/λ)² is known as the path loss, Lp. It is not a loss in the sense of
power being absorbed; it accounts for the way energy spreads out as an
electromagnetic wave travels away from a transmitting source in
three-dimensional space.

Collecting the various factors, we can write:

    power received = (EIRP × receiving antenna gain) / path loss  watts   (9)

In communication systems, decibel quantities are commonly used to simplify
equations like the one above. In decibel terms we have:

    Pr = EIRP + Gr − Lp  dBW     (10)

where

    EIRP = 10 log10(Pt Gt)  dBW
    Gr = 10 log10(4πAe/λ²)  dB
    path loss Lp = 10 log10[(4πR/λ)²] = 20 log10(4πR/λ)  dB

The expression dBW means decibels greater or less than 1 watt (0 dBW). The
units dBW and dBm (dB relative to 1 W and 1 mW) are widely used in
communications engineering. EIRP, being the product of transmitter power and
antenna gain, is often quoted in dBW.

Note that once a value has been calculated in decibels, it can readily be
scaled if one parameter is changed. For example, suppose we calculated the
gain of an antenna to be 48 dB at a frequency of 4 GHz and wanted to know
the gain at 6 GHz. In linear terms we would multiply Gr by (6/4)²; using
decibels we simply add 20 log(6/4) = 20 log(3) − 20 log(2) = 9.5 − 6.0 =
3.5 dB. Thus the gain of our antenna at 6 GHz is 51.5 dB.

The received power Pr calculated by equations (6) and (8) is commonly
referred to as the carrier power, C. This is because most satellite links
use either frequency modulation for analog transmission or phase modulation
for digital transmission. In both of these modulation systems the amplitude
of the carrier is not changed when the data are modulated onto it, so the
received carrier power C is always equal to the received power Pr.
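The decibel form of the link equation, Pr = EIRP + Gr − Lp, can be exercised with a small worked example; the EIRP, slant range and antenna gain below are assumed illustrative numbers:

```python
# Link budget sketch: Pr = EIRP + Gr - Lp (all in decibel quantities),
# with Lp = 20 * log10(4 * pi * R / lambda).
import math

def path_loss_db(distance_m, freq_hz):
    lam = 3e8 / freq_hz
    return 20 * math.log10(4 * math.pi * distance_m / lam)

eirp_dbw = 57.0                      # assumed satellite EIRP, dBW
gr_db = 39.4                         # assumed earth-station antenna gain, dB
lp = path_loss_db(38_500e3, 4e9)     # assumed slant range ~38,500 km at 4 GHz

pr_dbw = eirp_dbw + gr_db - lp
print(f"path loss = {lp:.1f} dB, received power = {pr_dbw:.1f} dBW")
```

The received power comes out near −100 dBW (a tenth of a nanowatt), which illustrates why earth-station receivers must handle extremely weak signals.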

System Noise Temperature and G/T Ratio

Noise temperature: Noise temperature is a useful concept in
communication receivers, since it provides a way of determining how much
thermal noise is generated by the active and passive devices in the
receiving system. At microwave frequencies a black body with a physical
temperature Tp kelvin generates electrical noise over a wide bandwidth. The
noise power is given by

    Pn = k Tp Bn

where k = Boltzmann's constant = 1.38 × 10⁻²³ J/K = −228.6 dBW/K/Hz, Tp =
the physical temperature of the source in kelvin, and Bn = the noise
bandwidth in which the noise power is measured, in hertz.

Pn is the available noise power (in watts) and will be delivered only to a
load that is impedance matched to the noise source.
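A quick numerical feel for Pn = kTpBn (the 290 K temperature and 36 MHz bandwidth are assumed example values):

```python
# Thermal noise power: Pn = k * Tp * Bn.
import math

k = 1.38e-23        # Boltzmann's constant, J/K
Tp = 290.0          # assumed system noise temperature, K
Bn = 36e6           # assumed noise bandwidth (a 36 MHz transponder), Hz

pn = k * Tp * Bn
pn_dbw = 10 * math.log10(pn)
print(f"Pn = {pn:.3e} W = {pn_dbw:.1f} dBW")   # ~ -128 dBW
```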

C Nuplink = EIRP of earth station transmitter - pathloss+ Tn


Gr
K B

C N  d /l
 EIRPofsate lite  pathloss 
Gr
Tn
K B

C N   N C
C
 N
   C N 
1 1 1


U  ND
U D

The design of the downlink is important because of the
limited power available at the satellite. The uplink design is easier than
the downlink in many cases, since an accurately specified carrier power must
be presented at the satellite transponder, and it is often feasible to use
much higher power transmitters at earth stations than can be used on a
satellite.

If we take the effects of intermodulation (IM) distortion and other
interference into consideration, the overall C/N for the entire link is given by

    1/(C/N)overall = 1/(C/N)U + 1/(C/N)D + 1/(C/N)IM + 1/(C/N)int

where the individual ratios are again taken in linear form.

References:
1. Timothy Pratt, Charles Bostian and Jeremy Allnutt, "Satellite Communications", second edition.
Digital Television Terrestrial Broadcasting
V. Seetharam
ADE RSTI (T), BBSR

Digital television (DTV) refers to the complete digitization of the TV


signal from transmission to reception. By transmitting TV pictures and sounds as
data bits and compressing them, a digital broadcaster can carry more information
than is currently possible with analog broadcast technology. This will allow for the
transmission of pictures with HDTV resolution for dramatically better picture and
sound quality than is currently available, or of several SDTV programs
concurrently. The DTV technology can also provide high-speed data transmission,
including fast Internet access.

Digital Terrestrial Television Broadcasting refers to the
transmission of digital television signals, which will ultimately replace the
existing analog terrestrial broadcasting that has been the most popular mode of
television viewing for over five decades in India.

Internationally, standards have evolved for DTTB, the three major
ones being ATSC, DVB-T, and ISDB-T. Shown below is a typical block
schematic of a digital terrestrial broadcasting set-up. India has adopted the
DVB-T standard for DTTB.

DTV System Diagram

Typical DTTB Block Schematic

The key concepts to be learnt are: baseband digital audio, video and
data signals; encoding of the baseband signals (compression formats, e.g. the
MPEG-2 format used in DVB-T); multiplexing of more than one television channel
into a single transport stream; data scrambling and conditional access; channel
coding to improve the ruggedness of the signal when it is transmitted into free
space; and modulation techniques for transmission of the signal at radio
frequency. All of these take place at the transmitting station, and the reverse
process takes place at the receiving homes through an Integrated Receiver
Decoder (IRD), often referred to as a set-top box, providing the final viewing
experience on the television display device.

The video, audio and other service data are compressed to form
elementary streams. These streams may be multiplexed with the source data
from other programs to form the MPEG-2 Transport Stream (TS). A transport
stream consists of transport packets that are 188 bytes in length.

The FEC encoder takes preventive measures to protect the transport
stream from errors caused by noise and interference in the transmission channel.
It includes Reed-Solomon coding, outer interleaving, and convolutional coding.
The modulator then converts the FEC-protected transport packets into digital
symbols that are suitable for transmission in the terrestrial channel; in DVB-T
this involves QAM and OFDM. The final stage is the up-converter, which converts
the modulated digital signal to the appropriate RF channel. The sequence of
operations at the receiver is the reverse of that at the transmitter.

MPEG-2 Video Compression: Data compression technology makes


digital television broadcasting possible with a smaller frequency bandwidth than
that of an analog system. Among the many compression techniques, MPEG is one
of the most accepted for all sorts of new products and services, from DVDs and
video cameras to digital television broadcasting. The MPEG-2 standard supports
standard-definition television (SDTV) and high-definition television (HDTV) video
formats for broadcast applications.

MPEG video compression exploits certain characteristics of video
signals, namely redundancy of information both inside a frame (spatial
redundancy) and between frames (temporal redundancy). The compression also
removes psychovisual redundancy, based on the characteristics of the human
visual system (HVS): the HVS is less sensitive to errors in detailed texture areas
and in fast-moving images. MPEG video compression also uses entropy coding to
increase data-packing efficiency.

DCT-based intraframe coding


The intraframe coding algorithm begins by calculating the DCT
coefficients over small non-overlapping image blocks (usually 8x8 in size). This
block-by-block processing takes advantage of the image's local spatial correlation
properties. The DCT process produces many 2D blocks of transform coefficients
that are quantized to discard some of the trivial coefficients that are likely to be
perceptually masked. The quantized coefficients are then zigzag scanned to output
the data in an efficient way. The final step in this process uses variable length
coding to further reduce the entropy.
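The zigzag scan described here can be generated in a few lines; a sketch of the conventional scan order (walk the anti-diagonals of the block, alternating direction):

```python
def zigzag_order(n=8):
    """(row, col) visiting order of the zigzag scan for an n x n block:
    group cells by anti-diagonal (r + c), alternating the walk direction."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def zigzag_scan(block):
    """Flatten a 2-D block (list of lists) in zigzag order."""
    return [block[r][c] for r, c in zigzag_order(len(block))]

print(zigzag_order(4)[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

Scanning in this order groups the low-frequency coefficients first, so the trailing run of near-zero high-frequency values compresses well under variable length coding.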

Motion-compensated interframe coding

Interframe coding, on the other hand, exploits temporal
redundancy by predicting the frame to be coded from a previous reference frame.
The motion estimator searches previously coded frames for areas similar to those
in the macroblocks of the current frame. This search results in motion vectors
(represented by x and y components in pixel lengths), which the decoder uses to
form a motion-compensated prediction of the video. The motion-estimator
circuitry is typically the most computationally intensive element in an MPEG
encoder. Motion-compensated interframe coding, therefore, only needs to convey
to the decoder the motion vectors required to predict each block, instead of the
original macroblock data, which results in a significant reduction in bit-rate.
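A minimal sketch of the exhaustive block-matching search described above, using the sum of absolute differences (SAD) as the matching cost; the block size and search radius are illustrative choices, not values from the text:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-size 2-D blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def full_search(ref, cur, top, left, n=4, radius=2):
    """Exhaustive search: motion vector (dy, dx) minimizing the SAD between the
    n x n block of `cur` at (top, left) and candidate blocks in `ref`."""
    block = [row[left:left + n] for row in cur[top:top + n]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            r, c = top + dy, left + dx
            if 0 <= r <= len(ref) - n and 0 <= c <= len(ref[0]) - n:
                cost = sad(block, [row[c:c + n] for row in ref[r:r + n]])
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best[1], best[2]
```

Real encoders use fast search patterns and sub-pixel refinement, but the principle (minimize a matching cost over candidate displacements) is the same.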

Block diagram of an MPEG-2 video compression system


DTV Audio Compression

Unlike video, the three current DTV standards use three different audio
coding schemes (DVB allows both MPEG audio and Dolby AC-3). These audio
standards use a similar technique called perceptual coding and support up to six
channels (right, left, center, right surround, left surround, and subwoofer), often
designated as 5.1 channels. A perceptual audio coder exploits a psycho-acoustic
effect known as masking: when sound is broken into its constituent frequencies,
those components with relatively low energy adjacent to others with significantly
higher energy are masked by the latter and are not audible.

Figure 5: Audio perceptual masking

MPEG-2 Transport Stream and Multiplex

Audio and video encoders deliver elementary stream outputs.


These bit streams, as well as other streams carrying other private data, are
combined in an organized manner and supplemented with additional
information to allow their separation by the decoder, synchronization of picture
and sound, and selection by the user of the particular components of interest.
This is done through packetization, as specified in the MPEG-2 systems layer.
The elementary stream is cut into packets to form a packetized elementary
stream (PES). A PES packet starts with a header, followed by the content of the
packet (payload) and the descriptor. Packetization provides the protection and
flexibility needed for transmitting multimedia streams across different networks.
In general, a PES can only contain data from the same elementary stream.

Elementary, Packetized Elementary, and Transport Streams

In broadcasting applications, a multiplex usually contains
different data streams (audio and video) that might even come from different
programs. Therefore, it is necessary to multiplex them into a single stream, the
transport stream. The figure below shows the process of multiplexing. A transport
stream consists of fixed-length transport packets, each exactly 188 bytes long.
The header contains important information such as the synchronization byte
and the packet identifier (PID). The PID identifies a particular PES within the
multiplex.
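Because every transport packet is exactly 188 bytes and starts with the sync byte 0x47, the header fields are easy to extract; a minimal parsing sketch (the example packet is hypothetical):

```python
def parse_ts_header(packet):
    """Extract the PID and continuity counter from a 188-byte transport packet.
    The 13-bit PID spans the low 5 bits of byte 1 and all of byte 2."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("not a valid transport packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    continuity = packet[3] & 0x0F
    return pid, continuity

# A hypothetical packet carrying PID 0x0100 with continuity counter 7:
pkt = bytes([0x47, 0x01, 0x00, 0x17]) + bytes(184)
print(parse_ts_header(pkt))  # (256, 7)
```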

(a) The process of multiplexing. (b) The structure of a transport packet.

It is necessary to include additional program-specific information (PSI)
within each transport stream in order to identify the relationship between the
available programs and the PIDs of their constituent streams. This PSI consists of
four tables: the program association table (PAT), program map table (PMT),
network information table (NIT), and conditional access table (CAT).

Within a transport stream, the reserved PID of 0 indicates a transport


packet that contains a PAT. The PAT associates a particular PID value with each
program that is currently carried in the transport multiplex. This PID value
identifies the PMT for that particular program. The PMT contains details of the
constituent elementary streams for the program. Program 0 has a special
meaning within the PAT and identifies the PID of the transport packets that
contains the optional NIT. The contents of the NIT are private to the broadcaster
and are intended to contain network-specific information. The CAT is identified by
a PID of 1 and contains information specific to any conditional access or
scrambling schemes that are in use.

Navigating an MPEG-2 Multiplex

MPEG-2 PSI tables only give information concerning the multiplex. The
DVB standard adds complementary tables (DVB-SI) to allow the user to navigate
the available programs and services by means of an electronic program guide
(EPG). DVB-SI has four basic tables and three optional tables to serve this purpose.
The decoder must perform the following main steps in order to find a program or
a service in an MPEG-2 transport multiplex.

1. As soon as the new channel is acquired (synchronized), the decoder must


filter the PID 0 packets to acquire the PAT sections and construct the PAT to
provide the available choice (services currently available on the air) to the
user.

2. Once the user choice is made, the decoder must filter the PID corresponding
to the PMT of this program and construct the PMT from the relevant
sections. If there is more than one audio or video stream, the user should be
able to make another choice.

3. The decoder must filter the PID corresponding to this choice.

The audio/video decoding can now start. The part of this process that is
visible to users is the interactive presentation of the EPG associated with the
network, which can be built by means of the PSI and DVB-SI tables in order to
allow them to easily navigate the available programs and services.

Conditional Access in DTV

DTV services will either be pay-per-view or at least include some elements


that are not freely available to the public. DVB defined a standard for a "Common
Interface for Conditional Access and other Digital Video Broadcasting Decoder
Applications" to enable an Integrated Receiver Decoder (IRD) to de-scramble
programs broadcast in parallel, using different conditional access (CA) systems.
By inserting a PCMCIA module into the common interface, the IRD can
sequentially address different CA systems.

Forward Error Correction

The transmission channels used for digital television broadcasting are,


unfortunately, rather error-prone due to a lot of disturbances (such as noise,
interference, and echoes). However, a digital TV signal, after almost all its
redundancy is removed, requires a very low bit error-rate (BER) for good
performance. A BER of the order of 10^-10 corresponds to an average interval of
some 30 minutes between errors. Therefore it is necessary to take preventive
measures before modulation in order to allow detection and, as far as possible,
correction in the receiver of most errors introduced by the physical transmission
channel. These measures are called, collectively, forward error correction (FEC).
FEC requires that redundant data is added to the original data prior to
transmission, allowing the receiver to use these redundant data to detect and
recover the lost data caused by the channel disturbance.

Forward error correction coding


The figure above illustrates the successive steps of the forward error
correction encoding process used in digital television broadcasting. Strictly
speaking, energy dispersal is not part of the error correction process; its main
purpose is to avoid long strings of 0s or 1s in the transport stream, in
order to ensure the dispersal of energy in the channel. Broadcasting standards
often use the terms inner coding and outer coding. Inner coding operates just
before the transmitter modulates the signal and just after the receiver
demodulates the signal. Outer coding applies to the extreme input and output
ends of the transmission chain. Inner coding is usually convolutional in nature,
with optimal performance under conditions of steady noise interference. Outer
coding is a Reed-Solomon code that is usually more effective for correcting
burst errors.

Reed-Solomon Coding

Outer coding is a Reed-Solomon code that is a subset of BCH cyclic block


codes. As its name implies, in block coding, a block of bits is processed as a whole
to generate the new coded block. It does not have system memory, such that
coding of a data word does not depend on what happens before or after that data
occurs. Reed-Solomon code, in combination with the Forney convolutional
interleaving that follows it, allows the correction of burst errors introduced by
the transmission channel.

Interleaving
The purpose of data interleaving is to increase the efficiency of the
Reed-Solomon coding by spreading over a longer time the burst errors
introduced by the transmission channel, which could otherwise exceed the
correction capacity of the Reed-Solomon coding. Interleaving is normally
implemented using a two-dimensional array buffer, such that the data enter
the buffer in rows and are then read out in columns. The result of the interleaving
process is that a burst of errors in the channel becomes, after deinterleaving, a
few sparsely spaced single-symbol errors, which are more easily correctable. DVB
uses convolutional interleaving, and the interleaving depth is 12.
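A simple rectangular (row/column) interleaver illustrates the burst-spreading principle; note that DVB itself uses a convolutional (Forney) interleaver of depth 12, so this is only a sketch of the idea:

```python
def interleave(data, rows, cols):
    """Write row by row into a rows x cols buffer, read out column by column."""
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse operation: write column by column, read out row by row."""
    assert len(data) == rows * cols
    return [data[c * rows + r] for r in range(rows) for c in range(cols)]

# A burst of 4 consecutive channel errors on the interleaved stream...
tx = interleave(list(range(48)), 4, 12)
rx = [-1 if i < 4 else v for i, v in enumerate(tx)]
# ...becomes widely separated single errors after deinterleaving:
print([i for i, v in enumerate(deinterleave(rx, 4, 12)) if v == -1])  # [0, 12, 24, 36]
```

The spread-out errors now fall into different Reed-Solomon codewords, each within the code's correction capacity.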

Inner Code

The inner coding is a convolutional coding for DVB. Inner coding is an efficient
complement to the Reed-Solomon coding and Forney interleaving as it is designed
to correct random errors.

DVB Convolutional Coding and Puncturing

In DVB, convolutional coding is used, followed by code puncturing.
Typically, a rate-1/2 convolutional coder consists of two FIR filters. These two
filters convolve with the input bit stream, producing two outputs that represent
different parity checks on the input data so that bit errors can be corrected.
Clearly, there are two output bits for every input bit; therefore the code rate is
1/2. Any rate between 1/1 and 1/2 would still allow the transmission of the
original data, but the amount of redundancy would vary. Transmitting only part
of the rate-1/2 output is called puncturing, and it provides any required balance
between bit rate and error-correcting capability. In DVB systems, as well as in
ISDB systems, 1/2, 2/3, 3/4, 5/6 and 7/8 are all possible code rates.

Digital Modulations in DTV

Up to this point there is little difference among the three DTV
systems; they are differentiated mainly by their modulation schemes. This
section briefly describes the principles behind those modulation schemes.

DVB-T OFDM System

A European consortium of public and private sector organizations—the


Digital Video Broadcasting Project—developed the DVB-T OFDM system. The
system uses a large number of carriers per channel, modulated in parallel via an
FFT process, a technique referred to as orthogonal frequency division multiplexing
(OFDM). In the case of multipath, echoes could cause severe interference
to the main signal. Therefore, long symbol duration is necessary to suppress the
echo interference. OFDM can achieve long symbol duration within the same
bandwidth using parallel modulation. In OFDM, symbols are demultiplexed to
modulate many different carriers (a few thousand), each of which occupies a much
narrower bandwidth. Hence, the symbol duration could be increased, though the
total bandwidth remains the same. These carriers are chosen to be orthogonal to
each other so that they are separable in the decoder. The modulated symbols are
frequency multiplexed to form the OFDM baseband signal, which is then up-
converted to RF signal for transmission.

The OFDM transmission system allows the selection of different levels


of QAM modulation. Moreover, a guard interval with selectable width (1/4, 1/8,
1/16, or 1/32 of the symbol duration) separates the transmitted symbols, which
gives the system an excellent capability for coping with multipath distortion.
OFDM modulation also supports a single-frequency network, in which multiple
transmitters in a single coverage area transmit the same data on the same
frequency at the same time. The DVB-T system can operate in either a 2k mode
or an 8k mode. The 2k mode uses a maximum of 1705 carriers, while in the 8k
mode the carrier number is 6817. The 2k mode has short symbol duration, so it
is suitable for a small single-frequency network (SFN) with limited distance
between transmitters. The 8k mode is used in a large SFN where the
transmitters can be up to 90 km apart.
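The link between mode, guard interval and tolerable echo distance can be quantified; a sketch assuming the common 8 MHz DVB-T channel, whose elementary period is 7/64 µs (other channel widths scale these numbers):

```python
def dvbt_timing(fft_size, guard_fraction, elementary_period_s=7 / 64e6):
    """Useful symbol time Tu, guard time Tg, and the maximum echo path-length
    difference (km) the guard interval can absorb, for an 8 MHz DVB-T channel."""
    tu = fft_size * elementary_period_s
    tg = tu * guard_fraction
    echo_km = 300000.0 * tg  # speed of light ~ 3e5 km/s
    return tu, tg, echo_km

tu, tg, echo = dvbt_timing(8192, 1 / 4)  # 8k mode, widest guard interval
print(round(tu * 1e6), round(tg * 1e6), round(echo, 1))  # 896 224 67.2
```

With the widest guard interval, the 8k mode tolerates echoes tens of kilometres long, which is what makes wide-area SFN transmitter spacings feasible.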

Reference: Digital Television Terrestrial Broadcasting Primer by Pan Feng, sourced from TechOnLine.
Satellite Tracking
V. Seetharam
ADE RSTI (T), BBSR
Introduction
Television signal reception is through cable, through terrestrial
transmission, or through satellite transmission. Cable operators normally receive
the satellite signals from different sources and in turn transmit the selected
channels through the cable network. Most of the satellite reception by cable
operators is in C-band. For C-band the downlink signal is in the 4 GHz range and
the receiving parabolic dish antenna is relatively large for installation at
homes. The solution for direct satellite reception at homes is satellite
transmission in the Ku-band, where the downlink signal is in the 11 GHz
region. The uplink signals to the satellite for C-band and Ku-band, however, are
in the 6 GHz and 14 GHz regions respectively. A frequency translation takes
place on board the satellite transponder. A typical block diagram of a satellite
transponder for C-band operation is shown below.

Satellite Transponder

As shown in figure below the uplinked signal (6 GHz) at satellite is


received, amplified and down converted to 4 GHz band and sent back through
filter and power amplifier (TWT). The local oscillator frequency of down
converter is 2225 MHz for C band and Ex-C band transponders. Hence, the
difference between the Uplink and the Down Link signal will be 2225 MHz.

Fig: 1 Block diagram of Satellite Transponder Operating in C-Band

Receiving Satellite Signal :


For receiving a satellite signal the essential requirements are:
1. Satellite receiving parabolic dish antenna (PDA).
2. Feed with low noise block converter (LNBC).
3. Knowledge of the geographical location (latitude and longitude) of the
reception site.
4. Knowledge of the parking slot along the equator of the satellite being tracked.
5. Parameters of the received signal available in the satellite downlink.
Any place on the earth, if it is to be located or marked, needs coordinates
which are unique and representable. There are no fixed references except the
north and south poles. Hence imaginary lines, called longitudes and latitudes,
were drawn; with these lines it is easy to fix the coordinates of any place with
reasonable accuracy.

The imaginary line drawn equidistant from both the north and south poles is
called the equator. Parallel lines were drawn depicting the angular distance north
or south of the equator; these imaginary lines are called latitudes. The equator is
0 degrees latitude. All other latitudes are circles with different diameters, the
equator being the largest. As they are parallel to each other they are also called
parallels. With the equator as the reference, the north and south poles each
subtend a maximum angle of 90 degrees. (Fig. 2 & 3).

Fig. 2 Parallels of Latitudes Fig. 3 Latitude as angular distance

Fig. 4 Longitude as angular distance east Fig. 5 Longitudes


The latitude gives only one reference point and any geographical location
cannot be uniquely defined by its latitude alone. The imaginary lines drawn from
north to south along the earth surface with the lines touching both the poles as
shown in fig. 4& 5 are called Longitudes. The reference point i.e. 0 degree longitude
is taken as the line passing through the observatory at Greenwich, England. The
meridians or longitudes east of Greenwich are called east meridians and those lying
in the west are called west meridians. The maximum is 180 degrees on both sides.
Any geographical location on the earth can be uniquely defined by specifying the
latitude and longitude of the place.

Azimuth and Elevation


For receiving a satisfactory signal from the satellite the dish antenna
should be pointed towards the satellite accurately. For that we need to know the
azimuth and elevation of a particular satellite from our place.

The azimuth and elevation are angles which specify the direction of a
satellite from a point on the earth's surface. In layman's terms, the azimuth is
the east-west movement of the dish and the elevation the north-south
movement.

Both the azimuth and elevation of a dish can be affected by three factors
for geo-stationary satellites. They are

1. The longitude of the satellite.


2. The latitude of the place.
3. The longitude of the place.
Calculation of Angle of Elevation

    Elevation = tan⁻¹ [ (Cos D · Cos φ − r/R) / √(1 − (Cos D · Cos φ)²) ]

where r = radius of the earth (6367 km)
R = radius of the synchronous orbit (42,165 km)
φ = latitude of the earth station
D = difference in longitude of the earth station and the satellite (θr − θs).

Calculation of Azimuth

    Azimuth = 180° ± tan⁻¹ (Tan D / Sin φ)

where D = θr − θs in degrees
φ = latitude of the given site in degrees
θr = longitude of the given site in degrees
θs = longitude of the satellite.
The plus sign applies when the longitude of the satellite is less than that of the
earth station, the minus sign when it is greater.
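The two formulas can be combined into a short calculator; a sketch for a station in the northern hemisphere, which reproduces the worked exercise that follows (20°N 75°E station, satellite at 93.5°E):

```python
import math

def look_angles(lat_deg, lon_deg, sat_lon_deg, r=6367.0, R=42165.0):
    """Elevation and azimuth (degrees) of a geostationary satellite as seen
    from an earth station in the northern hemisphere."""
    phi = math.radians(lat_deg)
    D = math.radians(lon_deg - sat_lon_deg)
    cc = math.cos(D) * math.cos(phi)
    el = math.degrees(math.atan((cc - r / R) / math.sqrt(1 - cc * cc)))
    off = math.degrees(math.atan(math.tan(abs(D)) / math.sin(phi)))
    # plus when the satellite is west of the station, minus when it is east
    az = 180.0 + off if sat_lon_deg < lon_deg else 180.0 - off
    return el, az

el, az = look_angles(20.0, 75.0, 93.5)
print(round(el, 1), round(az, 1))  # 58.5 135.6
```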
Polarization
The wave radiated by an antenna consists of an electric field component
and a magnetic field component. These two components are orthogonal and
perpendicular to the direction of propagation of the wave. By convention the
polarisation is defined by the plane of propagation of electrical field component.
That means if the electrical field component is travelling in the vertical plane it is
called vertically polarised. If the wave contains both vertical and horizontal
components it is called circular or elliptical. The types of polarisation are:
1. Linear polarisation

a) Vertical (V) b) Horizontal (H)

2. Circular or Elliptical

a) Right hand circular (RHCP) b) Left hand circular (LHCP).

Exercise on Calculation of Look Angles

Find the look angles of an earth station at 20°N 75°E for the satellite parked at 93.5°E.

[Figure: earth station E, earth centre O and satellite S, with slant range d,
elevation El at E, earth radius 6378 km and satellite height above the equator
35786 km]

Assumptions:

Radius of the earth is constant = 6378 km

Height of the satellite above the equator = 35786 km.

Earth station is in the northern hemisphere.

Formulas used: plane trigonometry.

For a triangle with angles A, B, C and opposite sides a, b, c:

    a / Sin A = b / Sin B = c / Sin C = 2R

    Cos A = (b² + c² − a²) / 2bc

In the geometry of the figure, the central angle γ, the nadir angle β and the
elevation El satisfy γ + β + El = 90°.
Spherical Trigonometry

    Cos γ = Cos Δ · Cos φ

    AZ = 180° ± tan⁻¹ (Tan Δ / Sin φ)

The plus sign is taken when the longitude of the satellite is less than the
longitude of the earth station.

The minus sign is taken when the longitude of the satellite is greater than the
longitude of the earth station.

φ = latitude of the earth station.
Δ = difference in longitudes of the earth station and satellite (absolute value)
  = |λE − λs|
Problem

    Δ = |75° − 93.5°| = 18.5°
    Tan Δ = Tan 18.5° = 0.3346
    φ = 20°
    Sin φ = Sin 20° = 0.3420

    tan⁻¹ (Tan Δ / Sin φ) = tan⁻¹ (0.3346 / 0.3420) = tan⁻¹ 0.97829 = 44.37°

As λs > λE, the minus sign is taken:

    AZ = 180° − 44.37° = 135.63°


[Figure: triangle E-O-S with angle (90° + El) at the earth station E, central
angle γ at the earth centre O, nadir angle β at the satellite S, OE = 6378 km
and OS = 42164 km]

    Cos γ = Cos Δ · Cos φ = Cos 18.5° · Cos 20° = 0.8911
    γ = 26.98°  (central angle)

    d² = 6378² + 42164² − 2 (6378)(42164)(0.8911)
       = 1818481779 − 479272774
       = 1339209005
    d = 36595.2 km  (slant range)

From the sine rule in triangle EOS:

    d / Sin γ = 42164 / Sin (90° + El) = 42164 / Cos El

    Cos El = 42164 · Sin γ / d = 42164 × 0.453679 / 36595.2 = 0.5227

    El = Cos⁻¹ 0.5227 = 58.49°

    γ + β + El = 90°  (β = nadir angle)
    β = 90 − 58.49 − 26.98 = 4.53°

Answer:

Look angles:
Azimuth = 135.63°
Elevation = 58.49°

Limits of Visibility

There will be east and west limits on the geostationary arc visible
from any given earth station. The geographical co-ordinates of the earth station
and the antenna elevation set the limits of visibility.

[Figure: earth station E, earth centre O, satellite S and sub-satellite point SS,
with angle (90° + El) at E, earth radius 6378 km and satellite height 35786 km]
Theoretical elevation for the lower limit is zero, but in practice to avoid
reception with excessive noise from the earth, a value of 5o is chosen as the lower
limit.
S = Satellite
SS = Sub-satellite point
E = Earth station
γ = Central angle, β = Nadir angle
El = Elevation angle

    γ + β + El = 90°

Radius of earth is assumed constant = 6378 km


Height of the Satellite above Equator = 35786 km.

From triangle EOS:

    Sin (90° + El) / OS = Sin β / OE

When El = 5°:

    Sin 95° / 42164 = Sin β / 6378

    Sin β = (6378 / 42164) · Cos 5° = 0.15069
    β = Sin⁻¹ 0.15069 = 8.67°

Central angle γ = 90 − (5 + 8.67) = 76.33°

Spherical Trigonometry

    Cos γ = Cos Δ · Cos φ
    or Cos Δ = Cos γ / Cos φ

For the earth station at 20°N, 75°E:

    Cos Δ = Cos 76.33° / Cos 20° = 0.2515
    Δ = ±75.43°
    λs = 75° ± 75.43° = 150.43°E or 0.43°W

Thus, satellites within the geostationary arc from 150.43°E to 0.43°W
can be viewed from the earth station at 20°N 75°E.
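The same steps can be wrapped into a small calculator for any station; a sketch assuming the 5° minimum elevation used above:

```python
import math

def visibility_limits(lat_deg, lon_deg, min_el_deg=5.0,
                      r_earth=6378.0, r_orbit=42164.0):
    """West and east longitude limits of the visible geostationary arc
    for a given earth station and minimum elevation angle."""
    el = math.radians(min_el_deg)
    # Nadir angle at the satellite for the limiting elevation:
    beta = math.asin(r_earth / r_orbit * math.cos(el))
    # Central angle between station and sub-satellite point:
    gamma = math.pi / 2 - el - beta
    # Longitude difference from spherical trigonometry: cos(gamma) = cos(dlon)*cos(lat)
    dlon = math.degrees(math.acos(math.cos(gamma) / math.cos(math.radians(lat_deg))))
    return lon_deg - dlon, lon_deg + dlon

west, east = visibility_limits(20.0, 75.0)
print(round(west, 1), round(east, 1))  # -0.4 150.4
```

This matches the worked answer of 0.43°W to 150.43°E to within the rounding of the hand calculation.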
MOBILE TV THROUGH DIGITAL TERRESTRIAL TV NETWORK
Gopal Kumar, DDE
RSTI(T) BBSR
Introduction :
Mobile Television is the television which is watched on small handheld
devices. Mobile TV may be a pay TV through Mobile Telecommunication Network
over mobile phone carrier or through free to air terrestrial television network
operating over television carriers. It can also be in IPTV streaming video through
wireless network.

The transmission (broadcasting) of the mobile TV signal may take place in
any of the above modes. For each mode, a separate transmission standard has
been developed. Further, to receive the mobile TV signal the receiver should
be equipped with special software and hardware. Most 3G-compatible
mobile phone sets and some advanced handheld devices come with the
facilities required for mobile TV reception.

There are various standards which specify the structure of mobile TV
transmission in terrestrial mode over TV channel frequencies, like

ATSC – M/H - USA

T-DMB - South Korea

DMB-T/H - China & Hong Kong

DVB-H - India, European Union

DVB-H specifies the way of carrying multimedia services over a DTT-TV
(Digital Terrestrial Transmission TV) network. As DVB-H uses the basic
infrastructure of DTT-TV with some modifications, a brief overview of DTT is
given before explaining DVB-H.

DTT (DVB-T)
Fig-1 shows the block diagram of the DVB-T (DTT) system. As shown in the
figure, the DTT transmission system can be seen as a cascade of different
functional blocks having specific functions. These blocks are:

1) MPEG-2 Encoding
The programme streams, either in analog or digital video form, are encoded
with the MPEG-2 encoding method to achieve professional-quality high
compression.
2) Multiplexing
The encoded programme streams are multiplexed to give a single stream
carrying the data of multiple programmes in a structured format specified by
the MPEG-2 standard.

3) Adaptation & Randomization
The MPEG-2 encoded stream carrying multiple programmes passes through
the process of adaptation and randomization for energy dispersal, which is
essential for OFDM transmission.

4) Outer Coding using Reed-Solomon (RS) Coding
In RS block coding, a 188-byte transport packet is converted to a 204-byte
packet by adding 16 parity bytes. The added parity bytes provide the capability
to correct up to 8 erroneous bytes within the 188-byte packet.

5) Outer Interleaver
A convolutional interleaver is used for the dispersal of the bytes of one
packet (204 bytes) over a length of 204 × 12 bytes of the data stream. By this
process a burst of errors gets distributed over bytes from different packets, so
that the error correction capability of the RS coding is optimally utilized.

6) Convolutional Coding (Inner Coding)
Punctured convolutional coding is used for the inner coding. According
to the channel condition and bandwidth, the code rate can be selected at the
coder.
7) Inner Interleaver and mapping to QAM

 The input is demultiplexed into v sub-streams, where v = 2 for QPSK,


v = 4 for 16-QAM, and v = 6 for 64-QAM.
 Each sub-stream from the demultiplexer is processed by a separate bit
Interleaver
 The block size is the same for each Interleaver, but the interleaving
sequence is different in each case
 Bit Interleaver – 126 bit block size. The block interleaving process is
therefore repeated exactly twelve times per OFDM symbol of useful
data in the 2K mode
The purpose of the symbol interleaver is to map v-bit words onto the
1512 (2K mode) or 6048 (8K mode) active carriers per OFDM symbol.
The symbol interleaver acts on blocks of 1512 (2K mode) or 6048 (8K
mode) data symbols.
8) OFDM Frame Structure & Creation of OFDM Signal

The transmitted signal is organized in frames.

Each frame has a duration TF and consists of 68 OFDM symbols.
Each OFDM symbol has K = 6817 carriers (8K mode),
K = 1705 carriers (2K mode), or
K = 3409 carriers (4K mode, the mode suited to DVB-H).

Each symbol is generated from the QAM-mapped values of all carriers for a
particular symbol duration, by using the IFFT. A guard interval is added at the
beginning of the symbol generated by the IFFT, so the actual symbol time is
Ts = Tu + Tg (useful time plus guard interval).
The guard interval, as a fraction of the symbol time, may be 1/32, 1/16, 1/8 or 1/4.
In addition to the video data on the 1512 (2K mode) data carriers, a symbol contains:
Scattered pilots: for channel estimation
Continual pilots: for channel estimation and synchronization
TPS carriers: to carry the channel parameters
The IFFT output after guard-interval insertion is still in digital form; it is
converted to analog for transmission by passing it through a D/A (digital-to-
analog) converter. A channel converter then up-converts the modulated signal
to the transmission channel frequency.
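The symbol-construction step (IFFT of the QAM-mapped carriers plus a cyclic-prefix guard interval) can be sketched at toy scale, with 8 carriers instead of 1705/6817:

```python
import cmath

def ofdm_symbol(carriers, guard_fraction=0.25):
    """One OFDM symbol: inverse DFT of the QAM-mapped carrier values, with a
    cyclic prefix of guard_fraction * Tu prepended as the guard interval."""
    n = len(carriers)
    time = [sum(carriers[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
            for t in range(n)]
    guard_len = int(n * guard_fraction)
    return time[n - guard_len:] + time  # cyclic prefix + useful part

sym = ofdm_symbol([1, -1, 1, -1, 1, -1, 1, -1], guard_fraction=0.25)
print(len(sym))  # 10
```

Because the guard interval is a copy of the tail of the useful symbol, an echo shorter than the guard interval shifts only cyclically repeated samples into the FFT window at the receiver.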
DVB-H based Mobile TV

The mobile TV handset and its reception conditions place certain specific
requirements on the transmission side, and these requirements should be
addressed by the standard/system developed for mobile TV transmission. These
requirements are:

a. As handheld devices are battery operated, the transmission system shall
provide the opportunity for the handheld device to switch off its reception chain
intermittently without hampering the continuous display of video on the device.
b. Services are expected to be delivered in a hostile environment suffering from
high levels of noise, particularly man-made noise. The transmission system
should provide a high level of protection to the data through strong error
coding, so that the effect of the noise can be mitigated at the receiver
(handheld device) at the time of decoding.

To meet above requirement the DVB-H has made some


modification in DVB-T system at different layers. The following figure shows the
Block Diagram of DVB-H system and modifications in DVB-T System to adopt
DVB-H over DVB-T platform. These modifications are:

At the Link Layer:
Time slicing, in which the data corresponding to a particular programme is
transmitted in bursts at certain intervals. This gives the handheld device the
opportunity to switch off its receiving chain after receiving a burst of data
until the arrival of the next burst. However, information about the arrival of
the next burst must be made available to the receiver through the present burst
of data.
FEC for Multi-Protocol Encapsulated data (MPE-FEC) provides additional
protection to the data packets for Mobile TV.
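The battery saving that time slicing enables can be estimated with a short sketch; the burst and interval durations below are illustrative assumptions, not DVB-H requirements:

```python
# Rough sketch of the power saving time slicing makes possible: the receiver
# front end is on only for each burst (plus some re-synchronization overhead).
# The 200 ms burst every 2 s is an illustrative assumption.

def power_saving(burst_ms, interval_ms, overhead_ms=0.0):
    """Fraction of time the receiving chain can stay switched off."""
    on_time = burst_ms + overhead_ms
    return max(0.0, 1.0 - on_time / interval_ms)

print(f"receiver off {power_saving(200, 2000):.0%} of the time")
```

With a 200 ms burst every 2 s the receiving chain can be off roughly 90 % of the time, which is the point of the technique.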

At the Physical Layer:
 DVB-H signalling in the TPS bits is included to enhance and speed up service
discovery.
 An additional 4K mode is adopted as a trade-off between mobility and
single-frequency networking, giving an additional option for the number of
carriers.
 An in-depth symbol interleaver is used with the 2K and 4K modes to improve
robustness in the mobile environment.
 The signalling corresponding to the 4K mode and the in-depth interleaver is
included in the TPS.

Features to be available in the handheld device:

BATTERY: A powerful battery is required to handle the continuous processing
and the higher processing load.
MEMORY: The streaming of video imposes a high buffer requirement, hence more
memory is required in the device.
User Interface: An LCD display with good clarity, and software to handle the
operation of the device.
Processing power: The processor within the mobile handset should be capable
of processing the video.
Programme content
The content to be delivered should also be moulded to fit the constraints of
the handset: it should be of shorter duration and lower resolution than
standard TV.
SITE MASTER
Mamta Patra-SEA, RSTI(T), BBSR

The Site Master is a hand-held cable and antenna analyzer, designed for
measuring Return Loss, SWR and Cable Loss of cable and antenna systems from
25 MHz to 4 GHz. Distance-To-Fault (DTF) measurements can be used to locate
the precise position of a fault within the feed line system.
For accurate results, the Site Master must be calibrated before making
any measurements. The Site Master must be re-calibrated whenever the setup
frequency changes, the temperature exceeds the calibration temperature range or
when the test port extension cable is removed or replaced.
There are two methods of calibration –

1) Flex Cal : Flex Cal is a broadband frequency calibration that remains valid
if the frequency is changed.
2) OSL Cal : An OSL calibration is an Open, Short and Load calibration for a
selected frequency range, and is no longer valid if the frequency is changed. The
default calibration mode is OSL.
With either calibration method, the Site Master may be calibrated
manually with Open, Short, Load (OSL) calibration components.

Calibration Verification -
1) When an OPEN is connected, a trace will be displayed between 0-20 dB.
2) When the Site Master is measuring an equivalent OPEN, a trace will be
displayed between 0-20 dB.
3) When the Site Master is measuring an equivalent LOAD, a trace will be
displayed between 0-50 dB.
Cable and Antenna Analyzer Measurements
In wireless communication, the transmit and receive antennas are
connected to the radio through a transmission line. This transmission line is usually
a coaxial cable or waveguide. This connection system is referred to as a
transmission feed line system.
The performance of a transmission feed line system may be affected by
excessive signal reflection and cable loss. Signal reflection occurs when the
RF signal reflects back due to an impedance mismatch or a change in impedance
caused by excessive kinking or bending of the transmission line. Cable loss is
caused by attenuation of the signal as it passes through the transmission line
and connectors. Signal attenuation occurs mostly in lengthy cables and
distribution networks. Power loss due to dissipation at the input of TV
receivers is quite small in comparison to cable losses.
Energy is dissipated in coaxial cable due to:
i) I²R losses: The characteristic impedance, which is resistive (R), of most
cables is around 75 ohms. Hence a significant I²R loss occurs when the signal
current I flows through the cable conductors.
ii) Dielectric losses: A capacitor is formed whenever an insulator, i.e. a
dielectric, separates two conductors between which a difference of potential
can exist.
(Figure-2) Typical Transmission Feed Line System
iii) Skin effect losses: The effective resistance offered by conductors at
radio frequencies is considerably more than the ohmic resistance measured with
direct current. This is because of an action known as the skin effect. Both
dielectric and skin-effect losses increase in proportion to the square root of
the signal frequency, i.e. √f.

iv) Temperature effect: Cable losses are also affected by temperature
variations; the attenuation increases at higher temperatures. Since summer and
winter temperatures differ widely, noticeable variations occur in signal level.
To verify the performance of the transmission feed line system and
analyse these problems, three types of line sweeps are required.
Cable Loss Measurement: Measures the energy absorbed, or lost, by the
transmission line in dB/metre or dB/ft. Different transmission lines have
different losses; the loss is frequency- and distance-specific. The higher the
frequency or the longer the distance, the greater the loss.

Measurement Procedure -

Step-1. Connect one end of the test port cable to the port labelled RF
Out/Reflection.
Step-2. Press the MODE function hard key, select Cable Loss – One Port and
press ENTER.
Step-3. Enter the frequency range:
a) Press the F1 soft key and use the number keys to enter the lower frequency
(here F1 = 100 MHz).
b) Press the F2 soft key to enter the upper frequency (here F2 = 250 MHz).
Step-4. Calibrate the instrument.
Step-5. a) Press the MEAS/DISP function key. b) Press Fixed CW and select Off.
Set the amplitude scale (here Top = 0 dB, Bottom = 10 dB).
Step-6. Connect the test port cable to the line to be tested. The average
cable loss value over the swept frequency range is displayed in the bottom
part of the display.

(Figure-3) Cable loss measurement: 517-point sweep (2.62 s) from 100 MHz to
250 MHz; markers M1 = 2.73 dB at 224.97 MHz, M2 = 2.34 dB at 194.18 MHz,
M3 = 2.16 dB at 164.53 MHz, M4 = 1.73 dB at 135.46 MHz.
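As a small illustration of the averaging behind the displayed value, the sketch below treats the four marker readings from Figure-3 as stand-ins for the full 517-point trace (an assumption made purely for illustration):

```python
# Hedged sketch of the "average cable loss" readout: the instrument averages
# the per-point loss readings of the sweep. The four marker values from
# Figure-3 stand in here for the full 517-point trace.

marker_losses_db = [2.73, 2.34, 2.16, 1.73]  # M1..M4 from Figure-3, in dB

def average_loss_db(trace_db):
    """Arithmetic mean of the per-point loss readings, in dB."""
    return sum(trace_db) / len(trace_db)

print(f"avg loss over markers: {average_loss_db(marker_losses_db):.2f} dB")
```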

Standing wave ratio:

(Figure) SWR measurement over the same 100 MHz to 250 MHz sweep; amplitude
scale Top = 65.53, Bottom = 1.00; markers M1 = 3.29 at 224.97 MHz, M2 = 3.81
at 194.18 MHz, M3 = 4.12 at 164.53 MHz, M4 = 5.11 at 135.46 MHz.
Return Loss Measurement: Measures the reflected power of the system in
decibels (dB). This measurement can also be taken in Standing Wave Ratio (SWR)
mode, which expresses the same mismatch as the ratio of the maximum to the
minimum voltage along the line. Return loss measurement verifies the
performance of the transmission feed line system with the antenna connected at
the end of the transmission line.

Procedure:
Step-1. Press the MODE key.
Step-2. Select Freq-Return Loss using the Up/Down arrow key and press ENTER.
Step-3. Set the start and stop frequencies.
Step-4. Calibrate the Site Master.
Step-5. Connect the cable to the Site Master. A trace will be displayed on the
screen when the Site Master is in sweep mode.

(Figure) Return loss measurement (Cal ON, 517-point sweep, 1.39 s), 0 dB to
20 dB over 100 MHz to 250 MHz; markers M1 = 5.37 dB at 224.97 MHz, M2 = 4.66 dB
at 194.18 MHz, M3 = 4.24 dB at 164.53 MHz, M4 = 3.44 dB at 135.46 MHz.

Return loss = -20 log [(VSWR - 1)/(VSWR + 1)]
For M1, with VSWR = 3.29: Return loss = -20 log (2.29/4.29) ≈ 5.4 dB,
consistent with the 5.37 dB marker reading.
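The return-loss formula is plain arithmetic and can be checked independently of the instrument; a minimal sketch (exact arithmetic for VSWR = 3.29 gives about 5.45 dB, close to the 5.37 dB marker reading):

```python
import math

def return_loss_db(vswr):
    """Return loss in dB from VSWR: RL = -20*log10((VSWR - 1)/(VSWR + 1))."""
    gamma = (vswr - 1.0) / (vswr + 1.0)  # magnitude of the reflection coefficient
    return -20.0 * math.log10(gamma)

def vswr_from_return_loss(rl_db):
    """Inverse: VSWR = (1 + |G|)/(1 - |G|), with |G| = 10**(-RL/20)."""
    gamma = 10.0 ** (-rl_db / 20.0)
    return (1.0 + gamma) / (1.0 - gamma)

# Marker M1 of the SWR sweep read VSWR = 3.29:
print(f"VSWR 3.29 -> return loss {return_loss_db(3.29):.2f} dB")
```

The two functions are inverses, so either reading (SWR mode or return-loss mode) determines the other.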

Distance-To-Fault (DTF) Measurement:

Reveals the precise fault location of components in the transmission line
system. This test helps to identify specific problems in the system, such as
connector transitions, jumpers, kinks in the cable or moisture intrusion.
It can be performed in DTF-Return Loss mode or DTF-SWR mode.

Return Loss - System Sweep: A measurement made when the antenna is connected
at the end of the transmission line. This measurement provides an analysis of
how the various components of the system are interacting and provides an
aggregate return loss of the entire system.
Procedure - Return Loss Mode:
Step-1. Press the MODE key.
Step-2. Select DTF-Return Loss using the Up/Down arrow key and press ENTER.
Step-3. Connect the Test Port Extension cable to the RF port and calibrate the
Site Master.
Step-4. Connect the Device Under Test to the Test Port Extension cable. A
trace will be displayed on the screen.
Step-5. Press the FREQ/DIST key.
Step-6. Set the D1 and D2 values. Or,
Step-7. Press the DTF Aid soft key and select the appropriate Cable Type.

(Figure) DTF-Return Loss measurement (517-point sweep, 2.76 s), 0 dB to 20 dB
over 0 m to 10 m; markers M1 = 11.36 dB at 0.4 m, M2 = 28.50 dB at 2.0 m.

Return loss = -20 log [(VSWR - 1)/(VSWR + 1)]
For M1, with VSWR = 1.74: Return loss = -20 log (0.74/2.74) ≈ 11.36 dB.

Distance To Fault - Load Sweep: A measurement made with the antenna
disconnected and replaced with a 50 Ω precision load at the end of the
transmission line. This measurement allows analysis of the various components
of the transmission feed line system in DTF mode.

Procedure - DTF-SWR Mode -
Step-1. Press the MODE key.
Step-2. Select DTF-SWR using the Up/Down arrow key and press ENTER.
Step-3. Follow the same procedure as for DTF-Return Loss mode, above.

(Figure) DTF-SWR measurement (517-point sweep, 2.73 s), scale 1.00 to 3.00,
from D1 = 0.00 m to D2 = 10 m; markers M1 = 1.74 at 0.4 m, M2 = 1.08 at 2.0 m.
Antenna Subsystem Return Loss Test:

The antenna subsystem return loss measurement verifies the performance of the
transmit and receive antennas. This measurement can be used to analyze the
performance of an antenna before installation.

Resolution:
There are three settings for the number of data points (130, 259 and 517)
available in the Site Master. The factory default is 259 data points.
Increasing the number of data points increases both the measurement accuracy
and the length of transmission line that can be measured.

Step size = (1.5 × 10⁸ × Vp) / F

where Vp = relative propagation velocity of the cable, and
F = stop frequency minus start frequency (Hz).

The maximum distance is:

Dmax = Step size × (number of data points – 1)
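The two formulas above can be checked with a short sketch; the relative propagation velocity Vp = 0.88 and the 100-250 MHz sweep are example inputs, not values taken from the instrument:

```python
# Sketch of the Site Master DTF resolution formulas.
# Vp = 0.88 is an assumed cable velocity factor, for illustration only.

def dtf_step_size_m(vp, f_start_hz, f_stop_hz):
    """Step size in metres: (1.5e8 * Vp) / F, with F = stop - start in Hz."""
    return (1.5e8 * vp) / (f_stop_hz - f_start_hz)

def dtf_max_distance_m(vp, f_start_hz, f_stop_hz, n_points):
    """Maximum measurable distance: step size * (number of data points - 1)."""
    return dtf_step_size_m(vp, f_start_hz, f_stop_hz) * (n_points - 1)

print(f"step {dtf_step_size_m(0.88, 100e6, 250e6):.3f} m, "
      f"Dmax {dtf_max_distance_m(0.88, 100e6, 250e6, 517):.1f} m")
```

Note the trade-off the formulas encode: a wider sweep gives finer distance resolution but a shorter maximum distance, and more data points extend the reach.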
SPECTRUM ANALYZER
Mamta Patra-SEA, RSTI (T), BBSR

A spectrum analyser is a device for analysing the frequency, span and
amplitude of a signal. The Site Master in spectrum analyser mode can be used
to measure field strength, occupied bandwidth, adjacent channel power ratio
(ACPR), carrier-to-interference ratio (C/I), interference, channel power,
AM/FM modulation, etc.

Spectrum Analysis Function: What is a Spectrum Analyzer?

A swept tuned receiver, very sensitive and versatile:
• Very sensitive – can view very small amplitude signals.
• Versatile – 1) can view a wide variety of modulation types;
2) can measure simple to complex parameters.

1) Frequency range: 25 MHz to 4 GHz.
To select the frequency range - Step 1: Press the FREQ/DIST key.

 Center Freq - the frequency at the center of the display.
 Span - the difference between the start and stop frequencies.
 Start Freq - the left side of the display.
 Stop Freq - the right side of the display.
 Signal Standard - translates a technology to a frequency.
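The four frequency settings are related by simple arithmetic; a minimal sketch (the 200-210 MHz values are examples only):

```python
# Center/span and start/stop are two views of the same sweep window.

def center_span(start_hz, stop_hz):
    """Center frequency and span from the start/stop pair."""
    return (start_hz + stop_hz) / 2.0, stop_hz - start_hz

def start_stop(center_hz, span_hz):
    """Start/stop pair from center frequency and span."""
    return center_hz - span_hz / 2.0, center_hz + span_hz / 2.0

# A 200-210 MHz sweep corresponds to center 205 MHz, span 10 MHz.
c, s = center_span(200e6, 210e6)
print(c / 1e6, "MHz center,", s / 1e6, "MHz span")
```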

2) Amplitude: Selecting the amplitude range:
Reference Level - the setting of the top line of the display
(-120 dBm to +20.0 dBm).
Scale - changes the units per division of amplitude (1 dB/div to 15 dB/div);
units can be changed from dB to Watts to Volts.
Attenuation/Preamp - Auto/Manual/Dynamic/Pre-amp On/Off:
Auto-Atten - changes the attenuation as the Reference Level changes.
Manual - sets the input attenuator directly (0 to 51 dB).
Dynamic - sets the input attenuator so that it is dynamically coupled to the
input signal, and turns the preamp on or off as necessary.
Pre-Amp On - improves the noise level and sensitivity.
RL-Offset - compensates for external attenuators.
Field Strength: The magnitude of an electric, magnetic or electromagnetic
field at a given point is known as the field strength. It is measured in
amplitude units per unit length (metres); the field strength can be displayed
in dBm/m², dBV/m, dBmV/m or dBμV/m.

Procedure –
Step-1: Press the MODE key.
Step-2: Select the Spectrum Analyzer mode and press ENTER.
Step-3: Select the measurement frequency by using the Start & Stop keys.
Step-4: Upload the antenna factors.
Step-5: Press the MEAS/DISP key & select the Measure soft key.
Step-6: Select Field-Strength.

(Figure-4) Field strength measurement from 200 MHz to 210 MHz: Ref. level
= 115.76 dBμV/m = 0.00 dBm/m²; RBW = 10 kHz, VBW = 10 kHz; scale = 10 dB/div;
attenuation = 51 dB; unit = dBμV/m. Noise markers: M1 = 64.86 dBμV/m at
203.25 MHz, M2 = 11.94 dBμV/m at 208.75 MHz. Regular markers: M1 = 104.71
dBμV/m at 203.25 MHz, M2 = 52.15 dBμV/m at 208.75 MHz.
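The paired reference-level readouts (115.76 dBμV/m = 0.00 dBm/m²) follow from the plane-wave relation S = E²/Z₀ with Z₀ ≈ 377 Ω; a sketch of that conversion, as an illustration of the physics rather than the instrument's internal routine:

```python
import math

Z0 = 120.0 * math.pi  # free-space wave impedance, about 377 ohm

def dbuv_m_to_dbm_m2(e_dbuv_m):
    """Power density in dBm/m^2 from field strength in dBuV/m, via S = E^2/Z0."""
    e_v_per_m = (10.0 ** (e_dbuv_m / 20.0)) * 1e-6  # dBuV/m -> V/m
    s_w_per_m2 = e_v_per_m ** 2 / Z0                # plane-wave power density
    return 10.0 * math.log10(s_w_per_m2 / 1e-3)     # W/m^2 -> dBm/m^2

# The reference-level readout pairs 115.76 dBuV/m with 0.00 dBm/m^2:
print(f"{dbuv_m_to_dbm_m2(115.76):.2f} dBm/m^2")
```

In effect the conversion is a fixed offset of about 115.76 dB between the two unit scales.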

Occupied Bandwidth (OBW)

It is a common measurement performed on radio transmitters, which calculates
the bandwidth containing the total integrated power occupied in a given signal
bandwidth. Two different methods of calculation are:

1) Percent of Power Method: The occupied frequency bandwidth is calculated as
the bandwidth containing the specified percentage of the transmitted power.
2) X dB Down Method: The occupied bandwidth is defined as the bandwidth
between the upper and lower frequency points at which the signal level is X dB
below the peak carrier level.

Figure – 5 (a) Percent of Power Method: Pos. peak, Ref. level = 0.00 dBm,
RBW = 3 kHz, VBW = 1 kHz, sweep time = 6.1 s, span 200 MHz to 210 MHz;
% of power = 99 %, measured occupied BW = 5.85 MHz, measured dB down = 7.5 dB.
Figure – 5 (b) X dB Down Method: same span; dB value = 3 dB, measured
occupied BW = 75 kHz, measured % of power = 64.7 %.
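The percent-of-power method can be sketched on a sampled spectrum: trim bins from both band edges until just the requested fraction of total power remains between the cut points. The toy spectrum below is an illustrative assumption, not instrument data:

```python
def occupied_bw_percent(freqs_mhz, powers_mw, fraction=0.99):
    """Percent-of-power OBW sketch: trim bins from both band edges until just
    `fraction` of the total power remains between the two cut points."""
    total = sum(powers_mw)
    tail = total * (1.0 - fraction) / 2.0  # power allowed outside each edge
    lo, hi = 0, len(powers_mw) - 1
    acc_lo = acc_hi = 0.0
    while lo < hi and acc_lo + powers_mw[lo] <= tail:
        acc_lo += powers_mw[lo]
        lo += 1
    while hi > lo and acc_hi + powers_mw[hi] <= tail:
        acc_hi += powers_mw[hi]
        hi -= 1
    return freqs_mhz[hi] - freqs_mhz[lo]

# Toy spectrum: power concentrated in the three centre bins.
freqs = [200.0 + 0.5 * i for i in range(9)]   # bin centres in MHz
powers = [1, 1, 1, 10, 10, 10, 1, 1, 1]       # bin powers in mW
print(occupied_bw_percent(freqs, powers, 0.90))  # -> 3.0 (one bin trimmed per edge)
```

Raising the fraction widens the reported bandwidth, which is why 99 % is the usual regulatory choice.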

Procedure:
Step-1: Connect the antenna to the Spectrum Analyzer & press the MODE key.

Step-2: Press the FREQ/DIST key.

Step-3: Press the AMPLITUDE key & select the Ref. Level.

Step-4: Select the Atten/Preamp key.

Step-5: Select the Scale key, 10dB/div.

Step-6: Press the MEAS/DISP key & select the Bandwidth & set the resolution &
video bandwidth.

Step-7: Press the Measure & OBW keys:
i) Press Method. ii) Press % of power. iii) Press dB down.

Channel Power Measurement

Channel power measurement is one of the most common measurements for a radio
transmitter. This test measures the output power, or channel power, of a
transmitter over a specified frequency range in a specific time interval.
Power measurements can indicate system faults, which may be in the power
amplifiers or in the filter circuits.
Channel Power measurements can be used to:
 Validate transmitter performance.
 Comply with FCC or local regulations.
 Keep overall system interference at a minimum.

Figure – 6 Channel power measurement: RBW = 10 kHz, VBW = 3 kHz, span
203.25 MHz to 208.75 MHz; integration BW = 7 MHz, channel span = 7 MHz.
Without antenna: channel power = -46.55 dBm, density = -115.04 dBm/Hz.
With antenna: channel power = -2.51 dBm, density = -70.88 dBm/Hz.
 Procedure: Same as occupied bandwidth.
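The density readout in Figure-6 is consistent with spreading the measured channel power evenly over the integration bandwidth; a sketch of that check, using the figure's values:

```python
import math

def power_density_dbm_hz(channel_power_dbm, int_bw_hz):
    """Mean power spectral density: channel power minus 10*log10(integration BW)."""
    return channel_power_dbm - 10.0 * math.log10(int_bw_hz)

# Figure-6, "without antenna": -46.55 dBm over a 7 MHz integration bandwidth.
print(f"{power_density_dbm_hz(-46.55, 7e6):.2f} dBm/Hz")
```

The result (about -115.0 dBm/Hz) matches the -115.04 dBm/Hz readout to within display rounding.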
FIELD STRENGTH MEASUREMENT
Vijay Kamle, Engg-Asstt.
RSTI (T)-BBSR

What is field strength?

Field strength is the intensity of the received electromagnetic field which
excites a receiving antenna and thereby induces a voltage at a specific
frequency, providing an input signal to a radio receiver or meter.
FOR MEASURING SERVICE AREA

Figure -1, ANRITSU- MEASURING RECEIVER ML 524 B

The ML524B has a full range of features and functions plus demodulation
functions for various signals. Its compact, lightweight construction makes it
suitable for a variety of measurement applications. Use of the GPIB interface
option allows easy configuration of an automatic test system controlled by a
personal computer.
Features
• Very compact and lightweight.
• High frequency stability (a synthesizer local oscillator is used; its
reference oscillator has a high frequency stability of ±1 × 10⁻⁶).
• Wide dynamic range (80 dB without switching).
• Automatic gain calibration.
• Direct readout of field strength.
• High-precision level display (indication in 0.1 dB steps).

Applications
For field strength measurement:
• Investigation to determine service areas.
• Radio wave propagation tests.
• Measurement of spurious radiation from transmitters.
For other than field strength measurement:
• Radio monitoring.
• Measuring receiver.
• High-sensitivity signal demodulation.

Figure-2, Field strength measurement


Procedure

1. Since the antenna is an MP534B, set the address switches S1 and S2 on the
rear panel as shown in Fig. 3-15 (0, 0).
2. Connect the antenna connection cable to the RF INPUT and set the POWER
switch to ON.
3. Press BATT CHECK and check the supply voltage. It is normal if the pointer
of the level meter is within the BATT check scale. At this time, the lamp is
also activated to illuminate the level display and frequency display.
4. Press the [UNITS] key so that the ► mark (dBµV/m) at the lower right
corner of the level display lights.
5. Set the AM/FM monitor switch to AM or FM according to the signal to be
received.
6. Set the BW (kHz) passband width switch to 15, 120 or 8. (Type C is 8.)
7. Set the receiving frequency using the numeric keys [0] to [9] and the
[MHz] key. The set frequency is displayed on the frequency display.
8. Press the [CAL] key. Level calibration is performed automatically. After
the end of calibration, the instrument enters the measurement state and the
field strength is displayed on the level display.
9. The conversion to field strength is not carried out on the level meter;
the input voltage from the antenna is displayed as it is.

Caution: If the RF ATT indicator at the upper left corner of the level display
blinks during step 8, it means that the input signal level exceeds the maximum
measurable level without the RF ATT. Press the [RF ATT] key in this case. The
RF voltage applied to the RF INPUT is converted to field strength and displayed
on the level display. When the RF ATT is used, its attenuation is not added to
the level meter indication, so while the RF ATT indicator is lit continuously
the input level is the level meter indication plus 20 dB.

Figure-3, MP 534 A/B Dipole Antenna


Field strength Conversion Co-efficient.
RSTI(T) Bhubaneswar Conducts the Following
Courses for Professionals

 Broadcasting Technology Course for Media Professionals.
 Summer Vacation Course for Degree & Diploma Engineering Students.
 OB (Outside Broadcast), DSNG and Digital Earth-Station Course for Private
Media Professionals.
 Modern Trends in Broadcasting Technology.
