
CHAPTER 1

INTRODUCTION
PROJECT BACKGROUND
CCTV is the acronym for closed-circuit television. CCTV surveillance footage is one of
the most important sources of evidence when dealing with wrongdoing. A video surveillance
system covering a large office building or a busy airport can employ hundreds or even
thousands of cameras. To avoid communication bottlenecks, the acquired video is
often compressed by a local processor within the camera, or at a nearby video server.
The compressed video is then transmitted to a central facility for storage and display.
With current technology, monitoring can be carried out with a personal computer (PC)
and either a wired or a wireless internet connection. This allows the user to monitor the
video from anywhere, and random-access video playback can be provided. This flexibility
gives the user a greater ability to monitor and secure the places they care about, and the
image processing techniques applied in the system can further increase the safety of the
user's property.
This project introduces the main components that make up CCTV systems
of varying complexity. CCTV (closed-circuit television) is a TV system in which
signals are not publicly distributed but are monitored, primarily for surveillance and
security purposes. CCTV relies on strategic placement of cameras and private
observation of the camera's input on monitors. The system is called "closed-circuit"
because the cameras, monitors and/or video recorders communicate across a
proprietary coaxial cable run or wireless communication link. Access to data
transmissions is limited by design. Older CCTV systems used small, low-resolution
black and white monitors with no interactive capabilities. Modern CCTV displays can
be high-resolution color, providing the CCTV administrator with the ability to zoom
in on an image or track something (or someone).
Talk CCTV allows the administrator to speak to people within range of the camera's
associated speakers. CCTV is commonly used for a variety of purposes, including:

Hotels

Airports

Shopping Malls

Roads and Highways

Jewellers' Shops

Banks

Money Exchanges

Residential Apartments

Maintaining perimeter security

Monitoring traffic

Obtaining a visual record of human activity

The Applications for CCTV


Probably the most widely known use of CCTV is in security systems and such
applications as retail shops, banks, government establishments, etc. The true scope for
applications is almost unlimited. Some examples are listed below.

Monitoring traffic on a bridge.

Recording the inside of a baking oven to find the cause of problems.

A temporary system to carry out a traffic survey in a town centre.

Time lapse recording for the animation of plasticine puppets.

Used by the stage manager of a show to see obscured parts of a set.

The well-publicised use at football stadiums.

Hidden in buses to control vandalism.

Recording the birth of a gorilla at a zoo.

Making a wildlife program using a large model helicopter.

Reproducing the infrared vision of a goldfish!

Aerial photography from a hot air balloon.

Production control in a factory.

The list is almost endless and only limited by the imagination.



The Camera
The starting point for any CCTV system must be the camera. The camera creates the
picture that will be transmitted to the control position. Apart from special designs,
CCTV cameras are not fitted with a lens. The lens must be provided separately and
screwed onto the front of the camera. There is a standard screw thread for CCTV
cameras, although there are different types of lens mounts.

Diagram 1 Camera and Lens


Not all lenses have focus and iris adjustment. Most have iris adjustment. Some very
wide angle lenses do not have a focus ring. The 'BNC' plug is for connecting the
coaxial video cable. Line powered cameras do not have the mains cable. Power is
provided via the coaxial cable.
The Monitor
The picture created by the camera needs to be reproduced at the control position. A
CCTV monitor is virtually the same as a television receiver except that it does not
have the tuning circuits.

Diagram 2 CCTV Monitor


Simple CCTV Systems


The simplest system is a camera connected directly to a monitor by a coaxial cable
with the power for the camera being provided from the monitor. This is known as a
line powered camera. Diagram 3 shows such a system. Probably the earliest well-known
version of this was the Pye Observation System that popularised the concept
of CCTV, mainly in retail establishments. It was an affordable, do-it-yourself,
self-contained system.

Diagram 3 A Basic Line Powered CCTV System


The next development was to incorporate the outputs from four cameras into the
monitor. These could be set to sequence automatically through the cameras or any
camera could be held selectively. Diagram 4 shows a typical arrangement of such a
system. There was even a microphone built into the camera to carry sound and a
speaker in the monitor.
The speaker, of course, only put out the sound of the selected camera. There were
however a few disadvantages with the system, although this is not to disparage it. The
microphone, being in the camera, tended to pick up sound close to it and not at the
area at which it was aimed. There was a noticeable, and sometimes annoying, pause
between pictures when switching. This was because the camera was powered down
when not selected and it took time for the tube to heat up again. The system was,
though, cheap to buy and simple to install. It came complete in a box with camera,
16mm lens, bracket, switching monitor and 12 metres of coaxial cable with fitted
plugs. An outlet socket for a video recorder was provided, although reviewing could
be a little tedious when the cameras had been set to sequence. There are now many
systems of line powered cameras on the market that are more sophisticated than this
basic system. Most of the drawbacks mentioned have been overcome. Cameras had
been around for a long time of course, before this development. The example is given
to show the simplest, practical application. The use of some line powered cameras can
impose limitations on system design. They do though, offer the advantage of ease of
installation.

Diagram 4 A Four-Camera Line Powered CCTV System

Mains Powered CCTV Systems


The basic CCTV installation is shown in diagram 5 where the camera is mains
powered as is the monitor. A coaxial cable carries the video signal from the camera to
the monitor. Although simple to install it should be born in mind that the installation
must comply with the relevant regulations such as the Institute of Electrical Engineers
latest edition. (Now incorporated into British Standard BS7671). Failure to do so
could be dangerous and create problems with the validity of insurance. This
arrangement allows for a great deal more flexibility in designing complex systems.
When more than one camera is required, then a video switcher must be included as
shown in diagram 6. Using this switcher any camera may be selected to be held on the
screen or it can be set to sequence in turn through all the cameras. Usually the time
that each camera is shown may be adjusted by a control knob or by a screwdriver.


Diagram 5 A Basic Mains Powered CCTV System

Diagram 6 A Four-Camera System With Video Switcher

Systems with Video Recording


The next development of a basic system is to add a video recorder; the arrangement
would be as shown in diagram 7.


Diagram 7 A Multi-Camera System With Video Recorder


With this arrangement the pictures shown during playback will depend on the
way in which the switcher was set up when recording. That is, if it was set to
sequence then the same views will be displayed on the monitor. There is no control
over what can be displayed.

Movable Cameras
So far all the cameras shown have been fixed with fixed focal length lenses. In many
applications the area to be covered would need many fixed cameras. The solution to
this is to use cameras fixed to a movable platform. This platform can then be
controlled from a remote location. The platform may simply rotate in a horizontal
plane and is generally known as a scanner. Alternatively, the platform may be
controllable in both horizontal and vertical planes and is generally known as a pan-and-tilt
unit. A basic system is illustrated in diagram 8. This chapter does not deal with how
cameras are controlled or wired; it is just showing the facilities that may be
incorporated into a CCTV system. Therefore the diagrams that follow are simply
descriptive block diagrams and not connection drawings.


Diagram 8 Basic Movable Camera System


Cameras may be used indoors or outdoors. When used outdoors they will always
require a protective housing. For indoor use the environment or aesthetic constraints
will dictate whether a housing is needed. Systems may contain a combination of both
fixed and movable cameras.

Diagram 9 Multiple Camera System


Other Considerations
This has been an introduction to some of the fundamentals of CCTV. Recent
developments have made some very sophisticated systems possible. These include
concepts such as multiple recording of many cameras; almost real time pictures over
telephone lines; true real time colour pictures over the ISDN telephone lines;
switching of hundreds, even thousands, of cameras from many separate control
positions to dozens of monitors; reliable detection of movement by electronic
evaluation of the video signal; immediate full colour prints in seconds from a camera
or recording; the replacement of manual controls by simply touching a screen.


1. FUNDAMENTALS OF VIDEO
Video signals are the signals used to send closed circuit television pictures from one
place to another. Television (TV) is literally, tele-vision, a means of viewing one place
from somewhere else. The word video comes from the Latin verb Videre, to see. A
television picture is made up from a number of horizontal lines on the television
screen, which are laid down, or scanned, from the top to the bottom of the television
screen. There are now only two standards for TV pictures in general use, 525 lines in
the USA (EIA) and Japan and 625 lines elsewhere (CCIR). The descriptions that
follow are based on the 625-line system. The number of lines describes how each still
picture is created, but a television picture is made up from a number of still pictures
displayed every second. There is a characteristic of the human eye known as
persistence of vision. The eye retains an impression of an image for a fraction of a
second after it has disappeared. If a series of still images is presented at a rate of about
14 per second an impression of continuous movement will be perceived. This,
however, would give rise to a very distracting flicker. If the rate were increased to 24
images per second, the flicker would be almost unnoticeable. Increasing this to 50
images per second would eliminate noticeable flicker.
To transmit 50 complete images per second would be needlessly complex and
expensive to produce. The solution is to adopt what is known as interlaced scanning.
Instead of scanning the full 625 lines 50 times a second, the scanning speed is
effectively doubled and so is the vertical spacing of the lines. Therefore, one scan
produces 312 1/2 lines from the top to the bottom of the picture. This is known as
one field. The next scan is arranged to start at a precise position exactly between the
lines of the first scan, so that the lines of the second field interlace, like fingers,
between the lines of the first field. In this way, a complete frame of video is created
made up from two fields.
On a TV screen, the phosphor on the screen continues to glow from the first scan
while the second scan is being displayed. In this way, although only 25 complete
pictures (frames) are presented per second the screen is scanned 50 times (fields) per
second. The result is to achieve a flicker rate of 50 Hz (cycles per second) while only
using a bandwidth for 25 frames per second. Some broadcast televisions now use a
technique called 100Hz technology to further reduce the flicker on the TV screen.
However, this technique is not generally used in CCTV monitors due to the extra cost
involved.
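
The arithmetic behind these figures is easy to check. The short sketch below, written in
Python purely as an illustration for this report (the variable names are our own),
reproduces the timing of the 625-line interlaced system from the numbers quoted above:

    # Timing of the 625-line (CCIR) interlaced system, from the figures above.
    LINES_PER_FRAME = 625
    FRAMES_PER_SECOND = 25        # complete pictures per second
    FIELDS_PER_FRAME = 2          # two interlaced fields make up one frame

    fields_per_second = FRAMES_PER_SECOND * FIELDS_PER_FRAME   # 50, the flicker rate
    lines_per_field = LINES_PER_FRAME / FIELDS_PER_FRAME       # 312.5 lines per field
    frame_period_ms = 1000 / FRAMES_PER_SECOND                 # 40 ms per frame
    field_period_ms = 1000 / fields_per_second                 # 20 ms per field

    print(fields_per_second, lines_per_field, frame_period_ms, field_period_ms)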

Diagram 2.1 Interlaced Fields


The relationship between the length of the horizontal lines and the height of the
picture is always the same and is known as the aspect ratio. For both the 625-line and
525-line systems this ratio is 4:3; that is, the picture is four units wide for every
three units high.

Monochrome Video Signal Components


The signal used to carry the scanning pictures from one place to another is called the
video signal. A voltage is generated proportional to the brightness of the image at any
point on a horizontal line. For the brightest parts, corresponding to a white area, a
level of one volt is produced; this is the white level. For the darkest parts,
corresponding to a black image, a voltage of approximately 0.3 volts is produced; this
is the black level. Between these levels, the camera will produce a voltage
proportional to the shade of grey of the image.
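
On this scale the voltage for any intermediate shade of grey can be estimated by simple
interpolation between the black and white levels. The following short Python function is
only an illustration based on the 0.3 volt and 1 volt figures above; real cameras also
apply gamma correction and other processing:

    def grey_to_volts(brightness):
        """Map a brightness value (0.0 = black, 1.0 = white) to an approximate
        video voltage, using the levels quoted above: black ~0.3 V, white 1.0 V."""
        return 0.3 + 0.7 * brightness

    print(grey_to_volts(0.0))   # 0.3  (black level)
    print(grey_to_volts(0.5))   # 0.65 (mid grey)
    print(grey_to_volts(1.0))   # 1.0  (white level)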
However, the brightness signal is not the only part of the video signal normally
produced by a camera. Some method is required of synchronising the monitor on
which the camera picture is being displayed to the field and line scanning process.
This is to enable it to re-create the picture that the camera is viewing. The method
used to achieve this is to add pulses for the start of each field and the start of each
line. The synchronising, or sync, pulses for the start of each field are called Vertical
Sync Pulses. These vertical sync pulses reduce the voltage from the black level down
to zero volts and take up a time equivalent to 25 horizontal lines, i.e. 1.6
milliseconds. The sync pulses for the start of each line are called Horizontal Sync
Pulses. The horizontal sync pulses also run from the black level down to zero volts
and are 4.7 microseconds long.
The type of video signal that contains both video and synchronising information is
known as composite video.


Diagram 2.2 Composite Video Signal


The relationship in level between the video signal and sync pulses is normally
expressed as a ratio: of the 1 volt peak-to-peak composite signal, 0.7 volts carries the
picture information (black level to white level) and 0.3 volts carries the sync pulses,
i.e. video : sync = 7 : 3.

The complete horizontal line lasts 64 microseconds. There is a short period between
the end of the video signal for a line and the leading edge of the next horizontal sync
pulse. This is known as the front porch. There is also a short period between the
trailing edge of the horizontal sync pulse and the start of the video signal of the next
line. This is known as the back porch. Considering the times for the horizontal sync
and the front and back porches, the actual length of the video signal in a horizontal
line is 52 microseconds. In practice only 47 to 50 microseconds are visible due to over-scanning at the monitor.
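
These line timings follow directly from the line count and frame rate. The Python sketch
below illustrates the arithmetic; the front and back porch durations shown are nominal
CCIR values assumed for the example, not figures quoted in this chapter:

    # Line-timing arithmetic for the 625-line, 25 frames-per-second system.
    lines_per_second = 625 * 25                      # 15,625 lines every second
    line_period_us = 1_000_000 / lines_per_second    # 64 microseconds per line

    H_SYNC_US = 4.7         # horizontal sync pulse width (quoted above)
    FRONT_PORCH_US = 1.65   # nominal value, assumed here for illustration
    BACK_PORCH_US = 5.7     # nominal value, assumed here for illustration

    active_line_us = line_period_us - H_SYNC_US - FRONT_PORCH_US - BACK_PORCH_US
    print(line_period_us)   # 64.0
    print(active_line_us)   # roughly 52 microseconds of picture in each line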
There is not just one sync pulse. The nominally 625 line system uses 25 lines for field
blanking, therefore 50 lines in one frame. This leaves 575 lines for picture
information. The 25 lines are used as follows:


2. CAMERAS
Introduction
The principal part of a CCTV system is the camera. There are many types of camera
and many ways in which they are used. In this chapter, the different sorts of cameras
and the fundamentals of their operation will be examined. It will also explain the
terms describing the performance of the cameras. This will enable an understanding of
the data sheets available for the myriad of cameras available on the market. There is
now no standard method for manufacturers to present data defining camera
performance. Therefore, their literature should be studied carefully before making a
selection and comparisons made against a common standard.

Types of Cameras
Internal Cameras
Internal cameras are usually designed for indoor use without the need for
environmental protection. Normally the cameras are simply fitted with a lens to view
the required area and mounted on a wall or ceiling bracket. If the camera is in an area
such as a corridor or other place where the light level doesn't change, then a simple
manual iris lens may be used. The light level may change because there are windows
or skylights in the area being viewed. Alternatively, if twenty-four hour operation of
the camera is needed then an automatic iris lens or another means of electronic
sensitivity control must be used. (See electronic shutter cameras.). Frequently the
styling of an internal camera is important because an architect or similar person will
want the camera to blend into the surrounding decor. In those cases, the camera may
be mounted inside some kind of housing. There are many housings of different styles
available, from simple cases through to domes, wedges and other types. Internal
housings are also used for other reasons. It may be important that the camera is not
seen at all, in which event a covert housing is used to hide the camera or disguise it as
something else. Housings may also be used to give a measure of protection in certain
situations. There are many types of enclosures that can be used to protect the camera
from vandalism, dust, or other contaminants.

External Cameras
External cameras are usually designed for use in outdoor situations. They are nearly
always housed in some form of weatherproof housing, an exception being where the
camera case itself is water-resistant. The external camera housing normally contains a
heater and thermostat to prevent the glass window at the front from misting at low
temperatures. External cameras always need some form of electronic sensitivity
control. This is because, over the course of the day and night, the light level may well
change by a factor of over a million times. At the time this book went to press the
most effective way of giving such electronic sensitivity control is an automatic iris
lens fitted with a neutral density spot filter. Chapters 4 and 14 provide more detailed
information on lenses and lighting.

Electronic Shutter Cameras


There are an increasing number of cameras being introduced with electronic
shutters; electronic devices that are controlled by the amount of light falling on the
imaging device. In effect, it is the electronic equivalent of the variable speed
mechanical shutter fitted to early cine cameras. In these, the amount of light was
measured by a photoelectric cell, an increase in light causing the shutter to revolve
faster and vice versa. The same problems apply to both devices. At very high light
levels, there is a limit to the speed at which the shutter can effectively operate without
the picture flaring. At very low light levels, the exposure time is so long that moving
images become blurred. Some manufacturers have claimed that these cameras
eliminate the need for an automatic iris lens. This is doubtful in all conditions. They
are ideal for indoor conditions where there is a limited range of light levels. As
always, the manufacturer's specification should be consulted carefully to check the
light range covered. Another problem that should be appreciated is that because the
iris is invariably set at the maximum aperture the depth of field is greatly reduced. See
automatic light control and electronic shutter later in this chapter.


Miniature Cameras
Since CCD cameras (see later in this chapter) have been available, the size of cameras
has reduced considerably. These miniature cameras are available in a number of styles
in two main groups, either where the camera is a complete unit or where the image
sensor is separated from the camera electronics. Complete cameras are available at the
present time with dimensions similar to the size, say, of a pack of cigarettes. If even
smaller sizes are required, the cameras with separate sensor heads have sensor blocks
of only 25mm cubed. One restriction to the minimum size of camera is due to the
necessity of fitting a lens and mounting the camera. The ultimate is a camera of
current design that is about the size of a thumbnail, including all the electronics.

Line Powered Cameras


Normally a CCTV camera has to have some kind of power source, either wired from a
central point or from a local mains spur. Obviously there is a cost involved in
providing the necessary cabling or supply points for such cameras. Some camera
manufacturers have addressed this issue by making cameras for which the power is
sent down the same coaxial cable used to bring the video signal back
from the camera. CCTV systems using line-powered cameras, then, cost less to install
in terms of supply cables or mains spurs. There are, however, two disadvantages.
First, some cameras need a specialised power supply unit to feed the camera and
separate the video for the monitor. Furthermore, with long cable runs it is not possible
to amplify the video signal from the camera because the power cannot travel through
the video amplifier. This is also a problem if there is ground loop interference on the
camera as it is not possible to use a video isolation transformer with line powered
cameras.

Board Mounted Cameras


Board mounted cameras are normally small CCD cameras mounted on the printed
circuit board of another system. They are used to give a picture as part of the function
of the system. The best example of board mounted cameras is those used in video
entry phone systems. In these systems, a complete CCD camera with a lens is
mounted on the PCB of the door entry unit. The board-mounted camera gives pictures
to residents, on small dedicated monitor units, of the person operating the bell push.
Types of Image Sensors
Tubed Cameras
The first CCTV cameras to be used were based around special vacuum tubes with a
light sensitive coating on one end. Light striking this coating caused electric current to
flow down the tube, proportional to the amount of light falling at each point on the
coating. The circuits of the camera then converted the current to the video signal.
This was a good initial design and gave cameras that had good sensitivity and
resolution. However the cameras were bulky and the tubes had a limited life span,
requiring regular, expensive tube changes. CCD cameras, when introduced, were
smaller, lighter and required practically no maintenance. This has led to their
widespread replacement of tubed cameras in CCTV systems, where CCD cameras are
now used in practically all new installations. For this reason, no further discussion of
tubed cameras will be made in this report.

CCD Cameras
CCD is an abbreviation of Charge Coupled Device. This is the name given to a
group of optical detector integrated circuits made from semiconductors (see diagram
3.1). A lens focuses light onto the surface of the CCD image sensor. The areas of light
and dark are sensed by individual photo-diodes, which build up an electrical charge
proportional to the light. That is to say that the brighter the light on an individual
photo-diode the bigger the charge developed. These photo-diodes are arranged in a
matrix of rows and columns and are given the name picture cells or Pixels. The
charge is removed from each pixel by rows of CCD cells. These CCD rows act like
ladders for charge, enabling the charge on each pixel, and consequently the light level
on it, to be read off step by step by the processing electronics.
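
A very rough way to picture this readout is as values being shifted out one row at a
time, and then one pixel at a time within each row. The Python sketch below is only a
conceptual illustration; the tiny array and its charge values are invented for the
example and bear no relation to a real sensor:

    # Conceptual CCD-style readout: shift charges out row by row, pixel by pixel.
    pixels = [
        [10, 200,  35],   # invented charge values for a 3 x 3 "sensor"
        [90,  60, 120],
        [15, 240,  80],
    ]

    video_samples = []
    for row in pixels:          # vertical transfer: one row into the readout register
        for charge in row:      # horizontal transfer: read the register pixel by pixel
            video_samples.append(charge)

    print(video_samples)        # the stream of brightness values forming the picture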

When the first CCD cameras were developed, it was important that they could replace
existing tube cameras without having to change lens sizes. Therefore, the first CCD
cameras were created in 2/3 format. As CCD sensor technology has improved, the
format of CCD cameras has decreased to 1/2 inch, 1/3 inch, and most recently to 1/4
inch and 1/8th inch to make cameras smaller and cheaper. The associated lenses are
also much more compact, but not necessarily cheaper due to the much higher
accuracy required to grind a smaller lens. The dimensions of the imaging devices are
shown in Chapter 4.

Diagram 3.1 CCD Imaging Device


An amplifier is needed to boost the signal from the CCD sensor electronics up to the
level where it can be used on a monitor. A synchronising generator is also used in the
CCD camera to generate the signals that read the light level charge off the CCD and
the synchronisation pulses used by the video monitor to re-create the image. The
mixer section combines the video and synchronisation signals to produce the
composite video signal used by the monitor.


Diagram 3.2 Monochrome CCD Camera Block Diagram


There are many advantages of CCD cameras that have led to their widespread
replacement of tubed cameras. First, CCD cameras use less power and need no high
voltages like the tube. As mentioned in the section on miniature cameras, CCD
cameras can be very much smaller than tubed cameras. The picture linearity is better
with CCD cameras as tubed cameras used a magnetic field to scan the image sensor. It
is extremely difficult to make a magnetic field that is completely even over a given
area. This meant that the pictures from tubed cameras were sometimes distorted by
the magnetic field, bulging out at the edges (barrelling) or bulging in (pin-cushioning).
CCD cameras do not use magnetic fields and consequently do not have this geometric
distortion. CCD cameras are also a good deal more rugged than tube cameras.
Viewing the sun or another bright point could easily damage the surface of the tube
and the tubes regularly needed replacement as a routine maintenance task. CCD
cameras do not have this problem and are not damaged by high light intensities, nor
do images become burned into the surface over long periods. This, and the ability of
CCD cameras to survive vibration and mechanical shock, gives very much reduced
maintenance cost for CCD cameras.


Colour CCD Cameras


Colour CCD cameras are basically the same as monochrome cameras. However, there
are additional components that have important effects on the performance of the
camera.

Diagram 3.3 Colour CCD Camera Block Diagram


Light passes through the lens and through a colour correction filter on to the CCD.
The CCD is sensitive to infrared light, which is present in normal daylight. This
infrared light produces false signals from the CCD that affect the purity of the
colours reproduced by the camera. The colour correction filter removes the infrared
light before it hits the CCD and ensures the colour purity of the camera. However, it
also means that infrared illuminators cannot be used with normal colour cameras as
the colour correction filter removes all the light they produce. The actual CCD image
sensor comprises an array of pixels like a monochrome camera. However, each
pixel is subdivided into three smaller light-sensitive areas that are constructed to be
sensitive to red, green and blue light respectively. Consequently the pixels are larger
in size than for monochrome CCDs and the number of pixels which can be fitted on to
a colour CCD of a given size is less than a monochrome CCD of equal dimension.
This is why, generally, monochrome cameras still have higher resolution than
colour cameras. The colour correction filter and colour sensitivity of the pixels also
tend to make colour cameras less sensitive to light than monochrome cameras.
Typically, colour cameras have sensitivities between 1 lux and 2.5 lux whereas
monochrome cameras have sensitivities between 0.01 lux and 0.1 lux. The separate
brightness signals for red, green and blue are amplified separately and then used by
signal processing circuits to produce the luminance (Y) signal (by combination as
described in chapter 2) and the chrominance (C) signal (by phase and amplitude
modulation of the 4.434MHz colour sub-carrier as described in chapter 2). The Y and
C signals are then combined with the composite sync pulses to produce a composite
colour video signal. Many colour cameras also feature a separate connector where the
Y and C signals are output separately for connection to Super VHS video recorders
and monitors, for improved resolution.

Diagram 3.4 Using Y-C output with S-VHS recorder


Two coaxial cables must be installed between the camera and the S-VHS video
recorder. The Y-C output of the recorder must be connected to the Y-C input of the
monitor. This is normally achieved using a pre-made S-VHS cable with mini-DIN
connectors on each end. However, the benefit of investing in this cabling plus an
S-VHS recorder and high-resolution colour monitor (400 TVL at centre) will be
noticeably better live and playback pictures in terms of resolution. Resolution of
typically 400 TVL will be possible when viewing live action pictures (compared with
about 350 TVL using the composite video output of the camera). Resolution of
typically 400 TVL will be possible when viewing pictures recorded on the S-VHS
video recorder (compared with about 240 TVL from a standard VHS
recorder). The downside is the cost. An S-VHS system like this may cost twice as
much as a standard VHS system using composite video.

Advantages of CCD Cameras

No geometric distortion.

No coils, magnets, or glass tube.

Not prone to ghosting or image burn.

More compact and resistant to vibration.

Not affected by electromagnetic interference.

Initially CCD cameras could not provide the same degree of resolution as
tubed cameras. The dynamic range was smaller and produced fewer shades of grey.
However, improvements in CCD sensor design have meant that the current generation
of CCD cameras produces excellent images of high resolution and accurate colour
reproduction.

Digital Signal Processing (DSP) CCD Cameras


In conventional CCD cameras the functions of amplification, signal processing and
mixing are carried out by analogue circuits, which work on changing the voltages of
the signals by various means. Adjustments to picture quality are made by small
adjustable resistors which are set up to give the best overall performance across a
range of camera operating conditions (light levels etc.) This approach is very cost
effective and gives good quality pictures in most lighting conditions. However, these
adjustments are, at best, a compromise and the effects of tolerances in the values of
the electronic components and changes over the lifetime of the camera can cause the
quality of pictures obtained from the camera to vary greatly. In DSP cameras digital
circuits, as shown in figure 3.5, carry out the signal processing and mixing.
The signals from the CCD are connected to an analogue to digital converter (ADC).
This converts the brightness level from each point into a number. In this way, the
entire picture captured by the CCD at any moment is represented by a group of
numbers. These numbers are processed at high speed by the digital signal processor,
which does mathematics on the numbers in order to produce the video signal at the
output of the camera. The digital signal processor gives the other name used for
digital cameras, DSP. The composite video signal or Y-C video signal is produced by
a digital to analogue converter (DAC) which takes the finished information from the
digital signal processor and produces the composite video described in chapter 2.
Most DSP cameras still produce these analogue composite video and Y-C signals as
this is currently the most popular format required by the other equipment in the video
system: monitors, switchers, multiplexers, VCRs etc. DSP cameras do have the
capability to produce the video signal in a digital form and it is likely that this will
become popular when a worldwide standard is agreed for sending video pictures
digitally in CCTV systems.

Diagram 3.5 Digital Colour CCD Camera Block Diagram


The settings of the camera are set by a microprocessor controller, which controls the DSP
circuits. This is a small computer built into the camera, which controls the
mathematics used by the DSP circuits to build the video signal. The controls of the
camera are usually a series of push buttons on the camera, which are scanned by the
controller. With these buttons the user can select and adjust the picture quality and
performance of the camera using a series of menus overlaid on to the video picture by
the controller. Obviously, the extra circuitry required by a DSP camera makes them
more expensive than a conventional analogue camera. However, there are a number of
benefits for this extra expenditure in terms of features that are not available from
conventional analogue cameras. These include:

Stability - the adjustments to the camera are made by changing number values
on an on-screen menu and not by small screwdriver adjustments.
Consequently the settings of the camera are easily repeatable and tend not to
change over time.

Menu programming - provides an easy and rapid way to adjust the camera
for the best picture during installation.

Digital zoom - The DSP circuits have a complete numerical model of each
picture and can manipulate these numbers. By performing certain calculations,
the DSP circuits can selectively enlarge a section of the picture, producing a
zoomed-in image. This is a useful feature but it should be borne in mind that
the number of pixels in the CCD is constant and so the greater the amount of
digital zoom used, the poorer the apparent resolution of the picture will be (a
short illustrative sketch of this idea follows this list).

Multi-zone backlight compensation - Unlike analogue cameras, which
compensate for bright light behind an object by sampling the video voltage
across the whole picture, DSP cameras have a number of separate zones
which can be positioned to cover bright light sources. Consequently, this
provides better overall picture quality in these situations.

Automatic quality adjustment - DSP cameras can hold a model of how a
good quality video signal should appear. The DSP circuits can then compare
this with the picture being produced at any moment, and then actively adjust
the camera to provide the optimum picture quality. This can give very good
picture quality over a very wide range of lighting conditions.

Remote set-up and control - like any computer, the microprocessor controller
can communicate with other computers over a digital link. Consequently, DSP
cameras can be used in systems where they are set up and controlled by a
matrix switcher or a PC, even over great distances. This also simplifies camera
replacement in the field as when a camera becomes faulty the replacement
fitted can have identical settings downloaded very quickly to give identical
performance to the original camera.
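
To make the digital zoom point above concrete, the sketch below crops the centre of a
frame and enlarges it back to full size by simple pixel repetition. It is a minimal
Python/NumPy illustration only; the function name and frame size are invented, and real
DSP cameras use far more sophisticated interpolation:

    import numpy as np

    def digital_zoom(frame, factor):
        """Crop the centre 1/factor of the frame and blow it back up to the
        original size by repeating pixels. The sensor's pixel count is fixed,
        so apparent resolution falls as the zoom factor rises."""
        h, w = frame.shape
        ch, cw = h // factor, w // factor              # size of the cropped window
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = frame[top:top + ch, left:left + cw]
        return np.repeat(np.repeat(crop, factor, axis=0), factor, axis=1)

    frame = np.arange(64).reshape(8, 8)                # stand-in for one video field
    print(digital_zoom(frame, 2).shape)                # (8, 8): same size, half the detail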

Twin Colour/Monochrome Cameras


Twin colour/monochrome cameras are designed to meet a particular requirement in
CCTV systems. Sometimes, it is required to have outdoor cameras which produce
colour images in the day but which can provide good quality pictures in low light
levels at night, perhaps even using infrared illuminators. In the past, the only way to
meet this requirement was to use two separate cameras, one monochrome and one colour,
that were switched over automatically by some type of photocell or control system.
Improvements in CCD technology and the introduction of DSP cameras have led to
the availability of colour cameras which produce monochrome pictures at night and
which have good sensitivity to infrared illumination. The cameras work as normal
colour cameras during the day. The night-time mode is controlled either by the camera
itself (by sampling the AGC voltage, see AGC below) or remotely by a control input.
In the night-time mode, the colour sub-carrier is switched off and the camera produces
just the monochrome composite video signal. Dual format cameras do have to
overcome the problem of the infrared cut filter. Colour cameras normally have an
infrared cut filter that removes infrared light and ensures accurate colour reproduction
by the camera. However, dual format cameras cannot use the colour correction filter
at night because this would filter out the light produced by infrared illuminators.
Camera manufacturers have solved this problem in two ways. One way is to have
a small motor that moves a colour correction filter in front of the CCD in colour mode
but retracts it in monochrome mode. This has the advantage of ensuring the best
colour quality but has the disadvantage that a complex electro-mechanical assembly is
built in to the camera and this will lower its reliability compared with a camera that
has no moving parts. The other solution is to dispense with the colour correction filter
entirely. The effect of infrared light is then adjusted by the digital signal processing of
the camera. This gives a camera, which is very reliable, but the colour reproduction of
the camera will always be a compromise as the amount of infra red light seen by the
camera constantly changes and the compensation in the digital signal processing is
fixed.
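
The switch-over decision described above can be pictured as a simple threshold test on
the camera's gain. The Python sketch below is a purely hypothetical illustration; the
gain thresholds and the hysteresis band are invented values, not figures from any camera
specification:

    def select_colour_mode(agc_gain_db, currently_colour):
        """Choose colour or monochrome operation from the AGC gain level.
        High gain implies low light. The two thresholds (invented values)
        form a hysteresis band so the camera does not flicker between modes
        as the light hovers around the switching point."""
        TO_MONO_DB = 24.0      # assumed gain at which night mode engages
        TO_COLOUR_DB = 18.0    # assumed gain at which colour mode returns
        if currently_colour and agc_gain_db >= TO_MONO_DB:
            return False       # switch to monochrome, colour sub-carrier off
        if not currently_colour and agc_gain_db <= TO_COLOUR_DB:
            return True        # plenty of light again: back to colour
        return currently_colour

    print(select_colour_mode(30.0, currently_colour=True))    # False -> night mode
    print(select_colour_mode(10.0, currently_colour=False))   # True  -> colour mode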

Digital Cameras
There are already several camcorders on the market that produce a digital output
instead of an analogue video signal. These record onto a miniature DAT (Digital
Audio Tape) in digital form or download straight to codecs. The playback can be either
via a digital to analogue converter in to a conventional monitor, or direct by RGB
input to a computer monitor. The direct input into a computer monitor will provide a
significant improvement in resolution and colour rendering. The recording capability
for CCTV is still limited by the current problems of compression and storage capacity,
but this is advancing rapidly and soon will not be the main problem. Imagine
computer-graphics resolution and quality in a CCTV installation; the day will
come. The majority of advances in CCTV cameras have been as a result of
developments in camera technology and miniaturisation in the vast domestic market.
There is no reason to doubt that the digital camera technology will soon be available
to our industry, although not at the time of publication of this issue. However, it
makes sense to propose some of the advantages of this technology when it becomes
readily available.
Transmission of video along telephone lines or fibre optic cable requires an analogue
to digital converter (ADC) to be incorporated in the transmitter and the reverse digital
to analogue converter (DAC) at the receiving end. Using a direct digital output from
the camera will render the ADC unnecessary, thus saving cost. When equipment is
available that can accept a digital signal then the DAC will not be required providing
further savings. It will no longer be necessary to use coaxial cable with all its problems of
connectors and limited range. Instead, simple twisted pair cables can be used with
greatly improved distances and quality. Multiplexers need to convert the analogue
signal to a digital signal to hold in the frame store; again, this will be unnecessary.
Every time a conversion from one form of signal to another is rendered unnecessary,
there will be an improvement in resolution and picture quality.


3. LENSES
Introduction
The human eye is an incredibly adaptable device that can focus on distant objects and
immediately refocus on something close by. It can look into the distance or at a wide
angle nearby. It can see in bright light or at dusk, adjusting automatically as it does so.
It also has a long depth of field; therefore, scenes over a long distance can be in
focus simultaneously. It sees colour when there is sufficient light, but switches to
monochrome vision when there is not. It is also connected to a brain that has a faster
updating and retentive memory than any computer. Therefore, the eyes can swivel
from side to side and up and down, retaining a clear picture of what was scanned. The
brain accepts all the data and makes an immediate decision to move to a particular
image of interest, select the appropriate angle of view and refocus. The eye has
another clever trick in that it can view a scene of great contrast and adjust only to the
part of it that is of interest.
By contrast, the basic lens of a CCTV camera is an exceptionally crude device. It can
only be focused on a single plane, everything before and after this plane becoming
progressively out of focus. The angle of view is fixed. At any time, it can only view a
specific area that must be predetermined. The iris opening is fixed for a particular
scene and is only responsive to global changes in light levels. Even an automatic iris
lens can only be set for the overall light level, although there are compensations for
different contrasts within a scene. Another problem is that a lens may be set to see into
specific areas of interest when there is much contrast between these and the
surrounding areas. However, as the sun and seasons change so do light areas become
dark and dark areas become light. The important scene can be whited out or too dark
to be of any use.
A controversial but important aspect of designing a successful CCTV system is the
correct selection of the lens. The problem is that the customer may have a totally
different perspective of what a lens can see compared to the reality. This is because
most people perceive what they want to view as they see it through their own eyes.
Topics such as identification of miscreants or number plates must be subjects debated
frequently between installing companies and customers.
The selection of the most appropriate lens for each camera must frequently be a
compromise between the absolute requirements of the user and the practical use of the
system. It is just not possible to see the whole of a large loading bay and read all the
vehicle number plates with one camera. The solution may be more cameras or
viewing just a restricted area of particular interest. A company putting forward the
system proposal should have no hesitation in pointing out the restrictions that may be
incurred according to the combination of lens versus the number of cameras. Better
this than an unhappy customer who is reluctant to pay the invoice.
Although a lens is crude compared to the human eye, it incorporates a high degree of
technology and development. There can be a large variation in the quality between
different makes and this should be considered according to the needs of a particular
installation. The lens is the first interface between the scene to be viewed and the
eventual picture on the monitor. Therefore, the quality of the system will be very
much affected by the choice of lens. For general surveillance of, for instance, a small
retail shop, it is possible to use a lower quality lens with quite acceptable results. As
the demands of the system requirement increase then the use of a premium quality
lens must be considered. The difference in cost between a poor quality and a high
quality lens will be a very small percentage of the total cost of a large industrial
system.


The CCTV Lens


Exposure Control
The exposure in a normal photographic camera can be controlled by a combination of
shutter speed and iris opening. This is not so with a CCTV camera lens. A standard
CCTV camera produces a complete picture at half the mains frequency. This is
every 1/25 second where the mains frequency is 50 Hz (cycles per second) and every
1/30 second where the mains frequency is 60 Hz. Generally the exposure time is fixed
and the only control of the amount of light passing to the imaging device is by
adjusting the size of the iris. This is covered in more detail later in this chapter. Most
camera tubes and imaging devices have some tolerance of the amount of light passed
by the lens to create an acceptable picture. The range of tolerance is generally
inversely proportional to the sensitivity of the camera. The more sensitive cameras
require greater control of the iris aperture.
Types of Lenses
Lens Formats
Early CCTV lenses were designed for the 1" format tube camera and many of these
are still available on the market. The lens screw thread on these cameras is called a
C-mount. This is a particular design of thread size and flange length originally used on
photographic cameras. In recent years lenses have been developed for the 2/3", 1/2"
and now 1/3" format cameras. Consequently, great care must be exercised when
selecting a lens for a particular camera. Just as there are four formats of camera so
there are four formats of lenses and they are not compatible in every combination. A
lens designed for a larger format camera may be used on a smaller format but not the
reverse. In addition, the field of view will not be the same on different size cameras.
There is now a further complication in that there is a range of lenses with what is
called the CS-mount. The difference between the two types of mount is the flange
back length, which is the distance from the back flange of the lens to the face of the
sensor. See diagram 4.1.


The screw thread and shoulder length for each type of mount is identical. This makes
it impossible to see the difference except that the overall size of the CS-mount lens is
generally smaller. A C-mount lens may be used on a CS-mount camera with an
adapter ring but a CS-mount lens cannot be used on a C-mount camera. The main
problem is that either type of lens can be screwed onto both types of camera without
apparent damage. The result is that if the wrong type is used it will be impossible to
focus the camera. Some C-Mount lenses have a projection at the back that could
damage the sensor in a CS-Mount camera.

Diagram 4.1 Types of Lens Mounts


A chart is provided at the end of this chapter showing the relationships between
different lenses and camera combinations and the associated angle of view. At the
time of going to press, most lenses with a focal length of 25mm and above are still
designed for 1" cameras. This means that special care must be taken when using these
long focal length lenses on modern cameras. For instance, a 25mm 1" lens provides the
following approximate angles of view on the different formats. Therefore, there would
be a significant variation in the expected scene content if this fact were overlooked.
FORMAT          ANGLE OF VIEW

1"              29
2/3"            9.5
1/2"            114
1/3"            9.79

Diagram 4.2 Angle of view for different formats



Lens Selection
There are two other main factors that must be considered when selecting the most
appropriate lens for a particular situation: the focal length and the type of iris
control. Within each of these factors, there are other features that will also need to be
considered. Lenses may be obtained with all combinations of focal length and iris
control. The selection will depend on the site and system requirement.
Focal Length
The focal length of a lens determines the field of view at particular distances. This can
either be calculated from the formula given later in this chapter or found from tables
provided by most lens suppliers. Most manufacturers also provide simple-to-use slide
or rotary calculators that compute the lens focal length from the scene size and the
object distance. The longer the focal length the narrower is the angle of view.
Although not strictly correct, lenses with a focal length longer than 25mm are often
called zoom lenses. The focal length of the lens requires careful selection to ensure
that the correct area is in view and that the degree of detail is acceptable. A rule of
thumb is that to see a person on a monitor they should represent at least 10% of the
screen height. To see in this context means to be able to decide that it is a person.
Being able to identify a known person requires them to occupy at least
50% of the screen height, and preferably 60%. An unknown person should occupy at
least 120% of the screen height.
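
The link between scene size, distance and focal length can be estimated with the usual
similar-triangles approximation, which is essentially the formula referred to above. The
Python sketch below is a hedged illustration; the 4.8 mm figure is a nominal horizontal
width assumed for a 1/3" sensor and the scene dimensions are invented for the example:

    def required_focal_length_mm(sensor_width_mm, scene_width_m, distance_m):
        """Approximate focal length needed to fit a scene of the given width
        at the given distance (simple similar-triangles estimate)."""
        return sensor_width_mm * distance_m / scene_width_m

    # Example: a nominal 1/3" sensor (about 4.8 mm wide) viewing a scene
    # 10 m wide from 30 m away needs a lens of roughly 14-15 mm.
    print(required_focal_length_mm(4.8, scene_width_m=10.0, distance_m=30.0))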
Fixed Focal Length
This type of lens is sometimes called a monofocal lens. As the name implies, it is
specified when the precise field of view is fixed and will not need to be varied when
using the system. The angle of view can be obtained from the suppliers specification
or charts provided. They are generally available in focal lengths from 3.7mm to
75mm. Longer focal lengths may be produced by adding a 2x adapter between the
lens and the camera. It should be noted that this would increase the f-number by a
factor of two, reducing the light reaching the camera to roughly a quarter, since the
light passed varies inversely with the square of the f-number. If focal lengths
longer than these are required, it will be necessary to use a zoom lens and set it
accordingly.

Except for very wide-angle lenses, other lenses have a ring for adjusting the focus. In
addition, cameras include a focusing adjustment that moves the imaging device
mechanically relative to the lens position. This is to allow for minor variations in the
back focal length of lenses and manufacturing tolerances in assembling the device in
the camera. Correct focusing requires setting of both these adjustments. The
procedure is to decide the plane of the scene on which the best focus is required and
then set the lens focusing ring to the mid position. Then set the camera mechanical
adjustment for maximum clarity. Final fine focusing can be carried out using the lens
ring.
The mechanical focusing on cameras is often called the back focus, originally because
a screw at the back of the camera moved the tube on a rack mechanism. Modern
cameras now have many forms of mechanical adjustment. Some have screws on the
side or the top, some still at the back. There are cameras that have a combined
C/CS-mount on the front that also has the mechanical adjustment and can accept either type
of lens format. The longer the focal length of the lens the more critical is the focusing.
This is a function of depth of field described later in this chapter.
Variable Focal Length
This is a design of lens that has a limited range of manual focal length adjustment. It
is strictly not a zoom lens because it has quite a short focal length. They are usually
used in internal situations where a more precise adjustment of the scene in view is
required which may fall between two standard lenses. They are also useful where for a
small extra cost one lens may be specified for all the cameras in a system. This saves
much installation time and the cost of return visits to change lenses if the views are
not quite right. For companies involved in many small to medium sized internal
installations such as retail shops and offices this can save on stock holding. It makes
the standardisation of systems and costing much easier.
Manual Zoom Lens
A zoom lens is one in which the focal length can be varied manually over a range.
Usually this is by means of a knurled ring on the lens body. It has the connotation of
zooming in and therefore implies a lens with a longer than normal focal length. (Say
more than 25mm.) The zoom ratio is stated as being for instance 6:1, which means
that the longest focal length is six times that of the shortest. The usual way of
describing a zoom lens is by the format size, zoom ratio and the shortest and longest
focal lengths. For example, 2/3", 6:1, 12.5mm to 75mm. Again, great care must be
taken in establishing both the camera and the lens format. The lens just described
would have those focal lengths on a 2/3" camera but an equivalent range of 8mm to
48mm on a 1/2" camera.
Motorised Zoom Lens
Manual zoom lenses are not widely used in CCTV systems because the angle of tilt of
the camera often needs to be changed as the lens is zoomed in and out. The most
common need for a zoom lens is where it is used with a pan-and-tilt unit. The lens zoom ring is
driven by tiny DC motors and operated from a remote controller.
With the development of ever-smaller cameras and longer focal length lenses the
method of mounting the camera/lens combination must be considered. There are
many cases where the lens is considerably larger than the camera and it may be
necessary to mount the lens rigidly with the camera supported by it. In other cases, it
may be necessary to provide rigid supports for both camera and the lens. Always
check the relationship between the camera and lens sizes and weights when selecting
a housing or mounting. Most manufacturers of housings can provide lens supports as
an accessory.

Focussing a Zoom Lens


The most frequent reason for the focus changing when zooming is that the mechanical
focus of the camera has not been set correctly. The following is the procedure for
setting up the focus on a camera fitted with a zoom lens.
The focusing ring should be marked near and far. Set this to far and set the zoom
ring to the widest angle of view. Aim the camera at an object about 40 metres away
and adjust the camera focus for maximum clarity. Next zoom in to an object nearby
and set the lens focus for maximum clarity. It should now be possible to zoom all the
way back without the focus changing. Many motorised zoom lenses will be used in
external conditions with limited light. If this is the case then it is advisable to fit a
neutral density filter in front of the lens to make the iris open fully. A neutral density
filter is one that reduces the amount of light that enters the lens, evenly over the whole
of the visible spectrum. This will create the shortest depth of field and ensure setting
up more accurately for the worst conditions. The depth of field, as explained later,
depends on the aperture opening.
Some controllers can override the automatic iris mechanism, usually to open it to see
into darker areas. This is often the case when a camera is looking out over open
country in bright sunlight and the lens closes because it measures the average light
levels. The scene at ground level can be very dark in these conditions, with little
detail. This is not a desirable feature to include unless absolutely necessary. This is
because the override can be forgotten with resultant poor pictures being recorded if
the system is not fully monitored. The better solution is to tilt the camera down until
there is less proportion of sky in the picture.

Motorised Zoom Lenses with pre-sets


There are many situations where it is required to pan, tilt, and zoom to a
predetermined position within the area being covered. It is possible to obtain
motorised lenses with potentiometers fitted to the zoom and focusing mechanisms.
These cause the lens to zoom and focus automatically to the stored setting by measuring the
voltage across each potentiometer and comparing it with the signals in the control
system. All other functions are as for motorised zoom lenses. Pre-set controls are only
possible with telemetry controlled systems. The specification of the telemetry controls
should be checked to see whether the pre-set positions are set from the central
controller or locally from the telemetry receiver.


Iris Control of Lens


Manual Iris
With this type of lens, the iris opening is set manually by rotating a knurled ring on
the lens body. Typically, it will have a range of settings from the maximum to fully
closed, although the adjustment will be rather coarse. This type of lens is only suitable
for indoor applications where the light levels remain fairly constant. It can also be
used indoors with cameras having electronic shutters, making a significant cost saving.
Care must be exercised in using this camera/lens combination in external applications
because the camera may not have adequate control to cover the total light range. In
addition, manual iris lenses do not usually have a neutral density spot filter to cope
with extremely bright sunlight.
In many indoor situations, the general level of light will vary significantly between
summer and winter due to light from windows, skylights, etc. Therefore, it is often
necessary to adjust the aperture two or three times a year to maintain optimum clarity
of the picture.

Automatic Iris
Due to ongoing development, tubed cameras were becoming more sensitive and their
use was spreading to more outdoor applications. They were very limited in the range
of light that could be coped with. To overcome this problem manual iris lenses were
fitted with motors bolted on to the barrel to drive the iris ring. The motors were
connected by way of an amplifier to the video output of the camera. This was
monitored to adjust the iris ring according to the voltage of the video signal. The
lower the voltage then the more the iris would be opened until the correct video
voltage was achieved, and the reverse when the video voltage increased. The early
amplifiers suffered from the problem of being too sensitive and responding too
quickly to changes in the video signal. This caused hunting of the iris opening
control and resulted in fluctuating contrast of the picture. To overcome this a delay
circuit was introduced in the amplifier but this sometimes caused the reverse problem
of the picture changing too slowly.
Modern automatic iris lenses are now completely self-contained units produced by the
lens manufacturer and containing very sophisticated electronics and microscopic
motors. There are three main types of automatic iris lenses.

Iris Amplifier
This type of lens is sometimes referred to as a servo lens. The most common type
contains an amplifier and is connected to the video signal of the camera. It is driven
by a dc voltage also provided from the camera. It was mentioned in Chapter 3 that the
voltage of the video signal is proportional to the amount of light on the imaging
device. The video level falls in proportion to the light level. The amplifier
continuously monitors this voltage to maintain it at 1 volt peak to peak. As the
voltage changes, so the iris amplifier opens or closes the iris to maintain a constant 1 volt.
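The principle can be illustrated with a short control-loop sketch in Python. It is only an illustration of the feedback idea; the video-level measurement and the iris drive are represented by made-up values and a placeholder function, not by any real camera interface.

TARGET_LEVEL = 1.0   # desired composite video level, volts peak to peak
GAIN = 0.2           # how strongly the iris reacts; too high a value causes 'hunting'

def next_iris_position(video_level, iris_position):
    """Return a new iris opening between 0.0 (closed) and 1.0 (fully open)."""
    error = TARGET_LEVEL - video_level            # positive when the picture is too dark
    new_position = iris_position + GAIN * error   # open when dark, close when bright
    return max(0.0, min(1.0, new_position))       # stay within the mechanical limits

# Example: a dark scene measuring 0.6 V causes the iris to open slightly.
print(round(next_iris_position(0.6, 0.5), 2))     # 0.58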
Most cameras that provide an automatic iris drive include a socket on the rear. There
are three connections, +v, 0v, video. Unfortunately, there is no current standard for
this connector but most cameras are packed with the appropriate plug. This can create
problems if one camera is substituted for another make during maintenance or service.
It can mean that the service engineer has to change the iris plug on site, which is not
an easy job. In recognition of this problem, many cameras are now being produced
with screw terminals on the rear.

Galvanometric Lens
These are also known as a galvometric or galvano lens. This type of automatic iris
lens is driven by a reference voltage produced by an amplifier in the camera. In other
words, the amplifier is within the camera instead of being part of the lens. The lens
contains a driving motor to open and close the lens and a damping coil to prevent
hunting. These lenses have four connections, +ve drive, -ve drive, +ve damping, and
-ve damping. The camera specification should be checked to ensure that it contains
the circuitry for this type of lens. Galvanometric lenses are usually less expensive than
lenses with a built-in amplifier. They are simpler to install but can only be used with a
limited range of cameras. Again, for this type of lens many cameras are being
produced with screw connectors instead of a socket for the lens connection.

Sensor Lens
This lens includes a light sensor similar to that in a photographic camera. This
measures the light levels and adjusts the iris aperture accordingly. It requires a 12-volt
dc supply that may be obtained from any source. This type of lens is not very
common now, having been introduced for use on Vidicon cameras that did not have a
video and 12 volt output. The problem was that the light sensor was pre-set and not
responsive to the video level, so the correct video level was not always maintained. The
vast majority of cameras now provide an automatic lens connection, therefore there
will only be rare cases where this lens will be required.

Lens Parameters
Focal Length
The rays from infinitely distant objects are condensed by the lens at a common point
on the optical axis. The point where the image sensor of the camera is to be placed is
called the focal point. A lens has two focal points, the primary principal point and the
secondary principal point. The distance between the secondary principal point and the
plane of the image sensor is the focal length of the lens.

Diagram 4. 3 Focal Length of Lenses

Angle of View of Lenses


This is the angle that the two lines from the secondary principal point make with the
edges of the image sensor. The focal length of a lens is fixed whatever the size of the
image sensor. The angle of view however varies according the size of the sensor.

Diagram 4. 4 Angle of View


The angle of view is given by the following formula:
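In its usual form, assuming the standard lens geometry, this is:

angle of view = 2 x arctan( sensor dimension / (2 x focal length) )

using the horizontal sensor dimension for the horizontal angle and the vertical dimension for the vertical angle.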

The angle of view for a given focal length lens varies according to the sensor size.
This is shown in diagram 4.5. The corollary of this is that for a given view the
required focal length varies according to the sensor size as shown in diagram 4.6. This
illustrates that for the same field of view, the smaller the format the shorter is the
required focal length.

Diagram 4. 5 Angles of View for Different Sensor Sizes

Diagram 4. 6 Focal Lengths for Different Sensor Sizes

Field Of View
The field of view is determined by the ratio of the sensor size to the focal length,
scaled by the distance to the subject. This is shown in diagram 4.7. The width to
height ratio of the sensor is 4:3, so the horizontal and vertical angles, and therefore the
fields of view, are different and must be considered separately.

Diagram 4. 7 Field Of View

Sensor Sizes
Diagram 4.8 shows the sensor sizes to be used when calculating fields of view and
angles of view.

Diagram 4. 8 Sensor Dimensions


For example, if it were required to view a subject 2.5 M high at a distance of 10 M
using a 2/3-inch camera and lens, the calculation would be as below, using the
relationships given in diagram 4.6.
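Taking the vertical dimension of a nominal 2/3-inch sensor as approximately 6.6mm:

required focal length = (sensor height x distance to subject) / height of subject
                      = (6.6mm x 10 M) / 2.5 M
                      = 26.4mm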

The nearest standard lens in this case would be a 25mm and the actual height of the
subject scene would be 2.64 M. The slightly shorter focal length lens provides a
slightly wider angle of view.
Most lens brochures give the horizontal and vertical angles of view. The relevant
views can be calculated from the formula as follows:
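In its usual form, and using the same geometry as above, the relationship is:

H = 2 x d x tan( angle of view / 2 )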

Where: H is the height of the scene, d is the distance from the camera to the scene.
This would give the vertical height of the scene using the vertical angle of view.
Similarly, the horizontal width of the scene would be calculated from the horizontal
angle of view.

Relationship Between Sensor Size and Lens Size


It can be very confusing to establish the actual field of view that will be obtained from
a combination of sensor size and lens specification. Lenses are specified as designed
for a particular sensor size. A lens designed for one sensor size may be used on a
smaller size but not the reverse. The reason is that the extremities of the scene will be
outside the area of the sensor. Many people in the CCTV industry have grown up with
the 2/3-inch camera as the most popular and are familiar with the fields of view produced.
However, the 1/2-inch and 1/3-inch cameras are now being extensively used and therefore
there are important factors that must be taken into account.

Diagram 4. 9 Effect of Sensor Size on View


Diagram 4.9 shows the effect of using one lens on two different sizes of sensor. The
result of using a lens designed for a larger format on a smaller format sensor is to create
the effect of a longer focal length, that is, a narrower angle of view.

Diagram 4. 10 Using a Correctly Matched Camera and Lens Format


Diagram 4.10 shows the result of using a lens designed for a 1/2-inch format on a 1/2-inch
sensor. This is an important consideration when deciding the most appropriate lens for
a required field of view. The design size of the lens must be related to the size of the
sensor being used. To summarise then:
1. A lens designed for one format may be used on a smaller format camera but
will produce a narrower angle of view.
2. A lens designed for one format may not be used on a larger format camera.
3. Assuming a focal length has been assessed based on a particular format of
camera and lens, and it is then decided to use a smaller format camera, the
same field of view will only be obtained if a shorter focal length lens is used.
4. Always check the angle of view for the particular lens and camera
combination it is intended to use.
5. Charts at the end of this chapter provide guidance on the selection of lenses
and the relationship between different formats of camera and lenses.
Aperture
The size of the aperture is called the f number of the lens, e.g. f1.4, f1.2, etc. This is
a mechanical ratio of the lens components and is specified as:
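In its usual form the definition is:

f-number = focal length / effective aperture diameter

This is consistent with the 16mm, f360 example given below, since 16 divided by 360 is approximately 0.044mm.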

The effective diameter is related to the size of the front lens. Note that this is effective
diameter and not the actual diameter. This is a measure of the amount of light that the
lens will pass to the imaging device. As stated it is a ratio and does not refer to the
quality of the lens. The smaller the number then the larger is the aperture. The figure
given in specifications for lenses is the maximum aperture and this value is often
followed by the minimum aperture. For instance, f1.4 -- f360, this second value being
important if the camera is very sensitive such as an intensified sensor. Intensified
cameras often require a minimum aperture as small as f1500. From the formula above
it may be calculated that with a 16mm lens having the aperture set to f360 the
effective diameter will be only 0.04mm. Even so, this could allow too much light to
the sensor of an intensified camera and damage the tube or flare out the picture.
Having said that the f-number is a ratio, this does not imply that a lens with a lower
number is better than one with a higher number. There are other factors that affect the
light transmission through a lens. However, when comparing the major brands of
lenses it is sufficient to use the f-number unless the application is especially
demanding, where, for instance, image comparison or ultra fine resolution is
necessary.
The efficiency of a lens and the amount of light it can transmit depend on many
factors that lens designers must consider. However, ultimately a lens must be a
commercial proposition and affordable to the CCTV installer and the customer. Two
factors that affect the cost of a lens are the size of the glass elements and the number
of elements. Therefore, it is less expensive to produce a 16mm f1.8 lens than it is to
produce a 16mm f1.2. Consequently, some manufacturers produce the same focal
length lens in two variations of f-number. For indoor conditions with ample light, or
outdoor use in daylight only, the cheaper f1.8 lens would be satisfactory and could
represent a saving in cost. Exercise care in selecting the cheaper lens if the application
is outdoors in low light conditions: the amount of light passed is proportional to
1/(f-number) squared, so the f1.8 lens would require roughly (1.8/1.2)², about two and a
quarter times, as much light as the f1.2 lens.

4. MONITORS
Introduction
Another important and often overlooked part of a CCTV system is the monitor.
Ultimately the picture taken by the camera and the lens is displayed on the monitor.
The monitor's performance and adjustment will have an effect on the picture seen by
the system operator.
In the same way that cameras, being analogue devices, have adjustments that enable
the best picture quality to be obtained, so monitors, also being analogue devices, have
settings and adjustments that enable the best picture to be displayed. If the controls on
the monitor are not correctly set then the money spent on expensive high
performance cameras, lenses and control equipment will be wasted, because the
picture displayed on the monitor will not do justice to the rest of the system.
Consequently, it is vital to understand the principles of the normal monitor controls,
their effect on picture quality and the correct way to set them.
Monitors are available in different screen sizes. The reason for this is that the size of
the monitor depends on the viewing distance. If the incorrect size or position of a
monitor is used then at best the monitor will be awkward and unpleasant to use; at
worst the picture will be too small to differentiate detail or so large that the picture
appears grainy and low quality.
In this chapter the principles of operation of monochrome and colour monitors will be
explained in a simplified way, leading to the principles and effects of their controls.
The correct procedures used to set the controls to obtain the best picture quality will
be described. Finally, the principles of choosing the correct number, size and
positioning of monitors will be discussed so as to get the maximum from this
normally undervalued part of CCTV systems.

The Principles of Monochrome Monitor Operation


Apart from the use of transistors, integrated circuits and other solid state devices in
the circuits of monitors the major part of the monitor, the television or Cathode Ray
Tube (CRT), has remained essentially unchanged since the first TV monitors were
developed.
As shown in Diagram 5.1 the CRT consists of a glass tube with all the air removed.
An electron gun at the back of the CRT (a heated element made of a special material
that 'boils off' electrons) generates a stream of electrons. These are attracted to the
front screen at very high speed by a high voltage of several thousand volts. The inside
of the screen is coated with a special phosphor that glows when struck by the electron
beam; the stronger the beam, the brighter the spot generated.
Scanning coils around the neck of the tube generate a magnetic field. The magnetic
field affects the position of the striking point of the beam on the screen. By changing
the voltage on the scanning coils the striking point of the beam can be scanned across
the screen of the CRT to create a series of lines; when the beam moves back across the
screen, during the retrace, the beam is turned off so that only the line and not the
retrace is visible. By selecting the correct wave shape and frequency, the same 625
line frame and 50 fields per second patterns as produced by the camera can be
recreated. For descriptions of fields, frames and the way that the camera produces
these, see Chapters two and three.

Diagram 5. 1 The Cathode Ray Tube


The video signal is used to control the strength of the beam. The brightness of the
beam at any point along a given line will be proportional to the level of the video
signal. This is consequently proportional to the light intensity at that point on the
image sensor of the camera. In this way the picture captured by the camera can be
recreated on the screen of the monitor and observed by the system operator.

Diagram 5. 2 Basic Monochrome Monitor Block Diagram


In a basic monitor the video signal input enters the monitor and is terminated in a
seventy-five ohm load. This matches the output impedance of the camera and the
coaxial cable (see Chapter three). A sync separator separates the video signal and sync
pulses. The sync pulses are used to synchronise the line oscillator of the monitor to
the line oscillator of the camera being viewed. The line oscillator and field oscillator
respectively control the scanning coils that scan the electron beam into 625 lines.
Field sync pulses control the scanning coils to produce 50 fields. The horizontal and
vertical hold controls adjust the frequency of the line oscillator. Consequently, these
can be used to compensate for differences in the sync pulse frequencies coming from
the camera.
A high voltage generator is used to accelerate the electron beam. The strength of the
beam is controlled by the output of an amplifier. The input of the amplifier is the
video signal. In this way, the level of the video signal controls the brightness at any
point on the screen. The brightness control sets the basic level of the beam and
therefore the general brightness of the picture. The contrast control controls the
amplification or gain of the amplifier. The greater the contrast the greater is the effect
of the video signal on the brightness. At low contrast, the picture will appear grey and
uninteresting. At excessive contrast, the blacks and whites in the picture are very
harsh and the picture is unpleasant to view. At the correct brightness and contrast
levels, the picture will appear natural with many shades of grey. The DC Restoration
affects the overall voltage level of the video signal. Sometimes this is needed because
the voltage is modified as it passes through capacitors in the circuits of cameras and
control equipment. With the DC restoration turned off there will be a grey raster
when no video is input to the monitor. With the DC restoration turned on the screen
will be completely black when no video is input.

Principles of Colour Monitor Operation


A colour monitor works in basically the same way as a monochrome monitor except
that there are three electron guns. These three guns are for the three primary colours,
red, green, and blue. The guns are aligned to the mask on the phosphor screen. If a TV
screen is examined closely, it can be seen that it is a matrix of very fine red, green and
blue dots. This is why the resolution of colour monitors is typically lower than
monochrome monitors.
A combination of all three dots is needed to generate white compared with a single
dot for a monochrome monitor. This means that for the same number of pixels the
ability to resolve black and white lines may be up to three times less on a colour
monitor. When the beam from the correct gun strikes a spot or pixel on the
corresponding mask then the pixel glows red, green, or blue. As previously explained
in Chapter two, combinations of these three basic colours can be used to form any
colour in the spectrum. The firing of the guns in combination by the colour composite
video signal recreates the colour picture viewed by the camera.

Diagram 5. 3 Colour Monitor Block Diagram


After sync separation the combined chrominance and luminance signals are processed
by decoder and amplifier circuits. These are divided into separate signals to control
the strength of the red, blue and green electron guns. Besides the normal brightness
and contrast controls there is also a colour control that affects the general
chrominance of the picture. With the control wound to minimum, the image will be
monochrome. When the control is turned to maximum the colours will be very
saturated and will normally be too unpleasant to view.
Usually a composite colour video input is provided but on some monitors a Y-C or
Super VHS input will be provided. Alternatively, an input is provided where all three
colour signals are brought in separately. This is known as an RGB (red, green, blue)
input. The advantage of either Y-C or RGB inputs is that there is no filtering as
associated with colour composite video. The bandwidth available is higher, and
consequently higher resolution is available if the Y-C or RGB inputs are used. That is,
provided of course that Y-C or RGB has been used throughout the system.

Understanding monitor performance specifications


Resolution
As with cameras, the vertical resolution of a monitor is the number of black to white
transitions or lines that can be distinguished from the top to the bottom of the picture.
In addition, as with cameras the limiting factor is the 575 lines that make up the
picture. The figure for resolution that is normally given in monitor data sheets is, as
for cameras, the horizontal resolution. That is to say, the number of black to white
transitions or lines that can be resolved along one horizontal line of the picture.
The major difference between resolution performance figures for monitors and
resolution for CCD cameras is that the figure for monitors is given for the centre of
the picture. This is where the resolution is highest.

Diagram 5. 4 The Effect Of Scanning Coils On Resolution And Linearity


The reason for this is that the picture is made by 625 horizontal lines produced by the
scanning coils using a magnetic field to drive the beam of electrons across the
phosphor screen. However, it is very difficult to get a magnetic field to have an even
or linear effect across the entire surface of the screen. At the edges of the screen, the
magnetic field tends to be non-linear and both the horizontal and vertical lines seen on
the screen will appear bent. The electron beam also tends to defocus towards the
edges. This reduces the ability to distinguish fine lines at the corners and sides of the
screen and reduces resolution at these areas. For example, a monitor with a resolution
at the centre of 600 lines might only have a resolution of 400 lines at the corners. This
is a very important point to remember in choosing a monitor and in positioning a
camera on the screen to see the most detail. The object to be viewed must be placed in
the centre of the screen to get the sharpest picture.
The problems of non-linearity became worse with the advent of flatter and squarer
tubes, because the scanning beam, which scans at a constant rate, has to travel further
to the edges of the screen than it does to the centre. This problem is overcome with a
compensation circuit called 'S' correction. This causes the beam scan, now deliberately
non-linear, to move more slowly towards the edges and faster in the centre.
Monochrome monitor horizontal resolution is normally quite high, between 750 and
800 lines for a nine-inch monitor. The reason is because the coating of phosphor on
the inside of the screen is continuous and the spot size is determined by the electron
beam focus. Consequently, in monochrome systems the monitor is not the limiting
factor for the resolution of the system. The resolution tends to decrease slightly as the
monitor size increases because it is more difficult to manufacture large TV tubes with
a fine phosphor coating.
In colour monitors, however, because there are three spots to make each point, red,
green and blue, the resolution is very much lower, typically 330 to 350 lines. The
highest resolution that is being achieved at this time is about 450 lines. This is
assuming that the Y-C input of the monitor is used. That, of course, has the proviso
that all the other parts of the system are Y-C and have the same or higher resolution
figures.
Bandwidth is also linked to resolution (see Chapter 3 and the section on camera
resolution). The greater the bandwidth, the higher the possible resolution of the monitor
and the sharper the pictures will be. For a 750-line monitor the bandwidth might
typically be about 10MHz.
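A useful rule of thumb, assuming the PAL active line time of roughly 52 microseconds, is that each megahertz of bandwidth supports about 80 TV lines of horizontal resolution; 750 divided by 80 gives approximately 9.4MHz, which is consistent with the 10MHz figure quoted.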

5. VIDEO SWITCHING
Introduction
There are few CCTV systems that have only a single camera apart from door entry or
vehicle rear view systems, etc. Most systems incorporate more than one camera and
therefore have the need to select the view from any camera on to a monitor. This
chapter covers the main types of video switcher and their applications.
Principles of Video Switching
It would be possible to switch video signals using simple toggle switches but this
would introduce several undesirable results. The switching could cause severe
interference on the screen due to the induced noise on to the signal. There would be a
lot of picture roll until the monitor became synchronised to the next camera. The
picture might be unstable until the monitor is synchronised correctly.
Modern video switchers incorporate electronic switches and a technique known as
vertical interval switching. When a new camera is selected, the electronic circuits
wait a fraction of a second until the field sync pulse of the video signal is detected and
then switch over. This allows the monitor to lock immediately on to the new line sync
pulse and the new picture is displayed without any rolling. This assumes that all the
cameras in the system are compatible and on the same phase of the supply. The
elimination of picture bounce is the main reason for specifying that all cameras are on
the same phase of the supply. There are cases where it is not possible to connect all
cameras to the same phase such as large industrial sites or systems having cameras in
several buildings. There are cameras available with phase adjustment controls. This
allows the video signal to be transmitted out of phase from the local supply and in
phase with the other cameras. In many cases, the adjustment is too coarse for accurate
alignment and the result would be a small amount of bounce but not a complete roll
of the picture. The measurement should be carried out at the monitor using a dual
trace oscilloscope. One trace would show the local mains sine wave. The other would
show the camera output and its relationship to the supply.

The Basic Video Switcher


The simplest switcher is one that includes the features mentioned previously and
where the coaxial cables are connected directly into the rear via BNC plugs. These
switchers usually have a number of buttons according to the number of cameras in the
system. They are mainly 2, 4, 6, and 8 way units. This type of switcher is usually
known as a manual switcher where the keys directly switch the cameras.
Switchers are usually terminated with a 75-ohm resistor, as is the monitor. In the case
of the system shown in diagram 6.1 the terminations at both the switcher and the
monitor should be left at 75 ohms.
Most switchers have two other controls, one to set the cameras to sequence
automatically, the other to adjust the dwell time between switching from one camera
to the next. The dwell time will be the same for each camera in the system.

Diagram 6.1 System with Simple Manual Switcher

Looping Switchers
On occasions, it may be required to loop one or more cameras to part of the system or
another switcher, for dual control. Here a switcher with loop through facility would be
used. This type of switcher will have two rows of BNC connectors, one above the
other. There will also be a switch adjacent to each camera input, the purpose of which
is to set the 75-ohm termination on or off. One position of the switch will usually be
marked high, the other low or 75 ohm. The camera inputs are normally the top row
of connectors with a corresponding loop through connector below. The camera signals
that are required to carry on to another location would be taken off the output
connectors via BNC plugs. The termination switch next to each looped through
camera should be set to high. The signal should then be terminated at 75 ohms at its
destination. Some switchers with looping outputs do not have a termination switch.
Instead the resistance is set to high and plugs with a built-in 75-ohm resistor are
provided to fit in unused outputs.
It is not acceptable to loop through a video signal by using a BNC tee connector. If
this is the only way available then the internal 75-ohm resistor inside the unit should
be snipped out, and the correct termination at the end of the line should be ensured.

Diagram 6.2 Rear Panel of Looping Switcher.

Switchers with Additional Features


Switchers are available with two monitor outputs. Normally one monitor can be set to
sequence through the cameras and the other used as a selectable spot monitor.
Another feature available on many switchers is the capability to accept alarm inputs.
There is usually one alarm input to each camera input. If there is an input from an
alarm, the switcher will automatically switch the monitor to the associated camera. An
alarm input will override any sequence that is running and hold the selected camera on the
monitor. In the case of a switcher with dual monitor outputs, one monitor will switch
to the alarmed camera while the other continues to sequence.

Remote Switchers
Often it may be inconvenient or difficult to route all the coaxial cables to a desktop
switcher. This is especially the case if there are eight, sixteen or more cameras in the
system. A remote switcher is one where the camera cables are connected into a panel
containing all the switching electronics. This box can be situated anywhere
convenient for routing the cables. The desktop control unit is then connected to the
remote panel by a small two or four core cable or sometimes a single coaxial cable.
The coaxial cable to the monitor(s) is connected to the remote panel.

Diagram 6.3 System with Remote Switcher.


Remote switchers can generally be more sophisticated than the desktop type and can
incorporate more features. There can be up to six or eight monitor outputs and more
versatile handling of alarm inputs. In addition, several keyboards may be incorporated
into one system. This allows selection of cameras from more than one control
position. The controls in this type of system are generally of the master and slave
type, which means that the controls are not totally independent. Where greater
flexibility is required then the choice would be to use a matrix switcher as described
in the following section.
For a system with more than four cameras, remote switchers can achieve significant
savings in installation costs.

6. ANALOGUE VIDEO RECORDING


The human eye is an incredibly adaptable device that can focus on distant objects and
immediately refocus on something close by. It can look into the distance or at a wide
angle nearby. It can see in bright light or at dusk, adjusting automatically as it does so.
It also has a long 'depth of field'; therefore, scenes over a long distance can be in focus
simultaneously. It sees colour when there is sufficient light, but switches to
monochrome vision when there is not. It is also connected to a brain that has a faster
updating and retentive memory than any computer. Therefore, the eyes can swivel
from side to side and up and down, retaining a clear picture of what was scanned. The
brain accepts all the data and makes an immediate decision to move to a particular
image of interest, select the appropriate angle of view and refocus. The eye has
another clever trick in that it can view a scene of great contrast and adjust only to the
part of it that is of interest.

Introduction
The predominant method of recording video pictures at the time of publication of this
book is by analogue video recording. In analogue recording, the voltages that make
the composite video signal are recorded on to magnetic tape; the changes in voltage
magnetise and demagnetise the tape. To play back the recording the changes in
magnetism on the tape are converted back in to voltages and the composite video
signal is re-created for connection to a video monitor.
A video tape recorder is a complex integration of electronics and extremely high
precision mechanics. There have been several types of recording systems in recent

years, the main contenders being 'Betamax' from Sony, 'Video 2000' from Philips and
'VHS' from Matsushita. They are all based around a tape contained in a cassette with a
supply spool and a take up spool. However, there were both electronic and mechanical
differences that prevented one tape being used on another make. The one to emerge as
the standard throughout the world is the VHS system. VHS means Video Home
System and was developed by the JVC Company in Japan.

The VHS Video Recorder


All video tape recorders follow the same principles as an audiocassette recorder. That
is, a tape containing thousands of tiny magnets, each with a north and a south pole is
passed through a varying magnetic field. The magnetic field is generated in a
revolving drum from the video signal. This reproduces the video signal onto the tape.
The tape is stored in a sealed cassette with a flap at the front protecting the tape.
When the tape is loaded into the recorder, a mechanism draws the cassette in and
down into the machine.
The catch holding the front cover is released and the cover opened. The cassette drops
over two threading posts as shown in the first diagram. When one of the functions
such as play or record is operated the tape is drawn around the head drum as shown in
the second diagram.

Diagram 7. 1 VHS Tape Cassette.

Principles of Video Recording


The descriptions given here are, of necessity, oversimplified and are intended to
illustrate the basic principles of recording. As stated before, the two essential elements
of a video tape recorder are a rotating head assembly and the tape passing around a
drum and head. The head consists of a ferrite ring with its continuity broken by a
small gap. A coil is wound round the ring which, when energised, creates a magnetic
field. The magnetic field in the ring concentrates in the gap. An essential aspect of
design is that the head gap is of the order of 0.3 microns. A micron is one-millionth of a
metre. Therefore, 0.3 microns is about one-hundredth the thickness of a human hair.
The video signal is fed to the magnetic coil and creates an analogue version in the
form of a magnetic field. As the tape passes the gap in the head the magnetic field
causes the 'internal magnets' to align according to the signal passing through the head.
This makes a magnetic copy of the signal on the tape. The tape passes the drum at a
fixed speed, therefore low frequencies will create long 'magnets' in the tape, and high
frequencies will create short 'magnets'.

Tracks on Tape
The tape consists of an insulated base material with a fine oxide coating. For various
reasons, the head is displaced at an angle to the tape. This is known as helical
scanning and is standard for all recorders. The magnetic information is recorded at an
angle across the tape.

Diagram 7. 2 Tracks on Video Tape

The width of tape for standard VHS is 12.65mm (1/2"). The speed for standard real
time recording is 23.39 mm/sec. Early video recorders and some domestic VHS
recorders still available today had two coils, or heads, on each head cylinder. This
worked well while the tape was moving, producing moving pictures on playback.
However, when the pause function on the recorder was activated to view a single still
picture horizontal noise bars would appear on the picture because the head was not
moving fast enough to capture the single picture from the tape accurately.

The solution to this problem was introduced when the first four-head video recorders
were made. These use four coils or heads, two each on opposite sides of the head
cylinder. By using four heads instead of two twice the amount of information could be
written to or read from the tape. Four head video recorders can replay still images
without any noise bars and this has led to their general use in domestic and CCTV
video recorders, replacing the older two-head design.

The heads are spaced 65 microns apart for a standard VHS time-lapse recorder and
these lay down tracks on to the tape, which are 58 microns wide. Head cylinders of
this design are known as type SP heads.

7. DIGITAL TECHNOLOGY AND RECORDING


Introduction
Recent developments have made it possible to store video images on magnetic discs,
as on a computer hard disc. This is done by converting the image to a digital form to
store it. The early problem was that obtaining reasonable resolution required storing a
massive amount of data. The result was that only a limited number of images could be
stored. A reasonable quality colour picture with a resolution of 681 x 582 pixels has
396,000 picture elements. This would need about 1/3 megabyte (Mb) of disc storage.
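As a rough check, at one byte per picture element, 681 x 582 is about 396,000 bytes, a little over a third of a megabyte.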

Modern digital compression technology now means that many more images can be
stored. There are now systems that can store thousands of images. Even this must be
considered in the light of the quality of image and the amount that can be stored. For
instance, real time video is presented at the rate of 25 frames per second, i.e. 90,000
frames per hour. A 100-Mb hard disc would store 330 frames, which is only 13
seconds of video at normal density. A compression of 2:1 still only stores about 26
seconds of live video. Sampling every other frame would double this again but it can
be seen that digital storage has a long way to go before replacing the video recorder.
Having said this, technology in this field is advancing at a very fast rate and is the
obvious way forward.
Digital recorders are available but their use is a tiny fraction of that of analogue video
recorders. This is no surprise as a videotape costing a few pounds can store over
432,000 high quality colour images, using a recorder costing a few hundred pounds.
To store the same number of pictures digitally is very costly both in storage media and
hardware required to write to it.
The primary successes of digital recorders have been in event recording, where fast
recording and search makes digital recorders most attractive. Many digital recorders
include multiplexers as the timebase corrector required for digitising means that
comparatively little extra circuitry is needed to add this feature, which helps to make
them cost effective.
This was the original introduction to digital recording in the second edition published
in 2000 and would have been written in about 1999. Technology has moved on at a
fast pace since then. In fact it is now at the stage where digital recording is virtually
the norm with the use of analogue VCRs declining rapidly.
Alongside this massive development is the growth of IP technology, which now has a
complete chapter devoted to this latest trend.

The Digital Video Recorder (DVR)


The essential elements of any digital video recorder are shown in the simplified block
diagram 8.1. Many DVRs have more components to add additional features like
motion detection or video transmission. The switcher selects which camera is to be
recorded at any moment and routes it to a timebase corrector. The timebase corrector
ensures that pictures can be recorded rapidly in sequence without having to
synchronise the cameras by gen lock or other means.
The analogue to digital converter (ADC) turns the voltages representing luminance
and chrominance into an array of binary digital numbers which represent the brightness and colour
at every point on the video picture. A digital signal processor takes this huge amount
of raw data and compresses it so that an acceptable number of pictures can be stored
on the limited space available in the digital store. The store takes this information and
holds it, usually under a reference related to the time and date of recording.

Diagram 8. 1 Simplified Block Diagram Digital Video Recorder


At any time this archived information can be retrieved and routed via a digital to
analogue converter to re-create the video signal required to play back the recording on
a conventional video monitor. Alternatively, if a Personal Computer is being used as a

digital recorder the playback pictures may stay in digital form for display on the PC
monitor.

Units of measure for digital storage


Storage and file sizes are measured in bytes where one byte is the basic unit of storage
that would represent a single letter or number. A byte comprises eight bits. One bit is a
single binary number either 1 or 0.
One Kilobyte = 1,024 bytes (2^10), not 1,000 as is commonly used.
One Megabyte = 1,024 Kilobytes = 1,048,576 bytes (2^20).
One Gigabyte = 1,024 Megabytes = 1,048,576 Kilobytes = 1,073,741,824 bytes (2^30).
One Terabyte = 1,024 Gigabytes (2^40 bytes).
The above relationships between units are strictly correct, however it is common
practice to use a factor of 1,000 as the ratio between units.
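The difference is easy to see with a few lines of Python, used here purely as a calculator:

KILOBYTE = 2 ** 10             # 1,024 bytes
MEGABYTE = 2 ** 20             # 1,048,576 bytes
GIGABYTE = 2 ** 30             # 1,073,741,824 bytes
TERABYTE = 2 ** 40             # 1,099,511,627,776 bytes

print(GIGABYTE)                # 1073741824
print(10 ** 9 / GIGABYTE)      # about 0.93: a 'decimal' gigabyte is roughly 7% smaller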

Principles of Digital Video Recording


In digital recording each field is divided into an array of individual points or pixels.
At each one of these points, analogue to digital converters convert voltages
representing the colour and brightness at that point to a binary digital number. This
array of binary digital numbers can then be stored digitally in a file with a name cross
referenced against time and date. A single frame of monochrome video needs about
450kb (Kilobytes) of space for storage and single frame of colour needs about 650kb.
This is the uncompressed size that would be needed for storage on hard disc or other
storage medium.
Consequently to store the same number of images as a video tape a total storage
capacity of about 121.5Gb (Gigabytes) would be needed for monochrome and
175.5Gb for colour. This is considerably larger than hard discs and other media
generally available and would also be very expensive. Consequently some means is
required of reducing the amount of space required without adversely affecting picture
quality. The technique of reducing the amount of space required is generally referred
to as compression.
The video frame contains a large amount of redundant information that can be
eliminated without a great loss in perceived picture quality. Consequently, common
types of compression used are known as 'lossy' compression because the redundant
information is discarded. Most compression methods are effective up to a certain
point, or 'knee', beyond which the image quality quickly degrades.
To assist in reducing the amount of size required for storage the video signal can be
represented in a form known as YUV. The YUV format consists of the Y (luminance)
and UV (colour difference) signals (for further descriptions of luminance and video
signal components see chapter 2). The advantage of using YUV format is that fewer
bytes are needed to digitise the video. Normally, recording all of the colour
components - red, green, blue (RGB recording) - would need three bytes, one byte for
each colour. By using YUV format the luminance can be digitised as one byte and the
colour difference signal as one byte. Consequently only two bytes are needed rather
than three, a saving of one third of the storage space required. This technique can be
used together with compression to minimise the amount of space required for storage.
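As a rough illustration, for the 681 x 582 frame mentioned earlier (about 396,000 picture elements), RGB recording at three bytes per element needs roughly 1.2Mb, whereas YUV at two bytes per element needs roughly 0.8Mb - the one third saving described above.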

Types of Compression
The technology for compressing video pictures originated in the storage of still
photographs on computers. The most commonly used standard, JPEG, takes its name
from the Joint Photographic Expert Group by whom it was developed. Using JPEG
compression, the knee occurs at about 8:1 compression. For moving pictures the most
commonly used variant is Motion JPEG (M-JPEG), for which the knee occurs at about 15:1 compression.
Consequently, M-JPEG reduces a 450kb file to only 30kb. While this is still too large
to fit the same number of images as a video tape on to a hard disk it is small enough to

permit, say, 2 images per second to be recorded for 24 hours on to a 6Gb hard disk,
which is a size generally available, costing a few hundred pounds.
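As a rough check on those figures, 2 images per second for 24 hours is 172,800 images; at 30kb each that is a little under 5.2Gb, which fits comfortably on the 6Gb disk mentioned.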
Another more recent compression standard was devised by the Motion Picture Expert
Group specifically for the digitisation of moving images. This standard is given the
name MPEG. This standard makes use of the redundancy between adjacent frames.
MPEG-1 contains three types of encoded frames. Intracoded frames (I-frames)
contain all of the video information required to make a complete picture. Predicted
frames (P-frames) are generated by previous I-frames or P-frames and are used to
generate future P-frames. Bi-directional Predicted frames (B-frames) are generated
using both previous and future frames. A complete sequence of frames is made up of a
series of these different frame types with more than one I-frame for every 10 P- or
B-frames. This process is known as inter-frame correlation and allows compression
ratios of 100:1 to be achieved.
MPEG-2 is the format used in the latest Digital Video Disc (DVD) technology, which
can store about 90 minutes of VHS quality video and audio on to only 650Mb of
storage space, such as a CD-ROM. However there are a number of disadvantages to
MPEG compression. Firstly, in order for MPEG to achieve high compression it needs
the video signal not to change abruptly from frame to frame. Since many video
recording applications require multiplexing because more than one camera must be
recorded, the rapid change from frame to frame as cameras are switched defeats the
inter-frame correlation technique used in MPEG. Secondly, MPEG requires much
more electronics than JPEG, making it more expensive for security applications.
MPEG-4 is the latest development in the MPEG series and is mainly used in video
films. Note, there was no MPEG-3.
FORMAT     KNEE          WITH INTER-FRAME CORRELATION
JPEG       4 - 8 : 1     Not Available
M-JPEG     10 - 15 : 1   Not Available
MPEG       10 - 15 : 1   100 : 1
FRACTAL    20 - 30 : 1   > 100 : 1
WAVELET    30 : 1        > 100 : 1

There are two other methods of compression worthy of mention.


H.264 is a video compression standard whose core technology offers substantially
increased coding efficiency and enhanced robustness to network conditions on cost
effective embedded platforms. This technology supports TV broadcast, digital
entertainment, internet streaming and visual communications over broadband and
wireless networks.
'WAVELET' is also seen as offering superior development potential to current
MPEG compression, giving a greater amount of compression with equivalent
quality. It transforms the whole image and not just blocks of the image, so as the
compression rate increases, the image degrades gracefully rather than into the
blocky artefacts seen with some other compression methods. Wavelet
applications allow the preferred level of compression, higher or lower, to be
selected by the user.
Thus, although Wavelet is not as established as some other compression techniques, it
is growing in popularity.

Compression summary
Compression technology is developing rapidly, which makes it very difficult to
assess the true benefits of any particular method used in security applications. Each
manufacturer, naturally, pushes their own preference, but this still leaves a jungle for the
end user to find their way through.
Fractal compression is not found very often in CCTV applications but is mentioned
here for completeness. It is a mathematical method of encoding that requires a great
deal of computing power to encode the images. It is not a lossy compression as in
JPEG or MPEG. One advantage is that the image can be enlarged or reduced without
the blocky appearance of other forms of lossy compression.

Storage Rate
Another factor involved in digital recording is that of storage rate. Working at the full
25 frames per second of real time video would not only require vast amounts of
storage (4.5Gb for just one hour @ 30kb per frame) but also very fast processing and
storage media capable of digitising and storing each frame (even at 30kb) in under
0.04 seconds, i.e. 40 milliseconds.
Many DVRs currently available, particularly those based on hard disc storage get
round this problem by sampling and recording frames at lower than the full 25 frame
per second rate. This is expressed in a number of ways. For example, a DVR may
record every 12th frame, 2 frames per second, or half a second per frame. All of these
are effectively the same value.
The combination of file size and storage rate will give a figure for storage capacity
per second. For example, to store a 30kb file at 3.13 frames per second requires 30 x
3.13 = 93.9kb per second, or 0.34Gb per hour. However, this is just for one camera
and most systems have more than one camera that must be recorded. For 8 cameras
the figure above would need to be multiplied by 8 which is 2.72Gb. To record these 8
cameras for 8 hours would need 8 times the storage space again, 21.76Gb. There are
currently 23Gb hard discs that would accommodate such storage.
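The arithmetic above can be reproduced in a few lines of Python; the frame size and frame rate are simply the figures used in the text:

frame_size_kb = 30                                  # one compressed frame
frames_per_second = 3.13                            # recording rate for one camera
cameras = 8
hours = 8

kb_per_second = frame_size_kb * frames_per_second   # 93.9 kb per second per camera
gb_per_hour = kb_per_second * 3600 / 1_000_000      # about 0.34 Gb per hour per camera
total_gb = gb_per_hour * cameras * hours            # about 21.6 Gb in total

# The 21.76Gb quoted above comes from rounding to 0.34Gb per hour before multiplying.
print(round(gb_per_hour, 2), round(total_gb, 2))    # 0.34 21.63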

Conditional Refreshment
A technique is now being used by which the first frame of a scene is captured and
stored at the highest possible resolution. Subsequent frames are scanned and only
those parts of the scene that have changed are stored. These refreshed scenes are
superimposed onto the original frame and the changed parts updated. The refreshed
scenes use only a tiny amount of data storage compared to the original scene. In this
way, the storage capacity can be increased by one hundred or one thousand times
according to the amount of movement in the scene.

8. IP TECHNOLOGY
Introduction
It used to be that CCTV images were always transferred over coaxial cable, for
various reasons: range, bandwidth, ease of installation, low attenuation, and so on.
However, there is a trend which is emerging to integrate CCTV images into (or over)
existing digital networks which are there to provide data services. The reasons for this
trend would appear, on the face of it, to be unarguable: most organisations have large
data networks already; there is often spare capacity (although the network manager
may disagree with that statement); twisted pair cable extends everywhere; it is simple
to install and maintain; it makes maximum use of (or leverages) an expensive asset.
There are downsides to the integration of data and images on a single infrastructure,
usually to do with two things: the effect on data patterns caused by streaming video,
and the problems of reliability and resilience in a network where 100% uptime is
usually an impossibility.
This chapter acts as a simple guide to networking which hopefully will cover a lot of
what you wanted to know about networking. This is not an in-depth technical guide:
there are already too many of those around. Rather, it looks at an overview of
networking from the data perspective, and then deals with the issues of adding CCTV
to the infrastructure.
The first part looks at what a network is and how simple networks operate. This leads
on to check out protocols, and in particular, the OSI 7 layer model. Then TCP/IP, IP
addresses and gateways are dealt with. Local Area Networks are looked at: how they
work, and what to look out for when CCTV is added. Ethernet will be described, the
world's most popular LAN, and the difference between hubs and switches will be
examined. Later, the Internet is described: where it came from and how it works; what
domain names are, and how a name, and its location, are looked up through a service
called the DNS. Then routers are explained - how do they do their job? What happens
if they stop working? What's a router-switch? The next part of the chapter looks at the
circuits used to connect equipment together: copper, wireless and optical fibre.
Lastly, how networks are accessed is described, and security issues are dealt with by
reference to Virtual Private Networks and Firewalls.
Networks
Over the years, many different definitions have emerged to cover the word 'network'.
A group of PCs connected together might be one; a fully interconnected system of
hardware with redundant circuits to provide resilience might be another. In actual
fact, a network can be anything from something as simple as two desktop computers
sharing a single printer to something as large as the internet. What drives a network is
the word 'interconnectivity'.

Diagram 9.1 Interconnectivity


Can one PC send data to another PC and vice versa, irrespective of how they are
actually connected together? Can a computer in, say, England, download information
from another computer in China? Will the two computers be compatible? Should we
need to know? The answers to these questions are yes, yes, yes and no, in that order.
The fact that a computer made by one manufacturer can 'talk' to a computer made by
a different manufacturer somewhere else in the world isn't something just to do with
the fact that both might use Microsoft operating systems: there's a bit more to it than
that. Buried deep in the heart of the PC is a set of protocols which take care of any
incompatibilities between different computers. It isn't necessary to know that they're
there, but it might be helpful to explain a little about protocols and how they work
before continuing.
Types of communications

Diagram 9.2 Types of communications


Whenever you write a letter, you observe a protocol: 'Dear Sir' ends with 'yours
faithfully'; 'Dear Ms Smith' ends 'yours sincerely', and so on. We do it without
thinking: it's what we were probably taught at school. Similarly, when we ring
someone, we have a protocol for identifying who is at the other end of the line, and
how long we speak for before finding out whether the other party has understood. We
also know what to do if we have misheard or misunderstood what was said - a sort of
error detection and correction routine using the words 'Pardon?' or 'Sorry, I missed
that, say it again'. What do we do if we answer the phone and find someone speaking a
language we don't understand? We might be able to speak a few words of the foreign
language, but if we can't, then there is no point in trying to communicate.
What we need in the computer field is a kind of 'lingua franca', or a common language
which is used by every computer so that any computer can communicate with any
other. That doesn't mean that if you go to a Japanese web site and download some
data you will necessarily understand what it says - it will still be in Japanese
characters - but your computer will have had no problem understanding what you
asked it to do, and no problem in understanding how to ask the computer in Japan for
the information either. This is because all computers work to an internationally agreed
set of protocols.
Back in the 1970s, there was no need for protocols: all computers were made by IBM.
By the 1980s, many other manufacturers had entered the market, using different
internal operating systems, and it became very clear that international
communications were here to stay. New email packages became available; for
example Outlook, Outlook Express, Eudora Light, Eudora Professional, MailPlus,
Pegasus, Lotus Notes, and others. So to enable anyone anywhere to send email to any
other computer anywhere, irrespective of whether, for example, one PC used Outlook
to generate an email and another used Lotus Notes to do the same job, some sort of
protocol was needed to carry out 'conversion' work between two dissimilar elements
of software or hardware. The International Standards Organisation (ISO) got
involved, and came up with the Open Systems Interconnection 7-layer Model as the
best way of solving the problem. The code for this is embedded into the computer
operating system, and works quietly in the background.

Open systems interconnection


One of the simplest ways of understanding the Open Systems Interconnection model
is to relate it to a set of envelopes - several envelopes fit inside one another, until only
the largest is visible. The largest one hides all the others, and is the only one visible to
the eye. Before we see how it works, let's ask another question. If you want to be
absolutely sure that any postal packet you send to another person actually gets there,
what would you do? You ought not to drop it into a post box, even though the Royal
Mail has a good track record of delivery: you would send it recorded delivery or
registered post. That way you can be sure that the addressee has got it. Networks use
the same idea: if you want to send, say, an email to somebody, and be sure that (a) it's
arrived, (b) it's not been damaged in transit and (c) the whole email has been
delivered, and no part is missing, then your computer would automatically use a
system for 'recorded delivery' - this is called TCP, or Transmission Control Protocol.
You don't actually see this happening: your PC takes the appropriate action
immediately you decide to send an email. Let's use an example. We'll send an email
to enquiries@tavcom.com. This email has an attachment which consists of a Word
document of 100 pages of text. The computer we will use has Lotus Notes as its email
package (or client, as it is usually called) and it is connected to an internal Local
Area Network, or LAN. When we click on create mail, and fill in the various boxes
with subject, addressee, text, attachment, and so on, the OSI model is already working
away on this information. The email itself is placed inside an envelope with the type
of email package Lotus Notes on the front. This in turn is placed inside another
envelope with a label on the front to indicate that the contents are, in fact, electronic
mail. This label says SMTP Simple Mail Transfer Protocol. This envelope in turn
is placed inside another, which says TCP on the front. This is the instruction for the
recipient to acknowledge safe receipt. Since this envelope isnt big enough to hold the
email and the 100 pages of text (a TCP envelope will only hold about 300 words, or
roughly the equivalent of a single A4 page of text) the computer automatically
generates enough TCP envelopes for the whole message, and gives each envelope a
sequence number. So, for example, the first envelope would have a sequence number
of 1 of 100, the second would be 2 of 100 and so on. In this way, the recipients
computer knows how many envelopes it is supposed to receive, and it can therefore
ask for retransmission of any missing ones. Each TCP envelope is then placed inside
another envelope with the source and destination addresses on it. Since networks
dont actually use email addresses to send information, the destination address
enquiries@tavcom.com - has to be changed into an address format which can be
used. This is called the Internet Protocol address, or IP address. This is automatically
done by the computer. Finally, the IP envelopes are put inside another set of envelopes
which are addressed to a device which will send the full message into the internet
where it will be routed to enquiries@tavcom.com. This device is usually called a
gateway in actual fact it will physically be a router. Think of it as your post room,
where incoming and outgoing mail is sorted for delivery.
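The segmentation and sequence-numbering idea described above can be illustrated with a short sketch. This is purely an illustration of the analogy, assuming the 300-word envelope size quoted above; the function names are hypothetical and do not represent any real networking API.

    # Illustrative sketch only: split a message into numbered "TCP envelopes"
    # so the receiver can ask for any that go missing.
    def make_envelopes(message_words, words_per_envelope=300):
        total = (len(message_words) + words_per_envelope - 1) // words_per_envelope
        return [{"sequence": f"{i + 1} of {total}",
                 "payload": message_words[i * words_per_envelope:(i + 1) * words_per_envelope]}
                for i in range(total)]

    def find_missing(received, total):
        # Sequence numbers the recipient would ask to have retransmitted.
        got = {env["sequence"] for env in received}
        return [f"{n} of {total}" for n in range(1, total + 1)
                if f"{n} of {total}" not in got]

    words = ["word"] * 30000                          # roughly a 100-page attachment
    envelopes = make_envelopes(words)
    print(len(envelopes), envelopes[0]["sequence"])   # 100 envelopes, "1 of 100"
    print(find_missing(envelopes[:99], 100))          # ['100 of 100']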
Assuming the data successfully arrives at the post room at Tavcom, it will be
forwarded to the PC designated to handle enquiries. At this point, envelopes begin to
be opened. The IP envelope is opened to see whether it has been delivered to the
right address, and to see where it has come from. If that is OK, then the TCP
envelopes are opened one by one to check if they have arrived in the right sequence
and with their contents intact. If so, then the envelopes are passed to the computer's
internal 'mail room', where SMTP opens them and uses the information to convert
what it has received (Lotus Notes format) into the email package of the computer
at enquiries@tavcom.com, which in this example is Microsoft Outlook. Only when all
this has been correctly done, and any missing envelopes chased up and checked, will
the recipient be advised that an email has arrived.

OSI model
So let's translate all this into the OSI model. Layers 7, 6 and 5 are to do with the type
of email package (Lotus Notes) and whether it is indeed an email (SMTP). Layer 4
makes sure that TCP is used for 'recorded delivery'. Layer 3 contains the to and from
IP addresses, and Layer 2 has the address of your post room, or gateway. Layer 1
defines how, and at what speed, the data is sent from your PC to the gateway over
the LAN. To use the correct technical term, when data arrives at Layer 3, the IP layer,
it is loaded into an envelope which is formally known as a packet, an IP packet or
an IP datagram.
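The same mapping can be summarised programmatically. The sketch below is only a summary of the roles described in this example (the layer names themselves are the standard OSI names); it is not part of any real protocol stack.

    # How the email example above maps onto the seven OSI layers.
    osi_layers = {
        7: "Application  - the email package (Lotus Notes) and SMTP",
        6: "Presentation - how the application data is represented",
        5: "Session      - managing the dialogue between the two mail systems",
        4: "Transport    - TCP 'recorded delivery': sequence numbers, retransmission",
        3: "Network      - the to and from IP addresses (the IP packet/datagram)",
        2: "Data link    - the address of the local gateway (the 'post room')",
        1: "Physical     - how, and at what speed, the data travels over the LAN",
    }
    for number in sorted(osi_layers, reverse=True):
        print(f"Layer {number}: {osi_layers[number]}")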

Diagram 9.4 The layers of the OSI model


However, there is a problem with this analogy with respect to the transmission of
CCTV images. These must be sent and received in real time, so to acknowledge
receipt of each packet of video information would introduce an unacceptable delay
from end to end. So there must be a way of sending information without the need for
all the checking and acknowledging which is an essential part of TCP. The answer is
to use an alternative protocol, called UDP (User Datagram Protocol). This is
sometimes called 'fire and forget', and is the equivalent of the postal analogy where
letters are simply posted to their addressees without the need for acknowledgements.
Many IP cameras today have a user-selectable option for TCP or UDP to improve the
end-to-end delay characteristics of a network.
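A minimal sketch of the 'fire and forget' idea is shown below, using Python's standard socket module. The address and port are invented for the example; a real IP camera would of course use its own streaming implementation rather than this code.

    # "Fire and forget": a UDP datagram is simply sent, with no handshake,
    # no acknowledgement and no retransmission (address and port are made up).
    import socket

    RECEIVER = ("192.0.2.10", 5004)          # hypothetical monitoring station

    def send_chunk_udp(chunk: bytes) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(chunk, RECEIVER)     # no delivery guarantee

    # TCP, by contrast, would connect first and wait for acknowledgements,
    # which is what introduces the end-to-end delay discussed above.
    send_chunk_udp(b"\x00" * 1400)           # one roughly MTU-sized chunk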
10. HOUSINGS
Introduction
Most cameras are fitted with some form of protective cover for several reasons. The
common exception is probably in small retail establishments where the risk of damage
is slight.

Internal Housings
Housings are used internally for a variety of reasons. Sometimes the need is for the
camera to be discreet. This could be in certain types of establishment where the
security of customers or members is necessary, or where any impression of intrusion
on privacy needs to be subtly avoided. There are housings designed to blend in with
the decor for aesthetic purposes. These can be miniature cameras secreted in light
fittings or ventilation grilles. This type of housing is often used in hotels, museums
and art galleries, shopping malls, etc.
Another range of housings is designed for covert surveillance. The intention here is
not deterrence; the housing is deliberately disguised as some innocuous common
object. They usually incorporate a miniature camera fitted with a pinhole lens. These
objects have been as diverse as PIRs, clocks, extractor fan controls, smoke detectors,
etc. There appears to be no limit to the imaginative methods of concealing cameras.

Indoor cameras may sometimes have to be protected from attack and are therefore
fitted in vandal-proof housings. This often takes the form of a wedge-shaped housing
fitted in a false ceiling with the minimum area projecting below.
The disadvantage of the wedge-shaped housing is that it must be mounted facing in
the correct direction. Once fitted, it is not easy to change the orientation of the camera.
This type of housing is often used when the requirement is to view along a corridor or
other predetermined direction.

Diagram 10. 1 Camera in Wedge Housing in False Ceiling


There may be situations where more flexibility is needed in setting up the direction
the camera is viewing, and where the direction being viewed also needs to be discreet.
The solution here is to use a type of domed housing. The dome can be either a
hemisphere or a complete sphere. The hemispherical, or half dome, can be fitted in
place of a standard ceiling tile. The camera is mounted on an adjustable platform that
may be set for both angle of view and direction.

Diagram 10.2 Types of Discreet Camera Dome

There are two main types of plastic used for the domes. One is a black acrylic
material with a less dense slot through which the camera views. The other has a
silvered coating on the inside and acts in the same way as a one-way mirror. With this
type of enclosure, there is a great deal of flexibility in setting the camera view. It is
also very easy and quick to change the direction of view through 360°.
External Housings
These are often called weatherproof or environmental housings. There are standards
that specify the degree of protection to be provided by enclosures; the main ones are
BS 5490, IEC 529 and DIN 40 050. The rating of protection is defined by two digits
prefixed by the agreed letters IP. (In some countries three digits are used.) The letters
stand for Ingress Protection, and the significance of the digits is as follows:

First digit: the degree of protection against the ingress of solid objects and dust, and
the protection of persons from parts inside the enclosure.

Second digit: the degree of protection against the harmful ingress of water.

Third digit: the degree of mechanical protection.

For example, a rating of IP 54 indicates class 5 protection against the ingress of dust
and class 4 against the entry of moisture. Camera housings used in the UK will
usually have a rating of IP 65 or IP 66.

Note that these ratings only apply to normal environmental conditions. Special
protection is required for areas such as refineries, mines, flour mills, etc. If there is
any doubt, the customer will be aware of special conditions applying to particular
parts of the site. Tables 10.1 and 10.2 at the end of this chapter list all the index numbers.
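Reading an IP rating is a simple matter of splitting the digits; the sketch below does this for the ratings mentioned above. The lookup tables are abbreviated summaries written for this illustration and are no substitute for Tables 10.1 and 10.2.

    # Sketch: interpret a two-digit IP rating such as "IP65".
    # The lookup tables are abbreviated; see Tables 10.1 and 10.2 for the full lists.
    SOLIDS = {5: "dust protected", 6: "dust tight"}
    WATER = {4: "protected against splashing water",
             5: "protected against water jets",
             6: "protected against powerful water jets"}

    def describe_ip(code: str) -> str:
        digits = code.upper().replace("IP", "").strip()
        first, second = int(digits[0]), int(digits[1])
        solids = SOLIDS.get(first, f"class {first} against solid objects/dust")
        water = WATER.get(second, f"class {second} against water")
        return f"{code}: {solids}; {water}"

    print(describe_ip("IP 54"))   # first digit 5 (dust), second digit 4 (moisture)
    print(describe_ip("IP 65"))   # typical UK external housing rating
    print(describe_ip("IP 66"))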

Selection of External Housings


Weatherproof housings may seem the most mundane aspect of a CCTV installation,
because many engineers simply consider the housing as protection against the
elements. However, there are many aspects to consider and many suppliers of
housings. The housing is about the cheapest element of an external system, yet price
often appears to be the main factor in selecting which to use. Important considerations
should be:
Ease of access for pre-assembly in the workshop.

Ease of access during installation.

Ease of access for future service needs.

Is the camera mounting plate insulated from the case?

Can the mechanical focusing screw on the camera be reached? Some are at the
back, some at the side and some on top.

Can the lens be focused and the peak/average settings adjusted on site?

Can one man remove the cover and work on the inside?

If there is a telemetry board fitted, can it be accessed without removing the camera?

11. REMOTE POSITIONING DEVICES

Introduction
There are two main types of remote positioning device: those that move only in a
horizontal plane, and those that can move in two planes. Movement in the horizontal
plane is known as panning or scanning. Movement in the vertical plane is known as
tilting. The device that provides movement in both planes is called a pan, tilt unit or
pan, tilt drive. Both scanners and pan, tilt units are made for indoor and outdoor use.
The construction is fundamentally the same except that the units for external use are
designed for the appropriate IP rating (see Chapter 10 for a description of IP ratings).
Pan, tilt units are also produced for the range of hazardous environments mentioned in
Chapter 10.
Scanners
A common type of scanner is shown in Diagram 11.1, which may be designed for
either internal or external applications.

Diagram 11. 1 Typical Scanner Unit


The camera may be mounted directly on the platform in typical indoor situations. The
camera mounting platform is adjustable to a fixed angle of tilt by a bolt through the
pivot. The degree of rotation is set by two movable strikers that operate limit switches
at each end of the required travel. These units can be set to reverse automatically
when a limit switch is operated and therefore to scan continuously between the set
limits. This is called auto-pan and requires an additional simple board in the control
unit. The wiring is very simple and telemetry would not generally be used for
controlling this type of device.

For external use the units are larger and made weatherproof to the appropriate
standard. They are also more powerful than indoor models because they need to
support a weatherproof housing. The camera supply and coaxial cables must be left
with sufficient slack to eliminate strain through the movement of the scanner. There
should be enough slack cable to allow for the maximum travel of the unit; although it
may initially be installed with a small degree of scanning, requirements could change
in the future. Typical scanning speed is 6° per second and maximum rotation is in the
order of 345°. There is usually a minimum rotation of 5°-10° due to the size of the
limit stops.

This type of scanner is not very attractive in appearance, especially with the slack
cables going to the camera. On the other hand it is easily seen and is often used for its
deterrent value. Where aesthetics are important or discreet mounting is needed, there
are other types of scanner available. The hemispheres and domes mentioned in
Chapter 10 could incorporate scanning drives.
Diagram 11. 2 Housings for Discreet Scanners


Pan, Tilt Units
As with scanners, pan, tilt units may be designed for either internal or external use.
There are two main types of pan, tilt unit. The first is a unit where the camera or
housing is mounted directly on a platform that forms part of the construction. There
are two types of this design where the platform is either mounted on the side of the
unit or over the top. The second type of pan, tilt unit is where the driving components
are contained within an enclosed housing.

Diagram 11. 3 Types of Pan, Tilt Unit

Rating of Pan, Tilt Units


Pan, tilt units are rated by the load carrying capacity of the platform. In addition,
over-the-top units are rated on the basis that the centre of gravity of the load is within
a certain distance above the top of the platform. See the comments later regarding the
load rating of over-the-top units.
12. CONTROL SYSTEMS AND CABLING

Introduction
Telemetry is the automatic measurement and transmission of data from a distant
source to a receiving station. In the previous chapter, the various ways in which
cameras may be moved to obtain a different field of view were discussed. Where
movable cameras are present in a system, some means of controlling these positioning
devices must be used. These control systems are generally referred to as telemetry
systems. The name comes from the Greek 'metron', to measure (and, by extension, to
control), and 'tele', meaning at a distance, in the same way that television means
viewing an object at a distance.

There are many types of control system available on the market and, as always, each
method of controlling a movable camera has its benefits and drawbacks. The purpose
of this chapter is to explain the principles of the various types of control system
available and to discuss their advantages and disadvantages.

There are two main ways of configuring the cabling from a controller to remote
locations. One is known as 'daisy chain', in which the cable is looped from one unit to
the next and so on. The other is a 'star' configuration, in which a separate cable is run
from the controller to each location. These types of connection apply only to the
control cable. The video cable must always be run from each camera location back to
the main control; in other words, the video cable is always in a star configuration.

Diagram 12. 1 Remote Control Wiring Systems

The daisy chain configuration does not need the last unit to be looped back to the
controller. The control system being considered should be checked to ascertain which
method of cabling is required. In a large industrial CCTV system, the layout of the
site will dictate which type of cabling will be the most economical.

Hard Wired Control Systems


Hard-wired control systems are the simplest way of controlling movable cameras. As
the name suggests, the connection between the control panel and the scanner/pan-tilt
and motorised lens is a direct connection by a length of multicore cable. The cost
benefit of such an approach is that no form of telemetry receiver is required at the
camera location; neither is a local power supply point necessary at the camera site, as
all the power for the camera, lens and pan-tilt may be sent over the same cable. The
lens functions require a 6 or 12 volt DC supply, which will be provided by the
controller. The pan, tilt functions may be 12 volt DC or 24 volt AC.
A typical hard-wired camera installation might be as shown in Diagram 12.2.

Diagram 12. 2 Typical Hard Wired Camera Installation


The video switching in a system like this would be done with a simple video switcher
on to one or two monitors. As there is only one movable camera in the system, it is a
simple matter to select the picture from the movable camera on to one of the monitors,
and then to control the position and lens of that camera with the hard-wired control
panel. Typically, the cable required for connection from the control panel to the
movable camera must consist of 12 individual wires, or cores, covered by an overall
sheath. This number of cores is needed, as all the functions of the movable camera
must be individually sent along the cable. A typical schedule for such a cable might
well be as follows:
CORE   FUNCTION
1      Pan Left
2      Pan Right
3      Tilt Up
4      Tilt Down
5      Pan/Tilt Common
6      Zoom in
7      Zoom out
8      Focus near
9      Focus far
10     Housing washer
11     Housing wiper
12     Common
Table 12.1 Typical telemetry connections

There are two important factors to be considered in respect of hard-wired systems:
the safety and cost of installing this multicore cable, and the maximum distance at
which hard-wired pan-tilts may be sited from the controller. It is obvious that the cost
per metre of a 12-core cable will be higher than the single or double pair cable
required by other forms of telemetry system. This, though, is offset by the saving in
supplying telemetry receivers and transmitters. On a site where there are several
hard-wired movable cameras some distance apart, the cost of the cable may be
noticeable in the total price of the system. The second part of this concern is that there
are two main types of pan-tilt unit available: 24-volt AC types and 240-volt AC types.
The IEE wiring regulations state that 240-volt cables must be run in protective
conduit or trunking, for safety reasons. These regulations further state that low voltage
cables, such as those conductors used for lens control, must not be run in the same
conduit. If 240-volt AC pan-tilts are used, then all the expense of providing this
protection must be considered.

The other limitation of hard-wired controllers is imposed by the voltage drop caused
by the resistance of the cable. The current drawn by the pan-tilt unit, flowing through
the resistance of the cable, produces a drop in the voltage available at the pan-tilt. The
greater the current drawn by the pan-tilt, the greater the voltage drop, and therefore
the smaller the distance that the pan-tilt can be from the controller before the
remaining voltage is too small for the pan-tilt motors to work. The limiting voltage
drop is about 10% of the total, i.e. 2.4 volts for a 24-volt pan-tilt and 24 volts for a
240-volt pan-tilt. Ohm's law enables the effect of the resistance of the cable to be
calculated. This is given by the following simple formula:

Voltage drop = Current x Resistance (IR drop)

From cable data sheets, the resistance per metre can be obtained. Once that has been
found, the resistance of the cable run can be calculated. The overall resistance will be
for twice the length of the run, because there is the resistance of the core feeding the
motor and the resistance of the return core to be considered. The current drawn by the
pan-tilt can be found in the data sheet of the pan-tilt. The current and resistance
obtained can then be put into the formula above to find the voltage drop. If the voltage
drop is greater than 10% of the total, then there will be problems and a larger core of
cable will have to be used, with a consequent effect on the cost of the installation.

As an example, a 20 AWG cable might have a resistance of 0.053 ohms per metre. A
pan-tilt with a current consumption of 0.9 amps is planned for siting 25 metres from
the controller. The total length of the conductor will be twice 25 metres, because of
the effect of the supply and return cores. The total resistance would be 50 x 0.053
ohms = 2.65 ohms. The voltage drop will therefore be 0.9 x 2.65 = approximately 2.4
volts. This is the maximum that may be tolerated. Therefore, the maximum cable run
for hard-wired control is quite small for 24-volt AC pan-tilts.
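The same check can be run quickly for any proposed cable run. The sketch below simply applies Ohm's law with the figures from the example above; the function names are illustrative only.

    # Check the voltage drop on a hard-wired pan-tilt run (Ohm's law, V = I x R).
    def voltage_drop(run_length_m, ohms_per_metre, current_a):
        # Supply and return cores, so the loop resistance uses twice the run length.
        return current_a * (2 * run_length_m * ohms_per_metre)

    def run_is_acceptable(run_length_m, ohms_per_metre, current_a, supply_volts):
        # True while the drop stays within the 10% limit quoted above.
        return voltage_drop(run_length_m, ohms_per_metre, current_a) <= 0.1 * supply_volts

    drop = voltage_drop(25, 0.053, 0.9)
    print(f"Voltage drop: {drop:.2f} V")                        # about 2.4 V
    print(run_is_acceptable(25, 0.053, 0.9, supply_volts=24))   # True, right on the limit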
One option, of course, is to use 240-volt AC pan-tilts. The benefits of such a choice
are twofold. First, the 240-volt pan-tilt uses much less current than a 24-volt pan-tilt,
so the voltage drop will be smaller. Furthermore, the 10% maximum voltage drop is
24 volts rather than 2.4 volts, so the effect of any voltage drop is less. However, due
to the wiring regulations mentioned earlier, the additional cost of installing conduit or
trunking for the 240-volt cables must be incurred.

Some equipment manufacturers have approached this problem by developing relay
boxes that are installed at the pan-tilt location. A relay box consists of several
low-voltage relays, one for each function. The low voltage provided from the
controller operates a relay that switches the mains voltage to the appropriate function.
Such a system is shown in Diagram 12.3. The relay boxes give
several advantages:
1. The relays use much less current than pan-tilts and so the voltage drop is much
less. The payoff is in operating range, up to 4000 metres! Alternatively a
much smaller gauge, and consequently cheaper, cable may be used.
2. Either type of pan-tilt, whether 24 volt or 240 volt, may be used at any one
camera location.
However, there are also two disadvantages:
1. 240-volt mains supply points are needed at each movable camera location to
power the relay boxes. It is important to remember, though, that these supply
points would also be needed with any other form of telemetry.
2. There is a cost involved in buying the relay boxes, but these are noticeably
less expensive than a telemetry receiver of any other type of system.

Diagram 12. 3 Hard Wired Control System with Relay Boxes


The limiting factor for hard wire systems is the number of movable cameras. It can be
confusing for the person using the system if there are, say, more than three joysticks
for camera control. There are controllers available where several movable cameras,
typically six, can be controlled from a single controller. In such a controller, the
operator pushes a button to select which camera is to be controlled and the control
voltages are switched to the corresponding multicore cable. The operation of such
systems is slightly awkward, as the operator must remember to select the same camera
on the video switcher as has been selected on the hard wire controller. The effective
limit, then, for hard-wired systems is really one or two movable cameras.
13. MULTIPLE SCREEN DISPLAYS

Introduction
Any system that combines more than one video signal is technically a multiplexer.
These days it is customary to reserve the term multiplexer for equipment that can
simultaneously combine eight or more signals; units handling fewer signals are known
as screen splitters or quad splitters.
There will be many occasions when it will be advantageous to display more than one
camera on the monitor at once. One example is if an incident occurs but it is not
certain just where it originated. With a simple switching device, it would be a tedious
business to review all the cameras recorded in sequence. In addition, as stated
previously, essential information may be lost. However, if all the cameras were
recorded simultaneously and could be displayed simultaneously then reviewing and
finding the sequence of events would be very much easier. In addition, virtually no
information would be lost and the relevant scenes can then be analysed with full
screen pictures. The essential benefit therefore of recording in the various multiple
screen formats is that no information is lost due to dwells in switching.

Analogue and Digital Displays


The picture received directly from a camera and displayed on a monitor is an
analogue representation of the scene. The picture information has been converted
directly to a video signal and reconverted to the same scene on the monitor. The
clarity of the picture is dependent on the quality of the camera, the lens, the
transmission system and the monitor.
To display or record more than one picture at a time it is necessary on most systems to
convert the analogue signal to a digital form. This is known as analogue to digital
conversion. After processing, the signal then has to be converted back to analogue
form to be displayed on a monitor. This process introduces the possibility of
degradation to the original picture. Definition can be lost through the complicated
conversion processes and noise can be added to the signal. Also, the final quality is
dependent on the resolution in terms of the number of pixels comprising the digital
information.
Picture in Picture
This is a simple system by which one scene can be inserted in another. The camera
outputs are connected to a controller that allows one camera to be designated as the
main picture. The other camera is designated as the inserted picture. The inserted
picture may be positioned and sized anywhere on the screen as shown in Diagram
13.1. Usually either camera may be displayed as a full screen picture.
The normal controls for the inserted picture are: Horizontal size, vertical size,
horizontal position and vertical position. Note that only the inserted picture may be
altered; the background camera is always shown full screen. Note that, where the
inserted picture is analogue, the cameras need to be synchronised. This can be from an
external sync generator, or one camera can be synchronised from the other.

Diagram 13. 1 Example of Picture in Picture

Screen Splitters
This is similar to a picture in picture inserter except that both camera scenes can be
adjusted to compose the most useful combination. A screen splitter refers to a
combination of two cameras. The split can be arranged either horizontally or
vertically. The degree of overlap of either camera can also be adjusted. Screen
splitters also require the cameras to be synchronised.
Quad Screen Splitters
As the name implies, this system allows the presentation of four cameras on the one
screen. The majority of quad splitters now incorporate digital image processing. This
means that it is not necessary to synchronise the cameras and the picture is digitally
compressed to a quarter of its size. The four images are then displayed on a single
screen. Note that each picture will only have 25% of the screen resolution. There are
many features that may be available with quad screen splitters and it is essential to
check the manufacturer's literature for particular models. As always, the more features
it provides, the more expensive a unit is likely to be.

Diagram 13.2 Illustration of Quad Screen Display

It is possible to spend more than necessary if poor selection of a piece of equipment
includes more features than are required. Another factor to check is the resolution of
the displayed pictures. Some features that may or may not be included are as follows.
Camera Inputs
By definition, quad splitters will have four inputs but there are units available that can
have eight inputs. These usually display blocks of four cameras in sequence.
Electronic Zoom
When a camera is shown in full screen, this is a method of electronically enlarging a
quarter of the picture to a full screen view. The area in view may be panned around
any part of the original picture. Note, though, that this will produce a very
grainy-looking picture, because each pixel in the enlarged view will be four times as
large as in the full screen scene. For instance, if the full screen picture is made up of
512 x 512 pixels, then a quarter screen will contain 256 x 256 pixels. When zoomed,
the new full screen picture will be made up of those 256 x 256 pixels.
Sequential Switching
This is the capability to provide either a quad display or sequencing through each full
screen picture.
Dual Output
The capability of providing dual monitor outputs, one with a quad display, the other a
sequence display.
Alarm Inputs
Some quad splitters offer the capability to accept alarm inputs. The treatment on
receipt of an alarm can vary. For instance, it can hold the associated camera on full
screen until deactivated or it could override a sequence and switch to quad display.
Alarm Outputs
Alarm outputs are sometimes provided. These can be used to switch a video recorder
to real time or operate any other ancillary equipment.
Camera Titling
Another option sometimes available is the facility to insert camera numbers and titles
on the screen. These can usually be moved around the screen to prevent obscuring an
important part of the scene. Not all systems allow the positioning of individual camera
titles. Some only provide a fixed position for all cameras. The number of characters
available for titles varies between models.

Loop Through
As with switchers, some models provide loop through facilities with switchable
termination. The same comments apply to ensure correct termination when looping
through video signals.
On Screen Menu
Some of the systems with more facilities provide the capability of setting up the
various functions from on screen prompts.
Video Loss Alarm
This feature can provide a warning, both visible and audible, if there is a loss of the
video signal from any of the cameras.
14. LIGHT AND ILLUMINATION

Introduction
The science of illumination is a complex subject and a full treatment is beyond the
scope of this book. This section is intended to provide general guidance on those
aspects that affect the performance of CCTV systems. An understanding of the
principles of light is important to the design of CCTV systems because without
adequate light there can be no pictures. What constitutes adequate light depends on
many factors, some of which have already been mentioned in the specification of
cameras and lenses. The most important aspects of light affecting the design of CCTV
systems are: the light level in lux; the reflectance of the scene; and the wavelength of
the light source. The light level and reflectance are interrelated and determine the
camera sensitivity required. The wavelength must be related to the spectral response
of the camera.

Principles of Light
Electromagnetic Radiation
Light is energy in the form of electromagnetic radiation. The different forms of
electromagnetic radiation all share the same properties of transmission although they
behave quite differently when they interact with matter.

Diagram 14. 1 Electromagnetic Spectrum


Light is that part of the electromagnetic spectrum that can be detected by the human
eye. This is a very narrow band within the total spectrum as shown in Diagram 14.1.
The wavelengths used for CCTV lighting are shown and are discussed later in this
chapter. One metre is 1,000,000,000 nanometres (nm).
Electromagnetic Waves
The transmission of light energy can conveniently be described as a wave motion
having the following properties:

Electromagnetic waves require no medium and can therefore travel in a vacuum.

Different types of electromagnetic radiation have different wavelengths or frequencies.

All electromagnetic waves travel at the same velocity, which is approximately
300,000,000 metres per second in a vacuum.

The waves travel in a straight line but can be affected by:


Reflection, which is the reversal of direction that occurs at the surface of an object.

Refraction, which is a change of direction that occurs at the boundary between
different media. Different wavelengths have different angles of refraction.

Diffraction, which is a deflection that occurs at apertures or edges of objects.

Visible Radiation
These are the wavelengths of light that are visible to the human eye and are from
approximately 380 nm to 760 nm. When all these wavelengths are seen
simultaneously the eye cannot distinguish the individual wavelengths and the result is
seen as white light. Therefore, white light is not one wavelength but a combination of
them all. This effect can be demonstrated in reverse by passing white light through a
prism. As stated previously, different wavelengths have different angles of refraction,
therefore when the light is passed through a prism it is dispersed into its constituent
spectra because each wavelength is refracted differently. The result is that if a white
screen is placed to show the light passing out of the other side of the prism it will
show all the individual colours. This effect is shown in Diagram 14.2. The result is the
spectrum of light and the seven significant colours of the rainbow. In reality, there is a
continuous range of hues, but the eye perceives mainly the principal colours. A real
rainbow is created in the same way, by the light being reflected and refracted by
droplets of moisture in the atmosphere.

Diagram 14. 2 Refraction of White Light

Spectral Sensitivity
The spectral sensitivity of cameras is described in Chapter 4; this section brings that
together with considerations of the level and nature of the light. It should be
emphasised that the charts plot relative sensitivity: the vertical scale represents the
percentage of the rated sensitivity at different wavelengths, not a measure of the
camera sensitivity in lux. Many installations have been disappointing in performance
because of a lack of understanding of the relationship between the light source and the
specification of the camera. Most manufacturers will provide a spectral sensitivity
diagram for their products on request. However, they are not all drawn to the same
scale on each axis, so it can be difficult to make a realistic comparison of performance.
It is a good idea to redraw different diagrams to one common scale, which gives a
much better impression of relative sensitivity. An example is shown in Diagram 14.3
of two different sensitivity diagrams. The one on the right could easily give the
impression that it covers a wide range of wavelengths, whereas the one on the left
could convey the idea of very high sensitivity. They are in fact for identical
specifications.

Diagram 14. 3 Sensitivity Diagrams


15. TRANSMISSION OF VIDEO SIGNALS BY CABLE

Introduction
This is not meant to be a textbook on transmission but is intended to remove some of
the mystery associated with various methods of transmission. Many approximations
and simplifications have been used in writing this guide. This is to make the subject
more understandable to those people not familiar with the theories. For general
application in the design of CCTV systems it should be more than adequate and at
least point the way to the main questions that must be addressed. The manufacturers
of transmission equipment will usually be only too keen to help in final design.
This chapter deals with the transmission of video signals by cables. The next chapter
deals with the transmission of video signals by other methods such as microwave,
telephone systems, etc. See Chapter 9 for transmission over networks in more detail.

Diagram 15.1 Methods of Transmitting a Video Signal


Diagram 15.1 illustrates the many methods of getting a picture from a camera to a
monitor. The choice will often be dictated by the locations of cameras and controls.
Often there will be more than one option for the type of transmission; in these cases
there will possibly be trade-offs between the quality and security of the signal against
cost. This diagram could now also include transmission by IP networks.

General Principles
Video Signal
The essential components of the video signal are covered in Chapters two and three.
Certain aspects that are related to the effective transmission of those signals are
repeated in this chapter where it is necessary to save continuous cross-reference.
Synchronising
The video signal from a TV camera has to provide a variety of information at the
monitor for a correct TV picture to be displayed. This information can be divided into:
Synchronising pulses that tell the monitor when to start a line and a field; video
information that tells the monitor how bright a particular point in the picture should
be; chrominance that tells the monitor what colours a particular part of the picture
should be (colour cameras only).
Bandwidth
The composite video output from the average CCTV camera covers a bandwidth
ranging from 25 Hz to 5 MHz. The upper frequency is primarily determined by the
resolution of the camera and whether it is monochrome or colour. For every 100 lines
of resolution, a bandwidth of approximately 1 MHz is required; therefore, a camera
with 600 lines of resolution gives out a video signal with a bandwidth of
approximately 6 MHz. This principle applies to both colour and monochrome
cameras. However, colour cameras also have to produce a colour signal (chrominance)
as well as a monochrome output (luminance). The chrominance signal is modulated
on a 4.43 MHz carrier wave in the PAL system; therefore a colour signal, regardless of
definition, has a bandwidth of at least 5 MHz.
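The rule of thumb above can be expressed as a quick estimate. The sketch below is a back-of-envelope check only, using the 1 MHz per 100 lines figure and the 5 MHz colour floor quoted above.

    # Rough bandwidth estimate from the "1 MHz per 100 TV lines" rule of thumb.
    def estimated_bandwidth_mhz(tv_lines, colour=False):
        luminance_bw = tv_lines / 100.0      # approx. 1 MHz per 100 lines
        if colour:
            return max(luminance_bw, 5.0)    # PAL chrominance sits on a 4.43 MHz carrier
        return luminance_bw

    print(estimated_bandwidth_mhz(600))                # about 6 MHz
    print(estimated_bandwidth_mhz(400, colour=True))   # at least 5 MHz for colour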
Requirements to Produce A Good Quality Picture
From the above it will be obvious that to produce a good quality picture on a monitor,
the video signal must be applied to the monitor with little or no distortion of any of its
elements, i.e. the time relationship of the various signals and amplitude of these
signals. However in CCTV systems, the camera has to be connected to a monitor by a
cable or another means, such as Fibre Optic or microwave link. This interconnection
requires special equipment to interface the video signal to the transmission medium.
In cable transmission, special amplifiers may be required to compensate for the cable
losses that are frequency dependent.
Cable Transmission
All cables, no matter what their length or quality, cause attenuation when used for the
transmission of video signals, the main problem being the wide bandwidth
requirement of a video signal. All cables produce a loss of signal that depends
primarily on frequency: the higher the frequency, the higher the loss. This means
that as a video signal travels along a cable it loses its high frequency components
faster than its low frequency components. The result is a loss of the fine detail
(definition) in the picture.

The human eye is very tolerant of errors of this type; a loss of detail is not usually
objectionable unless the loss is very large. This is fortunate, as the losses of the high
frequency components are very high on the types of cable usually used in CCTV
systems. For instance, using the common coaxial cables URM70 or RG59, 50% of the
signal at 5 MHz is lost in 200 metres of cable. To compensate for these losses, special
amplifiers may be used. These provide the ability to selectively amplify the high
frequency components of the video signal to overcome the cable losses.
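Cable loss is normally quoted in decibels per 100 metres at a stated frequency. As a rough worked example, the sketch below converts the URM70/RG59 figure above (half the signal voltage at 5 MHz lost over 200 metres) into dB per 100 metres and scales it to other run lengths. The figures are indicative only; real designs should use the cable data sheet.

    import math

    # Convert "50% of the signal lost in 200 m at 5 MHz" into dB, then scale by length.
    def loss_db(voltage_fraction_remaining):
        return -20 * math.log10(voltage_fraction_remaining)

    per_100m_db = loss_db(0.5) / 2          # about 3 dB per 100 m at 5 MHz

    def fraction_remaining(run_length_m):
        total_db = per_100m_db * run_length_m / 100
        return 10 ** (-total_db / 20)

    print(f"{per_100m_db:.1f} dB per 100 m at 5 MHz")
    print(f"{fraction_remaining(400):.2f} of the 5 MHz detail left after 400 m")  # about 0.25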
Cable Types
There are two main types of cable used for transmitting video signals, which are:
Unbalanced (coaxial) and balanced (twisted pair). The construction of each is shown
in diagrams 15.2 and 15.3. An unbalanced signal is one in which the signal level is a
voltage referenced to ground. For instance, a video signal from the camera is between
0.3 and 1.0 volts above zero (ground level). The shield is the ground level.
A balanced signal is a video signal that has been converted for transmission along a
medium other than coaxial cable. Here the signal voltage is the difference between the
voltage in each conductor.
External interference is picked up by all types of cable. Rejection of this interference
is effected in different ways. Coaxial cable relies on the centre conductor being well
screened by the outer copper braid. There are many types of coaxial cable and care
should be taken to select one with a 95% braid coverage. In the case of a twisted pair
cable, interference is picked up equally by both conductors in the same sense, whereas
the video signal appears as equal and opposite voltages on the two conductors. The
interference can then be balanced out by using the correct type of amplifier, one that
responds only to the difference between the signals on the two conductors; this is
known as a differential amplifier.
Unbalanced (Coaxial) Cables
This type of cable is made in many different impedances. In this case impedance is
measured between the inner conductor and the outer sheath. 75-Ohm impedance cable
is the standard used in CCTV systems. Most video equipment is designed to operate
at this impedance. Coaxial cables with an impedance of 75 Ohms are available in
many different mechanical formats, including single wire armoured and irradiated
PVC sheathed cable for direct burial. The cables available range in performance from
relatively poor to excellent. Performance is normally measured in high frequency loss
per 100 metres. The lower this loss figure, the less the distortion to the video signal.
Therefore, higher quality cables should be used when transmitting the signal over
long distances. Another factor that should be considered carefully when selecting
coaxial cables is the quality of the cable screen. This, as its name suggests, provides
protection from interference for the centre core, as once interference enters the cable it
is almost impossible to remove.

Diagram 15.2 Unbalanced Cable


Balanced (Twisted Pair) Cables
In a twisted pair cable, each pair of conductors is twisted with a slow twist of about
one to two twists per metre. These cables are made in many different impedances, 100
to 150 ohms being the most common. Balanced cables have been used for many years
in the
largest cable networks in the world. Where the circumstances demand, these have
advantages over coaxial cables of similar size. Twisted pair cables are frequently used
where there would be an unacceptable loss due to a long run of coaxial cable.

Diagram 15.3 Balanced Cable


The main advantages are:
1. The ability to reject unwanted interference.
2. Lower losses at high frequencies per unit length.
3. Smaller size.
4. Availability of multipair cables.
5. Lower cost.
The advantages must be considered in relation to the cost of the equipment required
for this type of transmission. A launch amplifier to convert the video signal is needed
at the camera end and an equalising amplifier to reconstruct the signal at the control
end.
Impedance
It is extremely important that the impedances of the signal source, cable, and load are
all equal. Any mismatch in these will produce unpleasant and unacceptable effects in
the displayed picture. These effects can include the production of ghost images and
ringing on sharp edges, also the loss or increase in a discrete section of the frequency
band within the video signal.
The impedance of a cable is primarily determined by its physical construction, the
thickness of the conductors and the spacing between them being the most important
factors. The materials used as insulators within the cable also affect this characteristic.
Although the signal currents are very low, the sizes of the conductors within the cable
are very important. The higher frequency components of the video signal travel only
in the surface layer of the conductors.

Diagram 15.4 Transmission Impedance.


For maximum power transfer, the load, cable and source impedance must be equal. If
there is any mismatch, some of the signal will not be absorbed by the load. Instead, it
will be reflected back along the cable to produce what is commonly known as a ghost
image.

16. TRANSMISSION OF VIDEO SIGNALS BY REMOTE METHODS
Introduction
The previous chapter dealt with the transmission of video signals by various types of
cable. There are many instances where it is not possible or desirable to use cable and
other methods need to be employed. These can be:

Infrared beams.

Microwave.

Public telephone networks.

High Speed Data Links

Local Area Networks (LAN)

Wide Area Networks (WAN)

Optical fibre cables.

The choice will depend on the final system requirements, frequently coupled with the
differing costs of the various options. In addition, the level of security and continuity
of use will have a bearing on the final selection.

With all these systems, it is imperative to study the supplier's information extremely
carefully. For instance, there was a slow scan system that described the picture update
time as 20 seconds for a full picture, 5 seconds for a quad display. What this really
meant was that in quad display one picture was updated every 5 seconds; it still took
20 seconds until the first picture was refreshed. Wherever possible, see a
demonstration of a system on a customer's premises, and look carefully at the
resolution versus the refresh time.

Free space transmission


There are frequent situations where there is no possibility of making a direct cable
connection between the camera(s) and the control position. This particularly applies
when real time continuous monitoring is required. A situation needing this approach
would be where, for instance, there is a main road between the cameras and the
control. Another situation would be where the two ends of the system are separated
by a wide river, such as in London. It could also be a large industrial site where the
cost of cabling would be prohibitive.

Free space transmission consists of a transmitter at the camera end and a receiver at
the control end. All free space transmission systems require a direct line of sight
between the transmitter and the receiver. Normally there is one transmitter and one
receiver for each camera. A typical application is shown in Diagram 16.1.
All types of free space transmission equipment must be very rigidly mounted. This is
especially important if the transmitters or receivers are to be mounted on masts or
poles.

The distance between the two locations is critical to the choice of equipment. The
manufacturer's specification must always be respected; performance can deteriorate
rapidly if their recommendations are exceeded. A 10% increase in distance could
result in a 30% fall-off in performance.

Diagram 16. 1 Application For Free Space Link


There will be situations where there are several units requiring surveillance all
controlled from a central source. Great care should be exercised in positioning
receivers so that there is suitable separation between the beams from transmitters.

Diagram 16. 2 Unsuitable Location of Receivers


If the example site in Diagram 16.1 required a second camera to be incorporated, this
would need another transmitter and receiver. If they were simply added as shown in
Diagram 16.2 there is a strong probability that the beams would overlap at the
receivers. This would cause problems with the reception of the separate video signals.
There are ways in which different systems can overcome this. However, a little
thought can prevent the need for special considerations. An alternative method of
siting the receivers is shown in Diagram 16.3.
If the receivers are located as shown there will be no chance of cross interference
between the two signals.

Diagram 16. 3 Preferred Method of Locating Receivers


There is one very important point to consider when setting up any type of free space
transmission system: the manufacturer's recommended test equipment must be used
to align the pairs of units. If the width of the beam is only 1 degree, this is a width of
over 17 metres at a distance of one kilometre. Many installers have mistakenly
thought that since the receiver is within this band the reception will be satisfactory.
Most systems will be aligned on a clear day, when it is not raining and during
daylight, so the reception will seem fine; a slight deterioration in the weather could
reduce the performance considerably after the engineers have left site. Irrespective of
the beam width, it should be emphasised that the main signal strength is in the centre
of the beam. Only the correct test equipment will ensure that the system is set up to its
optimum for all conditions.
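The 17-metre figure quoted above comes from simple trigonometry; a minimal sketch, assuming the 1 degree figure is the full beam width.

    import math

    # Approximate width of a free-space beam at a given distance.
    def beam_spread_m(distance_m, beam_width_degrees):
        return distance_m * math.tan(math.radians(beam_width_degrees))

    print(f"{beam_spread_m(1000, 1.0):.1f} m")   # roughly 17.5 m at one kilometre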

Infrared Beams
With this type of system, the video is superimposed onto an infrared beam by a
transmitter. The beam is aligned to strike a receiver, where the signal is recovered as a
conventional composite video signal. The infrared beam is at a wavelength of 860
nanometres which, from Chapter 14, can be seen to be beyond the visible part of the
spectrum. The system may be configured as a full duplex set-up, in which case it is
possible to transmit telemetry control signals in the reverse direction to control pan,
tilt units. The system can also carry speech in both directions. The actual configuration
must be specified at the time of obtaining quotations or ordering.

The performance of infrared beams can be affected by weather and environmental
conditions. It is important to check the capability of the link with the manufacturer if
an absolute guarantee of reception in all conditions is essential.
The infrared beam is completely harmless and requires no licence or operating
restrictions. Selecting the correct beam power for a given range requires some
consideration; there is always a trade-off between range and quality. One
manufacturer, for example, gives the guidelines shown in Table 16.1, where the range
is given in metres for each requirement and model.
Requirement            Model A   Model B   Model C   Model D   Model E
(1) Economy quality      190       710      1220      2350      3100
(2) Full quality         120       320       620      1200      2100
(3) High penetration      30       160       300       750      1200
(4) High resolution       80       250       390       950      1820
(3) & (4) together         -       120       250       600       900

Table 16.1 Range of infrared links (metres)


This table illustrates the problem of selecting the most appropriate model for a
particular application. For instance, the model specified as having a range of 3,100
metres only provides economy quality at this range. If high resolution and high
penetration are required then the range drops dramatically to only 900 metres.
Without this information, it is very difficult for a customer to compare competing
quotations all specifying infrared links. There is a significant price jump from one
model to the next.
It can also be seen from the table that infrared links are susceptible to poor weather
conditions. It is important therefore that both the installer and the customer are aware
of the limitations of this type of link. One argument is that if the cameras are installed
outdoors then by the time the link has failed due to bad weather the camera picture
has also failed. This is a doubtful basis on which to specify a system. Two factors
have caused problems in the past with this type of link; both were intermittent, and it
was difficult to establish the cause of the lost pictures in apparently good weather
conditions. One was a steam vent outlet that caused steam to carry
through the beam in certain wind conditions. The other was smoke from a chimney
stack that obscured the beams, again only in certain wind conditions. Neither of these
effects was within the field of view of the cameras.
Another important point is that the beam width of infrared links is very small, in
order to ensure that enough of the infrared beam falls on the receiver to give a good
signal to noise ratio. Typically, at a distance of 1500 m the spot of infrared light shone
onto the receiver may only be a couple of metres in diameter. Consequently, over
distances above 500 m, minute changes in the position of the transmitter may cause
the beam to be thrown completely off the receiver and transmission will be lost. This
is particularly important if the transmitter is mounted on a steel fabricated building
such as a warehouse or hangar. Steel buildings expand and contract with temperature
change, and these tiny movements may be enough to adversely affect the position of a
transmitter mounted on such a building.

Infrared links do, however, offer a cost-effective solution to free space transmission,
provided their possible limitations are fully understood. There is no requirement for
any form of licence for an infrared link.

Microwave Transmission
Microwave links carry the video and telemetry along a link from a transmitter to a
receiver. They are capable of much greater transmission distances, from 1 kilometre
to 80 kilometres. The frequencies that can be used in the UK are allocated by the
Radiocommunications Agency, which also determines the maximum power that may
be transmitted; this limits the operational distance. Microwave links are largely
unaffected by weather conditions; on the other hand, they are more expensive than
infrared links. Similar comments apply in that mountings must be rigid and the correct
test equipment must be used for installation. The beam width is wider than that of
infrared systems, so building movement is not normally a problem.

Duplex systems can be provided where it is required to operate telemetry controls in
the reverse direction. This must be specified at the time of quotation or order.
The requirement for licences should be checked with the manufacturer to establish the
total cost of a system and any recurring costs. Investigation will be needed if
microwave links are to be used close to other microwave equipment, such as radar at
airports, as it is vital that no interference affects the performance of either system.

17. TRANSMISSION OF VIDEO SIGNALS BY FIBRE OPTICS

Principles of Fibre Optic Transmission


Most people are familiar with the everyday use of light, X-rays, radio waves,
microwaves and radar. All of these are examples of electromagnetic radiation, which
is characterised by a radiation wavelength or oscillation frequency.
Diagram 17.1 shows the electromagnetic spectrum with application areas identified.
The 400 - 750 nm region of the spectrum is the region of visible light; this region is
expanded in the lower part. The area of interest for fibre optic transmission extends
from the red region of the spectrum out into the wavelengths much longer than those
visible to the human eye, the infrared. Specific wavelengths used have been driven by
the requirements of the fibre technology and by source and detector technologies.
Particular wavelengths used are nominally 780nm, 850nm, 1310nm, and 1550nm.

Diagram 17. 1 The electromagnetic spectrum

The different parts of the spectrum have previously been described in terms of the
wavelength. An alternative measurement is the frequency of the part being
considered. Frequency is the number of crests of a wave that move past a given point
in a given unit of time. The most common unit of frequency is the hertz (Hz),
corresponding to one cycle per second. The frequency of a wave can be calculated by
dividing the speed of the wave by the wavelength. Thus, in the electromagnetic
spectrum, the wavelengths decrease as the frequencies increase, and vice versa.
For example, for infrared light with a wavelength of 850 nm, the equivalent frequency
is 3.5 x 10^14 Hz.
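That conversion can be checked in one line; a minimal sketch using the approximate speed of light in a vacuum.

    # Frequency from wavelength: f = c / wavelength.
    SPEED_OF_LIGHT = 3.0e8                   # metres per second (approximate)

    def frequency_hz(wavelength_nm):
        return SPEED_OF_LIGHT / (wavelength_nm * 1e-9)

    print(f"{frequency_hz(850):.2e} Hz")     # about 3.5e14 Hz for 850 nm infrared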

Diagram 17. 2 Bandwidth at Different Frequencies


Different carrier frequencies support different bandwidths: the higher the frequency,
the wider the bandwidth, and the wider the bandwidth the more information can be
carried. Frequencies above the visible part of the spectrum offer a very wide
bandwidth, and therefore provide more space for the multiplicity of TV signals and
the reams of data that need to be transmitted.

Transmission by Light
In fibre optics, messages, whether data or video, are first converted from electrical
impulses into pulses of light. This function is performed by a minute device that
incorporates a laser chip or an LED (light emitting diode). The infrared light is
switched on and off at incredibly high speeds, thereby creating the stream of light
pulses. These are then focussed onto the end of the optical fibre. The lightwaves travel
along the fibre to the receiving end. Here the light pulses are converted back into
electrical pulses by a photodiode or avalanche photodiode.

Diagram 17. 3 Basics of Fibre Optic Transmission


Optical Fibre Structure and Light Guiding
An optical fibre is a complex strand of silica glass. A cross section of a typical fibre is
shown in Diagram 17.4. Very small units of length are measured in microns; one
micron is one millionth of a metre, so 1 micron is 0.001 mm and 125 microns is
0.125 mm.

Diagram 17. 4 Construction of single optical fibre


The optical fibre is made from a rod of highly purified silica called a pre-form. The
pre-form is heated and drawn out into a thin fibre using highly specialised and
accurate equipment. As the fibre is drawn, it is coated with a protective polymer layer
known as the primary coating. At this stage the coated fibre is approximately 0.25 mm
diameter and is flexible enough to be coiled on drums with a bend radius of not less
than 5 cm. In most fibres in use today the diameter of the glass fibre itself is 125
microns/ 0.125 mm. This primary coated fibre is then used as the building block for
assembly into optical fibre cable that provides the ruggedisation needed for everyday
use.
The optical fibre itself has an internal structure, with the refractive index varying
across its diameter; all fibres have a lower refractive index at the surface than at the
centre. This variation in refractive index across the fibre diameter is the key to the
transmission of light by the fibre. Remembering school physics experiments: when
light passes from a high to a low refractive index medium, e.g. glass to air, some of
the light ray is reflected and some is refracted out of the high refractive index
medium. As the angle of the light ray to the surface gets shallower, there comes a
point where all of the light is reflected and no light is refracted out of the medium.
This angle (measured to the normal) is called the critical angle, above which all light
is reflected; optical fibre transmission uses this effect to transmit light along the fibre.

In Diagram 17.5, the optical fibre structure is assumed to consist of a high refractive
index glass core surrounded by a low refractive index glass cladding. Light rays from
a light source are incident on the fibre end and enter the fibre core over a range of
incident angles. Once in the fibre these rays can be considered to travel in straight
lines until they meet a refractive index discontinuity. At this point, some of the ray is
reflected back into the fibre core and the rest is refracted out of the core into the
cladding glass. The reflected ray then transits the fibre core until another reflection
occurs, while the refracted ray hits the cladding glass/protective polymer interface and
is absorbed or dispersed. As we are concerned with light propagation down the length
of the fibre, it is clear that the reflected ray is the one required for signal transmission,
with the refracted ray simply reducing the transmitted light signal intensity.

Diagram 17. 5 Step index multi-mode fibre


Consider a continuum of light rays in the fibre core covering all possible angles of incidence at the core/cladding discontinuity: all light rays with an angle of incidence above the critical angle will be reflected back into the fibre core. This is known as total internal reflection. Those rays with an angle of incidence below the critical angle will be partly reflected and partly refracted in the manner explained above. The light rays transit along the fibre by being reflected at each refractive index change that they encounter; in effect the rays bounce off the sides of the fibre core.
After multiple reflections the rays with angles of incidence below the critical angle will have been reduced in intensity by refraction losses and do not contribute to the light, and hence signal, transmission process. In contrast, the rays with angles of incidence above the critical angle are not reduced in intensity by refraction, and it is these rays that enable fibre optic transmission to work. As the angle of incidence is measured with respect to the normal to the relevant surface, it can be seen that the fibre can be bent and twisted and still allow light to be transmitted along its length. This ability of optical fibre to guide light along a non-linear path, just like an electrical conductor, is essential for its use in real world applications.
This range of rays may be traced back to their original coupling into the fibre core, and we find that the transmitted rays are contained within a cone of angles as shown in diagram 17.5. In defining optical fibre parameters this acceptance cone is characterised by the cone half angle, and the sine of this half angle is known as the fibre Numerical Aperture (NA).
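The relationship between the refractive indices, the critical angle and the acceptance cone can be checked with a short calculation. The sketch below is purely illustrative and is not taken from the original text; the refractive index values are assumed figures typical of a step index multi-mode fibre.

    import math

    # Assumed example refractive indices for a step index multi-mode fibre
    n_core = 1.48      # core (higher refractive index)
    n_cladding = 1.46  # cladding (lower refractive index)

    # Critical angle at the core/cladding boundary, measured from the normal
    critical_angle = math.degrees(math.asin(n_cladding / n_core))

    # Numerical aperture and the half angle of the acceptance cone in air
    numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)
    acceptance_half_angle = math.degrees(math.asin(numerical_aperture))

    print(f"Critical angle:        {critical_angle:.1f} degrees")
    print(f"Numerical aperture:    {numerical_aperture:.3f}")
    print(f"Acceptance half angle: {acceptance_half_angle:.1f} degrees")

With these assumed values the critical angle is about 80 degrees and the acceptance half angle about 14 degrees, i.e. only rays entering within a fairly narrow cone are guided along the fibre.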


18.

VIDEO MOTION DETECTION

Introduction
There are many methods of detecting intruders into premises. These include such
systems as:

Intruder alarms.

Fence mounted detectors.

Buried vibration or electric field devices.

Active infrared devices.

Passive infrared devices.

Microwave devices.

Video motion detection devices.

This chapter is concerned with Video Motion Detection (VMD) devices. These may be used within or outside the premises and, besides detecting intruders, can form part of a building management system. VMD may often be used either as a stand-alone system or integrated with other detection systems. In an ideal world, detection devices would give no false alarms and would detect 100% of genuine events. Unfortunately, this is not an ideal world, and a certain amount of compromise is necessary. This compromise must be reduced to the most effective and acceptable level to achieve the system objectives.
There are really only two types of alarm: genuine alarms and false alarms. Sometimes mention is made of spurious alarms, unexplained alarms and system failures. These must be regarded simply as false alarms, because the system has alarmed for no apparent reason. A genuine alarm is one created by deliberate nefarious human action, e.g. by movement of a person or vehicle into the detection field or disturbance of the alarm system. A false alarm is one that has no deliberate human input, such as those caused by animals, birds or any malfunction of equipment. One measure of the efficiency of a system is the False Alarm Rate (FAR). This is the number of false alarms over a given time scale, e.g. five per day. The acceptable FAR will depend on many local site considerations; the objective is to reduce it to the minimum without missing any real alarms. Another measure is the Probability of Detection (PD), which is the ratio of detections to the number of attempts in controlled tests. The ideal PD is 100%.
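As a simple worked illustration (the figures below are invented, not taken from any real trial), both measures can be calculated directly:

    # Assumed example figures from a controlled trial of a detection system
    false_alarms = 15
    observation_days = 3
    detections = 48
    attempts = 50

    far = false_alarms / observation_days      # False Alarm Rate: false alarms per day
    pd = 100.0 * detections / attempts         # Probability of Detection, per cent

    print(f"FAR: {far:.1f} false alarms per day")   # 5.0 per day
    print(f"PD:  {pd:.0f}%")                        # 96%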
Uses of VMD
The primary function of a VMD system is to relieve CCTV operators of the stress of monitoring one or many screens of information that may not change for long periods. The VMD system monitors all the cameras in its system and reacts only when there is suspicious activity in one of the scenes. During the long periods of inactivity the operator can continue with other tasks, secure in the knowledge that when something occurs the system will immediately respond. Even a moderate sized system, with eight cameras, would prove impossible for an operator to monitor: eight monitors could not be viewed with any degree of concentration for more than about twenty minutes. If the monitors were set to sequence, then activity on seven cameras is lost for most of the time and would be totally ineffective for detecting intruders. With more cameras in a system, the task of detecting intruders becomes impossible and technology must take the strain.
The idea of VMD systems is that the processor continuously monitors all the cameras in the system. During this time, the operator may select or sequence cameras using the conventional switching system. The system may include an additional monitor connected to the VMD system that will normally show a blank screen. When activity occurs in any camera that the VMD system interprets as an intruder, the alarmed camera is immediately switched to the blank monitor and a warning sounded to alert the operator. The operator's attention is therefore immediately focused on the camera covering the alarm. The detection of an intruder can also set off further events, such as setting a video recorder to real-time recording, setting a matrix switching system to sequence through a specific series of cameras, etc. The operator can then analyse the scene and take the appropriate course of action.
An intruder could generate an alarm and be out of view of the camera before it is
displayed. The operator would therefore see just a blank screen and be unsure about
what to do next. To overcome this, at the time of detection, many VMD systems will
capture an alarm image sequence containing one or more freeze frames. This may be
displayed as the first view on the previously blank screen. The operator may then
examine the scene at the instant of alarm in more detail.
Principle of operation
In the descriptions that follow reference is made to a frame of video. Some systems
use frames and some use fields, some systems can select between the two. This also
applies to storage devices. For ease of description, the term frame is used for
consistency but the actual method used should be checked for the system being
considered.
Video Motion Detection is an electronic method of detecting a change in the field of
view of a camera. In its simplest form, this is achieved by storing one frame of the
video information and then comparing the next frame with this to decide whether
there has been a change. The change detected would be a difference in the video
voltage, indicating a change of brightness within the scene. This would be initially
ignored as an alarm until a further frame confirmed the change, or not. If confirmed as
a change of brightness in the scene, then an alarm would be generated. This could
cause a contact to close and activate some warning device such as a buzzer, or cause
the switcher to select the camera that detected the motion. The sampling process may
take somewhere between one fiftieth of a second and one second to detect a change,
depending on the method of sampling. This simple detector could be used in an
environment where all conditions were absolutely stable and the only possible change
in brightness would be due to an intruder. However, the intruder could be a mouse or
a person. The system couldn't differentiate between the two. In addition, by the time
the alarm is displayed on a monitor, the cause of it could be out of view. If the scene
were being continuously recorded, the event could be reviewed but this may be too
late to take effective action.

Diagram 18.1 Principles of video motion detection
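The frame comparison described above can be sketched in a few lines of code. This is a minimal illustration only, assuming 8-bit greyscale frames held as NumPy arrays; the threshold values are invented and it does not represent any particular manufacturer's VMD processor.

    import numpy as np

    def frame_changed(previous, current, pixel_threshold=25, changed_fraction=0.01):
        """Compare two greyscale frames and report whether the scene has changed.

        previous, current -- 2-D uint8 arrays of the same shape (one frame each)
        pixel_threshold   -- brightness change (grey levels) needed to count a pixel as changed
        changed_fraction  -- fraction of changed pixels needed before a change is declared
        """
        difference = np.abs(current.astype(np.int16) - previous.astype(np.int16))
        changed_pixels = difference > pixel_threshold
        return changed_pixels.mean() > changed_fraction

    # A change is only treated as an alarm if a further frame confirms it
    def confirmed_alarm(stored_frame, next_frame, further_frame):
        return frame_changed(stored_frame, next_frame) and frame_changed(stored_frame, further_frame)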


Detection Cells
For the purposes of this chapter the following definitions are used, although there are no standard terms at present. A CELL is a single detection block that is analysed electronically for brightness changes. A cell may be a single pixel, a block of pixels, or the whole screen. A ZONE is a group of cells that have been defined as an active area. The exact meaning of zone must be checked against the manufacturer's specification before assuming what area is covered and to what degree of definition.
This method of comparing complete frames therefore has severe drawbacks. The next development was to divide the picture into a number of separate areas or cells. This was refined by being able to switch cells on or off to define the area of the scene that is of interest. Diagram 18.2 illustrates a VMD system that divides the picture into cells, and how only a selected part of the scene can be set for motion detection. The shaded areas are inactive and the clear parts are the active cells. In this case, only activity in the area of the car will create an alarm. The cells are only displayed as such during setting up the system. Once the set-up mode is exited, the complete picture is displayed as normal and it is not possible to see any of the cells.
The sensitivity of the cells can be adjusted to take account of local conditions. This control, though, is applied across all cells to the same extent. Some systems can be pre-set to different sensitivity levels, for instance to make allowance for day or night operation when the lighting levels may be different.


Diagram 18.2 Frame Divided Into Cells


This type of system would not be suitable for the outdoor scene shown, because external light conditions change frequently: clouds moving across the sky would cause changes in brightness and create alarms. This type is used in simple indoor situations, where the lighting conditions are constant and anything breaking the cells can be considered an alarm. The set-up can be refined to reduce unwanted activations. For instance, there may be two doors in the scene, only one of which needs to be monitored; in this case the active part of the scene can be adjusted accordingly. Note that with this type of system a change in any one of the active cells will create an alarm.
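Continuing the sketch from the previous section, the frame can be divided into a grid of cells with an on/off mask so that only selected areas of the scene are analysed. Again this is an illustrative sketch with assumed grid sizes and thresholds, not a manufacturer's algorithm.

    import numpy as np

    def cell_changes(previous, current, rows=8, cols=8, pixel_threshold=25):
        """Return a (rows x cols) array of the fraction of changed pixels in each cell."""
        changed = np.abs(current.astype(np.int16) - previous.astype(np.int16)) > pixel_threshold
        height, width = changed.shape
        cell_h, cell_w = height // rows, width // cols
        fractions = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                block = changed[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
                fractions[r, c] = block.mean()
        return fractions

    def zone_alarm(previous, current, active_mask, cell_sensitivity=0.2):
        """Alarm if any ACTIVE cell (True in active_mask) changes by more than the sensitivity."""
        fractions = cell_changes(previous, current)
        return bool(np.any((fractions > cell_sensitivity) & active_mask))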

Intelligent Cells
The next move towards reducing false alarms is to build in the computing power to
process each cell individually and create algorithms that will intelligently analyse
certain situations. In this way, decisions can be made according to the direction of
movement. For instance, one cell may be declared as a pre-alarm cell and another as a
detection cell. Pre-alarm cells do not create alarms. Instead, they instruct the system to
associate detection in this area with detection in another. Activation of detection cells
alone will not create an alarm. A combination of successive detection in adjacent cells
will trigger a logical action dependent on the program. For example, if a detection cell
is activated after a pre-alarm cell an alarm will be created. However, movement in the
reverse direction, detection before pre-alarm, will not create an alarm. In this way, all
persons leaving a building will not create an alarm but persons approaching it will do
so. Also, persons moving down the right of the perimeter will not create an alarm.

Cell Count
Another factor that could be calculated in the processor is the number of cells caused
to change simultaneously. This would then be used as a further part of the equation, so
that an alarm would only be created if more than x cells change contrast
simultaneously. This brings in attendant problems in some situations. Three dogs in
the scene could activate the same number of cells as one person. A major problem
with cell count is that of the different number of cells a certain size of object occupies
in relation to the position of the camera.

Diagram 18.3 Intelligent cells

Diagram 18.4 Problems of Perspective


Diagram 18.4 shows that a person in the foreground occupies eight cells while one in
the background is less than half a cell. Similarly, a cat close to the camera would
activate far more cells than a person in the background. Simple cell count systems
may offer some improvement in false alarms but do not offer accurate size
discrimination.
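The effect of perspective on cell occupancy can be estimated with a simple pinhole-camera calculation. The sensor size, lens and cell grid below are assumed example values rather than figures from the text.

    def cells_occupied(object_height_m, distance_m,
                       focal_length_mm=8.0, sensor_height_mm=4.8, cells_vertically=8):
        """Estimate how many vertical cells a subject spans (pinhole camera model)."""
        image_height_mm = focal_length_mm * (object_height_m * 1000) / (distance_m * 1000)
        fraction_of_screen = min(image_height_mm / sensor_height_mm, 1.0)
        return fraction_of_screen * cells_vertically

    for distance in (5, 20, 60):
        print(f"Person 1.7 m tall at {distance} m: about {cells_occupied(1.7, distance):.1f} cells high")

With these assumed figures a person fills around four to five cells at 5 m but less than half a cell at 60 m, which is why a simple cell count cannot discriminate size reliably.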
Contrast Levels
It was stated that the detection of movement was obtained by measuring the changes
in video level (brightness) between successive frames. This is fine if a person in a
dark suit passes through a very bright scene. The change in brightness will be
dramatic and immediately evident to the processor. However, a person in a grey suit
in a grey scene, with little contrast, will cause only a small change in the brightness
levels. If the sensitivity of the system were set to detect the latter event, it would be
over responsive to insignificant changes in a bright scene. This is less important for
indoor systems, but a significant factor in external systems where the light changes
frequently and greatly. In addition, where the object is smaller than the cell, the
brightness change will be a function of both the size of the object and the contrast
between the object and the background. This becomes especially critical when
detecting a person in the background when they may be only 10% of the screen
height. This can be as little as 0.25% of the screen area. If the person is substantially smaller than the cell, the sensitivity would have to be set very high to detect this change, but it would then cause many false alarms from objects that provide greater contrast even though they are much smaller than a person.
Another problem with measuring brightness using large cells is that a small dark
object such as a cat could cause the same brightness change as a large low contrast
object such as a person.
Camera Shake
In external systems, cameras are mounted on brackets or towers. It is often
impractical to ensure that they are absolutely rigid with no movement. The camera
would only have to move a small amount, such as can happen in the wind, to cause a
global change and register an alarm.
Changes In Light Levels
By processing separate cells and having the power to define better algorithms, other
problems can be overcome. For instance, light changes may be ignored if all cells are
affected to the same extent. Another method to allow for global light changes is to
make one reference cell in which movement is unlikely. The other cells are then
referenced to this to compensate for light levels. This latter method can impose
limitations on the system set-up and is now infrequently used.
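Continuing the earlier cell sketch, a global light change can be suppressed by removing the change that is common to all cells before testing for an alarm. This is a minimal illustration of the idea, with an assumed sensitivity value.

    import numpy as np

    def compensated_alarm(cell_fractions, sensitivity=0.2):
        """Ignore global light changes by removing the change common to all cells.

        cell_fractions -- 2-D array of per-cell change fractions (see cell_changes above)
        """
        global_change = np.median(cell_fractions)        # change affecting the whole scene
        local_change = cell_fractions - global_change    # what remains is local movement
        return bool(np.any(local_change > sensitivity))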
Cell Sensitivity
All the systems described so far have only been able to set the overall sensitivity of all
cells. This renders them quite unsuitable for outdoor use. The next need therefore is to
be able to adjust the sensitivity of each cell individually. This obviously requires
much more computing power but is an absolute prerequisite for any VMD to be used
externally.
Processing Speed
Most simple VMD systems have one processor irrespective of the number of cameras. If it requires three frames to analyse a scene, then the processing time for one camera will be about 0.12 seconds. This must be multiplied by the number of cameras in the system; therefore, with eight cameras the processing time for each will be about one second. For example, a 1/2" camera with a 25 mm lens has a width of view of about 5 m at 20 m from the camera. A person could run across this field of view in less than the processing time and not be detected.
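The arithmetic behind this example can be checked directly; the frame rate, sensor width and running speed below are assumed values.

    frames_needed = 3          # frames analysed per scene
    frame_period_s = 1 / 25    # CCIR frame rate: 25 frames per second
    cameras = 8

    per_camera_time_s = frames_needed * frame_period_s    # about 0.12 s
    revisit_time_s = per_camera_time_s * cameras           # about 0.96 s between looks at one camera

    sensor_width_mm = 6.4      # 1/2" format sensor
    focal_length_mm = 25.0
    distance_m = 20.0
    field_width_m = distance_m * sensor_width_mm / focal_length_mm   # about 5.1 m

    running_speed_m_s = 6.0    # assumed sprinting intruder
    crossing_time_s = field_width_m / running_speed_m_s              # about 0.85 s

    print(f"Revisit time per camera:         {revisit_time_s:.2f} s")
    print(f"Time to cross the field of view: {crossing_time_s:.2f} s")
    # The intruder can cross the scene before the system next looks at that camera.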
Limitations of Simple VMD Systems
The previous examples have served to show the principles of simple video motion
detectors. Variations of these types are still available but their use is limited, and they
should be used with great caution in anything but the most basic applications.
However, they do have uses and can provide a very cost-effective method of motion
detection when the situation is appropriate.
The limitations of the types described for demanding external situations are as
follows.

Will not cope with moderate changes in light levels.

Sporadic generation of alarms in high contrast scenes.

Will not cope with changing weather conditions.

Lack of size discrimination means compromise in setting up.

Non-uniform sensitivity with range.

Will not cope with size variation due to perspective.

Slow processing speed can miss moving action.

Inability to discriminate between small, high-contrast dark objects and large, low-contrast objects.

Prone to false alarm due to camera shake.

Cell measurements prevent accurate area discrimination.

Restricted to small areas of view.

Unlikely to detect a person at 10% of screen height.

Only simple algorithms can be computed.

Cannot distinguish between a person moving in a line and a waving object.

Single processor increases time between frame comparisons.


19.

INTERFACING WITH OTHER SYSTEMS

Introduction
CCTV Systems are rarely used as the single means of security at any site. This is a
wise approach, as CCTV cannot on its own provide total security for any location.
There is very little point in having a system that enables intruders to be observed or
miscreants identified if this does not actually prevent loss or damage to the property
of the owner of the site. At the very minimum there must be good mechanical security
with good quality doors, locks, fences and other barriers to physically prevent
undesirables from gaining access to secure areas.
For insurance purposes, there must nearly always be some form of intruder detection
and alarm system. With the growth and reduction in relative cost of telephone lines
these intruder alarms are normally connected to some kind of central monitoring
facility, called a central station, where responses to alarms are co-ordinated and from
where the Police or other security agencies are summoned. Intruder alarm systems
form the backbone of electronic security, from the smallest retail site to the largest
industrial, commercial or governmental establishments.
A second mandatory electronic system present on sites is the fire alarm system. Fire
alarm systems are installed for both insurance and building regulations purposes.
Increasing use of electronics in the controls of these systems has meant that they have
become more sophisticated and more reliable while at the same time offering many
more features.
Having a site that is safe and secure outside business hours is vital. However, it is of
little benefit during working hours, when access control to a building or site may be
relaxed to enable the employer's staff to come and go. Thieves or vandals can also
come and go at will. It is for this reason that access control systems have started to
become increasingly common. The simplest form of access control is a security guard
checking the identification passes of those who are entering and leaving the site. In
the highest security sites, this method is still used, due to the efficacy of human beings
in recognising people and determining whether they should be allowed entry.


However, due to the cost of manned guarding and the dramatic reduction in the real
cost of microprocessor based electronic systems over the last few years electronic
access control systems are becoming more common. In these systems, the individuals
who are permitted to enter various areas of a site carry some kind of token that is
presented to an electronic reader. The control electronics then identifies this token and
looks into electronic memory devices. If the individual's token is valid for that entry
point then an electric lock will be released for a short time to allow entry. Otherwise,
access will be denied and an alarm message may be displayed on the system control
terminal. The technologies available for the tokens are myriad; from simple magnetic
stripe cards similar to bankers' cards through to specialised high security cards,
special keys, keypads, and even palm print readers.
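The decision made by the control electronics can be pictured as a simple lookup; the token identifiers and area names below are invented for illustration only.

    # Assumed example of the stored authorisation data
    authorised_tokens = {
        "card-0417": {"gatehouse", "warehouse"},
        "card-0552": {"gatehouse"},
    }

    def request_entry(token_id, entry_point):
        """Release the lock only if the presented token is valid for this entry point."""
        areas = authorised_tokens.get(token_id)
        if areas is not None and entry_point in areas:
            return "release lock for a short time"
        return "deny access and display an alarm message at the control terminal"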
On large sites there may be a very long length of fencing which can be a problem to
protect at all times, within the limits that are available with manned guarding. It is,
however, important to protect this perimeter in commercial and industrial sites to
prevent theft and vandalism, and in governmental sites to meet these as well as
terrorist and other threats. As with access control the best form of perimeter protection
is manned guard posts. This is, however, very expensive and consequently this
technique tends to be reserved for the highest security sites.
Due to this fact, various electronic devices have been developed to detect intruders
crossing the perimeter. One group is seismic wires installed in the fence material,
which detect cutting and climbing of the fence structure. Another group of seismic
detectors are buried directly in the ground and detect the footsteps of intruders
crossing the perimeter, the alarms being signalled by cable or radio link. Long range
passive infrared detectors are also used. These sense the body heat of intruders
crossing the perimeter. Finally, video motion detection as described in the previous
chapter is used to sense intruders. In high security sites, these devices are often used
in combination to minimise false alarms while maximising detection. On such sites,
regular perimeter patrols give the highest level of security available.
More recently, the control of environmental and other systems around a site has been
centralised into systems using personal computers. These Building Management
Systems (BMS) control heating, lighting and air conditioning systems while also
providing alarms on the failure of heating boilers, excessive sump water levels, etc.

The display of all these individual systems in front of an operator can be very
confusing, requiring a high level of training for the staff to operate the systems
individually. The level of work for the system operator when there are multiple alarms
can also be excessive, as several different monitors and control panels must be used. A
better solution is to integrate these different systems into a central display station,
such as a siteplan graphics system described in Chapter 12. This central point then
gives the operator a single screen on which to observe and acknowledge any event in
the system; using a computer mouse or touch screen the CCTV may be controlled at
the same screen.
The purpose of this chapter is to describe the ways in which these other systems may
be interfaced with the CCTV system to assemble an integrated security management
system.


20.

SURVEYING FOR CCTV

Introduction
This and the next two chapters can be interpreted from two points of view. First, the
installing company when designing a system. Second, with regard to the potential
customer what to expect from a well-presented proposal.
So far this book has defined all the elements of a CCTV system and provided
guidelines on their operation and limitations. So now comes the time to visit a site and
design a system. This chapter cannot give detailed instruction on how to do this, just
as a book on mechanical engineering cannot show a person how to design a bridge.


However, it does illustrate a structured approach to producing a system design that
will ensure a satisfied customer. This chapter is intended for those situations where a
company is invited to make a system proposal from scratch. The writing of
specifications is covered in Chapter 21.

Obtaining the Brief


The initial meeting with the prospective customer is the most important link in the
chain to providing a final acceptable solution. It is essential at this stage to find out
exactly what the user is expecting to achieve. It is also useful at this preliminary
meeting to explain the relationship between general surveillance and identification,
which is that clear identification is a trade off against the width of the area in the
scene. To start, try to obtain a definition of the fundamental objective of the system.
This could be along the lines of the following examples.
1. To obtain clear identification of every person passing down the corridor to the
wages office.
2. To view the general car parking areas and alert security guards if there are
persons acting suspiciously.
3. To identify the numberplate of every vehicle passing the inward barrier.
4. To cover the entire perimeter of the site and be alerted automatically in the
event of an intruder.
5. To act as confirmation of an alarm created by an intruder detection system.
6. To provide general views of the site and identification of all persons at front
and rear entrances.
Having established the prime need of the system, use something like the following
checklist to establish the basic requirements and environment. The checklists given in
this chapter are intended as a guide only. Each company should create their own
according to the general nature of its business.
Requirement (with a Notes column alongside for comments against each item):

Only a simple deterrent.
A general view of what is happening in specific areas.
A detailed view of what is happening in specific areas.
Daytime only use.
Nighttime only use.
Day and night use.
The system is for use indoors only.
The system is for outdoor use only.
The system is for both indoor and outdoor use.
Is the system to be colour, monochrome, or a mixture?
To be integrated with other systems?
Will full control of the system be on the site?
Is remote monitoring required, i.e. central station?
Is continuous recording of all areas necessary?
Automatic activation of aspects of the system is required in the event of an alarm (VCR switched to real time, a camera sent to pre-set positions, etc.).
Adequate lighting is available.
Supplementary lighting is to be provided.
Mounting locations are available for all cameras.
Mounting locations are not available for all cameras.
Will the system be monitored continuously?

Table 20.1 Checklist for System Brief
The list can be extended considerably but the intention is to obtain a general
impression of the brief. There is no need to answer specific questions at this stage.

Site Walkabout
The next phase is to have an informal walk around the site with the customer to become familiar with the topography. This also enables the names of locations and areas to be learned. The site in this context could be a whole estate, a warehouse or a retail store, etc. This initial walk around the site will be invaluable in leading up to the more detailed survey to be carried out.

Surveying the Site


Most customers will provide a drawing of the site. If not, then a second walkabout
will be necessary to make a drawing with key dimensions on it. The main areas of
interest will now be known, therefore the amount of detail drawn can reflect this.


21.

SPECIFYING CCTV SYSTEMS

Introduction
There are three main types of specification for CCTV systems.
1. The proposal presented to a potential customer based on a company's
interpretation of preliminary visits and discussions.
2. A specification prepared by a customer in which the operating principles and
requirements of a system are outlined and the final design left to the
installation company.
3. A specification prepared by a customer in which the position and performance
of every component in a system are clearly defined and specified technically.

There is actually a fourth type of specification. This is where the customer produces a combination of 2 and 3 but with only a layman's knowledge of CCTV. This is the "a little knowledge is dangerous" type of specification.
The first part of this chapter is intended to provide guidance for the first two types of
proposal. This is followed by guidance for end users.
The size and thickness of a proposal and specification are not necessarily proportional
to its usefulness. In addition, the structure of the proposal should be carefully thought
out to inform the recipient. The intention should be to provide a reasoned and
progressive argument for the system being proposed. Many customers will only have
a passing knowledge of CCTV. Therefore, avoid the use of trade jargon in anything
other than technical specifications where it is necessary.
Most companies will have their own preferred layout for proposals. The following
notes show a structured approach that can be adjusted to fit in with any corporate
presentation.

Contractual Considerations
The proposal will form the basis of a binding contract between the installing company
and the purchaser. It can be the company's defence or downfall if there is a dispute.
With the best will in the world disputes will happen. In the case of CCTV it is
invariably the quality of picture or scenes in view that cause the greatest problems. It
is equally important to describe the drawbacks as well as the advantages of the
system. This may come across as negative thinking to the salesperson but it can be
turned into a positive advantage. Statements of fact can increase the credibility of a
company and impress the customer with their ethics. This is especially the case when
the competitors have failed to point out the drawbacks.
A common comment from disappointed customers is: "I employed your company as an expert, took your advice and now the system does not do what I expected." This is often followed by refusal to pay the invoice. There have been many cases where this is a smoke screen because they now don't have the money or are simply being fraudulent. Frequently the complaint is aggravated because it is a very subjective judgement, with comments such as: "I can't read the number plates and see the whole width of the 60-metre entrance", or "I can't see people directly below the camera." The chapter on lenses made the point that many customers expect to see through a camera lens what they see with their own eyes. Therefore, it is important to have laid out exactly what the system will and will not do. The following headings illustrate a structured layout for a proposal.

Contents of Proposal
The proposal is the main selling document that will be presented to the customer. It is
an opportunity to present the company as competent and professional. Besides
providing legal protection, it can persuade the customer to accept the proposed system
as the best suited to their needs. This is the document that remains after the
salesperson has left and may otherwise be forgotten. Bear in mind also that many more people will read the document than met the salesperson. Therefore, it should be easy to read and set out logically.
Many companies now use word processors with a series of standard paragraphs to
construct a presentation. This obviously saves much time and can improve the
appearance of a document. However, it can also give the appearance of being
produced by a machine and not a person. It is possible to devise a word-processed
document that is personalised to each customer and his or her particular needs. For
instance, many companies have a standard paragraph describing a pan, tilt, zoom
camera mounted on a wall bracket with a 10:1 zoom lens, etc. This can often be about
seven or more lines of description within which may be the location and field of view.
In a system with sixteen cameras, this paragraph may be repeated sixteen times with
just minor changes for each location. This could take up about five or six pages of
repetitive information and be very difficult to comprehend. It may look impressive in
volume but not in communication. In these and similar cases the camera locations and
fields of view could be listed as one part of the proposal, followed by a separate
detailed description of the equipment proposed. This would be much easier to read
and comprehend.

There should be three main components in a proposal.


1. The written proposal and specification.
2. A site drawing showing camera locations and fields of view, the latter being
described in more detail within the specification.
3. A schematic diagram of the system.
Terms of Reference
This will contain a summary of the invitation to tender and any documentation and
drawings provided by the customer.

Site Visits
Details of any site visits made and the degree of information available. Also, state
whether further visits will be necessary to finalise site details in the event of a contract
being placed. A qualification is especially important here if a tender document
includes drawings and a description but site visits are not permitted.

Summary of Brief
This introductory section should describe the brief agreed between the installing
company and the customer. This will restate the overall objective for the system and
any qualifications to it. The statements could be taken from the checklist suggested in
Chapter 20. The purpose of this section is to ensure that both parties understand the
reasons for the specification that follows.
There will be instances where the brief has been provided by the customer without
prior discussion. It is still important to restate it, as the basis for further comments that
will be made in the proposal.


Interpretation of Brief
There will be occasions when further considerations will have become known during
design of the system. These could be limitations to desired fields of view or an extra
camera needed, etc. These should be noted as an extension or restriction of the
original brief. If comments are omitted then the customer can assume that the
proposal meets the brief in full. A major trap for the unwary is a document that
contains a requirement that the system will provide video recordings suitable for
evidential use. In these cases, it should be perfectly acceptable to include a
qualification along the following lines.

Use of Video Recordings for Evidential Purposes


It is not possible to state conclusively that all video recordings will be suitable for
evidential purposes. It depends upon many factors, mainly the distance the suspect is
from the camera and the focal length of the lens. Lighting, quality of the camera,
quality of video tape and several other factors all contribute to whether a recording is
suitable for evidence. There is also a difference between using a recording for
identification and for evidence. The rules of thumb for using video recordings are as
follows. (a) To see that it is a person rather than an animal or other object requires
that the subject should be at least 10% of the height of the screen view. This only
infers that it is a person but with no chance of identification. (b) There is a possibility
of a subject being identified if they fill 50% of the screen and are familiar to the
viewer. (c) To achieve positive identification of an unknown person they need to have
their head and shoulders fill the screen.
With the lenses fitted to the proposed system, the person will need to be within thirty
to fifty metres to see that it is a person depending on the lens fitted. They will need to
be within about ten to twenty metres to stand a chance of identification. Therefore the
cameras are generally positioned so that a person is moving towards them and at some
point should be of sufficient size on the screen to be of value.
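The rules of thumb above can be related to camera distance with the same pinhole-camera arithmetic used earlier; the sensor height and lens focal length below are assumed example values, not figures from the proposal.

    def subject_screen_fraction(subject_height_m, distance_m,
                                focal_length_mm, sensor_height_mm=4.8):
        """Fraction of the vertical screen height occupied by the subject (pinhole model)."""
        image_height_mm = focal_length_mm * (subject_height_m * 1000) / (distance_m * 1000)
        return min(image_height_mm / sensor_height_mm, 1.0)

    # Assumed 1/2" format camera (4.8 mm sensor height) with a 12.5 mm lens
    for d in (10, 20, 30, 50):
        pct = 100 * subject_screen_fraction(1.7, d, focal_length_mm=12.5)
        print(f"At {d} m a 1.7 m person is about {pct:.0f}% of screen height")

With these assumed figures the 10% "it is a person" threshold is reached at roughly 45 m and the 50% "possible identification" threshold at roughly 9 m, broadly in line with the distances quoted above.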


Description of System
This should contain a summary of the complete system in plain English. There is no
need at this stage for any technical specification. It should be as brief as possible,
simply an outline of the main features. An example could be as follows. The system
will consist of eight external monochrome cameras. Five will be fully functional pan,
tilt, zoom. The others will be static units showing a fixed field of view. The cameras
will be connected back to a central control unit in the gatehouse. The main control
will be a multiplexing unit that also contains the control of the pan, tilt, and zoom
functions. The multiplexer provides the facility to almost simultaneously record all
the cameras in the system. There will be two monochrome monitors, one 17-inch and one 12-inch.
This type of description is all that is necessary at this stage. It simply introduces the
rest of the more detailed specifications. In the case of a larger, more complex system,
it may be necessary to provide sub-headings to make a more logical description, such as:

Description of System
- Site system.
- Main controls at site 1.
- Slave controls at site 2.
- Microwave links.

Design Considerations
There can be several different approaches to the final design of a system, with
different companies putting forward their own ideas. It is frequently useful to provide
an explanation outlining the reasoning behind the solution proposed. This section will
put the proposed solution into perspective with other possible competing systems. It
also helps to justify the proposal compared to other systems that may be a lot less
expensive but do not meet the critical objectives. One example would be where the
proposed system includes cable equalising amplifiers because there are large
variations in cable runs. Not all companies would consider this factor to be important,
and consequently submit a lower price. Explaining the reasons for such features can
increase confidence in the proposal and cast doubt on competing submissions.
It is also an opportunity to sell the advantages of certain makes of equipment where
these are important to the final performance. For instance, certain makes of camera or
video motion detector may include features that are not in other makes.

Schedule of Cameras
The essential information in this section is the location and field of view for each
camera. It may also include details of lighting conditions if existing lighting is to be
used. As noted previously it is preferred not to clutter this information with technical
detail or jargon. It is still part of describing the system to the customer in terms that
everybody will understand. The information would be taken from the schedule of
camera locations prepared during the site survey or produced by the customer. A
typical specification may be as follows.
CAMERA NO. 1 (Type A)
Location: Corner bracket 2.5 m high on corner of building 39.
Scene to view: 2 metres either side of entrance A.
Cable distance to control: 95 m
Distance to view: 30 m
Width to view: 15 m
Lens focal length: 12.5 mm
Light level below camera: 9.3 lux
Light level at mid distance: 19 lux
Light level at furthest distance: 5.7 lux
Housing: Weatherproof with heater
Type of camera: Fixed

Any other relevant information should be listed to ensure that there is no room for
doubt as to exactly what is being supplied. The details of light levels would be
appropriate if existing lighting were to be used; in which case the proposal may
specify that the customer provides light to a certain average level. If infrared lighting
were to be used then this information would not be required.
If the system includes several types of camera, it is better to simply state the type with
a separate list of specifications for each type.
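A quick check of the lens choice in a schedule like the one above can be made from the width of view, the distance and the sensor size; the 1/2" sensor width below is an assumption for illustration.

    def required_focal_length(scene_width_m, distance_m, sensor_width_mm=6.4):
        """Lens focal length needed to cover a given scene width at a given distance."""
        return sensor_width_mm * distance_m / scene_width_m

    # Figures from Camera No. 1: 15 m width of view at 30 m
    print(f"{required_focal_length(15, 30):.1f} mm")   # about 12.8 mm, so a 12.5 mm lens is specified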

Equipment Specifications
The degree of specification will vary between proposals according to the type of
customer. The descriptions of equipment should be specific and informative. Avoid
phrases such as "high performance, low light camera"; this is jargon and meaningless in
defining a camera. Whether to state the make of each item is a matter of individual
preference. There are advantages where the manufacturer is a household name and
inspires confidence. On the other hand, with the rapid development in technology it
may be considered better to state the specification and select the most appropriate and
competitive make when the order is placed. Some examples of typical specifications
follow.

22.

TESTING AND COMMISSIONING SYSTEMS

Introduction
It is assumed that an installation has been completed according to the specification
and the relevant regulations. It is also assumed that pre-assembly of all the system's components will have been carried out according to the relevant manufacturers' instructions. The time has arrived to test, commission, and hand over the system to the
customer. There are four main aspects to this final phase.

Testing individual components to ensure that they operate to the design specifications.

Commissioning all the components to function as an integrated system.

Demonstrating the system and its operation to the customer.

Training operators in the use of the system.

Testing Components
Although most components should have been checked and, if necessary, preassembled before dispatch to site, the final setting up can only be carried out during
installation. Although the degree of setting up will vary according to the size and
complexity of the system, there will be certain procedures that will be common to
most systems. The following is a very brief checklist of some key aspects that need
attention.

Cameras and Lenses


Check that the correct lens is fitted in line with the specification. Set up the lens focus
and back focus of the camera. If automatic iris lenses are fitted, adjust the
peak/average and level potentiometers. Check that the field of view is as required.
This will usually be adjusted using a hand-held test monitor; a hand-held focus adjuster is also available. If the camera has to cope with a wide range of light
conditions, fit a neutral density filter to set the focus at the maximum lens aperture.
If a zoom lens is fitted, check that the scene remains in focus throughout the zoom
range. If the focus changes, it may be necessary to recheck the camera back focus.

Transmission
Check every video cable for continuity and shorts to earth. A common problem is
whiskers of the braiding on a coaxial cable touching the core conductor. If twisted pair transmission or video line correctors are fitted then the only correct way to set up
the system is using a pulse bar generator.
Check through every video line to ensure that all terminations are set correctly.

Switchers
Check the dwell time and sequencing of standard video switchers. In the case of
matrix switchers set up the dwell times and sequences for each monitor. If there is a
master/slave situation, ensure that the units are correctly located with the master
control at the main control location. Again, check for correct terminations.

Telemetry
Check that all functions are operating correctly and that end stops are set as required.
Make sure that the pan right and tilt down controls correspond to the right direction of
movement. If pre-set positions are incorporated, set them up according to the
manufacturer's instructions and to the specified fields of view.

Multiplexers
Set the time, date and camera titles. There will almost certainly be options to set up
the various multiscreen displays. It is always necessary to program the multiplexer
according to the video recorder in use. Most multiplexers now have an on screen list
of current VCRs available, in which case selection is straightforward. If the VCR
installed is not on this list then it will be necessary to check with the multiplexer
manufacturer to establish the correct settings.


Video Recorders
Some systems are supplied with separate tapes for each day of the week or month.
Ensure that all the tapes and boxes are marked accordingly.
All time lapse video recorders can display the time and date on the screen. If the
recorder is the only system component that provides this information then set it to
display. If there is a multiplexer or switcher that generates the information then set the
recorder not to display and use the other component for this function.
Video Motion Detection
All video motion detection systems require a great deal of time and care in setting up
if they are to function efficiently and not generate false alarms. In the case of external
systems, it will be essential to carry out the main programming at night under the
worst lighting conditions. If the system is installed in the summer then it will always
be advisable to return in the winter to finalise the settings.

Free Space Transmission


All types of free space transmission systems need rigid mountings with correct
brackets to allow alignment. Always use the manufacturer's alignment test
instruments to obtain the optimum signal strength. It is never possible to assess the
signal simply by observation of the picture.

Interfacing with Other Systems


If the CCTV system is being connected to another system it is advisable to have a
representative of the company which installed that other system visit the site and
approve the connections.


Commissioning the System


Once all the components in an installation have been checked and set up it is then
necessary to commission the system to function as set out in the specification
documents. This really means operating the system from the controls and ensuring
that every function and view is as originally designed. There will usually need to be
some fine adjustments made to cameras, lenses, and angles of view, etc. At this stage,
a record should be made of every camera and the scene in view. It is also advisable to
comment on the detail that can be seen at various distances from the camera.
Commissioning will often necessitate operating the system through the night if
appropriate. Particular note should be made of the views and focus of cameras using
infrared illumination. There may be areas of flare or dark pockets that must be
considered. It is not always easy to predict at the design stage what the effect of
infrared illumination will be. Therefore, during the commissioning stage
consideration should be given to reducing or increasing the power of some of the
lamps if they are not producing the expected results.

Operation and Maintenance Manual


When the system is complete, an operating and maintenance manual must be handed
over to the customer. This should contain a copy of the agreed specification and
equipment schedule, and will form the basis of the commissioning procedures and
tests to be carried out. The manual should contain a copy of all manufacturers' data
and installation specifications. The aim should be to provide the customer with
sufficient information to be able to have the system maintained by any competent
company in the future. The need to produce this manual should be considered in the
price quoted for the system in the first place. Produced effectively, the manual will
represent a significant cost that should not be ignored.

An important aspect of commissioning the system will be to record all programming


and equipment set up procedures that have been carried out. These will need to be
included in the final operation and maintenance manual that will be handed over on
completion. There may be such items as the programming of multiplexers, the
programming of alarm handling, sequences set up on matrix switching systems, etc.
These should be fully documented in the system manual.

APPENDIX 1 - GLOSSARY OF CCTV TERMS


This glossary is intended to provide a quick reference to many terms used in closed
circuit television. Most of them are explained in much greater detail in the appropriate
sections.

2:1 INTERLACE: The precise combination of two fields of 312 1/2 lines to create a
single frame of 625 lines. (CCIR)

AGC: Automatic gain control- electronic circuitry to increase the video signal in low
light conditions. This usually introduces 'noise' in the picture giving a grainy
appearance. Camera specifications should always be considered with AGC off.

ALARM ACTIVATED VCR: From selecting 'record', a normal V.C.R. would take
from 15 to 21 seconds before it actually starts recording usable pictures. With this
type of recorder it can be set so that the tape is spooled up and ready to commence
recording in about one second. The signal to go into recording can be from an alarm
or any other input.

ALGORITHM: In mathematics, a rule or procedure for solving a problem.

ANALOGUE SIGNAL: In video, the representation of a camera scene by varying


voltages in the video signal, the voltage being directly proportional to the light level.

APERTURE: The light gathering area of a lens. The iris controls the size of the
aperture.

ARMOUR: Extra protection for a cable that improves resistance to cutting and
crushing. The most common material used is steel.


ASPECT RATIO: The ratio of the vertical to the horizontal image size. This is 3:4.

ATTENUATION: A term that refers to signal loss in a transmission system or light


loss through a lens system.

AUTOMATIC IRIS: A lens that automatically adjusts to allow the correct amount of
light to fall on the imaging device. A tiny motor and amplifier are built in, which generally receive a control signal from the camera to maintain a constant one volt
peak to peak (pp) video level. There are two manual controls on the lens to allow
compensation for varying conditions of 'peak' and 'average' light.

BACK FOCUS: A mechanical adjustment in a camera that moves the imaging device
relative to the lens to compensate for different back focal lengths of lenses. An
important adjustment when a zoom lens is fitted.

BALANCED SIGNAL: A video signal converted to a balanced signal, usually to


enable it to be transmitted along a 'twisted pair' cable. Used in situations where the
cabling distance is too great and which would produce unacceptable losses in a
coaxial cable.

BANDWIDTH: The amount of space in a given part of the spectrum needed to carry
communication signals.

BUFFER: The material surrounding the fibre to protect it from physical damage

BLANKING PERIOD: The period of the composite video at black level and below
when the retrace occurs, making it invisible on the screen.


BLACK LEVEL: The dark parts of a video signal corresponding to approximately


0.3 volts.

BIFURCATOR: An adapter with which a loose tube containing two optical fibres
can be split into two single fibre cables. (See loose tube)

C-MOUNT: The standard screw mounting for 2/3" and 1" camera lenses. The
distance from the flange surface to the focal point is 17.526 mm. A C-mount lens can be used on a camera with a CS-mount by adding a 5 mm adapter ring to make up the difference in flange distance. (See CS-mount.)

CABLE EQUALISER: An amplifier to increase a video signal to the optimum value.


This is usually to compensate for cable losses.

CCD: Charge coupled device, a flat thin wafer that is light sensitive and forms the
imaging device of most modern cameras. Size is measured diagonally and can be
1/3",1/2" or 2/3". There are two types, frame transfer and interline transfer.

CCIR: The European 625 line standard for the video signal.

CHROMA BURST: The reference signal included in the video signal after the
horizontal sync pulse. This enables a colour monitor to lock on to a colour composite
video signal

CHROMINANCE: The part of a colour video signal that carries the colour
information.


CLADDING: The outer glass region of an optical fibre, optically less dense than the central core. Acts as an optical barrier to prevent transmitted light leaking away from the core.

COMPOSITE VIDEO: The complete video signal comprising the sync and video
information. The sync pulse should be 0.3 volts and the video signal should be 0.7 volts.

CORE: The central region of an optical fibre through which signal carrying infrared
is transmitted. Manufactured from high density silica glass.

CS-MOUNT: A new generation of lenses designed for 1/2", 1/3", 1/4" and 1/8" cameras incorporating CS-mounts. The distance from the flange surface to the focal
point is 12.5 mm. CS-mount lenses cannot be used on cameras with C-mount
configuration. These lenses are more compact and cheaper than the C-mount
equivalents.

dB: Decibel, a logarithmic ratio between two signals.

DEPTH OF FIELD: The proportion of the field of view that is in correct focus. The
depth of field in focus DECREASES when: the focal length is longer, the f number is
smaller, or the object distance is shorter.

DESKTOP SWITCHER: A device for switching the video signal from several
cameras to one or more monitors. The cables from the cameras are connected to the
back of the unit.

DIGITAL SIGNAL: An analogue signal that has been converted to a digital form so
that it can be processed by a micro processor.

EIA: The American 525 line standard for the video signal.

f STOP: This is the ratio of the focal length to the effective diameter of the lens.
(f/A). It is not a measure of the efficiency or the transmission value of the lens. The
smaller the f number the more light is passed.

fc: Foot candles, used in some USA specifications to define sensitivity. 1 fc is approx. 10 lux.

FIBRE OPTIC: A very efficient method of transmitting video and telemetry signals
over very long distances using fibre optic cable. Signals can be multiplexed and sent
along a single fibre.

FIELD OF VIEW: The relationship between the angle of view and the distance of
the object from the lens.

FIELD: One half of a frame consisting of 312 1/2 lines, 50 fields are created every
second.

FLANGE BACK LENGTH: The distance from the back flange of a lens to the
sensor face. This is 17.526mm for C mount and 12.5mm for CS-mount lenses.

FOCAL LENGTH: The distance between the secondary principal point in the lens
and the plane of the imaging device. The longer the focal length, the narrower is the
angle of view.

FRAME STORE: An electronic method of capturing and storing a single frame of video. All slow scan transmitters include a frame store that holds the picture at the
moment of alarm, while the control is being dialled up. When the link is confirmed,
the picture is transmitted.

FRAME TRANSFER: A type of CCD imaging device in which the entire matrix of
pixels is read into storage before being processed by the electronics of the camera.

FRAME: The combination of two interlaced fields, 25 frames are created every
second.

GAMMA CORRECTION: An electronic correction carried out in the camera


circuitry to balance the brightness seen by the camera to that of the monitor.

GEN LOCK: Also called external sync. A separate coaxial cable is run to each
camera and carries sync pulse information to ensure that all cameras are producing
fields at exactly the same time. This eliminates picture bounce during switching and
can improve quality and update time in multiplexers.

GRADED INDEX: (Graded index profile.) A refractive index profile, usually shown as a diagram, which illustrates how the refractive index of the glass in this type of optical fibre alters gradually, from the densest at the centre of the core to the optically less dense cladding.

GROUND LOOP TRANSFORMER: An isolation transformer so that there is no


direct connection between input and output.

GROUND LOOP: An AC current that can be produced in a cable. This is usually


caused by parts of the system being fed from different electrical sources resulting in
different earth potentials at each end. The result is interference on the signal.


HARDWIRED: Controlling remote equipment by direct voltage transmitted along a


multicore cable from the main controller. This is very labour intensive to install and is
only used in simple systems with short cable runs.

HERTZ (Hz): The number of variations or cycles per second.

ILLUMINANCE: The measurement of light in lumens per square metre, the unit of
which is the lux.

IMPEDANCE: A measure of the total opposition to current flow in an alternating


current circuit, measured in Ohms.

INFRA RED LIGHT: Light with wavelengths just beyond the red end of the visible part of the spectrum.

INFRA RED TRANSMISSION: A method of transmitting video and telemetry


signals across free space along an infra red beam. This opens possibilities for using
C.C.T.V. where it had been previously impossible to run cables. Distance can be
limited and the signal can be degraded in adverse weather conditions.

INTERLINE TRANSFER: Another type of CCD imaging device in which the rows
of charge are stepped down one at a time and processed straight away.

INTERNAL SYNC: The internal generation of sync pulses in a camera without


reference to external sources. This uses a crystal controlled oscillator and is needed on
non mains powered cameras.


IP RATING: Index of protection, a number combination that defines the protection


afforded from outside influences by an enclosure.

IR SHIFT: The shift in focus that occurs between daylight and infra red illumination.

REFERENCE
http://www.cctv-information.co.uk

http://www.cctv-information.co.uk/i/The_Principles_%26_Practice_of_CCTV
