
AN INTRODUCTION TO
NIGHT VISION
TECHNOLOGY

R Hradaynath

DEFENCE RESEARCH & DEVELOPMENT ORGANISATION


MINISTRY OF DEFENCE
NEW DELHI – 110 011
2002
DRDO Monographs/Special Publications Series

An Introduction to Night Vision Technology

R Hradaynath

Series Editors
Editor-in-Chief Editors
Mohinder Singh Ashok Kumar
A Saravanan
Asst Editor Editorial Asst
Ramesh Chander AK Sen
Production
Printing Cover Design Marketing
JV Ramakrishna Vinod Kumari Sharma RK Dua
SK Tyagi RK Bhatnagar

© 2002, Defence Scientific Information & Documentation Centre (DESIDOC), Defence R&D Organisation, Delhi-110 054.

All rights reserved. Except as permitted under the Indian Copyright Act 1957, no part of this publication may be reproduced, distributed or transmitted, stored in a database or a retrieval system, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the Publisher.

The views expressed in the book are those of the author only. The
Editors or Publisher do not assume responsibility for the statements/
opinions expressed by the author.

ISBN: 81–86514–10–4

Printed and published by Director, DESIDOC, Metcalfe House, Delhi-110 054.


CONTENTS

Foreword ix
Preface xi
Acknowledgements xv

CHAPTER 1
VISION & HUMAN EYE 1
1.1 Introduction 1
1.2 Optical parameters of human eye 2
1.3 Information processing by visual system 6
1.4 Overall mechanisms 9
1.4.1 Light stimulus 9
1.4.2 Threshold vs intensity functions & contrast 10
1.4.3 Colour 11
1.5 Implications for night vision 12

CHAPTER 2
SEARCH & ACQUISITION 15
2.1 Search 15
2.2 Acquisition 16
2.3 Blackwell's approach 18
2.4 Johnson criteria 19
2.5 Display signal-to-noise ratio 20
2.6 Detection with target movement 22
2.7 Probabilities of acquisition 23
2.8 Contrast & acquisition 23

CHAPTER 3
THE ENVIRONMENT 29
3.1 Introduction 29
3.2 Atmospheric absorption & scattering 31
3.2.1 Scattering due to rain & snow 34
3.2.2 Haze & fog 35
3.2.3 Visibility & contrast 35
3.3 Atmosphere modelling 38
3.4 Instruments, night vision & atmospherics 39

CHAPTER 4
NIGHT ILLUMINATION, REFLECTIVITIES &
BACKGROUND 43
4.1 Night illumination 43
4.1.1 Moonlight 44
4.1.2 Starlight 48
4.2 Reflectivity at night 48
4.3 The background 50
4.4 Effect on design of vision devices 52

CHAPTER 5
OPTICAL CONSIDERATIONS 55
5.1 Introduction 55
5.2 Basic requirements 58
5.2.1 System parameters 59
5.2.2 Design approach 64
5.2.3 Design evaluation 66
5.3 Optical considerations 69

CHAPTER 6
PHOTOEMISSION 77
6.1 Introduction 77
6.2 Photoemission & its theoretical considerations 77
6.2.1 Theoretical considerations 78
6.2.2 Types of photocathodes & their efficiencies 79
6.3 Development of photocathodes 81
6.3.1 Composite photocathodes 81
6.3.2 Alloy photocathodes 81
6.3.3 Alkali photocathodes 82
6.3.4 Negative affinity photocathodes 83
6.3.5 Transferred electron (field-assisted) photocathodes 85
6.4 Photocathode response time 87
6.5 Photocathode sensitivity 87
6.6 Dark current in photocathodes 90
6.7 Summary 91

CHAPTER 7
PHOSPHORS 93
7.1 Introduction 93

7.2 Phosphors 93
7.3 Luminous transitions in a phosphor 94
7.4 Phosphor mechanisms 96
7.5 Reduction of luminescence efficiency 99
7.6 Luminescence decay 99
7.7 Phosphor applications 100
7.8 Phosphor screens 101
7.9 Screen fabrication 103
7.10 Phosphor ageing 104

CHAPTER 8
IMAGE INTENSIFIER TUBES 105
8.1 Introduction 105
8.2 Fibre optics in image intensifiers 108
8.2.1 Concepts of fibre-optics 109
8.2.2 Fibre-optics faceplates 110
8.2.3 Micro-channel plates 114
8.2.4 Fibre-optic image inverters/twisters 117
8.3 Electron optics 117
8.4 General considerations for image intensifier designs 120
8.5 Image intensifier tube types 125
8.5.1 Generation-0 image converter tubes 125
8.5.2 Generation-1 image intensifier tubes 126
8.5.3 Generation-2 image intensifier tubes 127
8.5.4 Generation-2 wafer tube 129
8.5.6 Generation-3 image intensifier tubes 131
8.5.7 Hybrid tubes 132
8.6 Performance of image intensifier tubes 134
8.6.1 Signal-to-noise ratio 134
8.6.2 Consideration of modulation transfer function (MTF) 136
8.6.3 Luminous gain and E.B.I 137
8.6.4 Other parameters 138
8.6.5 A note on production of image intensifier tubes 138

CHAPTER 9

NIGHT VISION INSTRUMENTATION 143


9.1 Introduction 143
9.2 Range equation 144

9.3 Experimental lab testing for range evaluation 150


9.4 Field testing 154
9.5 Instrument types 155
Index 163
FOREWORD

The author has been one of the main architects in introducing night vision technology to India. He was intimately involved, at a
crucial time, in the R&D on this subject leading to the development
of a variety of instruments for use by the Armed Forces and their
subsequent bulk production through an integrated scheme of
technology transfer. The present monograph is a welcome and
unique addition to the already existing literature on night vision
technology. Besides introducing all the parameters and technologies
that comprise this subject, it would also assist a reader to correct
his design effort to result in an effective instrument. The development
of the subject begins with an understanding of the human eye and
vision as also the principles underlying search and acquisition. This
study enables one to realise the limits to which human observation
is restricted in practice. The study further extends to the fact that
the human observations are also constrained by the environment,
night illumination, and object and background reflectivities. At this
stage the reader is exposed to a discussion as to how these
limitations can be overcome to a reasonable extent by optical
considerations and by technological developments in photocathodes,
phosphors, fibre optics, and electron optics. This study also helps
the reader to familiarise himself in depth with the evolution of the
image intensifier tubes and their utilization in instruments of military
interest. The text is an effort to consolidate basic as well as technical
information directly related to night vision based on image
intensification in a precise and concise manner within the confines of a single volume, and it is well done.
The monograph has been well supported by an exhaustive list
of references. A number of these references are also intended to
help an interested reader to probe into the independent technologies
which amalgamated to result in night vision.

(Dr APJ Abdul Kalam)


PREFACE
Vision during the night has been one of the interesting ambitions of humankind, and for quite some time it was considered to be unattainable. Yet in the early twentieth century the scientific community did think of its
possibilities. The importance of light-gathering by a relatively
aberration-free optical system was well realized. In fact, a 7×50 binocular with an aperture of 50 mm and an exit pupil of 7 mm to
match the human scotopic eye-pupil size was referred to as a night-
vision binocular in earlier literature. These did perform well at dusk
and dawn though not during the night and helped in early morning
assaults by an infantry column. Modern interest in the field arose
with an explanation of the photoelectric effect by Einstein in 1905,
discovered earlier by Hertz in 1887. Though attempts to develop suitable photocathodes based on the photoelectric effect for image intensification began in the early 1930s, the first success with an instrument system, fabricated around the near-infrared-sensitive Ag-O-Cs photocathode, came only in the 1950s. This resulted in ‘O’
Generation night vision instrumentation, wherein the night scene had to be irradiated with near-infrared radiation (cutting out the visible), and the reflections thereof were made visible on the phosphor screen of an image converter tube. The tube itself was the result of
a composite effort which brought together the photocathode
technology, electron-optics for the amplification of weak electrons
through an electro-optical system and phosphor screen
development, all in a vacuum envelope besides suitable entrance
and exit surfaces. It was by now clear that what was required was the development of better and better photocathodes corresponding to
the natural illumination in the night-sky, better methods of
amplification of energy and number of weak electrons and more
suitable phosphors for ultimate viewing besides suitable input and
output windows for the vacuum tube that may ultimately be
designed. This development was further accelerated as by then the
upcoming television industry was also looking for suitable phosphors
and photocathodes to suit their requirements. It was hence logical
that the next generation of instruments developed for night vision
were passive in nature, i.e., where imaging was based on the night
sky illumination itself thus dispensing with any artificial irradiation.

Generation I image intensifier tubes were the first to appear; these involved a major contribution in terms of their fibre-optic input and
output windows and a photocathode much more sensitive to the
overall spectral distribution of the night sky. The earlier
photocathodes had a lower quantum efficiency and hence three
such tubes had to be coupled to give an adequate light amplification
for vision without losing on the resolution. It was only a matter of
time before Generation II image intensifier tubes appeared with
photocathodes on the military scene by introducing electron energy
amplification and electron multiplication through microchannel
plates (hollow-fibre matrix) to enable adequate light amplification
to be achieved with only one tube. Developments have since
continued both in evolving more and more sensitive photocathodes
and better and better designs for the microchannel plates.
Understanding of the functioning of a photocathode resulted in the
evolution of modern day negative electron affinity photocathodes.
Nevertheless it can be stated that scope still exists for engineering newer photocathodes with still higher quantum detection efficiencies, matching electron-optics, and microchannel plates with better signal-to-noise ratios.
This monograph has been organised in nine chapters. The first
chapter on Vision and the Human Eye discusses the background
against which all vision including night-vision instrumentation has
to be ultimately assessed. The next chapter Search and Acquisition
relates to the parameters that contribute towards establishing a
visual line to an object of interest. The criteria for detection,
orientation, recognition and identification are examined as also the
relationship of contrast to help search and acquisition. Chapter III
discusses the environment that is mainly the atmosphere
(intervening medium), its attenuation of the optical signal and
thereby the effect on contrast and visibility. The next chapter
examines night-illumination in detail as also reflectance from
surfaces of interest and from the background.
After familiarizing oneself with all the factors that affect instrument
design for night vision applications, it is but natural to consider
various design aspects such as those related to optical parameters,
the evolution of photocathodes and the development of phosphors
before one goes into the details of the image intensifier tubes which
form the mainstay of night vision systems based on image
intensification. Chapters V, VI, VII are therefore devoted to each of
the factors, i.e., Optics, Photocathodes and Phosphors. Chapter VIII
on Image Intensifier Tubes includes discussion on electron-optics
and fibre optics that is relevant to the making of intensifier tubes.
Chapter IX then concludes by drawing attention to overall

considerations for instrument design for night vision systems.


Photographs and illustrations of some interesting systems designed
and developed by Defence R&D Organisation also find a place in
this final chapter.
This monograph is limited to night vision based on image
intensification. Though references in the text to ‘thermal imaging’
do find a place here and there, this text does not include the
contemporary development in night vision based on thermal imaging.
Obviously that should form the subject matter of an independent
volume.

Dehradun R Hradaynath
Former Director & Distinguished Scientist
Instruments R&D Establishment
DRDO, Dehradun
ACKNOWLEDGEMENTS

The author’s thanks are primarily due to Dr APJ Abdul Kalam, who initiated the idea of a monograph on night vision technology.
Many thanks are certainly due to Dr SS Murthy, former Director,
DESIDOC, and the present Director Dr Mohinder Singh for their
persistence and patience and to the group of scientists who helped
me in literature search and in consolidating the contents of this
volume. Thanks are also due to a few scientists at IRDE, Dehradun,
who helped me in obtaining some specific literature and by way of
discussions. Particularly, my thanks are due to Shri E David who
helped me with various figures and photographs that have been
included herein. I am also indebted to Shri M Srinivasan of BeDelft,
Pune, for the photographs referred to in Chapter VIII, and to the
Director, IRDE, for all the other photographs.
Finally, I would like to record my sincere thanks to Shri KK Vohra, who provided me with working space, and to Shri Swaroop Chand, an ex-soldier working with Shri Vohra, for his diligent day-to-day
assistance.

R Hradaynath
CHAPTER 1

VISION & HUMAN EYE

1.1 INTRODUCTION
Vision entails perception, by the eye-brain system, of the environment, based on reflectance of the static or changing observable scene illuminated by one or more light sources, and perception of the sources themselves. In most cases, the
illumination is natural and due to sun, moon and stars along with
possible reflectance of these sources by clouds, sky, or any land or
water mass. These days artificial illumination is also of significance.
The ability of a living species to recognize and represent sources,
objects, their location, shape, size, colour, shading, movement and
other characteristics relevant to its planning of action or interaction
defines its observable scene. The observable scene would thus be
limited by the capability of a species and the information sought by
it. Sustained vision would further require large steady-state
sensitivity to properly react to amplitude and wavelength changes
in the illuminating sources. Thus perception of a given scene should
not get distorted by observation from sunlight at noontime to
starlight at night, or under a wide range of coloured or white artificial
sources or by facing away or towards the sun.
Vision as perceived above would therefore call for
processing of the input visual signal to attain what has been stated.
For instance location of objects in space or their movement may be
helped by
(a) Stereopsis, i.e., using cues provided by the visual input in
two spatially separated eyes.
(b) Optic flow, i.e., by using information provided to the eye from
moment to moment (i.e., separated in time),
(c) Accommodation, i.e., by determining the focal length which
will best bring an object into focus, and

(d) Segmentation, i.e., the process of extracting information about areas of the image (called regions or segments) that are visually
distinct from one another and are continuous in some feature,
such as colour, depth or motion.
As these are processes that can take place all over the
image, parallel processing by the visual system would be quite in
order. Likewise, the variable reflected optical signal received by the
eye is processed by the visual system over a wide range for constancy
of luminance, colour and contrast by appropriate networking of the
individual signals from each photoreceptor. However, recognition
of a source either by direct viewing or by specular reflection would
need a different type of processing for its brightness.
1.2 OPTICAL PARAMETERS OF HUMAN EYE
This monograph is restricted to the optical and processing
aspects of the human eye and retina, though in actual practice the
entire biological processes of the eye-retina-brain combination needs
to be discussed and understood as far as presently known.
It was Helmholtz[1] who suggested a schematic eye which
is a close representation of the living eye with fairly accurate
characteristics as defined by the first-order theory of geometric
optics.
Figure 1.1 shows a cross-section through such an eye

22.38
20
2.38
1.96

15

R1 7.38
N NI
F H I
R2 R3 FI
H

6.96

3.6

7.2
Figure 1.1. Optical constants of Helmholtz's schematic eye (all
dimensions are in mm).

while Table 1.1 details its optical parameters[1,2,3]. Depending on the degree of accommodation desired, the radius of the anterior
lens surface is assumed to change up to + 6.0 mm, while everything
else remains fixed. The cornea is assumed to be thin, and so also
the iris which is supported on the anterior lens surface. The optical
parametric values are for sodium light.

Table 1.1. Optical parameters of the schematic eye

Parameter (see Fig. 1.1)        Radius    Distance from          Refractive power
                                (mm)      corneal vertex (mm)    (diopters)
Cornea, R1                      8         0                      41.6 (for practical purposes
                                                                 a thin curved surface)
Anterior lens surface, R2       10        3.6                    12.3
  Focal plane, F                —         –13.04                 —
  Principal plane, H            —         1.96                   —
  Nodal plane, N                —         6.96                   —
Posterior lens surface, R3      –6        7.2                    20.5
  Focal plane, F′               —         22.38                  —
  Principal plane, H′           —         2.38                   —
  Nodal plane, N′               —         7.38                   —
Entrance pupil position         —         3.04                   (size 1.15 × pupil diameter)
Exit pupil position             —         3.72                   (size 1.05 × pupil diameter)

Volumes                         Refractive power (diopters)      Refractive index
Eye lens                        30.5                             1.45
Anterior chamber                —                                1.33
Posterior chamber               —                                1.33
Eye as a whole                  66.6                             —
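The powers in Table 1.1 can be cross-checked with the standard two-element combination formula P = P1 + P2 - (d/n) P1 P2. The sketch below is not from the monograph: it solves for the effective separation d between the corneal and lens principal planes that reproduces the whole-eye power, with the aqueous index assumed for n.

```python
# Consistency sketch for Table 1.1 (an illustration, not part of the
# original table): combine the corneal power and the whole-lens power
# and solve for the principal-plane separation d that reproduces the
# whole-eye power.

P_CORNEA = 41.6   # diopters (Table 1.1)
P_LENS = 30.5     # diopters
P_EYE = 66.6      # diopters
N_MEDIUM = 1.33   # aqueous index between the elements (assumed)

def combined_power(p1, p2, d, n):
    """Power of two elements (diopters) separated by d metres in index n."""
    return p1 + p2 - (d / n) * p1 * p2

# Solve P_EYE = P1 + P2 - (d/n) P1 P2 for d:
d = N_MEDIUM * (P_CORNEA + P_LENS - P_EYE) / (P_CORNEA * P_LENS)
print(f"effective separation: {d * 1e3:.2f} mm")
# ~5.8 mm, which plausibly lies between the lens surfaces
# (3.6 mm and 7.2 mm from the corneal vertex in Table 1.1).
```

The recovered separation falling between the two lens surfaces is a useful sanity check that the tabulated powers are mutually consistent.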

The aberrations of the eye are well documented elsewhere, as also the errors of refraction. Methods do exist for evaluating the
line spread function of the eye, retina and the entire visual system
experimentally, as also some of its geometric aberrations. It is

interesting to note that the eye focused for infinity exhibits positive
spherical aberration and for very near distances negative, while for
intermediate distances (around 50 cm) it is essentially zero. The
line spread is minimum for a pupil diameter of 2.4 mm, and for
smaller diameters, the spread approaches the diffraction limit. At
2.4 mm also it is almost diffraction limited with an exponential
fallout representing scatter and depth of focus. As the pupil diameter
increases beyond 2.4 mm, the fallout becomes more prominent and
dominates the Gaussian spread.
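The scale of the diffraction limit referred to above can be estimated from the Airy criterion; this is a rough sketch, with the 555 nm photopic-peak wavelength assumed for illustration.

```python
import math

# Order-of-magnitude check: at a 2.4 mm pupil the Airy angular
# resolution 1.22*lambda/D comes out close to the eye's classical
# ~1 arcmin acuity. The wavelength is an assumed illustrative value.

WAVELENGTH = 555e-9       # metres (photopic peak, assumed)
PUPIL_DIAMETER = 2.4e-3   # metres

theta = 1.22 * WAVELENGTH / PUPIL_DIAMETER   # radians
theta_arcmin = math.degrees(theta) * 60
print(f"diffraction-limited resolution: {theta_arcmin:.2f} arcmin")
```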
Figure 1.2 is basically a sketch showing the blood supply
to the eye representing arteries and veins as shaded and dark lines,
respectively [4].
The cornea (C ) with the sclera (S ) represent the outer
fibrous envelope of the eyeball. While the cornea is transparent,
the sclera is pearly white. The sclera is almost five-sixths of the
envelope. The two structures are dovetailed into one another
biologically. The sclera is thickest (about 1 mm) posteriorly,

Figure 1.2. Half cross-section of the representative biological eye (C – Cornea, S – Sclera, Ch – Choroid, R – Retina, N – Optic Nerve, I – Iris, L – Lens. The black and shaded parts denote veins and arteries).

gradually becoming thinner anteriorly. At the site of the optic nerve, the sclera splits up into a network of interlacing bundles, called the
lamina cribrosa, leaving a series of fine sieve-like apertures, through
which the bundle of optic nerve passes from the eye.
The choroid (Ch) is interposed between the sclera (S ) and
retina (R) and is chiefly concerned along with the ciliary body and
iris (I ) in supplying nutrition to the internal parts of the eye. It
forms a continuous deeply pigmented coat, except at the entrance
of the nerve into the globe. Nourishment to the retinal pigment layer
and the outer retinal layers is provided by the choroidal capillaries,
while the innermost layers of retina are served by the retinal artery.
The retina (R) is a membrane containing the terminal
parts of the optic nerve fibres, supported by a connecting network.
It lies between the choroid and the membrane enclosing the vitreous
body. It diminishes in thickness from 0.4 mm around the optic
nerve entrance to 0.2 mm towards the frontal side. It is perfectly
transparent and of a purplish red colour, due to the visual purple
present in the rods. Viewing from the front there is a yellowish spot
somewhat oval in shape with its horizontal axis measuring around
2-3 mm. A small circular depression in the centre known as fovea
centralis has the maximum packing of cones in an area around
3 mm in diameter. Corresponding to the entrance of the optic nerve,
one observes a whitish circular disc of around 1.5 mm diameter
known as the optical disc, which presents a blind spot as this area
has no nerve-endings. It is about 3 mm to the nasal side of the
yellowish spot. The light entering the cornea passes through the
full thickness of the retina, which is a thin 350 µm sheet of transparent tissue, and the optic nerve head to reach the layers of
rods and cones. The properties of rods and cones are very vital as
photoreceptors. It is well known that cones in the macular region,
i.e., fovea centralis and its surround are highly packed at around
0.003 mm centre to centre. Appreciation of form and colour is therefore better achieved with cone-vision, which responds above a certain
visual threshold. At the same time it is known that our rods are capable of signalling even the absorption of a single photon and signal low-light-level phenomena, though without clear appreciation of form
and colour. Electric impulses arising in these photoreceptors are
transmitted via retinal interneurons to the innermost ganglion cell
layers with around 100,000 cells. The axons of these cells form the

optic nerve and convey the information further to the various areas
in the visual system of the brain.
The iris (I ) arises from the anterior surface of the ciliary
body and results in an adjustable diaphragm the central aperture
of which is known as pupil. The diaphragm divides the space between
the cornea and the lens into two chambers which are filled with a
fluid – the aqueous humour. The ciliary body is in turn, a
continuation of the retina and the choroid. The iris has a firm
support by lying on the lens. The contractile diaphragm reacts to
the intensity of light and accordingly adjusts the pupil diameter
from 7 mm to 2 mm from starlight to noonlight. In a given position
it also cuts off marginal rays – which unless stopped would diminish
the sharpness of the retinal image.
The lens (L) is a transparent, colourless structure of the
lenticular shape, of soft consistence enclosed in a tight elastic
membrane whose thickness varies in different parts of the lens.
The circumference is circular, 9 mm in dia, with the central thickness
as 5 mm in an adult. The posterior surface is more highly curved,
and embedded in a shallow depression in the vitreous humour,
while the anterior surface is in contact with aqueous humour. The
vitreous humour is a transparent colourless gelatinous mass which
fills the posterior cavity of the eye and occupies about four-fifths of
the interior of the eyeball. The aqueous humour is transparent and
colourless fluid and serves as a medium in which iris can operate
freely.
The optic nerve (N ) collects its fibres from the ganglion in
the retina and passes through the eyeball. The fibres from the right
halves of both the retinas pass into the right optical tract and the
fibres from the left halves pass into the left optical tract, each tract
containing a nearly common field of vision from both the eyes. Both
the tracts continue to the centre of vision in the brain.
1.3 INFORMATION PROCESSING BY VISUAL SYSTEM
The continuous photon stream that is incident on both
the eyes as a result of light reflectance from the environment is
appropriately focused on the retinal receptors (i.e., rods and cones)
through its optical system (i.e., cornea, pupil, lens and the
intervening spaces occupied by aqueous and vitreous humour). This
photon stream, variable in space (x, y, z), time (t) and wavelength (λ),
is sampled in space and wavelength by the three types of cone

receptors, each sensitive to red, green and blue, and appropriately filtered by the spatial and chromatic apertures of these receptors.
See Fig. 1.3 for their response.
Figure 1.3. Relative spectral sensitivities of the three types of cones: blue, green and red (determined against a dim white background).

Simple sums and differences of these signals result in an achromatic signal, a red-green signal, and a blue-green signal.
A parallel partition of the image is by spatial location, at approximately the bandwidth of visual neurons. The psychophysical and physiological evidence to date suggests a partition into just two temporal bands. The temporal frequency processing may be in terms of static and moving components.
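The recombination described above can be sketched as follows; the unit weights are illustrative placeholders rather than measured physiological values.

```python
# Minimal sketch of the opponent recombination: sums and differences of
# the three cone signals yield one achromatic and two chromatic
# channels. The unit weights here are illustrative, not physiological.

def opponent_channels(r, g, b):
    """Map cone responses (arbitrary units) to opponent signals."""
    achromatic = r + g + b   # overall luminance-like sum
    red_green = r - g        # red vs green difference
    blue_green = b - g       # blue vs green difference (as in the text)
    return achromatic, red_green, blue_green

# A reddish stimulus excites the red cones most, so the red-green
# channel goes positive and the blue-green channel negative:
a, rg, bg = opponent_channels(1.0, 0.6, 0.2)
print(a, rg, bg)
```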
The role of retina as a processor is significantly complex.
Apparently immediately after the first layer of receptors, every neuron
receives multiple connections from multiple varieties of adjacent
interneurons with various anatomical and physiological roles. Each
neuron has typically 50,000 connections with other neurons. Thus
even before the signal leaves the retina, each ganglion (in the final
neural layer) relays information as based on interactions between
several receptor types, non-linear spatial summation, both

subtractive and multiplicative inhibition, far-field effects, and so on. At the stage of the ganglion layer in the retina, it is observed
that this layer contains two intermixed cell types that differ in size
and in the way they integrate the signals from the cones and the
rods via the inter-neurons. The final signals from the retina itself
are thus in two parallel channels which reach the identified regions,
i.e., the magno- and parvocellular systems of the lateral geniculate
nucleus of the thalamus[5]. Each system is identified with certain
vision parameters. Thus, the parvocellular system has information
regarding colour, high spatial resolution, low contrast sensitivity
and is slow (static sensitive). The magnocellular system on the other
hand is colour blind, has high contrast sensitivity and low spatial
resolution, carries stereopsis information, and is fast (movement
sensitive). This information which goes to different layers of the
primary visual cortex is analysed for the perception of (i) colour
(ii) form, and (iii) movement, location and spatial organisation in at
least three separate processing systems. Thus the two pathways at
the lower level seem to be rearranged into three subdivisions for
relaying the information to higher visual areas. That movement, high-contrast-sensitivity and stereopsis information is carried through one system leads one to predict that movement of either the observer or the object, or stereo-viewing two images of the same object, would lead to easier detection of hard-to-see objects.

Vision is served by both top-down and bottom-up processing procedures, possibly simultaneously. The bottom-up
procedure analyses the stimulus in terms of the information sought
by the retinal processes as described above, processing small elements of a scene in parallel, then joining them into larger and larger
groups, and ultimately presenting it as a single scene. The
parameters of analysis could be uniformity of shading or colour
and their variation, certain geometries like edges, convexities, etc.,
(segmentation), or movement (optic flow), or depth (stereopsis) and
orientation. The top-down principle would operate via organised
percepts recorded in our memory and improvements on it in case
something new has been observed which was not in the memory-
package earlier. The bottom-up process can be thought of as data-
driven while the top-down could be referred to as knowledge driven.
A considerable amount of the bottom-up processing is done in the
retina itself. Overall mechanisms could be discussed as under, in
terms of response to light stimulus, contrast and colour.

1.4 OVERALL MECHANISMS

1.4.1 Light Stimulus


Light stimulus experienced subjectively as brightness is
measured in terms of units of luminance, i.e., candela per square
metre (cd/m2). Based on this, one can define the luminosity function
of a standardized eye for cones and rods independently, i.e., for
photopic and scotopic vision. Figure 1.4 shows the normalised
relative spectral sensitivity. However, it is important to take into
account the pupil size and the luminosity function of both cones
and rods, so that the retinal illumination is correctly measured.

Figure 1.4. Normalised spectral sensitivity of luminosity functions for scotopic and photopic vision.

The unit of retinal illuminance, the troland (td), is the product of the luminance (L) in cd/m2 and the pupil area (P) in mm2. As the luminosity
functions of both scotopic and photopic vision are different, the
scotopic and photopic trolands get a different value (Fig. 1.5).
The troland values can be converted to photon values.
The determination of the actual retinal illumination in terms of
absorbed photons requires assumptions about transmission losses
from the corneal surface to the retina caused by the entire optical
system of the eye, as also by the probability of photon absorption
particularly at very low light levels. According to a number of workers,
in scotopic vision the number of photons that excite the rods is

Log luminance (cd/m2)        –6     –4     –2      0      2      4      6      8
Pupil diameter (mm)          7.1    6.6    5.5    4.0    2.4    2.0    2.0    2.0
Log retinal luminance
  scotopic (td)             –4.0   –2.1   –0.22   0.70    —      —      —      —
  photopic (td)               —      —      —     1.1    2.6    4.5    6.5    8.5

Typical sources range from starlight through moonlight and indoor lighting to sunlight on white paper, with visual function passing correspondingly from scotopic through mesopic to photopic: the scotopic threshold lies at the lowest luminances, rod saturation begins in the mesopic range, best acuity is reached in the photopic range, and retinal damage becomes possible at the highest levels. There is no colour vision and poor acuity at the scotopic end, and good colour vision and good acuity at the photopic end.

Figure 1.5. Relationship of luminance, pupil diameter and visual function. Dashed curves represent measured stimulus response in single primate rod and cone.

around 25 per cent of those incident on the cornea, though the signal transmitted to the brain suggests the arrival of around 5 per
cent. Yet rods have been shown to be highly sensitive and even
signalling the absorption of single photons. Experimental estimates
show that one scotopic troland corresponds to about four effective
photon absorptions per second. The relationship of luminance, pupil
diameter, and retinal luminance over the entire visible range is
shown in Fig. 1.5.
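These conversions can be put together in a short worked example; the night-scene luminance chosen is an assumed illustrative value, and the four-absorptions-per-second figure is the estimate quoted above.

```python
import math

# Worked example: retinal illuminance in trolands is luminance (cd/m2)
# times pupil area (mm2); the text's estimate of ~4 effective photon
# absorptions per second per scotopic troland is then applied. The
# scene luminance below is an assumed illustrative value.

def trolands(luminance_cd_m2, pupil_diameter_mm):
    pupil_area_mm2 = math.pi * (pupil_diameter_mm / 2.0) ** 2
    return luminance_cd_m2 * pupil_area_mm2

L_NIGHT = 1e-3   # cd/m2, assumed low night-time scene luminance
PUPIL = 7.0      # mm, fully dark-adapted pupil (Sec. 1.2)

td = trolands(L_NIGHT, PUPIL)
absorptions = 4.0 * td   # ~4 effective absorptions/s per scotopic td
print(f"{td:.3f} scotopic td -> ~{absorptions:.2f} absorptions/s")
```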
1.4.2 Threshold vs Intensity Functions & Contrast
Threshold vs intensity function in respect of photopic
(cone) vision referred to as the Weber-Fechner function is shown in
Fig. 1.6. The function can be put down to a fair approximation in
the form:

L / L0  k ( L  L0 ) n (1.1)

Where L is the incremental luminance as a function of the


luminance background L, and L0 is the threshold luminance
increment for a dark luminance value L0 which is just at the
threshold vision of the eye. Power n has a value from 0.5 to 1 while
k is a constant. When L>>L0 i.e., beyond a certain value of retinal
luminance and n = 1, L/L is a constant. The constancy of L/L

Figure 1.6. Threshold vs intensity function in respect of photopic vision.
known as Weber’s Law explains that contrast remains constant with
changes of luminance in an observable scene above a certain
minimal level of retinal luminance. It has to be noted that Weber’s
Law is operative for luminances as reflected from the observable
scene and analysed by the eye-brain system. The perception of light
sources and brightness as such is in addition to the operation of
the Weber’s Law.
Various definitions of contrast are in use, but as all these
are based on a ratio, they all yield invariance of contrast from a
change in illumination level. A threshold-vs-intensity curve for rod
vision is shown in Fig. 1.7. It can be observed that rods unlike
cones are saturated by steady backgrounds as a consequence of
their high sensitivity and lack of gain control. At the lower light
levels, they are known to signal the arrival of even single photons.
Obviously, because of their low spatial resolution and saturation,
their contribution to daytime vision is rather insignificant as against
cones.
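Since all the definitions in use are ratios, the invariance of contrast under a change of illumination can be illustrated with a short computational sketch (the reflectance and illumination values are illustrative, and the Weber definition of contrast is assumed):

```python
def weber_contrast(l_target, l_background):
    """Weber contrast of a target against a uniform background."""
    return (l_target - l_background) / l_background

# Illustrative reflectances: target 0.4, background 0.2.
for illumination in (10.0, 1000.0):      # two illumination levels, arbitrary units
    l_t = 0.4 * illumination             # luminance scales with illumination
    l_b = 0.2 * illumination
    print(illumination, weber_contrast(l_t, l_b))   # contrast is 1.0 at both levels
```

Scaling the illumination scales target and background luminance together, so the ratio, and hence the contrast, is unchanged.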

1.4.3 Colour
As indicated earlier, the colour sensation is picked up by
three independent types of cone photoreceptors with the spectral
characteristics as shown in Fig. 1.3. The three cone types are
designated blue (B ), green (G ), and red (R). Constancy of colour has
also to be sensed in the same way as the constancy of contrast with changes in illumination or in its spectral content. The sensitivities of the three types of cones are adjusted in such a way that the response ratio when adapted to coloured light is the same as that produced in white light. Chromatic adaptation is also necessary for correct perception of object colour. Changes in spectral illumination can really be significant, as shown by the reflectances observed from a surface facing away from the sun, or directed at it (Fig. 1.8). Though perfect colour constancy may not be possible, it is achieved over a good deal of spectral variation as indicated by colorimetric studies.

Figure 1.7. Threshold vs intensity function in respect of scotopic vision.

Colorimetric units are represented by a vector in three-dimensional colour space, plotted along three axes representing the tristimulus values that indirectly define how the light is registered by the three cone types: blue, green and red. As it is the relative amounts of colour along each stimulus that are of interest, it is sufficient to specify the chromaticity of a light stimulus with only two of the three chromaticity coordinates to record the chromatic information.
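The reduction from three tristimulus values to two chromaticity coordinates can be sketched as follows (the normalisation is the standard one; the sample tristimulus values are hypothetical):

```python
def chromaticity(t1, t2, t3):
    """Two chromaticity coordinates from three tristimulus values.

    The normalised values sum to one, so the third coordinate is
    redundant: two coordinates record the chromatic information."""
    total = t1 + t2 + t3
    return t1 / total, t2 / total

# Hypothetical tristimulus values of a light stimulus.
x, y = chromaticity(41.2, 35.8, 23.0)
print(x, y, 1.0 - x - y)   # the third coordinate is recovered from the other two
```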
Figure 1.8. Relative spectral response by a reflecting surface facing away and facing towards the sun.

1.5 IMPLICATIONS FOR NIGHT VISION


It is obvious that the naked eye responds better at low
light levels as obtained under moonlight or starlight at pupil
diameters of 5 mm and more, utilizing scotopic vision (rods as
sensors). Yet as visual acuity is best with cone vision and at higher
light levels, it becomes imperative to come to at least that level of
luminance in the scene as is essential for desired spatial location of
objects in the object-scene (Fig. 1.5). Practical studies of the line
transfer function also show greater departure from the diffraction
image of a line at pupil size of 4 mm and above and better matching
around 2 mm. This could be achieved by either illuminating the
night scene artificially or by intensifying the image of the object-
scene. Both alternatives create a hybrid situation for visual adaption
as the observer is in a dark area where the eye is adapted for rod-
vision, but looking on a scene that is either illuminated artificially
(for a few seconds or minutes) or whose image is intensified. To get the best results, therefore, training for sustained foveal vision under overall dark conditions may be essential. As optic flow and stereopsis help better understanding, binocular observation is preferable.
Earlier designs also visualised increase of instrument
optic apertures in such a fashion that the illuminance at the eye
pupil could be increased manifold. Such devices did play a significant
role under dawn and dusk conditions in military operations.
Astronomical telescopes also tend to increase their apertures to
observe the faintest stars in the image plane directly or for recording
on photographic plates or by utilizing image intensifiers or charge
coupled devices (CCD) with appropriate sensors for the desired
region of the spectrum.
REFERENCES
1. Helmholtz, H.V. Helmholtz's Treatise on Physiological Optics, Vol. 1. Translated from the 3rd German edition, ed. by Southall, J.P.C. (Rochester, N.Y.: Optical Society of America, 1924).
2. Lawrance, L. Visual Optics and Sight Testing, 3rd ed. (London:
School of Optics, 1926.) p.452.
3. Walter, G.D. (Ed). The Eyes and Vision, Chapter 12, Handbook
of Optics. (McGraw Hill Book Company).
4. Forrest, J. The Recognition of Ocular Disease. (London: The
Hattan Press Ltd).
5. Livingstone, M. & Hubel, D. "Segregation of Form, Color, Movement and Depth". Science, vol 240, (1988), pp. 740-49.
6. Waldman, G. & Wootton, J. Electro-optical Systems Performance
Modeling. (Artech House, 1993).
CHAPTER 2

SEARCH & ACQUISITION

2.1 SEARCH
As is by now obvious, the eye is, par excellence, a spatial-location and movement-detection instrument under conditions of varying contrast, colour and resolution, dependent on differing levels of illumination and their spectral content. Having observed a scene, the need arises to search for objects of interest.
Thus a species, say a frog, would like to know about small organisms
like the fly which it can eat and simultaneously be alert about the
predators in its field of view. At a higher level, the task of a human
being though similar is more elaborate. The humans also search
and acquire the targets of their interest for desired interaction or
avoidance. Refined search and acquisition has led to the evolution
of a large number of techniques and utilization of parts of the entire
electromagnetic spectrum beyond the capabilities of the human eye.
As such it would be of interest to know about the search and
acquisition techniques that are adopted by the human eye and by
the instruments that we are dependent on.
The image of a scene is stabilized on the retina by reflex
movements of the eye to balance its involuntary movements of high
frequency tremor, low speed drift and flicks, all of low amplitude,
even when an observer is consciously trying to fixate on a given
point. This is presumably necessitated as the rods and cones get
desensitized if the illuminance of the light falling on them is
absolutely unchanging. To make a search, the eye jumps from one
fixation point to another, dwelling momentarily on each fixation
point after each jump. The jump called a saccade has a definite
amplitude. Search-time would be excessive if the dwell-time after
each saccade is long and the saccades are small. It has been
experimentally observed that if the observed sector is larger, the
observer makes larger saccades between fixations and dwells a
shorter time at each fixation[1]. Empirical formulae correlating
fixation (glimpse) time and the field have been reported[2] as

t = 0.6836 θ^(–0.2132)    (2.1)

where t is the average fixation time, θ is the search sector in degrees, and

s = 0.152 θ^(0.9127)    (2.2)

where s is the average saccadic size.


In another approach, both the fixation time and the saccadic size may be considered to be fixed, i.e., the fixation time as 0.3 s and the saccadic size as 5° [3]. Then the time t_s to completely search a sector α degrees by β degrees is given by

t_s = 0.3 (α/5)(β/5)    (2.3)

These approximations seem to be reasonable for search sectors from 15° to 45° [4].
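Eqn (2.3) amounts to one fixation of 0.3 s for every 5° × 5° cell of the sector, and may be sketched as follows (the function and parameter names are ours):

```python
def search_time(alpha_deg, beta_deg, fixation_s=0.3, saccade_deg=5.0):
    """Time to completely search an alpha x beta degree sector, Eqn (2.3):
    one fixation per saccade-sized cell of the sector."""
    return fixation_s * (alpha_deg / saccade_deg) * (beta_deg / saccade_deg)

# A 45 deg x 15 deg sector: 0.3 * 9 * 3 = 8.1 s.
print(search_time(45.0, 15.0))
```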
This has a particularly interesting corollary in vision optics. Thus if the episcopes in a battle tank, which are generally of limited field of view, are not properly aligned and juxtaposed with respect to each other for the observation of a wide-angle scene at a single go, i.e., with head and eye movement at the same place, the search time is bound to be longer and quite a few visual cues are likely to be missed. This was observed to be true of earlier designs for episcopic vision, which had led to battle failures.

2.2 ACQUISITION
Once a search has been completed and it is desired to
acquire the target, it is found that acquisition is possible at various
levels. Thus while taking an early morning walk in an open space,
one may observe at a distance slight movement at first, and not be
sure about the object. Once the object is a little nearer, one may be
able to decide that it is a human being, and once the human being
is still nearer one can recognise the face and identify the person. A
similar situation arises in battlefield conditions also, wherein one
may acquire some target based on its movement, or its lack of fit with the background, but on closer observation may identify it as an
object of interest and subsequently recognize it as a tank, heavy
vehicle or a light vehicle. Further closer observation would
reveal the exact type of vehicle and whether it is one’s own or
enemy’s. The range of acquisition is considerably increased by optical
instruments during daytime while the night vision electro-optical
instruments make it possible by the night. While there are many
parameters in the instrument that decide the acquisition range and
subsequent detection, design factors like overall magnification, field
of view, contrast rendition and quick search ability are more
important. It will be appreciated that a priori knowledge helps a great deal in deciding whether a target is to be engaged as soon as it is acquired, or whether one has to wait for further identification.
With this background it is now possible to decide on the
levels of acquisition. The standard terms used are:
(a) Detection
The term implies that the observer in his search has located
an object of interest in the field of view of the vision system
that is being used.
(b) Recognition
This would mean that the observer is able to decide on the
class to which the object belongs, e.g., a tank, a heavy
vehicle, a group of people and the like.

(c) Identification
At this level of acquisition, one should be able to indicate
the type of the object, i.e., which type of tank, vehicle, or
the number of people in a group. An important military
requirement would be identification of friend or foe (IFF).

As the vision cues during daytime (like shading; colours, their hues and variation; better sensitivity to stereopsis and optic flow) are far richer except in extremely bad weather and fog, the levels of acquisition do not have the same relevance as during night or when a visual scene is displayed on a monitor. The definitions of the levels of acquisition are therefore more concerned with image intensification or thermal mapping. We will therefore first summarise
parameters that lead to acquisition. Acquisition has been variously
discussed and found broadly to depend on search time, fixation
time, and the type of vision instrumentation used and their
characteristics. Models that have been used for detection or
acquisition proceed on the premise that the signal strength from
target vis-a-vis its background should equal or exceed the detection
limits of the eye for a specified contrast ratio between target and
background with variations in size, shape and luminance.
2.3 BLACKWELL’S APPROACH
Blackwell (1946) conducted an elaborate set of
experiments to determine the detection capability of humans as a function of size, contrast, and luminance. Around 4,50,000 observations of circular targets against uniform backgrounds, both light on dark and dark on bright, made by small groups out of 19 young observers, mostly with an exposure time of 6 seconds, were statistically analysed. The results were expressed in terms of contrast
threshold that was necessary for a detection probability of 50 per
cent. The following conclusions can be drawn from the data for
near foveal vision randomly at any point 3° off from the vision axis
for an exposure time of 6 seconds (i.e., involving small search)[5].
(a) The contrast threshold values decreased with increase in target
size or target brightness.
(b) The contrast threshold values do not change appreciably at
larger target angular size and at higher illuminance levels.
Analysis showed that foveal vision was used at high brightness
and parafoveal at low brightness. These conclusions can also
be drawn from the later data presented in 1969 by Blackwell
and Taylor[6] with regard to direct line-of-sight detection for exposure times of one-third of a second, involving no search. Obviously, the first set of experiments compiled the
data due to excitation of the fovea as well as a limited area of
the foveal region (less than a degree). Experimental data
beyond 3° seems to be limited.
The Blackwell data was later best-fitted into empirical equations by Waldman, et al.[7] as

log C_t = 0.075 + 1.48×10⁻⁴ (log L − 2.96 − 1.025 log α)² − 0.601 log α − 1.04    (2.4)

for the high light level region (photopic vision), and

log C_t = 0.1047 (log L − 1.991)² − 1.823 − 1.64 log α    (2.5)

for the low light level region (scotopic vision).


The two regions of light levels were divided at about 7×10⁻⁴ foot-lambert.
Here C_t is the contrast at threshold, L is the luminance of the target against a uniform background (fL), and α is the target size in angular units (minutes).
Work has continued to improve on these empirical
equations to include parameters like wider fields (beyond 3°) and
noise as important inputs[8]. This approach enables simulation of
electro-optical systems to a better degree of predictability vis–a–vis
actual field performance. The model designers have to combine
detection with search based on a great deal of empirical data on
contrast thresholds, response times, fixation times, saccadic size
etc, for a specified target and its backgrounds besides incorporating
the human factors and parameters of instrument design that are
inherent in observation. Though many models have been developed
using modern computers, the predictability has yet to reach a
standard for reliability.
Research based on the above approach leads to modelling
for acquisition and also suggests that in experimentation an
equivalent disc object could also reproduce the behaviour of a given
object of military interest in the field. There is experimental evidence to support the idea that detection of smaller and smaller discs can simulate observing the details of a particular object, i.e., simulate detection, recognition and identification, particularly for imaging through intensifiers or thermal imagers. Also, where one is aware of the direction of likely appearance of the object, the aim may be restricted to knowing whether this is an object of interest. Air
Standardization Agreement of 1976 sets minimum ground object
sizes required in imagery for various levels of acquisition of various
targets.
2.4 JOHNSON CRITERIA
Following a suggestion in the early 50s from Coleman that one might establish a relationship between a real target against a
background and a target made of contrasting line-pairs, Johnson
(1958) decided to extend the spatial frequency approach to night
vision systems[9]. His approach involved classification of models of
objects of interest based on their silhouettes, shape and equivalent
area blobs (for detection) set alongside a bar-test pattern with
matching contrast and observing these from a given distance, as
the illumination in the test area was increased from zero. It was
possible to conclude based on a data of some 20,000 observations
that a relationship existed between the number of lines resolved at
the target and the corresponding decisions of detection, recognition
and identification. The targets tested were truck, M-48 tank, Stalin tank, Centurion tank, half-track, jeep, command car, standing soldier, and 105 howitzer. It was recorded that there was a certain
uniformity in observation regarding line-pairs for a critical target
dimension. The following data was derived for resolution per
minimum dimension across the complete object[10]:
• Detection has an average of 1.0 ± 0.25 line-pairs
• Orientation has an average of 1.4 ± 0.35 line-pairs
• Recognition has an average of 4.0 ± 0.8 line-pairs and,
• Identification has an average of 6.4 ± 1.5 line-pairs
Figure 2.1 illustrates the Johnson approach schematically.

The relationship can be extended to TV lines per minimum


object dimension where detection, recognition and identification
would have 2, 8, and 12.8 TV lines respectively. Johnson criteria
could be easily appreciated both in the laboratory and in the field
and hence have been very widely used all over the world, notwithstanding the fact that these criteria were not completely applicable to different viewing angles, had been based on the threshold of vision, and did not allow for calculation of ranges at different probabilities. This approach lends itself to easy
interpretation in the spatial domain where, when an optical chain is understood to be a linear system, the system performance can easily be predicted from the MTF values of the objective, the image intensifier and its subunits, yielding the answer in terms of line-pairs per mm for the entire system at any given contrast value. Using these criteria, attempts were also made to assess search effectiveness and to arrive at a trade-off of gain, contrast and resolution against response times in search operations through a night vision system. This work showed that gain was much more significant for search than for static viewing.
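As a worked illustration of the criteria, the line-pair counts can be converted into the angular subtense of one line-pair that the complete system must resolve; the target dimension and range below are illustrative, not drawn from Johnson's data:

```python
# Average line-pairs across the minimum target dimension for each
# level of acquisition, from the Johnson criteria above.
JOHNSON_LINE_PAIRS = {"detection": 1.0, "orientation": 1.4,
                      "recognition": 4.0, "identification": 6.4}

def line_pair_angle_mrad(min_dim_m, range_m, level):
    """Angular subtense (milliradian) of one line-pair to be resolved
    at the target for the given level of acquisition."""
    target_angle_mrad = 1000.0 * min_dim_m / range_m   # small-angle approximation
    return target_angle_mrad / JOHNSON_LINE_PAIRS[level]

# Illustrative target: minimum dimension 2.3 m, observed at 2 km.
for level in ("detection", "recognition", "identification"):
    print(level, line_pair_angle_mrad(2.3, 2000.0, level))
```

Identification thus demands a system resolution some 6.4 times finer than bare detection of the same target at the same range.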
2.5 DISPLAY SIGNAL-TO-NOISE RATIO
As a good part of the night vision imagery is through a
video display, one may emphasize signal-to-noise ratio in the signal
to the eye from the display. To assess this aspect it may be assumed
that the resolution of the system as a whole is such that in an
equivalent direct vision system, the object of interest could have
been easily resolved. Next, the object of interest could be thought of
as an isolated rectangular target or, to be in more consonance with the Johnson criteria, as a periodic bar chart[11].

Figure 2.1. Equivalent bar targets for various field targets.

The signal-to-noise ratio (SNR_Di) in the two cases could be defined as

SNR_Di = [2 t Δf (a/A)]^(1/2) SNR_v    (2.6)

where
t = integration time of the eye
Δf = video bandwidth
a = area of the target in the image
A = area of the field of view
SNR_v = signal-to-noise ratio in the video signal
and for a periodic target
SNR_Di = [(2 t Δf /α)(U/N²)]^(1/2) SNR_v    (2.7)

where
α = displayed horizontal-to-vertical ratio
U = bar length-to-width ratio
N = bar pattern spatial frequency (lines/picture height)
These expressions in a realistic case could be modified
by involving the MTF of the system. The point to note is that detection
experiments provided a value of around 3 for the SNR_Di at the threshold for 50 per cent detection probability. The value appeared to
vary only slightly for a wide range of rectangular shapes and sizes
and also for squares. The periodic bar target showed a variation for
both spatial frequency and length-to-width ratios of the patterns.
Further experiments in recognition and identification of military
vehicles followed theoretical calculations of the SNR_Di for Johnson's
equivalent bar pattern and indicated the value as 3.3 to 5.0 for
recognition and 5.2 to 6.8 for identification against various
backgrounds. The variability of values to such a large extent suggests
that this parameter is not a very good general performance measure
though one could draw on the minimum values that may be
necessary for good performance.
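Eqn (2.6) can be exercised numerically against the threshold of about 3 noted above (all numerical inputs are illustrative):

```python
from math import sqrt

def display_snr(snr_video, t_eye_s, bandwidth_hz, target_area, field_area):
    """Display signal-to-noise ratio for an isolated target, Eqn (2.6)."""
    return sqrt(2.0 * t_eye_s * bandwidth_hz * (target_area / field_area)) * snr_video

# Illustrative inputs: 0.2 s eye integration time, 5 MHz video bandwidth,
# target occupying 1/10000 of the field of view, unity video SNR.
snr = display_snr(1.0, 0.2, 5.0e6, 1.0, 1.0e4)
print(snr, snr >= 3.0)   # compare with the ~3 threshold for 50 per cent detection
```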
2.6 DETECTION WITH TARGET MOVEMENT
Detection of movement is a parameter of importance both
by day and night, and the expected sensitivity to movement is an
inbuilt faculty of the human eye-brain system. In field conditions
this would imply our sensitivity to the angular movement of a likely
target or object of interest. While the detection probability is
enhanced, visual acuity would drop, i.e., perception or detection
would be much easier while recognition and identification much
more difficult. Experiments have led to the following empirical
equation which shows the relationship of the contrast of a moving
object (Cm ) to that of the same object when stationary [12]:

C_m = C (1 + 0.45 w²)    (2.8)
where
C = contrast while stationary
w = target angular speed in degrees per second, for speeds up to 5° per second.
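Eqn (2.8) may be written directly as a function (the sample values are illustrative):

```python
def moving_contrast(c_static, w_deg_per_s):
    """Contrast required of a moving object, Eqn (2.8); valid for
    target angular speeds up to about 5 degrees per second."""
    return c_static * (1.0 + 0.45 * w_deg_per_s ** 2)

# Illustrative: an object of 2 per cent static contrast moving at 2 deg/s
# requires about 5.6 per cent contrast for equivalent detection.
print(moving_contrast(0.02, 2.0))
```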
2.7 PROBABILITIES OF ACQUISITION
Detection probability can be calculated as a product of a
number of probabilities based on the variables in the observer – object
of interest scene. The factors could be the evaluation of a target
embedded in its background, intervening atmosphere, clutter,
obscuration, the capabilities of the electro-optical system used, and
the display parameters. In addition, human factors, such as training,
search, and establishing a line of sight between the target and the
sensor also matter. Further recognition and identification could be
done by involving Johnson criteria. All these parameters and possibly
more have been selectively incorporated in various models with
appropriate algorithms to arrive at a possible prediction of the field
conditions. The approach has been of interest to many workers,
and models have been developed based on image intensifier and
forward looking infrared systems. While a universal model is far from
developed, one can possibly select a model for advance understanding
limited to certain parameters, such as atmospherics or variations in
instrument design. It is interesting to note that this approach is rather
late in the day for most of the systems already developed, but may
have a significance in sophisticated futuristic developments.
2.8 CONTRAST & ACQUISITION
It is now obvious that consideration of contrast really leads
to the detection and acquisition of an object of interest. It may therefore
be worthwhile to directly interpret acquisition in terms of contrast at
the object and its transfer to the contrast in the image as seen by the
eye. The image contrast factor C_i would be dependent (if the object structure is to be perceptible) on the ratio of the random fluctuations σ(n) during the observation time to the mean number of quanta ñ received by the eye[13]:
C_i ∝ σ(n)/ñ    (2.9)

The factor of perceptibility Kr would depend on the perception


of the object and be a common value for simpler structures like rasters,
bar-charts, Landolt’s rings, disc objects and the like and would be the
constant of proportionality, i.e.,

C_i = K_r σ(n)/ñ    (2.10)

The structural content is carried in the factor σ(n)/ñ, while the perception, a result of the eye-brain combination, is carried in the factor K_r.
Obviously, in a real situation one would like to find out the
relationship of the perception with the object contrast, Co . It is not
difficult to do so, if one knows the modulation transfer function for a
given spatial frequency a and defines it as Tm (a) for the complete
electro-optical system. The equation can be put down as

C_o = [K_r / T_m(a)] · σ(n)(a)/ñ(a)    (2.11)

where σ(n)(a) is the random fluctuation and ñ(a) is the mean number
of quanta received by the eye during the observation time at a spatial
frequency (a). Apparently, the total perceptibility would be a
summation of all such contrast values at all spatial frequencies of
our interest.
The structural content of the image at the retina, i.e., σ(n)/ñ, would arise as a result of the total system noise-to-signal ratio and could best be assessed at the retina itself, if it were possible. For practical purposes, it can be approximated most closely by measuring the output signal-to-noise ratio of an imaging system when its gain is such that it makes only the noise detectable.
Variants of this measure, in terms of equivalent background
illumination, background dark current, or the noise equivalent power
are different definitions in varying context for different detectors to
arrive at the same parameter which hopefully would help us in
predetermining what we are looking for.
One can argue from the fundamental reasoning that the
lowest possible noise-to-signal ratios could be achieved if every
quantum carrying some structural information about the object
could be processed by the sensor system so that ultimately each
such quantum could give an identifiable signal. In such a case one
could define the quantum detection efficiency qde by a factor F as

F = [σ(n)/ñ]²out / [σ(n)/ñ]²in = 1/qde    (2.12)

It is a highly remarkable property of our visual system


that the same relationship holds true for perceptibility experiments
irrespective of the nature of quanta, e.g., for x-ray photons. At retinal
level in each case, it is conversion of visible photons into discrete
nerve signals that leads to perception.
If σ(n) tends to become large in comparison to ñ, i.e., the fluctuations are far in excess of the mean number of photons received by the eye, it is the fluctuations which become dominant and the structural content gets suppressed. Experiments show that if this factor is more than one-third, the relative fluctuations become perceptible, destroying the structural content of the object over a wide range of luminance.
Considering Eqns (2.11) and (2.12), one could work out the object contrast in terms of the quantum detection efficiency as under, summing over all the spatial frequencies of interest:

C_o = (K_r √F / T_m) [σ(n)/ñ]in    (2.13)

This approach has the advantage that when one is limited to simpler structures like bars, lines, rasters, discs or the like usually used as test objects, the perceptibility factor can be treated as a constant and the contrast at the object conveniently evaluated from the modulation transfer function of the system, the qde, and the input noise-to-signal ratio.
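The role of qde can be sketched numerically: by Eqn (2.12) the squared noise-to-signal ratio grows by F = 1/qde through the sensor, and the output can be checked against the one-third limit beyond which fluctuations destroy the structural content (the sample numbers are illustrative):

```python
from math import sqrt

def output_fluctuation_ratio(input_ratio, qde):
    """Noise-to-signal ratio after an imaging stage.

    From Eqn (2.12), F = (out/in)**2 = 1/qde, so the fluctuation
    ratio degrades by 1/sqrt(qde) through a stage of quantum
    detection efficiency qde."""
    return input_ratio / sqrt(qde)

# Illustrative: 5 per cent input fluctuation through a sensor of 10 per cent qde.
ratio_out = output_fluctuation_ratio(0.05, 0.1)
print(ratio_out, ratio_out <= 1.0 / 3.0)   # structure survives below about 1/3
```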
If one were to critically analyse the definitions and terms
used both in the visible and the infrared regions even at low light
levels, one would find that the attempt is to get at the ultimate
perception through all the parameters that go to define a sensor
and a complete system. That it has not been completely done and is
still a matter of research is evident in the fact that the success or
the failure of a given system with a given sensor under actual field
conditions cannot be predicted accurately. The structural factor σ(n)/ñ that is accepted by the system is also fluctuating as a whole at the entrance aperture.
One can normally use qde to define the performance of
imaging sensors in general—be it photodetectors, image tubes or
the like. In the case of photopic imaging the number of quanta
available is so large that statistical fluctuations are negligible and
one can treat the factor σ(n)/ñ as a constant. The imaging performance thus depends upon the modulation transfer function, T_m.
The qde is not of much concern in direct vision due to the abundance
of quanta with structural information. In case of low light level
imaging, the number of quanta available is so small that performance
is governed by statistical fluctuations. While σ(n)/ñ determines the
image contrast, the improvement in MTF of the system is not going
to play that vital a role as in the case of normal day instruments. In
case of x-ray, ultrasound or NMR imaging, there are undoubtedly
fundamental statistical limits but the method of calculating these
limits is not obvious. Due to various other associated phenomena, like scattering, differential absorption, etc., the value becomes so complicated that theoretical prediction is difficult and one resorts to signal-to-noise ratio. In these cases also, MTF (T_m) is not very important, and one works out image processing methods so that the contrast rendition is improved to detect contrasts as low as 0.001 per cent, although resolution may be as low as a few lines/mm.
The goal of thwarting the natural conditions which limit the usefulness of both vision and photography in extracting information from scenes of low apparent object contrast has been pursued with only limited success for many years.
The use of short wavelength cutoff filters, for example, combined
with the long wavelength end extension of photographic film
sensitivity has helped to penetrate the veil in cases where contrast
has been reduced by wavelength-dependent (Rayleigh and Mie) scattering. The introduction of infrared-sensitive emulsions carried this photographic approach as far as it could.
Other types of imaging systems operating both in the
visible and near-infrared portions of the spectrum (classified as
electro-optical imaging systems) and relatively long wavelength
forward-looking infrared (FLIR) sensor systems have been developed
which can detect, quantify and display significantly lower contrasts
than those possible with conventional photography. The common
properties of these systems which allow them to perform at low
contrast include wide dynamic range and a high level of linearity.
These properties in turn allow for subtraction of an absolute dc
background level; effectively ac coupling carried to an ultimate
extent. Signal-to-noise for the system as a whole thus becomes the
most relevant parameter.

REFERENCES
1. Enoch, J.P. "Effect of the Size of a Complex Display Upon Visual Search". J. Opt. Soc. Am. vol 49, (1959), pp. 280-86.
2. Waldman, G.; Wootton, J.; Hobson, G. & Luetkemeyer, K,
"A Normalised Clutter Measure for Images". Comp. Vis.,
Graphics and Image Pro. vol. 42, (1988), pp. 137-156.
3. RCA Electro-optics Handbook. Tech Series, EOH –11, (RCA Solid
States Division, 1974).
4. Waldman, G. & Wootton, J, Electro-optical Systems Performance
Modeling . (Artech House, 1993).
5. Blackwell, H.R. "Contrast Threshold of the Human Eye". J.
Opt. Soc. Am. vol. 36, no.11, (1946), pp. 624-43.
6. Blackwell, H.R. & Taylor, J.R, Survey Of Laboratory Studies
of Visual Detection. NATO Seminar on Detection, Recognition.
and Identification of Line of Sight Targets. (The Hague,
Netherlands, 1969).
7. Waldman, G.; Wootton, J. & Hobson, G, "Visual Detection
with Search: An Empirical Model". IEEE Trans. on Systems,
Man & Cyber. vol. 21, (1991), pp. 596-606.
8. Overington, J. "Interaction of Vision with Optical Aids". J. Opt.
Soc. Am., vol. 63, (1973), no.9, pp. 1043-49.
9. Johnson, J, "Analysis of Image Forming Systems". Image
Intensifier Symposium. (Fort Belvoir VA . October 1958).
10. Wiseman, R.S. "Birth and Evolution of Visionics". SPIE, Infrared
Imaging. vol. 1689, (1997), pp. 66-74.
11. Rosell, F.A. & Willson, R.H. Recent Psychophysical Experiments
and the Display Signal-to-noise Ratio Concept. In Perception of Displayed Information. (New York: Plenum Press, 1973).
12. Petersen, H.E. & D.J. Dugas. "The Relative Importance of
Contrast and Motion in Visual Detection". Human Factors,
vol. 14, (1972), pp. 207-16.
13. Hradaynath, R. "Opto-electronic Imaging: The State of the Art".
in Proceedings of the International Symposium on Opto-electronic
Imaging, (New Delhi: Tata-McGraw Hill Publishing Co. Ltd.,
1985) pp. 19-33.
CHAPTER 3

THE ENVIRONMENT

3.1 INTRODUCTION
The environment has an important effect on the observation
of a target or an object of interest, the single most important
parameter being the atmosphere. The spectral distribution of the
radiation received at the surface of the earth is determined by the
constituents of the atmosphere, as also by any intervening
particulate matter. Observation at low angles, as is usually the case
in terrestrial observation, could further aggravate the problem.
Weather conditions, such as rain, snow, haze and fog, could reduce
the clarity of vision. Dust and sand thrown up by vehicular
movement, and various obscurants such as smoke, could be of
special significance in a battlefield environment. The presence of
pollutants resulting in smog could drastically reduce visibility down
to a few metres. Such conditions do make contrast rendition quite
difficult in the observation plane.
It is also well known that astronomical observations are
required to be suitably corrected experimentally and theoretically
to annul the effect of the intervening atmosphere besides choosing
correct locations for observation. The common visible effect observed
by the naked eye is the twinkling of the stars. On the surface of the
earth atmospheric variation in refractive index may lead to effects
like mirage and distortions of varied nature at noon time or in sandy
terrain. Yet for quite some time, the atmosphere does retain a
reasonably uniform refractive index value and permits good vision,
so much so that the refractive index value is generally assumed to
be unity, i.e., the same as for vacuum, in most calculations. The vision
could be excellent on a perfect day as witnessed in Sub-Himalayan
terrain which is reasonably free of pollutants and non-atmospheric
particulate matter.

[Figure 3.1 plots spectral irradiance (W m⁻² μm⁻¹) against wavelength from 0 to 3.0 μm, showing the solar irradiance outside the atmosphere, the solar irradiance at sea level, and the curve for a blackbody at 5900 °K, with absorption bands due to O₃, O₂, H₂O and CO₂ marked and the visible spectrum indicated.]
Figure 3.1. Spectral radiance of the sun at zenith on earth
The atmospheric effects manifest themselves through
absorption, scattering, emission, turbulence and secondary sources
of radiation, such as the skylight and reflections from large areas
like the clouds and the water masses on the earth. During night,
such conditions could be relevant to the moonlight and starlight
illumination. The profile of the spectral radiance of the sun at mean
earth-sun separation is shown in Fig. 3.1[1]. It also shows the
absorption at sea level due to the atmospheric constituents.
Obviously transmission is quite significant for the visible region as
also for the near infrared, though absorption bands due to water
and carbon dioxide do make significant inroads. Likewise, Fig. 3.2
shows transmission in percentage terms extending right up to 16 μm,
with good transmission in the 3–5 μm and 8–14 μm bands. These
good regions of transmission are also referred to as the atmospheric
windows.
As we are concerned more with terrestrial, i.e., horizontal
transmittance it would be interesting to look at Fig. 3.3, which shows
transmittance at sea level containing 5.7 mm precipitable water at
26 °C over an atmospheric path of 1000 ft (≈305 m)[2,3]. This
graphic data also confirms good transmittance in the visible, near
infrared, 3–5 μm, and 8–14 μm bands. The data for horizontal
transmission would certainly vary significantly dependent on the
local condition of observation, but the spectral nature of
transmission would by and large be similar.

[Figure 3.2 plots atmospheric transmission (per cent) against wavelength from 1 μm to 16 μm.]
Figure 3.2. Atmospheric transmittance vs wavelength

3.2 ATMOSPHERIC ABSORPTION & SCATTERING


The gases that constitute the atmosphere absorb
incoming radiation to the planet dependent on their molecular
constituents and their characteristics in the spectral bands related
to their structure. These gaseous constituents in order of importance
are: water vapour, carbon dioxide, ozone, nitrous oxide, carbon
monoxide, oxygen, methane and nitrogen. Water vapour and carbon

[Figure 3.3 plots transmittance (per cent) against wavelength in three panels covering 0.5–4.0 μm, 4.0–9.5 μm and 10–25 μm.]
Figure 3.3. Transmission over ~ 305 m (1000 ft) horizontal air path

dioxide are the most important molecules in this respect, while ozone
plays a significant part in the absorption of ultraviolet and
in the 9–10 μm region in the upper layers of the
atmosphere. The effect of absorption is attenuation of the signal
strength dependent on the wavelength of the incident light. Basically,
in absorption, the incident photon is absorbed by an atmospheric
molecule, causing a change in the molecule’s internal energy state.
The infrared and visible photons may just have adequate energy to
enable transition in the rotational or vibrational energy states of a
gas molecule. As the energy matching is more for the infrared,
absorption is not that significant in the visible region. The absorption
due to aerosol would depend on its density. While the energy taken
out of a beam of radiation by absorption contributes to the heating
of the air, the energy scattered by molecules, aerosol or cloud
droplets will be redistributed in the atmosphere. In addition to
absorption, the signal strength is further altered due to scattering
by air molecules, aerosol and other particulate matter present in
the atmosphere. The scattering effects are dependent on the particle
size and could be thought of in three categories. The first, where
the particle size is relatively small in comparison with the wavelength
of the incident light; second, where it is of the same order; and
third, where the particle size is relatively large in comparison with
the wavelength of the incident light. The relative sizes and their
density for important atmospheric constituents are indicated in
Table 3.1.

Table 3.1. Important atmospheric particles that cause scattering

Scattering type    Particle         Radius (μm)    Density (per cubic cm)
Rayleigh           Air molecules    10⁻⁴           10¹⁹
Mie/Rayleigh       Haze particles   10⁻²–1         10–10³
Mie                Fog droplets     1–10           10–10²
Non-selective      Rain drops       10²–10⁴        10⁻⁵–10⁻²

Air molecules, as will be observed, are relatively much smaller
in comparison to the wavelengths of light. Particles of this size
scatter light in all directions, thus reducing the signal strength
of the incident light while at the same time adding a small amount of
forward scatter. As the particle size increases to approximately a
quarter of a wavelength, the intensity of the scattered light
in the forward direction becomes more prominent, and much more
so when the particles are much larger than the wavelength of light.

In such situations, apart from reducing the signal strength,
unwanted scattered light in the forward direction is also focused in
the focal plane of an instrument system that might be observing an
event. The amount of scattered light that would be present would
also depend on the parameters of the instrument design, such as
its field of view, magnification, entrance aperture size, and its focal
length.
Though theoretical calculations are not that simple, an
approach to the problem is made by defining an attenuation or
extinction coefficient σ such that the transmission

t = t_o e^(−σR)                                            (3.1)

where t_o is the transmission through vacuum and σ is the attenuation
coefficient over a path length R in the atmosphere. One could also
state that the attenuation or extinction coefficient has an absolute
value proportionate to the inverse of the maximum range beyond
which there is no transmission of the incident light. This coefficient,
which takes into account losses due to both absorption and
scattering, is specific to a given wavelength and assumes a uniform
atmosphere over the path length R. In practice, a value of
this coefficient may be simulated by working out the coefficients
for small bands of wavelengths and then appropriately averaging
those values over the required spectral range. Experimentally, a
number of lasers at different wavelengths could be used for such
measurements. This attenuation coefficient would in turn have
contributions from absorption and scattering. Using subscripts a
and s respectively for absorption and scattering, we could write

σ = σ_a + σ_s                                              (3.2)
σ_s, in turn, would have contributions from different sizes
of particles: particles smaller than the wavelength of the incident
light, particles of the same order as the wavelength of light, and
particles of an order where the size is relatively much larger. Where
the particles are much smaller than the wavelength of light, the
problem can be addressed by following Rayleigh's theory of
scattering. In such cases the scattering coefficient can be shown
to be proportional to λ⁻⁴. The dispersion of scattering about the
scattering particle is generally symmetrical, showing equal forward
and backward scattering. This is true of air molecules. The second

set, where the particle size is of the same order as the wavelength
of the incident light, would primarily be due to aerosol. The
scattering in this case becomes a complex function of particle
size, shape, refractive index, scattering angle, and wavelength.
This could be addressed by utilizing the Mie theory of scattering.
In this case, the intensity of the scattered radiation becomes less
dependent on wavelength and more dependent on angle, with a
distinct peak in the forward direction. In the third group where
the particles are much larger than the wavelengths of the incident
light, the particles would behave like micro-optical components.
Thus their theoretical treatment could be essayed by utilizing the
concepts of geometrical optics. This type of scattering also referred
to as nonselective scattering or white light scattering (because of
lack of dependence of scattering on wavelength) or scattering in
the geometrical optics regime could explain scattering due to such
large particles as raindrops. Scattering intensity has still a strong
angular dependence with a strong peak in the forward direction.
3.2.1 Scattering due to Rain & Snow
According to Gilbertson[4], the scattering coefficient in
rainfall is independent of wavelength in the visible to far infrared
region of the spectrum and could be estimated by the equation

σ_s(rain) = 0.248 t^0.67                                   (3.3)

where σ_s(rain) is the scattering coefficient in km⁻¹, and t is the rainfall
rate in mm hr⁻¹.
More recent articles by Chimelis and others[5] give three
different formulae for the scattering coefficient due to rain, but all
four formulae are close enough and do not differ significantly.
Empirical relationships have also been developed for
snow based on experimental results, and it has been found that
the results tend to fall into two groups, one for snow in small needle-
shaped crystals and the other for larger plate-like crystals. The
relationships are as under[6]:

σ_s(snow) = 3.2 t^0.91    ..for small needle-shaped crystals    (3.4)

and

σ_s(snow) = 1.3 t^0.5     ..for larger plate-like crystals      (3.5)

where the rate of snow accumulation, expressed as the equivalent
liquid-water rate in mm/hr, is given by t.
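Equations 3.3 to 3.5 can be collected into a small helper; the example rate of 4 mm/hr is an assumed, illustrative value.

```python
def sigma_rain(t_mm_hr: float) -> float:
    """Eqn 3.3 (Gilbertson): rain scattering coefficient in km^-1."""
    return 0.248 * t_mm_hr ** 0.67

def sigma_snow(t_mm_hr: float, crystal: str = "needle") -> float:
    """Eqns 3.4 / 3.5: snow scattering coefficient in km^-1,
    for needle-shaped or plate-like crystals; t is the equivalent
    liquid-water rate in mm/hr."""
    if crystal == "needle":
        return 3.2 * t_mm_hr ** 0.91
    return 1.3 * t_mm_hr ** 0.5

# Moderate rain at an assumed 4 mm/hr:
print(round(sigma_rain(4.0), 3))   # ~0.628 km^-1
```

At equal liquid-water rates the needle-crystal snow coefficient is roughly an order of magnitude above that of rain, consistent with the much shorter visual ranges experienced in snowfall.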

3.2.2 Haze & Fog

While the relationships developed for the scattering coefficient
in rain and snow help to a reasonable extent in the estimation of
the attenuation or extinction coefficient under such conditions, it
may be relatively easier to adopt the concept of visibility, which serves
as a good substitute for the total attenuation coefficient due to
almost all the atmospheric weather variables, including haze and
fog as the major contributors. Visibility is obviously related to
contrast rendition and the contrast sensitivity of the human eye.
3.2.3 Visibility & Contrast
The terms visibility, visual range and meteorological range
all refer to the horizontal or slant range over which the apparent
contrast of an object against its horizon sky, or its background,
remains at least 2 per cent of its inherent contrast; i.e., if C_O is
the inherent contrast of the object and C_R the apparent contrast
of the object at a distance R, then C_R/C_O should be 2 per cent
or more:

C_R /C_O ≥ 2%                                              (3.6)
The contrast C_O at the object plane, i.e., at R = 0, may be
defined as

C_O = (L_O − L_BO)/L_BO                                    (3.7)

where L_O is the object luminance, i.e., flux per unit solid angle per
unit area or intensity per unit area, and L_BO is the background
luminance. The contrast C_R at the observation plane at a distance
R could be similarly defined as

C_R = (L_R − L_B)/L_B                                      (3.8)

where L_R is the luminance in the observation plane and L_B is the
background luminance in the same plane.
Equations (3.7) and (3.8) are interrelated as

L_R = L_O · e^(−σR)                                        (3.9)

L_B = L_BO · e^(−σR)                                       (3.10)

The above relationships follow from the fundamental
relation of Eqn (3.1), which can also be rewritten as Φ_R = Φ_O · e^(−σR),
where Φ_O is the flux radiated by the object and Φ_R is the flux received
at distance R, while σ is the attenuation coefficient over the same
path length.
These equations require to be modified to take into
account the luminance that is scattered into the line of sight by the
rest of the atmosphere. If this scattered luminance is L_in, the modified
equations are:

L R  L O . e   R  L in (3.11)

L B  L BO . e   R  L in (3.12)
Thus, we have
(L O  L BO ). e   R
CR  (3.13)
LB
LBO
and multiplying by
LBO , we have

L 
 Co  BO  e   R using Eqn (3.7) (3.14)
 LB 
If the object is viewed against the horizon sky, the
background remains more or less the same in both the object plane
and the observation plane. The above equation under such
conditions reduces to

C R  CO . e   R (3.15)
For targets against a terrestrial background, Eqn
(3.14) can be remodelled[7] as

C_R = C_O {1 + S (e^(σR) − 1)}⁻¹                           (3.16)

where S = L_m/L_BO is a quantity called the sky-to-ground ratio, L_m
being the horizon-sky luminance from the direction from which sunlight
is scattered along the line of sight, and L_BO has the same meaning as
defined earlier, i.e., the background luminance at the object plane.
The above equations help in evaluating contrast reduction
by the atmosphere and become more practical if related to visibility.

Assuming that the resolution of an object is not an
impediment in its detection, visibility could approximate to the range
at which a black object is just seen against the horizon sky, i.e., at
a contrast of 2 per cent. One could therefore rewrite Eqn (3.15)
as

C_R / C_O = 0.02 = e^(−σRv), where Rv is the visibility range.

This could also be expressed as

σ = 3.912 / Rv                                             (3.17)

This relationship is also explained by Fig. 3.4.
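A sketch combining Eqns 3.15 to 3.17: the reported visibility R_v fixes σ, which then propagates the inherent contrast to range R. Eqn 3.16 is used here in the form C_R = C_O[1 + S(e^(σR) − 1)]⁻¹ as reconstructed above, and all numerical inputs are illustrative.

```python
import math
from typing import Optional

def sigma_from_visibility(R_v_km: float) -> float:
    """Eqn 3.17: attenuation coefficient (km^-1) from the visibility range."""
    return 3.912 / R_v_km

def apparent_contrast(C_o: float, R_km: float, R_v_km: float,
                      S: Optional[float] = None) -> float:
    """Apparent contrast at range R for a given visibility R_v.
    S is None for the horizon-sky case (Eqn 3.15); otherwise S is the
    sky-to-ground ratio and Eqn 3.16 applies."""
    sigma = sigma_from_visibility(R_v_km)
    if S is None:
        return C_o * math.exp(-sigma * R_km)                    # Eqn 3.15
    return C_o / (1.0 + S * (math.exp(sigma * R_km) - 1.0))     # Eqn 3.16

# A black object (C_o = 1) viewed at the visibility range itself:
print(round(apparent_contrast(1.0, 5.0, 5.0), 3))   # 0.02, by definition
```

Against a terrestrial background with S > 1 the contrast falls off faster than the horizon-sky case, which is why the same target is harder to pick out against bright scattered sky-light over the ground.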
Visibility or visible range is recorded by meteorological
stations and is a practical guide in many tasks, such as the landing
and taking off of aircraft, military tactics and logistics, and as
[Figure 3.4 plots the atmospheric attenuation coefficient σ (km⁻¹, 0.01 to 5, logarithmic) against daylight visibility range R_v (km, 0.5 to 200), with bands marked from moderate fog through light fog, thin fog, haze, light haze, clear, very clear and exceptionally clear, down to the Rayleigh-scattering limit (~310 km).]

a useful parameter in weather prediction. Aerosol models
describe a clear and a hazy atmosphere corresponding to a
visibility of 23 km and 5 km, respectively. As the relationship
between σ and R_v is so direct, R_v could be introduced in all the
equations involving σ.
3.3 ATMOSPHERE MODELLING
The actual attenuation along a path from outside the
atmosphere to the point of interest on the surface can be predicted
by suitable modelling techniques to a reasonable degree of accuracy
using modern computers. Thus, five atmospheres corresponding
to tropical, mid-latitude summer, mid-latitude winter, subarctic
summer and subarctic winter with two aerosol models corresponding
to a visibility of 5 and 23 km for hazy and clear conditions have
been suitably worked out. Each of these atmospheres is different
in its temperature and pressure variations as also in absorbing gas
concentrations. The information is also available in the form of
prediction charts. It may further be augmented by giving attenuation
coefficients for the laser wavelengths referred to in Table 3.2.

Table 3.2. Laser wavelengths used

Laser type              Wavelength (μm)
Nitrogen                0.3371
Argon                   0.4880
Argon                   0.5145
Helium-neon             0.6328
Ruby                    0.6943
Gallium arsenide        0.86
Neodymium in glass      1.06
Erbium in glass         1.536
Helium-neon             3.39
Carbon dioxide          10.591
Water vapour            27.9
Hydrogen cyanide        337

A well-known model is Lowtran[8,9], covering a
spectral range from 0.25 μm to 28 μm. One of its later versions
has six atmospheres and eight aerosol models, allowing 48 combinations
for the selection of a user. Models have also been introduced for
battlefield obscurants to predict transmission through smoke due

to fire, missile smoke and for possible path lengths through smoke
and dust [10]. The software could also take into account scattering
due to clouds and fog.

Scattering in the 8–14 μm region is considerably less than
that in the visible region of the spectrum; and the principal
attenuation mechanism is molecular absorption, particularly that
due to water vapour. Thermal detection of an object versus its
background may not be affected by the path irradiance as it would
essentially be the same, unless one is viewing an airborne object
against a sky or a cloudy background.
3.4 INSTRUMENTS, NIGHT VISION & ATMOSPHERICS
The optical systems act as gatherers of radiant or
luminous energy and are so designed as to provide maximum
possible signal-to-noise ratio and the object-to-background contrast
ratio. The image may be presented as a whole as in an image
intensifier system or assembled as an electronic display either by
image plane scanning or object plane scanning. The object scene
may be scanned using a limited number of detectors either
sequentially, in parallel, or in both formats. Staring and focal
plane detector-arrays may simplify the process. As the progress in
the sensors, detection systems and related electronics has been
very significant in recent times, the devices are becoming more and
more noise free and more responsive, permitting one to assume
that the images are contrast-limited rather than noise-limited in
the optical domain, i.e., in visible, near infrared and thermal regions.
Atmospheric effects can be better calculated in terms of modulation
transfer function (MTF) and assuming linearity, one could estimate
the overall MTF of an observation system through the atmosphere.
This approach has led to practical results. Thus, in the case of
longer focal length systems providing greater angular magnification,
atmospheric effects could limit their resolution and blur small details
much more than the larger details. Improvements could be possible
by spatial frequency filtering and more so by introducing adaptive
optics. There is also the possibility that in sensor detection, the
weak scattered light may not be recorded due to the limitations of
the dynamic range of the sensor. Atmospheric distortions are bound
to be magnified. However, magnification beyond a certain point may
prove useless, though when atmospheric degradation is weak or
moderate an increase in magnification may improve the system

performance. Turbulence is more significant in the visible region
than in the thermal region or during the night.
As during the night-time turbulence is generally at a
minimum, the maximum attenuation will be due to atmospheric
absorption and scattering. The night-time range achieved would
depend on the visibility in the image intensifier systems and on the
amount of water vapour present in the case of thermal
instrumentation, apart from the optical considerations of the
instrument design. One might look for better MTF values in the
spatial frequencies of interest to the users and involve concepts
such as the minimum resolvable temperature difference for
improved thermal contrast in the thermal imaging systems.
Improvements in spatial frequency and in reduction of noise with
better quantum efficiency of photocathodes could give an edge to
image intensifier-based systems in conditions of rain and fog.
REFERENCES
1. Valley, S.L, Handbook of Geophysics and Space Environment.
(Airforce Cambridge Research Laboratories, Office of the
Aerospace Research, US Air Force. 1965: Also published by
McGraw Hill Book Co, New York, 1965).
2. "Infrared Components Brochure No. 67CM", (Goleta, CA: Santa
Barbara Research Center, 1967).
3. RCA Electro-optics. (Lancaster: RCA Corporation, Technical
Series (Section 7), EOH-II, Solid State Division, 1974).
4. Gilbertson, D.K, Study of Tactical Army Aircraft Landing
Systems. Technical Report-ECOM-03367-4, AD-477-727
(Alexandria, Va: Defence Documentation Center. Jan 1966).
5. Chimelis, V. "Extinction of CO2 Laser Radiation by Fog &
Rain". App. Opt. vol. 21, no. 18, (1982), p. 3367.
6. Seagraves, M.A, "Visible and Infrared Transmission through
Snow". SPIE Proc. on Atmospheric Effects on Electro-optical,
Infrared and Millimeter Wave System Performance. vol. 305,
Aug, (1981).
7. Middleton, W.E.K, Vision through the Atmosphere. (Toronto:
University of Toronto Press, 1952).
8. McClatchey, R.A. et al, Optical Properties of the Atmosphere.
AFCRL-71-0279. (Environmental Research Paper. 354
AD726116, May 10, 1971).

9. McClatchey, R.A. et al, Atmospheric Transmittance/Radiance:
Computer Code Lowtran 5. AFGL-TR-80-0067. Environmental
Research Paper 897. (Air Force Geophysics Laboratory at
Hanscom AFB, Massachusetts. Feb 21, 1980).
10. Waldman, G. & Wootton, J, Electro-optical Systems Performance
Modeling. (Artech House, 1993).
CHAPTER 4

NIGHT ILLUMINATION, REFLECTIVITIES &


BACKGROUND

4.1 NIGHT ILLUMINATION


Though atmospheric parameters may be an impediment
in detection and acquisition of an object or a target over significant
distances during night, other aspects of the environment, particularly
the relative contrast available at the imaging plane between the object
and its background, would be more significant in normal night
observation. Light in the visible spectrum, though never really extinct
during the night, certainly varies in intensity due to the presence of
moonlight or starlight under various environmental conditions.
Thus, the vision instrumentation for the night can be considered
to be quantum starved and appropriate ways and means have to be
adopted to make the best use of whatever quanta are available for
realisation of an understandable image. Scattered light during the
night can largely be ignored, as its intensity would
be far too low. During the day, the scattered light as also the skylight
(due to scattering and peaking in blue) though less intense could
still illuminate the hollows or obstructions and enable a better depth
in seeing shadows or the like. During the night, directional effects
may be more prominent. Thus, observation with the moon behind
the observer may give a better range and so also observation from
a higher level or an aircraft of the ground below under starlight.
Illumination under clouds could be a tricky affair. It could
sometimes reflect city lights and enable better illumination and at
other times it could totally block even the starlight. During a battle,
existence of gunfire, flares, and night illuminants used by an army
could give a much better chance of viewing the area with night
vision devices than could ordinarily be possible under normal
conditions. Even the leakage of light from within a closed tank could
give away its position from relatively large distances.

Atmospheric conditions naturally cause variations in


illuminance on the earth’s surface both during day and night.
Table 4.1 gives the approximate values of illuminance due to stars,
moon and sun when the atmosphere is both clear and cloudy.

Table 4.1. Ground-surface illumination

Sky condition               Illuminance (lux)
Starlight, overcast night   10⁻⁴
Starlight, clear night      10⁻³
Quarter moon                10⁻²
Full moon                   10⁻¹
Deep twilight               1
Twilight                    10
Dark day                    10²
Overcast day                10³
Full daylight               10⁴
Direct sunlight             10⁵
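The span of Table 4.1 is worth quantifying: from overcast starlight to direct sunlight the scene illuminance covers nine orders of magnitude, which is the dynamic range vision instrumentation must ultimately straddle. A minimal check (values taken from the table):

```python
import math

# Ground-surface illuminance in lux, per Table 4.1
illuminance_lux = {
    "starlight, overcast": 1e-4, "starlight, clear": 1e-3,
    "quarter moon": 1e-2, "full moon": 1e-1, "deep twilight": 1.0,
    "twilight": 10.0, "dark day": 1e2, "overcast day": 1e3,
    "full daylight": 1e4, "direct sunlight": 1e5,
}

decades = math.log10(max(illuminance_lux.values()) /
                     min(illuminance_lux.values()))
print(int(decades))   # 9 orders of magnitude
```

Each step of the table is a factor of ten, so a device optimised for full moonlight already sits three decades below full daylight.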

It will be observed that the cloudy conditions cause a
reduction in illuminance by an order of magnitude or more. The
information in Table 4.1 can also be illustrated graphically with
reference to sunrise and sunset to show the effect of twilight
conditions which penetrate the night illumination marginally for
an hour or so before sunrise and after sunset (Fig. 4.1). The values
are approximate and illustrative of the likely behaviour of the night
conditions.
Target and background reflectance is another important
parameter. The percentage reflectance values do not change
significantly from daytime to night.
The earth’s terrain and the surrounding bodies also
radiate in the infrared, typically peaking at around 10 μm for a
mean temperature of 300 °K. One could thus make use of thermal
contrasts in a given scene and implement instrumentation using
suitable detectors, optics and display techniques. The spatial
resolution would be relatively poorer than that of visible imagery, but
thermal contrast rendering could enable thermal imagery over large
distances in normal weather.
4.1.1 Moonlight
The moon has almost the same angular size as the sun when
observed from the earth. As moonlight is basically reflected sunlight,

its spectral characteristics are similar to those of sunlight, at
relatively very low intensities. The daylight measurements for
reflectivity from objects, atmospheric scattering and attenuation
and contrast can therefore be adopted as a whole for computational
purposes. As a reflector, the light is bound to be partially polarised
but its effects on vision systems are not known to be significant, as
the instrumentation is not polarisation-preserving. Dependent on
weather conditions and on the phases of the moon, the ambient
light is much more variable during night than during the day.
Elevation changes in the moon’s position add another dimension
in the variability of the ground illumination. Full moon at the local
midnight has the highest elevation. The illumination changes can
also be sudden due to cloud movements. The scene illuminance
could change in a matter of minutes. Some of the characteristics of
the full moon are summarized in Table 4.2.

[Figure 4.1 plots illuminance (lux, 10⁻⁴ to 1) against hours after evening twilight and before morning light, with curves for full moon (clear), full moon (partially cloudy), starlit clear, starlit partially cloudy, starlit average and starlit heavy clouds.]
Figure 4.1. Expected variation of illuminance from sunset to sunrise under night conditions.

Table 4.2. Characteristics of full moonlight

Characteristic                                Value
Albedo (related to surface reflectivity)      0.072
Effective surface temperature                 400 °K
Peak self-emitted radiation                   7.24 μm
Apparent diameter                             0.5°
Effective temperature of reflected sunlight   5900 °K
(same as that of sunlight)

Figure 4.2 shows radiance from the moon in terms of
watts/sq cm/steradian/μm over a range from 0.4 μm to 2.0 μm. It
will be observed that the relative energy concentration is much more
in the visible region than in the near infrared, as is the case for
daylight. The energy content is obviously more than in the case of
illumination due to starlight.
Another important factor in relation to the intensity due
to moonlight is the change in its phase from new moon to full moon

[Figure 4.2 plots moonlight and starlight radiance (watts/sq cm/steradian/μm, 10⁻⁸ down to 10⁻¹⁰) against wavelength from 0.4 μm to 2.0 μm.]
Figure 4.2. Night sky radiation vs wavelength



and vice versa. The reduction factor could be as high as 300. Thus,
for the quarter moon the illuminance value is around one-tenth of
that of the full moon. Figure 4.3 shows a graph indicating the
reduction factor of moonlight vis-a-vis its phase changes. It will
also be appreciated that as the cloud cover goes on increasing,
reducing direct moonlight, the relative contribution from the airglow
is likely to effectively increase, particularly in the shadows.
Thus, the spectrum in the shadows is more akin to that of starlight
than to sunlight. Scene brightness would be dependent on
the incident illumination and its reflectivity.

[Figure 4.3 plots the reduction factor relative to the full moon (1 to 300, logarithmic) against moon phase in degrees, from full moon to new moon.]
Figure 4.3. Reduction factors of moonlight with its phase changes



4.1.2 Starlight
Starlight is really not evenly distributed in the sky
because of the concentration of stars in the Milky Way releasing a
good amount of energy in the visible spectrum, as also due to the
selective spectral distribution of many stars. The intensity at zenith
and along the Milky Way is higher than that elsewhere in the sky.
Nonetheless, we can assume the approximate values for ground
illuminance in accordance with Table 4.1 as the average
illumination. At the same time, it will be observed from Fig. 4.2
that in the starlight the relative radiation content is more in the
near infrared than in the visible, i.e., somewhat opposite to that
in the case of moonlight. Intensity in each waveband of interest
can also be reduced or altered by the type of cloud cover, rain
and fog. Scene brightness as in all cases would be dependent on
the incident illumination and its reflectivity.
The radiance of a moonless clear night, i.e., a starlit night sky, is
composed of the following four components within the visual
wavelength range:
(a) Extragalactic sources  ~1 per cent
(b) Stars and other galactic sources  ~22 per cent
(c) Zodiacal light  ~44 per cent
(d) Airglow  ~33 per cent
While the contributions due to (a), (b) and (c) result in a
spectrum closer to that of a sunlit or a moonlit sky with appropriate
alteration in intensity values, the characteristic spectrum of a starlit
sky is more due to airglow. In addition to more or less intense lines
in some parts of the visible spectrum, the airglow also yields
increasing intensity in the near infrared, say up to 2.5 μm; thereafter
thermal emission of the atmosphere begins to suppress it. Hence, an
S-20 photocathode[1] with an extended red response, or an S-25, has
a reasonably good correlation under both moonlight and starlight
conditions and is the photocathode of choice in most image
intensifier tubes.
4.2 REFLECTIVITY AT NIGHT
Reflectance measurements made during daytime are
equally applicable during the night. However, these measurements
assume greater significance during the night, as low reflectance
further reduces the already low amount of light present in the
environment. Reflectance by itself during daytime is not so
significant, as the number of photons reflected is still large enough
to be detected by a vision system. No doubt contrast between the
object and its background continues to be an important factor at
all levels of vision.

While extensive measurements have been made on
reflectivity from green foliage, grass and leaves (all in various stages
of freshness and decay), earth (various types such as yellow, red,
brown and loam) and sand, both under wet and dry conditions, the
data are place- and weather-specific, depending on the climatic and
aerosol conditions of each site.
The average reflection may be taken as 20 per cent or so
in the visible part of the spectrum and around 50 per cent in the
near infrared; for purposes of computation, these values can
represent an average situation. It will be observed that the materials
referred to above usually form the background of a scene
(Figs 4.4 and 4.5)[2].
Likewise, reflectivity measurements have also been made on
materials that may provide the signal, such as military clothing
and paints – green, khaki or the like on vehicles of various types.
The reflectivity of military targets relative to their
backgrounds is generally so designed that it is either low or matches
the background. While the latter approach, i.e., reflectivity of the
same order as the background causing a merger with it, may be
good for stationary targets or for expanses with a uniform
background such as deserts, low reflectivity is a better answer where
mobility causes changes in the background. Thus, clothing, whether
woollen or cotton, and vehicular paints are usually given a properly
selected drab or dull colour so that reflectivity is less than
10 per cent over most of the visible spectrum and in the near
infrared. One may further evaluate reflectance ratios between a
given background, such as rough expanses of land with little
greenery, deserts or the like against various items of clothing,
paints and pigments. Assuming similar illumination for the target
and the background as is the most general situation, the
reflectivity ratios can be used to evaluate relative contrast values.
This will enable prediction of the type of resolution possible at a
given range. Measurements of this nature, on targets vis-a-vis their
likely backgrounds, provide a good input to an instrument designer.
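The reflectance-ratio reasoning above can be sketched numerically. The 20 per cent (visible) and 50 per cent (NIR) background figures are taken from the text; the 8 per cent paint reflectivity and the particular contrast definition C = (rho_target - rho_background)/rho_background are illustrative assumptions, not values from this book.

```python
def contrast(target_refl, background_refl):
    # One common contrast definition under equal illumination:
    # C = (rho_target - rho_background) / rho_background
    return (target_refl - background_refl) / background_refl

# Average background reflectivities quoted in the text
BG_VISIBLE = 0.20   # ~20 per cent in the visible
BG_NIR = 0.50       # ~50 per cent in the near infrared

# Hypothetical drab military paint, below 10 per cent in both bands
PAINT = 0.08

print(f"Visible contrast: {contrast(PAINT, BG_VISIBLE):+.2f}")  # -0.60
print(f"NIR contrast:     {contrast(PAINT, BG_NIR):+.2f}")      # -0.84
```

The stronger negative contrast in the NIR suggests why an extended-red photocathode sees such a target well against foliage-like backgrounds.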
These measurements could be on targets and backgrounds
specific to a particular territory and to a prospective aggressor.
Similar measurements of the background versus targets
would also be called for in the thermal region of 8–12 µm, where
the parameters would be the temperature differential and the
corresponding emissivities. Here, the parametric measurements

Figure 4.4. Percentage reflections from surfaces of military interest (reflectance vs wavelength over 0.4–1.6 µm for green vegetation, rough concrete and dark green paint)

have to be redefined in terms of minimum resolvable temperature
difference for different spatial values, as may be defined by an MTF
curve for the system.
4.3 THE BACKGROUND
A study of natural backgrounds could help detection
and also monitoring on displays. Electro-optical imagery is
usually displayed on monitors like cathode ray tubes utilizing
phosphors. As the possible range of scene intensities is quite large
(of the order of 10^10), compressive transformations to limit the
output to 10^2 or so have been tried both by nature in the human
eye and as a result of technological evolution in photography, TV
and the like. Thus, from Weber's law analysing the intensity
response of the human eye, the observed threshold intensity change
(ΔL) is proportional to the intensity (L), i.e., ΔL/L is a constant.
This leads to Fechner's logarithmic scale for human vision, i.e.,
log-intensities are on a linear scale with respect to the stimulus in the
eye [3].

Figure 4.5. Percentage reflectances of some common surfaces (fresh snow, old snow, vegetation, loam and water over 0.4–14 µm)

Though other transformations have also been proposed,
the log-scale explanation seems to be acceptable for a reasonable
range of intensity variation (Fig. 1.6).
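The Weber-Fechner compression just described can be put into a short sketch; the threshold value L0 and the constant k below are arbitrary illustrative choices, not values from the text.

```python
import math

def fechner_brightness(L, L0=1e-6, k=1.0):
    # Fechner's law: sensation grows as log(L / L0), which follows from
    # Weber's law (threshold dL / L = constant). L0 and k are illustrative.
    return k * math.log10(L / L0)

# A 10^10 range of scene intensities compresses onto a linear sensation scale
for L in (1e-6, 1e-2, 1e2, 1e4):
    print(f"L = {L:9.0e}  ->  sensation = {fechner_brightness(L):5.1f}")
```

Ten decades of input collapse onto a scale of only ten sensation units, which is the compressive behaviour that displays and photography also try to imitate.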
The factors that define the image intensity in the background
as projected on the retina can be attributed to
(a) Strength and spectral composition of the illuminant
(b) Illuminant's position vis-a-vis the scene and the observer
(c) The orientation and reflectance of the viewed area, and
(d) The reflection function including textural, spectral and
specular factors.
Measurement and analysis based on the above factors
indicates that the distribution of luminances in a natural scene is
log-normal, somewhat similar to the nature of visual sensation. By
assigning numbers to the possible range of light intensities one
can introduce what is called a lightness scale on the log-normal
format to encompass proper scene reproduction on a display system.
Detection of man-made objects may therefore be relatively easier,
as these break the monotony of a scene and enhance the response
of the visual system.
Similar studies for characterizing the background in
the 8–12 µm band show that for natural terrain the correlation
lengths are in the 30–600 m range and that the standard deviation
of the radiative temperature is of the order of 1–7 K [4]. In this
region of the spectrum, it is obvious that reflectance is not of any
consequence and that detection is based on self-emittance.
4.4 EFFECT ON DESIGN OF VISION DEVICES
All the above discussion leads to the fact that in night
combat we are likely to need low light level vision systems with an
ability to render visible even low contrast between a target and its
background. While technology can take care of quite a few such
aspects, a priori information about the nature of targets and
backgrounds is also essential, being of significance both to an
instrument designer and to an observer for his appropriate
training. Training is also essential for an observer to extract the
maximum information possible from a monochromatic phosphor
display, usually peaking in the blue-green region of the visible
spectrum.
From the illumination point of view alone, the night vision
devices should fully operate at least from full moonlight to cloudy
starlight conditions, i.e., over a dynamic range of three decades,
without the need for any operator’s adjustment. Further, while
operating over this illumination range, it should be able to extract
information about low contrast targets also. Design considerations
therefore dictate:
(a) Very high intensity amplification of the very weak optical
signals received, with as limited intensification of a dark
uniform background as possible, i.e., noise-limited operation.
(b) High intensity amplification of weak optical signals received
without proportionally intensifying their uniform background, and
(c) The photocathode used for image intensification purposes
should operate right from the visible into the near infrared as
far as possible to make full use of light quanta available both
under moonlight and starlight conditions apart from its
requirement of high quantum efficiency. This makes the S-25 a
photocathode of choice, as already mentioned earlier.
In the case of night vision utilizing the 3–5 and 8–12 µm
bands, which is based on detection of the self-emittance of a
body, the nature of the daytime or night spectrum is not of much
concern, even though it is known that the natural background
radiance statistics do change according to the presence of hot
sources like the sun. Further, as the detection is of self-emittance,
reflectivity is also not a parameter for observation. These bands
are also well-known atmospheric windows, and hence devices
in these parts of the spectrum can have a reasonably
significant range, dependent on the system and its detector
characteristics. Quantum detectors are used for detection of the
self-emission of targets and their backgrounds in terms of
temperature differentials. The system effort is to correlate the
thermal contrast rendition with the desired spatial resolution.
REFERENCES
1. Hohn, D.U. & Buchtemann, W. Spectral radiance in the S-20
range and luminance of the clear and overcast sky. Applied
Optics, vol. 12, no. 1, January 1973, pp. 52-61.
2. Driscoll, W.G. (Ed). Handbook of Optics, Chap. 14. McGraw-Hill
Book Company, 1978.
3. Soule, H.V. Electro-optics Photography at Low Illumination
Levels. John Wiley & Sons.
4. IRDE Report on creation of test facilities in the 300-ft long hall.
Dehradun, India.
CHAPTER 5

OPTICAL CONSIDERATIONS
5.1 INTRODUCTION
While the contribution of a good optical designer is a
prime necessity in the successful design of an electro-optical night
vision system, the overall system constraints do lay down the basic
requirements for an optical system. Understanding of the user
requirements on the one hand and the technical possibilities and
limits of an optical designer on the other goes a long way in laying
down the basis of a successful design and forms one of the main
responsibilities of the system designer. These days we do have a
library of optical designs. Coupled with the availability of computers
and computer software to design and analyse a given or a modified
design from such a library, it may be possible to arrive at a desired
solution. Alternatively, it is also possible to arrive at a final solution
around a preliminary one that the designer might feel workable on
the basis of his experience. The analysis could be in terms of optical
transfer function, spot diagrams, Strehl definition, Maréchal
criterion, wavefront aberration, or a classical geometrical
optics approach. The use of appropriate software with compatible
computers certainly helps a great deal in arriving at the optimum
designs, drastically cutting down the computational time and the
time for decision making.

As night detection in the broader sense is not restricted
to the visible spectrum only, but could also utilize probing into
higher wavebands, it may be worthwhile to refer to the
electromagnetic spectrum of interest as given in Table 5.1.
Table 5.1. Detection possibilities

Waveband                          Wavelength            Frequency        Nature of imaging

Visible (target reflectance       0.4 to 0.75 µm        400–750 THz      Passive
from natural sources)

Visible (target reflectance       Not useful in this    —                —
from laser sources)               region as the
                                  position gets
                                  easily known

Near infrared (NIR) (target       0.75 to 2.0 µm        150–400 THz      Passive
reflectance from natural
sources)

Target reflectance using          Not in use these      —                Active
NIR searchlights                  days as NIR
                                  searchlights can be
                                  detected using
                                  appropriate vision
                                  systems

Infrared (self-emittance          3 to 5 µm             60–100 THz       Passive
by the targets)                   8 to 14 µm            21.4–37.5 THz    Passive

(Target reflectance from          Detection possible    —                Active
laser sources)

MMW                               3 mm                  100 GHz          Active

X-band radar                      3 cm                  10 GHz           Active

UHF TV                            10 cm                 3 GHz            Tap-able
                                  60 cm                 500 MHz          Tap-able

Note: Passive imaging does not give away the observer's position while active imaging can do so.
Atmospheric absorption rules out the use of wavelengths
lower than the visible for vision at a reasonable distance of military
interest. The visible and near infrared (NIR) have been linked with
detection by means of suitable photocathodes. Photocathodes have
been developed which respond to the entire visible and NIR regions
right up to 1.2 µm or so. The technological considerations for imaging
using such photocathodes are the same for photons available both
in the visible and the NIR. Maximum utilisation of the natural
night illumination is thus possible. Dioptric materials like optical
glass can also be used though one would have to watch for their
absorption characteristics particularly for the NIR. This is not quite
so as we shift to detection and image forming in the infrared bands
of 3–5 µm and 8–12 µm, which are also atmospheric windows.
Detection is also passive in these windows as it is based on the self-
emittance of bodies in the environment, using appropriate quantum
detectors for the spectral bands concerned. The detecting area is
micron-sized as against quite a few mm or even cm in the case of
a photocathode. While an entire image can be focused onto a
photocathode, the quantum detectors referred to only see a very
small area of the field. In other words scanning techniques have to
be introduced to cover the required field of view. Series, parallel,
and series-parallel scanning is resorted to as the number of detectors
is steadily increased in an x-y format. The more recent development
of staring or matrix arrays can dispense with scanning altogether.
Thermal energy detectors are also being tried by using matrices of
micro-bolometers. Useful dioptric materials in these spectral ranges
are: zinc selenide, zinc sulphide, silicon, germanium and the like.
Metal mirrors and polygons are also in use for the scanning optics.
Appropriate coatings are necessary in all the cases. In still higher
wavebands, the techniques are no longer passive and call for
illumination of the object and analysis of its reflection. Picturising
an object scene utilizing TV cameras, and its subsequent
transmission and reception, involves a three-fold action, i.e.,
picturisation, transmission and reception. While the picturisation
aspect is dependent on the region of the spectrum used and its
corresponding cameras, transmission may utilize UHF band,
referred to in Table 5.1. Reception amplifies and modifies the signal
received into an appropriate video signal for display on a cathode
ray monitor. Picturisation is possible in the visible, visible and
near IR, and the higher IR bands in a passive mode during day or
night utilizing appropriate objectives and sensors. Thus we have low
light level television (LLLTV) systems and Thermal Imaging (TI)
systems utilizing appropriate instruments and detectors. Detection
ranges can be reasonably large and transmission of such signals
may not be opted for. In other words this approach offers an alternate
vision system.
The entire instrumentation which enables vision or
display as referred to above is primarily in the domain of optical
and electro-optical engineering and is amenable to what may be
referred to broadly as optical techniques.
5.2 BASIC REQUIREMENTS
All optical systems should be designed to give as perfect
an image as possible for a given object scene for identification of
individual objects of varying shapes, sizes and colours distinguished
from their backgrounds. As all such scenes are an assembly of
points, it may be considered that reproduction of these points and
their correct juxtaposition by an optical system would give an
overall faithful image. Thus the characteristics of a perfect optical
system imaging a given object scene were defined as under:
(a) To every object point there is one and only one image point.
(b) To every group of object points in a plane there corresponds a
group of image points in a plane.
(c) The image is geometrically similar to the object.
It was shown from the definitions above that one could
trace out the path of rays emanating from each object point through
the optical system, following certain fundamental laws. As the rays
travel in straight lines, this approach was defined as the subject
matter of geometric optics. It also follows from the above, and from
the reversibility of optical rays in a perfect optical system, that
conjugation is an important property, i.e., the image formed can
itself be treated as an object, which is then imaged back exactly
onto the original object. Further, it was soon realised that there are natural
limits to the formation of a point image for a point object, as the
point image does get blurred to some extent dependent on the
aperture of an optical system even when it is a perfect optical
system, due to diffraction effects. One could thus think in terms of
the progress of a wavefront through an optical system rather than
an optical ray which can be thought of as a normal to the wavelets
at the points considered. As our knowledge improved about the
image formation, one was led to think in terms of point spread
function, the line spread function, edge-gradients, the sine wave
response, and the square wave response which take diffraction
effects also into account. The intention is not only to have a perfect
image but also an analysis of the object contrast vis-a-vis its
background and as related to resolution. In actual practice, imagery
within certain tolerances in relation to a perfect image may be quite
acceptable. These tolerances based on practical evaluation of
systems may be defined in terms of geometrical or wavefront
aberrations. Tolerancing could also be in terms of the optical
transfer function. Contrast enhancement techniques may
additionally be resorted to where object identification from its
background presents relative difficulties even in perfect imagery.
While the detailed information on these aspects is available in the
standard texts on optical design[1], it is our intention herein to
restrict ourselves to overall assessment and to systems for night vision only.
5.2.1 System Parameters
The system parameters that are of overall relevance may
be considered to be magnification, focal length, conjugate relations,
location of entrance and exit aperture, numerical aperture,
vignetting, field of view, tolerancing, consideration for contrast, and
resolution.
An optical system is generally an assembly of individual
lenses centred on a common axis called the optical
axis, i.e., the system has rotational symmetry around the optical
axis. This assembly is generally well-corrected and forms as perfect
an image as is possible. The system may comprise more assemblies
involving prisms, erectors, eyepieces, and the like.
The rectangular block shown in Fig. 5.1 is the outline
of an optical assembly. Conventionally, the light is supposed to
fall on an optical system from the left side, also referred to as the
object space. The points of our interest are focal points F and F ',
the principal points P and P ' and the nodal points N and N'. A
parallel ray, i.e., the one parallel to the optical axis from the object

Figure 5.1. Imaging sequence through an optical assembly (focal points F, F'; principal points P, P'; nodal points N, N')

space passes through the focus at point F ' which is the focus point
in the image space. Likewise, a parallel ray from the image space
passes through the focus F in the object space. Focal planes
can be defined as planes normal to the optical axis at the
focal points.
Thus, if the object is assumed to be at infinity or for
practical purposes at a reasonably large distance R, then its
image would be focused in the focal plane itself. In other
words, the conjugate points to all object points at infinity lie in
the rear focal plane. If now an object to linear size d0 at infinity
subtends an angle  at the optical system, we have in the
object space
d0
 (5.1)
R
It is of course assumed that the object is at quite a
large distance in comparison to the focal length and that the
angle θ is small enough for tan θ to be replaced by θ. As the
optical system brings the rays from the object to the focal plane
to an image of size di, the equivalent or effective focal length gets
defined in such a manner that

    f = di/θ                                  (5.2)
Having defined the effective focal length value in these
terms, the transverse magnification m of the system can also be
defined as

    m = di/d0                                 (5.3)
Combining Eqns 5.1, 5.2 and 5.3, we have

    di = (f/R) d0                             (5.4)

    m = f/R                                   (5.5)
These relationships are of interest to a system designer.
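As a quick numerical illustration of Eqns 5.1 to 5.5; the target size, range and focal length below are assumed example values, not from the text.

```python
def image_size_and_magnification(d0, R, f):
    # Eqns 5.1-5.5: theta = d0/R (small-angle), di = (f/R)*d0, m = f/R
    theta = d0 / R
    di = (f / R) * d0
    m = f / R
    return theta, di, m

# Assumed example: a 2.3 m high target at 1 km range, 150 mm focal length
theta, di, m = image_size_and_magnification(d0=2.3, R=1000.0, f=0.150)
print(f"theta = {theta * 1e3:.2f} mrad")   # 2.30 mrad
print(f"image size = {di * 1e3:.3f} mm")   # 0.345 mm
print(f"magnification = {m:.1e}")
```

A sub-millimetre image of a man-sized target shows why the detector or photocathode resolution becomes the limiting factor at long range.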
As already stated, parallel beams from an infinitely
distant object are brought to a focus in the focal plane. While doing
so, the beams undergo deviation at each and every optical surface
of the assembly and then emerge from the last surface to come on
to the focal plane. Principal planes or surfaces are defined as the
unique imaginary surfaces from which these parallel beams could
Optical considerations 61

have been singly refracted to come to the same focus. There are
two such surfaces in each assembly depending on whether the
parallel beam is incident from the object space (A’P’B’) or the image
space (APB). Their intercepts on the optical axis are the principal
points P and P'. These surfaces and points are indicated in
Fig. 5.1. The effective focal length (EFL) is defined as the distance
P'F' and PF. It will be observed that this definition tallies with the
definition as per Eqn 5.2 for P'F' = f '. Likewise, the nodal surfaces
and points are defined as the two imaginary surfaces and their
intercepts on the optical axis wherein if a ray is incident from an
object point, the same is refracted without any deviation from the
corresponding nodal point of the second nodal surface. Thus, in
Fig. 5.1, the ray QN is transmitted parallel to itself as N'Q'. The
focal, principal and nodal points are referred to as cardinal points
of an optical assembly or subsystem.
Back focal-length and front focal-length are measured
in terms of the distances from the rear and front surfaces to
their respective focal points. These measurements are important
while going in for the mechanical design of the subsystem. Other
important parameters for correct placements are the edge and
centre thicknesses of all the optical elements and their inter-
distances.
We may now proceed to define the field of view (FOV). The
FOV refers to the angle over which ray bundles are accepted from the
object space by the lens system. This angle is restricted by the field
stop in an image plane which, for distant objects, is just the back focal

Figure 5.2. Field of view in relation to field stop

plane (Fig. 5.2). The field stop can be placed in any real image plane
in a relaying system of optics, to give a sharp boundary to the FOV.
It will be observed that

    tan(FOV/2) = di/2f                        (5.6)
where di is the linear dimension of the circular field stop. The field
stop can be rectangular also in which case the FOV will have two
different values in the corresponding perpendicular directions.
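Eqn 5.6 can be exercised directly; the 18 mm field stop and 26 mm focal length below are assumed example values.

```python
import math

def fov_from_field_stop(di, f):
    # Eqn 5.6: tan(FOV/2) = di / (2 f); di and f in the same units
    return 2.0 * math.atan(di / (2.0 * f))

# Assumed example: 18 mm circular field stop, 26 mm focal-length objective
fov = fov_from_field_stop(di=18.0, f=26.0)
print(f"FOV = {math.degrees(fov):.1f} degrees")
```

For a rectangular stop the same relation is simply applied once per direction with the corresponding stop dimension.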
An aperture stop may also be introduced into an optical
system to physically limit the size of a parallel bundle that enters
it. Usually the aperture stop is the clear aperture of the front surface
but it can be anywhere based on the design consideration. The
image of the aperture stop in all the system elements preceding it
is called the entrance pupil and in succeeding elements it is the
exit pupil. Both the aperture and field stops are of importance as
one limits the size of the parallel beam bundle and the other the
angle of entry of such bundles. The parallel bundle's size determines
the brightness in the image, while entry at greater angles requires
much stricter control of aberrations. Further, at greater
angles of incidence, the entire beam may not find an entry to the
image plane, as a mismatch between the entrance pupil and the
field stop may limit its transmission through all the optical elements
of the system. This is referred to as vignetting, and leads to a greater
loss of brightness towards the edges of the image field. Vignetting
becomes a serious problem in night vision systems.
Relative aperture or F number is also a relevant
parameter from the system point of view. It is defined by f '/D
where D is the diameter of the entrance aperture (Fig. 5.3).

Figure 5.3. F number and numerical aperture

    F number = f '/D                          (5.7)
Similarly, the numerical aperture (NA) is by definition the
sine of the angle θ/2 that the marginal ray makes with the optical axis
in Fig. 5.3. As the principal surface of a perfect
optical system is defined as the imaginary single surface from which,
after refraction, a parallel beam comes to a focus in the focal plane,
i.e., the image plane for a distant object, it is obvious that this
surface would be a segment of a sphere centred on the image point. Thus,
we have

    NA = sin θ/2 = D/2f '                     (5.8)

and

    F number = 1/(2NA)                        (5.9)
Both these values are of importance in objective systems,
as these decide the light gathering power of the system or its
throughput.
In systems design, matching of the throughputs may be
quite essential, particularly where it is the intention to collect as
much light as possible and then transfer it to the next
assembly in the chain without any loss. Obviously, at unit
magnification, all the subsystems should have the same numerical
aperture. Nevertheless, practical demands will have to be met where
some magnification is also desired. Referring to Fig. 5.4, it will be
observed that the numerical aperture in the object space is

    sin θ/2 = D/2u                            (5.10)

Figure 5.4. Matching of numerical aperture

and in the image space is

    sin θ'/2 = D/2v                           (5.11)
As the principal surfaces are segments of spheres
centred on the object and image points on the axis respectively, we
thus have

    v/u = sin(θ/2) / sin(θ'/2) = m (magnification)    (5.12)
This means that it is possible to determine limiting values
for the throughput, and to select reasonable values for magnification
and conjugate distances, before one goes into the detailed optical
design.
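The throughput-matching condition of Eqn 5.12 can be turned into a quick design check; the NA and magnification below are assumed example values.

```python
def required_image_na(object_na, m):
    # Rearranging Eqn 5.12: sin(theta'/2) = sin(theta/2) / m, so this is
    # the image-space NA the next stage must accept for lossless coupling
    # at magnification m
    return object_na / m

# Assumed example: objective working at NA 0.25, relayed at 0.5x magnification
print(f"Image-space NA needed: {required_image_na(0.25, 0.5):.2f}")
```

Demagnifying relays therefore raise the NA demand on the following subsystem, which is why coupling optics onto a small photocathode must be unusually fast.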
5.2.2. Design Approach
Snell's Law, n1 sin θ1 = n2 sin θ2, enables one to find the
direction of a light ray after refraction at the interface between two
homogeneous, isotropic media of differing indices of refraction, where
n1 and n2 refer to the refractive indices of the media before and
after refraction, and likewise θ1 and θ2 refer to the angle of incidence
in the first medium and the angle of refraction in the second. The sine
function in Snell's Law can be expanded into an infinite series:

    sin θ = θ – θ^3/3! + θ^5/5! – θ^7/7! + θ^9/9! ...    (5.13)
If the sine function is replaced by θ, and the refraction
of rays is worked out from the object to the image plane on this
basis through an optical system, it is called the first order or
paraxial approach. Obviously, it is paraxial as it is only close to
the axis that sin θ can be approximated by θ. Formulae have been
developed for the purpose and used in the first stages of design
of an optical system to determine parameters like the system
focal length, magnification, conjugate distances, etc. The next
step in closer approximation includes the third order term in
the sine expansion. As it was mainly investigated by Seidel, the
aberrations resulting from this approach are referred to as Seidel
or third order aberrations. Restricting to monochromatic light,
these aberrations have been classified as spherical aberration,
astigmatism, field curvature, coma and distortion. Formulae and
tolerances have been worked out and methods developed to
annul these aberrations as far as possible. It is possible to
overcome chromatic aberrations, i.e., aberrations due to the various
colours in white light, because the index of refraction of a
material is a function of wavelength, thus offering a possibility
of balancing chromatic differences by appropriate selection of
refractive index values for each lens or prism in an optical
system. It is not easy mathematically to involve the higher
terms, i.e., the fifth order and onwards for better correction and
hence where the user demand is more stringent it is essential to
go in for exact trigonometrical ray tracing. This would be
particularly true for the systems involving large angular FOV and
large apertures demanding a state of very high correction.
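The gap between the paraxial approximation and exact ray tracing can be seen with a short sketch of Snell's Law; the refractive indices and angles below are illustrative.

```python
import math

def snell_exact(n1, n2, theta1):
    # Snell's Law: n1 sin(theta1) = n2 sin(theta2)
    return math.asin(n1 * math.sin(theta1) / n2)

def snell_paraxial(n1, n2, theta1):
    # First-order approximation: sin(theta) replaced by theta
    return n1 * theta1 / n2

# Air to glass (n = 1.5): the paraxial answer drifts as the angle grows
for deg in (1, 10, 30):
    t1 = math.radians(deg)
    print(f"{deg:2d} deg: exact {math.degrees(snell_exact(1.0, 1.5, t1)):7.4f} deg, "
          f"paraxial {math.degrees(snell_paraxial(1.0, 1.5, t1)):7.4f} deg")
```

At 1 degree the two agree to a fraction of a milliradian, while at 30 degrees the paraxial value is off by about half a degree, which is why wide-field, large-aperture designs need exact trigonometrical ray tracing.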
Even if the geometrical approach as discussed above were
to result in an exact point image for a point object, the diffraction of
light through the optical system results in a spread of the point
image dependent on the diameter D of the entrance aperture, the
focal length f of the system and the wavelength λ of the light
used. The point image spreads into an intense circular patch
surrounded by alternating dark and bright rings. The maximum
energy, i.e., 83.9 per cent is concentrated in the central circular
patch, 7.1 per cent in the first bright ring and the rest 9 per cent
spread over the remaining rings in declining order (Fig. 5.5). For

Figure 5.5. Diffraction pattern of a point object through a circular lens system (normalized irradiance vs position in the image plane; 83.9 per cent of the energy lies in the Airy disc and 7.1 per cent more in the first bright ring)

practical purposes, therefore, the size of an image point is equal to the
diameter of the central bright spot, which is referred to as the Airy
disc and whose radius is given by

    r = (1.22 λ/D) f                          (5.14)

where r is the radius of the Airy disc, the practical size of an image
point.
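Eqn 5.14 in a short sketch; the wavelength and lens parameters below are assumed example values.

```python
def airy_radius(wavelength, f, D):
    # Eqn 5.14: r = 1.22 * lambda * f / D (all lengths in metres)
    return 1.22 * wavelength * f / D

# Assumed example: green light, f = 100 mm, D = 50 mm
r = airy_radius(wavelength=0.55e-6, f=0.100, D=0.050)
print(f"Airy disc radius = {r * 1e6:.2f} um")   # 1.34 um
```

A spot of a micron or two sets the floor on resolution even for a perfectly corrected system, and only enlarging D (for a given f and wavelength) can shrink it.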
In most of the cases, the systems are aberration limited,
i.e., though the aberrations are within specified tolerances, the total
effect of these tolerances is over and above the diffraction effects.
However, in quite a few cases it does become necessary to minimise
all the aberrations so that the total wavefront aberration is of the
order of the practical limits as may be laid down by diffraction
effects. Of course further confinement of the Airy’s disc beyond the
diffraction limit is not possible for a given diameter, though
techniques do exist for detecting weak signals in the neighbourhood
of a very strong signal.
5.2.3 Design Evaluation
Optical components, subsystems, and systems can be
evaluated by a large number of techniques for their parameters
and aberration characteristics involving precision opto-mechanical
instrumentation, collimators, auto-collimators, interferometers,
modulation transfer function (MTF) measuring equipment, and the
like. While discussion of all these techniques is beyond the scope
of this book, reference to the MTF approach is particularly
significant from the systems point of view for night vision devices.
An incoherent imaging system can be characterized by
a two dimensional optical transfer function (OTF). The OTF is a
complex quantity whose modulus is a sine-wave amplitude response
function called the MTF and whose argument is the phase transfer
function (PTF). Thus

    OTF(Vx, Vy) = MTF(Vx, Vy) exp[j PTF(Vx, Vy)]    (5.15)
where Vx and Vy refer to spatial frequencies in the two imaging
directions of the image of an isoplanatic patch. The MTF gives the
modulation reduction of the imaging system versus its spatial
frequency when a sinusoidal radiance pattern is imaged. For a
perfect imaging system, the modulation transfer function would be
unity at all the spatial frequencies of a sinusoidal radiation pattern.
However, as we will see, it cannot be so even for a perfect diffraction
limited optical system.
As by and large optical and electro-optical systems are
known to behave linearly, it can be shown that the total
performance of a complete optical system or an electro-optical
system composed of many sub-assemblies is obtained by
multiplying the individual OTFs of each of the sub-assemblies.
Thus, the OTF of a complete night vision system could be a
multiplication of its values for the objective, image intensifier tube
and the eye piece or a viewing system. Generally this could be true
of MTF values also. A sinusoidal chart as an object, instead of a
bar chart, certainly characterizes the optical system better, as the
results from it combine contrast and resolution, which are
otherwise evaluated separately.
Illuminating a sine chart uniformly and using it as an
object, one can define the object contrast or modulation for the
frequency u of the sine chart as

Co = (Omax − Omin) / (Omax + Omin)          (5.16)
where Co is the object contrast, Omax the maximum transmission
through the sine wave chart and Omin the minimum transmission
through the same chart for a frequency u. Likewise, the image
contrast Ci of the imaged sine wave chart can be defined as
Ci = (Imax − Imin) / (Imax + Imin)          (5.17)
where Imax and Imin are the maximum and minimum irradiances.
The MTF at frequency u is then defined as
MTF(u) = Ci / Co          (5.18)
The MTF curve is arrived at by plotting MTF values against frequency. Instrumentation is available for generating the sine wave objects or simulating their output, as also for plotting the MTF curves from intensity measurements in the image plane. Instruments are also designed to measure the polychromatic MTF directly.
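Equations 5.16 to 5.18 can be exercised directly. A short sketch, where the max/min transmission and irradiance readings are hypothetical example values:

```python
# MTF at a single frequency u from sine-chart measurements
# (Eqns 5.16-5.18). The readings below are hypothetical.

def modulation(maximum, minimum):
    """Contrast (modulation) of a sinusoidal pattern, Eqns 5.16/5.17."""
    return (maximum - minimum) / (maximum + minimum)

# Object-side transmission extremes of the sine chart at frequency u:
Co = modulation(1.00, 0.20)        # object contrast
# Image-plane irradiance extremes of the imaged chart:
Ci = modulation(0.70, 0.34)        # image contrast
mtf_u = Ci / Co                    # Eqn 5.18
```

Because imaging reduces modulation, Ci is always at most Co, so the ratio stays between 0 and 1.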
A perfect optical system theoretically would have an MTF value of unity at all frequencies, as Ci would then equal Co. In practice, however, even a diffraction limited optical system cannot have a unit value at all frequencies, as the aperture of the imaging system leads to diffraction effects. For instance, it can
be shown that for a circular aperture (most usual with lens systems)
the monochromatic diffraction limited MTF is given by
68 An Introduction to Night Vision Technology

MTF diffraction limited 


2
 cos 1
n  n 1 n 2  (5.19)

where n is the normalised spatial frequency, i.e., the ratio of the


absolute spatial frequency u to the cutoff frequency uc due to
diffraction, i.e.,

u
n (5.20)
uc
The cutoff frequency is that frequency at which the MTF
value is zero. Frequencies may be expressed in cycles per mm
(c/mm) or cycles per milliradian (c/mr), keeping due regard to the
units used for other parameters. There are several formulas for uc.
The one relating directly to Airy’s disc is given by
D 1.22
u c (c /mm )   (5.21)
f r

Figure 5.6. Diffraction limited MTF: the ideal (diffraction limited) MTF and that of a lens with a 1/4 wavelength aberration, plotted against normalized spatial frequency.


where D is the diameter of the entrance pupil, f the focal length, λ the wavelength of the light, and r the radius of the Airy's disc, all in mm. The formula can also be put into angular terms when

uc (c/mr) = D/λ = 1.22/θr          (5.22)

where θr refers to half the angle subtended by the Airy's disc at the entrance pupil of diameter D.
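Eqns 5.19 to 5.21 are easy to evaluate numerically. A sketch, in which the lens parameters (a 50 mm aperture, 100 mm focal length, 550 nm light) are example values, not taken from the text:

```python
import math

# Diffraction limited MTF for a circular aperture (Eqn 5.19) and the
# cutoff frequency uc = D/(lambda*f) in c/mm (Eqn 5.21).
# Example parameters: D = 50 mm, f = 100 mm, lambda = 550 nm.

def diffraction_mtf(n):
    """Eqn 5.19, valid for normalised frequency 0 <= n <= 1."""
    if n >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(n) - n * math.sqrt(1.0 - n * n))

def cutoff_c_per_mm(D_mm, f_mm, wavelength_mm):
    """Eqn 5.21: diffraction cutoff frequency in cycles per mm."""
    return D_mm / (wavelength_mm * f_mm)

uc = cutoff_c_per_mm(50.0, 100.0, 550e-6)   # about 909 c/mm for this F/2 lens
mid = diffraction_mtf(0.5)                   # about 0.39, as in Fig. 5.6
```

The near-straight dip of the ideal curve in Fig. 5.6 corresponds to diffraction_mtf falling from 1 at n = 0 to 0 at the cutoff n = 1.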
Figure 5.6 shows the MTF values plotted against the normalized spatial frequency, n. It will be observed that the diffraction limited ideal performance curve is almost a straight line which dips slightly towards the origin. Comparison has also been made with a lens system that has a quarter wavelength aberration[2]. Obviously all real lenses will have their graphs between the origin and the line indicating the diffraction limited ideal performance. As indicated earlier, one could now evaluate the
MTF curves for the objective and the image intensifier tube to arrive
at the combined MTF of the system. Nonetheless, experiments following Johnson's criterion seem to prefer the square-wave spatial frequency amplitude response, i.e., a bar chart, in practice defined by line-pairs per mm. As discussed earlier in Chapter 2, this permits a correlation between acquisition, recognition and detection, though these values cannot be cascaded in the manner that is possible with MTF values in respect of optics, image intensifier tubes, camera tubes, video amplifiers and displays. Some workers have developed calculating and graphical schemes to convert one set of values to the other. The manufacturers of image intensifier tubes generally give the data in terms of resolution in line-pairs per mm as also normalised MTF values against line-pairs per mm.
5.3 OPTICAL CONSIDERATIONS
It may be better to think of a night vision system as an assembly of an objective, an image intensifier tube, and an eyepiece or a display system, rather than in the conventional manner as a telescopic system, though it does behave like one and essentially performs the same task. The objective in this case has to collect as many photons as possible from the night sky and concentrate them in as small an area as possible, so that the intensity per unit area is as high as possible to enable maximum excitation of the photocathode of the image intensifier tube. At the same time, one has to reconcile these requirements with the FOV and overall magnification that may be
desired from the entire system to design a reasonably aberration-free optical system. In other words, an objective has to be designed as a high-aperture, fast lens system optimised for a given FOV and the required overall magnification, usually to go with the standard unit-magnification I.I. tubes of 25 mm and 18 mm diameters. Thus the objective designs are close to fast photographic objectives and do make use of knowledge available in that field. The objectives could be totally refractive, catadioptric, or catoptric, i.e., purely lens systems, mixtures of lenses and reflecting surfaces, or systems based only on reflecting surfaces.

Refractive lenses have been characterised and generally developed from the well known basic configurations of the Petzval lenses, the Triplet, and the Symmetrical. Petzval lenses took the form of two doublets forming a system, though in some cases each doublet was replaced by a doublet with associated singlets. The main advantage of this lens system is that it lends itself to comparatively high apertures, though with a considerably reduced field coverage. It may be preferred where field flatteners can also be introduced and the desire is to have a relatively flatter field. It has been mainly used for cine-film projection. The triplet lens family started
with the Cooke's Triplet and in its earlier form comprised a single
negative lens placed between two positive lenses. Lenses of the
triplet and derived forms are used mainly for narrow to moderate
field of view. It can be stated that the higher the aperture of the triplet, the smaller is the field of view that it covers. For better aberration
control and balancing, element splitting and compounding of three
elements of a basic triplet have led to a number of useful
photographic objectives like the Tessar, Pentac, Sonnars, and the
like. Symmetrical systems as the name implies are symmetric with
respect to a central stop where the front and rear sections are
related to each other as mirror images. Apparently, such a system
results in balancing out of a number of aberrations particularly at
equi-conjugate positions. The system is altered somewhat in its
symmetry so that it can usefully operate at infinite conjugates also.
Complex systems, such as Dagor, Aviogon, Topogon and the like
have resulted from it. The Double Gauss system (a six-element lens system) developed in this category has resulted in high aperture lenses of moderate field of view, i.e., of the order of 60° or so, and this has made them adaptable to a number of applications. With
advancements in lens design, the distinction between the basic
triplet and the symmetrical lenses has tended to become blurred;
nonetheless, the approach is still significant and valuable.
Utilization of a reflecting curved surface as the primary focusing element in an optical system leads to the development of catoptric and catadioptric systems. To correct the spherical aberration of a concave mirror, some designs utilize a parabolic surface, but then the useful field may be somewhat restricted. Thus, a paraboloid primary and a hyperbolic secondary, a catoptric system referred to as the Cassegrainian system, has resulted in a successful objective and led to better systems for special
applications. Likewise, catadioptric systems involving a primary concave mirror with suitable correctors in front and field flatteners have also resulted in systems of interest. In Mangin mirrors, the back surface of a lens is silvered and used as the primary mirror, permitting its use as a refracting component also to enable better
correction at higher apertures. Concentric designs were also
introduced by Bouwers[3] for realisation of useful high numerical
aperture systems. Aspheric correctors – the Schmidts, have also
been used in appropriate planes in front of the primary concave
mirrors to replace the set of spherical correctors that are otherwise
needed. The problem of accessibility to the focal plane and the likely
long length of systems utilizing a primary concave mirror are
variously solved by different designers. From the MTF point of view,
lens systems may be corrected to achieve different specifications
for different applications. Thus, for photographic objectives, these
may be designed for optimum performance at higher spatial
frequencies. That however would not be the case for the night
systems, i.e., lens system coupled to an image intensifier tube or
to a low light level imaging TV as the highest possible MTF values
will be desired in the lower range of spatial frequencies because of
the limitations imposed by photoelectron statistics and the nature
of fibre-optics elements that have been used.

Figures 5.7a, b and c show some illustrative optical designs developed and actually utilized in instrument systems at the Instrument Research & Development Establishment at Dehradun, India. One can easily decipher the triplet and its derivative, the symmetrical system, and the catadioptric systems, including one utilizing a Mangin mirror and another a Schmidt corrector. The figures also show the square-wave frequency response of each system at different field angles. Classical aberrations can also be appreciated by visualizing the size of the spot diagrams. It can be observed that the progress is towards faster F-numbers at larger apertures[4].
Figure 5.7a. Some illustrative optical designs: spot diagrams and square-wave frequency responses (MTF vs line-pairs/mm at several field angles) for an F/6.3 photographic triplet, an F/2 double Gauss TV objective, and an F/1.7 infrared artillery sight objective.

Figure 5.7b. Some illustrative optical designs: an F/1.3 passive night telescope objective (80 mm, catadioptric) and an F/1.2 passive night periscope objective (50 mm, dioptric).

Figure 5.7c. Some illustrative optical designs: an F/1 passive night telescope objective (200 mm) and an F/1 objective with aspheric corrector (200 mm, catadioptric).

REFERENCES
1. Cox, Arthur. A System of Optical Design. (The Focal Press,
1964).
2. Melles Griot. Optics Guide 5.
3. Bouwers, A. Achievements in Optics. (New York: Elsevier
Publishing Company Inc., 1950).
4. Various Optical Designs and their Characteristics. (IRDE,
Dehradun).
CHAPTER 6

PHOTOEMISSION

6.1 INTRODUCTION
The need for detection of weak radiation signals both in
visible and the infrared has, of necessity, led to the development
of quantum detectors. Quantum detection may be based on the
principles of photoemission or utilize solid-state devices in which
the excited charge is transported within the solid either as
electrons or as holes. Photoemission of electrons has been utilized
in image intensifiers (I.I. tubes), photomultipliers and the like, or
in general, in various vacuum or gas-filled tube devices for
different applications. Solid-state devices may be classified as photoconductive or photovoltaic. These may be simple p-n junctions, photocells, phototransistors, avalanche photodiodes, p-i-n photodetectors, Schottky barriers, or quantum well devices. Photoemissive surfaces, by contrast, can be made in relatively large sensitive sizes.
6.2 PHOTOEMISSION & ITS THEORETICAL
CONSIDERATIONS
Materials (metals, metal compounds or semiconductors) which give a measurable number of photoelectrons when light is incident on them form photocathodes. Such a photocathode is enclosed with an anode in a vacuum tube connected in an electric circuit (Fig. 6.1). The electrons emitted from the photocathode when light is incident are collected at the positively charged anode, maintaining the flow of current in the circuit. As the anode potential is increased, the current also increases and ultimately reaches a saturation value beyond which further increase of the anode potential is not helpful. This saturation value of the current is proportional to the intensity of the light incident on the photocathode. If the anode potential is now reduced, the current can be brought to zero at a negative threshold potential. This threshold value is found to depend on the wavelength of the incident radiation and not on its intensity.

Figure 6.1. A basic photoelectric circuit: a photocathode and anode sealed in a glass vacuum envelope, with a battery and galvanometer in the external circuit; light enters and falls on the photocathode.

6.2.1 Theoretical Considerations


The energy of the incident photon Eph is given by

Eph = hν          (6.1)

where h is the Planck's constant, h = 6.624 × 10⁻³⁴ Js, and ν is the frequency of the incident light. Assuming that the electron in the material has a maximum kinetic energy W1 and that it has to spend an energy W for its release from the material by overcoming the potential barrier at the cathode surface, then according to the quantum approach the maximum energy that the photoelectron can possess is given by

E = hν − (W − W1)          (6.2)

If φ is the work function, i.e., the value of the potential barrier measured in volts at the cathode surface, we have according to the thermionic theory

eφ = W − W1          (6.3)

where e = 1.59 × 10⁻¹⁹ Coulombs is the charge of the electron. Eqn 6.2 can now be rewritten as

E = hc/(eλ) − φ          (6.4)
Photoemission 79

where E is the maximum emission velocity of the photoelectron


measured in electron-volts and  is the wavelength measured in m,
substituting the value h, e and c (the velocity of light = 2.99 108
m/s) we have
1.246 10 6
E =   (6.5)

From Eqn 6.5, we can easily deduce the maximum
value of 0 as
1.246 10 6
o = (in m) (6.6)

The lower the value of the work function, farther the
wavelength threshold is shifted towards the longer wavelengths.
For a given value of , the work function, the maximum value of
wavelength  also gets fixed above which photoelectrons do not
acquire any energy to escape from the photocathode. This
quantum approach due to Einstein explained these experimental
observations.
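Eqns 6.5 and 6.6 are straightforward to exercise numerically with the constants as given in the text; the work function value chosen below is hypothetical:

```python
# Photoelectron energy and threshold wavelength (Eqns 6.5 and 6.6),
# using the text's constant hc/e ~ 1.246e-6 volt-metres.

HC_OVER_E = 1.246e-6    # volt-metres

def max_energy_eV(wavelength_m, work_function_V):
    """Eqn 6.5: maximum photoelectron energy in electron-volts."""
    return HC_OVER_E / wavelength_m - work_function_V

def threshold_wavelength_m(work_function_V):
    """Eqn 6.6: longest wavelength that can still release an electron."""
    return HC_OVER_E / work_function_V

# A hypothetical work function of 1.25 V gives a threshold just under
# 1 micrometre, i.e., a photocathode responding into the near infrared.
lam0 = threshold_wavelength_m(1.25)       # ~1.0e-6 m
E_500 = max_energy_eV(500e-9, 1.25)       # energy at 500 nm, in eV
```

Lowering the work function pushes lam0 further into the infrared, exactly as the paragraph above states.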
6.2.2 Types of Photocathodes & their Efficiencies
Efficiency of photocathodes is expressed in terms of
their quantum yield. If each incident photon were to generate one
photoelectron, the quantum yield is said to be unity or
100 per cent. In practice the yields are much lower. Apart from
photon losses due to reflection of photons from the photocathode
surface, the emission of photoelectrons would depend on the
optical absorption coefficient, electron scattering mechanisms, and
the potential barrier at the surface that has to be overcome. These
parameters have been investigated in depth by Spicer and Gómez[1].
The results obtained show a better understanding of
photoemission for both fundamental and practical applications.
Figure 6.2 based on their results is very illustrative. It expresses
quantum yield for metals, semiconductors, transferred electron
cathodes and negative affinity photocathodes against theoretical
estimates for response time. In the case of a transferred electron
cathode it may be possible to achieve both higher quantum
efficiency and faster response. In metals as a group, electron-electron scattering is the dominating mode, in which case the escape depth for the electron is short. Thus the quantum yield is
poor and the time response is very fast. In a semiconductor with
a sufficiently low electron affinity there is no electron-electron
scattering near the threshold. In semiconductors, a finite band gap
separates the highest states filled with large number of electrons
Figure 6.2. Types of photocathodes: theoretical estimates of response time (10⁻¹⁷ to 10⁻⁷ s) plotted against quantum yield (10⁻⁵ to 10⁰ electrons/photon) for metals (including pure alkalis), semiconductors (including composites, alloys, multialkalis and antimonides), negative affinity photocathodes (representative GaAs(Cs,O)), and transferred electron cathodes.

and the lowest conduction band states, so that electrons must have
sufficient energy above the conduction band minimum to suffer
electron-electron scattering. The dominating mode of scattering is electron-phonon (lattice) scattering. Thus it has a relatively larger escape depth for the photoelectrons. The quantum yield is better and the response time is relatively slower in comparison to that of metals.
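Quantum yield as defined above (electrons emitted per incident photon) can be estimated from a measured photocurrent and incident optical power. A sketch, in which the current and power readings are hypothetical:

```python
# Estimating quantum yield from photocurrent and optical power:
# yield = (electrons per second) / (photons per second).
# The readings below are hypothetical example values.

H = 6.624e-34        # Planck's constant, Js (value used in the text)
C = 2.99e8           # velocity of light, m/s (value used in the text)
E_CHARGE = 1.59e-19  # electron charge, C (value used in the text)

def quantum_yield(photocurrent_A, optical_power_W, wavelength_m):
    electrons_per_s = photocurrent_A / E_CHARGE
    photons_per_s = optical_power_W * wavelength_m / (H * C)
    return electrons_per_s / photons_per_s

# 40 nA of photocurrent from 1 uW of 550 nm light gives a yield near
# 0.09 electrons per photon, of the order of an alloy photocathode's.
y = quantum_yield(40e-9, 1e-6, 550e-9)
```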
The next type of photocathodes to emerge were the
negative electron affinity (NEA) photocathodes. In these
photocathodes, the vacuum level is dropped below the conduction
band minimum so that electron affinity takes on a negative value.
The large response near the threshold is due to the fact that electrons which are inelastically scattered may escape even if they thermalize into the bottom of the conduction band, i.e., in addition to a fraction of photoelectrons escaping without losing their initial energy, most of the electrons thermalize, diffuse to the surface, and escape without losing all their initial energy. By combining negative affinity approaches with structures that allow an internal potential to be applied across the semiconductor nearest the surface, it is possible to extend the response farther into the infrared (1.4 µm). These cathodes have been referred to as transferred electron (field-assisted) photocathodes. Their response time is also faster than that of the NEA photocathodes. Further research may lead to better yield, still faster responses, and further extension of the wavelength beyond 1.4 µm (Fig. 6.2).

6.3 DEVELOPMENT OF PHOTOCATHODES


Historically Hertz in 1887 was the first to note the
photoelectric effect ahead of the discovery of the electron. Its
explanation based on the quantum theory was due to Einstein in
1905. Nevertheless the phenomenon was not fully explained till a
much later date when it was realised that photoelectrons are not
just surface-emitted but can come from the depth of the material
also. The later theories took into account the mechanism of optical
absorption of the incident light quanta, transport to the surface
from the depth of the material and its subsequent escape from the
surface. In the earlier days, i.e., up to the 1930s, the quantum yield or efficiency of the materials (metals) used was less than 0.01 per cent and hence these materials did not prove to be of any practical use.
6.3.1 Composite Photocathodes
It was just a little later that Koller and Campbell[2]
accidentally and independently discovered that a combination of
silver, oxygen and caesium (Ag-O-Cs, also called S-1) produced a
photocathode with a better quantum efficiency than known
hitherto. The peak quantum efficiency was around 0.5 per cent,
at least a decade better than the earlier compounds. It is made by
oxidizing a clean silver layer, and distilling caesium vapour into
the tube at a temperature of approximately 150 °C. The vapour
then reacts with the silver oxide, resulting in a layer of caesium
atoms adsorbed on caesium oxide on silver. Evaporation of a
further thin layer of silver and subsequent baking causes further
increase in sensitivity. The process known as activation with
caesium or cessiation in one form or the other continues to the
present day for all photocathodes. Because of the low work function of these materials, the response extends into the infrared up to 1.2 µm. This S-1 photocathode found many applications in
industry, astronomy, biology, etc. The layers could be made
sufficiently thin to be semi-transparent for special uses in image
converters or television camera tubes. This way light can fall on
the photocathode from the back while the photoelectrons are
liberated from the front side inside the vacuum. This family has
also been referred to as the composite cathodes. The image
converters are of particular interest to us, as these photocathodes
were utilized in such tubes with success to result in images for
hot objects and for night vision before the image intensifiers
appeared in the military market.
6.3.2 Alloy Photocathodes
The next important photocathode developed was
antimony-caesium cathode by Gorlich in 1936, a semiconducting

alloy of formula Cs3Sb. This and similar compound photocathodes


are known as alloy photocathodes. It consists of a layer of antimony
onto which caesium is distilled at a temperature around 150 °C to
form the alloy. Final oxidation resulted in a further increase in sensitivity. This cathode is also known as S-11. The peak quantum
efficiency was around 15 per cent. It found its use in TV camera
tubes, photomultipliers and the like. Another important alloy
photocathode was soon developed by Sommer in 1938. It
consisted of a base layer of a 50 per cent alloy of silver and
bismuth, oxidized and treated with caesium. It resulted in a
panchromatic cathode without the high peak of S-11 in the blue
and the infrared response of S-1. Though its peak response was
less than that of S-11, its relative panchromacity in the visible was
of greater use for colour television. The cathode also known as
S-10 (Bi-Ag-O-Cs) was used in image orthicons, which were later
replaced by photoconductive tubes of the vidicon type.

6.3.3 Alkali Photocathodes


A layer of an alkali metal deposited in vacuum by
evaporation or by electrolysis onto a glass envelope which forms
the base of a vacuum tube, results in an alkali photocathode.
These atoms have a low ionization potential and get adsorbed on the base surface, reducing the work function and permitting a
better photoelectric effect at certain wavelengths. The quantum
efficiency at the peak value could be around 10 per cent. Next,
multi-alkali photocathodes were introduced by Sommer. A base
layer of antimony is first treated with potassium. After baking, the potassium is mostly replaced by sodium, which in turn is partly replaced by caesium. The photocathode of chemical composition Na2KSb(Cs)
for the first time showed a reasonably high quantum efficiency of
20 per cent at the peak wavelength. It came to be known as S-20.
Its activation process is more complicated but it proved its utility
in earlier image intensifier tubes for night vision. Many other
photocathodes of the type Na2CsSb, K2CsSb also got developed.
K2CsSb after superficial oxidation was almost similar in its
response to Cs3Sb (S-11) and proved better for scintillation
counters, while Na2KSb (not cessiated) had a response identical
to S-11. The extension of the threshold of the red response from
800 to 900 nm is possible by increasing the thickness of the multi-
alkali photocathodes and/or using different processing techniques
leading to an extended red multi-alkali (ERMA) or S-25
photocathode.

6.3.4 Negative Affinity Photocathodes


Around this time, i.e., in the late fifties, a better theoretical understanding of photoemission as a bulk phenomenon revealed quite a few facts. For instance, in metals the quantum efficiencies are very low due to strong interactions between the photoexcited and conduction electrons, which limit the diffusion length. Studies of multi-alkali photocathodes showed that the improved yield of S-25 over S-20 is primarily due to a reduction in the effective surface work
possible to decrease the surface vacuum level thus improving the
yield by orders of magnitude. The electron affinity values could
be made lower and lower. Later, it was found that surface
treatment with Cs plus oxygen gave an even lower electron
affinity and higher average quantum yields, over the visible and
the near infrared portions of the spectrum. It became clear that
an optimum photocathode may first be realised by selecting a
material with compatible optical properties of absorption as also electronic properties that assist photoelectron ejection (diffusion to the conduction band minimum), and then processing the
material surface to reduce the effective electron affinity to a
minimum. Table 6.1 shows the advantage of decreasing value of
electron affinity on the increase in quantum yield. All the
materials in this list have bulk p-type conductivities. The negative
value of electron affinity will depend on the height of the band
bending (Fig. 6.3).
The closer the vacuum level to the valence band, the
more will be the bending and consequently greater the reduction
in the value of electron affinity.
Table 6.1. Photocathode types in relation to electron affinity

Photocathode material                  Type        Band gap     Electron       Max. quantum yield   Wavelength
                                                   energy (eV)  affinity (eV)  in the visible (%)   cutoff (nm)
Cs3Sb                                  S-11        1.6          0.45           ~20                  650
Bi-Ag-O-Cs                             S-10        0.7          0.9            ~10                  750
Na2KSb (cessiated)                     S-20        1.0          0.55           ~25                  850
Multi-alkali antimonides (cessiated)   S-25 ERMA   1.1          0.24           >25                  950
GaAs (Cs,O)                            NEA         1.4          negative*      >35                  ~1000

* With respect to the bulk conduction band minimum.
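The tabulated cutoffs can be roughly cross-checked against the band gap and electron affinity: for a positive-affinity cathode the threshold photon energy is about (Eg + EA), so the cutoff wavelength is roughly hc/e divided by that sum. A sketch under that assumption (agreement with the table is only approximate):

```python
# Rough cross-check of Table 6.1: the threshold photon energy of a
# positive-affinity photocathode is about (band gap + electron
# affinity); for an NEA cathode only the band gap matters. Agreement
# with the tabulated cutoffs is approximate, not exact.

HC_OVER_E = 1.246e-6   # volt-metres, as in Eqn 6.5

def approx_cutoff_nm(band_gap_eV, electron_affinity_eV):
    threshold_eV = band_gap_eV + max(electron_affinity_eV, 0.0)
    return HC_OVER_E / threshold_eV * 1e9

cs3sb = approx_cutoff_nm(1.6, 0.45)    # ~608 nm vs 650 nm tabulated
s20 = approx_cutoff_nm(1.0, 0.55)      # ~804 nm vs 850 nm tabulated
gaas = approx_cutoff_nm(1.4, -0.1)     # ~890 nm vs ~1000 nm tabulated
```

The systematic shortfall against the tabulated values reflects band bending and other surface effects that this simple two-term estimate ignores.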

Thus a new era of engineered photocathode materials


started, as against the Edisonian research which was prevalent thus
far. The negative electron affinity photocathodes were the first
scientifically engineered materials of bulk p-type with n-type surfaces
where the band-bending could be downwards introducing negative
electron affinity.
These considerations led to the suggestion that optimally heavily doped (say Zn-doped) single-crystal GaAs with a surface film of caesium and oxygen should have better sensitivity than all previous materials. The same was experimentally verified. High doping
(~1019/cm3) is used to minimise the band bending region, while the
addition of a caesium monolayer allows the vacuum level to drop
below the conduction band minimum (Fig. 6.3). Most NEA
photocathodes these days are activated with Cs and oxygen, forming
a monolayer of caesium oxide and are referred to as Generation-3
photocathodes.
For many devices semi-transparent photo-emissive
materials are required as in the case of image intensifiers. This
requirement is met by growing AlxGa1-xAs layer on top of GaAs
active layer acting as a window material. The AlxGa1-xAs layer is
suitably coated with an antireflection material like silicon nitride
to quarter-wavelength thickness to minimise reflection losses. Both
GaAs and AlxGa1-xAs layers are grown epitaxially. The single layer of GaAs (Zn-doped) usually has a thickness from one to approximately two micrometres. At around 1.2 µm thickness, almost 90 per cent of the incident light gets absorbed. Figure 6.4 shows

the structure of such an assembled photocathode[3]. Activation of the GaAs layer with caesium and oxygen is carried out in ultra high vacuum systems after heat cleaning of its surface.

Figure 6.3. Band model of NEA and normal semiconductor photocathodes. (V = top of valence band; F = Fermi level; C = bottom of conduction band; Vac = vacuum level; EG = band gap energy; EA = electron affinity; EAeff = effective electron affinity.)

Figure 6.4. GaAs photocathode in transmission mode: light enters through an antireflection-coated glass faceplate; a quarter-wavelength antireflection coating covers the AlxGa1-xAs window layer above the NiCr-metallized GaAs active layer, whose surface is treated with caesium and oxygen and from which the electrons exit.
6.3.5 Transferred Electron (field-assisted)
Photocathodes
The NEA photocathodes are limited to a band gap corresponding to a cutoff of around 1.1 µm. Efforts to push the cutoff wavelength led to the evolution of transferred electron photoemission. The transferred electron type of photoemitter depends on transferring the photogenerated electrons from a lower to an upper valley by means of an electric field (as in the Gunn effect), from where they can be emitted. A number of such external field-assisted photoemission geometries were proposed and studied. One such structure was made of p-type InP on which a thin layer of Ag formed a Schottky barrier. If a reverse bias is applied to the Schottky barrier, the photogenerated electrons are accelerated towards the surface. The accelerated electrons may be transferred to the L or X valley, from which they can penetrate through the metal (Ag), which is activated with Cs/O to lower its electron affinity to 1 eV.
InGaAsP materials with a band gap of 0.85 eV matched to InP can give a lower threshold energy with an Ag layer activated with Cs2O. Since there is no band gap limitation as in NEA photocathodes, the cutoff wavelength was increased with the reduction of the band gap, and a threshold near 1.5 µm was obtained. Later, a transmission mode structure giving photoemission up to 2.1 µm was achieved by using an In0.77Ga0.23As layer of band gap 0.52 eV as the emitter layer. Since
this is not lattice-matched to InP, a layer of InAsxP1-x was used in between. In this configuration, the photons are incident on the back surface (transmission mode), but only those photons for which 0.83 eV > hν > 0.52 eV are absorbed in the InGaAs layer. Photons for which 1.35 eV > hν > 0.83 eV are absorbed in the InAsP layer, which has a graded composition meant to give an accelerating field to the photoelectrons towards the surface. Photons with energy more than 1.35 eV would be absorbed in InP. The silver film thickness was about 50 Å, activated with Cs and oxygen. The ultimate limit of band gap for this material system should give the largest threshold, up to 3.54 µm for InAs. But the main problem in the operation of these cathodes is the dark current. Even with cooling to –100 °C, the dark current is very high (10⁻⁸ A/cm²) and rises sharply with increase of bias voltage. The rapid rise is due to impact ionisation. These cathodes should become quite useful for certain types of applications if the dark current could be reduced. Figure 6.5 graphically illustrates the comparative yield for some of the common photocathodes used[4].

Figure 6.5. Comparative quantum yield (electrons/photon) against wavelength (400 to 1100 nm) for some of the common photocathodes used: (1) Ag-O-Cs (S-1); (2) Cs3Sb (S-11); (3) Bi-Ag-O-Cs (S-10); (4) Na2KSb(Cs) (S-20); (5) GaAs(Cs,O) (NEA).

6.4 PHOTOCATHODE RESPONSE TIME


Response time depends on the materials used. Metal
photocathodes have the fastest response time of the order of 10–15
to 10–14 seconds though their quantum yield is the poorest and of
an order of 10–4 electrons per photon. It is well understood that
as in metals, powerful electron-electron mechanisms dominate
leading to relatively restricted release of photoelectrons from near
the surface explaining their fast response. Further, electrons from
the depth of the material have hardly any chance of escape.
Semiconductors with small but positive electron affinities have
response times of the order of 10–13 to 10–12 seconds, with yields of
0.05 to 0.25 electrons per photon. Photo excited electrons in these
materials do not undergo electron-electron scattering near
threshold since a finite band gap separates the highest states filled
with a large number of electrons and the lowest conduction band
states, enabling a longer escape depth. Photoelectrons will still
undergo electron-phonon (lattice) scattering, changing their
direction in the bulk, somewhat reducing their energy and
increasing their path length to the surface, and thereby the
response time. The highest yields, 0.1 to 0.6 electrons/photon,
and the longest response times, of the order of 10⁻¹⁰ to 10⁻⁸
seconds, are obtained in negative electron affinity photocathodes. In these
materials even for photons near the band gap, the diffusion length
of the thermalized electron is greater than the optical absorption
depth. As a result the yield rises much more rapidly than for the
earlier type of photocathodes near threshold. The transferred
electron (field-assisted) cathodes give faster responses of the order
of ~10–11 seconds in comparison to other categories, except metals,
and have a quantum yield better than 10–2 electrons per photon.
The future interest is in longer cut-off wavelength values,
with a possibility of better quantum yield and still faster response
times (Fig. 6.2). The cathode represented by the dot in the figure
has a cut-off wavelength at 1400 nm; the two arrows show the
development directions that may be possible.
6.5 PHOTOCATHODE SENSITIVITY
In practice, photocathodes have to be operated in the
semitransparent mode so that they can serve in essentially all
types of image intensifier tubes. The overall characteristics thus
presented are those of the photocathode in combination with its
supporting material. At the short-wavelength end, one may use a
lithium fluoride window with a cut-off at 104 nm, though for the
visible and near infrared the window material may be lime or
borosilicate crown glass. Fused silica has also been used. The input

window could also be a fibre-optics bundle supporting the


photocathode material. Evaluation is preferred in terms of
‘sensitivity’. Sensitivity is expressed either in microamperes per
lumen (µA/lm), as the luminous sensitivity in white light, or in
milliamperes per watt (mA/W), as the radiant sensitivity at a given
wavelength. Both values are measured using a tungsten lamp with
a colour temperature of 2856 ± 50 K as the raw light source. Filters
are used to determine the radiant sensitivity at specified
wavelengths. Commercial specifications usually give the luminous
sensitivity values, as also radiant sensitivity values at around
800 and 850 nm for all modern
photocathodes. Earlier photocathodes barely came up to a
luminous sensitivity of 50 µA/lm or to a peak radiant sensitivity
of the order of 10 mA/W. Significant increase in these values took
place with the introduction of S-10 (Bi-Ag-O-Cs), S-11 (Cs3Sb) and
ultimately the photocathodes of choice for passive night vision,
i.e., S-20 (Na2KSb(Cs)) and S-25/ERMA. These caesiated multialkali
antimonide photocathodes now offer luminous photocathode
sensitivities of the order of 400 µA/lm and radiant sensitivities
of the order of 40-45 mA/W between 800 and 850 nm.
Some suppliers of image intensifier tubes claim even higher
values of luminous and radiant sensitivities for this family of
photocathodes. Subsequent introduction of NEA photocathodes
has resulted in still higher values for the sensitivities. Thus the
most used NEA photocathode, GaAs (Cs,O), has a typical luminous
sensitivity of 1300 µA/lm and a radiant sensitivity exceeding
50 mA/W. Absolute sensitivity values in mA/W against
wavelength are plotted in Fig. 6.6 for a number of photocathodes
of interest[5]. Spectral response curves in the figure are for
combination of photocathodes and windows. Thus, lime or
borosilicate glass windows are used in respect of S-10, S-11 and
ERMA (Extended red multialkali), or S-25. It will also be
appreciated that both luminous and radiant sensitivity values
can vary in the same type of photocathode material depending
on the processing techniques which may be resorted to by
different manufacturers to satisfy their requirements for an end
product. The material composition may also be somewhat altered
to permit a higher or lower wavelength cut-off and to improve
sensitivity for a specific region. The exact values will hence have
to be known or determined for each type of photocathode that
may be used in an image intensifier tube. Manufacturers of these
tubes usually mention in their specifications the photocathode
sensitivity in white light (tungsten source) in µA/lm and

[Figure 6.6 plots absolute sensitivity (mA/W), on a logarithmic scale
from about 0.4 to 80, against wavelength (100-1100 nm) for:
1. Ag-O-Cs (S-1); 2. Cs3Sb (S-11); 3. Bi-Ag-O-Cs (S-10);
4. Na2KSb(Cs) (S-20); 5. ERMA/S-25; 6. GaAs.]
Figure 6.6. Absolute sensitivity in mA/W vs wavelength.

radiant sensitivities in mA/W at specified wavelengths of 800
and 850 nm, or at any intermediate wavelength[6]. The higher
the radiant sensitivity in the near infrared and the better the
luminous sensitivity, the greater is the response to the night sky,
subject only to the overall noise limitations in the intensifier
system. Analytically, luminous sensitivity is given by the
expression

   Luminous sensitivity = [∫ from λ1 to λ2 of S(λ) E(λ) dλ] / [680 ∫ from 0.40 to 0.76 of y(λ) E(λ) dλ]   A/lm        (6.7)
where S(λ) is the spectral responsivity of the sensor within its
spectral limits λ1 and λ2 in amperes/watt, and E(λ) is the spectral
radiance due to the source in watts per square metre. y(λ) is the

relative spectral response of the human eye, and wavelength
is in µm (1 µm = 1000 nm). The quantum yield Q(λ) can also be
correlated by the equation:

   Q(λ) = 1.24 S(λ) / λ   electrons/photon        (6.8)
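As an illustration, Eqns 6.7 and 6.8 can be evaluated numerically from tabulated spectral data. The curves below are rough analytic stand-ins (assumed shapes, not measured data) for an S-20-like responsivity, a tungsten-source radiance and the photopic eye response; only the integration procedure is the point here.

```python
import numpy as np

def integrate(y, x):
    # trapezoidal rule over tabulated points
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Hypothetical tabulated spectra (illustrative, not from any datasheet):
# lam in micrometres, S = responsivity in A/W, E = relative source radiance,
# y = relative photopic response of the eye (peak near 555 nm).
lam = np.linspace(0.40, 0.90, 501)
S = 0.04 * np.exp(-((lam - 0.55) / 0.15) ** 2)   # assumed S-20-like curve
E = (lam / 0.90) ** 3                            # crude stand-in for tungsten radiance
y = np.exp(-((lam - 0.555) / 0.05) ** 2)         # crude photopic curve

# Eqn 6.7: the photopic (denominator) integral runs over 0.40-0.76 um only
vis = (lam >= 0.40) & (lam <= 0.76)
lum_sens = integrate(S * E, lam) / (680.0 * integrate((y * E)[vis], lam[vis]))
print(f"luminous sensitivity ~ {lum_sens * 1e6:.0f} uA/lm")

# Eqn 6.8: Q(lam) = 1.24 * S(lam) / lam, with lam in micrometres
Q = 1.24 * S / lam
print(f"peak quantum yield ~ {Q.max():.3f} electrons/photon")
```

With real tabulated S(λ), E(λ) and y(λ) from a tube datasheet, the same two integrations reproduce the quoted µA/lm and electrons/photon figures.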
6.6 DARK CURRENT IN PHOTOCATHODES
The ultimate limit to any imaging system’s ability is the
photon-to-electron conversion noise, resulting in dark current.
This noise for a uniformly lighted area of a photoemitter results
from a random release of charge carriers and is hence analytically
measurable in terms of a root mean square (rms) photocurrent irms
given by

   i_rms = (2 e i Δf)^1/2  A        (6.9)

where e is the charge of an electron, i the mean current, and Δf
the measuring bandwidth in hertz. This dark current in a
photocathode arises
out of thermionic emission and has a characteristic peculiar to
each photo surface. The value is usually greater for red sensitive
tubes. The value does increase with increase in sensitivity of the
photocathode as also with increase in temperatures under which
it is operated. Cooling does help particularly in red-sensitive
photocathodes like S-1. Its value is also proportional to the surface
area of the photocathode; hence the unit for practical
comparison is amperes/cm². Table 6.2 compares the values of
dark current for different materials with different photocathode
sensitivities and different values of long wavelength threshold.
Table 6.2. Dark current values for different photocathode materials

Material                        Type   Max quantum yield, %    Long-wavelength   Luminous        Dark current
                                       (peak wavelength, nm)   threshold (nm)    sensitivity     (A/cm²)
                                                                                 (µA/lm)
Ag-O-Cs                         S-1    0.5 (800)               1100              60              10⁻¹¹
Cs3Sb                           S-11   20 (400)                650               80              10⁻¹⁵
Bi-Ag-O-Cs                      S-10   10 (450)                750               80              10⁻¹⁴
Na2KSb(Cs), ~100 nm thick       S-20   25 (400)                850               ~300            10⁻¹⁶
Na2KSb(Cs), ~1000 nm thick      S-25   30 (400)                900               ~400            10⁻¹⁶
GaAs(Cs,O)                      NEA    40 (over a wide range)  950               ~1300           10⁻¹⁴
Other suitable semiconductors   —      —                       ~3540             —               10⁻⁸ (InAs at –100 °C)
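Eqn 6.9 translates the dark-current densities of Table 6.2 directly into rms noise currents. A minimal sketch, assuming an illustrative 1 cm² cathode area and 10 kHz measuring bandwidth (neither figure is from the text):

```python
import math

E_CHARGE = 1.602e-19  # electronic charge, coulombs

def shot_noise_rms(current_density_a_cm2, area_cm2, bandwidth_hz):
    """Eqn 6.9: i_rms = sqrt(2 * e * i * delta_f), with i = J * area."""
    i = current_density_a_cm2 * area_cm2
    return math.sqrt(2.0 * E_CHARGE * i * bandwidth_hz)

# Compare S-20 (1e-16 A/cm2) with the red-sensitive S-1 (1e-11 A/cm2),
# both taken from Table 6.2, for the assumed area and bandwidth.
for name, j in [("S-20", 1e-16), ("S-1", 1e-11)]:
    print(f"{name}: i_rms = {shot_noise_rms(j, 1.0, 1e4):.2e} A")
```

The five-decade gap in dark-current density between S-20 and S-1 shrinks to about two and a half decades in rms noise, since the noise grows only as the square root of the current.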

In practice, for image intensifier tubes, this
measurement is incorporated in the definition of equivalent
background input. This implies a measurement of screen
brightness when the operating potential has been applied
to the assembly and no radiation is incident on the photocathode.
This value in the device as a whole may be due to field emission,
gas ionization, inter-electrode leakage, residual radioactivity and
a host of other causes beside the photocathode thermionic
emission. However, in all well designed and appropriately
assembled image intensifier tubes, these sources of dark current
are virtually eliminated except that due to photocathode
thermionic emission. This measurement therefore indicates the
level of the photocathode dark current.
6.7 SUMMARY
The physics of the semiconductors has proved very
effective in the understanding of existing photoemissive materials
and is helping in the search for better and better materials. The
exact processing details including the methodology of depositing
material layers are quite important for repetitive production to
meet exacting standards for image intensifier tubes.
REFERENCES
1. Spicer, W.E. & Gomez, A.H., "Modern Theory and Applications
   of Photocathodes". Photodetectors and Power Meters, Proc. SPIE,
   Vol. 2022, (1993), pp. 18-33.
2. Sommer, A.H., "Brief History of Photoemissive Materials".
   Photodetectors and Power Meters, Proc. SPIE, Vol. 2022, (1993),
   pp. 2-17.
3. Csorba, I.P., "Current Status and Performance Characteristics
   of Night Vision Aids". Opto-electronic Imaging. (Tata McGraw-Hill
   Publishing Co. Ltd., 1985).
4. Sommer, A.H., Photoemissive Materials. (New York, London,
   Sydney, Toronto: John Wiley & Sons Inc., 1968).
5. Walter, G.D. (Ed), Handbook of Optics. (McGraw-Hill Company),
   pp. 4-23.
6. Biberman, L.M. & Nudelman, S., Photoelectric Imaging Devices,
   Vols. 1 & 2. (New York, London: Plenum Press, 1971).
CHAPTER 7

PHOSPHORS

7.1 INTRODUCTION
Luminescence refers to the emission of light by a
material induced by an external source of energy. It may be induced
by light which after absorption is reradiated in a different waveband,
termed as photoluminescence or by the kinetic energy of electrons
termed as cathodo-luminescence. It could also be triggered by the
incidence of high energy particles, applied electric fields or
currents, or chemical reactions. Luminescent technologies by now
embrace liquid crystal devices, gas panels and electroluminescent
panels besides the well known cathode ray tubes. The success of
these tubes is mainly due to high performance level of modern day
phosphor materials. The word phosphor literally meaning light bearer
refers to luminescent solids, mainly inorganic compounds
processed to a microcrystalline form for practical use of their
luminescent property. The earliest phosphors used the naturally
occurring Zn2SiO4 and CaWO4 as a thin powder on a mica substrate
to act as viewing screens. Usually phosphors are in the powder
form but they could also be used as thin films. The image intensifier
tube screens have borrowed from the phosphor developments for
use in cathode ray tubes. The luminescence we are concerned
with is the cathodo-luminescence.
7.2 PHOSPHORS
Most phosphors are activated by the introduction of an
impurity of the order of a few parts per billion. This impurity which
activates the phosphor is known as activator, while phosphor crystal
itself is known as the host or matrix. The chemical formulae
indicate the presence of an activator in the host crystal. Thus one
such formula can be ZnS:Cu indicating ZnS as the host and Cu as
the activator. In a sulphide phosphor the dopant of a VII-b group
element, i.e., halogens (chlorine, bromine, iodine) or a III-b group
element (gallium, aluminium) in addition to the activator is referred

to as the co-activator. Thus, the complete chemical formula is of


the type ZnS:Cu, Al or ZnS:Ag, Cl. The role of the co-activators in
ZnS phosphors is to compensate for the excess negative charge
caused by the activator.
7.3 LUMINOUS TRANSITIONS IN A PHOSPHOR
When an accelerated electron of high energy, say 6 keV
or more, penetrates an inorganic crystal, a large number of electrons
and free holes are produced along its path, leading to many
possibilities for optical transitions. If the crystal is free from
impurities, doping, and lattice defects, the free electrons and holes
that have been created in the conduction and valence bands may
recombine emitting photons whose energy is equivalent to the band
gap [Fig. 7.1 (a)]. These emissions have been rarely observed,
except in the case of ZnO where the phosphor has been used in
flying spot tubes. In actual practice, the phosphor crystals do have
lattice defects, incidental impurities and also deliberately
introduced activators and co-activators which create a number of
energy levels providing a number of recombination paths for the
excited electrons and holes at much less band-gap values
resulting in emissions within the visible part of the spectrum.
Activators produce deep acceptor levels with different depths.
Donor levels on the other hand may be introduced by lattice
irregularities, incidental impurities and co-activators [Fig. 7.1(b)].
The above explanation is particular to the two well
known CRT phosphors ZnS:Cu, Al and ZnS:Ag, Cl. Differences of
colour, green and blue, is attributed to the deep acceptor levels
created by Cu and Ag at 1.25 eV and 0.72 eV respectively. Time
and excitation dependent spectra have been observed for these
[The figure sketches the conduction band with donor levels below it,
the valence band with acceptor levels above it, the direct transition,
and the transitions labelled a to e.]

Figure 7.1. Luminescent transition models in phosphors[1]



phosphors. For phosphors, in general, a number of other transition


models are also possible, apart from the direct transition
corresponding to the band gap of the host material (Fig. 7.1). The
direct recombination transition has been marked as a, while b is
the recombination transition between a donor and an acceptor;
transition c is between the conduction band and a deep acceptor
level, while transition d is between a deep donor and the valence
band. The transitions occurring in a well localized luminescent
centre or a molecular complex of atoms are represented by e,
wherein the electrons are confined to the same centre before and
after the transition. Such centres are in rare-earth or transition
metal ions of the type Eu+3, Ce+3, Mn+2. Rare-earth activators give
better results with Y or La as hosts. Configuration coordinate
models have been proposed for luminescent centres in the
generalized configuration diagram shown in Fig. 7.2. The two
curves G and E represent the energies of a luminescent centre in
the ground (G) and in the excited (E) states against the
configurational coordinate. When the centre is in its state of lowest
energy, the configuration coordinates assume the value for which
energy is a minimum, i.e., point A on the curve G. Since the
equilibrium configuration of the interacting atoms is different for
the ground and excited states, the two do not correspond on the
[The figure plots the energy of a luminescent centre against the
configurational coordinate: curve G (ground state) with its minimum
at A, curve E (excited state) with its minimum at B, the absorption
transition ending at C, the emission transition ending at D, and the
crossover point O.]

Figure 7.2. Luminescent centre – a configurational model



configurational coordinate axis. The minimum on the curve E is


shown by point B. As the absorption of external energy will occur
before the ions have time to readjust themselves to the equilibrium
of the excited state, the absorption corresponds to the transition
AC. The system readjusts itself and dissipates a little of the energy
gained by way of heat to reach the minimal equilibrium point B
enabling a radiative transfer to the point D, followed by heat
dissipation to reach the point A again. If the temperature
of the system is high enough, the centre may be stimulated to the
position O and relax its energy to the host crystal, transiting to A
along the curve OA without radiative transfer. While the energy
difference between A and C corresponds to the peak of the absorption
spectrum, that between B and D corresponds to the peak of the
emission spectrum. This theoretical and analytic approach has led
to a better understanding of luminous centres.
7.4 PHOSPHOR MECHANISMS
High energy primary electrons incident on a phosphor
may suffer elastic and inelastic collisions or penetrate producing a
cascade of photons and internal secondary electrons. Those
secondary electrons that overcome the work function escape into
the vacuum. High energy electrons which undergo elastic scattering
are the reflected and back scattered electrons from the surface,
while those undergoing inelastic collisions are the re-diffused
electrons with some energy loss. The reflected and back scattered
electrons cause contrast degradation of a picture if these are
absorbed by the neighbouring phosphor elements. The emission of
secondary electrons also has a deleterious effect. In case their rate
of emission is significant or more than the rate of arrival of primary
high energy electrons, it would shift the potential of the insulated
phosphor. The negative charging of the phosphor screen by reducing
its potential may seriously reduce the light output. This charging
is prevented by depositing a thin film of aluminium, which is
penetrated by high energy electrons on the surface of the phosphor.
All these three factors, i.e., elastic and inelastic collisions, and the
escape of secondary electrons lead to the attenuation of the absorbed
energy and thereby the efficiency of cathodo-luminescence. Incident
higher energy electrons result in a reduced spread of luminescence
along their path within the phosphor, as against lower energy
primary electrons. Figure 7.3 is illustrative of the observations of
the cathodo-luminescence of a crystal excited by a fine electron
beam. It shows that with increasing energy of the incident electron
beam, the lateral spread in the phosphor becomes smaller while
the depth of penetration progressively increases, i.e., an electron with a

lower energy has a larger probability of energy dissipation. At


relatively lower incident electron energies the luminescent volume
has a hemispherical shape, while at higher energies, the
penetration volume is along a narrow channel ultimately
terminating into a large spherical volume. The relationship between
the energy of the primary electrons and the depth to which it
penetrates the phosphor are related by an empirical formula of the
type

E = E0 {1 − (x/R)}^1/2        (7.1)

where E0 is the energy of the primary electron, E the energy at a


depth x and R a characteristic of the material. The energy reduces
to zero when x = R. R in turn has been empirically defined in terms
of material parameters, such as its bulk density, the molecular
weight and the atomic number. Another empirical formula, specially
for ZnS phosphors, which can be applied up to 20 kV, is given by

X′ = 116 E0^1.65  (in nm)        (7.2)

where X′ is the depth at which the energy of the primary electrons
(E0, in keV) drops to e⁻² of its original value.
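The two relations can be sketched numerically, assuming the empirical ZnS form X′ = 116·E0^1.65 nm with E0 in keV (the coefficient and exponent are read from Eqn 7.2 and should be treated as indicative only):

```python
def electron_energy_kev(E0_kev, x_nm, R_nm):
    """Eqn 7.1: E = E0 * (1 - x/R)**0.5; the energy falls to zero at x = R."""
    if x_nm >= R_nm:
        return 0.0
    return E0_kev * (1.0 - x_nm / R_nm) ** 0.5

def zns_e2_depth_nm(E0_kev):
    """Eqn 7.2 (empirical, ZnS, valid up to ~20 kV): depth at which the
    primary-electron energy drops to e**-2 of its initial value."""
    return 116.0 * E0_kev ** 1.65

# An assumed 10 keV primary electron:
print(f"X' ~ {zns_e2_depth_nm(10.0) / 1000:.1f} um")
print(f"E at half the range R: {electron_energy_kev(10.0, 50.0, 100.0):.2f} keV")
```

Note that Eqn 7.1 predicts the electron still carries about 71 per cent of its energy at half the range R, consistent with most of the dissipation occurring deep in the material.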
One more aspect is that when E0 is decreased, while the
beam current is maintained, the cathodo-luminescence drops to

[The figure shows a fine incident electron beam entering the phosphor
material: at low energy the luminescent volume is shallow and
hemispherical, at medium energy it extends deeper, and at high energy
it is a narrow channel terminating in a large volume.]
Figure 7.3. Luminescence spread as a function of incident beam
energy (schematic).

zero at around 100 V to a few kV depending on the material in the


powder form and its method of preparation. Cathodo-luminescence
increases linearly for energies above the threshold though at higher
values the increase is slower. Figure 7.4 shows the luminescence
intensity of ZnS:Ag, Cl against the incident electron energies.
The energy efficiency of cathodo-luminescence is
obviously a product of the mechanisms of energy transfer that take

[The figure plots luminescence intensity (mW/cm²) against accelerating
voltage (0-12 kV), rising approximately linearly above a threshold.]
Figure 7.4. Incident electron energy vs luminescence of ZnS:Ag,Cl
phosphor.
place inside a phosphor when a high energy electron is incident
on it. These mechanisms relate to surface reflection and scattering
and to the division of the incident electron energy that enters the
phosphor into pair production (electrons and holes) and loss by photon
emission. Effective models have been developed which give results
very close to the practical values. Thus for ZnS:Ag, Cl the observed
value of efficiency 0.21 compares favourably with the maximum
possible theoretical efficiency of 0.28. The efficiency also depends
on the current density of the incident electron beam and the

temperature. It is reduced with increase in current density as


also with increase in temperature. The former results in brightness
saturation while the latter leads to thermal quenching.
7.5 REDUCTION OF LUMINESCENCE EFFICIENCY
Luminescence efficiency is quenched by an increase in
temperature, level of excitation, presence of undesirable impurities
and high activator concentration. Increase in temperature generally
leads to an increase in non-radiative energy transfer thus reducing
the luminescence efficiency. See the path OA in Fig. 7.2 for a
luminescent centre. Brightness saturation is possible as a result of
a high level of excitation. The mechanisms of brightness saturation are
not fully understood, though a number of proposed explanations
account for the behaviour partially. Undesirable impurities act as killers of the
luminescence. Thus the presence of a few parts of Fe per billion in a
ZnS phosphor may kill its luminescence totally. One of the reasons
is that the impurity centres capture the free carriers in competition
to the luminescent centres and enable a non-radiative transfer of
energy. Resonance energy transfer is also possible from a nearby
luminescent centre. When the concentration of an activator is too
high, a fraction of the activators behave as killers and induce
quenching.
7.6 LUMINESCENCE DECAY
It is observed that after an electron beam causing
luminescence ceases to fall on a phosphor, an afterglow persists for
some time. This time is known to vary from 10⁻⁸ seconds, i.e., of the
order of a spontaneous emission lifetime, to a few tenths of a second
or longer.
As the response time of the human eye is around 0.1 second, a
decay time of more than 0.1 second would be registered by the
human brain. This delayed luminescence is called phosphorescence,
while the one not registered, i.e., with a decay time below
0.1 second, is referred to as fluorescence. As the decay
time of the luminescence in the case of sulphide phosphors is mainly
dependent on the time spent by carriers in the luminescent centres,
which does not exceed 10⁻¹ seconds, these phosphors are usually
not phosphorescent. One observes that the decay in these phosphors
follows a time power law of the form

I_t = I_0 (1 + At)^(−n) ≈ I_0 (At)^(−n)        (7.3)

where I_t is the intensity a time t after termination of excitation,
I_0 the intensity under excitation, and A a constant. The exponent n
could have a value of 1.1 to 1.3 according to a number of workers in the
field. A defect or an impurity which allows a charge carrier to remain

for a while before it reaches the luminescent centres gives rise to
trapping levels and leads to phosphorescence, i.e., an afterglow lasting
for more than 0.1 second (Fig. 7.5). The decay is prolonged by the
time the charged carrier spends in the traps. This time would be
dependent on the depth of the trapping centre in relation to the
conduction band and temperature, and would be inversely proportional
to the probability of non-radiative transfer between these levels. It
has also been reported that in some phosphors the decay is strongly
dependent on the duration of excitation, for example, ranging from
microseconds for short excitation to milliseconds for longer excitation.
Steady-state values are reached for longer exposures. In practice,
the specified decay value in a phosphor or a mix of phosphors has to
be such that it does not cause scintillations due to fast decay and at
the same time it does not cause multiple images of a moving object
resulting from a slow decay.
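The power-law decay of Eqn 7.3 is easy to explore numerically. The sketch below uses an assumed rate constant A = 10⁴ s⁻¹ and an exponent n = 1.2 (inside the 1.1-1.3 range quoted above) to check a decay against the 0.1-second fluorescence/phosphorescence boundary:

```python
def afterglow(I0, A, t, n=1.2):
    """Eqn 7.3: I_t = I0 * (1 + A*t)**(-n), ~ I0 * (A*t)**(-n) for large A*t."""
    return I0 * (1.0 + A * t) ** (-n)

def time_to_fraction(frac, A, n=1.2):
    """Invert Eqn 7.3: time for the intensity to fall to `frac` of I0."""
    return (frac ** (-1.0 / n) - 1.0) / A

# Assumed A = 1e4 per second: time to decay to 1 per cent of I0.
t1 = time_to_fraction(0.01, A=1e4)
kind = "fluorescent" if t1 < 0.1 else "phosphorescent"
print(f"decay to 1%: {t1 * 1e3:.1f} ms -> {kind}")
```

The same inversion can be used the other way round: given a specified afterglow limit (such as 1 per cent within 0.1 second), it bounds the acceptable A for a candidate phosphor.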
7.7 PHOSPHOR APPLICATIONS
Phosphors have found a large number of commercial
applications ranging from television screens to vacuum fluorescent

[The figure sketches the empty conduction band, trapping levels just
below it, activator levels above the filled valence band, and the
radiative transfer from the conduction band to the activator levels.]

Figure 7.5. Luminescent process with trapping levels

displays. Additive mixing of blue, green and red phosphor


emissions allows realization of colours within the chromaticity
diagram, and thus appropriate phosphor screens for the colour
TV displays. Cathode ray tubes (CRT’s) form the basic unit for a
large number of applications including scientific and technological,
such as terminal displays, projection displays, beam index tubes,

flying spot scanners, radar tubes, storage tubes and the like wherein
the selection of a phosphor or phosphors would depend on the
requirement and be decided in terms of phosphor grain size,
phosphor thickness, nature of emission and the decay time or
persistence of vision. Phosphors for applications like image
intensifier tubes, or electron microscopes call for high resolution
phosphors. The grain size has to be small to reproduce images
with high resolution, but it cannot be reduced too much, as that
decreases the luminous efficiency. The minimum
size is restricted practically to 2 µm. Green-emitting phosphors
are generally preferred for direct visual observation because of
their spectral match to the photopic human eye. Blue-emitting
phosphors are in use for photographic recording because of their
good spectral match to the silver-halide photographic films.
7.8 PHOSPHOR SCREENS
CRT screens usually have a phosphor weight of about
3-7 mg/cm² on the glass surface. The phosphor particles may
be 3-12 µm in size, laid 2-4 particle-layers thick. The aim is to
maximize the emission intensity vis-a-vis the screen
weight. Image intensifier screens are usually built up on the
fibre-optics windows of the tube systems. Both for CRTs and for
the fibre-optics windows of I.I. tubes, the side on which
the electron beam impinges is coated with a thin aluminium
film. The film works as an electrode which prevents the screen
from charging negatively during excitation, and thus increases the
output. Further, it prevents the light generated in the screen
from feeding back to the photocathode, reflecting that light forward
to increase the effective output. Applied voltages have to be
relatively high for the electrons to penetrate this aluminium film:
around 3 kV is the minimum estimated value for penetration through
an aluminium film of around 300 nm thickness. An applied voltage
of about 30 kV is used for X-ray image intensifiers, and a somewhat
lower one, of the order of 9-16 kV, for image intensifiers in the
optical region. The screen thickness in the case of phosphors for
I.I. tubes may be of the order of 100 nm.
Usually the green-emitting phosphor ZnS:Cu, Al is preferred, with a
particle size of around 2-3 µm, for image intensifiers, though the
blue-emitting phosphor ZnS:Ag has also been referred to. The
emission peak of the green phosphor at 530 nm can be shifted to
longer wavelengths either by employing a solid solution Zn1-xCdxS,
or by introducing a deeper acceptor level due to gold. Usually the
exact parameters of a phosphor, or for

that matter even for a photocathode, used in I.I. tubes, such as
the chemical formulation, layer thickness and the method of
application or deposition, are information that each manufacturer
keeps to himself. The broad specifications of a user for all these
products are satisfied for the whole unit, i.e., the image
intensifier tube (I.I. tube). Thus for the phosphor the
relative spectral response may suggest a peak value at
510-560 nm with a bandwidth of about 200 nm and a response
not exceeding 10 per cent of the peak value at 650 nm. The
user may also specify that the afterglow should not exceed 1 per
cent of intensity, within one-tenth of a second after termination
of the exciting energy, corresponding to a low value of input
illumination at the photocathode end. Specifications
may also be laid down for field emission or scintillations. Such
user requirements generally interrelate phosphors, the electron
lens, power supply and the photocathode – the main constituents
of an I.I. tube, leaving the design and material choice to a
prospective manufacturer.
Figure 7.6 shows a section through an image intensifier
phosphor screen. The fibre-optics faceplate is the substrate for the
phosphor. After coating with the phosphor, it is coated with an
aluminium layer facing the incident electron beam. This face may
be plane or curved depending on the aberration characteristics of
the electron beam. In an alternative process the core of the fibre
may be selectively etched away to a depth of a few microns before
[The figure shows, in section: the incident electron beam; the
aluminium layer, with its curvature chosen to suit the electron
optics; the phosphor; and the fibre cladding, fibre core and light
output of the fibre-optics faceplate.]
Figure 7.6. A section through a phosphor screen for I.I. tubes

depositing the phosphor. This confines the emitted light
within the channel and helps to reduce cross-talk.
For use in I.I. tubes, phosphors need to have a high
luminous efficiency (in terms of lumens per watt) as also an
optimum rendering of contrast and resolution in relation to the
other components of the system, i.e., photocathode, electron lens,
micro-channel plates, etc., that may have been used in the system.
While luminous efficiency is a property of the phosphor material,
its optimum thickness, larger grain size and the method of its
deposition; the imaging properties would be more related to its
smaller grain size and optimization with the fibre-optical
components, i.e., micro-channel plate and faceplate. The latter
properties can be evaluated in terms of modulation-transfer function
(MTF).
7.9 SCREEN FABRICATION

After purification of the raw material and removal of the


killer materials, the constituent phosphor compound is synthesized.
After synthesis, crystal growth is brought about by firing. Next, the
coagulated phosphor grains are suitably milled and dispersed
uniformly to form a liquid slurry. Fine particles for high resolution
of the order of 2–5 m are separated from the larger grains by the
sedimentation process. The dispersal and adhesion of the phosphor
particles should be suitable for the technique that may be adopted
for screen fabrication. Various processes for screen fabrication
include settling under gravity, brushing and dusting techniques.
Electrophoretic method has been suggested to obtain dense
monochromatic phosphor screens with fine particles that are
required for high resolution applications as in I.I. tubes. This
method is preferred, as the migration of well dispersed fine powders
is more affected by the applied electric field than by settling due to
gravity. Further, the deposition under the created electric field is such
that any pinholes formed attract phosphor particles preferentially.
The result is a dense uniform screen free of pinholes with a smooth
surface. Brushing technique has also been preferred to the settling
technique as it results in significantly better MTF value for the I.I.
tubes phosphor screens though with a slight decrease in luminous
efficiency. Though, as referred to elsewhere, the exact type of
phosphor, its thickness and the nature of screen fabrication are a
company’s guarded information, literature does refer to the sulphide
P-20 and RCA 1052 phosphors, with a peak emission at 560 nm, for
use in wafer I.I. tubes. Likewise a mixture of silicates, P-1 and P-39,
is also known to have been used in second generation 25 mm

electrostatically focussed inverter tubes. The efficiencies are stated
to be around 15 lm/W for 6 keV electrons. The techniques of making
screens with appropriate phosphors or phosphor mixtures continue
to be refined for better and better contrast and resolution in imagery
and for more accurate colour rendition. Thin film technique has
also been introduced. Efforts continue to develop phosphor materials
with reduced brightness saturation, special colour characteristics,
better ageing properties and control over persistence.
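The quoted screen efficiency translates directly into luminous output, since luminous flux is simply the efficiency times the electron-beam power. A worked sketch, in which the beam voltage and current are assumed illustrative values rather than figures from the text:

```python
def screen_flux_lm(accel_volts, beam_current_a, efficiency_lm_per_w=15.0):
    """Luminous flux = screen efficiency (lm/W) * beam power (V * I)."""
    return efficiency_lm_per_w * accel_volts * beam_current_a

# 6 kV acceleration at an assumed 1 uA beam current, with the ~15 lm/W
# quoted above for the P-1/P-39 silicate mixture:
print(screen_flux_lm(6000.0, 1e-6))  # -> 0.09 lm
```

Even a microampere-level beam thus yields a visible-screen flux many orders of magnitude above the photon flux collected from a night scene, which is the whole basis of intensification.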
7.10 PHOSPHOR AGEING
Phosphors are known to age with use over time, and
some portions of the screen may turn brown or black. If this
happens immediately or within a short duration of exposure to
the electron beam, it is referred to as burning. The same effect
taking place over a reasonably long time is termed ageing. The
substrate glass may also get affected, the affected portions turning
brown. This is known as browning.
Ageing is generally dependent on the charge per unit area falling
on the phosphor. Sometimes the phosphor darkening can be
overcome by thermal bleaching, that is the phosphor may be
annealed at a few hundred degrees Celsius. According to
Leverenz, the harder, high melting, water insoluble materials are
most resistant to loss of luminescence during operation. Browning
of glass would certainly be enhanced by a relatively poor packing
of grains leaving a large number of pinholes. An appropriate
technique for phosphor deposition is therefore an important
parameter from this point of view also. Burn-in profile tests are
laid down by the users for the I.I. tube as a whole. These are in
the nature of a large number of cyclic operations for minimal and
maximal values of the luminous gain against time. Screens can
be examined with high power magnifiers at both high and low
levels of illumination.
REFERENCES
1. Flynt, W.E. "Characterization of some CRT Phosphors", Ultrahigh Speed and High Speed Photography, Photonics and Videography. Proc. SPIE, 1989, vol. 1155, pp. 123-30.
2. Hase, T., et al. "Phosphor Materials for Cathode Ray Tubes", Advances in Electronics and Electron Physics. (Academic Press Inc., 1990) p. 79.
CHAPTER 8

IMAGE INTENSIFIER TUBES

8.1 INTRODUCTION
An image intensifier tube essentially accepts a photon spread from a quantum-starved scene below the visibility level through an optical system on its photocathode. Such photons release low-energy photoelectrons, which in turn are accelerated through an electron-lens system and made to impinge on a phosphor, maintaining correspondence between the optical photon-spread on the photocathode and the amplified optical output from the phosphor.
This amplified output from the phosphor can be coupled to an
eyepiece system for direct vision or to a video system for vision on a
monitor. Thus if hν1 is the energy of the incident photon on a photocathode and hν2 is the energy of the output photon corresponding to the electrons impinging on the phosphor, one could indicate this double conversion as
hν1 (on photocathode) ———————> e–
e– (accelerated) ——————————> hν2 (from phosphor)
The range of ν1 which releases electrons from the photocathode depends on its spectral sensitivity. Likewise, the limits of ν2 are defined by the spectral emission of the phosphor. These
aspects have been well discussed in Chapters 6 and 7. The original
photon-spread focused on the photocathode is formed by suitable
optical systems as discussed in Chapter 5. Further, in modern image
intensifiers, the accelerated electrons are significantly multiplied to
increase the number of impinging electrons on a corresponding area
of the phosphor through the use of micro-channel plates.
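The energetics of this double conversion can be checked with a short numerical sketch. The wavelengths below are illustrative assumptions (a near-infrared input around 850 nm and a phosphor output near the 560 nm peak mentioned for the P-20 type phosphor), not values prescribed here:

```python
# Photon energies in the double conversion h*nu1 -> e- -> h*nu2.
# Wavelengths are illustrative: ~850 nm night-sky near-IR input,
# ~560 nm phosphor output.
H = 6.626e-34   # Planck constant (J·s)
C = 2.998e8     # speed of light (m/s)
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

e_in = photon_energy_ev(850.0)   # h*nu1, about 1.46 eV
e_out = photon_energy_ev(560.0)  # h*nu2, about 2.21 eV

# Each output photon is more energetic than the input photon; that
# extra energy, and the many output photons per input photon, are
# supplied by the accelerating field, not by the scene.
print(f"h·nu1 = {e_in:.2f} eV, h·nu2 = {e_out:.2f} eV")
```

The sketch makes the point that the amplification is paid for entirely by the tube's accelerating voltage.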
Historically, image intensifier tubes (I.I. tubes) have come to be classified in terms of generations based on the type of photocathode utilized. Thus, in the 1940s, the zero
generation made its first appearance using the S-1 photocathode,
wherein artificial illumination in the near infrared beyond the visible
range was a definite requirement for its proper functioning. The systems matured over the next two decades or so and were reasonably operative in the night environment till research led to better photocathodes and to sensor development for detection of light beyond the visible range. Interest in these systems was particularly drawn when it came to be known that Russian tanks could move about freely during the nights without any lights in the then East Germany. The world over, armies built these systems, which were soon to become obsolete on the advent of better photocathodes. These systems however dominated the early sixties and were built also in India virtually in parallel with those in the more advanced countries of the West. Generation-1 tubes also started making their appearance in the sixties, based on alkali photocathodes with sensitivities around 200 µA/lumen, which could later be cascaded with each other through fibre-optic input and output windows to enable reasonably higher gains.
The systems built around these tubes had no need for any
supporting artificial illumination as in the case of Generation-0. These
tubes are known as Generation-1 I.I. tubes. This approach proved to
be quite spectacular at the time of its introduction and research activities
were thus directed to the development of better and better
photocathodes and smarter techniques for amplification. It was soon
realised that the photon rate from a night sky incident on a
photocathode through a suitable optical system is greater by 5–7 times
in the 800–900 nm region as compared to that in the neighbourhood
of 500 nm. The output signal could thus be significantly improved, if
the photocathode is also red-sensitive. This brought in the more sensitive S-25 or ERMA (extended red multialkali) photocathodes for use in I.I. tubes in preference to the standard S-20. This development, coupled
with the technological development of micro-channel plates (MCPs) to
increase the number and energy of impinging electrons on the phosphor,
brought in Generation-2. The military significance was all the greater, as it not only increased the sensitivity and hence the night vision range of the systems designed around it, but also drastically reduced the weight, as one could now substitute a single-diode Generation-2 tube for a three-stage Generation-1 with better results. Proximity tubes without an electron-lens but with an MCP, compacting the tube further and reducing the weight still more, could also be produced for a number of applications.
Systems based on the Generation-2 I.I. tube have been produced in large numbers within the country, and these could withstand tough competition from contemporary production of the West. Generation-1 systems produced earlier were also upgraded. Meanwhile, a good theoretical understanding of photocathode physics has led to the development of negative electron affinity (NEA) photocathodes, further pushing the sensitivity values up to an order of 1000 µA/lumen or
better. Though these photocathodes have slightly lower values for
signal-to-noise ratio in comparison to Generation-2 and 1 tubes, their
excellent sensitivity to the entire spectrum including the red has led
to systems with signal detection at much lower levels of ambient light.
I.I. tubes incorporating NEA photocathodes are now called
Generation-3.
Table 8.1 gives a comparison of I.I. tubes belonging to
different generations, as these developed from the earlier days.

Table 8.1. The family of I.I. tubes

Gen-0 (1940s)
    Noise factor: —
    Photocathode type & constituents: S-1 (Ag-O-Cs)
    Tube characteristics & technologies: single diode
    Sensitivity (µA/lm): 30
    Weight (g): —
    Remarks: active type (requires illumination)

Gen-1 (1960s onwards)
    Noise factor: 1
    Photocathode type & constituents: S-20 (multialkali) to begin with; later S-25 (extended red multialkali)
    Tube characteristics & technologies: (a) single diode, low gain, high dark current; (b) three diodes in cascade (fibre-optics faceplates)
    Sensitivity (µA/lm): progressively, in more advanced versions, from 200 to over 300; usually 240
    Weight (g): 900
    Remarks: high performance (10⁻³ lux); (a) risk of blooming, (b) image distortion

Gen-2 (1970s onwards)
    Noise factor: 1.35–1.7
    Photocathode type & constituents: usually S-25
    Tube characteristics & technologies: (a) electrostatic inverter (ceramic-metal seals) with micro-channel plate (MCP); (b) brushing techniques for the phosphor
    Sensitivity (µA/lm): 300
    Weight (g): 350
    Remarks: lighter tube; high performance (10⁻³ lux); anti-blooming

Gen-2 wafer (1980s onwards)
    Noise factor: 1.3–1.6
    Photocathode type & constituents: S-25
    Tube characteristics & technologies: wafer tube; double proximity focusing
    Sensitivity (µA/lm): 300
    Weight (g): 90
    Remarks: very light tube

Gen-3 (1980s onwards)
    Noise factor: 1.75–2.0
    Photocathode type & constituents: NEA (GaAs, caesiated)
    Tube characteristics & technologies: improved MCPs
    Sensitivity (µA/lm): 1000
    Weight (g): 75
    Remarks: (a) visibility down to 10⁻⁴ lux; (b) strong sensitivity; (c) spectral response from 0.6 µm to 0.9 µm
Image quality and resolution at low light levels are determined by photocathode sensitivity, spectral response, spectral emission of the phosphor screen, signal-to-noise ratio of the I.I. tube, and its integrated radiant power or luminous gain. As several hundred photons per storage time of the eye are needed to experience a comfortable visual sensation, I.I. tube amplification should be able to cover up the losses caused by the low quantum efficiency of the eye and the low transfer efficiency of the light from the phosphor. As this percentage may be around 1, it would mean that around 10⁵ photons should be produced per photoelectron to cause a visual sensation. The ultimate resolution is however determined by the statistics of the photoelectrons released from the photocathode and amplified during the storage time of the eye. As the P-20 phosphor matches the spectral distribution of the photopic eye, it is the phosphor of choice in I.I. tubes. Another phosphor, the 1052, which is somewhat closer to the mesopic response of the eye, is also in use.
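The order of magnitude quoted above follows directly from the two figures in the paragraph; a rough sketch (the 300-photon and 1 per cent values are the illustrative figures from the text, not measured data):

```python
# Order-of-magnitude check of the amplification an I.I. tube must provide.
# The eye needs a few hundred photons per storage time, and only about
# 1 per cent of the phosphor light is usefully transferred to and
# detected by the eye (illustrative values).

photons_needed_by_eye = 300    # per storage time of the eye (illustrative)
transfer_efficiency = 0.01     # ~1 per cent useful transfer and detection

photons_per_photoelectron = photons_needed_by_eye / transfer_efficiency
print(f"photons required per photoelectron ~ {photons_per_photoelectron:.0e}")
```

This lands in the 10⁴–10⁵ range, consistent with the ~10⁵ photons per photoelectron quoted above.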
8.2 FIBRE OPTICS IN IMAGE INTENSIFIERS
Generation-1 single I.I. tubes need to be cascaded for better overall gain, so as to be useful under low light level conditions.
Various approaches were therefore tried including insertion of
phosphor photocathode dynodes. These dynodes consisted of a thin
plate of mica or glass with a phosphor layer on one surface and a
photocathode layer on the other mounted in a single glass tube
envelope. However, reasonable imagery, i.e., with freedom from curvature of the electron image field and from radial distortion, was possible only by introducing magnetic focusing between the flat dynodes, resulting in a cumbersome and expensive design (Fig. 8.1).
A similar approach using secondary emission multiplier dynodes
also did not give any better overall results[1]. Simple economical
and modular cascading became possible only after the appearance

Figure 8.1. Principle of earlier cascaded image-intensifiers (photocathode and phosphor layers carried on a thin sheet of glass or mica forming a dynode, within a single glass envelope)
of the fibre-optics fused faceplates and their use as input and output
windows. Later, Generation-2 tubes became a success due to the
introduction of micro-channel plates. Introduction of fibre-optic
twisters in the proximity I.I. tubes of Generation-2 was a further
advancement. The contribution of fibre-optics components has
therefore been quite important to the continued use of the I.I. tubes
for night vision. All the three components, i.e., fibre-optics faceplate,
hollow fibre micro-channel plates and fibre-optics twisters continue
to be used for some purpose or other, either singly or in combination, in modern-day I.I. tubes.
8.2.1 Concepts of Fibre-optics
Though it is not possible to deal with the subject of Fibre-
Optics in detail within the confines of this volume, a brief introduction
to understand some of the concepts relevant to the functioning of
fibre-optical components for use in I.I. tubes may be necessary[2].
Conduction of light along cylinders by multiple total internal
reflections has been known for quite some time. However, it was only in the early fifties, when glass-coated glass fibres made their appearance, that there was a technological quantum jump. Earlier
uncoated fibre in air used to get contaminated very easily and did
not provide a proper interface for multiple total internal reflections.
Techniques of fabrication of multiple-fibres subsequently led to the
successful manufacture of the fused fibre-optics faceplates. The term
Fibre-Optics was first introduced by Kapany. Figure 8.2 shows the
path of an optical ray through a glass-coated glass fibre. Rays after
refraction from the entrance face strike at the interface of the core
and the cladding. All the rays which strike at the interface at an
angle equal to or greater than the critical angle get trapped within
the core of the fibre and are thus transmitted to the exit-end.

Figure 8.2. Path of a light ray through an optical fibre (a ray incident from a medium of index na within the maximum acceptance angle refracts into the core of index nc and is totally reflected at the interface with the cladding of index ncl; θcr is the critical angle)
Important aspects of relevance to fibre-optical components, particularly the faceplates, are (i) the numerical aperture and (ii) the
absorption. As the optical beam through the fibre transits a relatively small length, the absorption is not as significant as in the case of optical fibres for signal communication. Nevertheless, the material
of the core and the cladding should be exceptionally uniform and
absorption as low as possible keeping in view the fact that the
refractive indices of the core and cladding will be decided based on
the requirements of the numerical aperture. Assuming good quality
fibres, i.e., with the minimal of absorption, and with neat interface
between the core and the cladding, the total light transmission is
dependent on its numerical aperture. This parameter defines the
cone of light that is accepted by an optical fibre or for that matter
by a lens system at its entrance aperture. It can easily be shown
that the numerical aperture (N.A.) of an optical fibre, quantitatively defined by na sin θ, is given by

na sin θ = nc [1 – (ncl/nc)²]^1/2        (8.1)

where na is the refractive index of the medium from which the light is incident on the fibre, i.e., air or vacuum, nc is the refractive index of the core of the fibre and ncl that of the cladding. Angle θ is the angle of incidence (Fig. 8.2). Equation 8.1 shows that the N.A. will tend to be a maximum if the core refractive index is higher and the cladding index is lower. Maximizing this value enables a greater acceptance angle for the incident beam. This angle can be maximized to 90° with suitable selection of refractive index values for the core and the cladding, i.e., the N.A. can be unity. In other words, the optical fibre can then transmit all the light that is incident on it, which is not quite true of an optical system. To attain this sort of working from an optical system, a lens system would be required with an aperture of F/0.5! It will be observed that the sine-inverse of the factor ncl/nc is the critical angle θcr at the interface of the core and the cladding. If the angle of refraction at the entrance face is θc, then if the angle (90° – θc) is equal to or greater than the critical angle θcr, the ray will remain trapped within the core and undergo multiple reflections till it reappears at the exit end (Fig. 8.2). The other aspect is that the light incident on an optical fibre received all over its maximum acceptance angle is somewhat averaged out by multiple reflections by the time it reaches the output end.
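Equation 8.1 can be tried numerically. The index values below are illustrative assumptions for a high-index core and a low-index cladding, not data for any particular faceplate glass:

```python
import math

def numerical_aperture(n_core, n_clad):
    """Eqn 8.1: N.A. = n_c * [1 - (n_cl/n_c)^2]^(1/2) = sqrt(n_c^2 - n_cl^2)."""
    return math.sqrt(n_core ** 2 - n_clad ** 2)

# Illustrative high-index core / low-index cladding pair (assumed values)
n_core, n_clad, n_air = 1.81, 1.48, 1.0

na = numerical_aperture(n_core, n_clad)
# na_medium * sin(theta_max) = N.A.; the sine cannot exceed 1, so the
# acceptance angle saturates at 90 degrees once N.A. >= n_air.
theta_max = math.degrees(math.asin(min(na / n_air, 1.0)))
theta_crit = math.degrees(math.asin(n_clad / n_core))  # core-cladding critical angle

print(f"N.A. = {na:.2f}, acceptance angle = {theta_max:.0f} deg, "
      f"critical angle = {theta_crit:.0f} deg")
```

With this pair the computed N.A. exceeds unity, so in air the acceptance angle saturates at 90°, the situation described above in which the fibre accepts all the light incident on it.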
8.2.2 Fibre-optics Faceplates
If a large number of such optical fibres are packed together in parallel over a short distance of the order of a few mm,
the result is an optical fibre faceplate which is in the form of a disc.


The disc size is determined by the size of I.I. tubes that it has to fit
into. The standard sizes usually are 18, 25 and 40 mm in diameter.
Both the end faces of the disc are polished. The packing should be efficient so that the maximum of the incident light falls on the cores. A hexagonal shape for individual fibres seems to be preferable. Additionally, one has to ensure that the light incident on the cladding, as well as that which may leak from the core into the cladding because of incidence at angles greater than the acceptance angle, is absorbed. For this purpose,
strategically placed black glass rods known as extramural absorbers
are also introduced in the pack (Fig. 8.3).

Figure 8.3. Likely placement of extramural absorbers (black glass rods distributed within the fibre pack)

Such a faceplate can be optically characterized in terms of the optical resolution that it may offer and in terms of MTF, as is
the case with other optical and electro-optical subsystems. Thus, if an image is formed on one of the end faces of a fibre-optics faceplate, the intensity pattern is faithfully carried through the fibres to the other end, but with a resolution corresponding to the core-fibre diameter, or rather the centre-to-centre distance of the adjacent fibres in the fibre pattern. Thus, for better resolution the fibre diameters should be made smaller and smaller. However, diffraction considerations limit this diameter to the order of 5 µm or so. Figure 8.4 shows a typical fibre faceplate as designed and fabricated at one of the laboratories in India.
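The resolution argument above can be put in numbers. A common sampling rule of thumb (an assumption here, not stated in the text) is that one resolvable line pair needs about two fibre pitches:

```python
# Limiting resolution of a fibre-optics faceplate from the centre-to-centre
# fibre spacing: with about two fibre pitches per resolvable line pair,
# R (lp/mm) ~ 1000 / (2 * pitch_in_micrometres).

def limiting_resolution_lp_mm(pitch_um):
    return 1000.0 / (2.0 * pitch_um)

for pitch in (10.0, 6.0, 5.0):   # illustrative centre-to-centre spacings (µm)
    print(f"pitch {pitch:4.1f} um -> ~{limiting_resolution_lp_mm(pitch):.0f} lp/mm")
```

At the diffraction-limited spacing of about 5 µm mentioned above, this gives roughly 100 lp/mm.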
Fibre faceplates can also be used as field flatteners in
optical systems to correct for the curvature of field where other
aberrations are under reasonable control. This also applies to
electrostatic lens systems which are in use in image intensifiers
particularly for cascading (Fig. 8.5). It will be observed that to
Figure 8.4. View through a fibre faceplate

simplify the electron-lens systems, both the input and output fibre-optics faceplates have been suitably curved on the inside
surfaces facing the vacuum. This helps the design to be both
modular and simpler. In fact, the success of the three-tube Generation-1 image intensifiers, resulting in a high luminous amplification, was entirely due to fibre-optic faceplates. The
output from the Generation-1 first diode output faceplate could
be coupled to the input window of the second diode, and likewise,
the output from the second diode could be coupled to the third
one (Fig. 8.5).
A large number of optical, thermal, and chemical criteria
must be satisfied in the fabrication of a usable fibre-optics fused
plate. One of the important methods for fabrication of such fibres
is the rod-in-tube process. A high numerical aperture fibre results from drawing high-index glass rods snug-fit in tubes of low refractive
index. The glasses have a high degree of homogeneity and are free
Figure 8.5. A sectional view through a Gen-1 cascade system (three diodes coupled through fibre input and output windows; overall length about 180 mm, diameter about 50 mm)
from bubbles and seeds. The glass types are also so chosen that
they are compatible both thermally and chemically. Obviously, both
the rods and tubes must have thoroughly clean and smooth
surfaces before being snug-fit and placed in a drawing machine.
The drawn fibre is dipped through a dark solution of a ceramic
material which provides an absorption coating also known as
extramural absorption coating or EMA. To ensure precise diameters
of the output fibres, the thermal gradient in the furnaces, the rate
of sliding the rod-in-tube combination into the furnace, and the
rate of drawing the output single fibre are controlled very critically
and effectively. A proper calibration of the drawing and furnace
equipment is essential before good results can be expected. The
nominal diameter of the output single fibre is not allowed to vary by more than a few per cent of its value, to maintain excellent uniformity. The nominal diameter of single fibres may be from 0.5 mm to around 3 mm. The exact diameter is decided by the nature of the materials and the equipment that has been used, as also the equipment that will be used to produce multiple fibres. The single fibre is usually cut into short lengths, say of the order of 250 mm or more. These cut single fibres are then grouped and
aligned in graphite moulds of usually hexagonal or square cross-
section. Alignment is fully assured manually or through utilization
of appropriate jigs and fixtures. This mould is next raised to a
temperature corresponding to the softening point of the fibre coating
material to accomplish tacking between the single fibres. This group
of single fibres is then redrawn after appropriate annealing resulting
in multiple fibres using the same or similar drawing and furnace
equipment.
The drawn multifibres are cut to right lengths and aligned
in a suitable jig. High quality fusion between the multiples is
ensured by controlled heating to the softening temperature of the
coating material and by appropriate pressure. This is followed by
annealing to eliminate strain or inhomogeneities in the composite.
The boule so formed can be sliced in appropriate thickness to form
the component fibre-optic plates. Both surfaces of a disc or plate
so available need to be polished and surfaced, as per the
requirements to form suitable faceplates. Needless to say, control and testing have to be adopted at each stage, with precise instrumentation for optical and mechanical checks, besides ensuring complete vacuum tightness. The degree of cleanliness while drawing,
fusing, sawing, surfacing and polishing has also to be of a very
high order so as to obtain maximum efficiency from the finished
product. It has also to be ensured that the materials used for the
fibres and coating are not such as to poison photocathode or
phosphor materials. The fused plate as a whole should also have a
compatible thermal behaviour for correct sealing to the envelope-
material of the photoelectric device in which it is to be lodged, to
avoid the possibility of bad sealing or development of cracks.
8.2.3 Micro-channel Plates
A micro-channel plate (MCP) is usually a disc-shaped
compactly packed assembly of short length micro-channels finished
flush with the two plane end faces of the disc. These two faces may
be parallel to each other or have a wedge. Each micro-channel is
basically a capillary or more correctly a hollow fibre. The material of
the hollow fibre has a certain amount of electrical conductivity and
hence each micro-channel may be considered as a continuous
dynode electron multiplier. Its introduction in an I.I. tube enables high gain with minimum size and weight, with the additional advantage that the saturation characteristics of each channel limit the blooming effect over itself and restrict its spread to the nearby area. Usually the channel diameter is of the order of 10–15 µm with
the overall disc thickness of a millimetre or less. The diameter-to-
length ratio of the micro-channels is dictated by the electron-
multiplication considerations and the material of the capillary with
a view to obtain as linear a gain as possible with minimum noise.
Amplification fluctuation should be minimum both with time for each
micro-channel and of one micro-channel with respect to another.
Thus, uniformity of channel material as also the diameter and lengths
of each micro-channel have to be critically controlled. The micro-
channel diameter is also related to the desired spatial optical
resolution. As shown in Fig. 8.6, an electron entering any of the individual channels is reflected from the channel walls, releasing secondary electrons. Now if a voltage is applied between the input and output faces of the MCP, providing a potential gradient, these secondary electrons are further accelerated and on subsequent collisions result in many more electrons. This process of generation of secondary electrons continues till the output end.
The gain in each micro-channel of MCP (assuming
similarity in all the micro-channels) is dependent on the average
number of collisions of the electrons within the channel walls and
on the emission coefficients on each collision[3]. The value of the
coefficient is always a maximum for the first collision. For
subsequent collisions, this value goes on decreasing. If its value for the first collision is denoted by δt and the average secondary emission coefficient by δs, it can be shown that the gain (g) is approximately given by
Figure 8.6. Electron amplification through a micro-channel (an input electron entering a channel of diameter 10–15 µm and length ~500 µm builds up an electron avalanche of secondary electrons; the applied voltage controls the gain, the face carries a bias angle and an ion trap, and the output is G × the input electrons)

g ≈ δt · δs^N        (8.2)
where N is the total number of collisions that take place.
For a given diameter, the number of collisions is dependent on the
direction of the incident electron and the length of the micro-channel.
Noting that the diameter of the micro-channel has to have an optimum
value from the point of view of optical resolution or MTF, the
parameters that can be varied to improve on the gain are:
(a) Increasing the potential gradient to further accelerate the
electrons in the channel, thereby increasing the δs value.
(b) Increasing the value of N, i.e., number of collisions. This
suggests an increase in the length-to-diameter (l/d) ratio and
accommodating steeper direction for the incident electrons.
It has been stated that for MCPs with 15-µm centre-to-centre hollow fibres, the gain roughly doubles for every increase of 50 V.
Having optimised the value of this potential, further gain is possible only by increasing the l/d ratio. Here, one observes that for a given
constant potential, the secondary emission coefficient goes on
decreasing since the impact energy decreases with each further
collision. In other words, δs gets a lower and lower value with increase in N, tending to reduce the overall gain. Hence, the factor l/d cannot
be increased beyond a certain value of N for which g becomes less
than 1. To appreciate this behaviour better, closer approximations
than in Eqn 8.2 have been adopted. One such is the calculation of
the decreasing value of δs at each collision as a function of the potential
gradient. Using better computing methods, it has been possible to
calculate the value of l/d for a given potential gradient which
maximizes the gain. The gains desired are of the order of 10² to 10⁴ at l/d ratios between 30 and 40, and operating voltages between 600 and 900 V. The other important factor apart from gain is the noise. Fixed
pattern noise in an MCP is due to variation in gain with time or
between adjacent channels. These result in scintillations or speckles.
This sort of output may be observed even in the absence of an input
current or as a superposition in the output when an input current
is also present. Most of these spurious fluctuations arise from field-
emitted electrons emerging from the uneven topography of the input
ends of an MCP. Better control in fabrication technology reduces
these effects to within tolerance. Close control of the channel diameter is also essential, as diameter differences may contribute
significantly to the fixed pattern noise. Excellent manufacturing
technology also minimizes the dark current and gives it a consistent
value. The signal induced noise-figure is a function of the open area
of an MCP. Electrons hitting the closed area may produce secondaries which may affect the I.I. tube noise rather than the MCP noise. It is
observed that improvements in the noise figure may be brought
about by increasing the open area ratio, using materials which
enable better values for secondary emission coefficient for the first
strike, and decreasing the first strike depth. Thus, micro-channels
at the input end have been coated with MgO or CsI. As in an I.I. tube,
the electrons arrive at the MCP at normal or near normal to the
surface, biasing of the surface can be resorted to for decreasing the
first strike depth. A bias of around 10° is given to the face to enable
steeper angles of strike at this shorter first strike depth. A higher
bias is not useful as it results in distortion in focusing of the output
electrons. Curved MCPs have also been introduced in which each
channel is curved, so that the first strike depth is decreased.
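The trade-off described above can be illustrated with the simplified gain relation of Eqn 8.2. The coefficient values are illustrative assumptions, not measured data for any particular MCP glass:

```python
# Simplified channel-gain model after Eqn 8.2: g ~ delta_t * delta_s^N.
# delta_t is the first-strike coefficient and delta_s an average
# coefficient for the subsequent collisions; the values below are
# illustrative assumptions only.

def mcp_gain(delta_t, delta_s, n_collisions):
    return delta_t * delta_s ** n_collisions

# Gain rises steeply with the number of wall collisions (i.e. with l/d),
# but only while the average coefficient stays above unity.
for delta_s in (1.3, 1.0, 0.9):
    g = mcp_gain(delta_t=3.0, delta_s=delta_s, n_collisions=20)
    print(f"delta_s = {delta_s}: gain ~ {g:.3g}")
```

With δs above unity the gain grows steeply with the number of collisions; once δs falls to or below unity, further collisions no longer help, which is why l/d cannot be increased indefinitely.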
MCPs are fabricated out of glass containing basically silica, alkali ions for desired softening and annealing, and a requisite
amount of lead and bismuth oxides to provide conductivity. The rod-in-tube method is adopted, as for the fibre faceplates detailed in Section 8.2. After finishing the fused fibre plates in all
respects, the core is removed by a chemical etching process.
Obviously, the core-material should be easy to remove while the
cladding which ultimately forms the micro-channel is unaffected.
Selective etching at the input end may be done first to improve on
the open area ratio. The channel conductivity is adjusted by
appropriate reduction of lead or bismuth oxides by controlled firing
in an atmosphere of hydrogen. This activates the MCP. A suitable
metal is next evaporated on its front and back surfaces. Techniques
are also adopted by suitable coatings to improve on the secondary
emission coefficient on first strike and to prevent ion feedback.
Alternative techniques are also available by drawing hollow fibres
from the beginning. Obviously the control and care in the fabrication
of MCP has to be much more than what is necessary in the case of
fused fibre-optics faceplates. As indicated in Table 8.1, Generation-2 I.I. tubes have proved to be a success because of the incorporation of MCPs in their design.
8.2.4 Fibre-Optic Image Inverters/Twisters
Image inversion is accomplished in some types of
proximity I.I. tubes using a fibre-optic component. This internal
component within the tube avoids the need for additional optical
components and enables a compact and light weight device. Such
components are fused optical fibre bundles in which varying degrees
of twists have been imparted during the fabrication process. Images
are known to be transmitted without distortion through several
complete rotations of the fibre bundles. In a bundle around 13 mm
in diameter, a 180° rotation has been achieved in 13 mm length
itself. As the image inversion takes place as a result of the twisting
of the optical fibres, these components are also referred to as
optical twisters or inverters (Fig. 8.14).
8.3 ELECTRON OPTICS
Any inhomogeneous axis-symmetrical electrical or
magnetic field acts upon electrons moving in the near axis area in
the same manner as an optical lens acts on light. This property of
non-homogeneous axis-symmetrical field is used to focus electron
beams emanating from a photocathode on to a phosphor, producing
an inverted electro-optical image of a scene that has been earlier
responsible for the ejection of photoelectrons, when such a lens
118 An Introduction to Night Vision Technology

system is incorporated within an I.I. tube. As electrons are required


to be accelerated right from their emission, the field extends right
up to the photocathode surface. Technological advantages are in
favour of electrostatic lens systems in I.I. tubes intended for night
vision systems. That is so, as magnetic lenses tend to be bulky,
heavier and with larger overall dimensions. The magnetic lenses
also consume considerable power. Three types of electrostatic lenses are generally in use for electrostatic electro-optical systems[4] (Fig. 8.7). These are:
(a) Aperture lens,
(b) Bipotential lens, and
(c) Unipotential lens.
The aperture lens (Fig. 8.7(a)) is formed by a disc-shaped
electrode with a circular aperture at a certain potential immersed
in two different potentials on either side. As the field intensity varies
near the aperture, it is this region which forms the lens. The
bipotential lens is generally formed by two coaxial apertures, an

Figure 8.7. Scheme of electrode systems for (a) aperture lens, (b) bipotential lens, and (c) unipotential lens (U1, U2 refer to potential values).
aperture and a cylinder around the same axis, or by two coaxial cylinders at different potentials. The potentials on both sides of the lens are constant and equal to those of the electrodes forming the lens. The field on one side may extend right up to the photocathode, when it is also referred to as an immersion objective. The optical power of bipotential lenses greatly depends on the potential ratio of the electrodes (Fig. 8.7(b)). The unipotential lens is formed by three coaxial apertures, with the two outer electrodes at a common potential (Fig. 8.7(c)). Both symmetric and asymmetric combinations
are possible so that the field of the lens is symmetric in relation to
the midpoint of the lens or otherwise. As in the case of bipotential
lenses the optical characteristics are determined by the ratio of
potential of the electrodes. In all these cases electrodes are coaxial
bodies of revolution for a lens system. The subject of electron-optics
has been well developed and all its correlations with the classical
optics have been fully exploited. The optical equivalent path is
similar to a path through a medium of changing refractive index,
where the refractive index n at each point is defined by n = √u, with u
representing the potential at the point. These potential changes
take place more rapidly where a crossover or focusing action is
desired. Relationships have been worked out for cardinal points,
aberration characteristics, focal ratios, and the like, so that the
understanding of the subject of optics is helpful in arriving at
decisions in the field of electron-optics.
Figure 8.8. Illustration of the Lagrange-Helmholtz equation
Analogy with optics is quite
useful. For instance, one can utilize the Lagrange-Helmholtz
equation, which is a derivation from the Abbe sine condition and
is defined in terms of the refractive indices of the media on both
sides of a focusing field (Fig. 8.8).
The relationship is

n 1r11 n 2r2 2 (8.3)


where the subscript 1 refers to the object space and subscript 2 to
the image space, for heights, aperture angles, and refractive indices.
The same relationship can be rewritten or derived in terms of
potential in the form

r11 u1  r22 u 2 (8.4)

The focal lengths are also related by the formula

f1/f2 = √(u1/u2) = n1/n2   (8.5)
A lens system consisting of two or more electron lenses
can thus be defined on the optical pattern, to form effective electron-
optic devices.
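These relationships lend themselves to a quick numerical check. A minimal sketch in Python (the potentials u1, u2 and the object height and aperture angle are illustrative assumptions, not values from any actual tube):

```python
import math

def refractive_index(u):
    """Electron-optical refractive index analogue: n = sqrt(u)."""
    return math.sqrt(u)

# Illustrative potentials (volts) on the object and image sides of the lens field
u1, u2 = 200.0, 15000.0
n1, n2 = refractive_index(u1), refractive_index(u2)

# Lagrange-Helmholtz invariant (Eqn 8.3): n1*r1*a1 = n2*r2*a2.
# Given an object height r1 and aperture angle a1, the invariant fixes
# the image-side product r2*a2.
r1, a1 = 1.0e-3, 0.1            # 1 mm height, 0.1 rad aperture angle (assumed)
r2a2 = n1 * r1 * a1 / n2

# Focal length ratio (Eqn 8.5): f1/f2 = sqrt(u1/u2) = n1/n2
f_ratio = math.sqrt(u1 / u2)
print(f"n1/n2 = {n1/n2:.4f}, image-side r2*a2 = {r2a2:.3e}")
```

Any consistent set of units works here, since only ratios of potentials enter Eqns 8.3-8.5.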
8.4 GENERAL CONSIDERATIONS FOR IMAGE
INTENSIFIER DESIGNS
A typical electron image intensifier may employ a two-lens
optical system[4]. The first lens, besides focusing, must also
accelerate the photoelectrons. The field of the first lens must thus
be extended to the photocathode, so as to collect and accelerate
all emitted electrons, i.e., the cathode is immersed in the field
originating from the potential forming the first lens. This means
that the object is immersed in the field as if in a medium of refractive
index n corresponding to the square root of the potential forming
the lens in the object-space. Such a lens is also known as an
immersion objective. It is essentially a bipotential lens. This may
be coupled to an aperture to form a complete system. The second
lens helps in the control of divergence and assists in reducing
aberration characteristics (Fig. 8.9).
As shown in the figure, the photocathode is immersed
in the objective field. A diaphragm is provided near the crossover
formed by the immersion objective. The second lens transferring
the image to the screen is formed between the first and the second
anode, and is thus a bipotential lens.
Figure 8.9. A typical layout of an intensifier tube
The screen should be a
concave surface and coincide with the surface of the best image to
overcome significant aberrations, such as distortion and curvature
of the image surface. The photocathode is also suitably curved for
the best results for the input image. Both these aspects are taken
care of in the Generation-1 tubes with input and output suitably
curved fibre-optic windows. In Generation-2 tubes where micro-
channel plates (MCP) have been introduced the output is more
conveniently coupled to plane-parallel fibre-optics windows. This
necessitates an additional electrode for distortion correction before
the electron beam is incident on the MCP. In all these cases, while
theoretical understanding and the correspondence of electron-optics
with classical optics are a great help in laying down preliminary
designs, the ultimate designs adopted are experimentally developed
to give the best results.
Next, we can consider the design parameters for the
overall intensification of distant objects that is possible utilizing
I.I. tubes. Obviously, these parameters include photocathode
sensitivity, luminous efficiency of the screen, and accelerating
voltage, apart from the optical systems. If the light-flux (including
radiation beyond the visible to which the photocathode is sensitive)
is c, then it generates a current kpc where kp is the photocathode
sensitivity over the entire spectral region of photocathode response.
In actual practice, the sensitivity of the photocathode will be
frequency dependent. The manufacturers usually give the sensitivity
values separately for the white light in A/lumen and for radiation
above 800 nm in A/W (The actual wavelength values may also be
indicated). This current kpc gets amplified by the accelerating
122 An Introduction to Night Vision Technology

potential U of the I.I. tubes and transfers a power U kpc to the


screen of the I.I. tube. If the luminous efficiency of the screen is ,
then the light flux  emitted by the screen is Ukpc. The luminous
gain Gp can then be defined as

Gp 

Light flux emitted by the screen Uk p c   Uk p (8.6)
Light flux incident on the photocathode c 
Thus, the gain is higher if the screen efficiency, accelerating
potential and the photocathode sensitivity are higher. No doubt there
would be limitations due to noise and dark current in the system
and system components. The above is applicable if both the input
object size and output image on the phosphor are of the same size.
In case the I.I. tube has a magnification mi, the image would be
spread over an area mi² times the area of the image on the
photocathode. Thus, we have

Gp = ηUkp/mi²   (8.7)
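As a numerical illustration of Eqns 8.6 and 8.7, a short sketch (the phosphor efficiency, accelerating voltage and photocathode sensitivity below are assumed, order-of-magnitude figures, not data for any particular tube):

```python
def luminous_gain(eta_lm_per_w, accel_voltage, kp_a_per_lm, mi=1.0):
    """Gp = eta * U * kp / mi**2  (Eqn 8.7; reduces to Eqn 8.6 for mi = 1)."""
    return eta_lm_per_w * accel_voltage * kp_a_per_lm / mi**2

# Assumed values: P-20-like phosphor efficiency ~50 lm/W, 15 kV accelerating
# potential, photocathode sensitivity 250 uA/lm, unit magnification.
Gp = luminous_gain(eta_lm_per_w=50.0, accel_voltage=15_000.0, kp_a_per_lm=250e-6)
print(f"Luminous gain Gp ~ {Gp:.0f}")   # a dimensionless lumen gain
```

A tube magnification mi greater than unity spreads the same flux over a larger screen area and divides the gain by mi².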
The gain of an image intensifier is usually expressed in
cd/m2/lx as a ratio of the output brightness in nits (candelas/sq m)
to the input illuminance in lux (lumens/sq m). This measurement
is usually done at a colour temperature of 2854 K at an appropriate
input light level of low order, i.e., say 20 lx. Its equivalence to the
theoretical value is given by Eqn 8.7, where η, the phosphor
efficiency, is in lumens per watt, U is the applied voltage and kp the
photocathode sensitivity in amperes per lumen. As the
photocathode sensitivity at different wavelengths has a different
value, the composition and magnitude of the light stimulus to the
photocathode has to be standardized to give a consistent value for
the gain. This also helps in comparing I.I. tubes from different
manufacturers or in the same lot. Extending to incorporation of
an I.I. tube in an instrument system and defining the total luminous
gain G in terms of the object brightness B0 in a scene that is being
imaged through an optical system we have
G = (Brightness of the image on the screen, Bs)/(Brightness of the object, Bo) = Bs/Bo   (8.8)
Referring to Fig. 8.10, we have the relationship of image
heights ho, hc and hs corresponding to object, photocathode and
phosphor (screen) as

hs = mi hc = mi mo ho   (8.9)
where mi is magnification due to I.I. tube as in Eqn 8.7, and m0 is
the magnification due to the optical system on the photocathode
of the object of height h0. To investigate the total luminous gain,
let us assume an area so in the object space around the axis which
is imaged on to an area sc on the photocathode. The amount of
light flux c that will reach this area on the photocathode will be a
function of the brightness of the object Bo, the transmission factor
through the atmosphere and the objective lens system , and the
maximum angle of acceptance of the light cone emanating from the
object area s0 , depending on the entrance pupil diameter of the
optical system D and its distance from the object R or the angle 0 ,
i.e.,

c = B 0s 0 sin2 0 . (8.10)

Using the Abbe sine condition, we have

so sin²α0 = sc sin²αc   (8.11)

Therefore, Φc = π τ Bo sc sin²αc, and if sin²αc is determined
from the geometrical relationship of Fig. 8.10, the equation can be
rewritten in the form

Φc = π τ Bo sc D²/(D² + 4f²)   (8.12)

as sin²αc = D²/(D² + 4f²).

This is on the assumption that the distance R of the object is very


large in comparison to the focal length of the optical system and
that the image is formed in the focal plane. This assumption is
valid as in practice, viewing is needed for distant objects. As the
image intensifier gain is Gp (Eqn 8.6), we have

Φs = Gp Φc = Gp π τ Bo sc D²/(D² + 4f²)   (8.13)
As the screen (phosphor) area corresponding to sc would
be given by sc mi², we have the screen brightness Bs given by

Bs = Φs/(π sc mi²) = Gp τ [D²/(D² + 4f²)] (1/mi²) Bo   (8.14)

giving the total gain as

G = Bs/Bo = Gp τ [D²/(D² + 4f²)] (1/mi²)   (8.15)

Figure 8.10. Sketch of an instrument system with an I.I. tube

This may be put in the form

G ≈ 0.25 Gp (D/f)² (1/mi²) τ   (8.16)
The numerical value 0.25 in Eqn 8.16 is really quite close to the
exactly calculated values for aperture ratios of 1:5 or slower; it
departs more rapidly at faster aperture ratios. Thus at an aperture
ratio of 1:1 it would be 0.20, and at an aperture ratio of 1:2 it
would be 0.235. Nevertheless, the variation from the numerical value
of 0.25 over different apertures is not so significant: a system with
an aperture ratio of 1:1 gathers 25 times more light, and a system
with an aperture ratio of 1:2 more than six times more, than one
with an aperture ratio of 1:5, against corrections of only 20/25 and
23.5/25 from the more exact calculation. To maximize the overall
gain, the second term in this
equation Gp should be as large as possible. Referring to Eqn 8.6,
this would mean that the screen efficiency, accelerating potential
and the photocathode sensitivity should be as high as possible.
Factors relating to phosphor (screen) and photocathodes have been
well discussed in the related chapters earlier. Obviously,
accelerating potential and the overall design has to be such as to
add minimal noise and not to overbrighten the phosphor. As the
size and weight of I.I. tubes is also a major consideration, suitable
power supplies (wrap-around) have also been developed with
automatic brightness control. As resolution of a high gain noise
limited I.I. tube is primarily limited by the finite number of
photoelectrons released by the photocathode, it is advisable to
design instrument systems which detect at the required distance
well above this limitation. This means that for an excellent tube,
this has to be done primarily by having as large an aperture as
permitted by various design restrictions so that more of the light
flux from an object scene is concentrated on the photocathode. Thus
the physical value of D² is important for a practical application.
The third term in Eqn 8.16 signifies the need for as fast an aperture
ratio as possible. The fourth term suggests a minification by the
tube, i.e., a screen size smaller than the photocathode size. Such
tubes have also been designed, particularly
where the overall magnification presented to the eye is around unity
and a larger field of view is a requirement. The final term, τ, suggests
that the transmission factor of the optical system should be as high
as possible, i.e., the objective lens surfaces should be properly
coated for the spectral range to which the photocathode is sensitive.
Likewise, the eyepiece lenses should be coated to maximise
transmission in relation to the nature of the output spectrum from
the screen (phosphor)[5]. Further relevant optical considerations
have been referred to in Chapter 5. As discussed therein,
considerations for total field of view and overall magnification are
significant.
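The quality of the 0.25 approximation in Eqn 8.16 can be checked against the exact factor D²/(D² + 4f²); a small sketch:

```python
def exact_coeff(aperture_ratio):
    """Exact coefficient of (D/f)^2 in the gain formula.

    [D^2/(D^2 + 4f^2)] / (D/f)^2 simplifies to 1 / ((D/f)^2 + 4),
    which tends to 0.25 as the aperture ratio D/f becomes small (slow optics).
    """
    return 1.0 / (aperture_ratio**2 + 4.0)

for label, df in (("1:1", 1.0), ("1:2", 0.5), ("1:5", 0.2)):
    print(f"aperture ratio {label}: exact coefficient = {exact_coeff(df):.3f} "
          f"(approximation uses 0.25)")
```

This reproduces the figures quoted in the text: 0.20 at 1:1, 0.235 at 1:2, and very nearly 0.25 at 1:5.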
8.5 IMAGE INTENSIFIER TUBE TYPES
Further to the historical development as discussed in
paragraph 8.1, and parametric details in Table 8.1, we may now
discuss the types of tubes that have evolved so far.
8.5.1 Generation-0 Image Converter Tubes
These tubes referred to also as Image Converter tubes have
an Ag-O-Cs photocathode with an S-1 response (Fig. 6.6). The
phosphor could be a typical P-20 type. The acceleration voltage in
these tubes is of the order of 10-15 kV. To improve the brightness, the screens
of these tubes can be aluminised. This also eliminates optical feedback.
Figure 8.11. Cross-sectional view of a special Generation-0 tube


There were some developments in these types before the arrival of more
sensitive photocathodes covering both visible and near infrared regions.
There was the development of multi-slot photocathodes with higher
sensitivity in the longer wavelength regions. Image converter tubes were
also produced by using further after-acceleration of the electrons near
the screen[4] (Fig. 8.11).
8.5.2 Generation-1 Image Intensifier Tubes
Generation-1 tubes started making their appearance in the
early sixties and had an S-20 photocathode (see Fig. 6.6) coupled
through a two electrostatic lens system to a phosphor screen,
usually P-20. The lens system more or less followed a similar
pattern as in Generation-0 with an aperture-cone electrode
combination. As the gain was not that high, it was thought
expedient to cascade these tubes either internally or externally.
As stated earlier, these efforts were only partially successful and
resulted in cumbersome and expensive designs (Fig. 8.1 and para
8.2). A cross-sectional view of a single Generation-1 tube is shown
in Fig. 8.12. Earlier versions of these tubes used S-20
photocathodes; later designs stabilized on the use of S-25
photocathodes with a P-20 phosphor. As the fused fibre faceplates
made their appearance, their incorporation in a Generation-1 tube
made cascading relatively easier, effective and economical.
Figure 8.5 shows a sectional view through a cascaded Generation-1
tube, where three tubes have been cascaded (refer also para 8.2.2).
Figure 8.12. Sectional view through a Generation-1 tube
Gains have been measured to be in excess of 30,000 and may be in
a range of 50,000 to 100,000. Three-stage systems are also suitable
for incorporation of automatic brightness control particularly as
the system is rather sensitive to supply voltage variations and
ripples. Generation-1 tubes, like later generations, have been
standardized to 18 mm and 25 mm diameters for both the
photocathode and the phosphor, thus operating at unit
magnification, keeping a variety of applications in view. Tubes with
40 mm photocathodes are also in the market for specific
applications. The resolution of the Generation-1 tubes is dependent
on good electron-optical systems as also on the grain structure of
the phosphors of the screens. As is obvious, the second and third
tubes in a cascade pick up the input from the phosphor screens
and may progressively degrade the overall resolution. Excellent
manufacturing and phosphor deposition or coating techniques are
therefore called for. A moving object may give rise to image smear,
and viewing a bright object may cause blooming.
8.5.3 Generation-2 Image Intensifier Tubes
The second Generation tube is a combination of a single-
stage I.I. tube of Generation-1 coupled internally to a micro-channel
plate. The photocathode is highly improved and is of the S-25 type
and has an extended red response. The micro-channel plate has
been discussed above in paragraph 8.2.3. A section through a
Generation-2 tube is shown in Fig. 8.13. Thus a high-gain single-
stage image intensifier, with a better photocathode, uses an
electrostatic lens system to impinge electrons on the input of the
micro-channel plate. These electrons, after intensification, as shown
in Fig. 8.6, are proximity focused on the phosphor screen. The
electrostatic lens system has to be such as to produce a flat image
at the input of the MCP. Thus the normal electrode system of a
spherical cathode and a conical anode with an aperture is
augmented by a distortion correction ring (a sheet cylindrical
electrode) before the electrons impinge on the MCP. Impinging
electrons can also generate positive ions which may travel back
and reduce the life and efficacy of the photocathode. A positive ion
barrier is obtained by placing the input of the MCP at a lower beam
potential than the anode cone potential. A thin ion-barrier film could
also be deposited on the input face of the MCP. It could trap some
of the incoming electrons also, and prevent the re-entry of electrons
that rebound from the solid edges of MCP channels on the input
face. This tube has many advantages over the Generation-1
cascaded tube. It achieves the same order of lumen amplification
in a much smaller length and weight.
Figure 8.13. A sectional view through a Generation-2 tube
As there are no phosphor-
photocathode interfaces in Generation-2 tubes as in Generation-1,
image smear of moving objects is avoided. Further, considerations
of phosphor graininess matter less, as only one phosphor surface
is involved instead of three in the case of Generation-1. However,
the resolution and noise
characteristics of the MCP become more important considerations
in the case of a Generation-2 or more advanced tubes utilizing MCPs.
A rugged wraparound voltage stabilized power supply powered by
a high duty 2.7 V battery is usually used to give appropriate voltage
to all the electrodes, i.e., cathode, anode cone, distortion correction
ring and the screen. Circuitry also ensures flash suppression, bright
source protection and automatic brightness control, to enable good
image transmission. Battlefield illuminants like gun flashes,
explosives, fires, etc., thus do not seriously disturb the vision.
The confinement of a bright illuminant to a few channels of the
MCP also prevents a smear across the whole field. Successful
night vision devices for low light vision have been optimised using
Generation-2 tubes which are almost distortion free and have a long
operational life. Dependent on the ultimate use, the system may have
both a large aperture and a fast aperture-ratio to make vision possible
at stipulated ranges and fields of view with high quantum efficiency
photocathodes and high luminous gains available in Generation-2
tubes.
It is also obvious that the Generation-2 tube is an inverter,
like a single unit of Generation-1 or 3 units of Generation-1 or
Generation-0 and hence systems utilizing these tubes do not require
any optical erector system. The inverted image on the photocathode
becomes erect on the phosphor and can hence be directly viewed
through an eyepiece. The very nature of focusing by an electron-
optic lens inverts the image, as is done by an optical objective.
Nonetheless, the Generation-2 tube may be named the electrostatic
image-inverting Generation-2 image intensifier tube to emphasize its
difference from the proximity-focused wafer tubes.
8.5.4 Generation-2 Wafer Tube
Proximity imaging on the phosphor was tried much
earlier with a view to develop the simplest types of image converters
(Generation-0 type). Thus if a photocathode and a screen (phosphor)
are placed parallel to each other in a uniform field inside a vacuum
envelope, a unit magnification image should be possible. It was
shown that if the initial velocity of electrons leaving the
photocathode was represented by the potential U0 and Uac was the
accelerating potential for these electrons, then in a uniform
electrostatic field, the diameter D of the scattering circle of the
impinging electrons is given by

D = 4l √(Uo/Uac)   (8.17)

where l is the distance between the photocathode and the phosphor.
With practical designs this value of blur circle could not give a
resolution better than 10 line-pairs per mm. Further, the back-
scattered light from the screen (phosphor) would illuminate the
photocathode and add to the background noise. A diaphragm would
not help either: as the electrons move along the lines of force of
the uniform field, it could only reduce the usable screen area to the
size of the hole in the diaphragm. A thin film of alumina of proper thickness
on the phosphor could somewhat decrease the back scatter, if it
permitted the forward movement of electrons and prevented the
backward movement of the quanta of light. Nonetheless, the
resolution was far too low to permit any devices to be built around
such a system. The technological development of the micro-channel
plate drastically improved the situation, as now the electrons could
not only be confined within a channel, but also used to release
secondary electrons, improving the gain while retaining the benefit
of a uniform accelerating field. The resolution would depend on the
design of the MCP. It simply meant that the MCP could be now
sandwiched between a photocathode and a phosphor in a vacuum
envelope and an appropriate potential applied across its input and
output faces. The electron defocusing as these emanated from the
photocathode could be minimised by having as small a gap as
possible between the photocathode and the MCP, usually of the order
of 0.2 mm or less. The output from the screen could be taken out
via a fibre-optics faceplate. In such a system, the resolution could
be of the order of 30 line pairs per mm or better, but the
photocathode image would not be inverted as in an electrostatic lens
system used in Generation-0, Generation-1, and Generation-2.
Incorporation of a fibre-optics twister (para 8.2.4) overcame this
problem also, by providing an integrated inverting system within the
vacuum envelope itself. Such tubes are now referred to as Generation-2
wafer tubes, as these employ the MCP as in Generation-2 I.I. inverter
tubes (Fig. 8.13).
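Eqn 8.17 makes clear why the photocathode-MCP gap must be kept small; a quick sketch (the initial-energy and accelerating-potential values are assumed for illustration, and the resolution estimate is a crude one-line-pair-per-two-blur-diameters limit):

```python
import math

def blur_circle(gap, u0, uac):
    """Scattering-circle diameter in proximity focusing: D = 4*l*sqrt(U0/Uac) (Eqn 8.17).

    gap is the photocathode-phosphor (or photocathode-MCP) spacing; the result
    has the same length unit as gap.
    """
    return 4.0 * gap * math.sqrt(u0 / uac)

# Assumed values: initial electron energy ~0.2 eV (U0 = 0.2 V), Uac = 5 kV.
for gap_mm in (1.0, 0.2):
    d_mm = blur_circle(gap_mm, 0.2, 5000.0)
    limit = 1.0 / (2.0 * d_mm)          # crude resolution limit in line-pairs/mm
    print(f"gap {gap_mm} mm -> blur {d_mm * 1000:.1f} um, ~{limit:.0f} lp/mm")
```

Shrinking the gap from 1 mm to 0.2 mm shrinks the blur circle by the same factor of five, which is why wafer tubes keep the photocathode-MCP spacing at about 0.2 mm or less.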
Unlike the electrostatic image inverting Generation-2
image intensifiers which are available in many sizes usually with
input-output faces as 50/40, 25/25 and 18/18 mm, the wafer
tubes are generally confined to 18/18 mm size (Fig. 8.14). These
result in highly compact and lightweight designs, most suitable
for the design of night vision goggles and night-sights for small arms.
Figure 8.14. A schematic view of a Generation-2 wafer tube
These have found a great application area in avionics also. Their
freedom from distortion and uniform resolution over the entire
picture area make them more suitable for biocular or binocular
applications. These tubes are also referred to as double proximity
focused wafer tubes, because the image transfer through the MCP,
which is in proximity both to the photocathode and the screen, takes
place immersed in a uniform axial field.
8.5.6 Generation-3 Image Intensifier Tubes
Image intensifier tubes utilizing Generation-3 photocathodes,
i.e., NEA photocathodes such as caesiated GaAs, and improved MCPs
are generally referred to as Generation-3 I.I. tubes. Their sensitivity
at much lower light levels makes them eminently suitable for
incorporation in low light level systems, particularly for night vision
goggles. The caesiated gallium arsenide (GaAs) photocathode, the
photocathode of choice for Generation-3 tubes, is an excellent
compromise between low dark current and good infrared detection. The
photon rate is around five to seven times greater in the region 800-
900 nm than in the visible region, say around 500 nm. However, this
photocathode requires protection from bombardment by gas-ions
released from the channels of the MCPs, as otherwise it would get
rapidly destroyed. To avoid this effect, a thin ion-barrier film may be
deposited on the entrance face of the MCP to trap gas-ions (Fig. 8.6).
This film may however trap some of the incoming electrons also. A
very high level of vacuum in the tube during processing would also

help to limit the ion-feedback damage to the photocathode.
Figure 8.15. A schematic view of a Generation-3 wafer tube
MCPs
may also be required to improve on their open area ratio by using
manufacturing techniques which make the channel input area larger
by funnel shaping their input ends. Improvements in MTF and
resolution are also possible by reducing the channel diameter. The
quality of MCPs and the stability of Generation-3 photocathodes become
important factors in pushing Generation-3 I.I. tubes down to operation
at 10⁻⁴ lux or a still lower value. Manufacturing accuracy and controls
become very significant. Thus, the production of the Generation-3
image-inverter type on a regular, cost-effective basis is still not that
frequent. Generation-3 wafer tubes, however, are now in regular
use. These tubes have a close similarity to Generation-2 wafer tubes
except for the type of photocathode used, and possibly an improved
MCP (Fig. 8.15). Coatings in use are relevant to the type of
photocathode. A fibre-optics twister to erect the image may also be
incorporated, as in Generation-2 [5,6].
8.5.7 Hybrid Tubes
According to some manufacturers, stability of a
Generation-3 photocathode and thus the utilization of its sensitivity
by a noise-limited MCP continue to be difficult. The alternative may
be to prefer more robust photocathodes of the multi-alkali type
coupled to low-noise MCPs. The ion trap (Fig. 8.6) needed at the
input face of the MCP, to restrict gas-ions from reaching a
Generation-3 photocathode and deteriorating it, does reduce the
total number of electrons entering the micro-channels, as it also
blocks the re-entry of electrons rebounding from the solid edges of
the MCP channels on the input face. As the total performance
depends on the amplification stage (MCP), the photocathode and
the screen, improvements are more readily obtained with an improved
MCP and a stable, advanced Generation-2 photocathode, where the
technology is in better control. Some manufacturers refer to these tubes as
super Generation tubes.
Likewise a Generation-1 inverter tube could be coupled
to a Generation-2 or Generation-3 wafer tube and enable a better
performance. Such tubes have been referred to as super inverters.
It is also possible to gate I.I. tubes for special applications
wherein the functions of an intensifier are coupled to a fast electro-
optical shutter. Fast gating is used in range-gating, fast
spectroscopy and in some special areas of plasma and nuclear
physics.
Image intensifiers, particularly the wafer types (for
compactness) can be coupled to area-array charge coupled devices
(CCDs). The coupling is suitably effected through a fibre-optics
element, which suitably demagnifies the output size from the
intensifier to that of the CCD. Many such couplings both internally
and externally between different types of electron-optical tubes and
image intensifiers have resulted in a number of interesting
instrument systems. Some successful ones relate to the development
of low light level (night vision) television systems.
Image intensifier tubes have been successfully combined
with silicon self-scanning array systems, resulting in suitable night
vision cameras and applications in many other fields. The self-
scanning array may be a charge coupled device (CCD), a charge
injection device (CID) or a photodiode array (PDA). Self-scanning
array cameras, though usable independently, need image intensifiers
to provide low-noise optical amplification and a good signal-to-noise
ratio, either for very low exposure applications or for operation at
very low light levels, say below 0.5 lux, the usual minimum-illumination
limit of silicon self-scanning-array TV cameras with frame rates of
the order of 1/30 to 1/25 of a second.
second. Other important applications arise because of the ability to
electronically shutter image intensifiers as fast as 1 ns or less or
utilizing the higher sensitivity of the intensifiers in certain spectral
regions. Suitably coupled self-scanning arrays with image intensifiers
have resulted in a large number of applications be it for spectral
analysis, range gating or other application of high speed optical
framing cameras, military cameras, night time surveillance, and
astronomy. The systems so coupled are well designed for low image
distortion, linear operation, and robustness. Usually, coupling is done
with Generation-2 and Generation-3 proximity I.I. tubes. It is also
possible to operate two or three micro-channel plates in face-to-face
contact to achieve high electron gains in an I.I. tube. While the
electron gain could be more than doubled by such means, the
resolution would tend to fall, almost getting halved.
Such systems, i.e., image intensifiers coupled to silicon
self-scanning array systems or CCDs, either optically or preferably
through fibre-optics, have also been used for active imaging. A
narrow beam CW laser raster-scans across the object scene and
its reflections are displayed by the system on a video monitor for
direct viewing. The system can be operated in atmosphere, under
water or in space. The advantage lies in scanning large fields of
view over very short periods of time. A more interesting method
employs a laser pulse of only a few nanoseconds synchronized with
a gated I.I. based CCD so that only reflected pulses corresponding
134 An Introduction to Night Vision Technology

to a certain distance alone are received. Thus, the imagery is both


free of noise contributed by the intervening medium and well
focused for the stipulated time or distance. This enables the exact
range also to be known. The method is referred to as range-gating.
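The timing arithmetic behind range-gating is simple: the gate is opened at the round-trip delay 2R/c, and the gate width sets the depth of the imaged slice. A sketch (the pulse and gate values are assumed for illustration):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_delay_for_range(range_m):
    """Round-trip delay after laser pulse emission at which the gate must open."""
    return 2.0 * range_m / C

def range_for_gate(delay_s):
    """Range (or range depth) corresponding to a round-trip delay of delay_s."""
    return C * delay_s / 2.0

# Example: to image a slice around 1 km, open the gate ~6.67 us after the pulse;
# a 5 ns gate width then images a slice only ~0.75 m deep.
delay = gate_delay_for_range(1000.0)
depth = range_for_gate(5e-9)
print(f"gate delay = {delay * 1e6:.2f} us, slice depth ~ {depth:.2f} m")
```

The same relation, read backwards, is how the exact range becomes known: the delay at which the reflection appears gives R = c·t/2.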
Another interesting application using such systems could
be day-cum-night cameras. This could be done by introducing an
auto-iris camera lens to control the effective aperture of the optical
system and by controlling the MCP gain. Thirteen orders of
luminance magnitude are known to have been automatically
covered by such cameras. Such systems have also been used in
spectroscopy to measure optical radiators in linear patterns.
Conventional methods of measurement, i.e., photographic films,
single-channel photomultiplier tubes or TV camera tubes, have given
way to image-intensified charge-coupled devices. Spectra are
acquired up to 1000 times faster and/or with better signal-to-noise
ratio during a given measuring period. Applications are for Raman
spectroscopy, multiple input spectroscopy and small angle light
scattering.
8.6 PERFORMANCE OF IMAGE INTENSIFIER TUBES
A number of manufacturers internationally produce I.I.
tubes mostly for incorporation in night vision devices for different
applications that might have been designed or produced by the same
set of manufacturers or others. The acceptance of these devices is
done through standard specifications which might have been laid
for each type of instrument and the tube. Besides the optical and
electro-optical performance, the I.I. tubes have to be environmentally
stable, withstand extremes of climates, be of minimal size and weight
and be cost-effective for the application in view. The important
parameters of optical and electro-optical significance are (i) signal-to-noise
ratio, (ii) modulation transfer function (resolution and
contrast), (iii) output brightness and its uniformity, (iv) automatic
brightness control, and (v) its life. Factors like image shift, image
alignment, and equivalent background illumination are also of
concern in the tube as a whole. Besides these and simulator tests,
evaluation has also to be done for independent testing of
photocathode and phosphor sensitivity and verification of electric
stability[7,8].
8.6.1 Signal-to-Noise Ratio
At very low levels of illumination, the statistical variation
in the photon stream becomes more dominant and this results in
quantum noise for an elementary image area depending on the
number of photons received by it. When such a stream is incident
on a photocathode, the resolution characteristics of the tube are
limited primarily by the number of photoelectrons that have been
emitted as also their statistical distribution. Because of this, the
intensified image of a discrete object may not be recognizable as it
could be broken into an assortment of scintillations for that order
of resolution which is limited by the statistical considerations of
the photoelectrons received on the phosphor, even when integration
over the storage time of the eye in relation to phosphor characteristics
may be a little helpful. In practice this limit of resolution would be
further limited because of the photocathode quantum detection
efficiency, photocathode noise current, statistical distribution of
photoelectrons after multiplication in an MCP or the tube noise
factor – an overall measure. This low light level resolution is
proportional to the signal-to-noise ratio which can be defined as
the ratio of the ‘dc signal to the rms value’ in the output beam.
For precise comparisons and evaluation, these measurements will
have to be done at a specified very low light level input which may
correspond to a starlight or overcast sky over an area which may
be of the order of a pinhole. As the I.I. tubes may be active even
when no light is incident, these measurements require to be
modified suitably. Thus the signal-to-noise ratio (S/N) may be
defined as
S/N = (So - Sb)/(No² - Nb²)^1/2          (8.18)
where
So = dc signal output when the tube is illuminated at
the specified level of illumination
No = rms noise output at the same specified level of
illumination
Sb = dc signal when there is no input light on the I.I.
tube, i.e., the background signal
Nb = rms noise when there is no input light on the I.I.
tube, i.e., the background noise
A constant of proportionality K is also introduced which
is dependent on the phosphor decay characteristics and involves a
correction factor to obtain a signal-to-noise ratio over an equivalent
bandwidth of 10 Hz independent of the frequency response of the
assembly. The equation is thus rewritten as
S/N = (1/K)·(So - Sb)/(No² - Nb²)^1/2          (8.19)
Measurements are done with special test equipment which
utilizes low dark-current photomultiplier tubes and is able to
measure the dc and rms values over an electronic bandwidth of
10 Hz. Thus, S/N ratios of the order of 3:1 or better may be achieved
when illuminating an area of the order of 0.2 mm dia on a
photocathode at illumination levels of the order of 1.2 × 10⁻⁴
footcandles. Low sensitivity of the photocathode, excessive MCP
voltage, a poor open-area ratio of the MCP, ion-feedback effects,
and a poor detection efficiency of the phosphor and similar defects
can all lead to a poor S/N ratio[8].
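The measurement of Eqns 8.18 and 8.19 can be sketched numerically. A minimal sketch follows; the photomultiplier readings and the value of K are hypothetical, not taken from any specification:

```python
import math

def tube_snr(S_o, N_o, S_b, N_b, K=1.0):
    """Signal-to-noise ratio of an I.I. tube per Eqn 8.19.

    S_o, N_o : dc signal and rms noise with the photocathode
               illuminated at the specified low light level.
    S_b, N_b : dc signal and rms noise with no input light
               (background signal and noise).
    K        : correction constant for the phosphor decay
               characteristics (K = 1 reduces this to Eqn 8.18).
    """
    return (S_o - S_b) / (K * math.sqrt(N_o**2 - N_b**2))

# Hypothetical readings in arbitrary units:
snr = tube_snr(S_o=4.2, N_o=1.3, S_b=0.4, N_b=0.5)
print(f"S/N = {snr:.2f}")
```

Subtracting the background signal and removing the background noise in quadrature, as the sketch does, is what lets the measurement remain meaningful for a tube that is active even with no incident light.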
8.6.2 Consideration of Modulation Transfer Function
(MTF)
Assuming an I.I. tube to be a linear system, the total
MTF of an I.I. tube would be a multiplication of the MTF values of
its components, i.e., the photocathode, electron-optics, MCP, the
screen (phosphor) and the fibre faceplates[3]. Thus, the overall MTF
of an I.I. tube can be written as
MTF (overall of I.I. tube) = MTF (fibre-optics input faceplate) ×
MTF (photocathode) × MTF (electrostatic lens system) ×
MTF (MCP) × MTF (screen) ×
MTF (fibre-optics output faceplate)          (8.20)

MTF deterioration due to fibre faceplates and the
photocathode is relatively insignificant as the centre-to-centre
distance in these fibres is of the order of 5 µm. This may not be so
true for fibre-optics image inverters where the centre-to-centre
distance may be of the order of 10 µm. The MTF of the electrostatic
lens system can be considerably improved, as already stated by
curving the input and output surfaces which is very practicable
with fibre-optics faceplates. Where an output from the electrostatic
inverter tube is desired to be focused onto the plane face of the
input to a MCP, freedom from distortion is obtained by introducing
field-flattener electrodes. Thus, while the electrostatic system is
responsible for a little reduction in the overall MTF, it still is not
the limiting parameter. Keeping this in view one can say that the
MTF of the Generation-1 single stage tube is limited by the MTF of
the screen (phosphor). As the electrostatic lens system and the
photocathode have almost an MTF value of unity, it is obvious that
a good phosphor efficiency is an essential requirement. It is well
known that the detection efficiency of these screens can vary from
50 to 90 per cent depending on the manufacturing process. Hence,
we have MTF for Generation-1 single tube mainly limited by MTF
for the screen, i.e.,
MTF (Generation-1, single tube) limited by MTF (Screen)          (8.21)
and in the three-stage version we have
MTF (Generation-1, three-stage) limited by [MTF (Screen)]³          (8.22)
In Generation-2 inverter tubes in addition to a similar
limitation as on a Generation-1, we have more restrictive limitations
due to MCP. The limitation due to MCP will be both due to its
physical configuration as also dependent on its gain-parameters
(para 8.2.3). Thus, we have
MTF (Inverter tube with MCP) limited by MTF (MCP) × MTF (Screen)          (8.23)
This would apply to Generation-3 and Generation-2
proximity tubes, except that MTF (electrostatic lens system) would
not be relevant in this case.
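The multiplicative cascade of Eqn 8.20 can be illustrated with a short sketch. The Gaussian MTF model and the 50 per cent frequencies assigned to each component below are assumptions for illustration only, not measured data for any real tube:

```python
import math

def gaussian_mtf(freq_lp_mm, f50_lp_mm):
    """A simple Gaussian MTF model giving 50 per cent response
    at the frequency f50_lp_mm (an assumed component model)."""
    return math.exp(-math.log(2.0) * (freq_lp_mm / f50_lp_mm) ** 2)

def overall_mtf(freq, f50s):
    """Eqn 8.20: the cascade MTF is the product of component MTFs."""
    m = 1.0
    for f50 in f50s:
        m *= gaussian_mtf(freq, f50)
    return m

# Hypothetical 50%-frequencies (lp/mm) for input faceplate,
# photocathode, electrostatic lens, MCP and screen:
components = [120.0, 150.0, 80.0, 30.0, 25.0]
for freq in (2.5, 7.5, 15.0):
    print(f"{freq:5.1f} lp/mm -> overall MTF = {overall_mtf(freq, components):.2f}")
```

Because each factor is at most unity, the cascade can never exceed its weakest component, which is why the MCP and the screen dominate the overall response as the text states.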
Standard methods of measurement are in use at specified
low light levels, which may be of the order of cloudy moonless nights,
usually at the centre of the image screen. The normalisation may
be with respect to a spatial frequency of a low order, say 0.2 line
pairs per mm. The specifications would then lay down the
acceptance values at increasing lp/mm, with obviously decreasing
percentage values. Thus, it may state acceptance values of 25, 60
and 90 per cent at 15 lp/mm, 7.5 lp/mm and 2.5 lp/mm, respectively.
Notwithstanding these MTF measurements for the tube as a whole,
criteria need to be laid down for centre and peripheral resolution
also at low light levels, where the variation should be minimal and
resolution in excess of 30 lp/mm. Suitable tests are also designed
to check that the resolution does not fall off seriously at higher
light levels and obliterate vision of low light level objects in the
neighbourhood of a relatively intense object. This could be also an
indirect measure of veiling glare.
8.6.3 Luminous Gain & E.B.I
Exact procedures are laid down to measure luminous
gain at various luminous input levels and also evaluate the
equivalent background illumination (EBI) at room temperatures and
may be at stipulated high and low temperatures to satisfy military
requirements. It may be noted that EBI is an optical measure
related to the minimal brightness level of the photocathode
corresponding to a light level around a decade or so below an
overcast sky in relation to when there is no incident light on the
I.I. tube. In a way it is a measure of the optical dark current. Thus,
if Ip is the dark current of a sensitive photomultiplier, Io the current
in the photomultiplier through the intensifier when there is no light
on the photocathode, IB the photomultiplier current due to the
brightness of the intensifier at a very low level corresponding to a
decade or so less than the overcast sky, say 2 × 10⁻¹¹ lumens/cm²,
we have
Io I p
EBI =
I B Io

= 2 10-11 lumens/cm2  (8.24)

Obviously, IB should be significantly greater than Io so
that the tube performs reasonably well at higher levels of
illumination. The value of the fraction (Io - Ip)/(IB - Io) has been
put at one or less than one in some specifications[8].
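Equation 8.24 reduces to a one-line computation once the three photomultiplier currents are measured. The current values below are hypothetical:

```python
REF_LEVEL = 2e-11  # lumens/cm^2, a decade or so below overcast sky

def ebi(I_o, I_p, I_B, ref=REF_LEVEL):
    """Equivalent background illumination per Eqn 8.24.

    I_p : dark current of the measuring photomultiplier
    I_o : photomultiplier current through the intensifier
          with no light on the photocathode
    I_B : photomultiplier current with the reference low-level input
    All three currents must be in the same (arbitrary) units.
    """
    return (I_o - I_p) / (I_B - I_o) * ref

# Hypothetical currents (nA); the ratio (Io-Ip)/(IB-Io) should be <= 1:
value = ebi(I_o=0.8, I_p=0.3, I_B=2.0)
print(f"EBI = {value:.2e} lumens/cm^2")
```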
8.6.4 Other Parameters
There are many more parameters that are required to
be tested particularly in relation to the prolonged use of an I.I. tube.
Thus, automatic brightness control and freedom from damage on
exposure to brighter sources, electrical stability, surface properties
of the photocathode and the phosphor, and the like are required
to be appropriately tested. Likewise, tests have been devised for
photocathode stability, as also the stability of the tube over a wide
temperature range. Environmental stability has also to be ensured
to a high degree of reliability not unlike other equipment of a
sophisticated nature used by the military.
8.6.5 A Note on Production of Image Intensifier Tubes
In view of all the requirements discussed above, it is
apparent that the production of I.I. tubes has to be carried out
under a strict control both while selecting suitable materials for
component making and in assembly. Though it is not proposed to
go into the details of manufacture it is obvious that all the
considerations that are applicable to the high order vacuum tubes
are of great relevance to these tubes also, apart from considerations
for deposition of the photocathodes in vacuum, application of
phosphor and integration of input and output fibre-optics windows
as also the MCPs. Usually, the fibre-optics components and the
MCPs are not made in-house but purchased from other sources or
a subsidiary unit. The design and assembly have to ensure that the
photocathodes do not get poisoned and further avoid burnouts in
any of the sensitive surfaces. The details of the manufacture are
considered to be trade secrets even when these methods differ from
one production centre to another.

Figure 8.16. A view of module assembly room

Figure 8.17. Cathode processing station

The other area of specialisation
in production was the wrap-around power supply which took
some time to evolve before becoming a routine fitment. Likewise,
the assembly also has to be implemented under very strict
environmental conditions. Thus, one manufacturer reports that the
modular assembly is carried out in class 100 laminar flow clean
air tables housed in class 10,000 environment (Fig. 8.16). The multi-
alkali photocathode is processed under ultra high vacuum of the
order of 10⁻⁹ mm of Hg. The vacuum system is an all stainless
steel chamber with vacuum manipulators employing cryo pumps
(Fig. 8.17).

Figure 8.18. Assembly of I.I. tubes: 18 mm and 25 mm bare tube
modules, high voltage power supply units and finished goods

As ultra high vacuum techniques and photocathode
deposition techniques are more or less well established, the chain
of production of I.I. tubes and their testing does give a high rate of
acceptance unlike what happens with quantum detectors
particularly in the linear or matrix form for night vision in the
thermal region of 8-12 µm. The I.I. systems thus continue to be
cost-effective for a large number of applications. Finally, Fig. 8.18
shows the relatively simpler subassembly schemes of 25 mm and
18 mm I.I. tubes. The upper row in the photograph shows the
subassemblies that go to form the complete 25 mm I.I. tubes, while the
lower row shows a similar layout that forms the 18 mm I.I. tube. The
first column shows the wrap-around power assemblies for 25 mm
and 18 mm I.I. tubes, respectively.
REFERENCES
1. Biberman, L.M. & Nudelman, S., (Eds). Photoelectronic Imaging
   Devices. Vol. 1 & 2. (Plenum Press, 1971).
2. Kapany, N.S. Fiber Optics: Principles and Applications.
   (Academic Press).
3. Kingslake, R. & Thompson, J.B. (Eds). Applied Optics & Optical
   Engineering. Vol. 6, Chap. 10. (Academic Press, 1980).
4. Zhigarev, A. Electron Optics and Electron-beam Devices.
   (Moscow: MIR Publishers, 1975).
5. Csorba, P.I. Current Status and Performance Characteristic of
   Night Vision Aids, in Opto-Electronic Imaging. (New Delhi: Tata
   McGraw Hill Publishing Co. Ltd., 1987).
6. Girad, P.; Beauvais, Y. & Groot, P.D. Night Vision with
   Generation-3 Image Intensifiers, in Opto-Electronic Imaging.
   (New Delhi: Tata McGraw Hill Publishing Co. Ltd., 1987).
7. Cochrane, J.A. & Guest, L.K.V. Image Intensifier Design
   Technologies, in Opto-Electronic Imaging. (New Delhi: Tata
   McGraw Hill Publishing Co. Ltd., 1987).
8. Image Intensifier Assembly, 25 mm, Micro-channel Inverter,
   MIL-I-49040E. (Military Specification, 29.5.92).
CHAPTER 9

NIGHT VISION INSTRUMENTATION

9.1 INTRODUCTION
The image intensifier (I.I.) based instrument systems
developed so far have been of significant use in night time
observation and navigation, primarily on land and from helicopters.
The need for night time use to direct fire on enemy targets by the
infantry, artillery and the armoured corps has resulted in a series
of instruments for each specific application. It is therefore obvious
that the instrument systems are likely to have optical
characteristics like the field of view, magnification etc., similar to
those in use during daylight for observation, navigation and fire
control. Reticles would also be required to be introduced for proper
laying and engagement and thus match the weapon capabilities
as accurately as possible. The methods of mounting on or in the
weapon system are also of great concern. Besides, like all other
types of military instruments these instruments have to withstand
climatic and environmental tests as may be laid down for
instruments in the daylight category for a given weapon system.
These requirements would be both for use and in storage. The
criteria for acceptance of I.I. tubes laid down in military
specifications, as also for the acceptance of I.I. based instrument
systems, include these aspects in detail.
Further, as we are aware of the limitations of the human
eye, environment and night conditions, technological aspects get
mainly augmented by optical considerations and the I.I. tubes.
Image intensifier tubes in turn are dependent on photocathodes,
electron amplification and phosphors. The instrument system
as a whole is therefore an integrated multiple of all the above factors.
Nonetheless, the success of an instrument would depend on overall
considerations intended for the satisfaction of a user. Apart from
field of view, magnification and the mechanical limitations that it
may have to satisfy, it is obvious that the user would be interested
in the distance that such a system can see during the night, i.e.,
the night range. This is an important parameter of a system which
cannot be predicted by any single subcomponent and will also be
dependent on the night time conditions. Theoretical and
experimental prediction about the night range is therefore an
important parametric requirement.
9.2 RANGE EQUATION
Various paradigms have been developed from time
to time to arrive at a possible range value during night time.
One such paradigm[1] has been explored here more to illustrate
the factors on which range is dependent and to indicate the
possibilities for optimisation. This theoretical approach gives a
reasonable basis but it is still necessary to evaluate a given system
under standard night time conditions so that the range in the
field can be more or less estimated to a reasonable degree of
accuracy. It would still be an estimate even when tried out in
practice in the field as the field conditions are not likely to remain
standard all through the measurements.
If we take an object dimension of Z m at a night range
of R m and assume that N line-pairs at spatial frequency Ak in
line-pairs per mm are required to detect it at the photocathode
we have,

Z/R = (N/Ak)/F   (in metres)          (9.1)
where F is the focal length of the objective in mm.
This relationship though geometrically true requires to
be investigated further for the practical value that R can attain for
a given night vision system.
If we now concentrate on a detail of area a, corresponding to the
object detail Z², in the image at the photocathode as the minimum
area of detection and assume a bar chart as an object in both the
x and y planes of the image, where N/Ak is the minimum resolution
in each direction, i.e., x and y, we have in a rotationally symmetric
optical system:

a = (N/Ak)·(N/Ak)·10⁻⁶ m²

or

a = (N/Ak)²·10⁻⁶ m²          (9.2)
Assuming a photon flux of n1 photons from the object
per sq. m on the detail then over an integration time of t seconds
for the system as a whole, we have the number of photons per
integration time incident in this area as

n1at          (9.3)

and further assuming a photon flux of n2 photons on this area, for
the same integration time from the background, we have the total
number of photons incident per integration time as

n2at          (9.4)

From the above the signal, S, can be defined as

S = n1at - n2at          (9.5)

and the noise N as the quadrature sum of photons in the detail
and the background, i.e.,

N = (n1at + n2at)^1/2          (9.6)

due to photon fluctuations both in the detail and background. Thus,
the signal-to-noise ratio (S/N) is given by

(S/N)² = (n1 - n2)²·at/(n1 + n2)
       = C²·(n1 + n2)·at
       = 2C²·n·at          (9.7)

where C, the contrast, is defined by (n1 - n2)/(n1 + n2), and n is the
average of the signal and background photon rates, i.e.,
n = (n1 + n2)/2. The de Vries-Rose law then states that the detail can
be resolved if S/N is greater than a value p, which is a constant
dependent on the type of the scene and the state of the eye, and thus
a factor related to perceptibility, i.e., 2C²·n·at ≥ p², or in the limiting
case

2C²·n·at = p²          (9.8)

For a bar pattern this value can be taken as 1.1 according to
some authors.
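The criterion of Eqns 9.5 to 9.8 can be sketched as follows. The photon rates, detail area and integration time below are assumed values chosen only to exercise the inequality, not data from the text:

```python
def resolvable(n1, n2, a, t, p=1.1):
    """de Vries-Rose criterion of Eqn 9.8: the detail is taken as
    resolvable when 2*C^2*n*a*t >= p^2.

    n1, n2 : photon rates (photons/s/m^2) from detail and background
    a      : detail area (m^2);  t : integration time (s)
    p      : perceptibility constant (1.1 for a bar pattern)
    """
    C = (n1 - n2) / (n1 + n2)          # quantum contrast, Eqn 9.7
    n = 0.5 * (n1 + n2)                # average photon rate
    snr_sq = 2.0 * C**2 * n * a * t    # (S/N)^2 of Eqn 9.7
    return snr_sq >= p**2

# Hypothetical low-light rates over an assumed 5 um x 5 um detail:
a = (5e-6) ** 2
print(resolvable(n1=4.0e12, n2=2.0e12, a=a, t=0.1))
```

Shortening the integration time or lowering the contrast in this sketch tips the inequality the other way, which is exactly the photon-limited behaviour the law describes.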
Now, if the mean illuminance at the photocathode is εc lux
and the photocathode sensitivity is Kp in µA/lumen, the current
generated then is εc·Kp microamperes per square metre.
The number of photoelectrons emitted therefore is
εc·Kp/e   electrons/s/m²          (9.9)

where e is the electron charge in C; with Kp expressed in µA/lumen,
the working value becomes e = 1.60 × 10⁻¹³ (the charge
1.60 × 10⁻¹⁹ C combined with the µA-to-A conversion factor of 10⁻⁶).
Further, if the noise power factor of an I.I. tube is defined by

f = [(signal-to-noise ratio of the photoelectrons)/(signal-to-noise ratio of the output scintillations)]²          (9.10)
the effective number of photons available for detection is

εc·Kp/(e·f)   per s/m²          (9.11)

This factor f would incorporate the efficacy of the MCP and
the phosphor screen, apart from other factors such as that
contributed by the electron-optics of the tube.
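Equations 9.9 to 9.11 amount to a simple rate computation. The illuminance, sensitivity and noise-factor values below are assumptions for illustration:

```python
E_UA = 1.60e-13  # electron charge folded with the uA-to-A conversion

def effective_photon_rate(eps_c_lux, K_p_uA_per_lm, f_noise):
    """Eqns 9.9-9.11: photoelectrons emitted per second per m^2,
    derated by the tube noise power factor f of Eqn 9.10.

    eps_c_lux     : mean photocathode illuminance (lux)
    K_p_uA_per_lm : photocathode sensitivity (uA/lumen)
    f_noise       : noise power factor of the tube
    """
    return eps_c_lux * K_p_uA_per_lm / (E_UA * f_noise)

# Hypothetical: 1e-4 lux on a 500 uA/lm photocathode, noise factor 2
rate = effective_photon_rate(1e-4, 500.0, 2.0)
print(f"{rate:.3e} effective photons/s/m^2")
```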
Substituting this value for n in Eqn 9.8, we have

2C²·εc·Kp·at/(e·f) = p²

and further substituting for a from Eqn 9.2, we have

εc = A·(f/(C²·Kp·t))·(Ak/N)²          (9.12)

where A = (e·p²/2)·10⁶ is a constant.
Following Eqn 8.16, εc has a relationship with the object
luminance εo. The photocathode illuminance is given by

εc = 0.25·εo·(D/F)²·τ·ρ   to a first approximation
   = 0.25·εo·(F number)⁻²·τ·ρ          (9.13)

where D, F, and τ have the same meaning as in para 8.4 of chapter 8,
i.e., diameter of the objective D, focal length of the objective F, and
τ the transmission factor through the atmosphere and the objective
lens system. ρ further adds a factor indicating the reflectivity of the
object scene.
Combining Eqns 9.12 and 9.13, we have

A·(f/(C²·Kp·t))·(Ak/N)² = 0.25·εo·(F number)⁻²·τ·ρ

Substituting for (Ak/N)² from Eqn 9.1, we have

A·(f/(C²·Kp·t))·(R/ZF)² = 0.25·εo·(F number)⁻²·τ·ρ

i.e.,

R² = A*·Z²·εo·(F number)⁻²·F²·C²·(Kp/f)·τ·ρ·t          (9.14)

where A* = 0.25/A and basically takes into account the perceptibility
factor p, the electron charge e and other numerical values, taking
care that the resultant value of R is in metres. Before one makes
use of this equation, it would be better to interpret C the quantum
contrast in terms of the modulation transfer function of the night
vision system as a whole and of its constituents.
The quantum contrast can be directly interpreted in terms
of the normally defined contrast, i.e.,

(Imax - Imin)/(Imax + Imin)

Thus, while viewing a line-pair, Imax would signify the
average brightness in the white bar and Imin the average brightness
of the dark bar. Restricting now to a frequency Ak line-pairs/
mm, we could state that the object contrast Co has been modified
by the modulation transfer function (MTF), M of the total electro-
optical night vision system. While this may not be quite true for
square wave response, as M refers to sine wave response only, it
has been shown that for frequencies of normal interest, i.e., higher
than 2.5 line-pairs/mm the higher harmonics in the expansion
of a square wave in terms of a summation of sine waves do not
have greater significance. Further, as a night vision system can
be taken to be a linear optical system, the total MTF, i.e., M can
be obtained from the MTF value for the individual subunits of
such a system. Usually, as a night vision system consists of three
major subsystems in a cascading order, it can be stated that
M = Mo·Mi·Me          (9.15)
where, Mo is the MTF of the objective subsystem
Mi is the MTF of the I.I. tube
Me is the MTF of the eyepiece
All at a given frequency Ak.
Equation 9.14 can now be rewritten in the following form:


R 2  E .( Z 2 . o . . ) (F number )2 .F 2 .M o2 .M e2  K 2
p .M i / f .t (9.16)

where constant E now additionally takes into account the constant


of proportionality between the contrast values and the MTF.
The above equation is illustrative of the fact that the range
achieved in meters depends on the following factors:
(i) Constant E which takes into account the perceptibility factor,
electron charge and constants of numerical conversion from
sq. mm. to sq metres.

(ii) Factor: Z²·εo·τ·ρ

This factor could be considered as the object scene factor, as
it refers to Z the minimum detectable size of the object
scene, ρ the reflectivity of this scene and τ the transmission
factor through the atmosphere and the objective lens
system. As εo, ρ and τ are natural factors, one can only argue
about the Z value that may be necessary in practice for
detection of given objects, assuming that the transmission
factor through the optics has already been optimised.

(iii) Factor: F number  F 2 2


M o2 M e2 
This could be considered as the optical factor. Herein we find
that range is inversely proportional to the F-number and directly
proportional to the focal length. Thus range will be higher if the
F-number is faster and the focal length larger. This could also be interpreted
as that the diameter of the objective should be larger. However,
as considerations of systems design particularly their
requirements of compactness as also for field of view limit the
diameters to which one can go practically, it is obvious that
for a given diameter, the system should be as fast as possible,
compatible with the desired field of view requirements and
the values desired for the MTF.
The optical factor also indicates that for the spatial frequencies
of interest the values of MTF for the objective and the eyepiece
should be as close to unity as possible.
(iv) Factor: (Kp·Mi²/f)·t

This factor refers to the image intensifier function. The higher
range achieved provided the noise factor f does not increase
proportionately. The designers therefore try to balance these
two factors to achieve the best response that is possible. Low
noise MCPs have been used to improve on the Generation-2
tubes to enable such types to compete with Generation-3 tubes
(para 8.5.7).
As in the case of MTF values for the objective and eyepiece,
here also the attempt has to be made to improve on the MTF
value of an I.I. tube. Integration time t however is decided by
the phosphor time response which in no case should be more
than that of the eye to detect movement.
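The dependence of range on the four factors above can be sketched directly from Eqn 9.16. Every numerical value below is illustrative; in particular the constant E is simply set to unity rather than evaluated, so the result is meaningful only for comparing parameter changes, not as an absolute range:

```python
import math

def night_range(E, Z, eps_o, tau, rho, F_number, F_mm,
                M_o, M_e, M_i, K_p, f_noise, t):
    """Night range per Eqn 9.16 (units fixed by the constant E):
    R^2 = E*(Z^2*eps_o*tau*rho)*(F number)^-2*F^2
            *M_o^2*M_e^2*(K_p*M_i^2/f)*t
    """
    R_sq = (E * (Z**2 * eps_o * tau * rho)
            * F_number**-2 * F_mm**2
            * M_o**2 * M_e**2
            * (K_p * M_i**2 / f_noise) * t)
    return math.sqrt(R_sq)

# Illustrative figures: 2 m target, F/1.5, 100 mm objective,
# assumed MTF and tube values; E = 1 (not evaluated):
base = dict(E=1.0, Z=2.0, eps_o=1e-3, tau=0.7, rho=0.3,
            F_number=1.5, F_mm=100.0, M_o=0.9, M_e=0.9,
            M_i=0.5, K_p=500.0, f_noise=2.0, t=0.1)
R = night_range(**base)
# Same F-number with double the focal length (i.e., double the
# aperture diameter) doubles the predicted range:
R2 = night_range(**{**base, "F_mm": 200.0})
print(f"R = {R:.2f}, R(2F) = {R2:.2f}  (illustrative units)")
```

The sketch makes the scaling behaviour explicit: range grows linearly with aperture diameter and with the MCP MTF, and only as the square root of the integration time.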
Many authors have worked out similar relationships as
in Eqn 9.16, essentially trying to calculate range for different low
light level objects with varying atmosphere, optical and intensifier
characteristics. Some of these are also amenable to computer
programming. Nomographs have also been evolved.
One would now like to invite attention to our assumption,
stated earlier, that an object of dimension Z m is detectable
at the photocathode at a spatial frequency Ak with N line-pairs.
The value that N should have is not clear from the above equation
and rightly so, as this value is dependent on our ability to detect,
recognize and identify the object. Reference to Johnson criteria in
para 2.4 is therefore now relevant. As stated therein, the minimum
line-pairs for detection, orientation, recognition and identification
are 1.0, 1.4, 4.0 and 6.4 line-pairs respectively with tolerances as
indicated therein. Thus N has a value dependent on the task that
has to be performed.
Infantry, artillery, armoured corps and other wings of
the Armed Forces define their own target and target sizes in
relation to the range and accuracy of their weaponry. Hence the
value of N, i.e., the number of line-pairs at a particular spatial
frequency should be such as to be able to detect, recognize or
identify in accordance with Johnson’s criteria for the target-size
in question. For instance tank detection is usually decided around
the turret-size and assuming an accurate engagement at around
2 km range, one should be able to detect less than 2 metres at a
still larger range. This information can be correlated with the spatial
frequency and the number of lines that may achieve such an
objective and hence the minimal parameters of the night vision
system. Intensive work has been done at many places particularly
in the USA to arrive at the possibility of detection and recognition
as a function of system resolution across critical target dimension.
Thus, it has been worked out by one group of researchers that a
60 per cent probability for recognition exists for a vehicle if three
cycles of spatial information are resolved in the height of the
vehicle. Similarly, a comparable probability may be expected
for detecting a human being at around two cycles of spatial
information (Fig. 2.1).
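The Johnson criteria of para 2.4, combined with Eqn 9.1, give a quick way of estimating the smallest object dimension on which each task can be performed. The range, usable spatial frequency and focal length below are assumed figures, not values from the text:

```python
# Johnson criteria (para 2.4): minimum line-pairs across the
# critical target dimension for each visual task.
JOHNSON_N = {"detection": 1.0, "orientation": 1.4,
             "recognition": 4.0, "identification": 6.4}

def required_object_size(task, R_m, A_k_lp_mm, F_mm):
    """Invert Eqn 9.1, Z = R*(N/Ak)/F, to find the smallest object
    dimension Z (m) on which the task can be performed at range R,
    given the usable spatial frequency Ak at the photocathode."""
    N = JOHNSON_N[task]
    return R_m * (N / A_k_lp_mm) / F_mm

# E.g. assuming 2000 m range, 15 lp/mm usable, F = 100 mm:
for task in JOHNSON_N:
    Z = required_object_size(task, R_m=2000.0, A_k_lp_mm=15.0, F_mm=100.0)
    print(f"{task:15s}: Z = {Z:.2f} m")
```

As the table of criteria suggests, identification demands resolving an object more than six times smaller than mere detection does at the same range and resolution.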
9.3 EXPERIMENTAL LAB TESTING FOR RANGE
EVALUATION
While one can project the optimal ranges using Eqn 9.16
or approaches similar to it, the values obtained can only be
a good guide for a design effort. It would still be necessary to
experimentally lab-test such designs with artificially created night
vision scenes before operating these in the field. Workers in this
specialization have therefore evolved experimental methods for the
purpose. Each such research group or a quality control agency is
likely to work out the exact conditions for system evaluation based
on their own experiences. One such group in India preferred to utilize
a hall of around 300 feet in length and 24 feet in width properly
light-sealed for this purpose[3].
Their approach involved (i) study of illumination
characteristics of artificial sources and their matching to natural
levels of illumination during night, (ii) making of models/targets
with different reflectivities and contrasts which could be placed at
one end of the hall, and (iii) using the other end of the hall for
installing instruments to observe the models/targets. The details
were worked out in the following manner:
(a) Study of illumination characteristics of artificial sources and
their matching
The spectral distribution and radiant power of night sky was
studied and the distribution was standardised as shown in
Fig. 4.2 both for moonlight and starlight. Thereafter, efforts
were made to combine suitably attenuated low power tungsten
sources with appropriate filters such that the resultant
transmission had both the spectral and intensity distribution
corresponding to the night sky as shown in Fig. 4.2. A cluster
of three such lamps and filters with suitable apertures was
selected to produce reliable levels of illumination ranging from
clear moonlit night to overcast starlit night.

Figure 9.1. Three-lamp cluster as an equivalent to starlight

Figure 9.2. Three-lamp cluster as an equivalent to moonlight

Twenty-four such lamps were housed in an illuminator of a suitable design
painted inside with dull, white diffusing paint to give
uniformity of illumination over a given field of view. Ten such
illuminators were placed strategically at a suitable height
along the length and width of the hall at predetermined
places such that there could be uniform illumination in the
scene area as also its foreground. A selection of eight levels
of illumination was possible to correspond from overcast
starlight to full moonlight. While the first four levels
corresponded to starlight with varying amount of cloud-cover,
the next four levels represented increasing moonlight levels
with the phases of the moon. Figures 9.1 and 9.2 respectively
depict the illumination levels as obtained in this hall at the
target end in comparison with the standard illumination
levels for starlight and moonlight.
(b) Making of models/targets with different reflectivities and
contrasts for the scene-area
The approach here was two-fold. One was to utilize standard
established large size resolution charts like those based on
USAF 1951 charts with decreasing spatial frequencies at three
levels of contrast, i.e., high, medium, and low, to work out the
limits of the systems to be tested. The other was to prepare
models of likely objects, such as vehicles, tanks etc, for
placement in the scene area for observation from the other
end of the hall. The models subtended the same angle at the
observation point, i.e., at 300 feet as an actual object would
have subtended at a pre-thought of range in kilometres. The
models thus were scaled down from the originals both in
respect of size and contrast as also reflectivity. Results could
be thus obtained which would be closer to those in the field.
These models and test charts were placed at a suitable height
by putting them on a stage where the foreground on the stage
could also be suitably illuminated and/or varied in its
reflectance. The stage area was almost as wide as the hall and
with a length equal to its width. This ensured a reasonable
field as well as an adequate foreground for the models and
test charts when tested from the other end.
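The scaling described above follows directly from equal subtended angles. A minimal sketch, using the small-angle approximation (the 3 m vehicle at 2 km is an illustrative example, not a case from the text):

```python
# 300 ft observation distance expressed in metres.
HALL_RANGE_M = 91.44

def model_size(actual_size_m, simulated_range_m, hall_range_m=HALL_RANGE_M):
    """Size of a model that subtends, at the hall's observation distance,
    the same angle as the actual object at the simulated range
    (small-angle approximation: angle ~ size / range)."""
    return actual_size_m * hall_range_m / simulated_range_m
```

A 3 m wide vehicle "placed" at 2 km would thus be modelled about 13.7 cm wide; reflectivity and contrast are scaled separately, as the text notes.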
(c) Installation of instruments
Suitable pedestals were made at the observation end of the
hall for convenience of observation of the scene area at the
other end. Naturally, the heights of the pedestal and the stage
have to be appropriately matched. Night vision
instrument testing in this manner proved quite effective in
comparing various designs for various applications. The data
compiled with the use of eight levels of illumination from
clear moonlight to an overcast sky provided the field
behaviour of the instruments to a sufficiently accurate
degree to satisfy user requirements.
Another approach has been to prepare a collimator type
test equipment[4]. Thus, an integrated test equipment designed by
one of the agencies is a composite unit with the facility to produce
the required low light level collimated beam for resolution and gain
measurements and other tests. It uses a calibrated lamp, an iris
diaphragm, a Lambertian integrator and a set of neutral density
filters for producing low light levels to illuminate the reticles of a
collimator which can be brought into the focal plane one by one
using a rotating turret. The reticle introduced may be a USAF 1951
pattern, so as to check the resolution of the night vision device at
low light levels varying from cloudy starlight to full moonlight at
different contrast levels, or a uniformly illuminated plane-parallel
glass window, which serves as a source for gain measurement by the
system as a whole. A suitable photometer can be used to measure
the light levels incident on the objective lens of the night vision
device. With various accessories, such a system can be used to check
the overall optical and electro-optical parameters of the night vision
system or I.I. tubes, independently. The approach thus gives a
good result on the optical and intensifier factors referred to in
Eqn 9.16. It can be utilized to test a production series or to finalize
the attributes of an acceptable design. The advantage is that it
dispenses with the need for a long hall, and the equipment can be
used both for quality control and production. It cannot, however,
lead to direct prediction of the ranges possible, as in the former approach
where models are in use, subtending the same angle at the
observation end as an original would have subtended at a given
range.
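The neutral density filters mentioned above step a bright calibrated source down to night levels. Optical densities of stacked filters add, so the combined transmittance is 10 raised to the negative sum of densities. A minimal sketch (the filter values in the example are illustrative):

```python
def transmittance(densities):
    """Combined transmittance of stacked neutral density filters.
    Optical densities add, so T = 10 ** -(sum of densities)."""
    return 10.0 ** (-sum(densities))

def attenuated_illuminance(source_lux, densities):
    """Illuminance delivered to the collimator reticle after the ND stack."""
    return source_lux * transmittance(densities)
```

For example, a 1 lux source behind ND 2.0 and ND 1.0 filters delivers 10^-3 lux, of the order of clear starlight.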
9.4 FIELD TESTING
Notwithstanding the lab-testing or range-prediction as
referred to in earlier paragraphs, it may still be necessary to field
test every new instrument both from the point of view of night
observation and night fighting capability after their successful
laboratory tests. While for night observation one can select any
desired area and conduct experiments both in moonlight and
starlight during different periods, the same is not true for checking
their night fighting capabilities. For such testing, it would be
necessary to actually fire a weapon system to arrive at a correct
decision. Obviously, such tests will have to be done in properly
established military ranges otherwise utilized for daylight
engagements. Once a new instrument is approved in the field,
further serial production can be very well controlled by laying down
the lab standards and by checking the overall performance by lab
testing.
9.5 INSTRUMENT TYPES
Figure 9.3 shows a photograph of a Generation-0 night
vision system which was adopted in the early sixties for the armoured
corps. As indicated in the photograph, the complete set consisted
of a driver's night sight with infrared headlights cutting out the visible,
as also the gunner's night sight, which operated in tandem with the
larger infrared searchlight, similarly filtered, thus permitting
larger ranges to be achieved with the sights. Obviously, such a
system could be detected with infrared sensors. Further, the system
was rather cumbersome, as the large searchlight had to be barrel- or
turret-mounted, involving many complications in the mechanical
design so as to confine both the sights and the searchlight to the
same area of vision.

Figure 9.3. Image converter based active night vision devices
(labelled components: gunner's night sight, searchlight,
commander's night sight, power supply unit, headlight,
and driver's night sight)

Figure 9.4. Hand-held binocular (image-intensifier based)

Obviously, for driving the system, one has to
have a unit magnification and a significant field of view, while for
engagement of a target the Gunner’s sight should have a compatible
magnification and correct illumination of the scene. Consequent on
the development of alkali and multi-alkali photocathodes and
Generation-1 and Generation-2 series of instruments, Generation-0
series is now obsolete for military purposes, though these can still
be used for perimeter search or surveillance in security zones.
Figure 9.4 shows a hand-held night vision binocular of
the Generation-2[5]. Such binoculars utilize advanced I.I. tubes
matched for their sensitivity and noise factor or tubes of Generation-3.
Generally, these binoculars have only one objective channel while
the viewing is through two oculars, more for comfort of vision than
for detailed depth appreciation. For some applications, the phosphor
screen can also be viewed through a carefully designed ocular system
allowing the scene to be seen with both the eyes through a single
magnifier type of optical component. Such systems referred to as
biocular systems have a distinct advantage, as the positioning of
the eyes is not critical.
Figure 9.5 shows an I.I. based night vision observation
device integrated with a laser rangefinder and a goniometer[5].
Such an observation device has a large aperture at a fast f-number
so that the optical factor in Eqn 9.16 is maximized in addition to the
image intensification factor which in any case should be as high
as possible. This approach permits a maximum night range that
can be viewed, subject only to the object scene factor. The optical
factor is further improved by appropriate coatings of the optical elements so
that the transmission factor through the optics is also as near
unity as possible.

Figure 9.5. Image intensifier-based night observation device
integrated with laser rangefinder and goniometer

As the night vision capability is high, this
capability can be utilized for fire control purposes by, say, an artillery
unit, if both the range and the direction of the target are
simultaneously measured. Thus, while the laser rangefinder
mounted in tandem is utilized to measure the range accurately
utilizing laser pulses, the goniometer gives a measurement of the
precise bearing (and possibly elevation too). Obviously, the two units,
i.e., the night observation device and the laser rangefinder, have
to be appropriately synchronized and then appropriately mounted
on the goniometer to aim at a given target.
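The fire-control use described above reduces to converting a measured range, bearing, and elevation into target coordinates. A sketch under assumed conventions (the text does not specify axes; here bearing is clockwise from north, giving local east-north-up coordinates):

```python
import math

def target_position(range_m, bearing_deg, elevation_deg=0.0):
    """Convert a laser-rangefinder range and goniometer bearing/elevation
    into local Cartesian (east, north, up) coordinates. Bearing is measured
    clockwise from north; elevation upward from the horizontal."""
    b = math.radians(bearing_deg)
    e = math.radians(elevation_deg)
    horizontal = range_m * math.cos(e)  # ground-plane component of range
    return (horizontal * math.sin(b),   # east
            horizontal * math.cos(b),   # north
            range_m * math.sin(e))      # up
```

For instance, a target at 1000 m on a bearing of 90 degrees at zero elevation lies 1000 m due east of the observation device.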
Figure 9.6 shows a low light level television system[5].
Though low light level television systems have not been discussed
in depth in this monograph, these have found use in limited
applications. Such a system is particularly useful where the same
scene is required to be viewed at different places simultaneously,
as for instance by a commander and a gunner in an armoured
fighting vehicle.

Figure 9.6. Low light level television system

In such systems, the I.I. tube may be coupled to a vidicon or
orthicon type of tube either internally or externally.
Presently, however, it is usual to couple image
intensifiers to silicon self-scanning array systems like the charge
coupled devices to obtain optimum utility for a series of applications
(para 8.5.7).

Figure 9.7. Thermal imager

Figure 9.8. Fire control system (role of night vision): a likely layout
with driver's night sight, gunner's day cum night sight, gunner's
articulated sight, commander's day cum night sight, commander's
episcope (daylight), and loader's episcope (daylight)
Night vision based on image intensification has a strong
competition from thermal imaging. Thermal imaging for night
vision utilizes the atmospheric window of 8 µm to 12 µm in the far
infrared. Self-radiation from the objects in this region is collected
through a fast infrared optical system and concentrated on a
quantum detector or detectors in some arrangement, i.e., array,
columns or matrix sensitive to that region. Depending on the
design considerations, the instrument may
employ scanners, a cooling arrangement for the detectors, and
coupling to a video system for display. The intricacies of the design
as also the cost of detectors along with their selective electronics
and other opto-mechanical and opto-electronic considerations
besides cooling systems, scanners and the like make the system
relatively very expensive. Nonetheless, the alternative is well
utilized for specific applications where cost considerations are
not that important vis-a-vis the strategic requirements for long
range viewing. Night vision based on thermal imaging has not
been dealt with in this volume. Figure 9.7 shows one such system
developed by the Defence R&D Organisation[5]. As may be
inferred, the refractive materials for optics for the infrared region
have also to be such as to transmit in the region 8 µm to 12 µm
and thus are special both in their nature and in their working.
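The choice of the 8 µm to 12 µm window follows from Wien's displacement law: terrestrial objects near 300 K radiate most strongly around 10 µm. A quick check of that figure:

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant, in µm·K

def peak_wavelength_um(temperature_k):
    """Wavelength of peak blackbody emission (Wien's displacement law)."""
    return WIEN_B_UM_K / temperature_k
```

For a 300 K scene this gives about 9.7 µm, squarely inside the 8-12 µm atmospheric window exploited by thermal imagers.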

Figure 9.9. Weapon-mounted night vision device


It is natural that with the development of night
vision technologies, weapon systems have also been adapted
to incorporate these features to extend their roles for night
fighting. Thus, an armoured vehicle should have night vision
capabilities for its movement, observation and target
engagement. In other words, the driver should have a night
sight, and the gunner too, but with the additional capability of
guiding engagement of a target, while the commander has a system
for night-time observation. Figure 9.8 shows a sketch of an
armoured vehicle with a likely layout for its daylight and night
time vision[5]. For small arms a simpler system would be
adequate to meet the user requirements[6]. Figure 9.9 shows
the photograph of one such system developed by the Defence
R&D Organisation for the Indian Army[5].

REFERENCES
1. Blackler, F.G. "Practical Guide to Night Viewing System
Performance". SPIE, Assessment of Imaging Systems, Visible and
Infrared (SIRA), Vol. 274, 1981, pp. 248-55.
2. Soule, H.V. "Electro-optical Photography at Low Illumination
Levels". John Wiley and Sons Inc.
3. Report on Creation of Test Facilities for Night Vision in the 300 ft
Long Hall. IRDE, Dehradun.
4. Integrated Test Equipment for Night Vision Devices. Perfect
Electronics, Dehradun.
5. Six Photographs and One Sketch: Various Night Vision Devices.
Courtesy, Instruments Research & Development Establishment,
Dehradun.
6. Gourley, S. & Hewish, M. "Sensors for Small Arms". International
Defence Review, Vol. 5, 1995, pp. 53-57.
Index

A
Accommodation 1
Acquisition 16, 23
Active imaging 56
Airy disc 66, 68, 69
Alloy photocathodes 81
Aperture lens 118
Aperture stop 62
Atmospheric windows 30, 57
Attenuation coefficient 33

B
Back focal-length 61
Background 44
Bipotential lens 118
Blackwell's approach 18

C
Cathodo-luminescence 93
Charge coupled devices (CCD) 14, 133
Collimator type test equipment 154
Composite photocathodes 81
Cone photoreceptors 11
Cone receptors 6
Cones 9
Contrast 23, 35, 145

D
Detection 17, 20, 23
    of movement 22
    probability 23

E
Electron image intensifier 120
Electron-optics 117
Entrance pupil 62
Environment 29
Exit pupil 62
Experimental lab testing for range-evaluation 150
Eyepieces 59

F
Fibre-optic twisters 109
Fibre-optics 109
Field of view 61
Field stop 62
Field test 154
Focal-length 61
Front focal-length 61
Fused fibre-optics faceplates 109

G
Generation-0 125
    night vision system 155
Generation-1 106, 125
Generation-2 106, 109
    wafer tube 129
Generation-3 107, 131

H
Hand-held night vision binocular of the Generation-2 156
Human eye 2
Hybrid tubes 131

I
Identification 17, 20
Image intensifier based night vision observation device 156
Image intensifier tubes
    performance 134
    production of 138
    signal-to-noise ratio 134
Image intensifiers 14
    function 149
Imaging performance 26
Instrument systems 143
Instrument types 155

J
Johnson criteria 19, 23

L
Lagrange-Helmholtz equation 119
Low light level television system 157
Luminescence decay 99
Luminous gain & E.B.I. 137
Luminous sensitivity 89

M
Magnification 60, 63
Micro-bolometer 57
Modulation transfer function 24, 66, 69, 136
Monochromatic diffraction limited MTF 66, 67, 68
Moonlight 44, 46
Multi-alkali photocathode 140

N
Natural backgrounds 50
Night illumination 43
Night time sniper's rifle telescope 161
Night vision devices 52
    image intensifier function 149
Night vision system 147
    optical factor 148
Night-time turbulence 40
Noise power factor 146
Numerical aperture 63

O
Object scene factor 148
Optic flow 1
Optical designs 55
Optical parameters
    schematic eye 3
Optical system 59
Orientation 20

P
Paraxial approach 64
Passive imaging 56
Perfect image 58
Perfect optical system 58
Phosphor
    luminous transitions in 94
Phosphor ageing 104
Phosphors 93
    luminescence efficiency 99
Photocathode 52, 120
    alkali 82
    dark current in 90
    efficiency of 79
    negative affinity 83
    response time 87
    sensitivity 87
    transferred electron 85
    types 80, 83
Photoemission 77

Q
Quantum contrast 147
Quantum detection efficiency 25
Quantum detectors 57
Quantum starved 43

R
Range equation 144
Ray tracing 65
Recognition 17, 20
Reflectivity at night 48
Retina 2, 5, 7, 24
Rods 9

S
Scattering coefficient in rainfall 34
Schematic eye 2
Screen fabrication 103
Signal-to-noise 27
Signal-to-noise ratio 20
Silicon self-scanning array systems 133
Small arms 161
Snell's law 64
Snow 34
Starlight 46, 48
Stereopsis 1

T
Target 44
Thermal imaging 160
Third order aberrations 64
Transmission 30
Trigonometrical ray tracing 65

U
Unipotential lens 118

V
Vignetting 62
Visibility 35
Vision 1, 8
Vision cues 17
Visual system 6

Z
Zero generation 105
