AN INTRODUCTION TO
NIGHT VISION
TECHNOLOGY
R Hradaynath
Series Editors
Editor-in-Chief: Mohinder Singh
Editors: Ashok Kumar, A Saravanan
Asst Editor: Ramesh Chander
Editorial Asst: AK Sen
Production
Printing: JV Ramakrishna, SK Tyagi
Cover Design: Vinod Kumari Sharma, RK Bhatnagar
Marketing: RK Dua
The views expressed in the book are those of the author only. The
Editors or Publisher do not assume responsibility for the statements/
opinions expressed by the author.
ISBN: 81–86514–10–4
Foreword ix
Preface xi
Acknowledgements xv
CHAPTER 1
VISION & HUMAN EYE 1
1.1 Introduction 1
1.2 Optical parameters of human eye 2
1.3 Information processing by visual system 6
1.4 Overall mechanisms 9
1.4.1 Light stimulus 9
1.4.2 Threshold Vs intensity functions & contrast 10
1.4.3 Colour 11
1.5 Implications for night vision 12
CHAPTER 2
SEARCH & ACQUISITION 15
2.1 Search 15
2.2 Acquisition 16
2.3 Blackwell's approach 18
2.4 Johnson criteria 19
2.5 Display signal-to-noise ratio 20
2.6 Detection with target movement 22
2.7 Probabilities of acquisition 23
2.8 Contrast & acquisition 23
CHAPTER 3
THE ENVIRONMENT 29
3.1 Introduction 29
3.2 Atmospheric absorption & scattering 31
3.2.1 Scattering due to rain & snow 34
3.2.2 Haze & fog 35
3.2.3 Visibility & contrast 35
3.3 Atmosphere modelling 38
3.4 Instruments, night vision & atmospherics 39
CHAPTER 4
NIGHT ILLUMINATION, REFLECTIVITIES &
BACKGROUND 43
4.1 Night illumination 43
4.1.1 Moonlight 44
4.1.2 Starlight 48
4.2 Reflectivity at night 48
4.3 The background 50
4.4 Effect on design of vision devices 52
CHAPTER 5
OPTICAL CONSIDERATIONS 55
5.1 Introduction 55
5.2 Basic requirements 58
5.2.1 System parameters 59
5.2.2 Design approach 64
5.2.3 Design evaluation 66
5.3 Optical considerations 69
CHAPTER 6
PHOTOEMISSION 77
6.1 Introduction 77
6.2 Photoemission & its theoretical considerations 77
6.2.1 Theoretical considerations 78
6.2.2 Types of photocathodes & their efficiencies 79
6.3 Development of photocathodes 81
6.3.1 Composite photocathodes 81
6.3.2 Alloy photocathodes 81
6.3.3 Alkali photocathodes 82
6.3.4 Negative affinity photocathodes 83
6.3.5 Transferred electron (field assisted) photocathodes 85
6.4 Photocathode response time 87
6.5 Photocathode sensitivity 87
6.6 Dark current in photocathodes 90
6.7 Summary 91
CHAPTER 7
PHOSPHORS 93
7.1 Introduction 93
7.2 Phosphors 93
7.3 Luminous transitions in a phosphor 94
7.4 Phosphor mechanisms 96
7.5 Reduction of luminescence efficiency 99
7.6 Luminescence decay 99
7.7 Phosphor applications 100
7.8 Phosphor screens 101
7.9 Screen fabrication 103
7.10 Phosphor ageing 104
CHAPTER 8
IMAGE INTENSIFIER TUBES 105
8.1 Introduction 105
8.2 Fibre optics in image intensifiers 108
8.2.1 Concepts of fibre-optics 109
8.2.2 Fibre-optics faceplates 110
8.2.3 Micro-channel plates 114
8.2.4 Fibre-optic image inverters/twisters 117
8.3 Electron optics 117
8.4 General considerations for image intensifier designs 120
8.5 Image intensifier tube types 125
8.5.1 Generation-0 image converter tubes 125
8.5.2 Generation-1 image intensifier tubes 126
8.5.3 Generation-2 image intensifier tubes 127
8.5.4 Generation-2 wafer tube 129
8.5.6 Generation-3 image intensifier tubes 131
8.5.7 Hybrid tubes 132
8.6 Performance of image intensifier tubes 134
8.6.1 Signal-to-noise ratio 134
8.6.2 Consideration of modulation transfer function (MTF) 136
8.6.3 Luminous gain and E.B.I 137
8.6.4 Other parameters 138
8.6.5 A note on production of image intensifier tubes 138
CHAPTER 9
Dehradun R Hradaynath
Former Director & Distinguished Scientist
Instruments R&D Establishment
DR&DO, Dehradun
ACKNOWLEDGEMENTS
R Hradaynath
CHAPTER 1
VISION & HUMAN EYE
1.1 INTRODUCTION
Vision entails perception, by the eye-brain system, of the
environment, based on the reflectance of the static or changing
observable scene illuminated by one or more light sources, and on
the perception of the sources themselves. In most cases the
illumination is natural, due to the sun, moon and stars, along with
possible reflections of these sources from clouds, the sky, or any
land or water mass. These days artificial illumination is also
significant. The ability of a living species to recognize and represent
sources, objects, and their location, shape, size, colour, shading,
movement and other characteristics relevant to its planning of action
or interaction defines its observable scene. The observable scene is
thus limited by the capability of the species and the information it
seeks. Sustained vision further requires a large steady-state
sensitivity, so that the system can properly react to amplitude and
wavelength changes in the illuminating sources. Thus the perception
of a given scene should not become distorted as observation shifts
from sunlight at noon to starlight at night, across a wide range of
coloured or white artificial sources, or on facing away from or towards
the sun.
Vision as perceived above would therefore call for
processing of the input visual signal to attain what has been stated.
For instance, the location of objects in space, or their movement, may
be helped by
(a) Stereopsis, i.e., using cues provided by the visual input to
two spatially separated eyes.
(b) Optic flow, i.e., using information provided to the eye from
moment to moment (i.e., separated in time).
(c) Accommodation, i.e., determining the focal length which
will best bring an object into focus, and
Figure 1.1. Optical constants of Helmholtz's schematic eye (all
dimensions are in mm).
Anterior lens surface (R2): 10, 3.6, 12.3, —
Focal plane (F): —, –13.04, —, —
Principal plane (H): —, 1.96, —, —
Nodal plane (N): —, 6.96, —, —
Posterior lens surface (R3): –6, 7.2, 20.5, —
Focal plane (F′): —, 22.38, —, —
Principal plane (H′): —, 2.38, —, —
Nodal plane (N′): —, 7.38, —, —
Entrance pupil position: 3.04 (size 1.15 × pupil diameter)
Exit pupil position: 3.72 (size 1.05 × pupil diameter)
Volumes
Eye lens: 30.5, 1.45
Anterior chamber: —, 1.33
Posterior chamber: —, 1.33
Eye as a whole: 66.6, —
interesting to note that the eye focused for infinity exhibits positive
spherical aberration and, for very near distances, negative, while for
intermediate distances (around 50 cm) it is essentially zero. The
line spread is minimum for a pupil diameter of 2.4 mm, and for
smaller diameters the spread approaches the diffraction limit. At
2.4 mm the eye is also almost diffraction limited, with an exponential
fall-off representing scatter and depth of focus. As the pupil diameter
increases beyond 2.4 mm, this fall-off becomes more prominent and
dominates the Gaussian spread.
Figure 1.2 is basically a sketch showing the blood supply
to the eye representing arteries and veins as shaded and dark lines,
respectively [4].
The cornea (C) together with the sclera (S) represents the outer
fibrous envelope of the eyeball. While the cornea is transparent,
the sclera is pearly white. The sclera forms almost five-sixths of the
envelope. The two structures are dovetailed into one another
biologically. The cornea is thickest (about 1 mm) posteriorly,
optic nerve and convey the information further to the various areas
in the visual system of the brain.
The iris (I) arises from the anterior surface of the ciliary
body and forms an adjustable diaphragm, the central aperture
of which is known as the pupil. The diaphragm divides the space
between the cornea and the lens into two chambers, which are filled
with a fluid, the aqueous humour. The ciliary body is, in turn, a
continuation of the retina and the choroid. The iris gains firm
support by lying on the lens. The contractile diaphragm reacts to
the intensity of light and adjusts the pupil diameter accordingly,
from 7 mm in starlight to 2 mm in noon daylight. In a given position
it also cuts off marginal rays, which unless stopped would diminish
the sharpness of the retinal image.
The lens (L) is a transparent, colourless structure of
lenticular shape and soft consistency, enclosed in a tight elastic
membrane whose thickness varies over different parts of the lens.
Its circumference is circular, 9 mm in diameter, with a central
thickness of 5 mm in an adult. The posterior surface is more highly
curved and embedded in a shallow depression in the vitreous humour,
while the anterior surface is in contact with the aqueous humour. The
vitreous humour is a transparent, colourless gelatinous mass which
fills the posterior cavity of the eye and occupies about four-fifths of
the interior of the eyeball. The aqueous humour is a transparent and
colourless fluid and serves as a medium in which the iris can operate
freely.
The optic nerve (N) collects its fibres from the ganglion cells in
the retina and passes out through the eyeball. The fibres from the right
halves of both retinas pass into the right optic tract and the
fibres from the left halves into the left optic tract, each tract
carrying a nearly common field of vision from both eyes. Both
tracts continue to the centre of vision in the brain.
1.3 INFORMATION PROCESSING BY VISUAL SYSTEM
The continuous photon stream incident on both
eyes as a result of light reflected from the environment is
appropriately focused on the retinal receptors (i.e., rods and cones)
through the eye's optical system (i.e., cornea, pupil, lens and the
intervening spaces occupied by the aqueous and vitreous humour). This
photon stream, variable in space (x, y, z), time (t) and wavelength (λ),
is sampled in space and wavelength by the three types of cone
Figure 1.4. Normalised spectral sensitivity of luminosity functions
for scotopic and photopic vision (relative sensitivity of the blue,
green and red responses over 400–700 nm).
ΔL/L0 = k (L + L0)^n    (1.1)
Figure 1.6. Threshold vs intensity function in respect of photopic
vision.
known as Weber's Law, explains that contrast remains constant with
changes of luminance in an observable scene above a certain
minimal level of retinal luminance. It should be noted that Weber's
Law operates on luminances as reflected from the observable
scene and analysed by the eye-brain system; the perception of light
sources and brightness as such is additional to the operation of
Weber's Law.
Various definitions of contrast are in use, but as all of them
are based on a ratio, they all yield invariance of contrast under a
change in illumination level. A threshold-vs-intensity curve for rod
vision is shown in Fig. 1.7. It can be observed that rods, unlike
cones, are saturated by steady backgrounds as a consequence of
their high sensitivity and lack of gain control. At the lowest light
levels, they are known to signal the arrival of even single photons.
Because of their low spatial resolution and saturation, their
contribution to daytime vision is insignificant compared with that of
cones.
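The invariance that Weber's Law implies can be checked numerically. The following sketch is illustrative only (the function names are ours, not the text's); it shows that ratio-based contrast definitions are unchanged when the illumination level is scaled uniformly:

```python
# Sketch: ratio-based contrast definitions are invariant under a
# uniform scaling of the illumination level.

def weber_contrast(l_target, l_background):
    """Weber contrast: (L - Lb) / Lb."""
    return (l_target - l_background) / l_background

def michelson_contrast(l_max, l_min):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin)."""
    return (l_max - l_min) / (l_max + l_min)

# A 20 per cent brighter target, at two illumination levels.
for scale in (1.0, 100.0):                      # e.g., moonlight vs daylight
    lb = 0.1 * scale                            # background luminance (arbitrary units)
    lt = 0.12 * scale                           # target luminance
    print(round(weber_contrast(lt, lb), 6))     # -> 0.2 at both levels
```

The same invariance holds for the Michelson definition, since multiplying both luminances by a constant cancels in the ratio.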
1.4.3 Colour
As indicated earlier, the colour sensation is picked up by
three independent types of cone photoreceptors with the spectral
characteristics as shown in Fig. 1.3. The three cone types are
designated blue (B ), green (G ), and red (R). Constancy of colour has
Figure 1.7. Threshold vs intensity function in respect of scotopic
vision.
2.1 SEARCH
As is by now obvious, the eye is, par excellence, a
spatial-location and movement-detection instrument under
conditions of varying contrast, colour and resolution, dependent on
differing levels of illumination and their spectral content. Having
observed a scene, a species needs to search it for objects of interest.
Thus a frog would want to know about small organisms, such as
flies, which it can eat, and simultaneously be alert to predators in
its field of view. At a higher level, the task of a human being, though
similar, is more elaborate. Humans also search for and acquire
targets of interest for desired interaction or avoidance. Refined
search and acquisition has led to the evolution of a large number of
techniques and to the utilization of parts of the electromagnetic
spectrum beyond the capabilities of the human eye. As such, it is of
interest to examine the search and acquisition techniques adopted by
the human eye and by the instruments on which we depend.
The image of a scene is stabilized on the retina by reflex
movements of the eye which balance its involuntary movements of
high-frequency tremor, low-speed drift and flicks, all of low amplitude,
even when an observer is consciously trying to fixate on a given
point. This is presumably necessary because the rods and cones
become desensitized if the illuminance of the light falling on them is
absolutely unchanging. To make a search, the eye jumps from one
fixation point to another, dwelling momentarily on each fixation
point after each jump. The jump, called a saccade, has a definite
amplitude. Search time would be excessive if the dwell time after
each saccade were long and the saccades small. It has been
experimentally observed that if the observed sector is larger, the
2.2 ACQUISITION
Once a search has been completed and it is desired to
acquire the target, it is found that acquisition is possible at various
levels. Thus while taking an early morning walk in an open space,
one may observe at a distance slight movement at first, and not be
sure about the object. Once the object is a little nearer, one may be
able to decide that it is a human being, and once the human being
is still nearer one can recognise the face and identify the person. A
similar situation arises in battlefield conditions also, wherein one
may acquire some target based on its movement, or its lack of fitment
in the background, but on closer observation may identify it as an
object of interest and subsequently recognize it as a tank, heavy
(c) Identification
At this level of acquisition, one should be able to indicate
the type of the object, i.e., which type of tank, vehicle, or
the number of people in a group. An important military
requirement would be identification of friend or foe (IFF).
log Ct = 0.075 + 1.48 × 10⁻⁴ (log L – 2.96)² – 1.025 log α
where
t = integration time of the eye
Δf = video bandwidth
a = area of the target in the image
A = area of the field of view
SNRv = signal-to-noise ratio in the video signal
and for a periodic target

SNRDi = [(2tΔf/α)(U/N)]^1/2 SNRv    (2.7)

where
α = displayed horizontal-to-vertical ratio
U = bar length-to-width ratio
N = bar pattern spatial frequency (lines/picture height)
These expressions could in a realistic case be modified
by involving the MTF of the system. The point to note is that detection
experiments provided a value of around 3 for SNRDi at the
threshold of 50 per cent probability. The value appeared to
vary only slightly over a wide range of rectangular shapes and sizes,
and also for squares. The periodic bar target showed a variation with
both the spatial frequency and the length-to-width ratio of the
patterns. Further experiments in recognition and identification of
military vehicles followed theoretical calculations of SNRDi for
Johnson's equivalent bar pattern, and indicated values of 3.3 to 5.0
for recognition and 5.2 to 6.8 for identification against various
backgrounds. Such large variability suggests that this parameter is
not a very good general performance measure, though one could draw
on the minimum values that may be necessary for good performance.
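As an illustration only, the threshold values quoted above can be turned into a small decision helper. The function and the choice of the upper ends of the reported ranges as conservative requirements are our assumptions, not part of the text:

```python
# Illustrative: which acquisition levels does a given displayed
# signal-to-noise ratio (SNR_Di) support, using the thresholds quoted
# in the text (50 per cent probability; the upper bounds of the reported
# ranges are taken here as the conservative requirement).

THRESHOLDS = {
    "detection": 3.0,        # ~3 over a wide range of shapes and sizes
    "recognition": 5.0,      # reported range 3.3-5.0
    "identification": 6.8,   # reported range 5.2-6.8
}

def supported_levels(snr_di):
    """Return the acquisition levels whose conservative threshold is met."""
    return [level for level, t in THRESHOLDS.items() if snr_di >= t]

print(supported_levels(4.0))   # -> ['detection']
```

The large spread of the reported ranges is exactly why the text warns against treating SNRDi as a general performance measure; the helper only encodes minimum requirements.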
2.6 DETECTION WITH TARGET MOVEMENT
Detection of movement is a parameter of importance both
by day and night, and the expected sensitivity to movement is an
inbuilt faculty of the human eye-brain system. In field conditions
this would imply our sensitivity to the angular movement of a likely
target or object of interest. While the detection probability is
enhanced, visual acuity would drop, i.e., perception or detection
Search & acquisition 23
Cm = C (1 + 0.45 w²)    (2.8)

where
C = contrast threshold while stationary
w = target angular speed in degrees per second, for speeds up to 5° per second.
2.7 PROBABILITIES OF ACQUISITION
Detection probability can be calculated as a product of a
number of probabilities based on the variables in the observer – object
of interest scene. The factors could be the evaluation of a target
embedded in its background, intervening atmosphere, clutter,
obscuration, the capabilities of the electro-optical system used, and
the display parameters. In addition, human factors, such as training,
search, and establishing a line of sight between the target and the
sensor also matter. Further recognition and identification could be
done by involving Johnson criteria. All these parameters and possibly
more have been selectively incorporated in various models with
appropriate algorithms to arrive at a possible prediction of the field
conditions. The approach has been of interest to many a workers,
and models have been developed based on image intensifier and
forward looking infrared systems. While a universal model is far from
developed, one can possibly select a model for advance understanding
limited to certain parameters, such as atmospherics or variations in
instrument design. It is interesting to note that this approach is rather
late in the day for most of the systems already developed, but may
have a significance in sophisticated futuristic developments.
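The product-of-probabilities idea described above can be sketched as follows; the factor names and values are purely illustrative and not taken from any particular model:

```python
# Sketch of overall acquisition probability as a product of independent
# stage probabilities; the stages and numbers below are illustrative only.
from math import prod

def acquisition_probability(factors):
    """Overall probability as the product of stage probabilities."""
    return prod(factors)

stages = {
    "target_in_searched_field": 0.95,
    "line_of_sight_established": 0.90,
    "atmosphere_and_clutter_permit": 0.85,
    "detected_on_display": 0.70,
}
p = acquisition_probability(stages.values())
print(round(p, 3))   # -> 0.509
```

The multiplication assumes the stages are independent, which real models relax; it nevertheless shows why a chain of individually good probabilities can still yield a modest overall one.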
2.8 CONTRAST & ACQUISITION
It is now obvious that consideration of contrast really leads
to the detection and acquisition of an object of interest. It may therefore
be worthwhile to directly interpret acquisition in terms of contrast at
the object and its transfer to the contrast in the image as seen by the
eye. The image contrast factor Ci would be dependent on, (if the object
structure is to be perceptible) the ratio between the random fluctuations
n during the observation time to the mean number of quanta received
by the eye n~ [13]:
Ci ∝ Δn/n̄    (2.9)

Ci = Kr (Δn/n̄)    (2.10)

Co = (Kr/Tm(a)) (Δn(a)/n̄(a))    (2.11)
where Δn(a) is the random fluctuation and n̄(a) the mean number
of quanta received by the eye during the observation time at a spatial
frequency (a). The total perceptibility would then be a
summation of such contrast values over all spatial frequencies of
interest.
The structural content of the image at the retina, i.e.,
Δn/n̄, arises from the total system noise-to-signal
ratio and could best be assessed at the retina itself, were that possible.
For practical purposes, it can be approximated most closely by
measuring the output signal-to-noise ratio of an imaging system
when its gain is such that only the noise is detectable.
Variants of this measure, in terms of equivalent background
illumination, background dark current, or noise equivalent power,
are different definitions, in varying contexts for different detectors, of
the same parameter, which hopefully helps us predetermine what
we are looking for.
One can argue from the fundamental reasoning that the
lowest possible noise-to-signal ratios could be achieved if every
(Δn/n̄)²out / (Δn/n̄)²in = F² = 1/qde    (2.12)

Co = (Kr F/Tm) (Δn/n̄)in    (2.13)
levels, one finds that the attempt is to get at the ultimate
perception through all the parameters that define a sensor
and a complete system. That this has not been completely achieved,
and is still a matter of research, is evident from the fact that the
success or failure of a given system with a given sensor under actual
field conditions cannot be predicted accurately. The structural factor
Δn/n̄ that is accepted by the system is also fluctuating as a whole
at the entrance aperture.
One can normally use qde to define the performance of
imaging sensors in general, be they photodetectors, image tubes or
the like. In photopic imaging the number of quanta
available is so large that statistical fluctuations are negligible and
one can treat the factor Δn/n̄ as a constant; the imaging
performance then depends upon the modulation transfer function, Tm.
The qde is not of much concern in direct vision because of the
abundance of quanta carrying structural information. In low-light-level
imaging, on the other hand, the number of quanta available is so
small that performance is governed by statistical fluctuations. While
Δn/n̄ determines the image contrast, improvement in the MTF of the
system does not play as vital a role as it does in normal day
instruments. In x-ray, ultrasound or NMR imaging there are
undoubtedly fundamental statistical limits, but the method of
calculating them is not obvious; owing to associated phenomena such
as scattering and differential absorption, the problem becomes so
complicated that theoretical prediction is difficult and one resorts
to the signal-to-noise ratio. In these cases also the MTF (Tm) is not
very important, and image-processing methods are devised so that
contrast rendition is improved to detect contrasts as low as 0.001
per cent, although resolution may be as low as a few lines/mm.
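Assuming Poisson photon statistics, where the fluctuation Δn is of order √n̄, Eqn (2.10) implies a smallest perceivable contrast falling as 1/√n̄. The sketch below is ours; the value of Kr is an assumed threshold constant, of the order of the display SNR thresholds quoted earlier in this chapter:

```python
# Sketch (our notation): for Poisson photon statistics delta_n ~ sqrt(n_mean),
# so the contrast limit C_i = K_r * delta_n / n_mean falls as 1/sqrt(n_mean).
from math import sqrt

K_R = 3.0  # assumed threshold constant, of the order of the SNR thresholds

def contrast_limit(n_mean):
    """Smallest perceivable contrast for n_mean quanta per observation."""
    return K_R * sqrt(n_mean) / n_mean   # = K_R / sqrt(n_mean)

for n in (100, 10_000, 1_000_000):       # starlight -> twilight -> daylight
    print(n, round(contrast_limit(n), 4))
```

This is why, as the text says, low-light imaging is governed by statistical fluctuations while photopic imaging can treat Δn/n̄ as effectively constant.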
The goal of thwarting the natural conditions
which limit the usefulness of both vision and photography in
extracting information from scenes of decreasing apparent object
contrast has been pursued with only limited success for many years.
The use of short-wavelength cutoff filters, for example, combined
with extension of photographic film sensitivity at the long-wavelength
end, has helped to penetrate the veil in cases where contrast
has been reduced by wavelength-dependent (Rayleigh and Mie)
scattering. The introduction of infrared-sensitive emulsions carried
this photographic approach as far as it could go.
REFERENCES
1. Enoch, J.P., "Effect of the Size of a Complex Display Upon
Visual Search". J. Opt. Soc. Am., vol. 49, (1959), pp. 280-86.
2. Waldman, G.; Wootton, J.; Hobson, G. & Luetkemeyer, K.,
"A Normalised Clutter Measure for Images". Comp. Vis.,
Graphics and Image Proc., vol. 42, (1988), pp. 137-156.
3. RCA Electro-optics Handbook. Tech. Series, EOH-11, (RCA Solid
State Division, 1974).
4. Waldman, G. & Wootton, J., Electro-optical Systems Performance
Modeling. (Artech House, 1993).
5. Blackwell, H.R., "Contrast Thresholds of the Human Eye". J.
Opt. Soc. Am., vol. 36, no. 11, (1946), pp. 624-43.
6. Blackwell, H.R. & Taylor, J.R., Survey of Laboratory Studies
of Visual Detection. NATO Seminar on Detection, Recognition
and Identification of Line-of-Sight Targets. (The Hague,
Netherlands, 1969).
7. Waldman, G.; Wootton, J. & Hobson, G., "Visual Detection
with Search: An Empirical Model". IEEE Trans. on Systems,
Man & Cyber., vol. 21, (1991), pp. 596-606.
8. Overington, J., "Interaction of Vision with Optical Aids". J. Opt.
Soc. Am., vol. 63, no. 9, (1973), pp. 1043-49.
9. Johnson, J., "Analysis of Image Forming Systems". Image
Intensifier Symposium. (Fort Belvoir, VA, October 1958).
10. Wiseman, R.S., "Birth and Evolution of Visionics". SPIE Infrared
Imaging, vol. 1689, (1997), pp. 66-74.
CHAPTER 3
THE ENVIRONMENT
3.1 INTRODUCTION
The environment has an important effect on the
observation of a target or object of interest, the single most important
factor being the atmosphere. The amount of radiation
received at the surface of the earth, and its spectral distribution, is
determined by the constituents of the atmosphere as well as by any
intervening particulate matter. Observation at low
angles, as is usually the case in terrestrial observation, can further
aggravate the problem. Weather conditions such as rain, snow,
haze and fog reduce the clarity of vision. Dust and sand thrown
up by vehicular movement, and various obscurants such as smoke,
can be of special significance in a battlefield environment. The presence
of pollutants resulting in smog can drastically reduce visibility
down to a few metres. Such conditions make contrast
rendition quite difficult in the observation plane.
It is also well known that astronomical observations
require suitable experimental and theoretical correction
to annul the effect of the intervening atmosphere, besides the
choosing of correct locations for observation. The most common visible
effect, observed by the naked eye, is the twinkling of the stars. On the
surface of the earth, atmospheric variation in refractive index may
lead to effects like mirage and distortions of varied nature at noon
time or in sandy terrain. Yet for much of the time the atmosphere
retains a reasonably uniform refractive index and permits good vision,
so much so that the refractive index is generally assumed to
be unity, i.e., the same as for vacuum, in most calculations. The vision
can be excellent on a perfect day, as witnessed in sub-Himalayan
terrain which is reasonably free of pollutants and non-atmospheric
particulate matter.
Figure 3.1. Spectral radiance of the sun at zenith on earth (0–3.0 μm;
absorption bands due to H2O, CO2 and O3 indicated).
The atmospheric effects manifest themselves through
absorption, scattering, emission, turbulence and secondary sources
of radiation, such as skylight and reflections from large areas
like clouds and the water masses on the earth. During the night,
such conditions are relevant to moonlight and starlight
illumination. The profile of the spectral radiance of the sun at the mean
earth-sun separation is shown in Fig. 3.1 [1]. It also shows the
absorption at sea level due to the atmospheric constituents.
Transmission is evidently quite significant in the visible region, as
also in the near infrared, though absorption bands due to water
and carbon dioxide make significant inroads. Likewise, Fig. 3.2
shows transmission in percentage extending right up to 16 μm,
with good transmission in the 3–5 μm and 8–14 μm bands. These
good regions of transmission are also referred to as the atmospheric
windows.
As we are concerned more with terrestrial, i.e., horizontal,
transmittance, it is interesting to look at Fig. 3.3, which shows
transmittance at sea level with 5.7 mm of precipitable water at
26 °C over an atmospheric path of 1000 ft (≈ 305 m) [2,3]. This
data also confirms good transmittance in the visible, near
infrared, 3–5 μm and 8–14 μm bands. The data for horizontal
transmission would certainly vary significantly with the
local conditions of observation, but the spectral character of the
transmission would, by and large, be similar.
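Horizontal-path data such as that of Fig. 3.3 can be rescaled to other ranges if one assumes a homogeneous path obeying the Beer-Lambert law, τ = exp(−σR). The sketch below is ours, under exactly that assumption:

```python
# Sketch (assumes a homogeneous path obeying the Beer-Lambert law):
# transmittance tau_ref measured over one horizontal path length can be
# scaled to another path length, since tau = exp(-sigma * R).
from math import log, exp

def scale_transmittance(tau_ref, r_ref_km, r_km):
    """Scale path transmittance tau_ref (over r_ref_km) to a path r_km."""
    sigma = -log(tau_ref) / r_ref_km     # attenuation coefficient, km^-1
    return exp(-sigma * r_km)

# 80 per cent over a ~0.305 km path, scaled to 2 km:
print(round(scale_transmittance(0.80, 0.305, 2.0), 3))
```

In practice the path is rarely homogeneous, so this is a first approximation only; the local conditions of observation dominate, as the text notes.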
Figure 3.2. Atmospheric transmittance vs wavelength
Figure 3.3. Transmission over ~ 305 m (1000 ft) horizontal air path
dioxide are the most important molecules in this respect, while ozone
plays a significant part in the absorption of ultraviolet, and in
absorption in the 9–10 μm region in the upper layers of the
atmosphere. The effect of absorption is attenuation of the signal
strength dependent on the wavelength of the incident light. Basically,
in absorption the incident photon is absorbed by an atmospheric
molecule, causing a change in the molecule's internal energy state.
Infrared and visible photons may have just adequate energy to
enable transitions between the rotational or vibrational energy states
of a gas molecule. Because the energy matching is better in the
infrared, absorption is less significant in the visible region.
due to aerosol would depend on its density. While the energy taken
out of a beam of radiation by absorption contributes to the heating
of the air, the energy scattered by molecules, aerosol or cloud
droplets will be redistributed in the atmosphere. In addition to
absorption, the signal strength is further altered due to scattering
by air molecules, aerosol and other particulate matter present in
the atmosphere. The scattering effects are dependent on the particle
size and could be thought of in three categories. The first, where
the particle size is relatively small in comparison with the wavelength
of the incident light; second, where it is of the same order; and
third, where the particle size is relatively large in comparison with
the wavelength of the incident light. The relative sizes and their
density for important atmospheric constituents are indicated in
Table 3.1.
set, where the particle size is of the same order as the wavelength
of the incident light, would primarily be due to aerosol. The
scattering in this case becomes a complex function of particle
size, shape, refractive index, scattering angle, and wavelength.
This could be addressed by utilizing the Mie theory of scattering.
In this case, the intensity of the scattered radiation becomes less
dependent on wavelength and more dependent on angle, with a
distinct peak in the forward direction. In the third group where
the particles are much larger than the wavelengths of the incident
light, the particles would behave like micro-optical components.
Thus their theoretical treatment could be essayed by utilizing the
concepts of geometrical optics. This type of scattering also referred
to as nonselective scattering or white light scattering (because of
lack of dependence of scattering on wavelength) or scattering in
the geometrical optics regime could explain scattering due to such
large particles as raindrops. Scattering intensity has still a strong
angular dependence with a strong peak in the forward direction.
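The three regimes above are conventionally separated by the size parameter x = 2πr/λ. A rough classifier, with commonly used boundary values that are not taken from the text, might look like:

```python
# Illustrative classifier for the three scattering regimes, using the
# conventional size parameter x = 2*pi*r / wavelength. The boundaries
# (0.1 and 50) are common rules of thumb, not values from the text.
from math import pi

def scattering_regime(radius_um, wavelength_um):
    x = 2.0 * pi * radius_um / wavelength_um
    if x < 0.1:
        return "Rayleigh (particle much smaller than wavelength)"
    if x < 50.0:
        return "Mie (particle comparable to wavelength)"
    return "geometrical optics / nonselective"

print(scattering_regime(0.0001, 0.55))  # air molecule at 0.55 um
print(scattering_regime(0.5, 0.55))     # haze aerosol
print(scattering_regime(500.0, 0.55))   # raindrop
```

The same particle can fall in different regimes at different wavelengths, which is one reason infrared bands penetrate haze better than visible light.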
3.2.1 Scattering due to Rain & Snow
According to Gilbertson [4], the scattering coefficient in
rainfall is independent of wavelength from the visible to the far
infrared region of the spectrum and can be estimated by the equation

σs(rain) = 0.248 t^0.67    (3.3)

where σs(rain) is the scattering coefficient in km⁻¹ and t is the rainfall
rate in mm hr⁻¹.
More recent articles by Chimelis and others [5] give three
different formulae for the scattering coefficient due to rain, but all
four formulae are close enough and do not differ significantly.
Empirical relationships have also been developed for
snow, based on experimental results, and it has been found that
the results tend to fall into two groups, one for snow in small needle-
shaped crystals and the other for larger plate-like crystals. The
relationships are as under [6]:

σs(snow) = 3.2 t^0.91    ...for small needle-shaped crystals    (3.4)

σs(snow) = 1.3 t^0.5    ...for larger plate-like crystals    (3.5)

where the rate of snow accumulation is expressed as the equivalent
liquid water rate in mm/hr, given by t.
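Eqns (3.3) to (3.5) transcribe directly into code; the function names below are ours:

```python
# Direct transcription of Eqns (3.3)-(3.5): t is the rainfall (or liquid
# water equivalent snowfall) rate in mm/hr, result in km^-1.

def sigma_rain(t):
    """Gilbertson's scattering coefficient for rain, Eqn (3.3)."""
    return 0.248 * t ** 0.67

def sigma_snow_needle(t):
    """Small needle-shaped snow crystals, Eqn (3.4)."""
    return 3.2 * t ** 0.91

def sigma_snow_plate(t):
    """Larger plate-like snow crystals, Eqn (3.5)."""
    return 1.3 * t ** 0.5

# Moderate rain at 4 mm/hr gives a scattering coefficient of ~0.63 km^-1:
print(round(sigma_rain(4.0), 3))
```

Note how much more strongly snow scatters than rain at the same liquid-water rate, consistent with the larger cross-section of snow crystals.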
CR/Co = 2 per cent    (3.6)
The contrast Co at the object plane, i.e., R = 0, may be
defined as

Co = (LO – LBO)/LBO    (3.7)
where LO is the object luminance, i.e., flux per unit solid angle per
unit area or intensity per unit area and LBO is the background
luminance. The contrast CR at the observation plane at a distance
R could be similarly defined as
CR = (LR – LB)/LB    (3.8)
where LR is the luminance in the observation plane and LB is the
background luminance in the same plane.
Equations (3.7) and (3.8) are interrelated through

LR = LO · e^(–σR)    (3.9)

LB = LBO · e^(–σR)    (3.10)
36 An Introduction to Night Vision Technology
where LO is the flux radiated by the object and LR is the flux received
at distance R, while σ is the attenuation coefficient over the same
path length.
These equations require to be modified to take into
account the luminance that is scattered into the line of sight by the
rest of the atmosphere. If this scattered luminance is Lin, the modified
equations are:
L_R = L_O·e^(–σR) + L_in                                    (3.11)

L_B = L_BO·e^(–σR) + L_in                                   (3.12)

Thus, we have

C_R = (L_O – L_BO)·e^(–σR)/L_B                              (3.13)

and multiplying by L_BO/L_BO we have, using Eqn (3.7),

C_R = C_O·(L_BO/L_B)·e^(–σR)                                (3.14)
If the object is viewed against the horizon sky, the
background remains more or less the same in both the object plane
and the observation plane. The above equation under such
conditions reduces to
C_R = C_O·e^(–σR)                                           (3.15)
For targets against a terrestrial background, Eqn (3.14)
can be remodelled[7] as

C_R = C_O·{1 + S·(e^(σR) – 1)}^(–1)                         (3.16)

where S is the sky-to-background luminance ratio.
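A minimal numerical sketch of Eqns (3.15) and (3.16); the value σ = 0.5 km⁻¹ (roughly light haze) and the sky-to-background ratio S are assumed examples. Setting C_R/C_O to the 2 per cent threshold of Eqn (3.6) also recovers the familiar visibility relation R = ln 50/σ.

```python
import math

def contrast_horizon(c0, sigma, r):
    """Eqn (3.15): apparent contrast against the horizon sky at range r (km)."""
    return c0 * math.exp(-sigma * r)

def contrast_terrain(c0, sigma, r, s):
    """Eqn (3.16): terrestrial background; s = sky-to-background ratio."""
    return c0 / (1.0 + s * (math.exp(sigma * r) - 1.0))

sigma = 0.5                               # km^-1, assumed haze value
print(round(math.log(50.0) / sigma, 2))   # ~7.82 km: range where C_R/C_O = 2 %
print(round(contrast_terrain(1.0, sigma, 5.0, 3.0), 3))   # ~0.029
```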
[Figure: Atmospheric attenuation coefficient (km^–1), on a logarithmic scale from about 0.01 to 5, for conditions ranging from fog, light fog, thin fog, haze and light haze through clear and very clear to exceptionally clear air; the Rayleigh-scattering limit corresponds to a visibility of about 310 km.]
Laser                      Wavelength (µm)
Nitrogen                   0.3371
Argon                      0.4880
Argon                      0.5145
Helium-Neon                0.6328
Ruby                       0.6943
Gallium arsenide           0.86
Neodymium in glass         1.06
Erbium in glass            1.536
Helium-Neon                3.39
Carbon dioxide             10.591
Water vapour               27.9
Hydrogen cyanide           337
to fire, missile smoke and for possible path lengths through smoke
and dust [10]. The software could also take into account scattering
due to clouds and fog.
[Figure: Natural illuminance (lux), on a logarithmic scale from about 10^–4 to 1, versus hours after twilight or before morning light, between sunset and sunrise; curves run from full moon with partial cloud (about 10^–1 lux) through clear and partially cloudy starlit skies (about 10^–3 lux) down to starlit nights under average to heavy cloud (about 10^–4 lux).]
[Figure 4.2: Spectral radiance (W/sq cm/steradian/µm), on a logarithmic scale, of moonlight (about 10^–8) and starlight (about 10^–9 to 10^–10) versus wavelength (µm).]

[Figure: Reduction factor relative to full moon versus moon phase in degrees; the factor rises from 1 at full moon to several hundred near a phase of 160°.]
4.1.2 Starlight
Starlight is really not evenly distributed over the sky,
because of the concentration of stars in the Milky Way releasing a
good amount of energy in the visible spectrum, as also due to the
selective spectral distribution of many stars. The intensity at zenith
and along the Milky Way is higher than elsewhere in the sky.
Nonetheless, we can take the approximate values for ground
illuminance in Table 4.1 as the average illumination. At the same
time, it will be observed from Fig. 4.2 that in starlight the relative
radiation content is greater in the near infrared than in the visible,
i.e., somewhat the opposite of the case with moonlight. Intensity in
each waveband of interest can also be reduced or altered by the type
of cloud cover, rain and fog. Scene brightness, as in all cases, is
dependent on the incident illumination and the scene reflectivity.
The radiance of a moonless clear night, i.e., a starlit night
sky, is composed of the following four components within the visual
wavelength range:
(a) Extragalactic sources 1 per cent
(b) Stars and other galactic sources 22 per cent
(c) Zodiacal light 44 per cent
(d) Airglow 33 per cent
While the contributions due to (a), (b), and (c) result in a
spectrum closer to that of a sunlit or a moonlit sky with appropriate
alteration in intensity values, the characteristic spectrum of a starlit
sky is due more to airglow. In addition to more or less intense lines
in some parts of the visible spectrum, the airglow also yields
increasing intensity in the near infrared, say up to 2.5 µm, beyond
which the thermal emission of the atmosphere begins to suppress it.
Hence an S-20 photocathode[1] with an extended red response, or an
S-25, correlates reasonably well with both moonlight and starlight
conditions and is the photocathode of choice in most image
intensifier tubes.
4.2 REFLECTIVITY AT NIGHT
Reflectance measurements made during the daytime are
equally applicable at night. However, these measurements assume
greater significance at night, since a low reflectance further reduces
the already small amount of light available in the environment.
During the daytime reflectance by itself is not so significant, as the
number of photons reflected is still large enough to be detected by a
vision system. No doubt contrast between the object and its
background continues to be an important factor at all levels of vision.
[Reflectance (%) versus wavelength, about 0.4–1.6 µm, for green vegetation, rough concrete and dark green paint.]
Figure 4.4. Percentage reflections from surfaces of military interest
[Figure: Reflectance versus wavelength (0.4–14 µm) for fresh snow, old snow, vegetation, loam and water.]
CHAPTER 5
OPTICAL CONSIDERATIONS
5.1 INTRODUCTION
While the contribution of a good optical designer is a
prime necessity in the successful design of an electro-optical night
vision system, the overall system constraints do lay down the basic
requirements for an optical system. Understanding of the user
requirements on the one hand and the technical possibilities and
limits of an optical designer on the other goes a long way in laying
down the basis of a successful design and forms one of the main
responsibilities of the system designer. These days we do have a
library of optical designs. Coupled with the availability of computers
and computer software to design and analyse a given or a modified
design from such a library, it may be possible to arrive at a desired
solution. Alternatively, it is also possible to arrive at a final solution
around a preliminary one that the designer might feel workable on
the basis of his experience. The analysis could be in terms of optical
transfer function, spot diagrams, Strehl definition, the Maréchal
criterion, wavefront aberration, or a classical geometrical optics
approach. The use of appropriate software on compatible computers
certainly helps a great deal in arriving at optimum designs,
drastically cutting down the computational time and the time for
decision-making.
Visible (target reflectance from natural sources)   0.75 to 0.4 µm   400–750 THz   Passive

Note: Passive imaging does not give away the observer position while active imaging can do so.
been developed which respond to the entire visible and NIR regions
right up to 1.2 m or so. The technological considerations for imaging
using such photocathodes are the same for photons available both
in the visible and the NIR. Maximum utilisation of the natural
night illumination is thus possible. Dioptric materials like optical
glass can also be used, though one would have to watch their
absorption characteristics, particularly in the NIR. This is not quite
so when we shift to detection and image forming in the infrared bands
of 3–5 µm and 8–12 µm, which are also atmospheric windows.
Detection is also passive in these windows as it is based on the self-
emittance of bodies in the environment, using appropriate quantum
detectors for the spectral bands concerned. The detecting area is
micron-sized as against quite a few mm or even cm in the case of
a photocathode. While an entire image can be focused onto a
photocathode, the quantum detectors referred to only see a very
small area of the field. In other words scanning techniques have to
be introduced to cover the required field of view. Series, parallel,
and series-parallel scanning is resorted to as the number of detectors
is steadily increased in an x-y format. The more recent development
of staring or matrix arrays can dispense with scanning altogether.
Thermal energy detectors are also being tried by using matrices of
micro-bolometers. Useful dioptric materials in these spectral ranges
are: zinc selenide, zinc sulphide, silicon, germanium and the like.
Metal mirrors and polygons are also in use for the scanning optics.
Appropriate coatings are necessary in all the cases. In still higher
wavebands, the techniques are no longer passive and call for
illumination of the object and analysis of its reflection. Picturising
an object scene utilising TV cameras, with subsequent transmission
and reception, involves a three-fold action, i.e., picturisation,
transmission and reception. While the picturisation aspect depends
on the region of the spectrum used and its corresponding cameras,
transmission may utilise the UHF band referred to in Table 5.1.
Reception amplifies and modifies the received signal into an
appropriate video signal for display on a cathode-ray monitor.
Picturisation is possible in the visible, visible and near-IR, and the
higher IR bands in a passive mode during day or night, utilising
appropriate objectives and sensors. Thus we have low light level
television (LLLTV) systems and thermal imaging (TI) systems
utilising appropriate instruments and detectors. Detection ranges
can be reasonably large, and transmission of such signals may not
be opted for; in other words, this approach offers an alternate
vision system.
[Figure 5.1: An optical system with object-space points Q, A, B and their conjugates A', B', Q'; the focal lengths f and f' and conjugate distances U and V are marked.]
space passes through the focus at point F ' which is the focus point
in the image space. Likewise, a parallel ray from the image space
passes through the focus F in the object space. Focal planes
can be defined as planes normal to the optical axis at the
focal points.
Thus, if the object is assumed to be at infinity or for
practical purposes at a reasonably large distance R, then its
image would be focused in the focal plane itself. In other
words, the conjugate points to all object points at infinity lie in
the rear focal plane. If now an object of linear size d_0 at infinity
subtends an angle θ at the optical system, we have in the
object space

θ = d_0/R                                                   (5.1)
It is of course assumed that the object is at quite a
large distance in comparison to the focal length, and that the
angle is small enough for tan θ to be replaced by θ. As the
optical system brings the rays from the object to the focal plane
at an image size d_i, the equivalent or effective focal length gets
defined in such a manner that

f' = d_i/θ                                                  (5.2)
Having defined the effective focal length value in these
terms, the transverse magnification m of the system can also be
defined as
m = d_i/d_0                                                 (5.3)
Combining Eqns 5.1, 5.2 and 5.3, we have
d_i = (f'/R)·d_0                                            (5.4)

m = f'/R                                                    (5.5)
These relationships are of interest to a system designer.
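Eqns (5.1)–(5.5) can be sketched numerically; the 2.3 m target size, 1 km range and 100 mm focal length below are assumed example values.

```python
def image_size_mm(d0_m, range_m, f_mm):
    """Eqns (5.1)-(5.4): image size of a distant object of size d0."""
    theta = d0_m / range_m        # Eqn (5.1), small-angle subtense in radians
    return f_mm * theta           # Eqn (5.2) rearranged: d_i = f' * theta

d0, R, f = 2.3, 1000.0, 100.0
print(round(image_size_mm(d0, R, f), 2))   # 0.23 mm in the focal plane
print(f / (R * 1000.0))                    # Eqn (5.5): m = f'/R = 1e-4
```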
As already stated, parallel beams from an infinitely
distant object are brought to a focus in the focal plane. While doing
so, the beams undergo deviation at each and every optical surface
of the assembly and then emerge from the last surface to come on
to the focal plane. Principal planes or surfaces are defined as the
unique imaginary surfaces from which these parallel beams could
have been singly refracted to come to the same focus. There are
two such surfaces in each assembly depending on whether the
parallel beam is incident from the object space (A’P’B’) or the image
space (APB). Their intercepts on the optical axis are the principal
points P and P'. These surfaces and points are indicated in
Fig. 5.1. The effective focal length (EFL) is defined by the distances
P'F' and PF. It will be observed that this definition tallies with the
definition as per Eqn 5.2, with P'F' = f'. Likewise, the nodal surfaces
and points are defined as the two imaginary surfaces, and their
intercepts on the optical axis, such that a ray incident towards a
point on the first nodal surface emerges without angular deviation
from the corresponding point of the second nodal surface. Thus, in
Fig. 5.1, the ray QN is transmitted parallel to itself as N'Q'. The
focal, principal and nodal points are referred to as the cardinal
points of an optical assembly or subsystem.
Back focal-length and front focal-length are measured
in terms of the distances from the rear and front surfaces to
their respective focal points. These measurements are important
while going in for the mechanical design of the subsystem. Other
important parameters for correct placements are the edge and
centre thicknesses of all the optical elements and their inter-
distances.
We may now proceed to define the field of view (FOV). The
FOV refers to the angle over which ray bundles are accepted from the
object space by the lens system. This angle is restricted by the field
stop in an image plane, which for distant objects is just the back
focal plane (Fig. 5.2). The field stop can be placed in any real image
plane in a relaying system of optics to give a sharp boundary to
the FOV.

[Figure 5.2: A field stop of linear dimension d_i in the back focal plane defining the full field angle FOV.]
It will be observed that
tan(FOV/2) = d_i/2f'                                        (5.6)

where d_i is the linear dimension of the circular field stop. The field
stop can also be rectangular, in which case the FOV will have two
different values in the corresponding perpendicular directions.
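Eqn (5.6) in numerical form; the 18 mm field stop and 27 mm focal length are assumed example values, loosely suggestive of an image intensifier photocathode behind a short objective.

```python
import math

def full_fov_deg(di_mm, f_mm):
    """Eqn (5.6): full field of view fixed by a field stop of size d_i."""
    return 2.0 * math.degrees(math.atan(di_mm / (2.0 * f_mm)))

print(round(full_fov_deg(18.0, 27.0), 1))    # ~36.9 degrees full FOV
```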
An aperture stop may also be introduced into an optical
system to physically limit the size of a parallel bundle that enters
it. Usually the aperture stop is the clear aperture of the front surface
but it can be anywhere based on the design consideration. The
image of the aperture stop in all the system elements preceding it
is called the entrance pupil and in succeeding elements it is the
exit pupil. Both the aperture and field stops are of importance, as
one limits the size of the parallel beam bundle and the other the
angle of entry of such bundles. The size of the parallel bundle
determines the brightness in the image, while entry at greater angles
requires much stricter control of aberrations. Further, at greater
angles of incidence, the entire beam may not find an entry to the
image plane, as a mismatch between the entrance pupil and the
field stop may limit its transmission through all the optical elements
of the system. This is referred to as vignetting, and it leads to a
greater loss of brightness towards the edges of the image field.
Vignetting becomes a serious problem in night vision systems.
Relative aperture or F number is also a relevant
parameter from the system point of view. It is defined as f'/D,
where D is the diameter of the entrance aperture (Fig. 5.3).
[Figure 5.3: A parallel beam from infinity, of diameter D, brought by the optical system through P' to the focus F' at a distance f'; the marginal rays are A' and B'.]
NA = sin θ/2 = D/2f'                                        (5.8)

and

F number = 1/(2·NA)                                         (5.9)
Both these values are of importance in objective systems,
as these decide the light gathering power of the system or its
throughput.
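A sketch of Eqns (5.8) and (5.9); the 50 mm focal length and 41.7 mm aperture are assumed values, chosen to give roughly F/1.2.

```python
def f_number(f_mm, d_mm):
    """Relative aperture f'/D."""
    return f_mm / d_mm

def numerical_aperture(f_mm, d_mm):
    """Eqn (5.8), small-angle form: NA = D / 2f'."""
    return d_mm / (2.0 * f_mm)

f, d = 50.0, 41.7
print(round(f_number(f, d), 2))                          # ~1.2
print(round(1.0 / (2.0 * numerical_aperture(f, d)), 2))  # Eqn (5.9) agrees
```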
In systems design, matching of the throughputs may be
quite essential particularly, where it is the intention to collect as
much light as possible and then be able to transfer it to the next
assembly in the chain without any loss. Obviously, at unit
magnification, all the subsystems should have the same numerical
aperture. Nevertheless, practical demands will have to be met where
some magnification is also desired. Referring to Fig. 5.4, it will be
observed that the numerical aperture in the object space is
sin θ/2 = D/2u                                              (5.10)

[Figure 5.4: An optical system imaging an axial object point at distance u to an image point at distance v, with beam diameter D and cone half-angles θ/2 and θ'/2.]

and in the image space

sin θ'/2 = D/2v                                             (5.11)
As the principal surfaces are segments of spheres
centered on the axial object and image points respectively, we
thus have

m (magnification) = v/u = sin(θ/2)/sin(θ'/2)                (5.12)
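Eqns (5.10)–(5.12) imply that for loss-free transfer at a magnification m the receiving stage must accept a cone of numerical aperture NA/m; a minimal sketch with assumed NA values.

```python
def magnification(na_object, na_image):
    """Eqn (5.12): m = sin(theta/2)/sin(theta'/2) = NA_object/NA_image."""
    return na_object / na_image

na_obj, m = 0.40, 0.5                  # demagnifying transfer, assumed values
na_img_needed = na_obj / m             # cone the next subsystem must accept
print(magnification(na_obj, na_img_needed))   # recovers m = 0.5
```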
[Figure: Normalized irradiance of the diffraction pattern of a circular aperture versus position in the image plane.]
OTF(ν_x, ν_y) = MTF(ν_x, ν_y)·exp[j·PTF(ν_x, ν_y)]          (5.15)

where ν_x and ν_y refer to spatial frequencies in the two imaging
directions of the image of an isoplanatic patch.
modulation reduction of the imaging system versus its spatial
frequency when a sinusoidal radiance pattern is imaged. For a
perfect imaging system, the modulation transfer function would be
unity at all the spatial frequencies of a sinusoidal radiation pattern.
However, as we will see, it cannot be so even for a perfect diffraction
limited optical system.
The normalized spatial frequency is defined as

u_n = u/u_c                                                 (5.20)
The cutoff frequency is that frequency at which the MTF
value is zero. Frequencies may be expressed in cycles per mm
(c/mm) or cycles per milliradian (c/mr), keeping due regard to the
units used for other parameters. There are several formulas for u_c.
The one relating directly to the Airy disc is given by

u_c (c/mm) = D/(1.22·λ·f')                                  (5.21)
[Figure 5.6: MTF versus normalized spatial frequency for an ideal diffraction-limited lens and for a lens with a quarter-wavelength of aberration.]
u_c (c/mr) = D/(1.22·λ)                                     (5.22)

where 1.22λ/D is half the angle subtended by the Airy disc at the
entrance pupil of diameter D.
Figure 5.6 shows the MTF values plotted against the
normalized spatial frequency u_n. It will be observed that the
diffraction-limited ideal performance curve is almost a straight line
which dips slightly towards the origin. Comparison has also been
made with a lens system that has a quarter-wavelength of
aberration[2]. Obviously, all real lenses will have their curves
between the origin and the line indicating diffraction-limited ideal
performance. As indicated earlier, one could now evaluate the MTF
curves for the objective and the image intensifier tube to arrive at
the combined MTF of the system. Nonetheless, experiments
following Johnson's criterion seem to prefer the square-wave spatial
frequency amplitude response, i.e., a bar chart, in practice defined
in line-pairs per mm. As discussed earlier in Chapter 2, this permits
a correlation between acquisition, recognition and detection, though
these values cannot be cascaded in the manner that is possible with
MTF values in respect of optics, image intensifier tubes, camera
tubes, video amplifiers and displays. Some workers have developed
calculating and graphical schemes to convert one set of values to
the other. The manufacturers of image intensifier tubes generally
give the data in terms of resolution in line-pairs per mm as also
normalised MTF values in line-pairs per mm.
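The Airy-disc form of the cutoff, Eqn (5.21), and the multiplicative cascading of subsystem MTFs mentioned above can be sketched as follows; the lens data and stage MTF values are assumed examples (note that the commoner cutoff expression 1/(λ · F number) omits the 1.22 factor).

```python
def cutoff_c_per_mm(d_mm, f_mm, wavelength_mm):
    """Eqn (5.21): spatial-frequency cutoff tied to the Airy disc radius."""
    return d_mm / (1.22 * wavelength_mm * f_mm)

def system_mtf(*stage_mtfs):
    """MTF values of optics, tube, amplifier, display cascade by multiplication."""
    product = 1.0
    for value in stage_mtfs:
        product *= value
    return product

lam = 550e-6                                      # 550 nm expressed in mm
print(round(cutoff_c_per_mm(41.7, 50.0, lam)))    # ~1243 c/mm for an F/1.2 lens
print(round(system_mtf(0.6, 0.5), 2))             # objective 0.6 x tube 0.5 = 0.3
```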
5.3 OPTICAL CONSIDERATIONS
It may be better to think of a night vision system as an
assembly of an objective, an image intensifier tube and an eyepiece
or a display system, rather than in the conventional manner as a
telescopic system, though it does behave like one and essentially
does the same task. The objective in this case is to collect as many
photons as possible from the night scene and concentrate them in
as small an area as possible, so that the intensity per unit area is
high enough to enable maximum excitation of the photocathode of
the image intensifier tube.
At the same time one has to reconcile these
requirements, with the FOV and overall magnification that may be
[MTF versus spatial frequency (line pairs/mm) for an F/2 double-Gauss TV objective, at field angles of 0°, 17° and 25°.]

[MTF versus spatial frequency (line pairs/mm) for an F/1.2 passive night periscope objective (50 mm, dioptric), at field angles of 0°, 2.5° and 8°.]
REFERENCES
1. Cox, Arthur. A System of Optical Design. (The Focal Press,
1964).
2. Melles Griot. Optics Guide 5.
3. Bouwers, A. Achievements in Optics. (New York: Elsevier
Publishing Company Inc., 1950).
4. Various Optical Designs and their Characteristics. (IRDE,
Dehradun).
CHAPTER 6
PHOTOEMISSION
6.1 INTRODUCTION
The need for detection of weak radiation signals both in the
visible and the infrared has, of necessity, led to the development
of quantum detectors. Quantum detection may be based on the
principles of photoemission or utilize solid-state devices in which
the excited charge is transported within the solid either as
electrons or as holes. Photoemission of electrons has been utilized
in image intensifiers (I.I. tubes), photomultipliers and the like, or
in general, in various vacuum or gas-filled tube devices for
different applications. Solid-state devices may be classified as
photoconductive or photovoltaic. These may be simple p-n
junctions, photocells, phototransistors, avalanche photodiodes,
p-i-n photodetectors, Schottky barriers, or quantum-well devices.
Photoemissive surfaces are possible in relatively larger sensitive
sizes.
6.2 PHOTOEMISSION & ITS THEORETICAL
CONSIDERATIONS
Materials (metals, metal compounds or semiconductors)
which give a measurable number of photoelectrons when light is
incident on them, form photocathodes in a vacuum tube
enveloping both cathode and anode in an electric circuit (Fig. 6.1).
The electrons emitted from the photocathode when the light is
incident are collected at the anode maintaining the flow of the
current as the anode is positively charged. As the anode potential
is increased, the current also increases which ultimately reaches
a saturation value beyond which further increase of the anode
potential is not helpful. This saturation value of the current is
proportional to the intensity of the light incident on the
photocathode. If the anode potential is now reduced, the current
value can be reduced to zero at a negative threshold potential. This
potential value is found to be dependent on the wavelength of the
incident radiation and not on its intensity.
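The wavelength dependence of the threshold potential is just the photoelectric relation eV = hν − φ; a sketch with an assumed work function of 2.0 eV (an example value only).

```python
# Photoelectric relation: e * V_stop = h*nu - phi
H = 6.626e-34         # Planck constant, J s
C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # electron charge, C

def stopping_potential_V(wavelength_nm, work_function_eV):
    """Retarding potential at which the photocurrent just vanishes."""
    photon_eV = H * C / (wavelength_nm * 1e-9) / E_CHARGE
    return max(photon_eV - work_function_eV, 0.0)

# Shorter wavelengths need a larger retarding potential; intensity plays no part
print(round(stopping_potential_V(400.0, 2.0), 2))   # ~1.1 V
print(round(stopping_potential_V(600.0, 2.0), 2))   # ~0.07 V
```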
[Figure 6.1: A photocathode and anode sealed in an evacuated glass envelope; incident light releases photoelectrons which are collected at the anode, the circuit being completed through a battery and galvanometer.]
[Figure 6.2: Response time (s) versus quantum yield (electrons/photon) for photocathode classes: metals (including pure alkalis) at low yield and fast response; composites, alloys, multialkalis and antimonides at intermediate values; and negative-affinity and transferred-electron cathodes, with a representative GaAs(Cs,O) point, at yields approaching 10^–1 with slower response.]
and the lowest conduction band states, so that electrons must have
sufficient energy above the conduction band minimum to suffer
electron-electron scattering. The dominant mode of scattering is
electron-phonon (lattice) scattering. Thus there is a relatively
larger escape depth for the photoelectrons. The quantum yield is
better, and the response time is relatively slower, in comparison
to that of metals.
The next type of photocathode to emerge was the
negative electron affinity (NEA) photocathode. In these
photocathodes the vacuum level is dropped below the conduction
band minimum, so that the electron affinity takes on a negative
value. The large response near threshold is due to the fact that
electrons which are inelastically scattered may escape even if they
thermalize into the bottom of the conduction band, i.e., in addition
to a fraction of photoelectrons escaping without losing their initial
energy, most of the electrons thermalize, diffuse to the surface,
and escape. By combining negative-affinity approaches with
structures that allow an internal potential to be applied across the
semiconductor nearest the surface, it is possible to extend the
response farther into the infrared (1.4 µm). These cathodes have
been referred to as transferred-electron (field-assisted)
photocathodes. Their response time is also faster than that of the
NEA photocathodes. Further research may lead to better yield,
still faster responses and further extension of the wavelength
beyond 1.4 µm (Fig. 6.2).
[Figures: Energy-band diagrams contrasting a conventional photocathode, with a positive electron affinity EA between the conduction band C and the vacuum level, and an NEA photocathode, whose vacuum level lies below C so that the effective affinity is negative; also a section showing light entering through a glass faceplate carrying a quarter-wavelength antireflection coating.]
[Figure: Quantum yield, on a logarithmic scale, versus wavelength for 1. Ag-O-Cs (S-1), 2. Cs3Sb (S-11) and 3. Bi-Ag-O-Cs (S-10) photocathodes.]

[Absolute sensitivity (mA/W), on a logarithmic scale, versus wavelength (100–1100 nm) for 1. Ag-O-Cs (S-1), 2. Cs3Sb (S-11), 3. Bi-Ag-O-Cs (S-10), 4. Na2KSb(Cs) (S-20), 5. ERMA/S-25 and 6. GaAs photocathodes.]
Figure 6.6. Absolute sensitivity in mA/W vs wavelength
CHAPTER 7
PHOSPHORS
7.1 INTRODUCTION
Luminescence refers to the emission of light by a
material induced by an external source of energy. It may be induced
by light which, after absorption, is reradiated in a different waveband,
termed photoluminescence, or by the kinetic energy of electrons,
termed cathodoluminescence. It could also be triggered by the
incidence of high-energy particles, applied electric fields or
currents, or chemical reactions. Luminescent technologies by now
embrace liquid crystal devices, gas panels and electroluminescent
panels besides the well-known cathode ray tubes. The success of
these tubes is mainly due to the high performance of modern-day
phosphor materials. The word phosphor, literally meaning light bearer,
refers to luminescent solids, mainly inorganic compounds
processed to a microcrystalline form for practical use of their
luminescent property. The earliest phosphors used the naturally
occurring Zn2SiO4 and CaWO4 as a thin powder on a mica substrate
to act as viewing screens. Usually phosphors are in powder
form, but they can also be used as thin films. Image intensifier
tube screens have borrowed from the phosphor developments for
use in cathode ray tubes. The luminescence we are concerned
with here is cathodoluminescence.
7.2 PHOSPHORS
Most phosphors are activated by the introduction of an
impurity of the order of a few parts per billion. This impurity, which
activates the phosphor, is known as the activator, while the phosphor
crystal itself is known as the host or matrix. The chemical formula
indicates the presence of the activator in the host crystal; thus
ZnS:Cu indicates ZnS as the host and Cu as the activator. In a
sulphide phosphor, a dopant from the VII-b group, i.e., the halogens
(chlorine, bromine, iodine), or from the III-b group (gallium,
aluminium), added in addition to the activator, is referred
[Figure: Configurational coordinate model, plotting energy against the configurational co-ordinate, with the ground-state curve G and an excited-state curve; transitions a to e between points O, A, B, C and D include a direct transition.]
E = E_0·{1 – (x/R)}^1/2                                     (7.1)
[Figure: Luminescence intensity (mW/cm2) versus applied voltage (0–12 kV) for a phosphor material under low- and medium-energy excitation.]
I_t = I_0·(1 + At)^(–n) ≈ I_0·(At)^(–n)                     (7.3)

where I_t is the intensity at a time t after termination of excitation,
I_0 the intensity under excitation, and A a constant. The exponent n
could be 1.1 to 1.3 according to a number of workers in the
field. A defect or an impurity which allows a charge carrier to linger
for a while before it reaches the luminescent centres gives rise to
trapping levels and leads to phosphorescence, i.e., an afterglow lasting
for more than 0.1 seconds (Fig. 7.5). The decay is prolonged by the
time the charged carrier spends in the traps. This time would be
dependent on the depth of the trapping centre in relation to the
conduction band and temperature, and would be inversely proportional
to the probability of non-radiative transfer between these levels. It
has also been reported that in some phosphors the decay is strongly
dependent on the duration of excitation, for example, ranging from
microseconds for short excitation to milliseconds for longer excitation.
Steady-state values are reached for longer exposures. In practice,
the specified decay value in a phosphor or a mix of phosphors has to
be such that it does not cause scintillations due to fast decay and at
the same time it does not cause multiple images of a moving object
resulting from a slow decay.
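The power-law decay of Eqn (7.3) can be sketched as below; I_0, A and the sampling time are assumed example values, with n = 1.2 inside the reported 1.1–1.3 band.

```python
def afterglow(i0, a, t, n=1.2):
    """Eqn (7.3): I_t = I_0 (1 + A t)^-n after excitation ends."""
    return i0 * (1.0 + a * t) ** (-n)

i0, a = 1.0, 1.0e3          # arbitrary intensity unit; A in s^-1 (assumed)
# The tail is long: 0.1 s after cutoff the intensity is still ~0.4 % of I_0
print(round(afterglow(i0, a, 0.1), 4))    # ~0.0039
```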
7.7 PHOSPHOR APPLICATIONS
Phosphors have found a large number of commercial
applications ranging from television screens to vacuum fluorescent displays,
[Figure 7.5: Trapping levels lying between the filled valency band and the empty conduction band; radiative transfer to the activator levels produces the emission.]
flying spot scanners, radar tubes, storage tubes and the like wherein
the selection of a phosphor or phosphors would depend on the
requirement and be decided in terms of phosphor grain size,
phosphor thickness, nature of emission and the decay time or
persistence of vision. Phosphors for applications like image
intensifier tubes, or electron microscopes call for high resolution
phosphors. The grain size has to be small to reproduce images
with high resolution; it cannot, however, be reduced too far, as
that decreases the luminous efficiency. The minimum size is
restricted practically to 2 µm. Green-emitting phosphors
are generally preferred for direct visual observation because of
their spectral match to the photopic human eye. Blue-emitting
phosphors are in use for photographic recording because of their
good spectral match to the silver-halide photographic films.
7.8 PHOSPHOR SCREENS
CRT screens usually have a phosphor weight of about
3–7 mg/cm2 on the glass surface. The phosphor particles may
be 3–12 µm in size and 2–4 particle-layers thick. The aim is to
maximize the emission intensity vis-a-vis the optical screen
weight. Image intensifier screens are usually built up on the
fibre-optics windows of the tube systems. Both in the case of CRTs
and the fibre-optics windows for I.I. tubes, the side on which
the electron beam impinges is coated with a thin aluminium
film. The film works as an electrode which prevents the screen
from charging negatively during excitation and thus increases the
output. Further, it also prevents the light generated in the screen
from feeding back to the cathode, and reflects that light forward
to increase the effective output. Applied voltages have to be
relatively high for the electrons to penetrate this aluminium film;
around 3 kV is the minimum estimated value for penetration
through an aluminium film of around 300 nm thickness. About
30 kV is applied for X-ray image intensifiers and somewhat lower
voltages, of the order of 9–16 kV, for image intensifiers in the
optical region. The screen thickness in the case of phosphors for
I.I. tubes may be of the order of 100 nm. Usually the green-emitting
phosphor ZnS:Cu,Al is preferred, with a particle size of around
2–3 µm, for image intensifiers, though the blue-emitting phosphor
ZnS:Ag has also been referred to. The emission peak of the green
phosphor at 530 nm can be shifted to longer wavelengths either
by employing a solid solution Zn1–xCdxS, or by introducing a
deeper acceptor level due to gold. Usually the exact parameters
of a phosphor or for
Figure 7.6. A section through a phosphor screen for I.I. tubes
8.1 INTRODUCTION
An image intensifier tube essentially accepts a photon
spread from a quantum starved scene below the visibility level
through an optical system on its photocathode. Such photons release
weak electrons which in turn are accelerated through an electron-
lens system and made to impinge on a phosphor maintaining
correspondence between the optical photon-spread on the
photocathode and the amplified optical output from the phosphor.
This amplified output from the phosphor can be coupled to an
eyepiece system for direct vision or to a video system for vision on a
monitor. Thus if hν1 is the energy of the photon incident on the
photocathode and hν2 is the energy of the output photon
corresponding to the electrons impinging on the phosphor, one could
indicate this double conversion as

hν1 (on photocathode) ———————> e–
e– (accelerated) ——————————> hν2 (from phosphor)

The range of ν1 which releases electrons from the
photocathode depends on its spectral sensitivity. Likewise, the limits
of ν2 are defined by the spectral emission characteristics of the
phosphor.
aspects have been well discussed in Chapters 6 and 7. The original
photon-spread focused on the photocathode is formed by suitable
optical systems as discussed in Chapter 5. Further, in modern image
intensifiers, the accelerated electrons are significantly multiplied to
increase the number of impinging electrons on a corresponding area
of the phosphor through the use of micro-channel plates.
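The double conversion is energetically unremarkable in itself, the gain coming from electron acceleration, and the output photon may even be more energetic than the input one. A sketch with assumed wavelengths (850 nm from the NIR-rich night sky, 530 nm at the green phosphor peak):

```python
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s

def photon_energy_eV(wavelength_nm):
    """E = h*nu = h*c/lambda, expressed in electron-volts."""
    return H * C / (wavelength_nm * 1e-9) / 1.602e-19

print(round(photon_energy_eV(850.0), 2))   # ~1.46 eV absorbed at the photocathode
print(round(photon_energy_eV(530.0), 2))   # ~2.34 eV emitted by the phosphor
```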
Historically, image intensifier tubes (I.I. tubes) have been
classified in terms of generations, based on the type of
photocathode utilized. Thus, in the 1940s, the zero generation
made its first appearance using the S-1 photocathode, wherein
artificial illumination in the near infrared beyond the visible
range was a definite requirement for proper functioning. The
systems matured over the next two decades or so and were
reasonably operative in the night environment till research led to
better photocathodes and to sensor development for detection of
light beyond the visible range. Interest in these systems was
particularly drawn when it came to be known that Russian tanks
could move about freely at night without any lights in the then
East Germany. The world over, armies built these systems, which
were soon to become obsolete on the advent of better
photocathodes. These systems nevertheless dominated the early
sixties and were built in India too, virtually in parallel with those
in the more advanced countries of the West. Generation-1 tubes
also started making their appearance in the sixties, based on alkali
photocathodes with sensitivities around 200 µA/lumen, which
could later be cascaded with each other through fibre-optic input
and output windows to enable reasonably higher gains.
The systems built around these tubes had no need for any
supporting artificial illumination as in the case of Generation-0. These
tubes are known as Generation-1 I.I. tubes. This approach proved to
be quite spectacular at the time of its introduction and research activities
were thus directed to the development of better and better
photocathodes and smarter techniques for amplification. It was soon
realised that the photon rate from a night sky incident on a
photocathode through a suitable optical system is greater by 5–7 times
in the 800–900 nm region as compared to that in the neighbourhood
of 500 nm. The output signal could thus be significantly improved, if
the photocathode is also red-sensitive. This brought in the more sensitive
S-25 or ERMA (extended red multi-alkali) photocathodes for use in I.I.
tubes in preference to the standard S-20. This development coupled
with the technological development of micro-channel plates (MCPs) to
increase the number and energy of impinging electrons on the phosphor,
brought in Generation-2. The military significance was all the more as
it not only increased the sensitivity and hence the night vision range of
the systems designed around it, but it also drastically reduced the weight
as one could now substitute a single diode Generation-2 tube for a three-
stage Generation-1 with better results. Proximity tubes without an
electron-lens but with a MCP compacting it further and with a further
reduction of weight could also be produced for a number of applications.
Systems based on Generation-2, I.I. tube have been produced in large
numbers within the country and these could withstand tough
competition from contemporary production of the West. Generation-1
systems produced earlier were also upgraded. Meanwhile a good
theoretical understanding of the photocathode physics has led to the
development of Negative Electron Affinity photocathodes, further
pushing the sensitivity values up to an order of 1000 μA/lumen or more.
Image intensifier tubes 107
[Table 8.1, Remarks row — Generation-0: active type (requires illumination); Generation-1: high performance (10⁻³ lux), but with (a) risk of blooming and (b) image distortion; Generation-2: lighter tube, high performance (10⁻³ lux), anti-blooming; Generation-3: very light tube, (a) visibility down to 10⁻⁴ lux, (b) strong sensitivity, (c) spectral response from 0.6 μm to 0.9 μm]
[Figure: I.I. tube schematic — labels: photocathode, glass envelope, phosphor]
The success of cascaded tubes followed the development of the fibre-optics fused faceplates and their use as input and output
windows. Later, Generation-2 tubes became a success due to the
introduction of micro-channel plates. Introduction of fibre-optic
twisters in the proximity I.I. tubes of Generation-2 was a further
advancement. The contribution of fibre-optics components has
therefore been quite important to the continued use of the I.I. tubes
for night vision. All the three components, i.e., fibre-optics faceplate,
hollow fibre micro-channel plates and fibre-optics twisters continue
to be used for some purpose or other either singly or in combination
in modern day I.I. tubes.
8.2.1 Concepts of Fibre-optics
Though it is not possible to deal with the subject of Fibre-
Optics in detail within the confines of this volume, a brief introduction
to understand some of the concepts relevant to the functioning of
fibre-optical components for use in I.I. tubes may be necessary[2].
Conduction of light along cylinders by multiple total internal
reflections has been known for quite some time. However, it was
only in the early fifties, when glass-coated glass fibres made their
appearance, that there was a technological quantum jump. Earlier,
uncoated fibre in air used to get contaminated very easily and did
not provide a proper interface for multiple total internal reflections.
Techniques of fabrication of multiple-fibres subsequently led to the
successful manufacture of the fused fibre-optics faceplates. The term
Fibre-Optics was first introduced by Kapany. Figure 8.2 shows the
path of an optical ray through a glass-coated glass fibre. Rays after
refraction from the entrance face strike at the interface of the core
and the cladding. All the rays which strike at the interface at an
angle equal to or greater than the critical angle get trapped within
the core of the fibre and are thus transmitted to the exit-end.
n_a sin θ = n_c [1 − (n_cl/n_c)²]^½ (8.1)
where n_a is the refractive index of the medium from which the light
is incident on the fibre, i.e., air or vacuum, n_c is the refractive index
of the core of the fibre and n_cl that of the cladding. Angle θ is the
angle of incidence (Fig. 8.2). Equation 8.1 shows that n_a sin θ will
tend to be a maximum if the core refractive index is higher and
the cladding index is lower. Maximizing this value enables a greater
acceptance angle of the incident beam. This angle can be maximized
to 90° with suitable selection of refractive index values for the core
and the cladding; n_a can be unity. In other words, the optical fibre
can transmit all the light that is incident on it, which is not quite
true of an optical system. To attain this sort of working from an
optical system, a lens system with an aperture ratio of F/0.5 would
be required! It will be observed that the factor n_cl/n_c is also the
sine of the critical angle θ_cr at the interface of the core and
the cladding. If the angle of refraction at the entrance face is θ_c,
then if the angle (90° − θ_c) is equal to or greater than the critical
angle θ_cr, the ray will remain trapped within the core and undergo
multiple reflections till it reappears at the exit end (Fig. 8.2). The
other aspect is that the light incident on an optical fibre, received
all over its maximum acceptance angle, is somewhat averaged out by
multiple reflections by the time it reaches the output end.
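Eqn 8.1 can be exercised numerically. A small sketch, where the refractive index pairs are assumed for illustration and are not from the text:

```python
import math

def acceptance_angle_deg(n_core, n_clad, n_ambient=1.0):
    """Maximum acceptance half-angle from Eqn 8.1:
    n_a * sin(theta) = n_c * sqrt(1 - (n_cl / n_c)**2)."""
    na_sin = n_core * math.sqrt(1.0 - (n_clad / n_core) ** 2)
    if na_sin / n_ambient >= 1.0:
        return 90.0  # the fibre accepts the whole incident hemisphere
    return math.degrees(math.asin(na_sin / n_ambient))

def critical_angle_deg(n_core, n_clad):
    """Critical angle at the core/cladding interface, sin(theta_cr) = n_cl/n_c."""
    return math.degrees(math.asin(n_clad / n_core))

# Illustrative index pairs (assumed, not from the text):
print(acceptance_angle_deg(1.81, 1.48))           # high-NA pair: 90.0
print(round(acceptance_angle_deg(1.62, 1.52), 1)) # modest pair: ~34 degrees
```

The first pair shows the case discussed in the text where n_a sin θ reaches unity and the full hemisphere is accepted; the second gives a more ordinary acceptance cone.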
8.2.2 Fibre-optics Faceplates
If a large number of such optical fibres are packed
together in parallel over a short distance of the order of a few mm,
the result is a fused fibre-optics faceplate.
Figure 8.5. A sectional view through a Gen-1 cascade system
The glasses chosen for the rods and tubes must be free from bubbles and seeds. The glass types are also so chosen that
they are compatible both thermally and chemically. Obviously, both
the rods and tubes must have thoroughly clean and smooth
surfaces before being snug-fit and placed in a drawing machine.
The drawn fibre is dipped through a dark solution of a ceramic
material which provides an absorption coating also known as
extramural absorption coating or EMA. To ensure precise diameters
of the output fibres, the thermal gradient in the furnaces, the rate
of sliding the rod-in-tube combination into the furnace, and the
rate of drawing the output single fibre are controlled very critically
and effectively. A proper calibration of the drawing and furnace
equipment is essential before good results can be expected. The
nominal diameter of the output single fibre is not allowed to
vary by more than a few per cent of its value, to maintain
excellent uniformity. The nominal diameter of single fibres may be
from 0.5 mm to around 3 mm. The exact diameter is decided by the
nature of the materials and the equipment that has been used,
as also the equipment that will be used to produce multiple fibres.
The single fibre is usually cut into short lengths, say of the order
of 250 mm or more. These cut single fibres are then grouped and
aligned in graphite moulds of usually hexagonal or square cross-
section. Alignment is fully assured manually or through utilization
of appropriate jigs and fixtures. This mould is next raised to a
temperature corresponding to the softening point of the fibre coating
material to accomplish tacking between the single fibres. This group
of single fibres is then redrawn, after appropriate annealing, into
multiple fibres using the same or similar drawing and furnace
equipment.
The drawn multifibres are cut to right lengths and aligned
in a suitable jig. High quality fusion between the multiples is
ensured by controlled heating to the softening temperature of the
coating material and by appropriate pressure. This is followed by
annealing to eliminate strain or inhomogeneities in the composite.
The boule so formed can be sliced in appropriate thickness to form
the component fibre-optic plates. Both surfaces of a disc or plate
so available need to be polished and surfaced, as per the
requirements to form suitable faceplates. Needless to say, control
and testing have to be adopted at each stage, for optical and
mechanical control with precise instrumentation, besides ensuring
complete vacuum tightness. The degree of cleanliness while drawing,
fusing, sawing, surfacing and polishing has also to be of a very
high order so as to obtain maximum efficiency from the finished
product. It has also to be ensured that the materials used for the
faceplates are compatible with the subsequent tube processing.
[Figure 8.6. A micro-channel plate — labels: micro-channel, electron avalanche, voltage (gain control), bias angle (10–15°), ion trap, secondary electrons, input electrons, output electrons (G × input electrons)]
g ≈ t δ^N (8.2)
where N is the total number of collisions that take place, δ the secondary-emission coefficient at each collision and t the fraction of the input electrons entering the channels.
For a given diameter, the number of collisions is dependent on the
direction of the incident electron and the length of the micro-channel.
Noting that the diameter of the micro-channel has to have an optimum
value from the point of view of optical resolution or MTF, the
parameters that can be varied to improve on the gain are:
(a) Increasing the potential gradient to further accelerate the
electrons in the channel, thereby increasing the δ value.
(b) Increasing the value of N, i.e., the number of collisions. This
suggests an increase in the length-to-diameter (l/d) ratio and
a steeper angle of entry for the incident electrons.
It has been stated that for MCPs with 15 μm centre-to-centre
hollow fibres, the gain roughly doubles for every increase of 50 V.
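The doubling-per-50-V behaviour can be sketched as a simple exponential gain curve; the baseline voltage and baseline gain below are assumed values for illustration, not figures from the text:

```python
def mcp_gain(voltage, base_voltage=700.0, base_gain=1000.0,
             volts_per_doubling=50.0):
    """Empirical MCP gain curve: the gain roughly doubles for every 50 V
    increase (the baseline voltage and gain here are assumed values)."""
    return base_gain * 2.0 ** ((voltage - base_voltage) / volts_per_doubling)

print(mcp_gain(700.0))  # baseline: 1000.0
print(mcp_gain(800.0))  # +100 V, i.e. two doublings: 4000.0
```

This is why the applied MCP voltage serves directly as the gain control indicated in Fig. 8.6.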
Figure 8.7. Scheme of electrode systems for (a) aperture lens, (b) bipotential lens, and (c) unipotential lens (U₁, U₂ refer to potential values).
Since the electron-optical refractive index is proportional to the square root of the potential (n₁ ∝ √U₁, n₂ ∝ √U₂), the focal lengths on the two sides of an electron lens are related by
f₁/f₂ = n₁/n₂ = (U₁/U₂)^½ (8.5)
A lens system consisting of two or more electron lenses
can thus be defined on the optical pattern, to form effective electron-
optic devices.
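As a numerical illustration of Eqn 8.5, where the two potentials are assumed values only:

```python
import math

def focal_length_ratio(u1, u2):
    """f1/f2 = n1/n2 = sqrt(U1/U2), since the electron-optical refractive
    index varies as the square root of the potential (Eqn 8.5)."""
    return math.sqrt(u1 / u2)

# Illustrative potentials (assumed): 1 kV object space, 16 kV image space
print(focal_length_ratio(1000.0, 16000.0))  # 0.25
```

A large potential step across the lens thus behaves like a strong refractive-index step in ordinary optics.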
8.4 GENERAL CONSIDERATIONS FOR IMAGE
INTENSIFIER DESIGNS
A typical electron image intensifier may employ a two lens
optical system[4]. The first lens besides focusing must also
accelerate the photoelectrons. The field of the first lens must thus
be extended to the photocathode, so as to collect and accelerate
all emitted electrons, i.e., the cathode is immersed in the field
originating from the potential forming the first lens. This means
that the object is immersed in the field as if in a medium of refractive
index n corresponding to the square root of the potential forming
the lens in the object-space. Such a lens is also known as an
immersion objective. It is essentially a bipotential lens. This may
be coupled to an aperture to form a complete system. The second
lens helps in the control of divergence and assists in reducing
aberration characteristics (Fig. 8.9).
As shown in the figure, the photocathode is immersed
in the objective field. A diaphragm is provided near the crossover
formed by the immersion objective. The second lens transferring
the image to the screen is formed between the first and the second
[Figure 8.9 — labels: immersion lens, anode cone]
G_p = (light flux emitted by the screen, φ_s) / (light flux incident on the photocathode, φ_c) = η U k_p (8.6)
Thus, the gain is higher if the screen efficiency, the accelerating
potential and the photocathode sensitivity are higher. No doubt there
would be limitations due to noise and dark current in the system
and system components. The above is applicable if both the input
object size and output image on the phosphor are of the same size.
In case the I.I. tube has a magnification mi, the image would be
spread over an area m i2 times the area of the image on the
photocathode. Thus, we have
G_p = η U k_p / m_i² (8.7)
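Eqns 8.6 and 8.7 can be tried with round numbers; the phosphor efficiency, voltage and sensitivity below are illustrative assumptions, not values from the text:

```python
def luminous_gain(eta_lm_per_w, voltage_v, kp_a_per_lm, mi=1.0):
    """Theoretical gain G_p = eta * U * k_p / mi**2 (Eqns 8.6 and 8.7):
    eta is the phosphor efficiency (lm/W), U the accelerating potential (V),
    k_p the photocathode sensitivity (A/lm) and mi the tube magnification."""
    return eta_lm_per_w * voltage_v * kp_a_per_lm / mi ** 2

# Assumed illustrative values: 20 lm/W, 12 kV, 200 uA/lm, unit magnification
print(luminous_gain(20.0, 12000.0, 200e-6))  # ~48
```

A single-stage gain of this order explains why Generation-1 tubes were cascaded in three stages before micro-channel plates arrived.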
The gain of an image intensifier is usually expressed in
cd/m2/lx as a ratio of the output brightness in nits (candelas/sq m)
to the input illuminance in lux (lumens/sq m). This measurement
is usually done at a colour temperature of 2854 K at an appropriately
low input light level, say 20 μlx. Its equivalence to the
theoretical value is given by Eqn 8.7, where the phosphor
efficiency η is in lumens per watt, U the applied voltage and k_p the
photocathode sensitivity in amperes per lumen. As the
photocathode sensitivity at different wavelengths has a different
value, the composition and magnitude of the light stimulus to the
photocathode has to be standardized to give a consistent value for
the gain. This also helps in comparing I.I. tubes from different
manufacturers or in the same lot. Extending to incorporation of
an I.I. tube in an instrument system and defining the total luminous
gain G in terms of the object brightness B0 in a scene that is being
imaged through an optical system we have
G = (brightness of the image on the screen, B_s) / (brightness of the object, B_o) = B_s/B_o (8.8)
Referring to Fig. 8.10, we have the relationships between the heights h_o, h_c and h_s corresponding to object, photocathode and phosphor (screen), and, for the cone of light accepted by the objective of diameter D and focal length f,
sin² θ_c = D² / (D² + 4f²)
φ_s = G_p φ_c = G_p B_o s_c τ D² / (D² + 4f²) (8.13)
As the screen (phosphor) area corresponding to s_c would
be given by s_c m_i², we have the screen brightness B_s given by
B_s = φ_s / (s_c m_i²) = G_p τ [D² / (D² + 4f²)] (1/m_i²) B_o (8.14)
[Figure 8.10 — labels: objective (optical system), photocathode, ocular system, eye; h_o, h_c, h_s, θ_o, θ_c, D, R, f]
G = B_s / B_o = G_p τ D² / [(D² + 4f²) m_i²] (8.15)
This may be put in the form
G = 0.25 G_p (1/F)² (1/m_i²) τ (8.16)
where F is the F-number (aperture ratio) of the objective.
The numerical value 0.25 in Eqn 8.16 is really much
closer to the more exactly calculated values for aperture ratios of
1:5 or slower. Its value changes more rapidly at faster F-numbers.
Thus at an aperture ratio of 1:1 it would be 0.20 and at an aperture
ratio of 1:2 it would be 0.235. Nevertheless, the variation in the
numerical value of 0.25 for different apertures is not so significant:
a system with an aperture ratio of 1:1 is 25 times faster, and a
system with an aperture ratio of 1:2 more than six times faster,
than one with an aperture ratio of 1:5, against changes of only
20/25 and 23.5/25 in the more exactly calculated value of 0.25.
To maximize the overall gain, the second term in this
equation Gp should be as large as possible. Referring to Eqn 8.6,
this would mean that the screen efficiency, accelerating potential
and the photocathode sensitivity should be as high as possible.
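The 0.20 and 0.235 figures quoted above follow directly from replacing the approximation 0.25/F² by the exact aperture factor D²/(D² + 4f²). A short check:

```python
def exact_aperture_factor(f_number):
    """D^2 / (D^2 + 4 f^2): with F = f/D this reduces to 1 / (1 + 4 F^2)."""
    return 1.0 / (1.0 + 4.0 * f_number ** 2)

def effective_constant(f_number):
    """The number that replaces 0.25 in Eqn 8.16 at a given F-number."""
    return exact_aperture_factor(f_number) * f_number ** 2

for f in (1.0, 2.0, 5.0):
    # prints 0.2, 0.235 and 0.248 respectively
    print(f, round(effective_constant(f), 3))
```

At F/5 the constant is already within one per cent of 0.25, which is why the approximate form of Eqn 8.16 is adequate for slower apertures.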
Factors relating to phosphor (screen) and photocathodes have been
well discussed in the related chapters earlier. Obviously, the
accelerating potential and the overall design have to be such as to
add minimal noise and not to overbrighten the phosphor. As the
size and weight of I.I. tubes is also a major consideration, suitable
power supplies (wrap-around) have also been developed with
automatic brightness control. As resolution of a high gain noise
limited I.I. tube is primarily limited by the finite number of
photoelectrons released by the photocathode, it is advisable to
design instrument systems which detect at the required distance
well above this limitation. This means that for an excellent tube,
this has to be done primarily by having as large an aperture as
permitted by various design restrictions so that more of the light
flux from an object scene is concentrated on the photocathode. Thus
the physical value of D² is important for a practical application.
The third term in Eqn 8.16 signifies the need for a faster and
faster aperture ratio. The fourth term suggests a minification by
the tube, i.e., the screen size should be smaller than the
photocathode size. Such tubes have also been designed particularly
where the overall magnification presented to the eye is around unity
and larger field of view is a requirement. The final term suggests
that the transmission factor of the optical system should be as high
as possible, i.e., the objective lens surfaces should be properly
coated for the spectral range to which the photocathode is sensitive.
Likewise, the eyepiece lenses should be coated to maximise
transmission in relation to the nature of the output spectrum from
the screen (phosphor)[5]. Further relevant optical considerations
have been referred to in Chapter 5. As discussed therein,
considerations for total field of view and overall magnification are
significant.
8.5 IMAGE INTENSIFIER TUBE TYPES
Further to the historical development as discussed in
paragraph 8.1, and parametric details in Table 8.1, we may now
discuss the types of tubes that have evolved so far.
8.5.1 Generation-0 Image Converter Tubes
These tubes, referred to also as image converter tubes, have
an Ag-O-Cs photocathode with an S-1 response (Fig. 6.6). The
phosphor could be a typical P-20 type. The acceleration voltage in these
tubes is of the order of 10–15 kV. To improve the brightness, the screens
of these tubes can be aluminised. This also eliminates optical feedback.
[Figure 8.11 — labels: photocathode (cathode), cathode aperture, anode cone, after-acceleration potential rings, screen]
There were some developments in these types before the arrival of more
sensitive photocathodes covering both the visible and near infrared regions.
There was the development of multi-slot photocathodes with higher
sensitivity in the longer wavelength regions. Image converter tubes were
also produced using further after-acceleration of the electrons near
the screen[4] (Fig. 8.11).
8.5.2 Generation-1 Image Intensifier Tubes
Generation-1 tubes started making their appearance in
early sixties and had an S-20 photocathode (See Fig. 6.6) coupled
through a two electrostatic lens system to a phosphor screen,
usually P-20. The lens system more or less followed a similar
pattern as in Generation-0 with an aperture-cone electrode
combination. As the gain was not that high, it was thought
expedient to cascade these tubes either internally or externally.
As stated earlier, these efforts were only partially successful and
resulted in cumbersome and expensive designs (Fig. 8.1 and para
8.2). A cross-sectional view of a single Generation-1 tube is shown
in Fig. 8.12. Earlier versions of these tubes used S-20
photocathodes and later on stabilized to the use of S-25
photocathodes with a P-20 phosphor. As the fused fibre faceplates
made their appearance, their incorporation in a Generation-1 tube
made cascading relatively easier, effective and economical.
Figure 8.5 shows a sectional view through a cascaded Generation-1
[Figure 8.12. Sectional view through a Generation-1 tube — labels: photocathode, cathode glass wall, cathode aperture cylinder, cathode cone, anode, phosphor screen, fibre-optic plate, image of scene, intensified image, +15 kV]
tube where three tubes have been cascaded. Refer also para 8.2.2.
Gains have been measured to be in excess of 30,000 and may be in
a range of 50,000 to 100,000. Three-stage systems are also suitable
for incorporation of automatic brightness control particularly as
the system is rather sensitive to supply voltage variations and
ripples. Generation-1 tubes, like later generations, have been
standardized to 18 mm and 25 mm diameters for both the
photocathode and the phosphor, thus operating at unit
magnification, keeping a variety of applications in view. Tubes with
40 mm photocathodes are also in the market for specific
applications. The resolution of the Generation-1 tubes is dependent
on good electron-optical systems as also on the grain structure of
the phosphors of the screens. As is obvious, the second and third
tubes in a cascade pick up the input from the phosphor screens
and may progressively degrade the overall resolution. Excellent
manufacturing and phosphor deposition or coating techniques are
therefore called for. A moving object when seen may give rise to a
smear. It could also cause blooming when viewing a bright object.
8.5.3 Generation-2 Image Intensifier Tubes
The second Generation tube is a combination of a single-
stage I.I. tube of Generation-1 coupled internally to a micro-channel
plate. The photocathode is highly improved and is of the S-25 type
and has an extended red response. The micro-channel plate has
been discussed above in paragraph 8.2.3. A section through a
Generation-2 tube is shown in Fig. 8.13. Thus a high gain single-
stage image intensifier with a better photocathode uses an
electrostatic lens system to impinge electrons on the input of the
micro-channel plate. These electrons, after intensification as shown
in Fig. 8.6, are proximity focused on the phosphor screen. The
electrostatic lens system has to be such as to produce a flat image
at the input of the MCP. Thus the normal electrode system of a
spherical cathode and a conical anode with an aperture is
augmented by a distortion correction ring (a sheet cylindrical
electrode) before the electrons impinge on the MCP. Impinging
electrons can also generate positive ions which may travel back
and reduce the life and efficacy of the photocathode. A positive ion
barrier is obtained by placing the input of the MCP at a lower beam
potential than the anode cone potential. A thin ion-barrier film could
also be deposited on the input face of the MCP. It could trap some
of the incoming electrons also, and prevent the re-entry of electrons
that rebound from the solid edges of the MCP channels on the input
face. This tube has many advantages over the Generation-1
cascaded tube. It achieves the same order of lumen amplification
as the three-stage cascade.
[Figure 8.13. Section through a Generation-2 tube — labels: photocathode (S-25, ERMA) at −2500 V, cathode shield, input fibre-optic faceplate, ceramic MCP body, MCP input at −900 V, MCP output, phosphor screen at +6000 V, output fibre-optic faceplate]
D = 4l (U_o/U_ac)^½ (8.17)
where, for a proximity-focused gap, l is the gap length, U_o the initial energy of the emitted electrons (in volts) and U_ac the accelerating potential.
[Figure — proximity-focused wafer tube: input fibre-optic faceplate, photocathode (S-25, ERMA), micro-channel plate, ceramic and metal body, phosphor screen, output fibre-optics inverter]
Such wafer tubes are well suited for the design of night vision goggles and night-sights for small arms.
These have found a great application area in avionics also. Their
freedom from distortion and uniform resolution over the entire
picture area make them more suitable for biocular or binocular
applications. These tubes are also referred to as double proximity
focused wafer tubes because of the image transfer through the MCP
which is in proximity both to the photocathode and the screen and
immersed in a horizontal field.
8.5.6 Generation-3 Image Intensifier Tubes
Image intensifier tubes utilizing Generation-3 photocathodes,
i.e., NEA photocathodes such as caesiated GaAs, and improved MCPs
are generally referred to as Generation-3 I.I. tubes. Their sensitivity
at much lower light levels makes them eminently suitable for
incorporation in low light level systems, particularly night vision
goggles. The gallium arsenide (GaAs) caesiated photocathode, the
photocathode of choice for Generation-3 tubes, is an excellent
compromise for low dark current and good infrared detection. The
photon rate is around five to seven times greater in the region 800-
900 nm than in the visible region say around 500 nm. However, this
photocathode requires protection from bombardment by gas-ions
released from the channels of the MCPs, as otherwise it would get
rapidly destroyed. To avoid this effect, a thin ion-barrier film may be
deposited on the entrance face of the MCP to trap gas-ions (Fig. 8.6).
This film may however trap some of the incoming electrons also. A
very high level of vacuum in the tube during processing also helps
in minimizing the ion content.
[Figure — Generation-3 tube: cathode faceplate (fibre-optic), GaAs photocathode, Si₃N₄ coating to improve photocathode output, micro-channel plate, phosphor screen, light-absorption media]
CHAPTER 9
NIGHT VISION INSTRUMENTATION
9.1 INTRODUCTION
The image intensifier (I.I.) based instrument systems
developed so far have been of significant use in night time
observation and navigation, primarily on land and from helicopters.
The need for night time use to direct fire on enemy targets by the
infantry, artillery and the armoured corps has resulted in a series
of instruments for each specific application. It is therefore obvious
that the instrument systems are likely to have optical
characteristics like the field of view, magnification, etc., similar to
those in use during daylight for observation, navigation and fire
control. Reticles would also be required to be introduced for proper
laying and engagement, and thus match the weapon capabilities
as accurately as possible. The methods of mounting on or in the
weapon system are also of great concern. Besides, like all other
types of military instruments these instruments have to withstand
climatic and environmental tests as may be laid down for
instruments in the daylight category for a given weapon system.
These requirements would be both for use and in storage. The
criteria for acceptance of I.I. tubes laid down in military
specifications, as also for the acceptance of I.I. based instrument
systems, include these aspects in detail.
Further, as we are aware of the limitations of the human
eye, the environment and night conditions, the technology rests
mainly on optical considerations and the I.I. tubes.
Image intensifier tubes in turn are dependent on photocathodes,
electron amplification and phosphors. The instrument system
as a whole is therefore an integration of all the above factors.
Nonetheless, the success of an instrument would depend on overall
considerations intended for the satisfaction of a user. Apart from
field of view, magnification and the mechanical limitations that it
may have to satisfy, it is obvious that the user would be interested
in the distance that such a system can see during the night, i.e.,
the night range. This is an important parameter of the system which
cannot be predicted from any single subcomponent and will also be
dependent on the night time conditions. Theoretical and
experimental prediction about the night range is therefore an
important parametric requirement.
9.2 RANGE EQUATION
Various paradigms have been developed from time
to time to arrive at a possible range value during night time.
One such paradigm[1] has been explored here more to illustrate
the factors on which range is dependent and to indicate the
possibilities for optimisation. This theoretical approach gives a
reasonable basis but it is still necessary to evaluate a given system
under standard night time conditions so that the range in the
field can be more or less estimated to a reasonable degree of
accuracy. It would still be an estimate even when tried out in
practice in the field as the field conditions are not likely to remain
standard all through the measurements.
If we take an object dimension of Z m at a night range
of R m and assume that N line-pairs at spatial frequency Ak in
line-pairs per mm are required to detect it at the photocathode
we have,
Z/R = (N/A_k)/F (9.1)
where F is the focal length of the objective in mm.
This relationship though geometrically true requires to
be investigated further for the practical value that R can attain for
a given night vision system.
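Rearranged, Eqn 9.1 gives a purely geometric upper bound on range. A sketch with assumed numbers (the target size, focal length and photocathode resolution below are illustrative, not from the text):

```python
def geometric_range_m(target_size_m, focal_length_mm, ak_lp_per_mm,
                      n_line_pairs):
    """Rearranging Eqn 9.1, Z/R = (N/Ak)/F, gives R = Z * F * Ak / N.
    Purely geometric: light level, contrast and noise are ignored."""
    return target_size_m * focal_length_mm * ak_lp_per_mm / n_line_pairs

# Assumed values: 2.5 m target, 100 mm objective, 30 lp/mm usable at the
# photocathode, one line-pair across the target for detection
print(geometric_range_m(2.5, 100.0, 30.0, 1.0))  # 7500.0 m, an upper bound
```

The practical range is far smaller, for the photometric reasons developed in the rest of this section.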
If we now concentrate on a detail of area a in the
image at the photocathode, corresponding to the object detail Z²,
as the minimum area of detection, and assume a bar chart as an
object in both the x and y planes of the image, where N/A_k is the
minimum resolution in each direction, i.e., x and y, we have in a
rotationally symmetric optical system:
a = (N/A_k)·(N/A_k)·10⁻⁶ m²
or
a = (N/A_k)²·10⁻⁶ m² (9.2)
Assuming a photon flux of n₁ photons from the object
per sq m on the detail, then over an integration time of t seconds
the number of photons collected from the detail is
n₁at (9.3)
and correspondingly, for a background flux of n₂ photons per sq m,
n₂at (9.4)
From the above the signal S can be defined as
S = C(n₁ + n₂)at (9.5)
where C is the contrast. The associated photon noise, with
n̄ = (n₁ + n₂)/2, is
[(n₁ + n₂)at]^½ = (2n̄at)^½ (9.6)
so that the squared signal-to-noise ratio is
(S/N)² = 2C²n̄at (9.7)
For detection at a threshold signal-to-noise ratio p, this requires
2C²n̄at = p² (9.8)
Expressed through the photocathode response, an illuminance E_c on
a photocathode of sensitivity K_p produces a photoelectron rate of
E_c·K_p/e electrons/s/m² (9.9)
where e is the electron charge, e = 1.60×10⁻¹⁹ C.
Further, if the noise power factor of an I.I. tube is defined by
f = (signal-to-noise ratio of the photoelectrons)² / (signal-to-noise ratio of the output scintillations)² (9.10)
the effective number of photoelectrons available for detection is
E_c·K_p/(e·f) per s/m² (9.11)
2C²·E_c·K_p·a·t/(e·f) = p²
and further substituting for a from Eqn 9.2, we have
E_c = [f/(C²·K_p·t)]·A·(A_k/N)² (9.12)
where A = (e·p²/2)·10⁶ is a constant.
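As a numerical sketch of the detection condition 2C²n̄at = p² (Eqns 9.8–9.11), with n̄ taken as the effective photoelectron rate; every input value below is an assumption for illustration, not a figure from the text:

```python
E_CHARGE = 1.602e-19  # electron charge, C

def required_integration_time(contrast, cathode_illuminance_lx,
                              kp_a_per_lm, area_m2, p=5.0, noise_factor=1.0):
    """Time t satisfying 2*C^2 * (Ec*Kp/(e*f)) * a * t = p^2, i.e. the
    detection condition assembled from Eqns 9.8-9.11."""
    rate = cathode_illuminance_lx * kp_a_per_lm / (E_CHARGE * noise_factor)
    return p ** 2 / (2.0 * contrast ** 2 * rate * area_m2)

# Assumed values: C = 0.3, 10^-3 lx on the cathode, 200 uA/lm sensitivity,
# detail area 10^-9 m^2 (cf. Eqn 9.2), threshold SNR p = 5
t = required_integration_time(0.3, 1e-3, 200e-6, 1e-9)
print(round(t, 3))  # of the order of 0.1 s
```

An integration time of this order is comparable with that of the eye, which is why low-light detection is photoelectron-limited rather than optics-limited.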
Following Eqn 8.16, E_c has a relationship with the object
luminance B_o, given by
[f/(C²·K_p·t)]·A·(A_k/N)² = 0.25·B_o·τ·(1/F-number)²
Substituting for (A_k/N)² from Eqn 9.1, we have
[f·A/(C²·K_p·t)]·(R/(Z·F))² = 0.25·B_o·τ·(1/F-number)²
i.e.,
R² = A*·Z²·B_o·τ·(1/F-number)²·F²·C²·K_p·t/f (9.14)
where A* = 0.25/A.
With the contrast C expressed as
C = (I_max − I_min)/(I_max + I_min) (9.15)
i.e.,
R² = E·(Z²·B_o·τ·C²)·(1/F-number)²·F²·M_o²·M_e²·K_p²·M_i·t/f (9.16)
(iv) Factor K_p²·M_i·t/f
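Taken as proportionalities, Eqn 9.16 lets one compare configurations without knowing the constant E. A minimal sketch, assuming the reconstructed reading R² ∝ B_o·C²·K_p²/(F-number)² with everything else fixed (the scale factors are hypothetical, not from the text):

```python
import math

def relative_range(kp_scale=1.0, f_number_scale=1.0,
                   luminance_scale=1.0, contrast_scale=1.0):
    """Relative night range from the proportionality in Eqn 9.16:
    R^2 scales as B_o * C^2 * Kp^2 / (F-number)^2 with everything else
    held fixed (scale factors of 1.0 mean 'unchanged')."""
    r_squared = (luminance_scale * contrast_scale ** 2 *
                 kp_scale ** 2 / f_number_scale ** 2)
    return math.sqrt(r_squared)

print(relative_range(kp_scale=2.0))         # doubled Kp -> double the range
print(relative_range(luminance_scale=4.0))  # 4x luminance -> double the range
```

The square-root relationship explains why photocathode sensitivity, which enters squared, repays development effort more directly than scene luminance, which enters only once.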
[Figure: spectral irradiance of the night sky — curves for starlight, moonlight and the sum of three lamps; ordinate 10⁻¹⁰ to 10⁻⁸]
[Figure: driver's night sight — labels: power supply unit, headlight, driver]
same area of vision. Obviously, for driving, the system has to
have unit magnification and a significant field of view, while for
engagement of a target the Gunner's sight should have a compatible
magnification and correct illumination of the scene. Consequent on
the development of alkali and multi-alkali photocathodes and
Generation-1 and Generation-2 series of instruments, Generation-0
series is now obsolete for military purposes, though these can still
be used for perimeter search or surveillance in security zones.
Figure 9.4 shows a hand-held night vision binocular of
the Generation-2[5]. Such binoculars utilize advanced I.I. tubes
matched for their sensitivity and noise factor or tubes of Generation-3.
Generally, these binoculars have only one objective channel while
the viewing is through two oculars, more for comfort of vision than
for detailed depth appreciation. For some applications, the phosphor
screen can also be viewed through a carefully designed ocular system
allowing the scene to be seen with both the eyes through a single
magnifier type of optical component. Such systems referred to as
biocular systems have a distinct advantage, as the positioning of
the eyes is not critical.
Figure 9.5 shows an I.I. based night vision observation
device integrated with a laser rangefinder and a goniometer[5].
Such an observation device has a large aperture at a fast f-number,
so that the optical factor in Eqn 9.16 is maximized in addition to the
image intensification factor, which in any case should be as high
as possible. This approach permits the maximum night range that
can be viewed, subject only to the object scene factor. The optical
factor is also improved by appropriate coatings of the optical elements,
so that the transmission factor is as high as possible.
[Figure: I.I. based night observation device integrated with a laser rangefinder and goniometer; monitor (Commander), driver's night sight (role of night vision)]
REFERENCES
1. Blackler, F.G. "Practical Guide to Night Viewing System Performance". SPIE, Assessment of Imaging Systems, Visible and Infrared (SIRA), Vol. 274, 1981, pp. 248-55.
2. Soule, H.V. "Electro-optical Photography at Low Illumination Levels". John Wiley and Sons Inc.
3. Report on Creation of Test Facilities for Night Vision in the 300 ft Long Hall. IRDE, Dehradun.
4. Integrated Test Equipment for Night Vision Devices. Perfect Electronics, Dehradun.
5. Six Photographs and One Sketch: Various Night Vision Devices. Courtesy, Instruments Research & Development Establishment, Dehradun.
6. Gourley, S. & Henish, M. "Sensors for Small Arms". International Defence Review, Vol. 5, 1995, pp. 53-57.
Index

A
Accommodation 1
Acquisition 16, 23
Active imaging 56
Airy disc 66, 68, 69
Alloy photocathodes 81
Aperture lens 118
Aperture stop 62
Atmospheric windows 30, 57
Attenuation coefficient 33

B
Back focal-length 61
Background 44
Bipotential lens 118
Blackwell's approach 18

C
Cathodo-luminescence 93
Charge coupled devices (CCD) 14, 133
Collimator type test equipment 154
Composite photocathodes 81
Cone photoreceptors 11
Cone receptors 6
Cones 9
Contrast 23, 35, 145

D
Detection 17, 20, 23
  of movement 22
  probability 23

E
Electron image intensifier 120
Electron-optics 117
Entrance pupil 62
Environment 29
Exit pupil 62
Experimental lab testing for range-evaluation 150
Eyepieces 59

F
Fibre-optic twisters 109
Fibre-optics 109
Field of view 61
Field stop 62
Field test 154
Focal-length 61
Front focal-length 61
Fused fibre-optics faceplates 109

G
Generation-0 125
  night vision system 155
Generation-1 106, 125
Generation-2 106, 109
  Wafer tube 129
Generation-3 107, 131

H
Hand-held night vision binocular of the Generation-2 156