THE SCIENCE OF REMOTE SENSING

Dr. S. K. Dash
Assoc. Prof. of Geology,
Govt. College, Sundargarh
Odisha- 770002

dashsudhikumar@gmail.com

1.1 Remote Sensing

Remote sensing is a technology for obtaining information about a target through the analysis of data acquired from that target at a distance. It comprises three parts: the targets (objects or phenomena in an area), data acquisition (through certain instruments), and data analysis (again by some devices). This definition is so broad that the vision system of the human eye, sonar sounding of the sea floor, ultrasound and x-rays used in medical sciences, and laser probing of atmospheric particles are all included. The target can be as big as the earth, the moon and other planets, or as small as biological cells that can only be seen through microscopes. A diagrammatic illustration of the remote sensing process is
shown in Figure 1.1.

An essential component in geomatics, natural resource and environmental studies is the measurement
and mapping of the earth surface - land and water bodies. We are interested in knowing the types of
objects, the quality and quantity of various objects, their distribution in space and time, their spatial and
temporal relationships, etc. In this book, we introduce some of the major remote sensing systems used
for mapping the earth. We concentrate on examining how satellite and airborne images about the earth
are collected, processed and analyzed. We illustrate various remote sensing techniques for information
extraction about the identity, quantity, spatial and temporal distribution of various targets of interest.

Remote sensing data acquisition can be conducted on such platforms as aircraft, satellites, balloons,
rockets, space shuttles, etc. Inside or on-board these platforms, we use sensors to collect data. Sensors
include aerial photographic cameras and non-photographic instruments, such as radiometers, electro-
optical scanners, radar systems, etc. The platform and sensors will be discussed in detail later.

Electro-magnetic energy is reflected, transmitted or emitted by the target and recorded by the sensor.
Because the energy travels through the medium of the earth's atmosphere, it is modified, so that the signal leaving the target differs from the signal reaching the sensor. The effects of the atmosphere on remote sensing will
be examined later. Methods will be introduced to reduce such atmospheric effects.

Once image data are acquired, we need methods for interpreting and analyzing images. By knowing
"what" information we expect to derive from remote sensing, we will examine methods that can be used
to obtain the desired information. We are interested in "how" various methods of remote sensing data
analysis can be used.

In summary, we want to know how electromagnetic energy is recorded as remotely sensed data, and
how such data are transformed into valuable information about the earth surface.

Figure 1.1 The Flows of Energy and Information in Remote Sensing


1.2 Milestones in the History of Remote Sensing

The following is a brief list of times when innovative developments in remote sensing were documented. More details may be found in Lillesand and Kiefer (1987) and Campbell (1987).

1839     Photography was invented.

1858     Parisian photographer Gaspard Felix Tournachon used a balloon to ascend to a height of 80 m and obtain a photograph over Bievre, France.

1882     Kites were used for photography.

1909     Airplanes were used as a platform for photography.

1910-20  World War I. Aerial reconnaissance: the beginning of photo interpretation.

1920-50  Aerial photogrammetry was developed.

1934     The American Society of Photogrammetry was established. Radar development for military use started.

1940's   Color photography was invented.

1940's   Use of non-visible portions of the electromagnetic spectrum, mainly near-infrared; training of photo-interpreters.

1950-70  Further development of non-visible photography, multi-camera photography, color-infrared photography, and non-photographic sensors. Satellite sensor development - Very High Resolution Radiometer (VHRR); launch of weather satellites such as Nimbus and TIROS.

1962     The term "Remote Sensing" first appeared.

1972     The launch of Landsat-1, originally ERTS-1. Remote sensing has been extensively investigated and applied since then.

1982     Second generation of Landsat sensor: the Thematic Mapper.

1986     French SPOT-1 with High Resolution Visible sensors. MSS, TM and HRV have been the major sensors for data collection over large areas all over the world. Such data have been widely used in natural resources inventory and mapping. Major application areas include agriculture, forestry, wetlands, mineral exploration, mining, etc.

1980-90  Earth-resources satellites from other countries such as India, Japan, and the USSR, e.g. Japan's Marine Observation Satellite (MOS-1).

1986-    A new type of sensor, the imaging spectrometer, has been developed.
         - Developers: JPL, Moniteq, ITRES and CCRS.
         - Products: AIS, AVIRIS, FLI, CASI, SFSI, etc. A more detailed description of this subject can be found in Staenz (1992).

1990-    Proposed EOS aiming at providing data for global change monitoring; various sensors have been proposed.
         - Japan's JERS-1 SAR
         - European ERS Remote Sensing Satellite SAR
         - Canada's Radarsat
         - Radar and imaging spectrometer data will be the major theme of this decade and probably the next decade as well.


1.3 Resolution and Sampling in Remotely Sensed Data

We begin to ask: what are the factors that make remotely sensed images taken for the same target
different? Remotely sensed data record the dynamics of the earth surface. The three-dimensional earth surface changes over time. Two images taken at the same place under the same imaging conditions will not be the same if they are obtained at different times. Among many other factors that will be
introduced in later chapters, sensor and platform design affect the quality of remotely sensed data.

Remote sensing data can be considered as models of the earth surface at a very low level of
generalization. Among various factors that affect the quality and information content of remotely sensed
data, two concepts are extremely important for us to understand. They determine the level of details of
the modeling process. These are the resolution and the sampling frequency.

Resolution          The maximum separating or discriminating power of a measurement. It can be divided into four types: spectral, radiometric, spatial and temporal.

Sampling frequency  Determines how frequently data are collected. There are three types of sampling important to remote sensing: spectral, spatial and temporal.

Combinations of resolutions and sampling frequencies have made it possible for us to have different
types of remote sensing data.

For example, assume that the level of solar energy coming from the sun and passing through the
atmosphere at a spectral region between 0.4 µm and 1.1 µm is distributed as in Fig. 1.2. This is a
continuous curve.

Fig. 1.2 Solar Energy Reaching the Earth Surface

After the solar energy interacts with a target such as a forest on the earth, the energy is partly absorbed,
transmitted, or scattered and reflected. Assume that the level of the scattered and reflected energy
collected by a sensor behaves in a manner as illustrated in Fig. 1.3.


Fig. 1.3 Reflected Solar Energy by Trees

The process that makes the shape of the energy curve change from Fig. 1.2 to Fig. 1.3 will be
discussed later. Let us use Fig. 1.3 to discuss the concepts of spectral resolution and spectral sampling.

Fig. 1.4 Example of Differences in Spectral Resolution and Spectral Sampling

In Figure 1.4, the three shaded bars A, B, and C represent three spectral bands. The width of each bar
covers a spectral range within which no signal variation can be resolved. The width of each spectral
band represents its spectral resolution. The resolution of A is coarser than the resolution of B. This is
because spectral details within band A that cannot be discriminated may be partly discriminated with a
spectral resolution as narrow as band B. The resolution relationships among the three bands are:

Resolution of A < Resolution of C < Resolution of B

Sampling determines the various ways we use to record a spectral curve. If data storage is not an issue,
we may choose to sample the entire spectral curve with many narrow spectral bands. Sometimes, we
choose to make a discrete sampling over a spectral curve (Figure 1.4). The questions are: which way of
sampling is more appropriate, and what resolution is better? It is obvious that if we use a low resolution, we are going to blur the curve. The finer the resolution, the more precisely we can restore a curve, provided that a sufficient spectral sampling frequency is used.
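To make the resolution-sampling trade-off concrete, here is a minimal Python sketch. The spectral curve and the two band widths are made-up illustrative assumptions, not instrument values; the point is only that wider bands average away spectral detail that narrower bands retain.

```python
import numpy as np

# Stand-in continuous spectral curve over 0.4 - 1.1 micrometres.
wavelengths = np.linspace(0.4, 1.1, 701)
curve = 0.3 + 0.2 * np.sin(8 * wavelengths)

def sample_bands(wl, refl, band_width):
    """Average the curve over contiguous bands of the given width (in um)."""
    centres, values = [], []
    for lo in np.arange(wl[0], wl[-1], band_width):
        mask = (wl >= lo) & (wl < lo + band_width)
        if mask.any():
            centres.append(lo + band_width / 2)
            values.append(refl[mask].mean())
    return np.array(centres), np.array(values)

coarse_c, coarse_v = sample_bands(wavelengths, curve, 0.10)  # coarse bands
fine_c, fine_v = sample_bands(wavelengths, curve, 0.01)      # fine bands
print(len(coarse_c), "coarse bands vs", len(fine_c), "fine bands")
```

With the 0.01 µm bands the sine-like structure of the curve survives the resampling; with the 0.10 µm bands it is largely blurred away.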

The difference between imaging spectrometers and earlier-generation sensors lies in the spectral sampling frequency. Sensors of earlier generations use selective spectral sampling. Imaging


spectrometers use a complete, systematic sampling scheme over the entire spectral range. An imaging spectrometer such as CASI has 288 spectral bands in the 0.43 - 0.92 µm spectral region, while earlier-generation sensors have only 3 - 7 spectral bands.

Spatial resolution and sampling

Similar to the spectral case, the surface has to be sampled at a certain spatial resolution. The difference is that spatial sampling is mostly systematic, i.e., a complete sampling over an area of interest. The difference in spatial resolution can be seen in Figure 1.5.

Figure 1.5. Sampling the same target with different spatial resolutions.

A scene including a house with a garage and driveway is imaged at two different spatial resolutions. In Figure 1.5a, no object occupies an entire cell; each cell contains energy from several cover types. Such cells are called mixed pixels, also known as mixels. Mixed pixels are very difficult to discriminate from each other. Obviously, a house cannot be easily recognized at the level of resolution in Figure 1.5a, but it may be possible in Figure 1.5b. As spatial resolution becomes finer, more details about objects in a scene become available. In general, with finer spatial resolution, objects can be better discriminated by the human eye. With computers, however, it may be harder to recognize objects imaged at finer spatial resolutions. This is partly because finer spatial resolutions increase the image size a computer must handle. More importantly, for many computer analysis algorithms they cause the effect of "seeing the trees but not the forest": computer techniques are far poorer than the human brain at generalizing from fine details.
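The mixed-pixel effect can be illustrated numerically. The sketch below assumes a simple linear mixing model in which the pixel signal is the area-weighted average of the cover types inside it; the three endmember reflectances and fractions are hypothetical values chosen for illustration, not measured spectra.

```python
import numpy as np

# Hypothetical endmember reflectances in three bands (green, red, NIR).
endmembers = {
    "roof":  np.array([0.20, 0.25, 0.30]),
    "grass": np.array([0.10, 0.06, 0.50]),
    "road":  np.array([0.15, 0.16, 0.18]),
}
# Fractions of the pixel covered by each type; they must sum to 1.
fractions = {"roof": 0.3, "grass": 0.5, "road": 0.2}

# Linear mixing: the mixel signal is the area-weighted sum of endmembers.
mixel = sum(f * endmembers[name] for name, f in fractions.items())
print("mixed-pixel signal:", np.round(mixel, 3))
```

The resulting vector resembles none of the pure cover types, which is why mixed pixels are hard to discriminate from one another.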

Temporal sampling is similar in concept to spectral sampling: it refers to how frequently we image an area of interest. Are we going to use contiguous systematic sampling, as in movie making, or selective sampling, as in most photography? To decide the
temporal sampling scheme, the dynamic characteristics of the target under study have to be considered.
For instance, if the study subject is to discriminate crop species, the phenological calendar of each crop
type should be considered for when to collect remotely sensed data in order to best characterize each
different crop species. The data could be selected from the entire growing season, between late April and early October for mid and high latitudes in the northern hemisphere. If the subject is flood monitoring, the
temporal sampling frequency should be high during the flood period because floods usually last only a
few hours to a few days.

Radiometric resolution can be understood in a similar manner as with spatial resolution. This is a
concept well illustrated in a number of digital image processing books (e.g., Gonzalez and Wintz, 1987;
Pratt, 1991). It is associated with the level of quantization of an image which is in turn related to how to

use the minimum amount of data storage to represent the maximum amount of information. This is often
a concern in data compression. Although we will explain the concept of radiometric resolution in
Chapter 5, we will only touch the topic of data compression in Chapter 7 from an information extraction
point of view.

2.1 Electromagnetic Energy

Energy is a group of particles travelling through a certain medium. Electromagnetic energy is a group of
particles with different frequencies travelling at the same velocity. These particles have a dual-mode
nature. They are particles but they travel in a wave form.

Electromagnetic waves obey the following rule:

c = λν

where c is the speed of light, λ the wavelength, and ν the frequency. This equation shows that the shorter the wavelength, the higher the frequency.

Electromagnetic energy is a mixture of waves with different frequencies; it may be viewed as a superposition of such waves.

Each wave represents a group of particles with the same frequency. All together they have different
frequencies and magnitudes.

With each wave, there is an electric (E) component and a magnetic (M) component. The amplitude
(A) reflects the level of the electromagnetic energy. It may also be considered as intensity or spectral

irradiance. If we plot A against the wavelength we then get an electromagnetic curve, or spectrum
(Figure 2.1).

Figure 2.1. An electromagnetic spectrum

Any matter with a body temperature greater than 0 K emits electromagnetic energy. Therefore, it has a
spectrum. Furthermore, different chemical elements have different spectra. They absorb and reflect
spectral energy differently. Different elements are combined to form compounds. Each compound has a
unique spectrum due to its unique molecular structure. This is the basis for the application of
spectroscopy to identify chemical materials. It is also the basis for remote sensing in discriminating one
matter from another. The spectrum of a material is like the fingerprint of a human being.

2.2 Major Divisions of Spectral Wavelength Regions

The wavelength of electromagnetic energy has such a wide range that no instrument can measure it
completely. Different devices, however, can measure most of the major spectral regions.

The division of spectral wavelengths is based on the devices that can be used to observe particular types of energy, such as thermal, shortwave infrared and microwave energy. In reality, there are no abrupt changes in the magnitude of the spectral energy. The spectrum is conventionally divided into various parts as shown below:

The optical region covers 0.3 - 15 µm, where energy can be collected through lenses. The reflective region, 0.4 - 3.0 µm, is a subdivision of the optical region; in this spectral region, we collect solar energy reflected by the earth surface. Another subdivision of the optical region is the thermal spectral range, between 3 µm and 15 µm, where energy comes primarily from surface emittance.
Table 2.1 lists major uses of some spectral wavelength regions.


Table 2.1. Major uses of some spectral wavelength regions

Wavelength         Use
γ ray              Mineral
X ray              Medical
Ultraviolet (UV)   Detecting oil spills
0.4-0.45 µm        Water depth, turbidity
0.7-1.1 µm         Vegetation vigor
1.55-1.75 µm       Water content in plant or soil
2.04-2.34 µm       Mineral, rock types
10.5-12.5 µm       Surface temperature
3 cm - 15 cm       Surface relief, soil moisture
20 cm - 1 m        Canopy penetration, woody biomass

2.3 Radiation Laws

In the reflective spectral region, we are mostly concerned with the reflective properties of an object. In the thermal spectral region, however, we have to rely on the emittance of an object, because most matter at ordinary environmental temperatures emits energy that can be measured. We therefore introduce some basics of radiation theory.

The first theory treats electromagnetic radiation as many discrete particles called photons or quanta (terms from physics). The energy of a quantum is given by

E = hν

where:

E = energy of a quantum (joules)
h = 6.626 x 10^-34 J s (Planck's constant)
ν = frequency

Since c = λν, we have

E = hc/λ

The energy (or radiation) of a quantum is inversely proportional to its wavelength: the longer the wavelength of a quantum, the smaller its energy (the shorter the wavelength, the stronger its energy). Thus, energy at very short wavelengths (UV and shorter) is dangerous to human health. If we want to sense emittance from objects at longer wavelengths, we will have to either use very sensitive devices or use a less sensitive device viewing a larger area to collect a sufficient amount of energy.

This has implications for remote sensing sensor design. To use the sensing technology available at hand, we have to balance wavelength against spatial resolution. If we wish our sensor to have a higher spatial resolution, we may have to use shorter wavelength regions.

The second radiation theory is the Stefan-Boltzmann law:

M = σT⁴

where:

M = total radiant exitance from the surface of a material (W/m²)
σ = Stefan-Boltzmann constant, 5.6697 x 10⁻⁸ W m⁻² K⁻⁴
T = absolute temperature (K)

This means that any material with a temperature greater than 0 K will emit energy. The total energy emitted from a surface is proportional to T⁴.

This law is expressed for an energy source that behaves as a blackbody - a hypothetical, ideal radiator that absorbs and re-emits all energy incident upon it. Actual matter is not a perfect blackbody. For any matter, we can measure its emitted energy (M) and compare it with the energy emitted from a blackbody at the same temperature (Mb) by:

ε = M / Mb

where ε is the emissivity of the matter. A perfect reflector emits nothing, so its ε is 0. A true blackbody has an ε of 1. Most other matter falls between these two extremes.

The third theory is Wien's displacement law, which specifies the relationship between the peak wavelength of emittance and the temperature of a matter:

λmax = 2897.8 / T

where λmax is in micrometres and T in kelvin. As the temperature of a blackbody gets higher, the wavelength at which it emits its maximum energy becomes shorter.
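The three radiation laws above can be checked numerically. The following Python sketch uses the constants given in the text; the example temperatures (about 5800 K for the Sun, 300 K for the Earth) are typical round values used only for illustration.

```python
H = 6.626e-34        # Planck's constant, J s
C = 2.998e8          # speed of light, m/s
SIGMA = 5.6697e-8    # Stefan-Boltzmann constant, W m^-2 K^-4

def quantum_energy(wavelength_um):
    """E = h*c/lambda: energy of one quantum, in joules."""
    return H * C / (wavelength_um * 1e-6)

def total_exitance(temp_k):
    """Stefan-Boltzmann law: M = sigma * T**4, in W/m^2."""
    return SIGMA * temp_k ** 4

def peak_wavelength(temp_k):
    """Wien's displacement law: lambda_max = 2897.8 / T, in micrometres."""
    return 2897.8 / temp_k

print(quantum_energy(0.5), "J per quantum at 0.5 um")   # ~4e-19 J
print(total_exitance(300), "W/m^2 at 300 K")            # ~459 W/m^2
print(peak_wavelength(5800), "um peak for the Sun")     # ~0.5 um (visible)
print(peak_wavelength(300), "um peak for the Earth")    # ~9.7 um (thermal IR)
```

The two peak wavelengths agree with Figure 2.2: the Sun's emission peaks in the visible, while the Earth's peaks in the thermal infrared.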

Figure 2.2. Blackbody radiation.


Figure 2.2 shows blackbody radiation curves for the temperatures of the Sun, an incandescent lamp and the Earth. During the daytime, the energy from the sun is overwhelming. During the night, however, we can use the spectral region between 3 µm and 16 µm to observe the emittance properties of the earth surface.

At wavelengths longer than the thermal infrared region, i.e. in the microwave region, the natural energy (radiation) level is very low. Therefore, we often use a human-made energy source to illuminate the target and to collect the backscatter from it (radar is an example). A remote sensing system relying on a human-made energy source is called an "active" remote sensing system; remote sensing relying on energy sources that are not human-made is called "passive" remote sensing.

2.4 Energy Interactions in the Atmosphere

The atmosphere has different effects on EM energy transfer at different wavelengths. In this section, we mainly introduce the fact that the atmosphere can have a profound effect on the intensity and spectral composition of the radiation that reaches a remote sensing system. These effects are caused primarily by atmospheric scattering and absorption.

Scattering: The redirection of EM energy by the suspended particles in the air.

Different particle sizes will have different effects on the EM energy propagation.

dp << λ   Rayleigh scattering (Sr)
dp ≈ λ    Mie scattering (Sm)
dp >> λ   Non-selective scattering (Sn)

where dp is the particle diameter and λ the wavelength.

The atmosphere can be divided into a number of well marked horizontal layers on the basis of
temperature.


Troposphere: the zone where weather phenomena and atmospheric turbulence are most marked. It contains 75% of the total molecular and gaseous mass of the atmosphere and virtually all the water vapour and aerosols. Its height ranges from about 8 km (poles) to 16 km (equator).

Stratosphere:   up to ~50 km; contains the ozone layer
Mesosphere:     up to ~80 km
Thermosphere:   up to ~250 km
Exosphere:      ~500 km to ~750 km

The atmosphere is a mixture of gases with constant proportions up to 80 km or more above the ground. The exceptions are ozone, which is concentrated in the lower stratosphere, and water vapour in the lower troposphere. Carbon dioxide is the principal atmospheric gas whose concentration varies with time; it has been increasing since the beginning of this century due to the burning of fossil fuels. Air is highly compressible. Half of its mass occurs in the lowest 5 km, and pressure decreases logarithmically with height from an average sea-level value of 1013 mb.

Figure 2.3 Horizontal layers that divide the atmosphere (Barry and Chorley, 1982)

Scattering causes degradation of image quality for earth observation. At higher altitudes, images
acquired in shorter wavelengths (ultraviolet, blue) contain a large amount of scattered noise which
reduces the contrast of an image.


Absorption: The atmosphere selectively absorbs energy at different wavelengths with different intensities.

The atmosphere is composed of N2 (78%), O2 (21%), CO2, H2O, CO, SO2, etc. Since each chemical constituent has different spectral properties, each gas absorbs energy in different wavelength regions with different intensities. As a result, the atmosphere shows the combined absorption features of its various gases. Figure 2.4 shows the major absorption wavelengths of CO2, H2O, O2 and O3 in the atmosphere.

Figure 2.4 Major absorption wavelengths by CO2, H2O, O2, O3 in the atmosphere

(Source: Lillesand and Kiefer, 1994)

Transmission: The remaining amount of energy after being absorbed and scattered by the atmosphere is
transmitted.

H2O is most variable in the atmosphere.


CO2 varies seasonally.


Therefore, the absorption of EM energy by H2O and CO2 is the most difficult to characterize.

Atmospheric Window: It refers to the relatively transparent wavelength regions of the atmosphere.

Atmospheric absorption reduces the number of spectral regions in which we can work when observing the Earth. It affects our decisions in selecting and designing sensors. We have to consider:

1) the spectral sensitivity of sensors available;

2) the presence and absence of atmospheric windows;

3) the source, magnitude, and spectral composition of the energy available in these ranges.

For the third point, we have to base our decision of choosing sensors and spectral regions on the manner
in which the energy interacts with the target under investigation.

On the other hand, although certain spectral regions may not be as transparent as others, they may be
important spectral ranges in the remote sensing of the atmosphere.

2.5 Energy Interactions with the Earth Surface

What happens when EM energy reaches the Earth surface? The total incident energy is broken into three parts: reflected, absorbed, and/or transmitted:

r + a + t = 1

where r, a and t are the fractions of energy reflected, absorbed and transmitted; they change from one matter to another. Transmittance has been mentioned earlier: solar energy has to be transmitted through the atmosphere in order to reach the Earth surface. Transmitted energy can also be measured from under water.

In the thermal spectral region, energy is primarily absorbed, and the reflected energy is significantly smaller in magnitude than the emission of a target. Since what is absorbed will be emitted, the absorptance a, or equivalently the emissivity ε, is the parameter of concern in the thermal region.


r is the easiest of the three to measure using remote sensing devices. Therefore, it is the most important parameter for remote sensing observation in the 0.3 - 2.5 µm region. r is called spectral reflectance, reflectance, or the spectral signature.

Our second question is: how is energy reflected by a target? Reflectors can be classified into three cases: the specular reflector, the irregular reflector, and the perfect diffuser.

Specular reflection is caused by the surface geometry of a matter. It is of little use in remote sensing because the incoming energy is completely reflected in a single other direction. Still water, ice and many minerals with crystal surfaces behave this way.

A perfect diffuse reflector is a matter which reflects energy uniformly in all directions. This type of reflector is desirable because it is possible to observe the matter from any direction and obtain the same reflectance.

Unfortunately, most targets behave somewhere between the ideal specular reflector and the perfect diffuse reflector. This makes quantitative remote sensing and target identification purely from reflectance data difficult; otherwise, it would be easy to discriminate objects by matching spectral reflectances against a spectral library. Due to this variability of spectral signatures, one current research direction is to investigate the bidirectional reflectance properties of various targets.

Plotting reflectance against wavelength, we will get a spectral reflectance curve. Examples of spectral
curves of typical materials such as vegetation, soil and water are shown in Figure 2.5. Clear water has a
low spectral reflectance (< 10%) in the visible region. At wavelengths longer than 0.75 µm, water
absorbs almost all the incoming energy. Vegetation generally has three reflectance valleys. The one in the red spectral region (0.65 µm) is caused by high absorptance of energy by chlorophyll a
and b in the leaves. The other two at 1.45-1.55 µm and 1.90-1.95 µm are caused by high absorptance of
energy by water in the leaves. Dry soil has a relatively flat reflectance curve. When it is wet, its spectral
reflectance drops due to water absorption.


Figure 2.5 Typical Spectral Reflectance Curves for Soil, Vegetation and Water

(Lillesand and Kiefer, 1994)

3. Remote Sensing Systems

One of the important components in remote sensing is the set of devices used for data collection. Any data acquisition system for remote sensing includes two parts: the platform and the sensor. The platform may be a conventional tripod, a berry picker mounted on a truck, any other type of vehicle, a boat, balloon, aircraft, spacecraft or satellite, etc. The sensors may include cameras, radiometers, multispectral scanners, imaging spectrometers, and radars. In this chapter, we will introduce some typical sensing systems and their different ways of imaging.

3.1 Camera Systems

A camera system is composed of the camera body, lens, diaphragm, shutter and a film (Figure 3.1):

Figure 3.1. Components of a camera system

Lens        Collects the energy; it has a focal length.

Diaphragm   Controls the amount of energy reaching the film with an adjustable diameter.

Shutter     Opens and closes; the time period between opening and closing controls the amount of energy entering the camera.

The principle of imaging is described by the lens equation:

1/f = 1/do + 1/di


where

f = focal length
do = distance from the lens to the object
di = distance from the lens to the image (Figure 3.2)

For aerial photography,

do >> di

so di ≈ f. Therefore, we can use a fixed distance between the lens and the film.
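As a quick numerical check of the lens equation, the sketch below (with an assumed 150 mm lens and a 1500 m flying height, purely illustrative numbers) shows that di is essentially equal to f for aerial photography.

```python
def image_distance(focal_length_mm, object_distance_mm):
    """Solve 1/f = 1/do + 1/di for di."""
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

f = 150.0                       # focal length, mm (assumed)
do = 1_500_000.0                # 1500 m flying height, in mm (assumed)
print(image_distance(f, do))    # ~150.015 mm, i.e. di is essentially f
```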

Figure 3.2. The imaging optics

The diameter of the diaphragm controls the depth of field. The smaller the diameter of an open diaphragm, the wider the range of distances over which the scene forms a clearly focused image. The diaphragm diameter can be adjusted to a particular aperture. What we normally see on a camera's aperture setting is F 2.8, 4, 5.6, 8, 11, 16, 22.

These F-numbers are obtained as f/diameter. When the diameter becomes smaller, the F-number becomes larger and more energy is stopped. The actual amount of energy reaching the film is determined by:

E = i·t / F²

where:

i is the energy intensity (J m⁻² s⁻¹)
t is the time in seconds
F is the F-number mentioned above
E is the energy (J/m²)
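A small sketch of this exposure relation follows; the intensity value is arbitrary, and the example only illustrates the familiar rule that opening the aperture by one stop roughly compensates for halving the exposure time.

```python
def film_exposure(intensity, time_s, f_number):
    """E = i * t / F**2, per the relation above."""
    return intensity * time_s / f_number ** 2

# Halving the exposure time while opening one stop (F8 -> F5.6)
# keeps the exposure roughly constant:
print(film_exposure(100.0, 1 / 250, 8.0))   # ~0.00625
print(film_exposure(100.0, 1 / 500, 5.6))   # ~0.00638
```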


3.2 Films

A film is primarily composed of one or more emulsion layers and a base (Figure 3.3).

Black and white film Color film

Figure 3.3. Layers in black and white films and colour films

The most important part of the film is the emulsion layer. An emulsion layer contains light-sensitive chemicals. When it is exposed to light, a chemical reaction occurs and a latent image is formed. After the film is developed, the emulsion layer shows the image.

Films can be divided into negative and positive, or divided in terms of their ranges of spectral
sensitivities: black and white (B/W), B/W Infrared, Color, Color Infrared.

B/W negative films are those on which the brightest parts of the scene appear darkest and the darker parts of the scene appear brighter after development.

Color negative films are those on which a color from the scene is recorded as its complementary color.

There are two important aspects of a film: its spectral sensitivity and its characteristic curve.

Spectral sensitivity specifies the spectral region to which a film is sensitive (Figure 3.4).

Figure 3.4. The sensitivity of films and the transmittance of a filter

Since infrared film is also sensitive to visible light, the visible light should be intercepted by some material. This is done by optical filtering (Figure 3.4). In this case, a dark red filter can be used to intercept visible light.

Similarly, other filters can be used to stop light at certain spectral ranges from reaching the film.


The characteristic curve indicates the response of a film to the level of energy it receives.

[a] [b]

Figure 3.5. Film characteristic curves.

If the density of a film develops quickly when the film is exposed to light, we say that the film is fast (Fig. 3.5a); otherwise, the film is slow (Fig. 3.5b). Film speed is denoted by labels such as ASA 100, ASA 200, ..., ASA 1000. The greater the ASA number, the faster the film. High speed films will show good contrast on the image, but low speed films will provide better detail.

Color Films

There are two types of colors: additive primaries and subtractive primaries:

- The three additive primaries are red, green, and blue.

- The three subtractive primaries are cyan, magenta, and yellow.

All colors can be made by combining the primary colors in different proportions.

Figure 3.6. Additive and subtractive colors


Additive colors apply to the mixing of light (Fig. 3.6a), while subtractive colors are used for mixing the paints used in printing (Fig. 3.6b). In order to represent colors on a medium such as a film or colour photographic paper, subtractive colors are needed.

Color Negative Films

Figure 3.7 shows the structure of the three emulsion layers for a color negative film. Figure 3.8 shows
the spectral sensitivities of each emulsion layer of the film. It can be seen from Figure 3.8 that green and
red emulsion layers are also sensitive to blue. Therefore, film producers add a yellow filter to stop blue light from reaching the green and red emulsion layers (Figure 3.7).

Figure 3.7. Layers in a standard colour film

Figure 3.8. The approximate spectral sensitivities of the three layers.

The development procedure for the colour negative film is shown in Figure 3.9.

Color Infrared Films

The sensitivity curve of a colour infrared film is shown in Figure 3.10. Figure 3.11 shows the structure
of this type of film.

As light of the three primary colors passes through the film, each emulsion layer forms its complementary dye (yellow, magenta, cyan) upon development; printing the developed negative with white light then restores blue, green and red in the final print. The steps are diagrammed in Figure 3.9.

Figure 3.9. The development procedure of a colour film

Figure 3.10. The sensitivity curves of a colour infrared film


Layer sensitivity   Dye formed
NIR + B             Cyan dye forming layer
G + B               Yellow dye forming layer
R + B               Magenta dye forming layer

Figure 3.11. Three layers of a colour-infrared film

The CIR film development process is analogous: blue light is blocked by a filter, and the NIR, green and red exposures develop into the cyan, yellow and magenta dye layers respectively; printing the developed film with white light then renders NIR as red, red as green, and green as blue. The procedure is diagrammed in Figure 3.12.

Figure 3.12. The procedure used to develop a colour infrared film.


3.3 Aerial Photography

In texts on aerial photogrammetry or photo-interpretation, various types of aerial photographic cameras are discussed in detail (e.g. Lillesand and Kiefer, 1994; Paine, 1981). The implications of flight height, photographic orientation, and view angle for aerial photographic products are briefly discussed here.

Flight Height

For a given focal length of an aerial camera, the higher the camera is, the larger the area each aerial
photo can cover. Obviously, the scale of aerial photographs taken at higher altitudes will be smaller than
those taken at lower altitudes.

However, photographs taken at higher altitudes will be severely affected by the atmosphere. This is
particularly true when films sensitive to shorter wavelengths are used. Thus, ultraviolet and blue should
be avoided at higher altitudes. Instead, CIR or BWIR is more suitable.

Camera Orientation

Two types of camera orientation may be used: vertical and oblique (slant) (Figure 3.13). Oblique photography allows one to cover a large area, while vertical photography produces less distortion in photo scale.

Figure 3.13. Vertical and slant aerial photography

View Angle:

View angle is normally determined by the focal length and the frame size of a film. For a camera, the frame size is fixed; therefore the ground coverage is determined by the altitude and the camera viewing angle (Figure 3.14).


f1 > f2 > f3

a1 < a2 < a3

Figure 3.14. Viewing angle determined by the focal length

              Normal cameras   Aerial cameras
Normal lens   50 mm            300 mm
Wide angle    28 mm            150 mm
Fisheye lens  7 mm             88 mm

Obviously, wide angles allow a larger area to be photographed.

Photographic Resolution

Spatial resolution of aerial photographs depends largely on the following factors:

- Lens resolution (optical quality)
- Film resolution
- Film flatness (normally not a problem)
- Atmospheric conditions (change all the time)
- Aircraft vibration and motion (random)

Film resolution depends mainly on granularity.

The standard definition of photographic resolution is the maximum number of line pairs per mm that can be distinguished on a film imaging a resolution target (Figure 3.15).

If the scale of an aerial photograph is known, we can convert the photographic resolution (rs) to a ground resolution.
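A minimal sketch of this conversion follows, assuming the usual reading that one resolvable line pair occupies 1/rs mm on the film and scales up by the photo scale factor; the example numbers are hypothetical.

```python
def ground_resolution_m(rs_lp_per_mm, scale_denominator):
    """Ground size of one resolvable line pair, in metres.

    One line pair occupies 1/rs mm on film; at a photo scale of
    1:S it covers S/rs mm = S/(1000*rs) m on the ground.
    """
    return scale_denominator / (1000.0 * rs_lp_per_mm)

# e.g. a film system resolving 40 line pairs/mm at a 1:20,000 scale:
print(ground_resolution_m(40, 20000), "m per line pair")  # 0.5 m
```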

Figure 3.15. Resolving power test chart (from Lillesand and Kiefer, 1994).
dashsudhikumar@gmail.com
Page 25 of 51

Ground Coverage

A photograph may have a small coverage if it is taken either at a low flight height or with a narrower
viewing angle.

The advantages of photographs with small coverage are that they provide more detail with less distortion and displacement. It is easier to analyze a photograph with small coverage because similar targets will show less distortion from the center to the edge of the photograph, and from one photograph to the next.

The disadvantage of photographs with small coverage is that more flight time is needed to cover an area, so the cost will be higher. Moreover, mosaicking may introduce more distortion.

A large coverage can be obtained by taking the photograph from a higher altitude or using a wider angle. Photographs with a large coverage are likely to have poorer photographic resolution due to the larger viewing angle and likely stronger atmospheric effects. The advantages are that a large coverage is obtained simultaneously, requires less geometric mosaicking, and costs less.

The disadvantages are that it is difficult to analyze targets in detail and that targets are severely distorted.

Essentially, the size of photo coverage is related to the scale of the raw aerial photographs. Choosing
photographs with a large coverage or a small one should be based on the following:

- budget at hand
- task
- equipment available

The following are some of the advantages/disadvantages of aerial photography in comparison with other
types of data acquisition systems:

Advantages:

- High resolution (ground)
- Flexibility
- High geometric reliability
- Relatively inexpensive

Disadvantages:

- Daylight exposure (10:00 am - 2:00 pm) required
- Poorer contrast at shorter wavelengths
- Film non-reusable
- Inconvenient
- Inefficient for digital analysis


3.4 Satellite-Borne Multispectral Systems

What are the differences between a camera system and a scanning system? The following are some of
the major differences:

- A rotating mirror is added in front of the lens of the camera.
- In a scanning system, films are replaced by photo-sensitive detectors and magnetic tapes, which are used to store the collected spectral energy (Figure 3.16).

Figure 3.16. A multispectral scanning system

Landsat Multispectral Scanner System

The first of the Landsat series was launched in 1972. The satellite was called the Earth Resources Technology Satellite (ERTS-1); it was later renamed Landsat-1. On board Landsat-1 were two sensing systems: the multispectral scanning system (MSS) and the return beam vidicon (RBV). The RBV was discontinued after Landsat-3. The MSS is briefly introduced here because it is still being used. The MSS sensor has 6 detectors per band (Figure 3.17). The scanned radiance is measured in four image bands (Figure 3.18).

Figure 3.17. Each scan will collect six image lines.


Figure 3.18. Four image bands with six detectors in each band.

MSS sensors have been flown on Landsats 1 through 5. They are reliable systems. The spectral region of each band is listed below:

Landsat 1, 2          Landsat 4, 5
B4  0.5 - 0.6 µm      B1
B5  0.6 - 0.7 µm      B2
B6  0.7 - 0.8 µm      B3
B7  0.8 - 1.1 µm      B4

Landsat 3 had a short life. The MSS systems on Landsat 3 were modified as compared to Landsat 1 and
2. Landsat-6 was launched unsuccessfully in 1993.

Each MSS scene covers an area of 185 km x 185 km, with a spatial resolution of 79 m x 57 m. An advantage of MSS data is that they are less expensive. Sometimes one detector fails or its signal differs markedly from the others, creating banding or striping.

Landsat Thematic Mapper System

Since the launch of Landsat 4 in 1982, a new type of scanner, called Thematic Mapper (TM), has been
introduced. It

- Increased the number of spectral bands.
- Improved spatial and spectral resolution.
- Increased the angle of view from 11.56° to 14.92°.


TM1   0.45 - 0.52 µm   30 m
TM2   0.52 - 0.60 µm   30 m
TM3   0.63 - 0.69 µm   30 m
TM4   0.76 - 0.90 µm   30 m
TM5   1.55 - 1.75 µm   30 m
TM7   2.08 - 2.35 µm   30 m
TM6   10.4 - 12.5 µm   120 m

MSS data are collected in only one scanning direction; TM data are collected in both scanning directions (Figure 3.19).

Figure 3.19. Major changes of the TM system as compared to the MSS system.

High Resolution Visible (HRV) Sensors

A French satellite called 'Le Système Pour l'Observation de la Terre' (SPOT, Earth Observation System) was launched in 1986. On board this satellite, a different type of sensor, the High Resolution Visible (HRV) sensor, was used. The HRV sensors have two modes: the panchromatic (PAN) mode and the multispectral (XS) mode.

The HRV panchromatic sensor has a relatively wide spectral range, 0.51 - 0.73 µm, with a higher spatial resolution of 10 m x 10 m.


HRV Multispectral (XS) mode

B1 0.50 - 0.59 µm
B2 0.61 - 0.68 µm
B3 0.79 - 0.89 µm

The spatial resolution for the multispectral (XS) mode is 20 m x 20 m.

Besides the difference of spectral and spatial resolution design from the Landsat sensor systems, major
differences between MSS/TM and HRV are the use of linear array (also called pushbroom) detectors
and the off-nadir observation capabilities with the HRV sensors (Figure 3.20). Instead of mirror rotation
in the MSS or TM sensors, which collect data using only a few detectors, the SPOT HRV sensors use thousands of detectors arranged in arrays called "charge-coupled devices" (CCDs). This has significantly reduced the weight of the sensing system and its power requirement.

Figure 3.20. The SPOT HRV systems

A mirror with a view angle of 4.13° is used to allow up to 27° off-nadir observation. An advantage of the off-nadir viewing capability is that it allows more frequent observation of a targeted area on the earth and the acquisition of stereo-pair images. A disadvantage of the HRV sensors is the difficulty involved in calibrating thousands of detectors. The radiometric resolution of MSS is 6 to 7 bits, while both TM and the HRVs have an 8-bit radiometric resolution.

The orbital cycle is 18 days for Landsats 1-3 and 16 days for Landsats 4 and 5; it is 26 days for SPOT-1, although the SPOT HRV sensors can revisit the same target in 3 to 5 days thanks to their off-nadir observing capability.

AVHRR - Advanced Very High Resolution Radiometer

Among the many meteorological satellites, the Advanced Very High Resolution Radiometers (AVHRR) on board the NOAA series (NOAA-6 through 12) have been widely used. The NOAA series is named after the National Oceanic and Atmospheric Administration of the United States.

The AVHRR sensor has 5 spectral channels

B1 0.58 - 0.68 µm
B2 0.72 - 1.10 µm
B3 3.55 - 3.95 µm
B4 10.3 - 11.30 µm
B5 11.5 - 12.50 µm

Swath width: 2400 km

The orbit repeat cycle provides coverage twice daily, an important feature for frequent monitoring. NOAA AVHRR data have been used for large-area vegetation and sea-ice studies at continental and global scales.

Earth Observing System (EOS)

To document and understand global change, NASA initiated Mission to Planet Earth. This is a program
involving international efforts to measure the Earth from space and ground. Earth Observing System is a
primary component of the Mission to Planet Earth. EOS includes the launch of a series of satellites with
advanced sensor systems by the end of this century. Those sensors will be used to measure most of the
measurable aspects of the land, ocean and atmosphere, such as cloud, snow, ice, temperature, land
productivity, ocean productivity, ocean circulation, atmospheric chemistry, etc.

Among the various sensors on board the first six satellites to be launched is the Moderate Resolution Imaging Spectrometer (MODIS). It has 36 spectral bands between 0.4 µm and 14.4 µm. The spatial resolution changes with the spectral band: two bands have 250 m resolution and five have 500 m, while the rest have 1000 m resolution. The sensor is planned to provide data covering the entire Earth daily.


Other Satellite Sensors

GOES - Geostationary Operational Environmental Satellite (Visible to NIR, Thermal)

DMSP - Defense Meteorological Satellite Program, 600 m resolution (visible to NIR, thermal), used, for example, for urban heat island studies

Nimbus - CZCS - coastal zone color scanner, 825 m spatial resolution

Channels (6 total)   Spectral resolution
1-4                  0.02 µm, for chlorophyll absorption studies
5-6                  NIR - thermal

Two private companies, Lockheed, Inc. and Worldview, Inc., are planning to launch their own commercial satellites in 2-3 years' time with spatial resolutions ranging from 1 m to 3 m. In Japan, NASDA (the National Space Development Agency) has developed the Marine Observation Satellite (MOS). On board this system is a sensor called the Multispectral Electronic Self-Scanning Radiometer (MESSR), with spectral bands similar to those of the Landsat MSS systems. However, the spatial resolution of the MESSR system is 50 m x 50 m.

Other countries such as India and the former USSR have also launched Earth resources satellites with
different optical sensors.

3.5 Airborne Multispectral Systems

Multispectral scanners

The mechanism of airborne multispectral sensors is similar to that of the Landsat MSS and TM. Airborne sensor systems usually have more spectral bands, ranging from the ultraviolet through the visible and near infrared to the thermal region. For example, the Daedalus MSS system is a widely used system that has 11 channels, with the first 10 channels ranging from 0.38 to 1.06 µm and the 11th being a thermal channel (9.75 - 12.25 µm).

Another airborne multispectral scanner used for experimental purposes is TIMS, the Thermal Infrared Multispectral Scanner. It has 6 channels: 8.2 - 8.6, 8.6 - 9.0, 9.0 - 9.4, 9.4 - 10.2, 10.2 - 11.2, and 11.2 - 12.2 µm.

MEIS-II

The Canada Centre for Remote Sensing developed the Multispectral Electro-optical Imaging Scanner (MEIS-II). It uses 1728-element linear CCD arrays that acquire data in eight spectral bands ranging from 0.39 to 1.1 µm. The spatial resolution of MEIS-II can reach up to 0.3 m.

Advantages of multispectral systems over photographic systems are:


- Spectral range: photographic systems operate between 0.3 - 1.2 µm, while multispectral systems operate between 0.3 - 14 µm.
- Multiband photography (a photographic system) uses different optical systems to acquire photos. This leads to problems of data incomparability among the different cameras. An MSS, on the other hand, uses the same optical system for all bands, eliminating the data incomparability problem.
- The electronic process used in an MSS is easier to calibrate than the photochemical process used in photographic systems.
- Data transmission is easier for an MSS than for photographic systems, which require an onboard supply of film.
- Photographic systems depend on visual interpretation, whereas MSS data suit digital analysis; visual analysis is difficult beyond three dimensions (i.e., three displayed bands).

False Color Composite

Only three colours (red, green and blue) can be used at any one time to display data on a colour monitor. The colours used to display an image may not be the actual colours of the spectral bands used to acquire the image. Images displayed with such colour combinations are called false colour composites. We can make many 3-band combinations out of a multispectral image:

Nc = nb! / (3!·(nb − 3)!)

where Nc is the total number of 3-band combinations and nb is the number of spectral bands in a
multispectral image. For each of these 3-band combinations, we can use red, green, and blue to represent
each band and to obtain a false-colour image.
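The count of 3-band subsets can be computed directly; the short sketch below uses Python's standard library, with Landsat TM's 7 bands as the worked example.

```python
from itertools import combinations
from math import comb

def n_three_band_composites(nb):
    """Nc = nb! / (3! * (nb - 3)!): number of 3-band subsets of nb bands."""
    return comb(nb, 3)

print(n_three_band_composites(7))           # 35 composites for 7 TM bands
print(list(combinations(range(1, 5), 3)))   # the 4 subsets of a 4-band image
```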

Digital Photography with CCD Arrays

Videographic imaging includes the use of video cameras and digital CCD cameras. Video images can be frame-grabbed, or quantized and stored as digital images; however, the image resolution is relatively low (up to 550 lines per image). Digital CCD cameras use two-dimensional silicon-based charge-coupled devices that produce a digital image in standard raster format. CCD detectors arranged in imaging chips of approximately 1024 x 1024 or more photosites produce an 8-bit image (King, 1992).

Digital CCD photography compares favorably with other technologies such as traditional photography, videography, and line scanning. Compared to photography, digital CCD cameras have a linear response, greater radiometric sensitivity, wider spectral response, greater geometric stability, and no need for a film supply (Lenz and Fritsch, 1990; King, 1992). Together with the fast development of softcopy photogrammetry, they have the potential to replace aerial photography and conventional photogrammetry for surveying and mapping.

Imaging Spectrometry

Imaging spectrometry refers to the acquisition of images in many, very narrow, continuous spectral
bands.

The spectral region covered can range from the visible and near-IR to the mid-IR.



- The first imaging spectrometer was developed in 1983 by JPL. The system, called the Airborne Imaging Spectrometer (AIS), collects data in 128 channels from 1.2 µm to 2.4 µm. Each image acquired has only 32 pixels per line.
- The Airborne Visible-Infrared Imaging Spectrometer (AVIRIS) is an immediate follow-up of the AIS (1987). It collects 224 bands from 0.40 - 2.45 µm, with 512 pixels in each line.
- In Canada, the first system was the FLI (Fluorescence Line Imager), manufactured by Moniteq, a company that used to be located in Toronto, Ontario.

In Calgary, ITRES Research produces another imaging spectrometer, the Compact Airborne Spectrographic Imager (CASI) (Figure 3.21).

Figure 3.21. The two dimensional linear array of the CASI.

For each line of ground targets, nb x ns samples are collected at 2-byte (16-bit) radiometric resolution, where nb is the number of spectral bands and ns is the number of pixels in a line.

Due to constraints on the data transmission rate, these nb x ns samples cannot all be transferred. This leads to two operating modes for the CASI: a spectral mode and a spatial mode.

In spectral mode, all 288 spectral bands are used, but only up to 39 spatial pixels (look directions) can be
transferred.

In the spatial mode, all 512 spatial pixels are used, but only up to 16 spectral bands can be selected.
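The trade-off between the two modes follows directly from the nb x ns x 2-byte budget. A small sketch, using the band and pixel counts quoted above:

```python
def casi_bytes_per_line(n_bands, n_pixels, bytes_per_sample=2):
    """Data volume for one image line at 16-bit radiometric resolution."""
    return n_bands * n_pixels * bytes_per_sample

full = casi_bytes_per_line(288, 512)      # complete nb x ns readout
spectral = casi_bytes_per_line(288, 39)   # spectral mode: all bands, 39 pixels
spatial = casi_bytes_per_line(16, 512)    # spatial mode: 16 bands, all pixels
print(full, spectral, spatial)            # 294912 vs 22464 and 16384 bytes
```

Both operating modes transfer well under a tenth of the full readout, which is why the complete nb x ns data cannot be transmitted.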


3.6 Microwave Remote Sensing

Radar stands for "radio detection and ranging". As mentioned before, it is an active sensing system: it uses its own energy source, microwave energy. A radar system transmits pulses in the direction of interest and records the strength and origin of "echoes", or reflections, received from objects within the system's field of view.

Radar systems may or may not produce images.

Non-imaging (ground based):
- Doppler radar, used to measure vehicle speeds
- Plan Position Indicator (PPI), used to observe weather systems or air traffic

Imaging (air based):
- Side-looking airborne radar (SLAR)

SLAR systems produce continuous strips of imagery depicting very large ground areas located adjacent to the aircraft flight line. Since clouds are transparent at microwave wavelengths, SLAR has been used to map tropical areas such as the Amazon River Basin. Started in 1971 and completed in 1976, project RADAM (Radar of the Amazon) was the largest radar mapping project ever undertaken; in it, the Amazon area was mapped for the first time. In such remote and cloud-covered areas of the world, radar systems are a prime source of information for mineral exploration, forest and range inventory, water supply and transportation management, and site suitability assessment.

Radar imagery is currently neither as available nor as well understood as other image products. An increasing amount of research is being conducted on the interaction mechanisms between microwave energy and surface targets, such as forest canopies, and on the combination of radar images with other image products.

SLAR system organization and operation are shown in Figure 3.22.

Figure 3.22. A RADAR system components and organization


Spatial Resolution of SLAR systems

The ground resolution of a SLAR system is determined by two independent sensing parameters: pulse length and antenna beam width (Figure 3.23).

The time period for a transmitted signal to travel through the air, reach the target and be scattered back to the antenna can be measured. We can then determine the distance, or 'slant range', between the antenna and the target:

Sr = ct / 2

where:

- Sr: the slant range
- c: the speed of light
- t: the time period for a returned transmitted pulse (the factor of 2 accounts for the round trip)

Figure 3.23

From Figure 3.23, it can be seen that SLAR depends on the time it takes for a transmitted pulse to be scattered back to the antenna to determine the position of a target.
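A one-line computation of the slant range from the echo delay, per Sr = ct/2 (the 100 µs delay below is an arbitrary illustrative value):

```python
C = 3.0e8  # speed of light, m/s

def slant_range_m(echo_time_s):
    """Sr = c * t / 2: the pulse travels to the target and back."""
    return C * echo_time_s / 2.0

print(slant_range_m(100e-6), "m")  # a 100-microsecond echo -> 15 km
```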

In the across-track direction, the spatial resolution is determined by the duration of the pulse (τ) and the depression angle (γ) (Figure 3.24). This resolution is called the ground range resolution (rg):

rg = cτ / (2 cos γ)


Figure 3.24. The across track spatial resolution

The along-track resolving ability of a SLAR system is called the azimuth resolution (ra):

ra = S·β

where S is the slant range and β is the antenna beam width.

Figure 3.25. The sidelobes of a RADAR signal

It is obvious that in order to minimize rg, one needs to shorten the pulse duration τ. For ra, the controlling quantity is the beam width β, which is a function of wavelength and antenna length:

β = λ / L

where L can be the actual physical length of an antenna or a synthetic one.
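The two resolution formulas can be evaluated together; the pulse duration, depression angle, wavelength, antenna length and range below are illustrative assumptions, not the parameters of any particular system.

```python
import math

C = 3.0e8  # speed of light, m/s

def ground_range_resolution(pulse_s, depression_deg):
    """rg = c * tau / (2 * cos(gamma)), across-track."""
    return C * pulse_s / (2.0 * math.cos(math.radians(depression_deg)))

def azimuth_resolution(slant_range_m, wavelength_m, antenna_m):
    """ra = S * beta, with beam width beta = lambda / L, along-track."""
    return slant_range_m * wavelength_m / antenna_m

# 0.1 us pulse at a 45 deg depression angle; X band (3 cm wavelength)
# with a 5 m antenna observing a target at 10 km slant range:
print(ground_range_resolution(0.1e-6, 45.0))    # ~21.2 m across track
print(azimuth_resolution(10_000.0, 0.03, 5.0))  # 60 m along track
```

The 60 m along-track figure shows why a real aperture of a few metres is inadequate at long ranges, motivating the synthetic aperture discussed below.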

Those systems whose beam width is controlled by the physical antenna length are called brute force
or real aperture radar.

For a real aperture radar, the physical antenna must be considerably longer than the wavelength in order to achieve a high azimuth resolution. Obviously, there is a limit beyond which the dimensions of the antenna are not realistic for mounting on board an aircraft or a satellite.

This limitation is overcome in synthetic aperture radar (SAR) systems. Such systems use a short physical antenna but, through modified data recording and processing techniques, synthesize the effect of a very long antenna. This is achieved by making use of the Doppler effect (Figure 3.26).

Figure 3.26. The use of Doppler Effect

Synthetic aperture radar records the frequency differences of the backscattered signal at different aircraft positions during the time period when the target is illuminated by the transmitted energy.

SAR records both the amplitude and the frequency of the backscattered signals of objects throughout the time period in which they are within the beam of the moving antenna. These signals are recorded on tape or on film, which leads to two types of data processing.

One of the problems associated with processing radar signals from tape is that the signal is contaminated by random noise. When displayed on a video monitor, the radar image tends to have a noisy or speckled appearance. Later, in the digital analysis section, we will discuss speckle reduction strategies.

Radar Equation Derivation

What we actually measure is the backscattered power Pr, in watts.

The antenna transmits Pt watts. At a distance R, the power density (for an antenna with gain G) is

Pt·G / (4πR²)

The target, with radar cross-section σ, intercepts and re-radiates part of this power. If the same antenna is used for both transmitting and receiving, the power received at the antenna is

Pr = Pt·G²·λ²·σ / ((4π)³·R⁴)

All parameters in this formula except σ are determined by the system; only σ is related to the ground target. Unfortunately, σ is a poorly understood parameter, which largely limits its use in remote sensing.

We know that σ is related not only to system variables, including wavelength, polarization, azimuth, landscape orientation, and depression angle, but also to landscape parameters, including surface roughness, soil moisture, vegetation cover, and micro-topography.
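Under the stated assumption of a single antenna for transmit and receive, the radar equation can be evaluated as follows; all the numbers are illustrative placeholders, not real system parameters.

```python
import math

def received_power(pt_w, gain, wavelength_m, sigma_m2, range_m):
    """Radar equation with one antenna for transmit and receive:
    Pr = Pt * G**2 * lambda**2 * sigma / ((4*pi)**3 * R**4)
    """
    return (pt_w * gain ** 2 * wavelength_m ** 2 * sigma_m2
            / ((4 * math.pi) ** 3 * range_m ** 4))

# Illustrative values only: 1 kW transmitted, gain 1000, C band (5.7 cm),
# a 10 m^2 radar cross-section at 100 km range.
print(received_power(1e3, 1e3, 0.057, 10.0, 100e3))  # ~1.6e-16 W
```

The tiny received power, falling off as R⁴, is why radar receivers need high sensitivity and why σ dominates the target-dependent part of the signal.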

- Moisture influences the dielectric constant of the target, which in turn can significantly change the backscattering pattern of the signal. Moisture also reduces the penetration capability of microwaves.

- Roughness: characterized by the standard deviation S(h) of the heights of individual surface facets.


In the field, we use an array of sticks arranged parallel to each other at a constant distance interval to measure the surface roughness.

A common definition of a rough surface is one whose S(h) exceeds one eighth of the wavelength divided by the cosine of the incidence angle:

S(h) > λ / (8 cos θ)

As we illustrated in the spectral reflectance section, a smooth surface tends to reflect all the incoming energy at an angle equal to the incidence angle, while a rough surface tends to scatter the incoming energy more or less in all directions.
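The roughness criterion is easy to apply numerically. A short sketch, with a hypothetical height variation, wavelengths and incidence angle, shows the same surface appearing rough at X band but smooth at L band:

```python
import math

def is_rough(s_h_m, wavelength_m, incidence_deg):
    """Rough if S(h) > lambda / (8 * cos(theta)), per the criterion above."""
    return s_h_m > wavelength_m / (8.0 * math.cos(math.radians(incidence_deg)))

# A 1 cm height variation at a 30 degree incidence angle:
print(is_rough(0.01, 0.03, 30.0))   # True: rough at X band (3 cm)
print(is_rough(0.01, 0.23, 30.0))   # False: smooth at L band (23 cm)
```

This wavelength dependence is why the same terrain can look bright in short-wavelength radar imagery and dark at longer wavelengths.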

- Polarization

Microwave energy can be transmitted and received by the antenna at a selected orientation of the electromagnetic field. The orientation, or polarization, of the EM field is labelled as horizontal (H) or vertical (V). The antenna can transmit using either polarization. This makes it possible for a radar system to operate in any of four modes: transmit H and receive H, transmit H and receive V, transmit V and receive H, and transmit V and receive V. By operating in different modes, the polarizing characteristics of a ground target can be obtained.

- Corner reflector

A corner reflector tends to collect the signal reflected at its foreground and return it to the antenna.


Microwave Bands

Band   Wavelength       Frequency
Ka     0.75 - 1.1 cm    40 - 26.5 GHz
K      1.1 - 1.67 cm    26.5 - 18 GHz
Ku     1.67 - 2.4 cm    18 - 12.5 GHz
X      2.4 - 3.75 cm    12.5 - 8 GHz
C      3.75 - 7.5 cm    8 - 4 GHz
S      7.5 - 15 cm      4 - 2 GHz
L      15 - 30 cm       2 - 1 GHz
P      30 - 100 cm      1 GHz - 300 MHz

Geometric Aspects

Radar uses two types of image recording systems, a slant-range image recording system and a ground-
range image recording system.

In a slant-range recording system, the spacing of targets is proportional to the time interval between the returning signals from adjacent targets.

In ground-range image recording system, the spacing is corrected to be approximately proportional to


the horizontal ground distance between ground targets.

If the terrain is flat, we can convert the slant-range spacing SR to Ground range GR
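A minimal sketch of this flat-terrain conversion, with illustrative numbers chosen to form a 6-8-10 right triangle so the result is easy to verify by eye:

    import math

    def slant_to_ground_range(slant_range_m, altitude_m):
        """Flat-terrain conversion from slant range to ground range:
        GR = sqrt(SR^2 - H^2). Valid only when SR >= H."""
        return math.sqrt(slant_range_m**2 - altitude_m**2)

    # A target at 10 km slant range seen from 6 km altitude lies 8 km
    # away on the ground.
    print(slant_to_ground_range(10_000.0, 6_000.0))  # 8000.0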


• Relief distortion

Relief displacement on SLAR images differs from that on aerial photographs: on a photograph, raised features are displaced radially outward from the nadir point, whereas on a radar image raised features are displaced toward the sensor, producing foreshortening and layover effects.

Space-borne radars

• Seasat, launched in 1978, operated for 98 days, with the following characteristics:


• Frequency: L band
• Swath width: 100 km, centered at 20° from nadir
• Polarization: HH
• Ground resolution: 25 m x 25 m

• Shuttle Imaging Radar: SIR-A, SIR-B, SIR-C

• The European Space Agency launched the ERS-1 satellite in 1991, carrying a C-band SAR sensor.

• In 1992, the Japanese JERS-1 satellite was launched with an L-band radar on board. The L-band radar has a higher penetration capability than the C-band SAR.

• Radarsat

Scheduled to be launched in mid-1995, Radarsat will carry a SAR system that is very flexible in terms of configurations of incidence angle, resolution, number of looks, and swath width:

Frequency              C band, 5.3 GHz
Altitude               792 km
Repeat cycle           16 days
Subcycle               3 days
Period                 100.7 min (14 orbits per day)
Equatorial crossing    6:00 A.M.

For more detail on satellite platforms and orbits, see Campbell's book, pp. 118-129.

Thermal Infrared Remote Sensing

3.6 Thermal Imaging

Many multispectral (MSS) systems sense radiation in the thermal infrared as well as the visible and reflected-infrared portions of the spectrum. However, remote sensing of energy emitted from the Earth's surface in the thermal infrared (3 µm to 15 µm) differs from the sensing of reflected energy. Thermal sensors use photo detectors, sensitive to the direct contact of photons on their surface, to detect emitted thermal radiation. The detectors are cooled to temperatures close to absolute zero in order to limit their own thermal emissions. Thermal sensors essentially measure the surface temperature and thermal properties of targets.


Thermal imagers are typically across-track scanners (like those described in the previous section) that detect emitted radiation in only the thermal portion of the spectrum. Thermal sensors employ one or more internal temperature references for comparison with the detected radiation, so the measurements can be related to absolute radiant temperature. The data are generally recorded on film and/or magnetic tape, and the temperature resolution of current sensors can reach 0.1 °C. For analysis, an image of relative radiant temperatures (a thermogram) is depicted in grey levels, with warmer temperatures shown in light tones and cooler temperatures in dark tones. Imagery that portrays relative temperature differences in their relative spatial locations is sufficient for most applications. Absolute temperature measurements may be calculated but require accurate calibration and measurement of the temperature references, as well as detailed knowledge of the thermal properties of the target, geometric distortions, and radiometric effects.

Because of the relatively long wavelength of thermal radiation (compared to visible radiation), atmospheric scattering is minimal. However, absorption by atmospheric gases normally restricts thermal sensing to two specific windows: 3 to 5 µm and 8 to 14 µm. Because the available energy decreases as the wavelength increases, thermal sensors generally have large IFOVs to ensure that enough energy reaches the detector to make a reliable measurement. The spatial resolution of thermal sensors is therefore usually fairly coarse relative to what is possible in the visible and reflected infrared. Thermal imagery can be acquired during the day or night (because the radiation is emitted, not reflected) and is used for a variety of applications such as military reconnaissance, disaster management (e.g., forest fire mapping), and heat loss monitoring.


3.7 Thermal Sensors and Satellites

Introduction

Thermal sensors or scanners detect emitted radiant energy. Due to atmospheric effects, these sensors usually operate in the 3 to 5 μm or 8 to 14 μm range. Most thermal remote sensing of Earth features is focused on the 8 to 14 μm range because peak emission (based on Wien's Law) for objects around 300 K (27 °C or 80 °F) occurs at 9.7 μm. Many thermal imaging sensors are on satellite platforms, although they can also be located on board aircraft or on ground-based systems. Many thermal systems are multispectral, meaning they collect data on emitted radiation across a variety of wavelengths.
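The 9.7 μm figure quoted above follows directly from Wien's displacement law, λ_max = 2898 / T (with T in kelvin and λ_max in micrometres), as this small check shows:

    def wien_peak_um(temp_kelvin):
        """Wien's displacement law: wavelength (micrometres) of peak
        blackbody emission, lambda_max = 2898 / T."""
        return 2898.0 / temp_kelvin

    # Earth-ambient surfaces near 300 K peak close to 9.7 um, which is why
    # thermal sensing concentrates on the 8 - 14 um atmospheric window.
    print(f"{wien_peak_um(300.0):.1f} um")  # 9.7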

Thermal Sensors

Thermal Infrared Multispectral Scanner (TIMS)


NASA and the Jet Propulsion Laboratory developed the Thermal Infrared Multispectral Scanner (TIMS) for exploiting mineral signature information. TIMS is a multispectral scanning system with six bands ranging from 8.2 to 12.2 μm and a spatial resolution of 18 m. TIMS is mounted on an aircraft and was primarily designed as an airborne geologic remote sensing tool. TIMS acquires mineral signature data that permit the discrimination of silicate, carbonate, and hydrothermally altered rocks. TIMS data have been used extensively in volcanology research in the western United States, the Hawaiian Islands, and Europe. The multispectral data allow for the generation of three-band color composites, similar to other multispectral data. Many materials have varying emissivities and can be identified by the variation in emitted energy.

The thermal image described here was captured by the Thermal Infrared Multispectral Scanner (TIMS) and shows Death Valley, California. A color composite has been produced using three of the thermal bands collected by TIMS. There are a variety of different materials and minerals in Death Valley with varying emissivities. In this image, Thermal Band 1 (8.2 - 8.6 μm) is displayed in blue, Thermal Band 3 (9.0 - 9.4 μm) in green, and Thermal Band 5 (10.2 - 11.2 μm) in red. Alluvial fans appear in shades of red, lavender, and blue-green; saline soils in yellow; and different saline deposits in blues and greens.


Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)


The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) is a sensor on board the Terra satellite. In addition to collecting reflective data in the visible, near, and shortwave infrared, ASTER also collects thermal infrared data. ASTER has five thermal bands ranging from 8.1 to 11.6 μm with 90 m spatial resolution. ASTER data are used to create detailed maps of land surface temperature, emissivity, reflectance, and elevation. ASTER data are available for download through EarthExplorer.

Moderate-resolution Imaging Spectroradiometer (MODIS)


As previously discussed, MODIS has high spectral resolution and collects data in a variety of wavelengths. Similar to ASTER, MODIS collects both reflective and emitted (thermal) data. MODIS has several bands that collect thermal data at 1000 m spatial resolution. MODIS also has high temporal resolution, with a one- to two-day revisit time, which makes it an excellent resource for detecting and monitoring wildfires. One of the products generated from MODIS data is the Thermal Anomalies/Fire product, which detects hotspots and fires.

MODIS Thermal Anomalies/Fire data from August 2015

Landsat
A variety of Landsat satellites have carried thermal sensors. The first Landsat satellite to collect thermal data was Landsat 3; however, this part of the sensor failed shortly after the satellite was launched. Landsat 4 and 5 included a single thermal band (band 6) on the Thematic Mapper (TM) sensor with

120 m spatial resolution that has been resampled to 30 m. A similar band was included on the Enhanced Thematic Mapper Plus (ETM+) on Landsat 7. Landsat 8 includes a separate thermal sensor known as the Thermal Infrared Sensor (TIRS). TIRS has two thermal bands, Band 10 (10.60 - 11.19 μm) and Band 11 (11.50 - 12.51 μm). The TIRS bands are acquired at 100 m spatial resolution but are resampled to 30 m in the delivered data products.

Landsat TIRS and Applications

Irrigation accounts for 80% of fresh water use in the U.S., and water usage has become an increasingly important issue, particularly in the West. Thermal infrared data from Landsat 8 are being used to estimate water use. Landsat 8 data, including visible, near-infrared, mid-infrared, and thermal data, are fed into a relatively sophisticated energy balance model that produces evapotranspiration maps. Evapotranspiration (ET) refers to the conversion of water into water vapor by the dual process of evaporation from the soil and transpiration (the escape of water through plants' stomata). For vegetated land, ET is synonymous with water consumption. Landsat data enable water resource managers and administrators to determine how much water was consumed by individual fields.

3.8 Interpretation of Thermal Images


Most thermal images are single-band images and by default are displayed as greyscale images. Lighter or brighter areas indicate areas that are warmer, while darker areas are cooler. Single-band thermal images can also be displayed in pseudo-color to better display the variation in temperature. Thermal imagery can be used for a variety of applications, including estimating soil moisture, mapping soil types, determining rock and mineral types, wildland fire management, and identifying leaks or emissions. Multiband color composites can also be created if multiple wavelengths of thermal emission are recorded; the TIMS image discussed earlier is an example.

Thermal imagery of Las Vegas and Lake Mead acquired during the day on October 12, 2015, by the Thermal Infrared Sensor (TIRS) on Landsat 8. In the greyscale image (left), cool areas are dark and warm areas are light. The pseudo-color representation (right) shows the same data as a color gradient: cool areas are blue and warm areas are red.
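Producing such a greyscale/pseudo-color pair takes only a few lines. The sketch below uses a synthetic array as a stand-in for a real thermal band, which in practice would be read from a delivered data product:

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic stand-in data: toy radiant temperatures (K) for one band.
    rng = np.random.default_rng(42)
    band = rng.normal(300.0, 5.0, size=(200, 200))

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.imshow(band, cmap="gray")       # greyscale: warm = light, cool = dark
    ax1.set_title("Greyscale")
    im = ax2.imshow(band, cmap="jet")   # pseudo-color: warm = red, cool = blue
    ax2.set_title("Pseudo-color")
    fig.colorbar(im, ax=ax2, label="Radiant temperature (K)")
    plt.show()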


Time of Day

Thermal imagery can be acquired during the day or night, but the two can produce very different results because of a variety of factors, including thermal conductivity, thermal capacity, and thermal inertia. Thermal conductivity is the property of a material to conduct heat, that is, a measure of the rate at which heat can pass through a material; for example, heat passes through metals much faster than through rocks. Thermal capacity is a measure of how well a material can store heat; water has a very high thermal capacity. Thermal inertia measures how quickly a material responds to temperature changes. Because of these factors, different materials warm and cool at different rates during the day and night, giving rise to a diurnal cycle of temperature changes for features at the Earth's surface. The diurnal cycle encompasses 24 hours. Beginning at sunrise, the Earth begins to receive mainly short-wavelength energy from the Sun. From approximately 6:00 am to 8:00 pm, the terrain intercepts the incoming short-wavelength energy and reflects much of it back into the atmosphere. Some of this energy is absorbed and then emitted as long-wave, thermal infrared radiation. Emitted thermal radiation reaches its peak during the day, usually lagging two to four hours behind the midday peak of incoming shortwave radiation owing to the time it takes to heat the soil. Daytime imagery can contain thermal "shadows" in areas that are shaded from direct sunlight, and slopes may receive differential heating depending on their orientation in relation to the sun (aspect). In the daytime image of the Las Vegas area above, the topography and topographic shadows are clearly visible.

The above graph shows the diurnal radiant temperature variation for rocks and soils compared to water. Water has relatively little temperature variation throughout the day. Dry soils and rocks, on the other hand, heat up more, and at a quicker rate, during the day; they also tend to cool more at night than water does. Around dawn and sunset, the curves for water and soils intersect. These points are known as thermal crossovers and indicate the times at which there is no difference in the radiant temperature of the two materials.
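The idea of a thermal crossover can be illustrated with a toy model: two sinusoidal diurnal curves, one with the large swing typical of dry soil and rock and one with the small swing typical of water. All amplitudes and phases below are invented for illustration, not measured values:

    import numpy as np

    # Toy diurnal radiant-temperature curves over 24 hours (degrees C).
    hours = np.linspace(0, 24, 241)
    soil = 20 + 12 * np.sin((hours - 8) * np.pi / 12)   # large daily swing
    water = 18 + 2 * np.sin((hours - 10) * np.pi / 12)  # small daily swing

    # Thermal crossovers occur where the two curves intersect, i.e. where
    # the sign of the difference changes; expect one near dawn, one near dusk.
    diff = soil - water
    crossings = hours[:-1][np.sign(diff[:-1]) != np.sign(diff[1:])]
    print(crossings)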


Water generally appears cooler than its surroundings in daytime thermal images and warmer in nighttime imagery; the actual kinetic temperature of the water has not changed significantly, but the surrounding areas have cooled. Trees generally appear cooler than their surroundings during the day and warmer at night. Paved areas appear relatively warm during both the day and night: pavement heats up quickly, and to higher temperatures than the surrounding areas, during the day, and it also loses heat relatively slowly at night, so it remains relatively warmer than surrounding features.


Things to Consider

Thermal infrared sensors can be difficult to calibrate. Changes in atmospheric moisture and varying
emissivities of surface materials can make it difficult to accurately calibrate thermal data. Thermal IR
imagery is difficult to interpret and process because there is absorption of thermal radiation by moisture
in the atmosphere. Most applications of thermal remote sensing are qualitative, meaning they are not
employed to determine the absolute surface temperature but instead to study relative differences in
radiant temperature. Thermal imagery works well to compare the relative temperatures of objects or
features in a scene.

It is important to note that thermal sensors detect radiation from the surface of objects, so this radiation may not be indicative of the internal temperature of an object. For example, the surface of a water body might be much warmer than the water several feet down, but a thermal sensor would only record the surface temperature.

There are also topographic effects to consider. For example, in the northern hemisphere, north-facing slopes receive less incoming shortwave solar radiation and will therefore be cooler. Clouds and fog will usually mask the thermal radiation from surface features; because they are generally cooler, they appear darker. Clouds will also produce thermal cloud shadows, where areas underneath clouds are cooler than the surrounding areas.


3.9 Hyperspectral Imaging


Technically, hyperspectral sensors (or imaging spectrometers, as they are also known) image the earth in hundreds of narrow, contiguous bands (typically more than a hundred), while multispectral sensors image in roughly ten wide bands. The most common hyperspectral sensors image in the visible, near-infrared, and shortwave-infrared wavelength range (~0.35 - 2.5 microns). Hyperspectral sensors measure the electromagnetic spectrum continuously, rather than piecemeal like their multispectral cousins.
Thematically, hyperspectral sensors are capable of absolute surface-material identification, while multispectral sensors are only capable of relative material delineation. As an example, historical multispectral images from NASA's LANDSAT satellite (collecting 7 bands of data over the visible, near-IR, and shortwave-IR spectrum) can be used to create maps of the surface that delineate clays from iron oxides. With today's hyperspectral imagers, hundreds of bands allow unique identification of minerals such as kaolinite vs. alunite or hematite vs. goethite.
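One common way such identification is done (not named in this text, but widely used) is the Spectral Angle Mapper, which compares each pixel's spectrum against library spectra of candidate minerals. The sketch below uses random vectors as placeholder "spectra"; real work would use measured library spectra such as those in the USGS spectral library.

    import numpy as np

    def spectral_angle(pixel, reference):
        """Spectral Angle Mapper (SAM): angle (radians) between a pixel
        spectrum and a reference spectrum; smaller angle = closer match."""
        cos_t = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))

    # Placeholder 224-band "spectra" standing in for library minerals.
    rng = np.random.default_rng(0)
    kaolinite, alunite = rng.random(224), rng.random(224)
    pixel = kaolinite + 0.05 * rng.random(224)   # noisy kaolinite-like pixel

    angles = {"kaolinite": spectral_angle(pixel, kaolinite),
              "alunite": spectral_angle(pixel, alunite)}
    print(min(angles, key=angles.get))           # -> kaolinite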
Spectral fidelity comes at a price: hyperspectral datasets are large and computationally intensive to work with (imagine 224 pieces of information, one for each band, stored for each pixel in an image). However, recent processing advances and the ever-increasing speed of computers mean that data can now be interpreted into usable mineral or other material maps in very short time periods (weeks rather than months). Though many in the industry still appreciate the 7-band LANDSAT images or the 14-band ASTER images (public-sector multispectral imagers producing data at low, government-subsidized prices), the four common thematic remote-sensing-based maps created from hyperspectral data for use in geothermal exploration (mineral maps, cultural maps, vegetation maps, and high-resolution digital photographs) are categorically more accurate, more precise, and richer in information than multispectral datasets.

Use in Geothermal Exploration


Spectral data have been used in geothermal exploration to detect and map soil-mineral, shallow
subsurface thermal, and vegetation anomalies.

Mineral Maps
Mineral maps can be used to show the presence of hydrothermal minerals and mineral assemblages. Specifically, the presence of a mineral in conjunction with other minerals may indicate an active geothermal system through a hydrothermal mineral assemblage. For example, the detection of kaolinite alone might not indicate the presence of a hydrothermal system (since kaolinite can form from weathering, not only hydrothermal alteration). However, the co-location of kaolinite, alunite, and opal (amorphous silica) could indicate active or fossil hydrothermal alteration.
Vegetation Maps
Most of the time, vegetation maps are only mildly interesting for geothermal exploration, but geothermal activity can sometimes cause vegetation stress. This is common along faults, either because of the heat itself or because changes in groundwater pH or the presence of gases such as CO2 and H2S can lead to plant stress.

dashsudhikumar@gmail.com
Page 51 of 51

The table below summarizes the spectral regions and the sensor types that operate in them.

Region               Wavelength (nanometers)
Ultra Violet         -
Visible              390 - 750
Near Infrared        750 - 1,400
Water Absorption     ~1,450
SWIR                 1,500 - 2,800
Water Absorption     ~2,900
MWIR                 3,000 - 8,000
LWIR                 8,000 - 14,000
Micro/Radio Waves    1 mm - 100 km

Passive Sensors
Types: Aerial Photography, SWIR, FLIR, TIR-1, TIR-2; Hyperspectral Imaging; Multispectral Imaging

Active Sensors
Types: LiDAR, Radar
