
GEOINFORMATICS

NITIN CHAUHAN
DEPARTMENT OF REMOTE SENSING

WHAT IS REMOTE SENSING?

Remote sensing is the art and science of acquiring information about an object without making any physical contact with it.


SOME REMOTE SENSORS

REMOTE SENSING
• “the measurement or acquisition of information of some property of an object or phenomenon, by a recording device that is not in physical or intimate contact with the object or phenomenon under study” (Colwell, 1997).


REMOTE SENSING DATA COLLECTION


• A remote sensing instrument collects information about an object or phenomenon within the instantaneous-field-of-view (IFOV) of the sensor system without being in direct physical contact with it.

• The sensor is located on a suborbital or satellite platform.

ADVANTAGES OF REMOTE SENSING
1. Provides data of large areas (Synoptic Coverage)

2. Provides data of very remote and inaccessible regions

3. Able to obtain imagery of any area over a continuous period of time, through which any anthropogenic or natural changes in the landscape can be analyzed

4. Relatively inexpensive when compared to employing a team of surveyors

5. Easy and rapid collection of data

6. Rapid production of maps for interpretation



LIMITATIONS OF REMOTE SENSING
1) The interpretation of imagery requires a certain skill level

2) Needs cross verification with ground (field) survey data

3) Data from multiple sources may create confusion

4) Objects can be misclassified or confused

5) Distortions may occur in an image due to the relative motion of sensor and source


ENERGY TRANSFER

• Energy may be transferred in three ways:

• Conduction – Energy may be conducted directly from one object to another, as when a pan is in direct physical contact with a hot burner.

• Convection – The Sun bathes the Earth’s surface with radiant energy, causing the air near the ground to increase in temperature. The less dense air rises, creating convectional currents in the atmosphere.

• Radiation – Electromagnetic energy in the form of electromagnetic waves may be transmitted through the vacuum of space from the Sun to the Earth.


ELECTROMAGNETIC ENERGY INTERACTIONS

• Energy recorded by remote sensing systems undergoes fundamental interactions which are worth understanding in order to interpret the remotely sensed data.

• For example, if the energy being remotely sensed comes from the Sun, the energy:

– is radiated by atomic particles at the source (the Sun),
– propagates through the vacuum of space at the speed of light,
– interacts with the Earth's atmosphere,
– interacts with the Earth's surface,
– interacts with the Earth's atmosphere once again, and
– finally reaches the remote sensor, where it interacts with various optical systems, filters, emulsions, or detectors.


ENERGY-MATTER INTERACTIONS IN THE ATMOSPHERE, AT THE STUDY AREA, AND AT THE REMOTE SENSOR DETECTOR


ELECTROMAGNETIC RADIATION
MODELS
• Two different models can be used to understand the creation of electromagnetic radiation, its propagation through space, and its interaction with other matter:

– the wave model, and
– the particle model.


WAVE MODEL OF ELECTROMAGNETIC RADIATION

• James Clerk Maxwell in 1860 conceptualized electromagnetic radiation (EMR) as an electromagnetic wave that travels through space at the speed of light, c, which is 3 × 10⁸ meters per second.

• The electromagnetic wave consists of two fluctuating fields—one electric and the other magnetic.

• The two vectors are at right angles (orthogonal) to one another, and both are perpendicular to the direction of travel.


WAVE MODEL OF ELECTROMAGNETIC RADIATION
• The amplitude, α, is the peak value of the wave.

• The larger the amplitude, the higher the energy of the wave.

• In imaging by active sensors, the amplitude of the detected signal is used as an intensity measure.

• The amount of time needed by an EM wave to complete one cycle is called the period
of the wave.

• The reciprocal of the period is called the frequency of the wave.

• Thus, the frequency is the number of cycles of the wave that occur in one second.

• Frequency is measured in hertz (1Hz = 1 cycle per second).


WAVE MODEL OF ELECTROMAGNETIC ENERGY

• The relationship between the wavelength, λ, and frequency, ν, of electromagnetic radiation is based on the following formula, where c is the speed of light:

c = λν    ν = c / λ    λ = c / ν

• Frequency, ν, is inversely proportional to wavelength, λ.

• The longer the wavelength, the lower the frequency, and vice versa.
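The inverse relationship can be checked numerically. A minimal Python sketch (the example wavelengths are illustrative):

```python
# Wave model: c = lambda * nu, so nu = c / lambda and lambda = c / nu.
C = 3.0e8  # speed of light, m/s

def frequency_hz(wavelength_m):
    """Frequency (Hz) of an EM wave with the given wavelength (m)."""
    return C / wavelength_m

# Illustrative check: green light vs. a 5 cm microwave.
print(frequency_hz(0.55e-6))  # ~5.5e14 Hz -- shorter wavelength, higher frequency
print(frequency_hz(0.05))     # ~6.0e9 Hz  -- longer wavelength, lower frequency
```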


WAVE MODEL OF ELECTROMAGNETIC ENERGY

• The electromagnetic energy from the Sun travels across the intervening 150 million km of space to the Earth in about eight minutes.

• The Sun produces a continuous spectrum of electromagnetic radiation, ranging from very short, extremely high frequency gamma and cosmic waves to long, very low frequency radio waves.


STEFAN-BOLTZMANN LAW

• Using the wave model, it is possible to characterize the energy of the Sun, which represents the initial source of most of the electromagnetic energy recorded by remote sensing systems (except radar).

• The Sun is assumed to be a 6,000 K blackbody.

• The total emitted radiation (Mλ) from a blackbody is proportional to the fourth power of its absolute temperature. This is known as the Stefan-Boltzmann law and is expressed as:

Mλ = σT⁴

• where σ is the Stefan-Boltzmann constant, 5.6697 × 10⁻⁸ W m⁻² K⁻⁴. Thus, the amount of energy emitted by an object such as the Sun or the Earth is a function of its temperature.


WIEN’S DISPLACEMENT LAW

• In addition to computing the total amount of energy exiting a theoretical blackbody such as the Sun, we can determine its dominant wavelength (λmax) based on Wien's displacement law:

λmax = k / T

• where k is a constant equaling 2898 µm K, and T is the absolute temperature in kelvin. Therefore, as the Sun approximates a 6000 K blackbody, its dominant wavelength (λmax) is 0.48 µm:

0.48 µm = 2898 µm K / 6000 K        9.7 µm = 2898 µm K / 300 K

• The Earth approximates a 300 K (27 ˚C) blackbody and has a dominant wavelength at approximately 9.7 µm.
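The two dominant wavelengths quoted above follow directly from the formula; a minimal Python sketch:

```python
# Wien's displacement law: lambda_max = k / T, with k = 2898 um K.
K_WIEN_UM_K = 2898.0

def dominant_wavelength_um(temp_k):
    """Peak blackbody emission wavelength (um) at absolute temperature temp_k."""
    return K_WIEN_UM_K / temp_k

print(dominant_wavelength_um(6000))  # Sun:   ~0.48 um (visible)
print(dominant_wavelength_um(300))   # Earth: ~9.7 um (thermal infrared)
```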


PARTICLE MODEL OF
ELECTROMAGNETIC ENERGY

• Before 1905, light was thought of as a smooth and continuous wave.

• Albert Einstein found that when light interacts with electrons, it has a different character.

• Light is composed of many individual bodies called photons, which carry particle-like properties such as energy and momentum.

• Thus, we sometimes describe electromagnetic energy in terms of its wave-like properties. But when the energy interacts with matter, it is useful to describe it as discrete packets of energy, or quanta.


PARTICLE MODEL OF
ELECTROMAGNETIC ENERGY

• Electrons are the tiny negatively charged particles that move around the positively charged nucleus of an atom.

• The interaction between the positively charged nucleus and the negatively charged electron keeps the electron in orbit.

• Each electron's motion is restricted to a definite range from the nucleus.

• The allowable orbital paths of electrons about an atom might be thought of as energy classes or levels.

PARTICLE MODEL OF ELECTROMAGNETIC ENERGY

• An amount of energy is required to move an electron up at least one energy level.

• Once an electron is in a higher orbit, it possesses potential energy.

• After about 10⁻⁸ seconds, the electron falls back to the atom's lowest empty energy level or orbit and gives off radiation.

• The energy that is left over when the electrically charged electron moves from an excited state to a de-excited state is emitted by the atom as a packet of electromagnetic radiation: a particle-like unit of light called a photon.


QUANTUM THEORY OF EMR

• Niels Bohr and Max Planck recognized the discrete nature of exchanges of radiant energy and proposed the quantum theory of electromagnetic radiation.

• This theory states that energy is transferred in discrete packets called quanta or photons. The relationship between the frequency of radiation expressed by wave theory and the quantum is:

Q = h × ν = hc / λ

• where Q is the energy of a quantum measured in joules, h is the Planck constant (6.626 × 10⁻³⁴ J s), and ν is the frequency of the radiation.
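A minimal Python sketch of the quantum energy relationship (the example wavelengths are illustrative):

```python
# Quantum energy: Q = h * nu = h * c / lambda, in joules.
H = 6.626e-34  # Planck constant, J s
C = 3.0e8      # speed of light, m/s

def photon_energy_j(wavelength_m):
    """Energy (J) of a single quantum (photon) at the given wavelength (m)."""
    return H * C / wavelength_m

# Shorter wavelength -> higher-energy quanta:
print(photon_energy_j(0.45e-6))  # blue photon:       ~4.4e-19 J
print(photon_energy_j(10.0e-6))  # thermal-IR photon: ~2.0e-20 J
```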

SOURCES OF
ELECTROMAGNETIC ENERGY
• The Sun is a prime source of EM energy, but it is not the only one.

• All matter with a temperature above absolute zero emits EM energy because of molecular agitation.

• The global mean temperature of the Earth’s surface is 288 K. Earth’s surface features, therefore, emit EM energy.

• The Sun’s temperature is about 6000 K.

• The Sun emits 41% of its energy as visible light, 9% at wavelengths shorter than blue (ultraviolet), and 50% as infrared radiation.


SOURCES OF
ELECTROMAGNETIC ENERGY
• Thermonuclear fusion taking place within the Sun yields a continuous spectrum of electromagnetic energy.

• The 5770–6000 kelvin (K) temperature of this process produces a large amount of relatively short wavelength energy that travels through the vacuum of space at the speed of light.

• Some of this energy is intercepted by the Earth, where it interacts with the atmosphere and surface materials.

• The Earth reflects some of the energy directly back out to space, or it may absorb the short wavelength energy and then re-emit it at a longer wavelength.


BLACK BODY & BLACK BODY RADIATION

• A black-body is a theoretical object.

• A black-body absorbs 100% of the radiation that hits it and does not reflect any; thus, it appears perfectly black.

• A black-body has the capability of re-emitting all the energy it receives.

• A black-body has the maximum emissivity of 1.

• A black-body emits energy at every wavelength .

• The energy emitted by a black-body is called black-body radiation.

• A blackbody can have different temperatures.

• The temperature of the black-body determines the most prominent wavelength of black-body
radiation.


BLACK BODY & BLACK BODY RADIATION

• At room temperature, black-bodies emit predominantly infrared energy.

• When a black-body is heated beyond about 1,273 K (1,000 ˚C), emission of visible light becomes dominant, moving from red through orange, yellow, and white (at 6000 K) before ending up at blue, beyond which the emission includes increasing amounts of ultraviolet radiation.

• At 6000 K a blackbody emits radiant energy at all visible wavelengths equally.

• A higher temperature corresponds to a greater contribution of radiation with shorter wavelengths.


ELECTROMAGNETIC SPECTRUM
• The electromagnetic spectrum is a continuous sequence of electromagnetic energy arranged according to wavelength or frequency.

• The different portions of the spectrum are, namely: gamma rays, X-rays, UV radiation, visible radiation (light), infrared radiation, microwaves, and radio waves.

• Each of these named portions represents a range of wavelengths, not one specific wavelength.

• The EM spectrum is continuous and does not have any clear-cut class boundaries.


ELECTROMAGNETIC SPECTRUM

• Visible light (VIS): 0.4–0.7 µm
– Blue (B): 0.4–0.5 µm
– Green (G): 0.5–0.6 µm
– Red (R): 0.6–0.7 µm
• Visible–photographic infrared: 0.5–0.9 µm
• Reflective infrared (IR): 0.7–3.0 µm
– Near infrared (NIR): 0.7–1.3 µm
– Short-wave infrared (SWIR): 1.3–3.0 µm
• Thermal infrared (TIR): 3–5 µm, 8–14 µm
• Microwave: 1 mm – 100 cm
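These named regions can be kept in a simple lookup structure. A minimal Python sketch using only the limits listed above (the dictionary and function names are mine; the thermal entry merges the two sub-windows for brevity):

```python
# Spectral regions from the slide, limits in micrometers.
SPECTRAL_REGIONS = {
    "blue":                (0.4, 0.5),
    "green":               (0.5, 0.6),
    "red":                 (0.6, 0.7),
    "near infrared":       (0.7, 1.3),
    "short-wave infrared": (1.3, 3.0),
    "thermal infrared":    (3.0, 14.0),    # slide lists 3-5 and 8-14 um
    "microwave":           (1000.0, 1e6),  # 1 mm - 100 cm, expressed in um
}

def region_of(wavelength_um):
    """Name of the first listed region containing the wavelength (um)."""
    for name, (lo, hi) in SPECTRAL_REGIONS.items():
        if lo <= wavelength_um < hi:
            return name
    return "outside listed regions"

print(region_of(0.55))  # green
print(region_of(10.0))  # thermal infrared
```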


ULTRAVIOLET SPECTRUM
(GERMAN SCIENTIST JOHANN WILHELM RITTER - 1801)

• The ultraviolet region is subdivided into:

– the near ultraviolet (sometimes known as UV-A; 0.32–0.40 μm),
– the far ultraviolet (UV-B; 0.28–0.32 μm), and
– the extreme ultraviolet (UV-C; below 0.28 μm).

– Near-ultraviolet radiation is known for its ability to induce fluorescence, the emission of visible radiation, in some materials.

– UV radiation reveals some properties of minerals.


VISIBLE SPECTRUM
(ISAAC NEWTON - 1665-66)

• It constitutes a very small portion of the spectrum, which is of great significance in remote
sensing.

• Limits of the visible spectrum are defined by the sensitivity of the human visual system.

• Isaac Newton, in 1665 and 1666, by using a prism revealed that visible light can be divided into three segments.

• Primary additives / primary colours: 0.4 to 0.5 μm (blue), 0.5 to 0.6 μm (green), and 0.6 to 0.7 μm (red).


VISIBLE SPECTRUM
• Primary colors are defined such that no single primary can be formed from a mixture of the other two, and that all other colors can be formed by mixing the three primaries in appropriate proportions.

• Equal proportions of the three additive primaries combine to form white light.

• The color of an object is defined by the color of the light that it reflects.


VISIBLE SPECTRUM

• Representations of colors in films, paintings, and similar images are formed by combinations of the three subtractive primaries that define the colors of pigments and dyes.

• Each of the three subtractive primaries absorbs a third of the visible spectrum.

INFRARED SPECTRUM
(BRITISH ASTRONOMER WILLIAM HERSCHEL - 1800)

• Wavelengths longer than the red portion of the visible spectrum are designated as the infrared region.

• It extends from 0.72 to 15 μm—making it more than 40 times as wide as the visible light spectrum.

• It consists of the following regions:
– Near infrared (0.7–1.3 µm) and mid infrared (1.3–3 µm): reflective infrared
– Far infrared (3–15 µm): thermal (emitted) infrared


MICROWAVE ENERGY
(SCOTTISH PHYSICIST JAMES CLERK MAXWELL AND THE GERMAN
PHYSICIST HEINRICH HERTZ)

• The longest wavelengths commonly used in remote sensing are those from about 1 mm to 1 m in wavelength.

• The shortest wavelengths in this range have much in common with the thermal energy of
the far infrared.

• The longer wavelengths of the microwave region merge into the radio wavelengths used
for commercial broadcasts.


ENERGY INTERACTION IN THE ATMOSPHERE
• All radiation used for remote sensing must pass through the Earth’s atmosphere.

• Before the Sun’s energy reaches the Earth’s surface, three interactions relevant to remote sensing happen in the atmosphere: absorption, transmission, and scattering.

• Sensors carried by low-flying aircraft experience negligible effects of the atmosphere on image quality.

• In contrast, energy that reaches sensors carried by Earth satellites must pass through the
entire depth of the Earth’s atmosphere so atmospheric effects may have substantial impact on
the quality of images and data that the sensors generate.


SCATTERING
• Scattering is the redirection of electromagnetic energy by particles suspended in the atmosphere or by large
molecules of atmospheric gases.

• The amount of scattering that occurs depends on


– sizes of these particles,
– their abundance,
– the wavelength of the radiation, and
– the depth of the atmosphere through which the energy is traveling.

• The effect of scattering is to redirect radiation so that a portion of the incoming solar beam is directed back toward space, as well as toward the Earth’s surface.


SCATTERING

• Scattering differs from reflection in that the direction associated with scattering is
unpredictable, whereas the direction of reflection is predictable. There are essentially
three types of scattering:

• Rayleigh,

• Mie, and

• Non-selective.


ATMOSPHERIC LAYERS AND CONSTITUENTS

• Major subdivisions of the atmosphere and the types of molecules and aerosols found in each layer.


RAYLEIGH SCATTERING
(BRITISH SCIENTIST LORD J. W. S. RAYLEIGH IN THE LATE 1890S)

• Rayleigh scattering occurs when the diameter of the matter (usually air molecules) is many times smaller than the wavelength of the incident electromagnetic radiation.

• Particles causing Rayleigh scattering could be very small specks of dust or some of the larger molecules of atmospheric gases, such as nitrogen (N2) and oxygen (O2).

• Because it can occur in the absence of atmospheric impurities, it is sometimes referred to as clear atmosphere scattering.

• It is the dominant scattering process high in the atmosphere, up to altitudes of 9–10 km, the upper limit for atmospheric scattering.


RAYLEIGH SCATTERING

• All scattering is accomplished through absorption and re-emission of radiation by atoms or molecules.

• It is impossible to predict the direction in which a specific atom or molecule will emit a photon; hence, scattering.

• The energy required to excite an atom is associated with short-wavelength, high frequency
radiation.

• The amount of scattering is inversely related to the fourth power of the radiation's wavelength.
For example, blue light (0.4 µm) is scattered 16 times more than near-infrared light (0.8 µm).


RAYLEIGH SCATTERING

• The approximate amount of Rayleigh scattering in the atmosphere in optical wavelengths (0.4–0.7 µm) may be computed using the Rayleigh scattering cross-section (τm) algorithm:

τm = 8π³ (n² − 1)² / (3 N² λ⁴)

• where n = refractive index, N = number of air molecules per unit volume, and λ = wavelength. The amount of scattering is inversely related to the fourth power of the radiation’s wavelength.
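A minimal Python sketch of this cross-section; the refractive index and molecular density below are assumed typical sea-level values, since the slide gives only the symbols:

```python
import math

# Rayleigh cross-section: tau_m = 8*pi^3*(n^2 - 1)^2 / (3 * N^2 * lambda^4)
N_AIR = 1.000293       # assumed refractive index of air at visible wavelengths
N_MOLECULES = 2.55e25  # assumed air molecules per cubic meter near sea level

def rayleigh_cross_section_m2(wavelength_m):
    """Approximate Rayleigh scattering cross-section (m^2) per molecule."""
    return (8.0 * math.pi ** 3 * (N_AIR ** 2 - 1.0) ** 2
            / (3.0 * N_MOLECULES ** 2 * wavelength_m ** 4))

# The lambda^-4 dependence: blue (0.4 um) vs. near-infrared (0.8 um).
ratio = rayleigh_cross_section_m2(0.4e-6) / rayleigh_cross_section_m2(0.8e-6)
print(ratio)  # 16.0 -- blue light is scattered 16 times more
```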


RAYLEIGH SCATTERING
• The intensity of Rayleigh scattering varies inversely with the fourth power of the wavelength (λ⁻⁴).

RAYLEIGH SCATTERING

• Rayleigh scattering is responsible for the blue sky. The short violet and blue wavelengths
are more efficiently scattered than the longer orange and red wavelengths.

• At sunset, observers on the Earth’s surface see only those wavelengths that pass through
the longer atmospheric path caused by the low solar elevation; because only the longer
wavelengths penetrate this distance without attenuation by scattering, we see only the
reddish component of the solar beam.



MIE SCATTERING
• Mie scattering takes place when there are essentially spherical particles present in
the atmosphere with diameters approximately equal to the wavelength of
radiation being considered.

• For visible light, water vapor, dust, and other particles ranging from a few tenths of a
micrometer to several micrometers in diameter are the main scattering agents.

• The amount of scatter is greater than Rayleigh scatter and the wavelengths scattered
are longer.

• Pollution also contributes to beautiful sunsets and sunrises. The greater the amount
of smoke and dust particles in the atmospheric column, the more violet and blue
light will be scattered away and only the longer orange and red wavelength light will
reach our eyes.

• Mie scattering tends to be greatest in the lower atmosphere (0 to 5 km), where larger
particles are abundant.


NON-SELECTIVE SCATTERING
• Non-selective scattering is produced when there are particles in the atmosphere several times the
diameter of the radiation being transmitted.

• This type of scattering is non-selective, i.e. all wavelengths of light are scattered, not just blue, green, or
red.

• Water droplets, which make up clouds and fog banks, scatter all wavelengths of visible light equally well,
causing the cloud to appear white (a mixture of all colors of light in approximately equal quantities
produces white).

• Scattering can severely reduce the information content of remotely sensed data, to the point that the imagery loses contrast and it is difficult to differentiate one object from another.


ABSORPTION
• Absorption is the process by which radiant energy is absorbed and converted into other forms of energy.

• Absorption of radiation occurs when the atmosphere prevents, or strongly attenuates, transmission of
radiation or its energy through the atmosphere.

• Energy acquired by the atmosphere is subsequently reradiated at longer wavelengths.

• Three gases are responsible for most absorption of solar radiation:

1. O3 (Ozone)
2. CO2 (Carbon dioxide)
3. H2O (Water vapor)


ABSORPTION - OZONE

• Ozone (O3) is formed by the interaction of high-energy ultraviolet radiation with oxygen
molecules (O2) high in the atmosphere.

• Maximum concentrations of ozone are found at altitudes of about 20–30 km in the stratosphere.

• Ozone plays an important role in the Earth’s energy balance.

• Absorption of the high energy, short-wavelength portions of the ultraviolet spectrum (mainly
less than 0.24 μm) prevents transmission of this radiation to the lower atmosphere.


ABSORPTION – CARBON DIOXIDE
• Carbon dioxide (CO2) also occurs in low concentrations (about 0.03% by volume of a dry
atmosphere), mainly in the lower atmosphere.

• Aside from local variations caused by volcanic eruptions and mankind’s activities, the distribution of CO2 in the lower atmosphere is probably relatively uniform.

• Carbon dioxide is important in remote sensing because it is effective in absorbing radiation in the mid and far infrared regions of the spectrum.

• Carbon dioxide absorbs infrared radiation (IR) in three narrow bands of wavelengths, which
are 2.7, 4.3 and 15 micrometers (µm).


ABSORPTION – WATER VAPOR (H2O)
• Water vapor (H2O) is commonly present in the lower atmosphere (below about 100 km)
in amounts that vary from 0 to about 3% by volume.

• The role of atmospheric water vapor, unlike those of ozone and carbon dioxide, varies
greatly with time and location.

• Two of the most important regions of absorption are in several bands between 5.5 and 7.0
μm and above 27.0 μm; absorption in these regions can exceed 80% if the atmosphere
contains appreciable amounts of water vapor.


ATMOSPHERIC WINDOWS
• Earth’s atmosphere selectively transmits energy of certain wavelengths.

• Those wavelengths that are relatively easily transmitted through the atmosphere are referred to as atmospheric
windows.

• Positions, extents, and effectiveness of atmospheric windows are determined by the absorption spectra of
atmospheric gases.

• Atmospheric windows define those wavelengths that can be used for forming images.

• Energy at other wavelengths, not within the windows, is severely attenuated by the atmosphere and therefore
cannot be effective for remote sensing.


ATMOSPHERIC WINDOWS
Major Atmospheric Windows

Ultraviolet and visible: 0.30–0.75 μm; 0.77–0.91 μm
Near infrared: 1.55–1.75 μm; 2.05–2.4 μm
Thermal infrared: 8.0–9.2 μm; 10.2–12.4 μm
Microwave: 7.5–11.5 mm; 20.0+ mm
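A sensor band is only useful if it falls within one of these windows; a minimal Python sketch of that check, using only the limits from the table (the list and function names are mine):

```python
# Major atmospheric windows from the table, limits in micrometers
# (microwave rows converted: 7.5-11.5 mm and 20+ mm).
ATMOSPHERIC_WINDOWS_UM = [
    (0.30, 0.75), (0.77, 0.91),   # ultraviolet and visible
    (1.55, 1.75), (2.05, 2.40),   # near infrared
    (8.0, 9.2), (10.2, 12.4),     # thermal infrared
    (7_500.0, 11_500.0),          # microwave, 7.5-11.5 mm
    (20_000.0, float("inf")),     # microwave, 20.0+ mm
]

def usable_for_imaging(wavelength_um):
    """True if the wavelength (um) lies inside a listed atmospheric window."""
    return any(lo <= wavelength_um <= hi for lo, hi in ATMOSPHERIC_WINDOWS_UM)

print(usable_for_imaging(0.55))  # True  -- visible window
print(usable_for_imaging(6.0))   # False -- strong water-vapor absorption
```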


INTERACTIONS WITH SURFACES


• As electromagnetic energy reaches the Earth’s surface, it must be reflected, absorbed, or transmitted.

• The proportions accounted for by each process depend on the nature of the surface, the wavelength of the energy, and the angle of illumination.

• The radiation budget equation states that the total amount of radiant flux in specific wavelengths (λ) incident to the terrain (ϕiλ) is equal to the amount reflected from, absorbed by, and transmitted through the terrain:

ϕiλ = ϕreflected + ϕabsorbed + ϕtransmitted

INTERACTIONS WITH
SURFACES- REFLECTION
• Reflection occurs when a ray of light is redirected as it strikes a nontransparent surface.

• The nature of the reflection depends on the size of surface irregularities (roughness or smoothness) in relation to the wavelength of the radiation considered.

• If the surface is smooth relative to wavelength, specular reflection occurs.

• Specular reflection redirects all, or almost all, of the incident radiation in a single direction.



INTERACTIONS WITH
SURFACES- REFLECTION
• The angle of incidence is equal to the angle of reflection .

• For visible radiation, specular reflection can occur with surfaces such as a
mirror, smooth metal, or a calm water body.


INTERACTIONS WITH
SURFACES- REFLECTION
• If a surface is rough relative to wavelength, it acts as a diffuse, or isotropic, reflector.

• Energy is scattered more or less equally in all directions.

• For visible radiation, many natural surfaces might behave as diffuse reflectors, including, for example, uniform grassy surfaces.

• A perfectly diffuse reflector (known as a Lambertian surface) would have equal brightness when observed from any angle.


INTERACTIONS WITH
SURFACES- REFLECTION
• The reflection characteristics of a surface are described by the bidirectional reflectance distribution function (BRDF).

• The BRDF is a mathematical description of the optical behavior of a surface with respect to angles of illumination and observation, given that it has been illuminated with a parallel beam of light at a specified azimuth and elevation.


INTERACTIONS WITH SURFACES- TRANSMISSION

• Transmission of radiation occurs when radiation passes through a substance without significant attenuation.

• For a given thickness, or depth, of a substance, the ability of the medium to transmit energy is measured as the transmittance (t):

t = Transmitted radiation / Incident radiation


INTERACTIONS WITH SURFACES- TRANSMISSION

• Water bodies are capable of transmitting significant amounts of radiation.

• The transmittance of many materials varies greatly with wavelength.

• For example, plant leaves are generally opaque to visible radiation but transmit significant amounts of radiation in the infrared.


INTERACTIONS WITH SURFACES- FLUORESCENCE

• Fluorescence occurs when an object illuminated with radiation of one wavelength emits
radiation at a different wavelength.

• The most familiar examples are some sulfide minerals, which emit visible radiation when
illuminated with ultraviolet radiation.


INTERACTIONS WITH SURFACES- POLARIZATION
• The polarization of electromagnetic radiation denotes the orientation of the oscillations within the electric field of electromagnetic energy.

• A light wave’s electric field is typically oriented perpendicular to the wave’s direction of travel; the field may have a preferred orientation, or it may rotate as the wave travels.

• Sunlight within the atmosphere has a mixture of polarizations; when it illuminates surfaces at steep angles (i.e., when the sun is high in the sky), the reflected radiation tends to also have a mixture of polarizations.

• When the sun illuminates a surface at low angles (i.e., the sun is near the horizon), many surfaces tend to preferentially reflect the horizontally polarized component of the solar radiation.

• Polarization has broader significance in the practice of microwave remote sensing.


Interactions with Surfaces- Reflectance


 Reflectance (Rrs) is expressed as the relative brightness of a surface as measured for a specific wavelength interval:

Reflectance = Observed brightness / Irradiance

 As a ratio, it is a dimensionless number (between 0 and 1), but it is commonly expressed as a percentage.

 In the usual practice of remote sensing, Rrs is not directly measurable, because normally we can observe only the brightness and must estimate the irradiance.


33
10/7/2016

Radiometric Units

 Radiant power: the radiant energy per unit time. Unit = watt (1 W = 1 joule per second).

 Radiant emittance: the power emitted from a surface, measured in watts per square meter (W m⁻²).

 Spectral radiant emittance: characterizes the radiant emittance per unit wavelength; it is measured in W m⁻² µm⁻¹.

 Radiance: describes the amount of energy being emitted or reflected from a particular area per unit solid angle and per unit time. Radiance is usually expressed in W m⁻² sr⁻¹.


Radiometric Units

 Irradiance: the amount of incident energy on a surface per unit area and per unit time. Irradiance is usually expressed in W m⁻².

 Emissivity: the emitting ability of a real material, expressed as a dimensionless ratio called emissivity (with values between 0 and 1).

 The emissivity of a material specifies how well a real body emits energy as compared with a black-body.

Irradiance and Exitance

 The amount of radiant flux incident upon a surface per unit area of that surface is called irradiance (Eλ = Φλ / A).

 The amount of radiant flux leaving per unit area of the plane surface is called exitance (Mλ = Φλ / A).

 Both quantities are measured in watts per meter squared (W m⁻²).


Radiance

 Radiance (Lλ) is the radiant flux per unit solid angle leaving an extended source in a given direction per unit projected source area in that direction, and is measured in watts per meter squared per steradian (W m⁻² sr⁻¹).

 We are only interested in the radiant flux in certain wavelengths (Φλ) leaving the projected source area (A) within a certain direction (Θ) and solid angle (Ω):

Lλ = Φλ / (Ω · A · cos Θ)

Various paths of radiance received by the Remote Sensing System

 Path 1 contains spectral solar irradiance (Eoλ) that was attenuated very little before illuminating the terrain within the IFOV.

 The amount of irradiance reaching the terrain is a function of the atmospheric transmittance at this angle (Tθo).

Various paths of radiance received by the Remote Sensing System

 Path 2 contains spectral diffuse sky irradiance (Edλ) that never even reaches the Earth’s surface (the target study area) because of scattering in the atmosphere.

36
10/7/2016

Various paths of radiance received by the Remote Sensing System

 Path 3 contains energy from the Sun that has undergone some Rayleigh, Mie, and/or nonselective scattering and perhaps some absorption and reemission before illuminating the study area.

 This quantity is referred to as the downward reflectance of the atmosphere (Eddλ).

Various paths of radiance received by the Remote Sensing System

 Path 4 contains radiation that was reflected or scattered by nearby terrain (ρλn) covered by snow, concrete, soil, water, and/or vegetation into the IFOV of the sensor system.

 This energy does not actually illuminate the study area of interest.

 Path 2 and Path 4 combine to produce what is commonly referred to as path radiance, Lp.


Various paths of radiance received by the Remote Sensing System

 Path 5 is energy that was also reflected from nearby terrain into the atmosphere, but then scattered or reflected onto the study area.


Spectral Signatures & Spectral reflectance curve


 Reflectance characteristics of earth surface features can be quantified by measuring the portion of incident energy that is reflected.

 It is measured as a function of wavelength and is termed spectral reflectance (ρλ):

ρλ = (energy of wavelength λ reflected from object / energy of wavelength λ incident on object) × 100

 A graph of the spectral reflectance of an object as a function of wavelength is termed a spectral reflectance curve, and it gives insight into the spectral characteristics of the object.
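A minimal Python sketch of building such a curve; the reflected-energy values echo the green-leaf percentages quoted later in this deck and are illustrative:

```python
# Spectral reflectance: rho = (reflected energy / incident energy) * 100.
wavelengths_um = [0.45, 0.55, 0.65, 0.90]
incident_energy = [100.0, 100.0, 100.0, 100.0]
reflected_energy = [6.0, 11.0, 5.0, 76.0]  # illustrative green-leaf values

def spectral_reflectance_pct(reflected, incident):
    """Percent reflectance per wavelength -- a spectral reflectance curve."""
    return [100.0 * r / i for r, i in zip(reflected, incident)]

curve = spectral_reflectance_pct(reflected_energy, incident_energy)
print(list(zip(wavelengths_um, curve)))  # low blue/red, green peak, high NIR
```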


Remote Sensing of Vegetation


 Photosynthesis is an energy-storing process that takes place in leaves and other green parts of plants in the presence of light:

6CO2 + 6H2O + light energy → C6H12O6 + 6O2

 Plants have adapted their internal and external structure to perform photosynthesis.

 This structure and its interaction with EM energy have a direct impact on how leaves and canopies appear spectrally when recorded using remote sensing instruments.


Dominant Factors Controlling Leaf Reflectance


 The dominant factors controlling leaf reflectance in the region from 0.35 to 2.6 µm can be understood under three different headings:

1. Visible light interaction with pigments in mesophyll cells.

2. Near- Infrared energy interaction within spongy mesophyll cells.

3. Middle infrared energy interaction with water in spongy mesophyll.


Visible light interaction with pigments in mesophyll cells

 The structure of leaves is highly variable, depending upon species and environmental conditions during growth.

Visible light interaction with pigments in mesophyll cells

 Chlorophyll a peak absorption is at 0.43 and 0.66 µm.

 Chlorophyll b peak absorption is at 0.45 and 0.65 µm.

 Optimum chlorophyll absorption windows are: 0.45–0.52 µm and 0.63–0.69 µm.


Visible light interaction with pigments in mesophyll cells

 Other pigments present in palisade mesophyll cells are:
 Yellow Carotenes
 Pale Yellow Xanthophyll
 β Carotene
 Phycoerythrin
 Phycocyanin


Visible light interaction with pigments in mesophyll cells

 β-carotene shows a strong absorption band centered at about 0.45 µm.

 Phycoerythrin absorbs predominantly in the green region, centered at about 0.55 µm, allowing blue and red light to be reflected.

 Phycocyanin absorbs primarily in the green and red regions, centered at about 0.62 µm, allowing much of the blue and some of the green light to be reflected (B + G = cyan).


Visible light interaction with pigments in mesophyll cells

 Chlorophyll a and b have similar absorption bands in the blue region; therefore, they tend to dominate and mask the effect of the other pigments present.

 When a plant undergoes senescence in the fall or encounters stress, the chlorophyll pigments disappear, allowing the carotenes and other pigments to become dominant.

 The absorption characteristics of plant canopies may be coupled with other remotely sensed data to identify vegetation stress, yield, and other hybrid variables.

Near-infrared energy interaction within spongy mesophyll cells

 In a healthy green leaf, the near-infrared reflectance increases dramatically in the region from 700 to 1200 nm.

 Healthy plants absorb radiant energy in the blue and red portions of the spectrum for photosynthesis, but beyond red the reflectance and transmittance increase and the absorption decreases.

 The reason for this is that if plants absorbed this infrared energy, they would become too warm and their proteins would be irreversibly denatured.


Near-infrared energy interaction within spongy mesophyll cells

 The spongy mesophyll layer in a leaf controls the amount of infrared energy that is reflected.

 In the near-infrared region, healthy green vegetation is characterized by high reflectance (40–60 percent), high transmittance (40–60 percent) through the leaf onto underlying leaves, and relatively low absorptance (5–10 percent).

 The high diffuse reflectance of near-infrared (0.7–1.3 µm) energy from plant leaves is due to internal scattering at the cell wall–air interfaces.

Near-infrared energy interaction within spongy mesophyll cells

 The remaining 45–50% of the energy transmitted through a leaf can be reflected once again by leaves below it. This is called leaf additive reflectance.


Near-infrared energy interaction within spongy mesophyll cells

 The greater the number of leaf layers in a healthy, mature canopy, the greater the infrared reflectance.

 If the canopy is composed of only a single, sparse leaf layer, then much of the near-infrared energy will be transmitted through the leaf and absorbed by the ground cover beneath.

 A direct relationship exists between response in the near-infrared region and various biomass measurements.

 An inverse relationship exists between response in the red region and plant biomass.


 Strong chlorophyll absorption bands occur in the blue and red regions (approx. 6% reflectance at 450 nm and 5% at 650 nm), with a peak reflectance in the green region of the visible spectrum (11% at 550 nm). About 76% of the incident NIR was reflected from the leaf at 900 nm.

 This leaf is undergoing senescence, so the chlorophyll pigments have diminished; as a result, more green (24% at 550 nm) and red (32% at 650 nm) light is reflected, giving the leaf a yellow appearance. At 750 nm the yellow leaf reflects less NIR than a green leaf, but about 76% is again reflected at 900 nm.


 A red leaf reflects 7% of the blue energy at 450 nm, 6% of the green energy at 550 nm, and 23% of the incident red energy at 650 nm. NIR reflectance at 900 nm dropped to 70%.

 A dark brown leaf produced a spectral reflectance curve with low blue (7% at 450 nm), green (9% at 550 nm), and red (10% at 650 nm) reflectance. This combination produced the dark brown appearance. NIR reflectance dropped to 44% at 900 nm.



Middle infrared energy interaction with water in spongy mesophyll

 Plants require water to grow, and leaves obtain it from the roots. Water enters a leaf through the petiole, and veins carry it to the cells of the leaf.

 A plant watered in such a manner that it cannot hold any more water is called turgid. If a plant is not watered, the amount of water stored is less than what it can potentially hold; this is called relative turgidity.

 Remote sensing instruments sensitive to how much water is actually present in a plant leaf were therefore developed.


Middle infrared energy interaction with water in spongy mesophyll

 Liquid water in the atmosphere creates five major absorption bands in the near-infrared through middle-infrared portions of the EM spectrum, at 0.97, 1.19, 1.45, 1.94, and 2.7 µm.

 The fundamental vibrational water absorption band at 2.7 µm is the strongest (there is also one in the thermal IR region, at 6.27 µm).

 In the MIR, vegetation reflectance peaks occur at about 1.6 µm and 2.2 µm, between the major atmospheric water absorption bands.

Middle infrared energy interaction with water in spongy mesophyll

 The greater the turgidity of leaves (i.e., the more water content), the lower the MIR reflectance.

 As the moisture content of leaves decreases, reflectance in the MIR increases because, in the absence of water in the intercellular spaces, incident MIR energy is more intensely scattered.


Remote Sensing of Soils


 26% of the Earth’s surface is exposed land and 74% of the Earth’s surface is covered by
water.

 Almost all humanity lives on the terrestrial, solid Earth comprised of bedrock and the
weathered bedrock called soil.

 Remote sensing can play a limited role in the identification, inventory, and mapping of
surficial soils not covered with dense vegetation.

 Remote sensing can provide information about the chemical composition of rocks and
minerals that are on the Earth’s surface, and not completely covered by dense vegetation.

 Emphasis is placed on understanding the unique absorption bands associated with specific types of rocks and minerals, using imaging spectroscopy techniques.



Soil Characteristics
 Soil is unconsolidated material at the surface of the Earth that serves as a natural
medium for growing plants.

 Soil is the weathered material between the atmosphere at the Earth’s surface and
the bedrock below the surface to a maximum depth of approximately 200 cm
(USDA, 1998).

 Soil is a mixture of inorganic mineral particles and organic matter of varying size
and composition.

 The particles make up about 50 percent of the soil’s volume.

 Pores containing air and/or water occupy the remaining volume.


Soil Grain Size

 Grain size plays a vital role in defining the taxonomy of a soil.

 Three of the universally recognized soil grain size classes are: sand, silt, and clay.

 As per the U.S. Department of Agriculture, each of these soil classes is defined as follows:

 Sand:
a) Soil particle between 0.05 and 2.0 mm in diameter.
b) Sandy soil is composed of a large fraction of sand-size particles.
c) Because of the large particles, sandy soils enhance drainage, i.e., water percolates freely through the air spaces.


Soil Grain Size

 Silt:
a) Soil particle between 0.002 and 0.05 mm in diameter.
b) Silty soil is composed of a large fraction of silt-size particles.
c) Silt particles enhance movement and retention of soil capillary water.

 Clay:
a) Soil particle < 0.002 mm in diameter.
b) Clayey soil is composed of a large fraction of clay-size particles.
c) Clay particles enhance movement and retention of soil capillary water.
d) Clay carries an electric charge, which holds nutrients in the soil and helps to maintain soil fertility.


Soil Texture

 It is the relative proportion of sand, silt, and clay in a soil.

 It is the percentage by weight of particles in the various size classes.

 Soil texture can be identified using the USDA soil texture triangle, which identifies the percentages of sand, silt, and clay comprising standard soil types.

 For example, loam, found in the lower center of the diagram, consists of 40% sand, 40% silt, and 20% clay.
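The USDA diameter limits quoted on the preceding slides lend themselves to a small classifier; a minimal Python sketch (the function name is mine):

```python
# USDA grain-size limits from the slides (particle diameter in mm).
def grain_class(diameter_mm):
    """Classify a single soil particle as clay, silt, or sand."""
    if diameter_mm < 0.002:
        return "clay"
    if diameter_mm < 0.05:
        return "silt"
    if diameter_mm <= 2.0:
        return "sand"
    return "coarser than sand"

print(grain_class(0.001))  # clay
print(grain_class(0.01))   # silt
print(grain_class(1.0))    # sand

# Texture is the relative proportion by weight, e.g. the loam example:
loam = {"sand": 40, "silt": 40, "clay": 20}  # percentages sum to 100
print(sum(loam.values()))  # 100
```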



Remote Sensing of Soils

 It is impossible for remote sensing to provide all soil-related information without the help of in-situ data collection.

 Optical remote sensing instruments can record the spectral reflectance characteristics of the surface properties of soils.

 The total radiance leaving an exposed soil that is recorded by the sensor (Lt) is:

Lt = Lp + Ls + Lv


Remote Sensing of Soils

 Lp: This radiation never actually reaches the soil surface. It is unwanted atmospheric path radiance noise, and it should be removed from imagery prior to information extraction.

 Ls: This reaches the air–soil interface and penetrates approximately ½ wavelength (λ) deep into the soil.

 Lv: This portion penetrates a few millimeters, or even a centimeter or two, into the soil column. This is referred to as volume scattering.

Remote Sensing of Soils

 The spectral reflectance characteristics of soils are a function of several important characteristics:

 Soil texture (percentage of sand, silt, and clay),
 Soil moisture content (e.g., dry, moist, saturated),
 Organic matter content,
 Iron-oxide content, and
 Surface roughness.


Remote Sensing of Soils – Soil texture and moisture content

 Radiant energy may be reflected from the surface of a dry soil, or it may penetrate into the soil particles, where it may be absorbed or scattered. Total reflectance from a dry soil is a function of specular reflectance and internal volume reflectance.

 As soil moisture increases, each soil particle may become encapsulated with a thin membrane of capillary water. The interstitial spaces may also fill with water. The greater the amount of water in the soil, the greater the absorption of incident energy and the lower the soil reflectance.


Remote Sensing of Soils – Soil texture and moisture content

 Higher moisture content in (a) sandy soil and (b) clayey soil results in decreased reflectance throughout the visible and near-infrared region, especially in the water-absorption bands at 1.4, 1.9, and 2.7 µm.

Remote Sensing of Soils – Soil organic matter content

 Generally, the greater the amount of organic content in a soil, the greater the absorption of incident energy and the lower the spectral reflectance.


Remote Sensing of Soils – Iron oxide content

 Iron oxide in a sandy loam soil causes an increase in reflectance in the red portion of the spectrum (0.6–0.7 µm) and a decrease in near-infrared (0.85–0.90 µm) reflectance.

Remote Sensing of Soils – Surface Roughness

 The smaller the local surface roughness relative to the wavelength of the incident radiation, the greater the specular reflectance from the terrain.

 A dry, fine-textured clayey soil should produce a higher spectral response throughout the visible and NIR portions than silt and sand surfaces, provided they contain no moisture, organic content, or iron oxide.

 If these constituents are added to a clayey soil, it absorbs more of the incident radiant flux and may appear like silt, or perhaps even sand, which might cause interpretation problems.

 Dry sand with well-drained large grains should diffusely scatter incident visible and NIR energy more than clayey and silty soils.


Spectral Characteristics of Water


 74% of the Earth’s surface is water.

 97% of the Earth’s volume of water is in the saline oceans.

 2.2% in the permanent icecap

 Only 0.02% is in freshwater streams, river, lakes, reservoirs

 Remaining water is in:


 underground aquifers (0.6%),
 the atmosphere in the form of water vapor (0.001%)


Spectral Characteristics of Water


 The total radiance (Lt) recorded by a remote sensing system over a waterbody is a function of the electromagnetic energy from four sources:

Lt = Lp + Ls + Lv + Lb

 Lp is the radiance recorded by a sensor resulting from the downwelling solar (Esun) and sky (Esky) radiation. This is unwanted path radiance that never reaches the water.

 Ls is the radiance that reaches the air-water interface (free-surface layer or


boundary layer) but only penetrates it a millimeter or so and is then
reflected from the water surface. This reflected energy contains spectral
information about the near-surface characteristics of the water.

 Lv is the radiance that penetrates the air-water interface, interacts with the
organic/inorganic constituents in the water, and then exits the water
column without encountering the bottom. It is called subsurface volumetric
radiance and provides information about the internal bulk characteristics
of the water column

 Lb is the radiance that reaches the bottom of the waterbody, is reflected


from it and propagates back through the water column, and then exits the
water column. This radiance is of value if we want information about the
bottom (e.g., depth, color).

Spectral Characteristics of Water


 Total radiance (Lt) recorded by a remote sensing system over water is a function of the electromagnetic energy received from:

 Lp = atmospheric path
radiance
 Ls = free-surface layer
reflectance
 Lv = subsurface volumetric
reflectance
 Lb = bottom reflectance


Spectral Characteristics of Water


 Scattering by particles that are small relative to wavelength (Rayleigh scattering) causes shorter wavelengths to be scattered the most.

 Thus, for deep water bodies, we expect (in the absence of impurities) water to be blue or blue-green in color.

 Maximum transmittance of light by clear water occurs in the range 0.44 μm–0.54 μm, with peak transmittance at 0.48 μm.

 The color of water is determined by volume scattering, rather than by surface reflection.

 Spectral properties of water bodies (unlike those of land features) are determined by transmittance rather than by surface characteristics alone.


Spectral Characteristics of Water- Penetration

 In the blue region, light penetration is not at its optimum; the optimum lies at slightly longer wavelengths.

 In the blue-green region, penetration is greater. At these wavelengths, the opportunity for recording features on the bottom of the water body is greatest.

 At longer wavelengths, in the red region, absorption of sunlight is much greater, and only shallow features can be detected.

 Finally, in the near-infrared region, absorption is so great that only land–water distinctions can be made.

Water Penetration

SPOT Band 1 (0.50–0.59 µm, green); SPOT Band 2 (0.61–0.68 µm, red); SPOT Band 3 (0.79–0.89 µm, NIR)


Spectral Characteristics of Water- Sediment Load

 As impurities are added to a water body, its spectral properties change.

 Sediments are introduced both from natural sources and by mankind’s activities.

 They consist of fine-textured silts and clays, eroded from stream banks or carried by water running off disturbed land, that are fine enough to be held in suspension by moving water.

 Sediment-laden water is referred to as turbid water; we can measure turbidity by sampling the water body or by using devices that estimate turbidity from the transparency of the water.

Spectral Characteristics of Water- Sediment Load

 One such device is the Secchi disk, a white disk of specified diameter that can be lowered on a line from the side of a small boat.

 Because turbidity decreases the transparency of the water body, the depth at which the disk is no longer visible can be related to sediment content.

 Another indicator of turbidity, the nephelometric turbidity unit (NTU), is measured by the intensity of light that passes through a water sample.

 A special instrument (a nephelometer) uses a light beam and a sensor to detect differences in light intensity.

 Water of high turbidity decreases the intensity of the light in a manner that can be related to sediment content.


Spectral Characteristics of Water- Sediment Load

Nephelometer

Spectral Characteristics of Water- Sediment Load

 As sediment concentration increases, the water body’s overall brightness in the visible region increases, so the water body ceases to act as a “dark” object and becomes more and more like a “bright” object.

 As sediment concentrations increase, the wavelength of peak reflectance shifts from a maximum in the blue region toward the green.

 The presence of larger particles means that the wavelength of maximum scattering shifts toward the blue-green and green regions.

 As sediment content increases, there is an increase in brightness and a shift in peak reflectance toward longer wavelengths, and the peak itself becomes broader, so that at high levels of turbidity the color becomes a less precise indicator of sediment content.


Spectral Characteristics of Water- Water Depth

 At a depth of 20 m, little or no infrared radiation is present because the water body is an effective absorber of these longer wavelengths.

 At this depth, only blue-green wavelengths remain; these wavelengths are therefore available for scattering back to the surface, from the water itself and from the bottom of the water body.

 The attenuation coefficient (k) describes the rate at which light becomes dimmer as depth increases. If E0 is the brightness at the surface, then the brightness at depth z (Ez) is given by:

Ez = E0 e^(−kz)
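A minimal Python sketch of this attenuation relationship (the attenuation coefficient below is illustrative):

```python
import math

# Attenuation with depth: Ez = E0 * exp(-k * z).
def brightness_at_depth(e0, k_per_m, z_m):
    """Brightness at depth z for surface brightness e0 and coefficient k."""
    return e0 * math.exp(-k_per_m * z_m)

# Illustrative: k = 0.1 per meter in the blue-green region of clear water.
for z in (0, 5, 10, 20):
    print(z, round(brightness_at_depth(100.0, 0.1, z), 1))
# At 20 m, only ~13.5% of the surface brightness remains.
```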


Spectral Characteristics of Water- Surface Extent of Water Bodies

 The best wavelength region for discriminating land from pure water is in the near-infrared and middle-infrared, from 740 to 2,500 nm.

 In the near- and middle-infrared regions, water bodies appear very dark, even black, because they absorb almost all of the incident radiant flux, especially when the water is deep and pure and contains little suspended sediment or organic matter.

64
10/7/2016

Spectral Characteristics of Water- Surface Extent of


Water Bodies


Spectral Characteristics of Water-


Roughness of the Water Surface
 Wave-roughened surfaces are brighter than smoother surfaces.

 Calm, smooth water


surfaces direct only
volume-reflected radiation
to the sensor, but rough,
wavy water surfaces direct
a portion of the solar beam
directly to the sensor


Types of aerial cameras


 There are 3 types of aerial cameras:
1. Frame camera
2. Continuous strip camera
3. Panoramic camera


Frame Camera
 It exposes a square-shaped area of the ground all at one instant of time.

 Film is held flat against the focal plane of camera and entire film
format is exposed at one time.

 To expose the entire picture area at the same time, the angular coverage of the lens must be adequate.

 To acquire multiband imagery, a group of frame cameras can be arranged in a single housing to generate photographs of the same area in different ranges of the electromagnetic spectrum.

 Aerial frame cameras are used for reconnaissance, mapping, and


interpretation purposes

Frame Camera


Continuous Strip Camera


 In this camera the film moves continuously over a
small slit in the focal plane.

 So as the aircraft flies along the flight path, the ground beneath the camera is photographed continuously.

 The speed with which the film moves is a function of


 height of camera above ground,
 focal length of lens and
 ground velocity of aircraft.

Continuous Strip Camera


 These cameras sometimes had two lenses arranged so that one lens photographs the ground slightly ahead of the aircraft and the other slightly behind it.

 The angular coverage required of the lens is relatively small compared to a frame camera, since only a small portion of the focal plane needs to be covered.

 This camera system was developed to eliminate blurred


photography caused by movement of the camera at the instant of
exposure.

 It allows sharp images at large scales to be obtained by high-speed aircraft flying at low elevations and is particularly useful to the military.


Panoramic camera
 In panoramic cameras the ground areas are covered by
either rotating the camera lens or rotating a prism in
front of the lens

 It scans in the direction normal to the direction of flight.

 The film is exposed along a curved surface located at the


focal distance from the rotating lens assembly, and the
angular coverage can extend from horizon to horizon.


Panoramic camera
 It takes a sweeping picture of the ground from right to left.

 The images at the extremities of the resulting photograph have a squeezed appearance, since the widening ground coverage must be compressed into a photograph of constant width.

 Cameras with a rotating-prism design contain a fixed lens and a flat film plane; scanning is accomplished by rotating the prism in front of the lens.



Digital Camera
 Small-format digital and film cameras have a similar outward
appearance but they are totally different on the inside.

 The film in a film camera acts as image capture, display, and storage
medium.

 A digital image is formed electronically by solid-state detectors, which serve only for image capture and temporary storage prior to downloading.


Digital Camera
 Each digital detector receives an electronic charge when exposed to
electromagnetic energy, which is then amplified, converted into digital
form, and digitally stored on magnetic disks or a flash memory card.

 The magnitude of these charges is proportional to the scene


brightness (intensity).

 There are two types of detectors: charge-coupled devices (CCD) and complementary metal-oxide semiconductors (CMOS).


Digital Camera
 CCD detectors are analog chips that store light energy as electrical
charges in the sensors. These charges are then converted to voltage
and subsequently into digital information.
 CMOS chips are active pixel sensors that convert light energy directly
to voltage.
 CCD – Better resolution but large size
 CMOS – Acceptable size but average image quality


Aerial Camera - Components


Components- Lens assembly

 The lens assembly, also called the lens cone, consists of the camera lens, the diaphragm, the shutter and the filter. The diaphragm and the shutter control the exposure.

 The focus is fixed at infinity, typically at focal lengths of 6, 8.25 and 12 inches.

Classification | Focal length          | Angle of coverage
Narrow         | 304.8 mm / 12 inch    | Less than 60°
Normal         | 209.55 mm / 8.25 inch | 60° to 75°
Wide           | 152.4 mm / 6 inch     | 75° to 100°
Super-wide     | 88.9 mm / 3.5 inch    | More than 100°
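The angles of coverage in the table follow from simple lens geometry: the full field of view across the frame diagonal is 2·arctan(d / 2f). A minimal sketch, assuming the standard 230 mm × 230 mm aerial film format (the format size is an assumption, not stated above):

```python
import math

FRAME_SIDE_MM = 230.0                       # assumed standard aerial film format
DIAGONAL_MM = FRAME_SIDE_MM * math.sqrt(2)  # ~325 mm across the frame diagonal

def angle_of_coverage(focal_length_mm):
    """Full field of view (degrees) across the frame diagonal."""
    return math.degrees(2 * math.atan(DIAGONAL_MM / (2 * focal_length_mm)))

for f in (304.8, 209.55, 152.4, 88.9):      # focal lengths from the table
    print(f"f = {f:6.2f} mm -> {angle_of_coverage(f):5.1f} degrees")
# Roughly 56°, 76°, 94° and 123°, matching the four classes in the table.
```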


Components- Inner Cone and Focal Plane


 Inner cone consists of a metal with low coefficient of thermal expansion so
that the lens and the focal plane do not change their relative position.

 The focal plane contains fiducial marks, which define the fiducial
coordinate system that serves as a reference system for metric
photographs.

 The fiducial marks are either located at the corners or in the middle of the
four sides.

 Focal Plane is a perpendicular plate aligned with the axis of the lens; it
includes a vacuum system to fix the film to the plate.

 Additional information is printed on one of the marginal strips which


includes the date and time, altimeter data, photo number, and a level
bubble.

Components- Outer Cone and Drive Mechanism


 The outer cone supports the inner cone and holds the drive mechanism.

 The function of the drive mechanism is to wind and trip the


shutter, to operate the vacuum, and to advance the film
between exposures.

 The vacuum assures that the film is firmly pressed against the
image plane where it remains flat during exposure.

 Non-flatness would not only decrease the image quality


(blurring) but also displace points, particularly in the corners.

Components- Magazine
 The magazine holds the film, both exposed and unexposed.

 A film roll is 120 m long and provides 475 exposures. The magazine is
also called film cassette.

 It is detachable, allowing magazines to be interchanged during a flight mission.


Comparison of film and digital cameras


Properties        | Film camera                                             | Digital camera
Image capture     | photosensitive film with silver halides                 | photosensitive solid-state CCD or CMOS detectors
Image storage     | photographic film (negatives, diapositives, or prints)  | flash memory cards, computer chips, or other solid-state devices
Resolution        | better resolution                                       | resolution less compared to films
Developing medium | silver halides are of random sizes and shapes           | pixels are always uniform
Data transmission | sent via mail or fax                                    | sent via phone, computer, or telemetry
Soft copy display | diapositives (i.e., 35mm slides) can be projected       | require computer or television monitors

Comparison of film and digital cameras


Properties        | Film camera                            | Digital camera
Hard copy display | camera produces film prints            | digital hard copy display requires computer printers
Access time       | takes hours (or days) for processing   | instantaneous



Sensor Classification
 Sensor types are roughly divided into two: optical sensors and microwave sensors.

1. Optical Sensor :
 Optical sensors observe visible lights and infrared rays (near infrared, intermediate infrared, thermal
infrared).

 There are two kinds of observation methods using optical sensors: visible/near infrared remote
sensing and thermal infrared remote sensing.

a) Visible/Near Infrared Remote Sensing :


i. The observation method to acquire visible light and near infrared rays of sunlight reflected by
objects on the ground.

ii. This method cannot observe at night, since it depends on reflected sunlight.

iii. Also, clouds block the reflected sunlight, so this method cannot observe areas under clouds.


Sensor Classification
b) Thermal Infrared Remote Sensing :
i. The observation method to acquire thermal infrared rays, which are radiated from the land surface heated by sunlight.
ii. It can also observe high-temperature areas, such as volcanic activity and fires.
iii. This method can observe at night, when there is no cloud.

2. Microwave Sensor :
 Microwave sensors receive microwaves, which have longer wavelengths than visible light and infrared rays; observation is not affected by day, night or weather.
 There are two types of observation methods using microwave sensor: active and
passive.


Passive Remote Sensing


 The sun provides a very convenient source of energy for remote sensing.

 The sun's energy is either reflected, as it is for visible wavelengths, or


absorbed and then re-emitted, as it is for thermal infrared wavelengths.

 Remote sensing systems which measure energy that is naturally available


are called passive sensors.

 Passive sensors can only be used to detect energy when the naturally
occurring energy is available.

 For all reflected energy, this can only take place during the time when the
sun is illuminating the Earth. There is no reflected energy available from
the sun at night.

 Energy that is naturally emitted (such as thermal infrared) can be detected


day or night, as long as the amount of energy is large enough to be
recorded.


Active Remote Sensing


 Active sensors, on the other hand, provide their own energy source for illumination.

 The sensor emits radiation which is directed toward the target to be investigated.

 The radiation reflected from that target is detected and measured by the sensor.

 Advantages for active sensors include the ability to obtain measurements anytime,
regardless of the time of day or season.

 Active sensors can be used for examining wavelengths that are not sufficiently provided by
the sun, such as microwaves, or to better control the way a target is illuminated.

 However, active systems require the generation of a fairly large amount of energy to
adequately illuminate targets.

 Some examples of active sensors are the laser fluorosensor and the synthetic aperture radar (SAR).


Remote Sensors – Imaging Modes


1. Frame by Frame :

 Snapshot is taken at one instant covering a certain area on the surface


depending on sensor characteristics and platform height.

 Typical example is conventional photographic camera.

 It is also known as staring mode.

 Successive frames image a strip of terrain depending on camera orientation.


Frame-by-frame imaging (figure)

Remote Sensors – Imaging Modes


2. Pixel by Pixel :
 Sensor collects the radiation from one pixel at a time .

 Here a scan mirror directs the sensor to the next pixel in cross track
direction and by scan mirror motion, one cross track of line width equal to
one pixel is imaged.

 Successive scan lines are produced by motion of platform.

 This technique of imaging is used by opto-mechanical scanners.

 It is also called whiskbroom scanning.


Whiskbroom


Remote Sensors – Imaging Modes


3. Line by Line :
 The sensor collects radiation from one line in cross track direction at
one instant

 Successive lines are generated by platform movement.

 Linear CCD/photodiode/CMOS arrays generate imagery using this mode.

 This is also known as Pushbroom scanning.


Pushbroom


Optical Mechanical Scanner


 It is a multispectral radiometer by which two dimensional imagery can be
recorded using a combination of motion of platform and a rotating or
oscillating mirror scanning perpendicular to flight direction.

 It is composed of :

1. Optical system : A reflective telescope is used to avoid chromatic aberration.

2. Spectrographic system : In this a grating or a prism is used for dispersion


of energy into different spectral bands.

Optical Mechanical Scanner


3. Scanning System : Rotating mirror or oscillating mirror is used for scanning
perpendicular to flight direction.

4. Detector System : EM energy is converted into an electric signal by optical


electronic detectors.
a) Near UV and Visible : photomultiplier detectors
b) Visible and NIR : silicon diode
c) SWIR : cooled indium antimonide (InSb)
d) Thermal : thermal bolometer; mercury cadmium telluride (HgCdTe)

5. Reference System : The converted electric signal is influenced by changes in the sensitivity of the detector. Therefore, light sources or thermal sources of constant intensity or temperature are installed as a reference for calibration of the electric signal.

Eg : MSS, TM of Landsat, AVHRR of NOAA



Pushbroom Scanner
 A pushbroom scanner, or linear array sensor, is a scanner without any mechanical scanning mirror but with a linear array of solid-state semiconductor elements (CCD, CMOS), which enables it to record one line of an image simultaneously.
 It has an optical lens through which a line image is detected
simultaneously perpendicular to flight direction.
 Eg: HRV of SPOT, LISS of IRS, MESSR of MOS-1



Resolution
 Resolution refers to the ability of a remote sensing system to record
and display fine spatial, spectral, and radiometric detail.

 Spatial (what area and how detailed)

 Spectral (what colors – bands)

 Temporal (time of day/season/year)

 Radiometric (color depth)


Spatial Resolution
 Spatial Resolution describes how much detail in a photographic image
is visible to the human eye.

 The ability to "resolve," or separate, small details is one way of


describing what we call spatial resolution.

 Spatial resolution of images acquired by satellite sensor systems is


usually expressed in meters


Spatial Resolution
 For example, we often speak of Landsat as having “30- meter"
resolution, which means that two objects, thirty meters long or wide,
sitting side by side, can be separated (resolved) on a Landsat image.

 Other sensors have lower or higher spatial resolutions.


Spatial Resolution
Example images with planimetric overlay (roads, buildings, driveways):
 80 meter MSS
 30 meter TM
 10 meter SPOT
 1 meter DOQ
 Sub-meter data

Spatial Resolution
Looking more closely at resolution – one grid cell at each sensor's resolution:
 Landsat MSS satellite: 80 meter resolution grid cell
 Landsat TM satellite: 30 meter resolution grid cell
 SPOT satellite: 10 meter resolution grid cell
 IKONOS satellite: 4 meter resolution grid cell
 IKONOS satellite: 1 meter resolution grid cell

Spectral Resolution
 Number of spectral bands (red, green, blue, NIR, Mid-IR, thermal, etc.)

 Width of each band

 Certain spectral bands (or combinations) are good for identifying specific
ground features

 Panchromatic – 1 band (B&W)


 Color – 3 bands (RGB)
 Multispectral – 4+ bands (e.g. RGB + NIR)
 Hyperspectral – hundreds of bands



Temporal Resolution
 Remote sensing has the ability to record sequences of images, thereby
representing changes in landscape patterns over time.

 The ability of a remote sensing system to record such a sequence at


relatively close intervals generates a dataset with fine temporal
resolution.

 In contrast, systems that can record images of a given region only at


infrequent intervals produce data at coarse temporal resolution.


Temporal Resolution
 Time of day/season image acquisition
 Leaf on/leaf off
 Tidal stage
 Seasonal differences
 Shadows
 Phenological differences
 Relationship to field sampling


Spring / Summer image pair (seasonal comparison figure)


Temporal Resolution
 Revisit period for satellites – how often can you make a measurement
for the same area

 Landsat – 16 days (continuous collection)
 Quickbird – varies (point-and-shoot)
 MODIS – daily (continuous collection)
 Airborne images – collected as needed



Radiometric Resolution
 Radiometric resolution can be defined as the ability of an imaging
system to record many levels of brightness.

 Coarse radiometric resolution would record a scene using only a few brightness levels or a few bits (i.e., at very high contrast), whereas fine radiometric resolution would record the same scene using many levels of brightness.

 The finer the radiometric resolution of a sensor, the more sensitive it


is to detecting small differences in reflected or emitted energy.


Radiometric Resolution
 The maximum number of brightness levels available depends on the number of bits used in representing the energy recorded. Thus, if a sensor used 8 bits to record the data, there would be 2^8 = 256 digital values available, ranging from 0 to 255.
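A one-line loop makes the relationship between bit depth and the number of brightness levels concrete (a minimal sketch):

```python
# Number of brightness levels for common radiometric resolutions
for bits in (6, 8, 10, 12, 16):
    levels = 2 ** bits
    print(f"{bits:>2}-bit data: {levels:>6} levels (DN 0 to {levels - 1})")
```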




DIGITAL IMAGE
 An image is a picture, photograph or any form of a two-dimensional
representation of objects or a scene.
 The information in an image is presented in tones or colours.
 A digital image is a two dimensional array of numbers.
 Each cell of a digital image is called a pixel, and the number representing the brightness of the pixel is called a digital number (DN).
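A minimal sketch of a digital image as a two-dimensional array of DNs (the values are illustrative):

```python
import numpy as np

# A 3-line x 4-column single-band digital image; each entry is a DN
image = np.array([[ 12,  40,  41,  38],
                  [ 15, 200, 198,  37],
                  [ 13, 197, 201,  36]], dtype=np.uint8)

print(image.shape)   # (3, 4): lines, columns
print(image[1, 2])   # DN of the pixel at line 1, column 2 -> 198
```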


DIGITAL IMAGE
 A digital image is composed of data in lines and columns.

 The position of a pixel is given by the line and column of its DN.

 Such regularly arranged data, without x and y coordinates, are usually called raster data.

 DN values do not record true brightness (known as radiances) from the scene but rather
are scaled values that represent relative brightness within each scene.

 Digital image data can also have a third dimension: Layers

 Layers are the images of the same scene but containing different information.

 Some of the most important technologies for digital imaging are CCD (charge-coupled device) and CMOS (complementary metal oxide semiconductor) detectors.


Advantages of Digital Image over Analog Image


 As a digital image, its advantages include:

 The images do not change with environmental factors as hard copy pictures and
photographs do.

 The images can be identically duplicated without any change or loss of


information.

 The images can be mathematically processed to generate new images without


altering the original images.

 The images can be electronically transmitted from or to remote locations


without loss of information.

Image Data Storage Formats


 Remote sensing data such as those of Landsat MSS, Landsat TM, SPOT,
IRS LISS III etc. are stored in generic binary format
 BSQ(Band Sequential) : Band sequential format stores information for the
image one band at a time. In other words, data for all the pixels for band 1
is stored first, then data for all pixels for band 2, and so on.
 If one wants information about an area in the center of the image in 4 bands, it is necessary to read the corresponding location in 4 separate band files to extract the desired information.
 This format is convenient when not all of the bands are needed, since unwanted bands can be skipped entirely.


BSQ

BIL : Band Interleaved by Line


 In this format, the data for the bands are written line by line (i.e. line 1 band 1, line 1 band 2, line 1 band 3, etc.)
 It is a useful format if all the bands are to be used in the analysis.
 If some bands are not of interest, the format is inefficient, since it is necessary to read serially past all the unwanted information.


BIL

BIP : Band Interleaved by Pixel


 In this format, the data for each pixel are written together across all bands (i.e. pixel (1,1) of band 1, pixel (1,1) of band 2, pixel (1,1) of band 3, etc.)
 This is a practical data format if all bands are to be used; otherwise it is inefficient.
 This format is not popular now, but it was used extensively by the EROS Data Center for Landsat scenes at the initial stage.
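A minimal sketch of how the same generic binary file would be interpreted under the three layouts, assuming a hypothetical 4-band, 512 × 512, 8-bit scene in a file named scene.raw (only the reshape that matches the file's actual layout is meaningful):

```python
import numpy as np

bands, lines, cols = 4, 512, 512
raw = np.fromfile("scene.raw", dtype=np.uint8)  # hypothetical generic binary file

bsq = raw.reshape(bands, lines, cols)           # BSQ: band, line, column
bil = raw.reshape(lines, bands, cols)           # BIL: line, band, column
bip = raw.reshape(lines, cols, bands)           # BIP: line, column, band

# DN of band 2 (0-based index 1) at line 100, column 200 under each layout:
print(bsq[1, 100, 200], bil[100, 1, 200], bip[100, 200, 1])
```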


BIP

Data Formats
 Digital data products are supplied mainly in four Formats:
 LGSOWG (Landsat Ground Station Operators Working Group) or Super
Structure Format
 Fast Format
 GeoTIFF ( Geographic Tagged Image File Format)
 HDF(Hierarchical Data Format)


LGSOWG : Super Structure Format


Super Structure Format is suitable for Level 0 and Level 1 data products. It consists of:
 Volume Directory File
 Leader File
 Image File
 Trailer File
 Null Volume directory file

LGSOWG : Super Structure Format- Volume Directory


File
 It is composed of volume descriptor record, a number of file pointer
records and a text record
 The volume descriptor record identifies the logical volume and the
number of files it contains
 Text record identifies the data contained in logical volume
 File pointer record for each type of file in the logical volume, which
indicates each file's class, format and attributes


LGSOWG : Super Structure Format- Leader File


 The leader file is composed of a file descriptor record and two types of
data records
 Header : Header contains information related to mission; sensor,
calibration coefficients and processing parameters
 Ancillary : Ancillary records contain information related to ephemeris,
attitude, map projection and Ground Control Points (GCPs) for image
correction

LGSOWG : Super Structure Format- Image File


 The image file consists of a file descriptor record and image data records
 It contains the actual data in BIL or BSQ format
 In addition, it contains pixel counts, scan line identification, and the starting and ending of actual data in each line


LGSOWG : Super Structure Format- Trailer File


 This is composed of a file descriptor record and one trailer record for
each band
 It shows the calibration data file and ancillary information file

Null Volume Directory File


 The file, which ends a logical volume, is the null volume directory file.
 The file is referred as 'null' because it defines a non-existent (empty)
logical volume.
 This file contains a volume descriptor record


BSQ and BIL file layouts (figure)

Fast Format
Fast Format is suitable for Level 2 products
 It consists of two files, namely:
 Header File: Header file contains header data in ASCII ( American
Standard Code for Information Interchange) format
 Administrative Record : Identifies the product, scene and data specifically needed to
read the imagery
 Radiometric Record : Contains coefficients needed to convert DN values to Spectral
radiances
 Geometric Record : Contains geographic information of the scene


Fast Format
 Image File : Image files are written in BSQ format i.e each image file
contains one band of image data

Header file: administrative record, radiometric record, geometric record
Image file: image data in BSQ format

GeoTIFF
 Formats such as PGM, GIF, BMP and plain TIFF cannot store geographic information, so they cannot be used in cartographic applications
 It ties image to a known model space or map projection
 It is platform independent
 Mostly IRS data products are supplied in this format
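One common way to read a GeoTIFF in Python is the third-party rasterio library; a minimal sketch (the file name is hypothetical):

```python
import rasterio  # third-party library for reading GeoTIFF files

# Hypothetical file path; any GeoTIFF product would do
with rasterio.open("irs_liss3_scene.tif") as ds:
    band1 = ds.read(1)     # first band as a 2-D array of DNs
    print(ds.crs)          # the map projection the image is tied to
    print(ds.transform)    # affine transform: pixel (row, col) -> map (x, y)
    print(band1.shape)
```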


Raw Data
 Different formats available :
 Level 0R (L0R) – Data with no correction applied to it.
 Level 1R (L1R) – Radiometrically corrected only; no geometric correction.
 Level 1Gs (L1Gs) – Radiometrically corrected and resampled for geometric correction and registered to a geographic map projection.
 Level 1Gst (L1Gst) – Radiometrically corrected and resampled for geometric correction and registered to a geographic map projection; the image is ortho-rectified using a DEM to correct parallax error due to local topographic relief.

Georeferencing
 The data generated by a remote sensor are subject to geometric distortions.

 Application of remote sensing requires preparation of planimetrically


correct versions of aerial and satellite images so that they will match
to other imagery and to maps and will provide the basis for accurate
measurements of distance and area.

 The function of georeferencing is to transform the distorted image coordinates (u, v) to a specific map projection.



Georeferencing
 Georeferencing can be carried out with the help of physical models, which require knowledge of the variations of orbit and platform dynamics with time.

 So the solution to this problem is the use of ground control points (GCPs) to generate an accurate transformation model (mapping function) relating image coordinates to map coordinates.


Georeferencing - GCP
 It is a feature which can be uniquely identified in the image and whose map
coordinates (or geographic coordinates) are known or can be determined.

 The GCP chosen should be stable so that its appearance is unambiguous.

 GCPs include
 Road intersections
 Airport runway intersection
 River confluence
 Prominent coastline features

 Coordinates of GCPs can be found out from the existing maps or from GPS
surveys.



Georeferencing - GCP
 The coordinate system of the raw (ungeoreferenced) image is expressed in terms of pixels and scan lines (columns and rows), with its origin at the upper-left corner.

 The map coordinate system is usually oriented in Cartesian form, in terms of x and y coordinates corresponding to latitude and longitude. So there are two Cartesian coordinate systems: one describing the position of the GCPs in the reference image (x, y) and the other in the input or raw image (u, v).


Georeferencing
 The two coordinate systems can be related by a pair of mapping functions such that

u = f(x, y)
v = g(x, y)

 The mapping functions can be polynomial. A first-degree polynomial can model six kinds of distortion: x and y translation, x and y scale change, skew, and rotation. The first-degree equations are as follows:

u = a0 + a1·x + a2·y
v = b0 + b1·x + b2·y
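A minimal sketch of estimating the six coefficients by least squares from GCPs (the GCP coordinates used here are hypothetical):

```python
import numpy as np

# Hypothetical GCPs: map coordinates (x, y) and matching image coordinates (u, v)
xy = np.array([[500100., 4210300.], [500900., 4210280.],
               [500480., 4209900.], [500950., 4209870.]])
uv = np.array([[ 10.0,  12.0], [410.0,  25.0],
               [ 18.0, 210.0], [430.0, 220.0]])

# Design matrix [1, x, y]; least squares yields (a0, a1, a2) and (b0, b1, b2)
A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
a, *_ = np.linalg.lstsq(A, uv[:, 0], rcond=None)
b, *_ = np.linalg.lstsq(A, uv[:, 1], rcond=None)

def map_to_image(x, y):
    """u = a0 + a1*x + a2*y ; v = b0 + b1*x + b2*y"""
    return (a[0] + a[1] * x + a[2] * y,
            b[0] + b[1] * x + b[2] * y)

print(map_to_image(500500.0, 4210100.0))
```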



Georeferencing
 The equations determine which point in the output (corrected) image should come from (or correspond to) which point in the input image.
 The coefficients ai and bi are evaluated using GCP.

 The minimum number of GCPs required depends on the degree of the polynomial (n) used in the mapping function and is given by:

number of GCPs required = (n + 1)(n + 2) / 2

For example, a first-degree polynomial (n = 1) requires at least 3 GCPs.


Georeferencing
 For moderate distortion in a small area a linear transformation is sufficient; however, for a large area a higher-order polynomial will be required.
 But with an increase in order (>3), the transformation can produce errors in areas devoid of GCPs.



Georeferencing
 The accuracy of corrected products depends on the quality of GCPs.

 The GCPs should be evenly distributed throughout the image.

 The next task of georeferencing is to assign DN values for each pixel in


the new transformed image.

 The image to map registration is like laying a new rectified image in its
correct orientation on top of the old (distorted) image.


Georeferencing
Original image vs. geometrically corrected and map-projected image (figure)


Georeferencing- Resampling
 The process of determining what DN value is to be assigned to the new pixels (i.e. how to estimate the new pixel DN values) is known as resampling.

 Three types of Resampling methods commonly used are :

1. Nearest neighborhood interpolation


2. Bilinear Interpolation
3. Cubic Convolution Interpolation


Resampling - Nearest neighborhood


 The simplest strategy from a computational perspective is simply to assign
each “corrected” pixel the value from the nearest “uncorrected” pixel. This is
the nearest-neighbor approach to resampling

 It has the advantages of simplicity and the ability to preserve the original
values of the unaltered scene—an advantage that may be critical in some
applications.

 The nearest neighbor method is considered the most computationally


efficient of the methods usually applied for resampling but it may create
noticeable positional errors, which may be severe in linear features where
the realignment of pixels may be noticeable



Resampling - Nearest neighborhood

Nearest-neighbor resampling: each estimated value (·) receives its value from the nearest point on the reference grid (O)


Resampling - Bilinear
 Bilinear interpolation calculates a value for each output pixel based on a
weighted average of the four nearest input pixels.

 In this context, “weighted” means that nearer pixel values are given greater
influence in calculating output values than are more distant pixels.

 Since bilinear interpolation creates new pixel values, the brightness values
in the input image are lost. Such changes to digital brightness values may be
significant in later processing steps.

 Because the resampling is conducted by averaging over areas (i.e., blocks of


pixels), it decreases spatial resolution by a kind of “smearing” caused by
averaging small features with adjacent background pixels.
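A minimal sketch of the bilinear weighting itself (the positions and values are illustrative):

```python
import numpy as np

def bilinear(img, u, v):
    """DN at fractional image position (u, v) as a weighted average of the
    four nearest input pixels; nearer pixels receive greater weight."""
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * img[u0,     v0    ] +
            du       * (1 - dv) * img[u0 + 1, v0    ] +
            (1 - du) * dv       * img[u0,     v0 + 1] +
            du       * dv       * img[u0 + 1, v0 + 1])

img = np.array([[10., 20.], [30., 40.]])
print(bilinear(img, 0.5, 0.5))   # 25.0, the average of all four neighbors
```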



Resampling - Bilinear


Resampling – Cubic Convolution


 Cubic convolution uses a weighted average of values within a
neighborhood that extends about two pixels in each direction, usually
encompassing 16 adjacent pixels.

 The images produced by cubic convolution resampling are much more


attractive than those of other procedures.

 But the data are altered more than are those of nearest-neighbor or
bilinear interpolation, the computations are more intensive, and the
minimum number of GCPs is larger.




Pattern Recognition
 Pattern refers to the set of radiance measurements obtained in various wavelength bands for each pixel.

 These techniques arose from the motive of replacing the qualitative approach of visual interpretation with a quantitative approach for automatically identifying features in a scene.


Pattern Recognition
 Pattern recognition can be classified into three categories :

 Spatial Pattern Recognition

 Spectral Pattern Recognition

 Temporal Pattern Recognition



Spatial Pattern Recognition

 It involves the categorization of image pixels on the basis of


their spatial relationship with pixels surrounding them.

 These classifiers consider spatial aspects such as image


texture, pixel proximity, feature size, shape, directionality,
repetition and context.

 These spatial classifiers attempt to replicate the spatial synthesis performed by a human analyst during the visual interpretation process.

 Because of this, they tend to be more complex and computationally intensive.


Spectral Pattern Recognition


 It refers to a family of classification procedures that utilize pixel-by-pixel spectral information as the basis for automated land-cover classification.

 These can be accomplished either parametrically or non-parametrically in the multispectral domain.

 Their complexity varies with the algorithm; some are very simple, while others are very computationally intensive.


Temporal Pattern Recognition


 These techniques use time as an aid in feature identification.
 In agricultural surveys, distinct spatial and spectral changes during the growth period of a crop are available in multi-date data rather than in single-date data.
 Interpretation from a single date may be unsuccessful regardless of the number of bands.


Informational Classes

 Informational classes are the categories of interest to the


users of the data.

 For example, the different kinds of geological units, different


kinds of forest, or the different kinds of land use

 They are the object of our analysis.

 These classes are not directly recorded on remotely sensed


images; we can derive them only indirectly, using the
evidence contained in brightnesses recorded by each image.

Spectral Classes

 These are groups of pixels that are uniform with respect to the
brightnesses in multispectral space.

 An image forms a valuable source of information only when the analyst can define a link between the spectral classes on the image and the informational classes that are of primary interest.

 If the match can be made with confidence, then the information is


likely to be reliable.

 We rarely expect to find exact one-to-one matches between informational and spectral classes. Any informational class includes spectral variations arising from natural variations within the class, e.g. variations in illumination and shadowing, and variations in age, species composition, density, and vigor of a forest.


Signature Bank/ Spectral Library

 A spectral library is similar to the FBI's fingerprint database, except that instead of containing human fingerprints, it contains spectral signatures, the “fingerprints” that are unique to materials on the Earth's surface.

 The spectral library is a collection of spectra of natural and man


made materials.

 These libraries provide a source of reference spectra for a variety of targets (minerals, rocks, vegetation species) for the remote identification of these targets.

 The relative distance between sensor and target does not affect the results of a spectral library.

Signature Bank/ Spectral Library


 Over the years, researchers have been collecting spectral signatures of
known objects and cataloguing them in spectral databases or libraries.

 After the hyperspectral/Multispectral data from a given scene have


been analyzed and the spectral signatures of the objects have been
identified, these signatures can be compared with those in a spectral
library and the objects can be identified.

 Image processing software packages include vast spectral libraries



Spectral library
Example reference spectra: conifer vs. deciduous vegetation (figure)

Classifier
 The term classifier refers loosely to a computer program that
implements a specific procedure for image classification.

 Choice of the best possible classifier is difficult because the


characteristics of each image and the circumstances for each study
vary so greatly.

 The simplest form of digital image classification is to consider each


pixel individually, assigning it to a class based on its several values
measured in separate spectral bands



Distance In Spectral Domain


 Distance between any two pixels is measured by the disparity in their DNs in the same band.
 There are three spectral distance measures :
1. Euclidean spectral distance
2. Manhattan Spectral Distance
3. Normalized Distance


Euclidean spectral distance


 It is the straight-line distance between two pixels A and B in a two-band domain.

 The Euclidean distance De between two pixels in a multiband space is calculated as:

De = √[ Σ (i = 1…n) (DNBi − DNAi)² ]

where n = number of spectral bands used in classification, DNAi = DN of pixel A in the ith band, and DNBi = DN of pixel B in the ith band.

 Using more spectral bands simply increases the number of terms in the summation.

 De shows how far apart one pixel is from another.
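A minimal sketch of the calculation (the band values are illustrative):

```python
import numpy as np

def euclidean_spectral_distance(pixel_a, pixel_b):
    """De = sqrt(sum over bands of (DN_B - DN_A)^2)."""
    a = np.asarray(pixel_a, dtype=float)
    b = np.asarray(pixel_b, dtype=float)
    return np.sqrt(np.sum((b - a) ** 2))

# Two pixels observed in four spectral bands
print(euclidean_spectral_distance([60, 85, 40, 120], [62, 80, 55, 110]))  # ~18.8
```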


Euclidean spectral distance


 De also shows how far away a pixel is from a particular class mean.
 De serves as a membership function: if a pixel has a shorter distance to the center of one cluster than to the center of another, it is more likely to be a member of the former cluster because of spectral similarity.
 Unsupervised classification proceeds by making thousands of distance
calculations as a means of determining similarities for the many pixels
and groups within an image.


Manhattan Spectral Distance

 It is also known as the city-block distance.

 It is the algebraic sum of the absolute differences between two pixels in the same band, over every band used in classification.

 It is given by:

Dm = Σ (i = 1…n) |DNAi − DNBi|


Manhattan Spectral Distance


 Its absolute value eliminates the need to worry about which pixel
value should be subtracted from which pixel.

 No differences cancel one another in the summation, since all terms are positive.

 It is computationally simpler than the Euclidean distance.


Normalized Distance
 Dnorm (ND) refers to the absolute value of the difference between the means of two clusters divided by the sum of their standard deviations.
 For two classes A and B, with means μA and μB and standard deviations σA and σB, the ND is defined as:

ND = |μA − μB| / (σA + σB)


Normalized Distance
 It is applicable to two clusters of pixels rather than to individual pixels.

 It is the distance between the center of one cluster and another.

 A cluster can contain any number of members.

 It indicates the spectral separability between two clusters.

 A larger distance indicates a more prominent distinction between the two clusters.

 It helps to judge the quality of selected training samples for any two cover types before these samples are used in classification.


Unsupervised Classification
 It can be defined as the identification of natural groups, or structures,
within multispectral data.

 It is basically a cluster analysis in which pixels are grouped into


certain categories in terms of their similar spectral values.

 In this approach the input data are categorized into a user-predefined number of groups.


Unsupervised Classification
 No prior information regarding the scene or cover needs to be known.
 Post-processing steps link the spectral classes to meaningful ground cover.


Unsupervised Classification- Procedure


 The analyst specifies the minimum and maximum numbers of categories to be generated by the classification algorithm.
 These are based on the analyst’s knowledge of the scene or on the
user’s requirements that the final classification display a certain
number of classes
 Classification starts with a set of arbitrarily selected pixels as cluster
centers.
 These are selected at random to ensure that the analyst cannot
influence the classification and that the selected pixels are
representative of values found throughout the scene.


Unsupervised Classification- Procedure


 Algorithm then finds distances between pixels and forms initial
estimates of cluster centers as permitted by constraints specified by
the analyst.

 The class can be represented by a single point, known as the “class


centroid,” which can be thought of as the center of the cluster of pixels
for a given class.

 At this point, classes consist only of the arbitrarily selected pixels


chosen as initial estimates of class centroids


Unsupervised Classification- Procedure

 Next, all the remaining pixels in the scene are assigned to the nearest class
centroid.

 But the classes formed by this initial attempt are unlikely to be the optimal
set of classes and may not meet the constraints specified by the analyst.

 So the algorithm finds new centroids for each class.

 Then the entire scene is classified again, with each pixel assigned to the
nearest centroid.

 This process repeats until there is no significant change between the


location of old and new centroids and the classes meet all constraints
required by the operator


Unsupervised Classification- Classifiers


 Various unsupervised classification algorithms available are as
follows:
1. Moving Cluster Analysis – K-means clustering
2. Iterative Self- Organizing Data Analysis (ISODATA)
3. Agglomerative Hierarchical Clustering
4. Histogram based clustering


Unsupervised Classification-Moving Cluster Analysis – K-means


clustering
 Starts with specification of total number of spectral classes (K) to be
clustered from input data

 Computer arbitrarily selects this number of cluster centers as the


candidates.

 The Euclidean distance between each cluster center and every pixel is calculated, and each pixel is assigned to the cluster with the shortest spectral distance.

 Once all the pixels have been assigned to clusters, the sum of squared errors (SSE) is calculated from the pixels belonging to the respective clusters:

Unsupervised Classification-Moving Cluster Analysis – K-


means clustering
SSE = Σ (j = 1…k) Σ (i = 1…n) [DN(i, j) − mj]²

Where:
n = number of pixels enclosed in a given cluster (varies from cluster to cluster)
DN(i, j) = value of the ith pixel in the jth cluster
mj = mean of the jth cluster


Unsupervised Classification-Moving Cluster Analysis – K-


means clustering
 This is done iteratively.
 At the end of each iteration, the mean mj of each cluster is updated to the mean of all pixels in that cluster.
 Reiteration continues, each time with the newly derived means.
 The process continues until the SSE stabilizes and levels off.
 Iteration can be terminated when:
 either the number of iterations reaches a specified value, or
 the SSE convergence threshold is reached (a code sketch of the procedure follows below)
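A minimal K-means sketch along these lines (for simplicity it terminates after a fixed number of iterations rather than on an SSE convergence threshold, and the pixel values are synthetic):

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Minimal K-means: pixels is an (N, bands) array of DNs."""
    rng = np.random.default_rng(seed)
    # Arbitrarily select k pixels as the initial cluster centers
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster center (Euclidean distance)
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update each center to the mean of its member pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

pixels = np.random.default_rng(1).integers(0, 256, size=(1000, 4)).astype(float)
labels, centers = kmeans(pixels, k=5)
print(centers.round(1))
```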


Unsupervised Classification-Moving Cluster Analysis –


ISODATA clustering
 The Iterative Self-Organizing Data Analysis Technique (ISODATA) is very similar to K-means clustering and differs only in 3 additional steps that are undertaken to optimize the clusters.

1. Deletion : After a certain number of iterations, a particular cluster may be deleted if its number of member pixels falls below the pre-specified threshold.


Unsupervised Classification-Moving Cluster Analysis –


ISODATA clustering
2. Merging : During clustering the spectral distance between any two
clusters is constantly monitored. They are merged if their spectral
distance falls within the pre-specified threshold.

3. Splitting : New clusters may be created by splitting an existing cluster if its variance is too large or if it contains a number of pixels exceeding the specified threshold.


Unsupervised Classification-Moving Cluster Analysis –


ISODATA clustering
 These additional steps add to the adaptivity of the algorithm but make the computation more complex.
 Compared to K-means, ISODATA requires more parameters, such as:
 Minimum size (%) – for deletion
 Maximum standard deviation – for splitting
 Minimum distance – for merging


Supervised Classification
 Supervised classification is the process of using samples of known identity
(i.e., pixels already assigned to informational classes) to classify pixels of
unknown identity (i.e., to assign unclassified pixels to one of several
informational classes).

 Samples of known identity are those pixels located within training areas, or
training fields

 Steps in the supervised classification involves:


 Development of Classification scheme
 Determination of spectral bands
 Selection of classifier.


Supervised Classification- Procedure

1. Development of classification scheme
2. Selection of spectral bands
3. Selection of classifier
4. Selection of training samples (if the training samples are not satisfactory, reselect them)
5. Classification
6. Classified results (if the classification is not satisfactory, revisit the earlier steps)
7. Post-classification processing
8. Result representation


Supervised Classification-
Training Data

 Training fields are areas of known identity delineated


on the digital image, usually by specifying the corner
points of a square or rectangular area using line and
column numbers within the coordinate system of the
digital image.

 The analyst begins by assembling and studying maps


and aerial photographs of the area to be classified and
by investigating selected sites in the field.

 The objective is to identify a set of pixels that accurately represent the spectral variation present within each informational region.

Supervised Classification-
Key Characteristics of Training Areas
1. Numbers of Pixels
2. Size
3. Location
4. Number
5. Placement
6. Uniformity


Supervised Classification- Number of Pixels


 The operator should ensure that several individual training areas for
each category provide a total of at least 100 pixels for each category

 The number of training pixels also varies with the image classifier; e.g., MLC requires at least 10 to 30 times as many pixels as features for each class.


Supervised Classification- Size


 Each must be large enough to provide accurate estimates of the
properties of each informational class.

 But areas that are too large tend to include undesirable variation

 Joyce (1978) recommends that individual training areas be at least 4 ha in size at the absolute minimum and no more than 65 ha.

 Small training fields are difficult to locate accurately on the image



Supervised Classification- Location


 Each information class should be represented by several training areas positioned throughout the image.

 It is desirable for the analyst to use direct field observations in the


selection of training data

 Aerial observation or the use of good maps and aerial photographs can provide the basis for accurate delineation of training fields that cannot be inspected in the field.


Supervised Classification- Number

 The optimum number of training areas depends on


 the number of categories to be mapped,
 their diversity, and
 the resources that can be devoted to delineating training areas

 Each informational category or each spectral subclass should be represented by a number (5–10 at a minimum) of training areas to ensure that the spectral properties of each category are represented.

 Selection of multiple training areas is also desirable because


later in the classification process it may be necessary to
discard some training areas if they are discovered to be
unsuitable.

Supervised Classification- Placement


 Training areas should be placed within the image in a manner that
permits convenient and accurate location with respect to distinctive
features, such as water bodies, or boundaries between distinctive
features on the image.

 Boundaries of training fields should be placed well away from the


edges of contrasting parcels so that they do not encompass edge pixels


Supervised Classification- Uniformity


 Data within each training area should exhibit a unimodal frequency
distribution for each spectral band to be used

 Prospective training areas that exhibit bimodal histograms should be


discarded


Supervised Classification- Classifiers


 A variety of different methods have been devised to implement the
basic strategy of supervised classification.
 All of them use information derived from the training data as a means
of classifying those pixels not assigned to training fields.
 Three commonly used classifiers are :
1. Parallelepiped
2. Minimum Distance to Mean
3. Maximum Likelihood


Supervised Classification- Parallelepiped


Classification
 It is sometimes also known as the box decision rule, or level-slice procedure.
 It is based on the ranges of values within the training data to define
regions within a multidimensional data space.
 The spectral values of unclassified pixels are projected into data space;
those that fall within the regions defined by the training data are
assigned to the appropriate categories


Supervised Classification- Parallelepiped


Classification
 It follows the relationship:

Pixel X ∊ Cj if min DNj ≤ DNx ≤ max DNj

 The decision rule states that pixel X is a member of information class Cj if and only if its value falls inside the DN range of this class in the same band.

 The minimum and maximum are defined from the statistical parameters (mean and standard deviation) of the training samples of each information class.


Supervised Classification- Parallelepiped


Classification
 The maximum value is the mean plus one standard deviation, and the minimum is the mean minus one standard deviation.

 In a two-dimensional feature space this forms one rectangular box per class.

 All the pixels falling within a box are labelled as that class.

 When the boxes overlap, the pixels in the overlap region may be labelled as unclassified (AND) or arbitrarily assigned to one of the classes (OR), as in the sketch below.
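A minimal sketch of the box decision rule using mean ± 1 standard deviation per band; assigning a pixel to the first matching box corresponds to the arbitrary (OR) treatment of overlaps, and the class statistics are hypothetical:

```python
import numpy as np

def parallelepiped(pixel, class_stats):
    """Assign the pixel to the first class whose box (mean ± 1 std dev in
    every band) contains it; return None (unclassified) if no box matches."""
    x = np.asarray(pixel, dtype=float)
    for name, (mean, std) in class_stats.items():
        lo = np.asarray(mean) - np.asarray(std)
        hi = np.asarray(mean) + np.asarray(std)
        if np.all((x >= lo) & (x <= hi)):
            return name
    return None

# Hypothetical per-class (mean, std dev) in two bands, from training samples
stats = {"water":  ([30.,  20.], [5., 4.]),
         "forest": ([45.,  90.], [8., 10.])}
print(parallelepiped([32, 22], stats))   # water
print(parallelepiped([200, 10], stats))  # None -> unclassified
```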


Supervised Classification- Parallelepiped


Classification
 Overlap of decision regions happens when the two bands are correlated for that class.

 To solve the resulting problem, the analyst may taper the boxes, avoiding the overlap by interactively modifying the decision boundary in a stepped fashion.

 Its accuracy is limited but it is computationally very efficient.



Supervised Classification- Minimum Distance to Mean


Classification
 It is based on comparing the spectral distances between the pixel to be assigned and the centers (means) of all information classes that have been derived from training samples.

Pixel X ∊ Cj if d(Cj) = min[d(C1), d(C2), …, d(Cm)]     (1)

 min[d(C1), d(C2), …, d(Cm)] is a function that returns the smallest distance among those inside the brackets.


Supervised Classification- Minimum Distance to Mean


Classification
 d(Cj) is the Euclidean distance between pixel X and the center of information class Cj:

d(Cj) = √[ Σ (i = 1…n) (DN(i) − Cij)² ]

where DN(i) is the DN of pixel X in band i and Cij is the mean of class Cj in band i.

 For every pixel in the input image this computation is repeated m times, once for each information class.


Supervised Classification- Minimum Distance to Mean


Classification
 This classifier causes the spectral space to be partitioned into Voronoi polygons.
 The boundaries of these polygons are formed by straight lines that bisect the line connecting the centers (means) of the two nearest clusters.
 All pixels falling inside one of the polygons receive the identity of the information class enclosed by that polygon.
 This classifier considers only the mean, not the standard deviation (see the sketch below).
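A minimal sketch of the decision rule (the class means are hypothetical):

```python
import numpy as np

def minimum_distance_classify(pixel, class_means):
    """Assign the pixel to the information class with the nearest mean."""
    x = np.asarray(pixel, dtype=float)
    distances = {name: np.linalg.norm(x - np.asarray(m, dtype=float))
                 for name, m in class_means.items()}
    return min(distances, key=distances.get)

# Hypothetical class means in three bands, derived from training samples
means = {"water": [25., 18., 10.],
         "soil": [90., 85., 80.],
         "vegetation": [40., 70., 30.]}
print(minimum_distance_classify([35, 65, 28], means))  # vegetation
```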



Supervised Classification- Maximum Likelihood


Classification
 This classifier makes use of probability of a pixel belonging to an
information class as a decision rule.

 It depends on the second-order statistics of a Gaussian probability density function model for each class.

Pixel X ∊ Cj if p(Cj|X) = max[p(C1|X), p(C2|X), …, p(Cm|X)]

 max[p(C1|X), p(C2|X), …, p(Cm|X)] is a function that returns the largest probability among those inside the brackets.


Supervised Classification- Maximum Likelihood


Classification
 p(Cj|X) denotes the conditional probability of pixel X being a member of class Cj. It is obtained using Bayes' theorem:

p(Cj|X) = p(X|Cj) · p(Cj) / p(X)     (1)

 p(X|Cj) represents the conditional probability of encountering pixel X in class Cj.
 p(Cj) stands for the occurrence probability of class Cj in the input image (prior).
 p(X) denotes the probability of pixel X occurring in the input image (prior).



Supervised Classification- Maximum Likelihood


Classification
 p(X) is obtained from training samples by summing, over every information class, the probability of finding X in that class multiplied by the proportion of the respective class:

p(X) = Σ (j = 1…m) p(X|Cj) · p(Cj)     (2)

 Comparing equations (1) and (2) raises an apparent contradiction: how can the occurrence probability p(Cj) of a specific information class be known before classification?


Supervised Classification- Maximum Likelihood


Classification
 This contradiction is resolved by two approaches :
1. The percentage of each land cover can be derived from another classification, such as an unsupervised classification.
2. The probability can be assumed to be equal for all cover classes.
 No matter which method is used to resolve p(Cj), it does not affect the determination of p(Cj|X).


Supervised Classification- Maximum Likelihood


Classification
p(Cj|X) = p(X|Cj) · p(Cj) / Σ (j = 1…m) [p(X|Cj) · p(Cj)]

which, with equal priors, reduces to p(X|Cj) / Σ (j = 1…m) p(X|Cj).

 This calculation is based on a Gaussian normal-distribution probability model, with the assumption of a normal distribution for all training samples:

p(X|Cj) = [1 / (σj · √(2π))] · exp( −(x − μj)² / (2σj²) )
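A minimal single-band sketch of the calculation under equal priors (the class statistics here are hypothetical):

```python
import math

def gaussian_likelihood(x, mean, std):
    """p(X|Cj) for one band under the normal-distribution assumption."""
    return math.exp(-(x - mean) ** 2 / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

# Hypothetical single-band class statistics from training samples; equal priors
classes = {"water": (22.0, 4.0), "soil": (85.0, 9.0)}
x = 30.0
likelihoods = {c: gaussian_likelihood(x, m, s) for c, (m, s) in classes.items()}
total = sum(likelihoods.values())
for c, p in likelihoods.items():
    print(c, round(p / total, 4))   # posterior p(Cj|X); "water" wins for x = 30
```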


Supervised Classification- Maximum Likelihood


Classification

 The probability for an information class to occur is highest at the center of a cluster and decreases with distance away from it.
 The probability is
represented as circles
of equiprobability.


Which Classifier to Choose?

Criterion                    | Parallelepiped     | Minimum Distance          | MLC
Ease of understanding        | Easy to understand | Moderate, relatively easy | Complex, hard to understand
Usefulness                   | Not useful         | Useful                    | Very useful
Computation intensity        | Simple             | Moderate                  | Intensive
Statistical parameters used  | Minimum, maximum   | Mean only                 | Mean and standard deviation
Assumption (of distribution) | No                 | No                        | Yes


Advantages of Unsupervised over Supervised


Classification
1. No extensive prior knowledge of the region is required.

2. Opportunity for human error is minimized, since the operator specifies only the number of categories desired and sometimes constraints governing the distinctness and uniformity of groups.

3. Unique classes are recognized as distinct units


Disadvantages of Unsupervised over Supervised


Classification

1. Unsupervised classification identifies spectrally


homogeneous classes within the data that do not
necessarily correspond to the informational categories that
are of interest to the analyst.

2. The analyst has limited control over the menu of classes


and their specific identities.

3. Spectral properties of specific informational classes will change over time. Relationships between informational classes and spectral classes are not constant, and relationships defined for one image cannot be extended to others.

Advantages of Supervised over Unsupervised


Classification
1. First, the analyst has control of a selected menu of informational
categories tailored to a specific purpose and geographic region.

2. Second, supervised classification is tied to specific areas of known identity,


determined through the process of selecting training areas.

3. Third, the analyst using supervised classification is not faced with the
problem of matching spectral categories on the final map with the
informational categories of interest

4. Fourth, the operator may be able to detect serious errors in classification


by examining training data to determine whether they have been correctly
classified by the procedure


Disadvantages of Supervised over Unsupervised


Classification
1. Operator-defined classes may not match the natural classes that exist
within the data and therefore may not be distinct or well defined in
multidimensional data space.

2. Training data are often defined primarily with reference to informational


categories and only secondarily with reference to spectral properties.

3. Training data selected by the analyst may not be representative of


conditions encountered throughout the image.

4. Selection of training data can be a time-consuming, expensive, and tedious


undertaking
