
REMOTE SENSING AND GIS APPLICATIONS

UNIT – I SYLLABUS
Introduction to remote sensing: Basic concepts of remote sensing, electromagnetic radiation,
electromagnetic spectrum, interaction with atmosphere, energy interaction with the earth
surfaces, Characteristics of remote sensing systems
Sensors and platforms: Introduction, types of sensors, airborne remote sensing, space borne
remote sensing, image data characteristics, digital image data formats - band interleaved by pixel,
band interleaved by line, band sequential; IRS, LANDSAT, SPOT, MODIS,
ASTER, RISAT and CARTOSAT

INTRODUCTION
Nowadays the field of Remote Sensing and GIS has become exciting and glamorous, with
rapidly expanding opportunities. Many organizations spend large amounts of money on these
fields. Here the question arises: why have these fields become so important in recent years? There
are two main reasons behind this. 1) Nowadays scientists, researchers, students, and even common
people are showing great interest in a better understanding of our environment. By environment
we mean the geographic space of their study area and the events that take place there. In other
words, we have come to realize that geographic space, along with the data describing it, is part of
our everyday world; almost every decision we take is influenced or dictated by some fact of
geography. 2) Advancements in sophisticated space technology (which can provide large volumes
of spatial data), along with declining costs of computer hardware and software (which can handle
these data), have made Remote Sensing and GIS applicable not only to complex environmental/
spatial situations but also affordable to an increasingly wider audience.

REMOTE SENSING AND ITS COMPONENTS


Remote sensing is the science of acquiring information about the Earth's surface without
actually being in contact with it. This is done by sensing and recording reflected or emitted
energy and processing, analyzing, and applying that information. In much of remote sensing,
the process involves an interaction between incident radiation and the targets of interest. This is
exemplified by the use of imaging systems, where the following seven elements are involved.
Note, however, that remote sensing also involves the sensing of emitted energy and the use of
non-imaging sensors.
Fig 1.1- Components of Remote Sensing

1. Energy Source or Illumination (A) – the first requirement for remote sensing is to have an
energy source which illuminates or provides electromagnetic energy to the target of interest.
2. Radiation and the Atmosphere (B) – as the energy travels from its source to the target, it
will come in contact with and interact with the atmosphere it passes through. This interaction
may take place a second time as the energy travels from the target to the sensor.
3. Interaction with the Target (C) - once the energy makes its way to the target through the
atmosphere, it interacts with the target depending on the properties of both the target and the
radiation.
4. Recording of Energy by the Sensor (D) - after the energy has been scattered by, or emitted
from the target, we require a sensor (remote - not in contact with the target) to collect and record
the electromagnetic radiation.
5. Transmission, Reception, and Processing (E) - the energy recorded by the sensor has to be
transmitted, often in electronic form, to a receiving and processing station where the data are
processed into an image (hardcopy and/or digital).
6. Interpretation and Analysis (F) - the processed image is interpreted, visually and/or digitally
or electronically, to extract information about the target which was illuminated.
7. Application (G) - the final element of the remote sensing process is achieved when we apply
the information we have been able to extract from the imagery about the target in order to better
understand it, reveal some new information, or assist in solving a particular problem.

HISTORY OF REMOTE SENSING:


1839 - first photograph
1858 - first photograph from a balloon
1903 - first aeroplane
1909 - first photograph from an aeroplane
1903-04 - black and white infrared film
World War I and World War II - wartime aerial photography
1960 - beginning of remote sensing from space

ELECTROMAGNETIC RADIATION: Electromagnetic energy or electromagnetic radiation
(EMR) is the energy propagated in the form of an advancing interaction between electric and
magnetic fields (Sabins, 1978). It travels with the velocity of light. Visible light, ultraviolet
rays, infrared rays, heat, radio waves and X-rays are all different forms of electromagnetic energy.
Electromagnetic energy (E) can be expressed either in terms of frequency (f) or wavelength (λ)
of radiation as
E = h f = h c / λ ---- (1)
where h is Planck's constant (6.626 × 10⁻³⁴ joule-sec), c is the speed of light (3 × 10⁸ m/sec),
f is the frequency expressed in hertz and λ is the wavelength expressed in micrometers
(1 µm = 10⁻⁶ m).

As can be observed from equation (1), shorter wavelengths have higher energy content and
longer wavelengths have lower energy content.
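
As a quick numerical check of equation (1), the short Python sketch below computes the photon
energy for a few illustrative wavelengths (the wavelength values are chosen only for illustration
and are not taken from the text):

# Minimal sketch of equation (1): E = h*f = h*c/lambda
h = 6.626e-34   # Planck's constant, J*s
c = 3.0e8       # speed of light, m/s

def photon_energy(wavelength_um):
    """Photon energy in joules for a wavelength given in micrometers."""
    return h * c / (wavelength_um * 1e-6)

for lam in (0.4, 0.7, 10.0):   # blue light, red light, thermal infrared (example values)
    print(f"{lam:5.1f} um -> {photon_energy(lam):.2e} J")
# Shorter wavelengths give larger E, as stated in the text.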

ELECTROMAGNETIC SPECTRUM: The first requirement for remote sensing is to have an
energy source to illuminate the target (unless the sensed energy is being emitted by the target).
This energy is in the form of electromagnetic radiation. All electromagnetic radiation has
fundamental properties and behaves in predictable ways according to the basics of wave theory.
Electromagnetic radiation consists of an electrical field (E) which varies in magnitude in a
direction perpendicular to the direction in which the radiation is travelling, and a magnetic field
(M) oriented at right angles to the electrical field. Both these fields travel at the speed of light (c).
Two characteristics of electromagnetic radiation are particularly important to understand remote
sensing. These are the wavelength and frequency. Electromagnetic radiation (EMR) can be
described as an electromagnetic wave that travels through space at the speed of light c, which is
3 × 10⁸ meters per second. Theoretical models of random media, including anisotropic effects,
randomly distributed discrete scatterers and rough-surface effects, have been studied for remote
sensing with electromagnetic waves.

The wavelength is the length of one wave cycle, which can be measured as the distance
between successive wave crests. Wavelength is usually represented by the Greek letter lambda
(λ). Wavelength is measured in meters (m) or some factor of meters such as nanometers
(nm, 10⁻⁹ m), micrometers (μm, 10⁻⁶ m) or centimeters (cm, 10⁻² m).
Frequency refers to the number of cycles of a wave passing a fixed point per unit of time.
Frequency is normally measured in hertz (Hz), equivalent to one cycle per second, and various
multiples of hertz.
Wavelength and frequency are related by the following formula:
c = λ × f
where c is the speed of light, λ is the wavelength and f is the frequency.
Therefore, the two are inversely related to each other. The shorter the wavelength, the
higher the frequency. The longer the wavelength, the lower the frequency. Understanding the
characteristics of electromagnetic radiation in terms of their wavelength and frequency is crucial
to understanding the information to be extracted from remote sensing data.
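
A small Python sketch of this relation (the example wavelengths below are illustrative
assumptions, not values from the text):

# Minimal sketch of c = lambda * f
c = 3.0e8  # speed of light, m/s

def frequency_hz(wavelength_m):
    """Frequency (Hz) corresponding to a wavelength given in meters."""
    return c / wavelength_m

examples = {
    "green light (0.55 um)": 0.55e-6,
    "thermal IR (10 um)":    10e-6,
    "microwave (5 cm)":      0.05,
}
for name, lam in examples.items():
    print(f"{name:24s} -> {frequency_hz(lam):.3e} Hz")
# Longer wavelengths correspond to lower frequencies, as the text states.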
The electromagnetic spectrum ranges from the shorter wavelengths (including gamma and
x-rays) to the longer wavelengths (including microwaves and broadcast radio waves). There are
several regions of the electromagnetic spectrum which are useful for remote sensing.
WAVELENGTH REGIONS IMPORTANT TO REMOTE SENSING:
Ultraviolet or UV: For most purposes, the ultraviolet or UV portion of the spectrum has the
shortest wavelengths that are practical for remote sensing. This region lies just beyond the violet
portion of the visible wavelengths, hence its name. Some earth surface materials, primarily rocks
and minerals, fluoresce or emit visible radiation when illuminated by UV radiation.
Visible Spectrum: The light which our eyes - our "remote sensors" - can detect is part of
the visible spectrum. It is important to recognize how small the visible portion is relative to the
rest of the spectrum. There is a lot of radiation around us which is "invisible" to our eyes, but can
be detected by other remote sensing instruments and used to our advantage. The visible
wavelengths cover a range from approximately 0.4 to 0.7 μm. The longest visible wavelength is
red and the shortest is violet. Common wavelengths of what we perceive as particular colours
from the visible portion of the spectrum are listed below. It is important to note that this is the
only portion of the spectrum we can associate with the concept of colours.
Violet: 0.4 -0.446 μm
Blue: 0.446 -0.500 μm
Green: 0.500 -0.578 μm
Yellow: 0.578 -0.592 μm
Orange: 0.592 -0.620 μm
Red: 0.620 -0.7 μm
Blue, green, and red are the primary colours or wavelengths of the visible spectrum. They
are defined as such because no single primary colour can be created from the other two, but all
other colours can be formed by combining blue, green, and red in various proportions. Although
we see sunlight as a uniform or homogeneous colour, it is actually composed of various
wavelengths of radiation in primarily the ultraviolet, visible and infrared portions of the
spectrum. The visible portion of this radiation can be shown in its component colours when
sunlight is passed through a prism, which bends the light in differing amounts according to
wavelength.
Infrared (IR): The next portion of the spectrum of interest is the infrared (IR) region,
which covers the wavelength range from approximately 0.7 μm to 100 μm, more than 100 times
as wide as the visible portion. The infrared region can be divided into three categories based on
their radiation properties: the reflected near-IR, the middle IR and the thermal IR. The reflected
near-IR covers wavelengths from approximately 0.7 μm to 1.3 μm and is commonly used to expose
black and white and color-infrared sensitive film. The middle-infrared region includes energy with
a wavelength of 1.3 to 3.0 μm. The thermal IR region is quite different from the visible and
reflected IR portions, as this energy is essentially the radiation that is emitted from the Earth's
surface in the form of heat. The thermal IR covers wavelengths from approximately 3.0 μm to
100 μm.
Microwave: A wavelength (or frequency) interval in the electromagnetic spectrum is
commonly referred to as a band, channel or region. The portion of the spectrum of more recent
interest to remote sensing is the microwave region, from about 1 mm to 1 m. This covers the
longest wavelengths used for remote sensing. The shorter wavelengths have properties similar to
the thermal infrared region, while the longer wavelengths approach the wavelengths used for
radio broadcasts.
PRINCIPLES OF REMOTE SENSING
Different objects reflect or emit different amounts of energy in different bands of the electromagnetic
spectrum. The amount of energy reflected or emitted depends on the properties of both the material and
the incident energy (angle of incidence, intensity and wavelength). Detection and discrimination of
objects or surface features is done through the uniqueness of the reflected or emitted electromagnetic
radiation from the object. A device to detect this reflected or emitted electro-magnetic radiation from an
object is called a “sensor” (e.g., cameras and scanners). A vehicle used to carry the sensor is called a
“platform” (e.g., aircrafts and satellites).

MAIN STAGES IN REMOTE SENSING


A. Emission of electromagnetic radiation
• The Sun or an EMR source located on the platform
B. Transmission of energy from the source to the object
• Absorption and scattering of the EMR while transmission
C. Interaction of EMR with the object and subsequent reflection and emission
D. Transmission of energy from the object to the sensor
E. Recording of energy by the sensor
• Photographic or non-photographic sensors
F. Transmission of the recorded information to the ground station
G. Processing of the data into digital or hard copy image
H. Analysis of data

Fig:- Important stages in remote sensing


Fig: - Electromagnetic Remote Sensing Process with overview in GIS

ENERGY INTERACTIONS WITH THE ATMOSPHERE


Before radiation used for remote sensing reaches the Earth's surface it has to travel through
some distance of the Earth's atmosphere. Particles and gases in the atmosphere can affect the
incoming light and radiation. These effects are caused by the mechanisms of scattering and
absorption.

SCATTERING: Scattering occurs when particles or large gas molecules present in the atmosphere
interact with and cause the electromagnetic radiation to be redirected from its original path. How
much scattering takes place depends on several factors including the wavelength of the radiation,
the abundance of particles or gases, and the distance the radiation travels through the
atmosphere. There are three (3) types of scattering which take place.
RAYLEIGH SCATTERING: Rayleigh scattering occurs when particles are very small compared to
the wavelength of the radiation. These could be particles such as small specks of dust or nitrogen
and oxygen molecules. Rayleigh scattering causes shorter wavelengths of energy to be scattered
much more than longer wavelengths. Rayleigh scattering is the dominant scattering mechanism
in the upper atmosphere. The fact that the sky appears "blue" during the day is because of this
phenomenon. As sunlight passes through the atmosphere, the shorter wavelengths (i.e. blue) of
the visible spectrum are scattered more than the other (longer) visible wavelengths. At sunrise
and sunset the light has to travel farther through the atmosphere than at midday and the scattering
of the shorter wavelengths is more complete; this leaves a greater proportion of the longer
wavelengths to penetrate the atmosphere.
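
As a rough numerical illustration of this wavelength dependence, the sketch below uses the
standard Rayleigh relation that scattering strength is proportional to 1/λ⁴ (this proportionality is
a well-known physical result assumed here; it is not stated explicitly in the text above):

# Rayleigh scattering strength is proportional to 1/lambda^4 (assumed standard relation)
def relative_rayleigh(wavelength_um, reference_um=0.7):
    """Scattering strength relative to the reference (red) wavelength."""
    return (reference_um / wavelength_um) ** 4

print(round(relative_rayleigh(0.4), 1))   # blue vs. red: ~9.4 times more scattering
print(round(relative_rayleigh(0.55), 1))  # green vs. red: ~2.6 times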

MIE SCATTERING: Mie scattering occurs when the particles are just about the same size as
the wavelength of the radiation. Dust, pollen, smoke and water vapour are common causes of
Mie scattering which tends to affect longer wavelengths than those affected by Rayleigh
scattering. Mie scattering occurs mostly in the lower portions of the atmosphere where larger
particles are more abundant, and dominates when cloud conditions are overcast.
NON-SELECTIVE SCATTERING: The final scattering mechanism of importance is called
non-selective scattering. This occurs when the particles are much larger than the wavelength of
the radiation. Water droplets and large dust particles can cause this type of scattering. Non-
selective scattering gets its name from the fact that all wavelengths are scattered about equally.
This type of scattering causes fog and clouds to appear white to our eyes because blue, green,
and red light are all scattered in approximately equal quantities (blue+green+red light = white
light).
ABSORPTION: Absorption is the other main mechanism at work when electromagnetic
radiation interacts with the atmosphere. In contrast to scattering, this phenomenon causes
molecules in the atmosphere to absorb energy at various wavelengths. Ozone, carbon dioxide,
and water vapour are the three main atmospheric constituents which absorb radiation. Ozone
serves to absorb the harmful (to most living things) ultraviolet radiation from the sun. Without this
protective layer in the atmosphere, our skin would burn when exposed to sunlight. Carbon
dioxide is referred to as a greenhouse gas. This is because it tends to absorb radiation strongly in
the far infrared portion of the spectrum - that area associated with thermal heating - which serves
to trap this heat inside the atmosphere. Water vapour in the atmosphere absorbs much of the
incoming long wave infrared and shortwave microwave radiation (between 22μm and 1m). The
presence of water vapour in the lower atmosphere varies greatly from location to location and at
different times of the year. For example, the air mass above a desert would have very little water
vapour to absorb energy, while the tropics would have high concentrations of water vapour (i.e.
high humidity).
ATMOSPHERIC WINDOWS
While EMR is transmitted from the sun to the surface of the earth, it passes through the
atmosphere. Here, electromagnetic radiation is scattered and absorbed by gases and dust
particles. Besides the major atmospheric gaseous components like molecular nitrogen and
oxygen, other constituents like water vapour, methane, hydrogen, helium and nitrogen
compounds play an important role in modifying electromagnetic radiation. This affects image
quality. Regions of the electromagnetic spectrum in which the atmosphere is transparent are
called atmospheric windows. In other words, spectral regions of electromagnetic radiation that
pass through the atmosphere without much attenuation are called atmospheric
windows. The atmosphere is practically transparent in the visible region of the electromagnetic
spectrum and therefore, many of the satellite based remote sensing sensors are designed to
collect data in this region. Some of the commonly used atmospheric windows are shown in the
figure.
0.38-0.72 microns (visible), 0.72-3.00 microns (near infra-red and middle infra-red), and
8.00-14.00 microns (thermal infra-red).
Fig:- Atmospheric windows: transmission (0-100%) versus wavelength (microns to 1 mm) for the
UV, visible and infrared regions, showing where energy is blocked

ENERGY INTERACTIONS WITH THE EARTH'S SURFACE FEATURES


When electromagnetic energy is incident on any feature of earth's surface, such as a water body,
various fractions of energy get reflected, absorbed, and transmitted as shown in Fig. Applying
the principle of conservation of energy, the relationship can be expressed as:
EI (λ) = ER (λ) + EA (λ) + ET (λ)
Where, EI = Incident energy
ER = Reflected energy
EA = Absorbed energy
and, ET = Transmitted energy

Fig:- Basic Interaction between Electromagnetic Energy and a water body


All energy components are functions of wavelength (λ). In remote sensing, the amount of
reflected energy ER(λ) is more important than the absorbed and transmitted energies. Therefore,
it is more convenient to rearrange these terms like
ER (λ) = EI (λ) - [EA (λ) + ET (λ)]
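
A trivial numerical sketch of this energy balance (the fractions used below are illustrative
assumptions, not measured values):

# Energy balance at a surface: E_incident = E_reflected + E_absorbed + E_transmitted
E_incident    = 100.0   # incident energy at some wavelength, arbitrary units
E_absorbed    = 55.0    # illustrative value
E_transmitted = 30.0    # illustrative value

E_reflected = E_incident - (E_absorbed + E_transmitted)
print(E_reflected, E_reflected / E_incident)   # 15.0 and a reflectance of 0.15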

Characteristics of Real Remote Sensing Systems: Real remote sensing systems employed in
general operation and utility have many shortcomings when compared with the ideal system
described later in this unit.
i. Energy Source: The energy sources for real systems are usually non-uniform over various
wavelengths and also vary with time and space. This has major effect on the passive remote
sensing systems. The spectral distribution of reflected sunlight varies both temporally and
spatially. Earth surface materials also emit energy to varying degrees of efficiency. A real remote
sensing system needs calibration for source characteristics.
ii. The Atmosphere: The atmosphere modifies the spectral distribution and strength of the
energy received or emitted (Fig. 8). The effect of atmospheric interaction varies with the
wavelength associated, sensor used and the sensing application. Calibration is required to
eliminate or compensate these atmospheric effects.
iii. The Energy/Matter Interactions at the Earth's Surface: Remote sensing is based on the
principle that each and every material reflects or emits energy in a unique, known way. However,
spectral signatures may be similar for different material types. This makes differentiation
difficult. Also, the knowledge of most of the energy/matter interactions for earth surface features
is either at elementary level or even completely unknown.
iv. The Sensor: Real sensors have fixed limits of spectral sensitivity i.e., they are not sensitive to
all wavelengths. Also, they have limited spatial resolution (efficiency in recording spatial
details). Selection of a sensor requires a trade-off between spatial resolution and spectral
sensitivity. For example, while photographic systems have very good spatial resolution but poor
spectral sensitivity, non-photographic systems tend to have poorer spatial resolution but better
spectral sensitivity.
v. The Data Handling System: Human intervention is necessary for processing sensor data,
even though machines are also included in data handling. This makes the idea of real-time data
handling almost impossible. The amount of data generated by the sensors far exceeds the data
handling capacity.
vi. The Multiple Data Users: The success of any remote sensing mission lies with the user who
ultimately transforms the data into information. This is possible only if the user understands the
problem thoroughly and has a wide knowledge of how the data are generated. The user should know how
to interpret the data generated and should know how best to use them.

PLATFORMS AND SENSORS


TYPES OF PLATFORMS
The base on which remote sensors are placed to acquire information about the Earth's
surface is called a platform. Platforms can be stationary, like a tripod (for field observation) or a
stationary balloon, or mobile, like aircraft and spacecraft. The types of platforms depend
upon the needs as well as constraints of the observation mission. Remote sensing platforms can
be classified as follows, based on the elevation from the Earth’s surface at which these platforms
are placed.
Ground level remote sensing: Ground level remote sensors are very close to the ground.
They are basically used to develop and calibrate sensors for different features on the Earth’s
surface.
Aerial remote sensing: this is of two types - low altitude aerial remote sensing and
high altitude aerial remote sensing.
Space borne remote sensing: space shuttles, polar orbiting satellites and geo-stationary
satellites fall under this category. From each of these platforms, remote sensing can be done either in
passive or active mode.
Airborne and Space-borne Remote Sensing: In airborne remote sensing, downward or
sideward looking sensors mounted on aircrafts are used to obtain images of the earth's surface.
Very high spatial resolution images (20 cm or less) can be obtained through this. However, it is
not suitable to map a large area. Less coverage area and high cost per unit area of ground
coverage are the major disadvantages of airborne remote sensing. While airborne remote sensing
missions are mainly one-time operations, space-borne missions offer continuous monitoring of
the earth features. LiDAR, analog aerial photography, videography, thermal imagery and digital
photography are commonly used in airborne remote sensing.
In space-borne remote sensing, sensors mounted on space shuttles or satellites orbiting the
Earth are used. There are several remote sensing satellites (Geostationary and Polar orbiting)
providing imagery for research and operational applications. While Geostationary or
Geosynchronous Satellites are used for communication and meteorological purposes, polar
orbiting or sun-synchronous satellites are essentially used for remote sensing. The main
advantages of space-borne remote sensing are large area coverage, less cost per unit area of
coverage, continuous or frequent coverage of an area of interest, automatic/ semiautomatic
computerized processing and analysis. However, when compared to aerial photography, satellite
imagery has a lower resolution. Landsat satellites, Indian remote sensing (IRS) satellites,
IKONOS, SPOT satellites, AQUA and TERRA of NASA and INSAT satellite series are a few
examples.
TYPES OF REMOTE SENSING
Passive remote sensing: The sun provides a very convenient source of energy for remote
sensing. The sun's energy is either reflected, as it is for visible wavelengths, or absorbed and then
reemitted, as it is for thermal infrared wavelengths. Remote sensing systems which measure
energy that is naturally available are called passive sensors. Passive sensors can only be used to
detect energy when the naturally occurring energy is available. For all reflected energy, this can
only take place during the time when the sun is illuminating the Earth. There is no reflected
energy available from the sun at night. Energy that is naturally emitted (such as thermal infrared)
can be detected day or night, as long as the amount of energy is large enough to be recorded.

Active remote sensing: Active sensors, on the other hand, provide their own energy
source for illumination. The sensor emits radiation which is directed toward the target to be
investigated. The radiation reflected from that target is detected and measured by the sensor.
Advantages for active sensors include the ability to obtain measurements anytime, regardless of
the time of day or season. Active sensors can be used for examining wavelengths that are not
sufficiently provided by the sun, such as microwaves, or to better control the way a target is
illuminated. However, active systems require the generation of a fairly large amount of energy to
adequately illuminate targets. Some examples of active sensors are a laser fluorosensor and a
synthetic aperture radar (SAR).
Ideal Remote Sensing System: The basic components of an ideal remote sensing system
include:
i. A Uniform Energy Source which provides energy over all wavelengths, at a constant,
known, high level of output.
ii. A Non-interfering Atmosphere which will not modify either the energy transmitted
from the source or emitted (or reflected) from the object in any manner.
iii. A Series of Unique Energy/Matter Interactions at the Earth's Surface which
generate reflected and/or emitted signals that are selective with respect to wavelength and also
unique to each object or earth surface feature type.
iv. A Super Sensor which is highly sensitive to all wavelengths. A super sensor would be
simple, reliable, accurate, economical, and would require no power or space. This sensor yields data on
the absolute brightness (or radiance) from a scene as a function of wavelength.
v. A Real-Time Data Handling System which generates the instantaneous radiance versus
wavelength response and processes it into an interpretable format in real time. The data derived are
unique to a particular terrain and hence provide insight into its physical, chemical and biological
state.
vi. Multiple Data Users having knowledge in their respective disciplines and also in
remote sensing data acquisition and analysis techniques. The information collected will be
available to them faster and at less expense. This information will aid the users in various
decision making processes and also further in implementing these decisions.
WHAT IS AN IMAGE?
In a broad sense, an image is a picture or photograph. Images are the most common and
convenient means of storing, conveying and transmitting information. They concisely convey
information about positions, sizes and interrelationships between objects and portray spatial
information that we can recognize as objects.
An image is usually a summary of the information in the object it represents. The
information of an image is presented in tones and colors. In a strict sense, photographs are
images, which are recorded on photographic film and have been converted into paper form by
some chemical processing of the film whereas an image is any pictorial representation of
information. So, it can be said that all photographs are images but not all images are
photographs.

WHAT IS A DIGITAL IMAGE?


When a paper photograph is scanned through a scanner and stored in a computer, it becomes a
digital image as it has been converted into digital mode. When you see a paper photograph and
its digital version in a computer, you do not see any difference. In digital mode, photographic
information is stored as an array of discrete numbers. Each number corresponds to a discrete dot,
i.e. one image element in an image. This image element is the smallest part of an image and is
generally known as picture element or pixel or pel.
These numbers vary from place to place within the image depending upon the tonal
variation. Number of pixels in an image depends upon the image size (length and width of the
image). In any image, bright areas are represented by higher values whereas dark areas are
represented by lower values. The values are known as digital number. We know now that a
digital image is composed of a finite number of pixels, each of which has a particular location
and value. In other words, when the spatial coordinates (x, y) and the amplitude values of 'f' are all
finite, discrete quantities, both in spatial coordinates and in brightness, the image is called a digital image.

Fig- A digital image (left) and its corresponding values (centre). Note the variation in the brightness and
the change in the corresponding digital numbers. Highlighted block in the centre figure shows one pixel.
The figure at right shows the range of values corresponding to the brightness
Fig- Arrangement of rows and columns of an image of size 4 × 4 (4 rows and 4columns). Left figure
shows the numerical values in the image and the table at right shows the representation of pixel location
for an image of size 4 × 4. You can observe that at location (1, 4), i.e. row 1 and column 4, the pixel value
is 24
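
As a small illustration of pixel indexing (only the value 24 at row 1, column 4 is taken from the
figure description above; the remaining values are made up for illustration):

import numpy as np

# Illustrative 4 x 4 digital image
image = np.array([
    [12, 18, 20, 24],
    [10, 15, 22, 30],
    [ 8, 14, 25, 35],
    [ 5, 11, 28, 40],
])

# The figure uses 1-based (row, column) indexing; NumPy arrays are 0-based
row, col = 1, 4
print(image[row - 1, col - 1])   # 24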

TYPES OF DIGITAL IMAGE


Digital images can be classified into several types based on their form or method of generation. The actual
information stored in the digital image data is the brightness information in each spectral band and, in
general, digital images are of the following three types.
1) Black and White or Binary image
2) Grey Scale or Monochrome Image
3) Color or RGB Image
1. Black and White or Binary image
Pixels in this type of images show only two colors, either black or white and hence the
pixels are represented by only two possible values for each pixel, 0 for black and 1 for white.
Since a black and white image can be described in terms of binary values, such images are also
known as binary images or bi-level or two-level Images.
This also means that binary images require only a single bit (0 or 1) to represent each
pixel; hence, storing these kinds of images requires only one bit per pixel. The inability to represent
intermediate shades of gray limits the usefulness of binary images in dealing with remote sensing or
photographic images.

Fig- Representation of (1) black and white and (2) gray scale images. Note the range of values
for the highlighted boxes in the two types of images
2. Grey Scale or Monochrome Image

Pixels in this type of image show black and white along with the different shades of gray
between the two, as shown in the figure. Generally, black is represented by the value 0, white
by 255, and the in-between gray shades by values between these two. This range means
that each pixel can be represented by eight bits, or exactly one byte. In other words, storing a
gray scale image requires 8 bits per pixel.

3. Color or RGB Image

Each pixel in this type of image has a particular color which is described by the amount of red,
green and blue in it (Fig. 10.5). Color images are constructed by stacking three gray scale
images, where each image (i.e. band) corresponds to a different color; hence there are three values
(one each for the red, green and blue components) corresponding to each pixel. RGB (Red, Green
and Blue) is the commonly used color space to visualize color images. Red, green and blue are the
primary colors for mixing light and are called additive primary colors. Any other color can be
created by mixing the correct amounts of red, green and blue light. If each of these three
components has a range of 0-255, there could be a total of 256³ (about 16.7 million) different
possible colors in a color image. Storing a color image requires 24 bits per pixel.

Fig- Representation of a colour image. Note the range of values of its three components, i.e. red,
green and blue
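
A short NumPy sketch of this idea (the image size is an arbitrary assumption for illustration):
stacking three 8-bit bands gives 3 bytes, i.e. 24 bits, per pixel.

import numpy as np

rows, cols = 100, 100   # illustrative image size
red   = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
green = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)
blue  = np.random.randint(0, 256, (rows, cols), dtype=np.uint8)

rgb = np.dstack([red, green, blue])   # stack three gray scale bands into one RGB image

print(rgb.shape)    # (100, 100, 3)
print(rgb.nbytes)   # 30000 bytes = 3 bytes (24 bits) per pixel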

CHARACTERISTICS OF DIGITAL IMAGE

1. Spatial resolution - it refers to variations in the reflectance/emittance determined by the
shape, size and texture of the target
2. Spectral resolution - it refers to changes in the reflectance or emittance as a function of
wavelength
3. Temporal resolution - it involves diurnal and/or seasonal changes in reflectance or
emittance
4. Radiometric resolution - it refers to the smallest change in intensity (brightness) level that
can be detected by the sensing system

1. Spatial Resolution

There are different definitions of spatial resolution but in a general and practical sense, it can be
referred to as the size of each pixel. It is commonly measured in units of distance, i.e. cm or m.
In other words, spatial resolution is a measure of the sensor’s ability to capture closely spaced
objects on the ground and their discrimination as separate objects. Spatial resolution of a data
depends on altitude of the platform used to record the data and sensor parameters. Relationship
of spatial resolution with altitude can be understood with the following example. You can
compare an astronaut on-board a space shuttle looking at the Earth to what he/she can see from
an airplane.

Fig- Spatial variations of remote sensing data. Note the variations in resolution from 1 km
till 1 m, in the series of photographs. The photograph taken from 1 km shows lesser details
as compared to that at 1m
Fig- Understanding concept of spatial resolution

2. Spectral Resolution

We all know that the Sun is a major source of electromagnetic radiation used in the optical
remote sensing. Different materials on the Earth’s surface exhibit different spectral reflectance
and emissivities. The differences (variations) in reflectance and emissivity are used to
distinguish features. However, the recorded spectral signature does not give continuous spectral
information; rather, it gives spectral information at some selected wavelengths. These
wavelength regions of observation are called spectral bands. These spectral
bands are defined in terms of a ‘central wavelength’ and a ‘band width’. The number and
dimension of specific wavelength intervals in the electromagnetic spectrum to which a remote
sensing instrument is sensitive is called spectral resolution.
Fig- Spectral variations of remote sensing data
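
A small sketch of the 'central wavelength plus band width' idea (the band names and values
below are hypothetical and are not taken from any particular sensor):

# Hypothetical spectral bands defined by a central wavelength and a band width (micrometers)
bands = {
    "green": {"center": 0.55, "width": 0.07},
    "red":   {"center": 0.66, "width": 0.06},
    "nir":   {"center": 0.86, "width": 0.10},
}

for name, b in bands.items():
    low  = b["center"] - b["width"] / 2
    high = b["center"] + b["width"] / 2
    print(f"{name}: {low:.3f} - {high:.3f} um")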

3. Radiometric Resolution

As the arrangement of pixels describes spatial structure of an image, the radiometric


characteristics describe actual information content in an image. The information content in an
image is determined by the number of digital levels (quantisation levels) used to express the data
collected by the sensors. In other words, a definite number of discrete quantisation levels are
used to record (digitise) the intensity of flow of radiation (radiant flux) reflected or emitted from
ground features. The smallest change in intensity level that can be detected by a sensing system
is called radiometric resolution. The quantisation levels are expressed as n binary bits, such as 7
bit, 8 bit, 10 bit, etc. 8-bit digitisation implies 2⁸ or 256 discrete levels (i.e. 0-255). Similarly,
7-bit digitisation implies 2⁷ or 128 discrete levels (i.e. 0-127).
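
A minimal sketch of how the number of quantisation levels follows from the bit depth (the
requantisation helper is an illustrative example, not a standard routine):

# Number of discrete levels for an n-bit sensor
def levels(n_bits):
    return 2 ** n_bits

print(levels(7), levels(8), levels(10))   # 128 256 1024

# Illustrative requantisation of an 8-bit digital number (0-255) to a coarser n-bit scale
def requantise(value_8bit, n_bits):
    return value_8bit * (levels(n_bits) - 1) // 255

print(requantise(200, 7))   # 99 on a 0-127 scale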

Fig- Images showing the effect of degrading the radiometric resolution

4. Temporal Resolution

In addition to spatial, spectral and radiometric resolution, it is also important to consider the
concept of temporal resolution in a remote sensing system. One of the advantages of remote
sensing is its ability to observe a part of the Earth (scene) at regular intervals. The interval at
which a given scene can be imaged is called temporal resolution. Temporal resolution is usually
expressed in days. For instance, IRS-1A has a temporal resolution of 22 days, meaning it can acquire
an image of a particular area at 22-day intervals. Low temporal resolution refers to
infrequent repeat coverage whereas high temporal resolution refers to frequent repeat coverage.
Temporal resolution is useful for agricultural application or natural disasters like flooding when
you would like to re-visit the same location within every few days. The requirement of temporal
resolution varies with different applications. For example, to monitor agricultural activity, image
interval of 10 days would be required, but intervals of one year would be appropriate to monitor
urban growth patterns.
Fig- Temporal variations of remote sensing data used to monitor changes in agriculture,
showing crop conditions in different months

Fig- Showing the importance of temporal resolution. View of the flood situation at
Brisbane, Australia (a) pre flood and (b) post flood

DATA FORMATS

Image data are rasters, stored in a rectangular matrix of rows and columns. Radiometric resolution determines
how many gradations of brightness can be stored for each cell (pixel) in the matrix; 8-bit resolution, where each
pixel contains an integer value from 0 to 255, is most common. Modern sensors often collect data at higher
resolution, and advanced image processing software can make use of these values for analysis. The human eye
cannot detect very small differences in brightness, and most GIS software can only read an 8-bit value.
In a greyscale image, 0 = black and 255 = white; and there is just one 8-bit value for each pixel. However, in
a natural color image, there is an 8-bit value for red, an 8-bit brightness value for green, and an 8-bit value for blue.
Therefore, each pixel in a color image requires 3 separate values to be stored in the file. There are three possible
ways to organize these values in a raster file.

• BIP - Band Interleaved by Pixel: The red value for the first pixel is written to the file, followed by the green
value for that pixel, followed by the blue value for that pixel, and so on for all the pixels in the image.
• BIL - Band Interleaved by Line: All of the red values for the first row of pixels are written to the file, followed
by all of the green values for that row, followed by all the blue values for that row, and so on for every row of
pixels in the image.
• BSQ - Band Sequential: All of the red values for the entire image are written to the file, followed by all of the
green values for the entire image, followed by all the blue values for the entire image.

BIP - Band Interleaved by Pixel:

Most digital data are stored on nine-track tape (800, 1600, and 6250 bpi), 4- or 8-mm tape, or on optical
disks. The nine-track and 4- or 8-mm tapes must be read serially, while it is possible to randomly select areas of
interest from within the optical disk. This may result in significant savings of time when unloading remote sensor
data. The 4- and 8-mm tapes and compact disks are very efficient storage media, as opposed to the large number
of nine-track tapes required to store most images (Jensen, 1996).

One of the earliest digital formats used for satellite data is the band interleaved by pixel (BIP) format. This
format treats the pixel as the separate storage unit: brightness values for each pixel are stored one after another,
so all four bands are written to the tape before the values for the next pixel are represented. Any given pixel
located on the tape contains values for all four bands written directly in sequence. Figure 2-3.1 shows the logic
of how the data are recorded to the computer tape in sequential values for a four-band image in BIP format. This
format is practical to use if all bands in an image are to be used, but may be awkward if only certain bands of the
imagery are needed. Often, data in BIP format are organized into four separate panels, or tiles, consisting of
vertical strips each 840 lines wide in the x direction and 2,342 lines long in the y direction. In order to read all
four bands of the image, all four panels must be pieced together to form the entire scene (Campbell, 1987).

BIL - Band Interleaved by Line:

Just as the BIP format treats each pixel of data as the separate unit, the band interleaved by line (BIL)
format stores the data by lines: each line is represented in all four bands before the next line is recorded.
Figure 2-3.2 shows the logic of how the data are recorded to the computer tape in sequential values for a
four-band image in BIL format. Like the BIP format, it is useful if all bands of the imagery are to be used in the
analysis. If some bands are not of interest, the format is inefficient when the data are on tape, since it is
necessary to read serially past the unwanted data.

BSQ - Band Sequential:

The band sequential (BSQ) format requires that all data for a single band covering the entire scene be
written as one file (see Fig. 2-3.3). Thus, if an analyst wanted to extract the area in the center of a scene in four
bands, it would be necessary to read into this location in four separate files to extract the desired information.
Many researchers like this format because it is not necessary to read serially past unwanted information if certain
bands are of no value, especially when the data are on a number of different tapes. Random-access optical disk
technology, however, makes this serial argument obsolete.
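
A compact NumPy sketch of the three byte orderings (a hypothetical 3-band image is used here;
this is a generic illustration of the layouts, not any particular sensor's file format):

import numpy as np

# Hypothetical 3-band image, 2 rows x 4 columns, held in memory as (band, row, col)
bands, rows, cols = 3, 2, 4
image = np.arange(bands * rows * cols, dtype=np.uint8).reshape(bands, rows, cols)

# BSQ - Band Sequential: band 1 for the whole image, then band 2, then band 3
bsq = image.tobytes()

# BIL - Band Interleaved by Line: for each row, band 1, band 2, band 3 of that row
bil = image.transpose(1, 0, 2).tobytes()    # (row, band, col)

# BIP - Band Interleaved by Pixel: for each pixel, its band 1, band 2, band 3 values
bip = image.transpose(1, 2, 0).tobytes()    # (row, col, band)

print(list(bsq[:8]))   # both rows of band 1
print(list(bil[:8]))   # row 1 of band 1, then row 1 of band 2
print(list(bip[:6]))   # pixel (1,1) in all three bands, then pixel (1,2)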
