What is Remote Sensing?

We perceive the surrounding world through our five senses. Some senses (touch and taste) require contact between our sensing organs and the objects. However, we acquire much information about our surroundings through the senses of sight and hearing, which do not require close contact between the sensing organs and the external objects. In other words, we are performing remote sensing all the time.

Generally, remote sensing refers to the activities of recording, observing and perceiving (sensing) objects or events at faraway (remote) places. In remote sensing, the sensors are not in direct contact with the objects or events being observed. The information needs a physical carrier to travel from the objects/events to the sensors through an intervening medium. Electromagnetic radiation is normally used as the information carrier in remote sensing. The output of a remote sensing system is usually an image representing the scene being observed. A further step of image analysis and interpretation is required in order to extract useful information from the image. The human visual system is an example of a remote sensing system in this general sense.

In a more restricted sense, remote sensing usually refers to the technology of acquiring information about the earth's surface (land and ocean) and atmosphere using sensors onboard airborne (aircraft, balloons) or spaceborne (satellites, space shuttles) platforms.

Satellite Remote Sensing
In this CD, you will see many remote sensing images around Asia acquired by earth observation satellites. These remote sensing satellites are equipped with sensors that look down at the earth. They are the "eyes in the sky" constantly observing the earth as they go around in predictable orbits.

Effects of Atmosphere
In satellite remote sensing of the earth, the sensors are looking through a layer of atmosphere separating the sensors from the Earth's surface being observed. Hence, it is essential to understand the effects of the atmosphere on the electromagnetic radiation travelling from the Earth to the sensor through the atmosphere. The atmospheric constituents cause wavelength dependent absorption and scattering of radiation, and these effects degrade the quality of images. Some of the atmospheric effects can be corrected before the images are subjected to further analysis and interpretation. A consequence of atmospheric absorption is that certain wavelength bands in the electromagnetic spectrum are strongly absorbed and effectively blocked by the atmosphere. The wavelength regions usable for remote sensing are determined by their ability to penetrate the atmosphere. These regions are known as the atmospheric transmission windows, and remote sensing systems are often designed to operate within one or more of them. These windows exist in the microwave region, in some wavelength bands in the infrared, in the entire visible region and in part of the near ultraviolet region. Although the atmosphere is practically transparent to X-rays and gamma rays, these radiations are not normally used in remote sensing of the earth.

Optical and Infrared Remote Sensing
In Optical Remote Sensing, optical sensors detect solar radiation reflected or scattered from the earth, forming images resembling photographs taken by a camera high up in space. The wavelength region usually extends from the visible and near infrared (commonly abbreviated as VNIR) to the short-wave infrared (SWIR).

Different materials such as water, soil, vegetation, buildings and roads reflect visible and infrared light in different ways. They have different colours and brightness when seen under the sun. The interpretation of optical images therefore requires knowledge of the spectral reflectance signatures of the various materials (natural or man-made) covering the surface of the earth.

There are also infrared sensors measuring the thermal infrared radiation emitted from the earth, from which the land or sea surface temperature can be derived.

Microwave Remote Sensing
There are some remote sensing satellites which carry passive or active microwave sensors. The active sensors emit pulses of microwave radiation to illuminate the areas to be imaged. Images of the earth surface are formed by measuring the microwave energy scattered by the ground or sea back to the sensors. These satellites carry their own "flashlight" emitting microwaves to illuminate their targets. The images can thus be acquired day and night. Microwaves have an additional advantage as they can penetrate clouds; images can be acquired even when there are clouds covering the earth surface. A microwave imaging system which can produce high resolution images of the Earth is the synthetic aperture radar (SAR). The intensity in a SAR image depends on the amount of microwave backscattered by the target and received by the SAR antenna. Since the physical mechanisms responsible for this backscatter are different for microwave radiation, compared to visible/infrared radiation, the interpretation of SAR images requires knowledge of how microwaves interact with the targets.

Remote Sensing Images
Remote sensing images are normally in the form of digital images. In order to extract useful information from the images, image processing techniques may be employed to enhance the image to help visual interpretation, and to correct or restore the image if it has been subjected to geometric distortion, blurring or degradation by other factors. There are many image analysis techniques available, and the methods used depend on the requirements of the specific problem concerned. In many cases, image segmentation and classification algorithms are used to delineate different areas in an image into thematic classes. The resulting product is a thematic map of the study area. This thematic map can be combined with other databases of the test area for further analysis and utilization.

Remote Sensing in the General Sense
The human visual system is an example of a remote sensing system in the general sense. The sensors in this example are the two types of photosensitive cells, known as the cones and the rods, at the retina of the eyes. The cones are responsible for colour vision. There are three types of cones, each being sensitive to one of the red, green and blue regions of the visible spectrum. Thus, it is not coincidental that modern computer display monitors make use of the same three primary colours to generate a multitude of colours for displaying colour images. The cones are insensitive under low light illumination conditions, when their jobs are taken over by the rods. The rods are sensitive only to the total light intensity. Hence, everything appears in shades of grey when there is insufficient light.

As the objects/events being observed are located far away from the eyes, the information needs a carrier to travel from the objects to the eyes. In this case, the information carrier is the visible light. The objects reflect or scatter a part of the ambient light falling onto them. Part of the scattered light is intercepted by the eyes, forming an image on the retina after passing through the optical system of the eyes. The signals generated at the retina are carried via the nerve fibres to the brain, the central processing unit (CPU) of the visual system. These signals are processed and interpreted at the brain, with the aid of previous experiences.

In this case, the visual system is an example of a "Passive Remote Sensing" system which depends on an external source of energy to operate. We all know that this system won't work in darkness. However, we can still see at night if we provide our own source of illumination by carrying a flashlight and shining the beam towards the object we want to observe. When operating in this mode, we are performing "Active Remote Sensing", by supplying our own source of energy for illuminating the objects.

The Planet Earth
The planet Earth is the third planet in the solar system, located at a mean distance of about 1.50 x 10^8 km from the sun, with a mass of 5.97 x 10^24 kg. Descriptions of the shape of the earth have evolved from the flat earth model and the spherical model to the currently accepted ellipsoidal model

derived from accurate ground surveying and satellite measurements. A number of reference ellipsoids have been defined for use in identifying the three dimensional coordinates (i.e. position in space) of a point on or above the earth surface for the purposes of surveying, mapping and navigation. The reference ellipsoid of the World Geodetic System 1984 (WGS-84), commonly used in the satellite Global Positioning System (GPS), has the following parameters:

- Equatorial Radius = 6378.1370 km
- Polar Radius = 6356.7523 km

The earth's crust is the outermost layer of the earth's land surface. About 29.1% of the earth's crust area is above sea level. The rest is covered by water.

Atmosphere
The Earth's Atmosphere
The earth's surface is enveloped by a layer of atmosphere consisting of a mixture of gases and other solid and liquid particles. The gaseous materials extend to several hundred kilometers in altitude, though there is no well defined boundary for the upper limit of the atmosphere. The first 80 km of the atmosphere contains more than 99% of the total mass of the earth's atmosphere.

Vertical Structure of the Atmosphere
The vertical profile of the atmosphere is divided into four layers: troposphere, stratosphere, mesosphere and thermosphere. The tops of these layers are known as the tropopause, stratopause, mesopause and thermopause, respectively.

- Troposphere: This layer is characterized by a decrease in temperature with respect to height, at a rate of about 6.5ºC per kilometer, up to a height of about 10 km. All the weather activities (water vapour, clouds, precipitation) are confined to this layer.
- Stratosphere: The temperature at the lower 20 km of the stratosphere is approximately constant, after which the temperature increases with height, up to an altitude of about 50 km. Ozone exists mainly at the stratopause.
- Mesosphere: The temperature decreases in this layer from an altitude of about 50 km to 85 km.
- Thermosphere: This layer extends from about 85 km upward to several hundred kilometers. The gases exist mainly in the form of a thin plasma, i.e. they are ionized due to bombardment by solar ultraviolet radiation and energetic cosmic rays. The temperature may range from 500 K to 2000 K.

The troposphere and the stratosphere together account for more than 99% of the total mass of the atmosphere. The term upper atmosphere usually refers to the region of the atmosphere above the troposphere. A layer of aerosol particles normally exists near to the earth surface, with a characteristic height of about 2 km. The aerosol concentration decreases nearly exponentially with height. Many remote sensing satellites follow near polar sun-synchronous orbits at a height of around 800 km, which is well above the thermopause.


Atmospheric Constituents
The atmosphere consists of the following components:

- Permanent Gases: These are gases present in nearly constant concentration, with little spatial variation. About 78% by volume of the atmosphere is nitrogen, while the life sustaining oxygen occupies 21%. The remaining one percent consists of the inert gases, carbon dioxide and other gases.
- Gases with Variable Concentration: The concentration of these gases may vary greatly over space and time. They consist of water vapour, ozone, nitrogeneous and sulphurous compounds.
- Solid and Liquid Particulates: Other than the gases, the atmosphere also contains solid and liquid particles such as aerosols, water droplets and ice crystals. These particles may congregate to form clouds and haze.

Electromagnetic Radiation
Electromagnetic Waves
Electromagnetic waves are energy transported through space in the form of periodic disturbances of electric and magnetic fields. All electromagnetic waves travel through space at

the same speed, commonly known as the speed of light, c = 2.99792458 x 10^8 m/s. An electromagnetic wave is characterized by a frequency and a wavelength. These two quantities are related to the speed of light by the equation:

speed of light = frequency x wavelength

The frequency (and hence the wavelength) of an electromagnetic wave depends on its source. There is a wide range of frequencies encountered in our physical world, ranging from the low frequency of the electric waves generated by power transmission lines to the very high frequency of the gamma rays originating from atomic nuclei. This wide frequency range of electromagnetic waves constitutes the Electromagnetic Spectrum. (A short worked example using this relation appears after the Photons section below.)

The Electromagnetic Spectrum
The electromagnetic spectrum can be divided into several wavelength (frequency) regions, among which only a narrow band from about 400 to 700 nm is visible to the human eyes. Note that there is no sharp boundary between these regions; the boundaries are approximate and there are overlaps between adjacent regions. Wavelength units: 1 mm = 1000 µm; 1 µm = 1000 nm.

- Radio Waves: 10 cm to 10 km wavelength.
- Microwaves: 1 mm to 1 m wavelength. The microwaves are further divided into different frequency (wavelength) bands (1 GHz = 10^9 Hz):
  o P band: 0.3 - 1 GHz (30 - 100 cm)
  o L band: 1 - 2 GHz (15 - 30 cm)
  o S band: 2 - 4 GHz (7.5 - 15 cm)
  o C band: 4 - 8 GHz (3.8 - 7.5 cm)
  o X band: 8 - 12.5 GHz (2.4 - 3.8 cm)
  o Ku band: 12.5 - 18 GHz (1.7 - 2.4 cm)
  o K band: 18 - 26.5 GHz (1.1 - 1.7 cm)
  o Ka band: 26.5 - 40 GHz (0.75 - 1.1 cm)

- Infrared: 0.7 to 300 µm wavelength. This region is further divided into the following bands:
  o Near Infrared (NIR): 0.7 to 1.5 µm
  o Short Wavelength Infrared (SWIR): 1.5 to 3 µm
  o Mid Wavelength Infrared (MWIR): 3 to 8 µm
  o Long Wavelength Infrared (LWIR): 8 to 15 µm
  o Far Infrared (FIR): longer than 15 µm
  The NIR and SWIR are also known as the Reflected Infrared, referring to the main infrared component of the solar radiation reflected from the earth's surface. The MWIR and LWIR are the Thermal Infrared.
- Visible Light: This narrow band of electromagnetic radiation extends from about 400 nm (violet) to about 700 nm (red). The various colour components of the visible spectrum fall roughly within the following wavelength regions:
  o Red: 610 - 700 nm
  o Orange: 590 - 610 nm
  o Yellow: 570 - 590 nm
  o Green: 500 - 570 nm
  o Blue: 450 - 500 nm
  o Indigo: 430 - 450 nm
  o Violet: 400 - 430 nm
- Ultraviolet: 3 to 400 nm
- X-Rays and Gamma Rays

Photons
According to quantum physics, the energy of an electromagnetic wave is quantized, i.e. it can only exist in discrete amounts. The basic unit of energy for an electromagnetic wave is called a photon. The energy E of a photon is proportional to the wave frequency f:

E = h f

where the constant of proportionality h is Planck's Constant, h = 6.626 x 10^-34 J s.
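As a quick worked example (not part of the original tutorial), the sketch below applies the two relations above, wavelength = c / f and E = h f, using the standard constants already quoted; the frequencies chosen are the X-band limits from the microwave band listing.

```python
# Illustrative sketch: applying wavelength = c / f and E = h * f
# with the standard constants quoted in the text.
C = 2.99792458e8   # speed of light, m/s
H = 6.626e-34      # Planck's constant, J s

for f_ghz in (8.0, 12.5):              # the quoted X-band frequency limits
    f_hz = f_ghz * 1e9
    wavelength_cm = C / f_hz * 100.0
    energy_j = H * f_hz
    print(f"{f_ghz} GHz -> {wavelength_cm:.1f} cm, photon energy {energy_j:.2e} J")
```

The computed wavelengths (about 3.7 and 2.4 cm) agree, to rounding, with the X-band wavelength range quoted above.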

Atmospheric Effects
Effects of Atmosphere
When electromagnetic radiation travels through the atmosphere, it may be absorbed or scattered by the constituent particles of the atmosphere. Molecular absorption converts the radiation energy into excitation energy of the molecules. Scattering redistributes the energy of the incident beam to all directions. The overall effect is the removal of energy from the incident radiation. The various effects of absorption and scattering are outlined in the following sections.

Atmospheric Transmission Windows
Each type of molecule has its own set of absorption bands in various parts of the electromagnetic spectrum. As a result, only the wavelength regions outside the main absorption bands of the atmospheric gases can be used for remote sensing. These regions are known as the Atmospheric Transmission Windows. The windows are found in the visible, near-infrared, certain bands in the thermal infrared, and the microwave regions. The wavelength bands used in remote sensing systems are usually designed to fall within these windows to minimize the atmospheric absorption effects.

Effects of Atmospheric Absorption on Remote Sensing Images
Atmospheric absorption affects mainly the visible and infrared bands. Optical remote sensing depends on solar radiation as the source of illumination. Absorption reduces the solar radiance within the absorption bands of the atmospheric gases. The reflected radiance is also attenuated after passing through the atmosphere. This attenuation is wavelength dependent.

Hence, atmospheric absorption will alter the apparent spectral signature of the target being observed.

Effects of Atmospheric Scattering on Remote Sensing Images
Atmospheric scattering is important only in the visible and near infrared regions. Scattering of radiation by the constituent gases and aerosols in the atmosphere causes degradation of the remotely sensed images. Most noticeably, the solar radiation scattered by the atmosphere towards the sensor without first reaching the ground produces a hazy appearance of the image. This effect is particularly severe in the blue end of the visible spectrum due to the stronger Rayleigh scattering of shorter wavelength radiation. Furthermore, the light from a target outside the field of view of the sensor may be scattered into the field of view of the sensor. This effect is known as the adjacency effect. Near the boundary between two regions of different brightness, the adjacency effect results in an increase in the apparent brightness of the darker region, while the apparent brightness of the brighter region is reduced. Scattering also produces blurring of the targets in remotely sensed images due to spreading of the reflected radiation, resulting in a reduced resolution image.

Absorption of Radiation
Absorption by Gaseous Molecules
The energy of a gaseous molecule can exist in various forms:

- Translational Energy: Energy due to translational motion of the centre of mass of the molecule. The average translational kinetic energy of a molecule is 3kT/2 (kT/2 per degree of freedom), where k is the Boltzmann constant and T is the absolute temperature of the gas.
- Rotational Energy: Energy due to rotation of the molecule about an axis through its centre of mass.
- Vibrational Energy: Energy due to vibration of the component atoms of a molecule about their equilibrium positions. This vibration is associated with stretching of the chemical bonds between the atoms.
- Electronic Energy: Energy due to the energy states of the electrons of the molecule.

The last three forms are quantized, i.e. the energy can change only in discrete amounts, corresponding to transitions between discrete energy levels. A photon of electromagnetic radiation can be absorbed by a molecule when the photon energy matches one of the available transition energies.

Ultraviolet Absorption
Absorption of ultraviolet (UV) radiation in the atmosphere is chiefly due to electronic transitions of atomic and molecular oxygen and nitrogen. Due to the ultraviolet absorption, some of the oxygen and nitrogen molecules in the upper atmosphere undergo photochemical dissociation to become atomic oxygen and nitrogen.

These atoms play an important role in the absorption of solar ultraviolet radiation in the thermosphere. The photochemical dissociation of oxygen is also responsible for the formation of the ozone layer in the stratosphere.

Ozone Layers
Ozone in the stratosphere absorbs about 99% of the harmful solar UV radiation shorter than 320 nm. Ozone is formed in three-body collisions of atomic oxygen (O) with molecular oxygen (O2) in the presence of a third atom or molecule. The ozone molecules also undergo photochemical dissociation to atomic O and molecular O2. When the formation and dissociation processes are in equilibrium, ozone exists at a constant concentration level. However, the existence of certain atoms (such as atomic chlorine) will catalyse the dissociation of O3 back to O2, and the ozone concentration will decrease. In recent years, increasing use of fluorocarbon compounds in aerosol sprays and refrigerants has resulted in the release of atomic chlorine into the upper atmosphere through photochemical dissociation of the fluorocarbon compounds, contributing to the depletion of the ozone layers. It has been observed by measurements from space platforms that the ozone layers are depleting over time, causing a small increase in the solar ultraviolet radiation reaching the earth.

Visible Region
There is little absorption of the electromagnetic radiation in the visible part of the spectrum.

Infrared Absorption
The absorption in the infrared (IR) region is mainly due to rotational and vibrational transitions of the molecules. The main atmospheric constituents responsible for infrared absorption are water vapour (H2O) and carbon dioxide (CO2) molecules. The water and carbon dioxide molecules have absorption bands centred at wavelengths from the near to the long wave infrared (0.7 to 15 µm). In the far infrared region, most of the radiation is absorbed by the atmosphere.

Microwave Region
The atmosphere is practically transparent to microwave radiation.

Scattering of Electromagnetic Radiation by the Atmosphere
Scattering of electromagnetic radiation is caused by the interaction of radiation with matter, resulting in the reradiation of part of the energy to other directions not along the path of the incident radiation. Scattering effectively removes energy from the incident beam. Unlike absorption, this energy is not lost; it is redistributed to other directions.

Both the gaseous and aerosol components of the atmosphere cause scattering of radiation in the atmosphere.

Scattering by Gaseous Molecules
The law of scattering by air molecules was discovered by Rayleigh in 1871, and hence this type of scattering is named Rayleigh Scattering. Rayleigh scattering occurs when the size of the particle responsible for the scattering event is much smaller than the wavelength of the radiation. The scattered light intensity is inversely proportional to the fourth power of the wavelength. Hence, blue light is scattered more than red light. This phenomenon explains why the sky is blue and why the setting sun is red. The scattered light intensity in Rayleigh scattering for unpolarized light is proportional to (1 + cos²θ), where θ is the scattering angle, i.e. the angle between the directions of the incident and scattered rays.

Scattering by Aerosols
Scattering by aerosol particles depends on the shapes, sizes and materials of the particles. In general, for irregular particles, the calculation can become very complicated. The scattering intensity and its angular distribution may be calculated numerically for a spherical particle. If the size of the particle is similar to or larger than the radiation wavelength, the scattering is named Mie Scattering. The scattered radiation in Mie scattering is mainly confined within a small angle about the forward direction; the radiation is said to be very strongly forward scattered.
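The wavelength dependence quoted above can be made concrete with a small sketch. The 1/wavelength^4 law and the (1 + cos²θ) factor come directly from the text; the blue and red wavelengths are illustrative choices.

```python
import math

# Illustrative sketch: relative Rayleigh scattering strength, combining the
# 1/wavelength**4 law and the (1 + cos^2 theta) angular factor quoted above.
def rayleigh_relative(wavelength_nm, theta_deg=0.0):
    angular = 1.0 + math.cos(math.radians(theta_deg)) ** 2
    return angular / wavelength_nm ** 4

ratio = rayleigh_relative(450.0) / rayleigh_relative(650.0)
print(f"blue (450 nm) is scattered ~{ratio:.1f}x more than red (650 nm)")  # ~4.4x
```

At equal scattering angles the angular factor cancels, leaving the factor-of-four-plus enhancement of blue over red that makes the sky blue.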

Airborne Remote Sensing
In airborne remote sensing, downward or sideward looking sensors are mounted on an aircraft to obtain images of the earth's surface. An advantage of airborne remote sensing, compared to satellite remote sensing, is the capability of offering very high spatial resolution images (20 cm or less). The disadvantages are low coverage area and high cost per unit area of ground coverage. It is not cost-effective to map a large area using an airborne remote sensing system. Airborne remote sensing missions are often carried out as one-time operations, whereas earth observation satellites offer the possibility of continuous monitoring of the earth.

Analog aerial photography, videography and digital photography are commonly used in airborne remote sensing. Analog photography is capable of providing high spatial resolution. The interpretation of analog aerial photographs is usually done visually by experienced analysts. The photographs may be digitized using a scanning device for computer-assisted analysis. Digital photography permits real-time transmission of the remotely sensed data to a ground station for immediate analysis, and the digital images can be analysed and interpreted with the aid of a computer. Synthetic aperture radar imaging is also carried out on airborne platforms.

A high resolution aerial photograph over a forested area. The canopy of each individual tree can be clearly seen. This type of very high resolution imagery is useful in identifying tree types and in assessing the condition of the trees.

Another example of a high resolution aerial photograph, over a residential area.

Spaceborne Remote Sensing

Earth observation satellites: IKONOS 2; SPOT 1, 2 and 4; EROS A1; TERRA; OrbView 2 (SeaStar); NOAA 12, 14 and 16; ERS 1 and 2; RADARSAT 1.

The receiving ground station at CRISP receives data from these satellites.

In spaceborne remote sensing, sensors are mounted on board a spacecraft (space shuttle or satellite) orbiting the earth. At present, there are several remote sensing satellites providing imagery for research and operational applications. Satellite imagery has a generally lower resolution compared to aerial photography. However, very high resolution imagery (up to 1-m resolution) is now commercially available to civilian users, following the successful launch of the IKONOS-2 satellite on 24 September 1999.

Spaceborne remote sensing provides the following advantages:

- Large area coverage
- Frequent and repetitive coverage of an area of interest
- Quantitative measurement of ground features using radiometrically calibrated sensors
- Semiautomated computerised processing and analysis
- Relatively lower cost per unit area of coverage

Satellite Orbits
A satellite follows a generally elliptical orbit around the earth. The time taken to complete one revolution of the orbit is called the orbital period. The satellite traces out a path on the earth surface, called its ground track, as it moves across the sky. As the earth below is rotating, the satellite traces out a different path on the ground in each subsequent cycle. Remote sensing satellites are often launched into special orbits such that the satellite repeats its path after a fixed time interval. This time interval is called the repeat cycle of the satellite.

Geostationary Orbits
If a satellite follows an orbit parallel to the equator, in the same direction as the earth's rotation and with the same period of 24 hours, the satellite appears stationary with respect to the earth surface. This orbit is a geostationary orbit.
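As a hedged illustration of the link between altitude and period, the sketch below applies Kepler's third law for a circular orbit, T = 2π√(a³/GM). The values of GM and the Earth radius are standard constants, not figures taken from this tutorial.

```python
import math

# Illustrative sketch: orbital period of a circular orbit from Kepler's
# third law, T = 2*pi*sqrt(a**3 / GM).
GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.378e6        # Earth's equatorial radius, m

def orbital_period_hours(altitude_km):
    a = R_EARTH + altitude_km * 1000.0   # orbit radius (semi-major axis)
    return 2.0 * math.pi * math.sqrt(a ** 3 / GM) / 3600.0

print(f"{orbital_period_hours(36000):.1f} h")  # ~24.1 h: geostationary altitude
print(f"{orbital_period_hours(800):.2f} h")    # ~1.68 h: typical imaging orbit
```

The roughly 24-hour result at 36,000 km altitude is why that particular altitude, mentioned below, is used for geostationary satellites.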

Satellites in geostationary orbits are located at a high altitude of 36,000 km. These orbits enable a satellite to always view the same area on the earth, and a large area of the earth can be covered by the satellite. The geostationary orbits are commonly used by meteorological satellites.

Near Polar Orbits
A near polar orbit is one with the orbital plane inclined at a small angle with respect to the earth's rotation axis. A satellite following a properly designed near polar orbit passes close to the poles and is able to cover nearly the whole earth surface in a repeat cycle.

Sun Synchronous Orbits
A near-polar sun synchronous orbit.

Earth observation satellites usually follow sun synchronous orbits. A sun synchronous orbit is a near-polar orbit whose altitude is such that the satellite will always pass over a location at a given latitude at the same local solar time. In this way, the same solar illumination condition (except for seasonal variation) can be achieved for the images of a given location taken by the satellite.

Remote Sensing Satellites
Several remote sensing satellites are currently available, providing imagery suitable for various types of applications. Each of these satellite-sensor platforms is characterised by the wavelength bands employed in image acquisition, the spatial resolution of the sensor, the coverage area and the temporal coverage, i.e. how frequently a given location on the earth surface can be imaged.

In terms of spatial resolution, the satellite imaging systems can be classified into:

- Low resolution systems (approx. 1 km or more)
- Medium resolution systems (approx. 100 m to 1 km)
- High resolution systems (approx. 5 m to 100 m)
- Very high resolution systems (approx. 5 m or less)

In terms of the spectral regions used in data acquisition, the satellite imaging systems can be classified into:

- Optical imaging systems (including visible, near infrared, and shortwave infrared systems)
- Thermal imaging systems
- Synthetic aperture radar (SAR) imaging systems

Optical/thermal imaging systems can be classified according to the number of spectral bands used:

- Monospectral or panchromatic (single wavelength band, "black-and-white", grey-scale image) systems
- Multispectral (several spectral bands) systems
- Superspectral (tens of spectral bands) systems
- Hyperspectral (hundreds of spectral bands) systems

Synthetic aperture radar imaging systems can be classified according to the combination of frequency bands and polarization modes used in data acquisition, e.g.:

- Single frequency (L-band, or C-band, or X-band)
- Multiple frequency (combination of two or more frequency bands)
- Single polarization (VV, or HH, or HV)
- Multiple polarization (combination of two or more polarization modes)

Descriptions of some of the operational and planned remote sensing satellite platforms and sensors are provided in the appendix of this tutorial.

Digital Image
Analog and Digital Images
An image is a two-dimensional representation of objects in a real scene. Remote sensing images are representations of parts of the earth surface as seen from space. The images may be analog or digital. Aerial photographs are examples of analog images, while satellite images acquired using electronic sensors are examples of digital images.

A digital image is a two-dimensional array of pixels. Each pixel has an intensity value (represented by a digital number) and a location address (referenced by its row and column numbers).

Pixels
A digital image comprises a two-dimensional array of individual picture elements called pixels, arranged in columns and rows. Each pixel represents an area on the Earth's surface. A pixel has an intensity value and a location address in the two-dimensional image.

The intensity value represents the measured physical quantity, such as the solar radiance in a given wavelength band reflected from the ground, emitted infrared radiation or backscattered radar intensity. This value is normally the average value for the whole ground area covered by the pixel. The intensity of a pixel is digitised and recorded as a digital number. Due to the finite storage capacity, a digital number is stored with a finite number of bits (binary digits). The number of bits determines the radiometric resolution of the image. For example, an 8-bit digital number ranges from 0 to 255 (i.e. 2^8 - 1), while an 11-bit digital number ranges from 0 to 2047. The detected intensity value needs to be scaled and quantized to fit within this range of values. In a radiometrically calibrated image, the actual intensity value can be derived from the pixel digital number.

The address of a pixel is denoted by its row and column coordinates in the two-dimensional image. There is a one-to-one correspondence between the column-row address of a pixel and the geographical coordinates (e.g. longitude, latitude) of the imaged location. In order to be useful, the exact geographical location of each pixel on the ground must be derivable from its row and column indices, given the imaging geometry and the satellite orbit parameters.
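A minimal sketch of the scaling-and-quantization step described above. The radiance range (0 to 300 units) is hypothetical; a real sensor's radiometric calibration would define it.

```python
# Illustrative sketch: quantizing a measured radiance into a digital number.
def to_digital_number(radiance, r_min=0.0, r_max=300.0, bits=8):
    levels = 2 ** bits - 1                        # 255 for 8 bits, 2047 for 11
    fraction = (radiance - r_min) / (r_max - r_min)
    return round(min(1.0, max(0.0, fraction)) * levels)

print(to_digital_number(150.0))            # mid-range radiance -> 128 at 8 bits
print(to_digital_number(150.0, bits=11))   # same radiance -> 1024 at 11 bits
```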

A "Push-Broom" Scanner: This type of imaging system is commonly used in optical remote sensing satellites such as SPOT. The imaging system has a linear detector array (usually of the CCD type) consisting of a number of detector elements (6000 elements in the SPOT HRV). Each detector element projects an "instantaneous field of view (IFOV)" on the ground. The signal recorded by a detector element is proportional to the total radiation collected within its IFOV. At any instant, a row of pixels is formed. As the detector array flies along its track, the row of pixels sweeps along to generate a two-dimensional image.

Multilayer Image
Several types of measurements may be made over the ground area covered by a single pixel. Each type of measurement forms an image which carries some specific information about the area. By "stacking" these images from the same area together, a multilayer image is formed. Each component image is a layer in the multilayer image. Multilayer images can also be formed by combining images obtained from different sensors, and other subsidiary data. For example, a multilayer image may consist of three layers from a SPOT multispectral image, a layer of ERS synthetic aperture radar imagery, and perhaps a layer consisting of the digital elevation map of the area being studied.

An illustration of a multilayer image consisting of five component layers.

Multispectral Image
A multispectral image consists of a few image layers, each of which represents an image acquired at a particular wavelength band. For example, the SPOT HRV sensor operating in the multispectral mode detects radiation in three wavelength bands: the green (500 - 590 nm), red (610 - 680 nm) and near infrared (790 - 890 nm) bands. A single SPOT multispectral scene consists of three intensity images in the three wavelength bands. In this case, each pixel of the scene has three intensity values corresponding to the three bands. A multispectral IKONOS image consists of four bands: blue, green, red and near infrared, while a Landsat TM multispectral image consists of seven bands: the blue, green, red and near-IR bands, two SWIR bands, and a thermal IR band.

Superspectral Image
The more recent satellite sensors are capable of acquiring images at many more wavelength bands. For example, the MODIS sensor on board NASA's TERRA satellite acquires images in 36 spectral bands, covering wavelength regions ranging from the visible and near infrared through the shortwave infrared to the thermal infrared. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. The term "superspectral" has been coined to describe such sensors.

Hyperspectral Image
A hyperspectral image consists of about a hundred or more contiguous spectral bands, so that the characteristic spectrum of each target pixel is acquired. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplanktons, pollution and bathymetry changes).

Currently, hyperspectral imagery is not commercially available from satellites. There are experimental satellite sensors that acquire hyperspectral imagery for scientific investigation (e.g. NASA's Hyperion sensor on board the EO-1 satellite, and the CHRIS sensor on board ESA's PROBA satellite).

An illustration of a hyperspectral image cube. The hyperspectral image data usually consist of over a hundred contiguous spectral bands, forming a three-dimensional (two spatial dimensions and one spectral dimension) image cube. Each pixel is associated with a complete spectrum of the imaged area. The high spectral resolution of hyperspectral images enables better identification of land covers.
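The tutorial does not prescribe a particular identification method, but one simple and commonly used measure can be sketched: the spectral angle between a pixel spectrum and a reference spectrum, each treated as a vector. All spectra shown here are hypothetical 5-band examples.

```python
import math

# Illustrative sketch: comparing a pixel spectrum against a reference
# spectrum using the spectral angle between the two vectors.
def spectral_angle(s1, s2):
    dot = sum(a * b for a, b in zip(s1, s2))
    norm1 = math.sqrt(sum(a * a for a in s1))
    norm2 = math.sqrt(sum(b * b for b in s2))
    return math.acos(dot / (norm1 * norm2))   # radians; smaller = more similar

pixel = [0.05, 0.08, 0.06, 0.48, 0.30]        # spectrum of an unknown pixel
vegetation = [0.04, 0.09, 0.05, 0.50, 0.28]   # reference vegetation spectrum
print(f"{spectral_angle(pixel, vegetation):.3f} rad")  # small angle: likely vegetation
```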

Spatial Resolution
Spatial resolution refers to the size of the smallest object that can be resolved on the ground. In a digital image, the resolution is limited by the pixel size, i.e. the smallest resolvable object cannot be smaller than the pixel size. The intrinsic resolution of an imaging system is determined primarily by the instantaneous field of view (IFOV) of the sensor, which is a measure of the ground area viewed by a single detector element in a given instant in time. However, this intrinsic resolution can often be degraded by other factors which introduce blurring of the image, such as improper focusing, atmospheric scattering and target motion. The pixel size is determined by the sampling distance. A "high resolution" image refers to one with a small resolution size; fine details can be seen in a high resolution image. On the other hand, a "low resolution" image is one with a large resolution size, i.e. only coarse features can be observed in the image.
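For a sensor looking straight down, the ground area viewed by a single detector element is approximately the altitude multiplied by the IFOV in radians. A minimal sketch, with hypothetical altitude and IFOV values:

```python
# Illustrative sketch: ground footprint of one detector element, approximated
# as altitude x IFOV (in radians) for a nadir-looking sensor.
def ground_footprint_m(altitude_km, ifov_microrad):
    return altitude_km * 1000.0 * ifov_microrad * 1e-6

print(f"{ground_footprint_m(800, 12.5):.0f} m")  # ~10 m footprint
```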

A low resolution MODIS scene with a wide coverage. This image was received by CRISP's ground station on 3 March 2001. The intrinsic resolution of the image is approximately 1 km, but the image shown here has been resampled to a resolution of about 4 km. The coverage is more than 1000 km from east to west. A large part of Indochina, Peninsular Malaysia, Singapore and Sumatra can be seen in the image.

A browse image of a high resolution SPOT scene. The multispectral SPOT scene has a resolution of 20 m and covers an area of 60 km by 60 km. This scene shows Singapore and part of the Johor State of Malaysia. The browse image has been resampled to 120 m pixel size, and hence the resolution has been reduced.

Part of a high resolution SPOT scene shown at the full resolution of 20 m. The image shown here covers an area of approximately 4.8 km by 3.6 km. At this resolution, roads, vegetation and blocks of buildings can be seen.

Part of a very high resolution image acquired by the IKONOS satellite. The effective resolution of the image is 1 m. This true-colour image was obtained by merging a 4-m multispectral image with a 1-m panchromatic image of the same area acquired simultaneously. The image shown here covers an area of about 400 m by 400 m. At this resolution, details of buildings, individual trees, vehicles, shadows and roads can be seen. A full scene of an IKONOS image has a coverage area of about 10 km by 10 km; a very high spatial resolution image usually has a smaller area of coverage.

Spatial Resolution and Pixel Size
The terms image resolution and pixel size are often used interchangeably. In reality, they are not equivalent: an image sampled at a small pixel size does not necessarily have a high resolution. The following three images illustrate this point. The first image is a SPOT image of 10 m pixel size, derived by merging a SPOT panchromatic image of 10 m resolution with a SPOT multispectral image of 20 m resolution. The merging procedure "colours" the panchromatic image using the colours derived from the multispectral image, so the effective resolution is determined by the resolution of the panchromatic image, which is 10 m. The next two images are blurred versions of the first image, further processed to degrade the resolution while still being digitized at the same pixel size of 10 m. Even though they have the same pixel size as the first image, they do not have the same resolution.

10 m resolution, 10 m pixel size
30 m resolution, 10 m pixel size
80 m resolution, 10 m pixel size

The following images illustrate the effect of pixel size on the visual appearance of an area. The first image is a SPOT image of 10 m pixel size. The subsequent images show the effects of digitizing the same area with larger pixel sizes.

Pixel Size = 10 m: Image Width = 160 pixels, Height = 160 pixels
Pixel Size = 20 m: Image Width = 80 pixels, Height = 80 pixels
Pixel Size = 40 m: Image Width = 40 pixels, Height = 40 pixels
Pixel Size = 80 m: Image Width = 20 pixels, Height = 20 pixels

Radiometric Resolution
Radiometric resolution refers to the smallest change in intensity level that can be detected by the sensing system. The intrinsic radiometric resolution of a sensing system depends on the signal to noise ratio of the detector. In a digital image, the radiometric resolution is limited by the number of discrete quantization levels used to digitize the continuous intensity value. The following images illustrate the effects of the number of quantization levels on the digital image. The first image is a SPOT panchromatic image quantized at 8 bits (i.e. 256 levels) per pixel. The subsequent images show the effects of degrading the radiometric resolution by using fewer quantization levels.

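One way such comparison images can be produced is by requantizing the original 8-bit digital numbers onto fewer levels, as in this illustrative sketch:

```python
# Illustrative sketch: requantizing an 8-bit digital number (0-255) onto a
# smaller number of quantization levels.
def requantize(dn_8bit, bits):
    step = 256 // (2 ** bits)      # e.g. 64 for 2-bit (4-level) quantization
    return (dn_8bit // step) * step

print(requantize(200, 2))  # 4 levels  -> 192
print(requantize(200, 4))  # 16 levels -> 192
print(requantize(200, 6))  # 64 levels -> 200
```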
8-bit quantization (256 levels)
6-bit quantization (64 levels)
4-bit quantization (16 levels)
3-bit quantization (8 levels)
2-bit quantization (4 levels)
1-bit quantization (2 levels)

Digitization using a small number of quantization levels does not affect the visual quality of the image very much; even 4-bit quantization (16 levels) seems acceptable in the examples shown. However, if the image is to be subjected to numerical analysis, the accuracy of the analysis will be compromised if too few quantization levels are used.

The IKONOS sensor uses 11-bit digitization during image acquisition. The high radiometric resolution enables features under shadow to be recovered. Part of the running track in this IKONOS image is under cloud shadow.

The features under cloud shadow are recovered by applying a simple contrast and brightness enhancement technique.

Data Volume
The volume of digital data can potentially be large for multispectral data, as a given area is covered in many different wavelength bands. For example, a 3-band multispectral SPOT image covers an area of about 60 x 60 km² on the ground with a pixel separation of 20 m, so there are about 3000 x 3000 pixels per image. Each pixel intensity in each band is coded using an 8-bit (i.e. 1 byte) digital number, giving a total of about 27 million bytes per image. In comparison, a SPOT panchromatic scene has the same coverage of about 60 x 60 km², but the pixel size is 10 m, giving about 6000 x 6000 pixels and a total of about 36 million bytes per image. The panchromatic data has only one band; if a multispectral SPOT scene were also digitized at 10 m pixel size, the data volume would be 108 million bytes.

For very high spatial resolution imagery, such as that acquired by the IKONOS satellite, the data volume is even more significant. For example, an IKONOS 4-band multispectral image at 4-m pixel size covering an area of 10 km by 10 km, digitized at 11 bits (stored at 16 bits), has a data volume of 4 x 2500 x 2500 x 2 bytes, or 50 million bytes per image. A 1-m resolution panchromatic image covering the same area would have a data volume of 200 million bytes per image.

The images taken by a remote sensing satellite are transmitted to Earth through a telecommunication link. The bandwidth of the telecommunication channel sets a limit on the data volume for a scene taken by the imaging system. Ideally, it is desirable to have a high spatial resolution image with many spectral bands covering a wide area. In reality, depending on the intended application, spatial resolution may have to be compromised to accommodate a larger number of spectral bands or a wide area of coverage, and a small number of spectral bands or a smaller area of coverage may be accepted to allow high spatial resolution imaging. Thus, panchromatic systems are normally designed to give a higher spatial resolution than multispectral systems.
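The data-volume arithmetic in the examples above reduces to pixels x bands x bytes per pixel, as this short sketch shows:

```python
# Illustrative sketch: reproducing the data-volume figures quoted above.
def data_volume_bytes(pixels_per_side, n_bands, bytes_per_pixel):
    return pixels_per_side ** 2 * n_bands * bytes_per_pixel

print(data_volume_bytes(3000, 3, 1))  # 27,000,000  (3-band SPOT XS, 20 m)
print(data_volume_bytes(6000, 1, 1))  # 36,000,000  (SPOT PAN, 10 m)
print(data_volume_bytes(2500, 4, 2))  # 50,000,000  (IKONOS MS, 4 m, 16-bit)
```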

Optical remote sensing makes use of visible, near infrared and short-wave infrared sensors to form images of the earth's surface by detecting the solar radiation reflected from targets on the ground. Different materials reflect and absorb differently at different wavelengths. Thus, the targets can be differentiated by their spectral reflectance signatures in the remotely sensed images. Optical remote sensing systems are classified into the following types, depending on the number of spectral bands used in the imaging process:

- Panchromatic imaging system: The sensor is a single channel detector sensitive to radiation within a broad wavelength range. If the wavelength range coincides with the visible range, then the resulting image resembles a "black-and-white" photograph taken from space. The physical quantity being measured is the apparent brightness of the targets; the spectral information or "colour" of the targets is lost. Examples of panchromatic imaging systems are:
  o IKONOS PAN
  o SPOT HRV-PAN
- Multispectral imaging system: The sensor is a multichannel detector with a few spectral bands. Each channel is sensitive to radiation within a narrow wavelength band. The resulting image is a multilayer image which contains both the brightness and spectral (colour) information of the targets being observed. Examples of multispectral systems are:
  o LANDSAT MSS
  o LANDSAT TM
  o SPOT HRV-XS
  o IKONOS MS
- Superspectral imaging system: A superspectral imaging sensor has many more spectral channels (typically >10) than a multispectral sensor. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. Examples of superspectral systems are:
  o MODIS
  o MERIS
- Hyperspectral imaging system: A hyperspectral imaging system, also known as an "imaging spectrometer", acquires images in about a hundred or more contiguous spectral bands. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplanktons, pollution and bathymetry changes). An example of a hyperspectral system is:
  o Hyperion on the EO-1 satellite

Solar Irradiation
Optical remote sensing depends on the sun as the sole source of illumination. The solar irradiation spectrum above the atmosphere can be modeled by a black body radiation spectrum with a source temperature of 5900 K, with a peak irradiation located at about 500 nm wavelength. After passing through the atmosphere, the solar irradiation spectrum at the ground is modulated by the atmospheric transmission windows. Significant energy remains only within the wavelength range from about 0.25 to 3 µm. Physical measurements of the solar irradiance have also been performed using ground based and spaceborne sensors.

Solar irradiation spectra above the atmosphere and at sea level.

Spectral Reflectance Signature

When solar radiation hits a target surface, it may be transmitted, absorbed or reflected. Different materials reflect and absorb differently at different wavelengths. The reflectance spectrum of a material is a plot of the fraction of radiation reflected as a function of the incident wavelength, and serves as a unique signature for the material. In principle, a material can be identified from its spectral reflectance signature if the sensing system has sufficient spectral resolution to distinguish its spectrum from those of other materials. This premise provides the basis for multispectral remote sensing.

The following graph shows the typical reflectance spectra of five materials: clear water, turbid water, bare soil and two types of vegetation.

Reflectance Spectrum of Five Types of Landcover

The reflectance of clear water is generally low. However, the reflectance is maximum at the blue end of the spectrum and decreases as wavelength increases. Hence, clear water appears dark-bluish. Turbid water has some sediment suspension which increases the reflectance in the red end of the spectrum, accounting for its brownish appearance. The reflectance of bare soil generally depends on its composition. In the example shown, the reflectance increases monotonically with increasing wavelength, so it should appear yellowish-red to the eye.

Vegetation has a unique spectral signature which enables it to be distinguished readily from other types of land cover in an optical/near-infrared image. The reflectance is low in both the blue and red regions of the spectrum, due to absorption by chlorophyll for photosynthesis. It has a peak at the green region, which gives rise to the green colour of vegetation. In the near infrared (NIR) region, the reflectance is much higher than that in the visible band due to the cellular structure in the leaves. Hence, vegetation can be identified by the high NIR but generally low visible reflectances.

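As an illustrative sketch only, the qualitative signatures described above can be expressed as simple classification rules. The reflectance thresholds below are hypothetical, not values from the tutorial.

```python
# Illustrative sketch: rule-based separation of the cover types described
# above from their band reflectances (hypothetical thresholds).
def classify(blue, green, red, nir):
    if nir > 0.4 and red < 0.15:
        return "vegetation"    # high NIR, low visible (chlorophyll absorption)
    if nir < 0.1 and blue >= green >= red:
        return "clear water"   # low overall, strongest at the blue end
    return "soil or other"     # e.g. reflectance rising with wavelength

print(classify(blue=0.06, green=0.08, red=0.07, nir=0.50))  # vegetation
print(classify(blue=0.08, green=0.06, red=0.03, nir=0.02))  # clear water
```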
This property has been used in early reconnaissance missions during war times for "camouflage detection".

The shape of the reflectance spectrum can be used for identification of vegetation type. For example, the reflectance spectra of vegetation 1 and 2 in the above figure can be distinguished although they exhibit the general characteristics of high NIR but low visible reflectances: vegetation 1 has higher reflectance in the visible region but lower reflectance in the NIR region. For the same vegetation type, the reflectance spectrum also depends on other factors such as the leaf moisture content and the health of the plants. This property can be used for identifying tree types and plant conditions from remote sensing images.

The reflectance of vegetation in the SWIR region (e.g. band 5 of the Landsat TM and band 4 of the SPOT 4 sensors) is more varied, depending on the types of plants and the plant's water content. Water has strong absorption bands around 1.45, 1.95 and 2.50 µm. Outside these absorption bands in the SWIR region, the reflectance of leaves generally increases when the leaf liquid water content decreases. The SWIR band can be used in detecting plant drought stress and delineating burnt areas and fire-affected vegetation. The SWIR band is also sensitive to the thermal radiation emitted by intense fires, and hence can be used to detect active fires, especially during night-time when the background interference from SWIR in reflected sunlight is absent.

Typical Reflectance Spectrum of Vegetation. The labelled arrows indicate the common wavelength bands used in optical remote sensing of vegetation: A: blue band; B: green band; C: red band; D: near IR band; E: short-wave IR band.

Interpretation of Optical Images
Interpreting Optical Remote Sensing Images

Te tural Information. i.Four main types of information contained in an optical image are often utili ed for image interpretation: y y y y Radiometric Information (i e brightness. Panchromatic Images A panchromatic image consists of only one band. Spectral Information (i e colour. intensity. tone). .e. The Radiometric Information is the main information type utili ed in the interpretation. Geometric and onte tual Information They are illustrated in the following examples. It is usually displayed as a grey scale image. Thus. the displayed brightness of a particular pixel is proportional to the pixel digital number which is related to the intensity of solar radiation reflected by the targets in the pixel and detected by the detector. hue). a panchromatic image may be similarly interpreted as a black -and-white aerial photograph of the area.

A panchromatic image extracted from a SPOT panchromatic scene at a ground resolution of 10 m. The ground coverage is about 6.5 km (width) by 5.5 km (height). The urban area at the bottom left and a clearing near the top of the image have high reflected intensity, while the vegetated areas on the right part of the image are generally dark. Roads and blocks of buildings in the urban area are visible. A river flowing through the vegetated area, cutting across the top right corner of the image, can be seen. The river appears bright due to sediments, while the sea at the bottom edge of the image appears dark.

Multispectral Images
A multispectral image consists of several bands of data. For visual display, each band of the image may be displayed one band at a time as a grey scale image, or in combination of three bands at a time as a colour composite image. Interpretation of a multispectral colour composite image requires knowledge of the spectral reflectance signatures of the targets in the scene. In this case, the spectral information content of the image is utilized in the interpretation.

The following three images show the three bands of a multispectral image extracted from a SPOT multispectral scene at a ground resolution of 20 m. The area covered is the same as that shown in the above panchromatic image. Note that both the XS1 (green) and XS2 (red) bands look almost identical to the panchromatic image shown above. In contrast, the vegetated areas now appear bright in the XS3 (near infrared) band due to the high reflectance of leaves in the near infrared wavelength region. Several shades of grey can be identified for the vegetated areas, corresponding to different types of vegetation. Water masses (both the river and the sea) appear dark in the XS3 (near IR) band.

SPOT XS1 (green band)
SPOT XS2 (red band)

In displaying a colour composite image, three primary colours (red, green and blue) are used. When these three colours are combined in various proportions, they produce different colours in the visible spectrum. Associating each spectral band (not necessarily a visible band) with a separate primary colour results in a colour composite image.

Many colours can be formed by combining the three primary colours (Red, Green, Blue) in various proportions.

True Colour Composite
If a multispectral image consists of the three visual primary colour bands (red, green, blue), the three bands may be combined to produce a "true colour" image.

For example, the bands 3 (red band), 2 (green band) and 1 (blue band) of a LANDSAT TM image or an IKONOS multispectral image can be assigned respectively to the R, G and B colours for display. In this way, the colours of the resulting colour composite image resemble closely what would be observed by the human eyes.

A 1-m resolution true-colour IKONOS image.

False Colour Composite
The display colour assignment for any band of a multispectral image can be done in an entirely arbitrary manner. In this case, the colour of a target in the displayed image does not have any resemblance to its actual colour. The resulting product is known as a false colour composite image. There are many possible schemes of producing false colour composite images; some schemes may be more suitable for detecting certain objects in the image. A very common false colour composite scheme for displaying a SPOT multispectral image is shown below:

R = XS3 (NIR band)
G = XS2 (red band)
B = XS1 (green band)

This false colour composite scheme allows vegetation to be detected readily in the image. In this type of false colour composite image, vegetation appears in different shades of red depending on the types and conditions of the vegetation, since it has a high reflectance in the NIR band (as shown in the graph of spectral reflectance signatures).
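A minimal sketch of how such a composite is assembled, assuming numpy and three 2-D band arrays; the band-to-colour assignment follows the scheme above.

```python
import numpy as np

# Illustrative sketch: stacking XS3 (NIR), XS2 (red) and XS1 (green) into
# the R, G and B display channels of the false colour composite above.
def false_colour_composite(xs1, xs2, xs3):
    return np.dstack([xs3, xs2, xs1])   # shape (rows, cols, 3)

# Toy 100 x 100 bands: high NIR everywhere, so the composite displays as red.
xs1 = np.zeros((100, 100))
xs2 = np.zeros((100, 100))
xs3 = np.ones((100, 100))
print(false_colour_composite(xs1, xs2, xs3).shape)  # (100, 100, 3)
```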

In this type of false colour composite image, vegetation appears in different shades of red depending on the types and conditions of the vegetation, since vegetation has a high reflectance in the NIR band (as shown in the graph of spectral reflectance signatures). Clear water appears dark bluish (higher green band reflectance), while turbid water appears cyan (higher red reflectance due to sediments) compared to clear water. Bare soils, roads and buildings may appear in various shades of blue, yellow or grey, depending on their composition.

False colour composite multispectral SPOT image: Red: XS3, Green: XS2, Blue: XS1.

Another common false colour composite scheme for displaying an optical image with a short-wave infrared (SWIR) band is shown below:

R = SWIR band (SPOT4 band 4, Landsat TM band 5)
G = NIR band (SPOT4 band 3, Landsat TM band 4)
B = Red band (SPOT4 band 2, Landsat TM band 3)

An example of this false colour composite display is shown below for a SPOT 4 image.

False colour composite of a SPOT 4 multispectral image including the SWIR band: Red: SWIR band, Green: NIR band, Blue: Red band. In this display scheme, vegetation appears in shades of green. Bare soils and clearcut areas appear purplish or magenta. The patch of bright red area on the left is the location of active fires. A smoke plume originating from the active fire site appears faint bluish in colour.

False colour composite of a SPOT 4 multispectral image without displaying the SWIR band: Red: NIR band, Green: Red band, Blue: Green band. Vegetation appears in shades of red. The smoke plume appears bright bluish white.

Natural Colour Composite

i. etc. the spectral bands (some of which may not be in the visible region) may be combined in such a way that the appearance of the displayed image resembles a visible colour photograph. Natural colour composite multispectral SPOT image: Red: XS2. this term is misleading since in many instances the colours are only simulated to look similar to the "true" colours of the targets. This ratio is known as the Ratio Vegetation Index (RVI) RVI = NIR/Red Since vegetation has high NIR reflectance but low red reflectance. Blue: 0. vegetation in green. The SPOT HRV multispectral sensor does not have a blue band.0. But a reasonably good natural colour composite can be produced by the following combination of the spectral bands: R = XS2 G = (3 XS1 + XS3)/4 B = (3 XS1 . vegetated areas will have higher RVI values compared to non-vegetated aeras. The term "natural colour" is preferred.XS3)/4 where R.25 XS3 Vegetation Indices Different bands of a multispectral image may be combined to accentuate the vegetated areas. One such combination is the ratio of the near-infrared band to the red band.75 XS2 + 0. red. and NIR bands respectively. water in blue. Another commonly used vegetation index is the Normalised Difference Vegetation Index (NDVI) computed by . G and B are the display colour channels. However.25 XS3. soil in brown or grey. Green: 0. XS2 and XS3 correspond to the green. red.For optical images lacking one or more of the three visual primary colour bands (i. XS1.75 XS2 . green and blue).e. The three bands. Many people refer to this composite as a "true colour" composite.e.

Another commonly used vegetation index is the Normalised Difference Vegetation Index (NDVI), computed by

NDVI = (NIR - Red)/(NIR + Red)

Normalised Difference Vegetation Index (NDVI) derived from the above SPOT image.

In the NDVI map shown above, the bright areas are vegetated, while the non-vegetated areas (buildings, clearings, river, sea) are generally dark. Note that the trees lining the roads are clearly visible as grey linear features against the dark background.

The NDVI band may also be combined with other bands of the multispectral image to form a colour composite image which helps to discriminate different types of vegetation. One such example is shown below. In this image, the display colour assignment is:

R = XS3 (Near IR band)
G = (XS3 - XS2)/(XS3 + XS2) (NDVI band)
B = XS1 (green band)
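A corresponding sketch for the NDVI and the NDVI colour composite just described (again assuming NumPy band arrays; the small epsilon is only there to avoid division by zero):

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index, in the range [-1, 1]."""
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + eps)

def ndvi_composite(xs1, xs2, xs3):
    """Colour composite with the NDVI in the green channel, rescaled from
    [-1, 1] to [0, 1] for display alongside the XS3 and XS1 bands."""
    scale = lambda b: b.astype(float) / max(float(b.max()), 1.0)
    return np.dstack([scale(xs3), (ndvi(xs3, xs2) + 1.0) / 2.0, scale(xs1)])
```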

NDVI colour composite of the SPOT image: Red: XS3, Green: NDVI, Blue: XS1. The non-vegetated areas appear in dark blue and magenta. At least three types of vegetation can be discriminated in this colour composite image: the green, bright yellow and golden yellow areas. The green areas consist of dense trees with closed canopy. The bright yellow areas are covered with shrubs or less dense trees. The golden yellow areas are covered with grass.

Textural Information

Texture is an important aid in visual image interpretation, especially for high spatial resolution imagery. An example is shown below.

This is an IKONOS 1-m resolution pan-sharpened colour image of an oil palm plantation. The image is 300 m across. Even though the general colour is green throughout, three distinct land cover types can be identified from the image texture. The triangular patch at the bottom left corner is the oil palm plantation with matured palm trees. Individual trees can be seen; the predominant texture is the regular pattern formed by the tree crowns. Near the top of the image, the trees are closer together and the tree canopies merge, forming another distinctive textural pattern. This area is probably inhabited by shrubs or abandoned trees, with tall undergrowth and shrubs in between the trees. At the bottom right corner, the colour is more homogeneous, indicating that it is probably an open field with short grass. It is also possible to characterise the textural features numerically, and algorithms for computer-aided automatic discrimination of different textures in an image are available.
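The tutorial does not name a specific texture measure, but one classical numerical characterisation is the grey-level co-occurrence matrix (GLCM). The sketch below is only one plausible choice, computing the GLCM "contrast" statistic for a horizontal pixel offset:

```python
import numpy as np

def glcm_contrast(img, levels=32):
    """Texture sketch: build a grey-level co-occurrence matrix (GLCM) for
    the horizontal neighbour offset and return its contrast statistic.
    Homogeneous areas (e.g. short grass) give low contrast; the regular
    bright/dark pattern of tree crowns gives higher values."""
    img = np.asarray(img, dtype=float)
    q = np.floor(img / (img.max() + 1e-9) * (levels - 1)).astype(int)  # quantise
    left, right = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1.0)        # count co-occurring level pairs
    glcm /= glcm.sum()                         # normalise to joint probabilities
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))  # weighted squared level difference
```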

Geometric and Contextual Information

Using geometric and contextual features for image interpretation requires some a-priori information about the area of interest. The "interpretational keys" commonly employed are: shape, size, pattern, location, and association with other familiar features. Contextual and geometric information plays an important role in the interpretation of very high resolution imagery.

This is an IKONOS image of a container port, evidenced by the presence of ships, cranes and regular rows of rectangular containers. Familiar features visible in the image, such as the buildings, roads, vehicles and roadside trees, make interpretation of the image straightforward. The port is probably not operating at its maximum capacity, as empty spaces can be seen in between the containers.

This SPOT image shows an oil palm plantation adjacent to a logged-over forest in Riau, Sumatra. The image area is 8.6 km by 6.4 km. The rectangular grid pattern seen here is a main characteristic of large scale oil palm plantations in this region. The dark red regions are the remaining forests. Tracks can be seen intruding into the forests, indicating some logging activities in the forests.

This SPOT image shows land clearing being carried out in a logged-over forest. A smoke plume can be seen emanating from a site of active fires. It is obvious that the land clearing activities are carried out with the aid of fires. The logging tracks are also seen in the cleared areas (dark greenish areas).

Infrared Remote Sensing

Infrared remote sensing makes use of infrared sensors to detect infrared radiation emitted from the Earth's surface. The middle-wave infrared (MWIR) and long-wave infrared (LWIR) bands are within the thermal infrared region. These radiations are emitted from warm objects such as the Earth's surface, and are used in satellite remote sensing for measurements of the earth's land and sea surface temperature. Thermal infrared remote sensing is also often used for the detection of forest fires.

Black Body Radiation

The amount of thermal radiation emitted at a particular wavelength from a warm object depends on its temperature. If the earth's surface is regarded as a blackbody emitter, its apparent temperature (known as the brightness temperature) and the spectral radiance are related by Planck's blackbody equation.

Thermal emission from a surface at various temperatures, modelled by Planck's equation for an ideal black body.

As the curves plotted in the above figure show, the peak wavelength decreases as the brightness temperature increases. For a surface at a brightness temperature around 300 K, the spectral radiance peaks at a wavelength around 10 µm. For this reason, most satellite sensors for measurement of the earth surface temperature have a band detecting infrared radiation around 10 µm. The two bands around 3.8 µm (e.g. AVHRR band 3) and 10 µm (e.g. AVHRR band 4) commonly available in infrared remote sensing satellite sensors are marked in the figure.

Besides the measurement of regular surface temperature, infrared sensors can be used for detection of forest fires or other warm/hot objects. For typical fire temperatures from about 500 K (smouldering fire) to over 1000 K (flaming fire), the radiance versus wavelength curves peak at around 3.8 µm. Sensors such as the NOAA-AVHRR, ERS-ATSR and TERRA-MODIS are equipped with this band, which can be used for detection of fire hot spots.

This is a true-colour image (at 500 m resolution) acquired by MODIS on 9 July 2001 over the Sumatra and Peninsular Malaysia area. Hot spots detected by the MODIS thermal infrared bands are indicated as red dots in the image. Smoke plumes can be seen spreading northwards from the fire area towards the northern part of Peninsular Malaysia.
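The relationship between brightness temperature and peak emission wavelength described above can be checked numerically. The sketch below evaluates Planck's equation and Wien's displacement law for the temperatures discussed; it is an illustration, not any sensor's calibration code:

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m s^-1]
K = 1.381e-23   # Boltzmann constant [J K^-1]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of an ideal black body [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * K * temp_k))

def peak_wavelength_um(temp_k):
    """Wien's displacement law: wavelength of peak emission in micrometres."""
    return 2898.0 / temp_k

for t in (300.0, 500.0, 1000.0):
    print(f"T = {t:6.0f} K -> peak near {peak_wavelength_um(t):4.1f} um")
# 300 K -> ~9.7 um (earth's surface); 1000 K -> ~2.9 um (flaming fire),
# close to the ~3.8 um band used for fire detection
```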

50-km resolution Global Sea Surface Temperature (SST) field for the period 11 to 14 August 2001, derived from NOAA AVHRR thermal infrared data. (Credit: NOAA/NESDIS)

Occurrence of abnormal climatic conditions such as El Nino can be predicted by observations of the SST anomaly, i.e. the deviation of the daily SST from the mean SST.

Microwave Remote Sensing

Electromagnetic radiation in the microwave wavelength region is used in remote sensing to provide useful information about the Earth's atmosphere, land and ocean. A microwave radiometer is a passive device which records the natural microwave emission from the earth. It can be used to measure the total water content of the atmosphere within its field of view.

A radar altimeter sends out pulses of microwave signals and records the signals scattered back from the earth surface. The height of the surface can be measured from the time delay of the return signals.

A wind scatterometer can be used to measure wind speed and direction over the ocean surface. It sends out pulses of microwaves along several directions and records the magnitude of the signals backscattered from the ocean surface. The magnitude of the backscattered signal is related to the ocean surface roughness, which in turn is dependent on the sea surface wind condition, and hence the wind speed and direction can be derived.

Synthetic Aperture Radar (SAR)

In synthetic aperture radar (SAR) imaging, microwave pulses are transmitted by an antenna towards the earth surface. The microwave energy scattered back to the spacecraft is measured. The SAR makes use of the radar principle to form an image by utilising the time delay of the backscattered signals.

A radar pulse is transmitted from the antenna to the ground. The radar pulse is scattered by the ground targets back to the antenna.

In real aperture radar imaging, the ground resolution is limited by the size of the microwave beam sent out from the antenna. Finer details on the ground can be resolved by using a narrower beam. The beam width is inversely proportional to the size of the antenna, i.e. the longer the antenna, the narrower the beam.

It is not feasible for a spacecraft to carry the very long antenna required for high resolution imaging of the earth surface. To overcome this limitation, SAR capitalises on the motion of the spacecraft to emulate a large antenna (about 4 km for the ERS SAR) from the small antenna (10 m on the ERS satellite) it actually carries on board.

Imaging geometry for a typical strip-mapping synthetic aperture radar imaging system.

The microwave beam sent out by the antenna illuminates an area on the ground (known as the antenna's "footprint"). In radar imaging, the recorded signal strength depends on the microwave energy backscattered from the ground targets inside this footprint. Increasing the length of the antenna will decrease the width of the footprint.
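The inverse relationship between antenna length and footprint width can be illustrated with rough ERS-like numbers. The values below (C-band wavelength ~5.7 cm, slant range ~850 km) are approximations for illustration only, not official specifications:

```python
def real_aperture_footprint_m(wavelength_m, slant_range_m, antenna_m):
    """Along-track footprint of a real aperture antenna: the beamwidth is
    roughly wavelength/antenna_length, projected over the slant range."""
    return wavelength_m / antenna_m * slant_range_m

def sar_azimuth_resolution_m(antenna_m):
    """Classical strip-map SAR result: azimuth resolution of about half
    the physical antenna length, independent of range."""
    return antenna_m / 2.0

print(real_aperture_footprint_m(0.057, 850e3, 10.0))  # ~4.8 km footprint
print(sar_azimuth_resolution_m(10.0))                 # ~5 m after synthesis
```

The first number is consistent with the kilometre-scale synthetic aperture mentioned above: the footprint swept out by the small real antenna sets the length of the antenna that the SAR processing can emulate.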

The antenna's footprint sweeps out a strip parallel to the direction of the satellite's ground track.

SAR Imaging - Frequency, Polarisation and Incident Angle

Microwave Frequency

The ability of microwaves to penetrate clouds, precipitation, or land surface cover depends on their frequency. Generally, the penetration power increases for longer wavelengths (lower frequencies). Click here to read more about microwave frequency, polarisation and incident angle in SAR imaging.

All-Weather Imaging

Due to the cloud penetrating property of microwaves, SAR is able to acquire "cloud-free" images in all weather. This is especially useful in the tropical regions, which are frequently under cloud cover throughout the year. Being an active remote sensing device, it is also capable of night-time operation.

Interaction between Microwaves and Earth's Surface

When microwaves strike a surface, the proportion of energy scattered back to the sensor depends on many factors:

- Physical factors such as the dielectric constant of the surface materials, which also depends strongly on the moisture content;
- Geometric factors such as surface roughness, slopes, and the orientation of the objects relative to the radar beam direction;
- The types of landcover (soil, vegetation or man-made objects);
- Microwave frequency, polarisation and incident angle.

The SAR backscattered intensity generally increases with the surface roughness. However, "roughness" is a relative quantity. Whether a surface is considered rough or not depends on the length scale of the measuring instrument. If a metre-rule is used to measure surface roughness, then any surface fluctuation of the order of 1 cm or less will be considered smooth. On the other hand, if a surface is examined under a microscope, then a fluctuation of the order of a fraction of a millimetre is considered very rough.

In SAR imaging, the reference length scale for surface roughness is the wavelength of the microwave. If the surface fluctuation is less than the microwave wavelength, the surface is considered smooth. For example, little radiation is backscattered from a surface with a fluctuation of the order of 5 cm if an L-band (15 to 30 cm wavelength) SAR is used, and the surface will appear dark.

However, the same surface will appear bright due to increased backscattering in an X-band (2.4 to 3.8 cm wavelength) SAR image.

The land surface appears smooth to a long wavelength radar. Little radiation is backscattered from the surface.

The same land surface appears rough to a short wavelength radar. The surface appears bright in the radar image due to increased backscattering from the surface.

Both the ERS and RADARSAT SARs use the C band microwave, while the JERS SAR uses the L band. The C band is useful for imaging ocean and ice features, but it also finds numerous land applications. The L band has a longer wavelength and is more penetrating than the C band. Hence, it is more useful in forest and vegetation studies, as it is able to penetrate deeper into the vegetation canopy.
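The wavelength-relative notion of roughness can be written down as a toy rule. The band wavelengths below are nominal representative values, and the rule is only the tutorial's rule of thumb; formal criteria (such as the Rayleigh criterion) also involve the incidence angle:

```python
# Nominal representative wavelengths (cm) for common SAR bands
WAVELENGTH_CM = {"X": 3.1, "C": 5.7, "L": 23.5}

def radar_appearance(fluctuation_cm, band):
    """Rule of thumb: fluctuations smaller than the wavelength backscatter
    little energy (dark); comparable or larger ones appear bright."""
    if fluctuation_cm < WAVELENGTH_CM[band]:
        return "dark (smooth)"
    return "bright (rough)"

for band in ("X", "C", "L"):
    print(f"5 cm fluctuation, {band} band: {radar_appearance(5.0, band)}")
# X: bright, L: dark, matching the example above; C band is borderline
```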

The short wavelength radar interacts mainly with the top layer of the forest canopy, while the longer wavelength radar is able to penetrate deeper into the canopy and undergo multiple scattering between the canopy, trunks and soil.

Microwave Polarisation in Synthetic Aperture Radar

The microwave polarisation refers to the orientation of the electric field vector of the transmitted beam with respect to the horizontal direction. If the electric field vector oscillates along a direction parallel to the horizontal direction, the beam is said to be "H" polarised. If the electric field vector oscillates along a direction perpendicular to the horizontal direction, the beam is "V" polarised.

Microwave polarisation: if the electric field vector oscillates along the horizontal direction, the wave is H polarised. If the electric field vector oscillates perpendicular to the horizontal direction, the wave is V polarised.

After interacting with the earth surface, the polarisation state may be altered, so the backscattered microwave energy usually contains a mixture of the two polarisation states. The SAR sensor may be designed to detect the H or the V component of the backscattered radiation. Hence, there are four possible polarisation configurations for a SAR system: "HH", "VV", "HV" and "VH", depending on the polarisation states of the transmitted and received microwave signals. For example, the SAR onboard the ERS satellite transmits V polarised and receives only V polarised microwave pulses, so it is a "VV" polarised SAR. In comparison, the SAR onboard the RADARSAT satellite is a "HH" polarised SAR.

Incident Angles

The incident angle refers to the angle between the incident radar beam and the direction perpendicular to the ground surface. The interaction between microwaves and the surface depends on the incident angle of the radar pulse on the surface. ERS SAR has a constant incident angle of 23° at the scene centre, which is optimal for detecting ocean waves and other ocean surface features. A larger incident angle may be more suitable for other applications; for example, a large incident angle will increase the contrast between forested and clearcut areas. RADARSAT is the first spaceborne SAR equipped with multiple beam modes, enabling microwave imaging at different incident angles and resolutions. Acquisition of SAR images of an area using two different incident angles will also enable the construction of a stereo image of the area.

Interpreting SAR Images

SAR Images

Synthetic Aperture Radar (SAR) images can be obtained from satellites such as ERS, JERS and RADARSAT. Since radar interacts with ground features in ways different from optical radiation, special care has to be taken when interpreting radar images. An example of an ERS SAR image is shown below, together with a SPOT multispectral natural colour composite image of the same area for comparison.

ERS SAR image (pixel size = 12.5 m)

SPOT multispectral image in natural colour (pixel size = 20 m)

The urban area on the left appears bright in the SAR image, while the vegetated areas on the right have an intermediate tone. The clearings and water (sea and river) appear dark in the image. The SAR image was acquired in September 1995 while the SPOT image was acquired in February 1994; additional clearings can be seen in the SAR image. These features will be explained in the following sections.

Speckle Noise

Unlike optical images, radar images are formed by coherent interaction of the transmitted microwaves with the targets. Hence, a radar image suffers from the effects of speckle noise, which arises from the coherent summation of signals scattered from ground scatterers distributed randomly within each pixel. A radar image appears more noisy than an optical image. The speckle noise is sometimes suppressed by applying a speckle removal filter to the digital image before display and further analysis.
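The tutorial does not specify which filter was used; the Lee filter is one classical choice, sketched minimally below for a 2-D NumPy intensity array:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Minimal Lee filter sketch: blend each pixel with its local mean,
    weighting by the local variance so that edges (high variance) are
    preserved while homogeneous areas are smoothed."""
    img = np.asarray(img, dtype=float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    var = np.maximum(sq_mean - mean * mean, 0.0)
    if noise_var is None:
        noise_var = var.mean()          # crude global estimate of speckle variance
    weight = var / (var + noise_var)    # ~0 in flat areas, ~1 near edges
    return mean + weight * (img - mean)
```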

This image is extracted from the above SAR image, showing the clearing areas between the river and the coastline. The image appears "grainy" due to the presence of speckles.

This image shows the effect of applying a speckle removal filter to the SAR image. The vegetated areas and the clearings now appear more homogeneous.

Backscattered Radar Intensity

A single radar image is usually displayed as a grey scale image, such as the one shown above. The intensity of each pixel represents the proportion of microwave energy backscattered from that area on the ground, which depends on a variety of factors: the types, sizes, shapes and orientations of the scatterers in the target area; the moisture content of the target area; the frequency and polarisation of the radar pulses; as well as the incident angles of the radar beam. The pixel intensity values are often converted to a physical quantity called the backscattering coefficient or normalised radar cross-section, measured in decibel (dB) units, with values ranging from +5 dB for very bright objects to -40 dB for very dark surfaces.
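Once the image has been calibrated to a linear backscattering coefficient (a sensor-specific step not shown here), the conversion to decibels is a one-liner; this sketch just illustrates the dB range quoted above:

```python
import numpy as np

def to_decibels(sigma0):
    """Convert a calibrated linear backscattering coefficient to dB."""
    return 10.0 * np.log10(sigma0)

print(to_decibels(np.array([3.16, 1.0, 1e-4])))  # ~ +5 dB, 0 dB, -40 dB
```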

Interpreting SAR Images

Interpreting a radar image is not a straightforward task. It very often requires some familiarity with the ground conditions of the areas imaged. As a useful rule of thumb, the higher the backscattered intensity, the rougher the surface being imaged.

Flat surfaces such as paved roads, runways or calm water normally appear as dark areas in a radar image, since most of the incident radar pulses are specularly reflected away.

Specular Reflection: A smooth surface acts like a mirror for the incident radar pulse. Most of the incident radar energy is reflected away according to the law of specular reflection, i.e. the angle of reflection is equal to the angle of incidence. Very little energy is scattered back to the radar sensor.

Diffused Reflection: A rough surface reflects the incident radar pulse in all directions. Part of the radar energy is scattered back to the radar sensor. The amount of energy backscattered depends on the properties of the target on the ground.

Calm sea surfaces appear dark in SAR images. However, rough sea surfaces may appear bright, especially when the incidence angle is small. Under certain conditions when the sea surface is sufficiently rough, oil films can be detected as dark patches against a bright background, because the presence of oil films smoothens out the sea surface.

A ship (bright target near the bottom left corner) is seen discharging oil into the sea in this ERS SAR image.

Trees and other vegetation are usually moderately rough on the wavelength scale. Hence, they appear as moderately bright features in the image. The tropical rain forests have a characteristic backscatter coefficient of between -6 and -7 dB, which is spatially homogeneous and remains stable in time. For this reason, the tropical rainforests have been used as calibrating targets in performing radiometric calibration of SAR images.

Very bright targets may appear in the image due to the corner-reflector or double-bounce effect, where the radar pulse first bounces off the horizontal ground (or the sea) towards the target.

The pulse is then reflected from one vertical surface of the target back to the sensor. Built-up areas and many man-made features usually appear as bright patches in a radar image due to this corner reflector effect. Examples of such targets are ships on the sea, high-rise buildings and regular metallic objects such as cargo containers.

Corner Reflection: When two smooth surfaces form a right angle facing the radar beam, the beam bounces twice off the surfaces and most of the radar energy is reflected back to the radar sensor.

This SAR image shows an area of the sea near a busy port. Many ships can be seen as bright spots in this image due to corner reflection. The sea is calm, and hence the ships can be easily detected against the dark background.

The brightness of areas covered by bare soil may vary from very dark to very bright, depending on the soil's roughness and moisture content. Typically, rough soil appears bright in the image. For similar soil roughness, the surface with higher moisture content will appear brighter.

Dry Soil: Some of the incident radar energy is able to penetrate into the soil surface, resulting in less backscattered intensity.

Wet Soil: The large difference in electrical properties between water and air results in higher backscattered radar intensity.

Flooded Soil: Radar is specularly reflected off the water surface, resulting in low backscattered intensity. The flooded area appears dark in the SAR image.

Multitemporal SAR Images

If more than one radar image of the same area acquired at different times is available, they can be combined to give a multitemporal colour composite image of the area. For example, if three images are available, one image can be assigned to the Red, the second to the Green and the third to the Blue colour channel for display. This technique is especially useful in detecting landcover changes over the period of image acquisition. The areas where no change in landcover occurs will appear in grey, while areas with landcover changes will appear as colourful patches in the image.
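A minimal sketch of such a composite, assuming three co-registered, speckle-filtered SAR acquisitions as NumPy arrays (names are hypothetical):

```python
import numpy as np

def multitemporal_composite(img1, img2, img3):
    """Assign three co-registered SAR acquisitions to the R, G and B display
    channels. Pixels with similar backscatter on all three dates come out
    grey; landcover changes show up as coloured patches."""
    def stretch(band):
        lo, hi = np.percentile(band, (2, 98))  # clip outliers before scaling
        return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
    return np.dstack([stretch(img1), stretch(img2), stretch(img3)])
```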

This image is an example of a multitemporal colour composite SAR image. Three SAR images acquired by the ERS satellite on 5 May, 9 June and 14 July 1996 are assigned to the red, green and blue channels respectively for display. The area shown is part of the rice growing areas in the Mekong River delta, Vietnam, near the towns of Soc Trang and Phung Hiep. The two towns appear as bright white spots in this image. The colourful areas are the rice growing areas, where the landcover changes rapidly during the rice season. The greyish linear features are the more permanent trees lining the canals. The grey patch near the bottom of the image is wetland forest. An area of depression flooded with water during this season is visible as a dark region.

Image Processing and Analysis

Many image processing and analysis techniques have been developed to aid the interpretation of remote sensing images and to extract as much information as possible from the images. The choice of specific techniques or algorithms depends on the goals of each individual project. In this section, we will examine some procedures commonly used in analysing and interpreting remote sensing images.

Pre-Processing

Prior to data analysis, initial processing of the raw data is usually carried out to correct for any distortion due to the characteristics of the imaging system and imaging conditions. Depending on the user's requirements, some standard correction procedures may be carried out by the ground station operators before the data is delivered to the end-user. These procedures include radiometric correction, to correct for uneven sensor response over the whole image, and geometric correction, to correct for geometric distortion due to Earth's rotation and other imaging conditions (such as oblique viewing). The image may also be transformed to conform to a specific map projection system. Furthermore, if the accurate geographical location of an area on the image needs to be known, ground control points (GCPs) are used to register the image to a precise map (geo-referencing).
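As a simplified illustration of geo-referencing with GCPs, the sketch below fits an affine image-to-map transform by least squares. The GCP coordinates are made up for illustration, and real geo-referencing often uses higher-order or projection-aware models:

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine transform from image (col, row) coordinates to
    map coordinates, estimated from ground control points (needs >= 3)."""
    A = np.hstack([np.asarray(img_xy, float), np.ones((len(img_xy), 1))])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(map_xy, float), rcond=None)
    return coeffs                                 # 3x2 parameter matrix

# Made-up GCPs: image pixel -> map easting/northing (illustrative only)
img_pts = [(100, 200), (1500, 180), (800, 1400), (120, 1300)]
map_pts = [(350200.0, 148900.0), (378100.0, 149600.0),
           (365300.0, 125300.0), (351000.0, 126900.0)]
M = fit_affine(img_pts, map_pts)
print(np.array([640.0, 512.0, 1.0]) @ M)          # map position of pixel (640, 512)
```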

Image Enhancement

In order to aid visual interpretation, the visual appearance of the objects in the image can be improved by image enhancement techniques, such as grey level stretching to improve the contrast and spatial filtering for enhancing the edges. An example of an enhancement procedure is shown here.

Multispectral SPOT image of the same area shown in a previous section, but acquired at a later date. Radiometric and geometric corrections have been done. The image has also been transformed to conform to a certain map projection (UTM projection). This image is displayed without any further enhancement.

In the above unenhanced image, a bluish tint can be seen all over the image, producing a hazy appearance. This haze is due to scattering of sunlight by the atmosphere into the field of view of the sensor. This effect also degrades the contrast between different landcovers.

It is useful to examine the image histograms before performing any image enhancement. The x-axis of the histogram is the range of the available digital numbers, i.e. 0 to 255. The y-axis is the number of pixels in the image having a given digital number. The histograms of the three bands of this image are shown in the following figures.

Histogram of the XS3 (near infrared) band (displayed in red).

Histogram of the XS2 (red) band (displayed in green).

Histogram of the XS1 (green) band (displayed in blue).

Note that the minimum digital number for each band is not zero. Each histogram is shifted to the right by a certain amount. This shift is due to the atmospheric scattering component adding to the actual radiation reflected from the ground. The shift is particularly large for the XS1 band compared to the other two bands, due to the higher contribution from Rayleigh scattering at the shorter wavelength.

The maximum digital number of each band is also not 255. The sensor's gain factor has been adjusted to anticipate any possibility of encountering a very bright object. Hence, most of the pixels in the image have digital numbers well below the maximum value of 255.

The image can be enhanced by a simple linear grey-level stretching. In this method, a lower threshold value is chosen so that all pixel values below this threshold are mapped to zero. An upper threshold value is also chosen so that all pixel values above this threshold are mapped to 255. All other pixel values are linearly interpolated to lie between 0 and 255. The lower and upper thresholds are usually chosen to be values close to the minimum and maximum pixel values of the image. The Grey-Level Transformation Table is shown in the following graph.
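In code, the same mapping is only a few lines. This is a minimal sketch; actual threshold values would be read off the histograms of each band:

```python
import numpy as np

def linear_stretch(band, lower, upper):
    """Map pixel values in [lower, upper] linearly onto [0, 255]; values
    below the lower threshold become 0, values above it become 255."""
    scaled = (band.astype(float) - lower) / float(upper - lower) * 255.0
    return np.clip(scaled, 0.0, 255.0).astype(np.uint8)

# Thresholds chosen near each band's minimum and maximum DN (values made up):
# xs1_stretched = linear_stretch(xs1, lower=45, upper=160)
```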

Grey-Level Transformation Table for performing linear grey level stretching of the three bands of the image. Blue line: XS1 band. Green line: XS2 band. Red line: XS3 band.

The result of applying the linear stretch is shown in the following image.

Multispectral SPOT image after enhancement by a simple linear grey-level stretching. Note that the hazy appearance has generally been removed, except for some parts near the top of the image. The contrast between different features has been improved.

Image Classification

Different landcover types in an image can be discriminated using image classification algorithms based on spectral features, i.e. the brightness and "colour" information contained in each pixel. The classification procedures can be "supervised" or "unsupervised".

In supervised classification, the spectral features of some areas of known landcover types are extracted from the image. These areas are known as the "training areas". Every pixel in the whole image is then classified as belonging to one of the classes depending on how close its spectral features are to the spectral features of the training areas.

In unsupervised classification, the computer program automatically groups the pixels in the image into separate clusters, depending on their spectral features. Each cluster will then be assigned a landcover type by the analyst.

Each class of landcover is referred to as a "theme" and the product of classification is known as a "thematic map". The following image shows an example of a thematic map. This map was derived from the multispectral SPOT image of the test area shown in a previous section, using an unsupervised classification algorithm.

SPOT multispectral image of the test area
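The tutorial does not name the clustering algorithm used to produce the map below; k-means is a common choice for unsupervised classification, and the sketch below (a hypothetical helper, not the software actually used) shows the idea:

```python
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_classify(bands, n_classes=8, seed=0):
    """Cluster pixels by their spectral features with k-means. `bands` is a
    (rows, cols, n_bands) array; the result is a (rows, cols) label map.
    The analyst then assigns a landcover type to each cluster."""
    rows, cols, n_bands = bands.shape
    pixels = bands.reshape(-1, n_bands).astype(float)
    labels = KMeans(n_clusters=n_classes, n_init=10,
                    random_state=seed).fit_predict(pixels)
    return labels.reshape(rows, cols)
```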

Thematic map derived from the SPOT image using an unsupervised classification algorithm.

The accuracy of a thematic map derived from remote sensing images should be verified by field observation. A plausible assignment of landcover types to the thematic classes is shown in the following table.

Class No. (Colour in Map)    Landcover Type
1 (black)                    Clear water
2 (green)                    Dense forest with closed canopy
3 (yellow)                   Shrubs, less dense forest
4 (orange)                   Grass
5 (cyan)                     Bare soil, built-up areas
6 (blue)                     Turbid water, bare soil, built-up areas
7 (red)                      Bare soil, built-up areas
8 (white)                    Bare soil, built-up areas

The spectral features of these landcover classes can be exhibited in the two graphs shown below. The first graph is a plot of the mean pixel values of the XS3 (near infrared) band versus the XS2 (red) band for each class. The second graph is a plot of the mean pixel values of the XS2 (red) band versus the XS1 (green) band. The standard deviations of the pixel values for each class are also shown.

Scatter plot of the mean pixel values for each landcover class.

In the scatterplot of the class means in the XS3 and XS2 bands, the data points for the non-vegetated landcover classes generally lie on a straight line passing through the origin. This line is called the "soil line". The vegetated landcover classes lie above the soil line, due to their higher reflectance in the near infrared region (XS3 band) relative to the visible region. In the XS2 (visible red) versus XS1 (visible green) scatterplot, all the data points generally lie on a straight line, showing that the two visible bands are very highly correlated. The vegetated areas and clear water are generally dark, while the other non-vegetated landcover classes have varying brightness in the visible bands.

Spatial Feature Extraction

In high spatial resolution imagery, details such as buildings and roads can be seen. The amount of detail depends on the image resolution. In very high resolution imagery, even road markings, vehicles, individual tree crowns, and aggregates of people can be seen clearly. Pixel-based methods of image analysis will not work successfully in such imagery. In order to fully exploit the spatial information contained in the imagery, image processing and analysis algorithms utilising the textural, contextual and geometrical properties are required. Such algorithms make use of the relationships between neighbouring pixels for information extraction. Incorporation of a-priori information is sometimes required. A multi-resolution approach (i.e. analysis at different spatial scales, combining the results) is also a useful strategy when dealing with very high resolution imagery. In this case, pixel-based methods can be used at the lower resolutions and merged with the contextual and textural methods at the higher resolutions.

Building height can be derived from a single image using a simple geometric method if the shadows of the buildings can be located in the image. In this case, the solar illumination direction and the satellite sensor viewing direction need to be known. For example, the height of the building shown here can be determined by measuring the distance between a point on the top of the building and the corresponding point of its shadow on the ground, using a simple geometric relation (for a near-vertical view, for instance, the height is approximately the shadow length multiplied by the tangent of the solar elevation angle).

Individual trees in very high resolution imagery can be detected based on the tree crown's intensity profile. An automated technique for detecting and counting oil palm trees in IKONOS images, based on the differential geometry concepts of edge and curvature, has been developed at CRISP.

Oil palm trees in an IKONOS image. Detected trees (white dots) superimposed on the image.

Measurement of Bio-geophysical Parameters

Specific instruments carried on board the satellites can be used to make measurements of the bio-geophysical parameters of the earth. Some examples are: atmospheric water vapour content, stratospheric ozone, tropospheric aerosol, land and sea surface temperature, sea water chlorophyll concentration, forest biomass, sea surface wind field, etc. Specific satellite missions have been launched to continuously monitor the global variations of these environmental parameters, which may reveal the causes or the effects of global climate change and the impacts of human activities on the environment.

Geographical Information System (GIS)

Different forms of imagery such as optical and radar images provide complementary information about the landcover. More detailed information can be derived by combining several different types of images. For example, a radar image can form one of the layers in combination with the visible and near infrared layers when performing classification.
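A sketch of this layer-combination idea: co-registered optical bands and a radar layer are stacked into one feature array, which a pixel-based classifier (such as the k-means sketch shown earlier) can then consume. The array names are hypothetical:

```python
import numpy as np

def stack_layers(xs1, xs2, xs3, sar):
    """Combine co-registered optical bands and a SAR layer into a single
    (rows, cols, 4) feature array for pixel-based classification. All
    layers must share the same grid; the SAR layer is assumed to be
    speckle filtered and resampled to the optical pixel size."""
    return np.dstack([np.asarray(b, dtype=float) for b in (xs1, xs2, xs3, sar)])
```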

The thematic information derived from the remote sensing images is often combined with other auxiliary data to form the basis for a Geographic Information System (GIS). A GIS is a database of different layers, where each layer contains information about a specific aspect of the same area, which is used for analysis by the resource scientists.

End of Tutorial
