We perceive the surrounding world through our five senses. Some senses (touch and taste) require contact of our sensing organs with the objects. However, we acquire much information about our surroundings through the senses of sight and hearing, which do not require close contact between the sensing organs and the external objects. In other words, we are performing remote sensing all the time. Generally, remote sensing refers to the activities of recording/observing/perceiving (sensing) objects or events at far away (remote) places. In remote sensing, the sensors are not in direct contact with the objects or events being observed. The information needs a physical carrier to travel from the objects/events to the sensors through an intervening medium. Electromagnetic radiation is normally used as an information carrier in remote sensing. The output of a remote sensing system is usually an image representing the scene being observed. A further step of image analysis and interpretation is required in order to extract useful information from the image. The human visual system is an example of a remote sensing system in this general sense. In a more restricted sense, remote sensing usually refers to the
technology of acquiring information about the earth's surface (land and ocean) and atmosphere using sensors onboard airborne (aircraft, balloons) or spaceborne (satellites, space shuttles) platforms.
Satellite Remote Sensing
In this CD, you will see many remote sensing images around Asia acquired by earth observation satellites. These remote sensing satellites are equipped with sensors looking down to the earth. They are the "eyes in the sky" constantly observing the earth as they go round in predictable orbits.
Effects of Atmosphere
In satellite remote sensing of the earth, the sensors are looking through a layer of atmosphere separating the sensors from the Earth's surface being observed. Hence, it is essential to understand the effects of the atmosphere on the electromagnetic radiation travelling from the Earth to the sensor through the atmosphere. The atmospheric constituents cause wavelength-dependent absorption and scattering of radiation, and these effects degrade the quality of images. Some of the atmospheric effects can be corrected for before the images are subjected to further analysis and interpretation. A consequence of atmospheric absorption is that certain wavelength bands in the electromagnetic spectrum are strongly absorbed and effectively blocked by the atmosphere. The wavelength regions usable for remote sensing are determined by their ability to penetrate the atmosphere. These regions are known as the atmospheric transmission windows, and remote sensing systems are often designed to operate within one or more of them. These windows exist in the microwave region, some wavelength bands in the infrared, the entire visible region and part of the near ultraviolet region. Although the atmosphere is practically transparent to x-rays and gamma rays, these radiations are not normally used in remote sensing of the earth.
Optical and Infrared Remote Sensing
In Optical Remote Sensing, optical sensors detect solar radiation reflected or scattered from the earth, forming images resembling photographs taken by a camera high up in space. The wavelength region usually extends from the visible and near infrared (commonly abbreviated as VNIR) to the short-wave infrared (SWIR).
Different materials such as water, soil, vegetation, buildings and roads reflect visible and infrared light in different ways. They have different colours and brightness when seen under the sun. The interpretation of optical images requires the knowledge of the spectral reflectance signatures of the various materials (natural or man-made) covering the surface of the earth.
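As a toy illustration of how spectral signatures support interpretation, the sketch below matches a pixel's reflectances in three bands against assumed reference signatures. All numbers here are hypothetical; they only mimic the well-known pattern that vegetation reflects strongly in the near infrared while water reflects very little.

```python
# Illustrative sketch: classify a pixel by comparing its reflectances
# (green, red, near-infrared bands) against assumed reference signatures.
# The numbers are hypothetical, chosen only to show the typical pattern.

SIGNATURES = {
    "water":      (0.06, 0.04, 0.02),   # dark in all bands, especially NIR
    "vegetation": (0.10, 0.06, 0.50),   # bright in the near infrared
    "bare soil":  (0.15, 0.20, 0.25),   # intermediate, brightening with wavelength
}

def classify(pixel):
    """Return the material whose signature is closest (least squares) to the pixel."""
    def distance(signature):
        return sum((p - s) ** 2 for p, s in zip(pixel, signature))
    return min(SIGNATURES, key=lambda name: distance(SIGNATURES[name]))

print(classify((0.09, 0.07, 0.45)))  # prints "vegetation"
```

Real classification schemes are far more elaborate, but they rest on the same idea: different surface materials have distinguishable spectral responses.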
There are also infrared sensors measuring the thermal infrared radiation emitted from the earth, from which the land or sea surface temperature can be derived.
Microwave Remote Sensing
There are some remote sensing satellites which carry passive or active microwave sensors. The active sensors emit pulses of microwave radiation to illuminate the areas to be imaged. Images of the earth surface are formed by measuring the microwave energy scattered by the ground or sea back to the sensors. These satellites carry their own "flashlight" emitting microwaves to illuminate their targets, so the images can be acquired day and night. Microwaves have an additional advantage as they can penetrate clouds: images can be acquired even when there are clouds covering the earth surface. A microwave imaging system which can produce high resolution images of the Earth is the synthetic aperture radar (SAR). The intensity in a SAR image depends on the amount of microwave backscattered by the target and received by the SAR antenna. Since the physical mechanisms responsible for this backscatter are different for microwave radiation, compared to visible/infrared radiation, the interpretation of SAR images requires a knowledge of how microwaves interact with the targets.

Image Processing and Analysis

Remote sensing images are normally in the form of digital images. In order to extract useful information from the images, image processing techniques may be employed to enhance the image to help visual interpretation, and to correct or restore the image if it has been subjected to geometric distortion, blurring or degradation by other factors. There are many image analysis techniques available, and the methods used depend on the requirements of the specific problem concerned. In many cases, image segmentation and classification algorithms are used to delineate different areas in an image into thematic classes. The resulting product is a thematic map of the study area. This thematic map can be combined with other databases of the test area for further analysis and utilization.

The Human Visual System

The human visual system is an example of a remote sensing system in the general sense. The objects reflect or scatter the visible light falling onto them from an ambient light source. Part of the scattered light is intercepted by the eyes, forming an image on the retina after passing through the optical system of the eyes. The sensors in this example are the two types of photosensitive cells, known as the cones and the rods, at the retina of the eyes. The cones are responsible for colour vision. There are three types of cones, each being sensitive to one of the red, green and blue regions of the visible spectrum. Thus, it is not coincidental that modern computer display monitors make use of the same three primary colours to generate a multitude of colours for displaying colour images. The rods are sensitive only to the total light intensity. Hence, everything appears in shades of grey when there is insufficient light. The cones are insensitive under low light illumination conditions, when their jobs are taken over by the rods.

As the objects/events being observed are located far away from the eyes, the information needs a carrier to travel from the object to the eyes. In this case, the information carrier is the visible light. The signals generated at the retina are carried via the nerve fibres to the brain, the CPU of the visual system. These signals are processed and interpreted at the brain, with the aid of previous experiences.

The visual system is an example of a "Passive Remote Sensing" system which depends on an external source of energy to operate. We all know that this system won't work in darkness. However, we can still see at night if we provide our own source of illumination by carrying a flashlight and shining the beam towards the object we want to observe. In this case, we are performing "Active Remote Sensing" by supplying our own source of energy for illuminating the objects.

The Planet Earth

The planet Earth is the third planet in the solar system, located at a mean distance of about 1.50 x 10^8 km from the sun, with a mass of 5.97 x 10^24 kg. Descriptions of the shape of the earth have evolved from the flat earth model and the spherical model to the currently accepted ellipsoidal model, derived from accurate ground surveying and satellite measurements.
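An ellipsoidal earth model is commonly summarized by its equatorial and polar radii. As a short sketch, the flattening and eccentricity of the ellipsoid can be computed from these two numbers; the values used are the WGS-84 radii quoted in this tutorial.

```python
# Sketch: derive the flattening and (first) eccentricity of a reference
# ellipsoid from its equatorial radius a and polar radius b.
# The values are the WGS-84 parameters quoted in this tutorial (km).

a = 6378.1370  # equatorial radius, km
b = 6356.7523  # polar radius, km

flattening = (a - b) / a              # how much the ellipsoid deviates from a sphere
eccentricity_sq = 1.0 - (b / a) ** 2  # first eccentricity squared

print(f"1/f  = {1.0 / flattening:.3f}")   # ~298.257
print(f"e^2  = {eccentricity_sq:.9f}")    # ~0.006694380
```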
A number of reference ellipsoids have been defined for use in identifying the three-dimensional coordinates (i.e. position in space) of a point on or above the earth surface for the purpose of surveying, mapping and navigation. The reference ellipsoid in the World Geodetic System 1984 (WGS-84), commonly used in the satellite Global Positioning System (GPS), has the following parameters:

- Equatorial Radius = 6378.1370 km
- Polar Radius = 6356.7523 km

The earth's crust is the outermost layer of the earth's land surface. About 29.1% of the earth's crust area is above sea level. The rest is covered by water. A layer of gaseous atmosphere envelopes the earth's surface. The gaseous materials extend to several hundred kilometers in altitude, though there is no well defined boundary for the upper limit of the atmosphere. The first 80 km of the atmosphere contains more than 99% of the total mass of the earth's atmosphere.

Atmosphere

The Earth's Atmosphere

The earth's surface is covered by a layer of atmosphere consisting of a mixture of gases and other solid and liquid particles.

Vertical Structure of the Atmosphere
The vertical profile of the atmosphere is divided into four layers: the troposphere, stratosphere, mesosphere and thermosphere. The tops of these layers are known as the tropopause, stratopause, mesopause and thermopause, respectively.

- Troposphere: This layer is characterized by a decrease in temperature with respect to height, at a rate of about 6.5ºC per kilometer, up to a height of about 10 km. All the weather activities (water vapour, clouds, precipitation) are confined to this layer.
- Stratosphere: The temperature at the lower 20 km of the stratosphere is approximately constant, after which the temperature increases with height, up to an altitude of about 50 km. Ozone exists mainly at the stratopause.
- Mesosphere: The temperature decreases in this layer from an altitude of about 50 km to 85 km.
- Thermosphere: This layer extends from about 85 km upward to several hundred kilometers. The temperature may range from 500 K to 2000 K. The gases exist mainly in the form of thin plasma, i.e. they are ionized due to bombardment by solar ultraviolet radiation and energetic cosmic rays.

The troposphere and the stratosphere together account for more than 99% of the total mass of the atmosphere. The term upper atmosphere usually refers to the region of the atmosphere above the troposphere. A layer of aerosol particles normally exists near to the earth surface, with a characteristic height of about 2 km; the aerosol concentration decreases nearly exponentially with height. Many remote sensing satellites follow near polar sun-synchronous orbits at a height around 800 km, which is well above the thermopause.

Atmospheric Constituents

The atmosphere consists of the following components:

- Permanent Gases: These are gases present in nearly constant concentration, with little spatial variation. About 78% by volume of the atmosphere is nitrogen, while the life-sustaining oxygen occupies 21%. The remaining one percent consists of the inert gases, carbon dioxide and other gases.
- Gases with Variable Concentration: The concentration of these gases may vary greatly over space and time. They consist of water vapour, ozone, nitrogenous and sulphurous compounds.
- Solid and Liquid Particulates: Other than the gases, the atmosphere also contains solid and liquid particles such as aerosols, water droplets and ice crystals. These particles may congregate to form clouds and haze.

Electromagnetic Radiation

Electromagnetic Waves

Electromagnetic waves are energy transported through space in the form of periodic disturbances of electric and magnetic fields. All electromagnetic waves travel through space at
the same speed, commonly known as the speed of light, c = 2.99792458 x 10^8 m/s. An electromagnetic wave is characterized by a frequency and a wavelength. These two quantities are related to the speed of light by the equation:

speed of light = frequency x wavelength

The frequency (and hence the wavelength) of an electromagnetic wave depends on its source. There is a wide range of frequencies encountered in our physical world, ranging from the low frequency of the electric waves generated by power transmission lines to the very high frequency of the gamma rays originating from atomic nuclei. This wide frequency range of electromagnetic waves constitutes the Electromagnetic Spectrum.

The Electromagnetic Spectrum

The electromagnetic spectrum can be divided into several wavelength (frequency) regions, among which only a narrow band from about 400 to 700 nm is visible to the human eyes. Note that there is no sharp boundary between these regions: the boundaries shown in the figures are approximate and there are overlaps between two adjacent regions.

Wavelength units: 1 mm = 1000 µm; 1 µm = 1000 nm.

- Radio Waves: 10 cm to 10 km wavelength.
- Microwaves: 1 mm to 1 m wavelength. The microwaves are further divided into different frequency (wavelength) bands (1 GHz = 10^9 Hz):
  - P band: 0.3 - 1 GHz (30 - 100 cm)
  - L band: 1 - 2 GHz (15 - 30 cm)
  - S band: 2 - 4 GHz (7.5 - 15 cm)
  - C band: 4 - 8 GHz (3.8 - 7.5 cm)
  - X band: 8 - 12.5 GHz (2.4 - 3.8 cm)
  - Ku band: 12.5 - 18 GHz (1.7 - 2.4 cm)
  - K band: 18 - 26.5 GHz (1.1 - 1.7 cm)
  - Ka band: 26.5 - 40 GHz (0.75 - 1.1 cm)
- Infrared: 0.7 to 300 µm wavelength. This region is further divided into the following bands:
  - Near Infrared (NIR): 0.7 to 1.5 µm
  - Short Wavelength Infrared (SWIR): 1.5 to 3 µm
  - Mid Wavelength Infrared (MWIR): 3 to 8 µm
  - Long Wavelength Infrared (LWIR): 8 to 15 µm
  - Far Infrared (FIR): longer than 15 µm
  The NIR and SWIR are also known as the Reflected Infrared, referring to the main infrared component of the solar radiation reflected from the earth's surface. The MWIR and LWIR are the Thermal Infrared.
- Visible Light: This narrow band of electromagnetic radiation extends from about 400 nm (violet) to about 700 nm (red). The various colour components of the visible spectrum fall roughly within the following wavelength regions:
  - Red: 610 - 700 nm
  - Orange: 590 - 610 nm
  - Yellow: 570 - 590 nm
  - Green: 500 - 570 nm
  - Blue: 450 - 500 nm
  - Indigo: 430 - 450 nm
  - Violet: 400 - 430 nm
- Ultraviolet: 3 to 400 nm wavelength.
- X-Rays and Gamma Rays

Photons

According to quantum physics, the energy of an electromagnetic wave is quantized, i.e. it can only exist in discrete amounts. The basic unit of energy for an electromagnetic wave is called a photon. The energy E of a photon is proportional to the wave frequency f:

E = h f

where the constant of proportionality h is the Planck's Constant, h = 6.626 x 10^-34 J s.
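The relations c = frequency x wavelength and E = h f can be exercised numerically. The sketch below converts a radar frequency to its wavelength, classifies it using the microwave band boundaries listed above, and evaluates the photon energy of green light. The 5.3 GHz example frequency (that of the C-band SAR on the ERS satellites) is chosen purely for illustration.

```python
# Sketch of the basic electromagnetic-wave relations used in this section:
# c = frequency x wavelength, and photon energy E = h f.

C = 2.99792458e8   # speed of light, m/s
H = 6.626e-34      # Planck's constant, J s

def wavelength_m(frequency_hz):
    """Wavelength in metres from frequency in Hz, via c = f * lambda."""
    return C / frequency_hz

def photon_energy_j(frequency_hz):
    """Photon energy in joules, E = h f."""
    return H * frequency_hz

# Radar band boundaries in GHz, as listed in the microwave band table above.
RADAR_BANDS = [("P", 0.3, 1), ("L", 1, 2), ("S", 2, 4), ("C", 4, 8),
               ("X", 8, 12.5), ("Ku", 12.5, 18), ("K", 18, 26.5), ("Ka", 26.5, 40)]

def radar_band(frequency_ghz):
    """Return the letter designation of the band containing the given frequency."""
    for name, lo, hi in RADAR_BANDS:
        if lo <= frequency_ghz < hi:
            return name
    return None

print(wavelength_m(5.3e9))       # ~0.0566 m, i.e. ~5.7 cm
print(radar_band(5.3))           # C
print(photon_energy_j(5.45e14))  # green light (~550 nm): ~3.6e-19 J
```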
Atmospheric Effects

Effects of Atmosphere

When electromagnetic radiation travels through the atmosphere, it may be absorbed or scattered by the constituent particles of the atmosphere. Molecular absorption converts the radiation energy into excitation energy of the molecules. Scattering redistributes the energy of the incident beam to all directions. The overall effect is the removal of energy from the incident radiation. The various effects of absorption and scattering are outlined in the following sections.

Atmospheric Transmission Windows

Each type of molecule has its own set of absorption bands in various parts of the electromagnetic spectrum. As a result, only the wavelength regions outside the main absorption bands of the atmospheric gases can be used for remote sensing. These regions are known as the Atmospheric Transmission Windows. These windows are found in the visible, near-infrared, certain bands in the thermal infrared, and the microwave regions. The wavelength bands used in remote sensing systems are usually designed to fall within these windows to minimize the atmospheric absorption effects.

Effects of Atmospheric Absorption on Remote Sensing Images

Atmospheric absorption affects mainly the visible and infrared bands. Optical remote sensing depends on solar radiation as the source of illumination. Absorption reduces the solar radiance within the absorption bands of the atmospheric gases. The reflected radiance is also attenuated after passing through the atmosphere, and this attenuation is wavelength dependent. Hence, atmospheric absorption will alter the apparent spectral signature of the target being observed.

Effects of Atmospheric Scattering on Remote Sensing Images

Atmospheric scattering is important only in the visible and near infrared regions. Scattering of radiation by the constituent gases and aerosols in the atmosphere causes degradation of the remotely sensed images. Most noticeably, the solar radiation scattered by the atmosphere towards the sensor without first reaching the ground produces a hazy appearance of the image. This effect is particularly severe in the blue end of the visible spectrum due to the stronger Rayleigh scattering for shorter wavelength radiation. Furthermore, the light from a target outside the field of view of the sensor may be scattered into the field of view of the sensor; this effect is known as the adjacency effect. Near to the boundary between two regions of different brightness, the adjacency effect results in an increase in the apparent brightness of the darker region, while the apparent brightness of the brighter region is reduced. Scattering also produces blurring of the targets in remotely sensed images due to spreading of the reflected radiation, resulting in a reduced resolution image.

Absorption of Radiation

Absorption by Gaseous Molecules

The energy of a gaseous molecule can exist in various forms:

- Translational Energy: Energy due to translational motion of the centre of mass of the molecule. The average translational kinetic energy of a molecule is equal to 3kT/2 (kT/2 per degree of freedom), where k is the Boltzmann's constant and T is the absolute temperature of the gas.
- Rotational Energy: Energy due to rotation of the molecule about an axis through its centre of mass.
- Vibrational Energy: Energy due to vibration of the component atoms of a molecule about their equilibrium positions. This vibration is associated with stretching of the chemical bonds between the atoms.
- Electronic Energy: Energy due to the energy states of the electrons of the molecule.

The last three forms are quantized, i.e. the energy can change only in discrete amounts, known as the transitional energies. A photon of electromagnetic radiation can be absorbed by a molecule when its frequency matches one of the available transitional energies.

Ultraviolet Absorption

Absorption of ultraviolet (UV) radiation in the atmosphere is chiefly due to electronic transitions of the atomic and molecular oxygen and nitrogen. Due to the ultraviolet absorption, some of the oxygen and nitrogen molecules in the upper atmosphere undergo photochemical dissociation to become atomic oxygen and nitrogen. These atoms play an important role in the absorption of solar ultraviolet radiation in the thermosphere. The photochemical dissociation of oxygen is also responsible for the formation of the ozone layer in the stratosphere.

Ozone Layers

Ozone in the stratosphere absorbs about 99% of the harmful solar UV radiation shorter than 320 nm. It is formed in three-body collisions of atomic oxygen (O) with molecular oxygen (O2) in the presence of a third atom or molecule. The ozone molecules also undergo photochemical dissociation to atomic O and molecular O2. When the formation and dissociation processes are in equilibrium, ozone exists at a constant concentration level. However, the existence of certain atoms (such as atomic chlorine) will catalyse the dissociation of O3 back to O2, and the ozone concentration will decrease. In recent years, increasing use of the fluorocarbon compounds in aerosol sprays and refrigerants results in the release of atomic chlorine into the upper atmosphere due to photochemical dissociation of the fluorocarbon compounds, contributing to the depletion of the ozone layers. It has been observed by measurement from space platforms that the ozone layers are depleting over time, causing a small increase in solar ultraviolet radiation reaching the earth.

Visible Region

There is little absorption of the electromagnetic radiation in the visible part of the spectrum.

Infrared Absorption

The absorption in the infrared (IR) region is mainly due to rotational and vibrational transitions of the molecules. The main atmospheric constituents responsible for infrared absorption are water vapour (H2O) and carbon dioxide (CO2) molecules. The water and carbon dioxide molecules have absorption bands centred at wavelengths from the near to the long wave infrared (0.7 to 15 µm). In the far infrared region, most of the radiation is absorbed by the atmosphere.

Microwave Region

The atmosphere is practically transparent to the microwave radiation.

Scattering of Electromagnetic Radiation

Scattering of electromagnetic radiation is caused by the interaction of radiation with matter, resulting in the reradiation of part of the energy to other directions not along the path of the incident radiation. Scattering effectively removes energy from the incident beam. Unlike absorption, however, this energy is not lost, but is redistributed to other directions. Both the gaseous and aerosol components of the atmosphere cause scattering.

Scattering by Gaseous Molecules

The law of scattering by air molecules was discovered by Rayleigh in 1871, and hence this scattering is named Rayleigh Scattering. Rayleigh scattering occurs when the size of the particle responsible for the scattering event is much smaller than the wavelength of the radiation. The scattered light intensity is inversely proportional to the fourth power of the wavelength. Hence, blue light is scattered more than red light. This phenomenon explains why the sky is blue and why the setting sun is red. The scattered light intensity in Rayleigh scattering for unpolarized light is proportional to (1 + cos^2 θ), where θ is the scattering angle, i.e. the angle between the directions of the incident and scattered rays.

Scattering by Aerosols

Scattering by aerosol particles depends on the shapes, sizes and materials of the particles. If the size of the particle is similar to or larger than the radiation wavelength, the scattering is named Mie Scattering. The scattered radiation in Mie scattering is mainly confined within a small angle about the forward direction; the radiation is said to be very strongly forward scattered. The scattering intensity and its angular distribution may be calculated numerically for a spherical particle. However, for irregular particles, the calculation can become very complicated.

Airborne Remote Sensing

In airborne remote sensing, downward or sideward looking sensors are mounted on an aircraft to
obtain images of the earth's surface. An advantage of airborne remote sensing, compared to satellite remote sensing, is the capability of offering very high spatial resolution images (20 cm or less). The disadvantages are low coverage area and high cost per unit area of ground coverage. It is not cost-effective to map a large area using an airborne remote sensing system. Airborne remote sensing missions are often carried out as one-time operations, whereas earth observation satellites offer the possibility of continuous monitoring of the earth.

Analog aerial photography, videography, and digital photography are commonly used in airborne remote sensing. Synthetic aperture radar imaging is also carried out on airborne platforms. Analog photography is capable of providing high spatial resolution. The interpretation of analog aerial photographs is usually done visually by experienced analysts, although the photographs may be digitized using a scanning device for computer-assisted analysis. Digital photography permits real-time transmission of the remotely sensed data to a ground station for immediate analysis, and the digital images can be analysed and interpreted with the aid of a computer.

A high resolution aerial photograph over a forested area: the canopy of each individual tree can be clearly seen. This type of very high resolution imagery is useful in the identification of tree types and in assessing the conditions of the trees. Another example is a high resolution aerial photograph over a residential area.
Spaceborne Remote Sensing

In spaceborne remote sensing, sensors are mounted on board a spacecraft (space shuttle or satellite) orbiting the earth. At present, there are several remote sensing satellites providing imagery for research and operational applications: IKONOS 2, SPOT 1, 2 and 4, EROS A1, TERRA, OrbView 2 (SeaStar), NOAA 12, 14 and 16, ERS 1 and 2, and RADARSAT 1. The receiving ground station at CRISP receives data from these satellites. Satellite imagery has a generally lower resolution compared to aerial photography. However, very high resolution imagery (up to 1-m resolution) is now commercially available to civilian users with the successful launch of the IKONOS-2 satellite on September 24, 1999.

Spaceborne remote sensing provides the following advantages:

- Large area coverage.
- Frequent and repetitive coverage of an area of interest.
- Quantitative measurement of ground features using radiometrically calibrated sensors.
- Semiautomated computerised processing and analysis.
- Relatively lower cost per unit area of coverage.

Satellite Orbits

A satellite follows a generally elliptical orbit around the earth. The time taken to complete one revolution of the orbit is called the orbital period. The satellite traces out a path on the earth surface, called its ground track, as it moves across the sky. As the earth below is rotating, the satellite traces out a different path on the ground in each subsequent cycle. Remote sensing satellites are often launched into special orbits such that the satellite repeats its path after a fixed time interval. This time interval is called the repeat cycle of the satellite.

Geostationary Orbits

If a satellite follows an orbit parallel to the equator, in the same direction as the earth's rotation and with the same period of 24 hours, the satellite will appear stationary with respect to the earth surface.
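The altitude of such a 24-hour orbit follows from Kepler's third law. A minimal sketch, using the earth mass quoted earlier in this tutorial; the gravitational constant G and the sidereal day length are standard values not stated in the text.

```python
# Sketch: altitude of a circular orbit whose period matches the earth's
# rotation, from Kepler's third law r^3 = G M T^2 / (4 pi^2).
import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2 (standard value)
M = 5.97e24                # earth mass, kg (quoted earlier in this tutorial)
T = 86164.0                # one sidereal day, s (~23 h 56 min)
EARTH_RADIUS = 6378.137e3  # equatorial radius, m

r = (G * M * T**2 / (4 * math.pi**2)) ** (1.0 / 3.0)  # orbit radius from earth's centre
altitude_km = (r - EARTH_RADIUS) / 1000.0
print(round(altitude_km))  # roughly 36,000 km, as stated in the text
```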
This orbit is a geostationary orbit. Satellites in geostationary orbits are located at a high altitude of 36,000 km. These orbits enable a satellite to always view the same area on the earth, and a large area of the earth can be covered by the satellite. The geostationary orbits are commonly used by meteorological satellites.

Near Polar Orbits

A near polar orbit is one with the orbital plane inclined at a small angle with respect to the earth's rotation axis. A satellite following a properly designed near polar orbit passes close to the poles and is able to cover nearly the whole earth surface in a repeat cycle.

Sun Synchronous Orbits

Earth observation satellites usually follow near-polar sun synchronous orbits. A sun synchronous orbit is a near-polar orbit whose altitude is such that the satellite will always pass over a location at a given latitude at the same local solar time. In this way, the same solar illumination condition (except for seasonal variation) can be achieved for the images of a given location taken by the satellite.

Remote Sensing Satellites

Several remote sensing satellites are currently available, providing imagery suitable for various types of applications. Each of these satellite-sensor platforms is characterised by the wavelength bands employed in image acquisition, the spatial resolution of the sensor, the coverage area and the temporal coverage, i.e. how frequently a given location on the earth surface can be imaged by the imaging system.

In terms of the spatial resolution, the satellite imaging systems can be classified into:

- Low resolution systems (approx. 1 km or more)
- Medium resolution systems (approx. 100 m to 1 km)
- High resolution systems (approx. 5 m to 100 m)
- Very high resolution systems (approx. 5 m or less)

In terms of the spectral regions used in data acquisition, the satellite imaging systems can be classified
into:

- Optical imaging systems (including the visible, near infrared, and shortwave infrared systems)
- Thermal imaging systems
- Synthetic aperture radar (SAR) imaging systems

Optical/thermal imaging systems can be classified according to the number of spectral bands used:

- Monospectral or panchromatic (single wavelength band, "black-and-white", grey-scale image) systems
- Multispectral (several spectral bands) systems
- Superspectral (tens of spectral bands) systems
- Hyperspectral (hundreds of spectral bands) systems

Synthetic aperture radar imaging systems can be classified according to the combination of frequency bands and polarization modes used in data acquisition, e.g.:

- Single frequency (L-band, or C-band, or X-band)
- Multiple frequency (combination of two or more frequency bands)
- Single polarization (VV, or HH, or HV)
- Multiple polarization (combination of two or more polarization modes)

Descriptions of some of the operational and planned remote sensing satellite platforms and sensors are provided in the appendix of this tutorial.

Digital Image

Analog and Digital Images

An image is a two-dimensional representation of objects in a real scene. Remote sensing images are representations of parts of the earth surface as seen from space. The images may be analog or digital. Aerial photographs are examples of analog images, while satellite images acquired using electronic sensors are examples of digital images.
A digital image is a two-dimensional array of pixels. Each pixel has an intensity value (represented by a digital number) and a location address (referenced by its row and column numbers).

Pixels

A digital image comprises a two dimensional array of individual picture elements called pixels, arranged in columns and rows. Each pixel represents an area on the Earth's surface. A pixel has an intensity value and a location address in the two dimensional image.

The intensity value represents the measured physical quantity, such as the solar radiance in a given wavelength band reflected from the ground, emitted infrared radiation or backscattered radar intensity. This value is normally the average value for the whole ground area covered by the pixel. The intensity of a pixel is digitised and recorded as a digital number. Due to the finite storage capacity, a digital number is stored with a finite number of bits (binary digits). The number of bits determines the radiometric resolution of the image. For example, an 8-bit digital number ranges from 0 to 255 (i.e. 2^8 - 1), while an 11-bit digital number ranges from 0 to 2047. The detected intensity value needs to be scaled and quantized to fit within this range of values. In a Radiometrically Calibrated image, the actual intensity value can be derived from the pixel digital number.

The address of a pixel is denoted by its row and column coordinates in the two-dimensional image. There is a one-to-one correspondence between the column-row address of a pixel and the geographical coordinates (e.g. longitude, latitude) of the imaged location. In order to be useful, the exact geographical location of each pixel on the ground must be derivable from its row and column indices, given the imaging geometry and the satellite orbit parameters.
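The scaling and quantization of a detected intensity into a digital number, and its inversion for a radiometrically calibrated image, can be sketched as follows. The radiance range used is a hypothetical calibration, not that of any real sensor.

```python
# Sketch: scale and quantize a detected radiance into an n-bit digital
# number (DN), and recover the radiance from the DN as in a radiometrically
# calibrated image. The min/max radiances are hypothetical calibration values.

def to_digital_number(radiance, r_min, r_max, bits=8):
    """Linearly map radiance in [r_min, r_max] to an integer DN in [0, 2**bits - 1]."""
    levels = 2 ** bits - 1                   # 255 for 8 bits, 2047 for 11 bits
    fraction = (radiance - r_min) / (r_max - r_min)
    fraction = min(max(fraction, 0.0), 1.0)  # clip to the representable range
    return round(fraction * levels)

def to_radiance(dn, r_min, r_max, bits=8):
    """Invert the calibration: recover the (quantized) radiance from a DN."""
    return r_min + dn / (2 ** bits - 1) * (r_max - r_min)

dn = to_digital_number(75.0, r_min=0.0, r_max=100.0, bits=8)
print(dn)                                   # 191
print(to_radiance(dn, 0.0, 100.0, bits=8))  # ~74.9, close to the original 75.0
```

The small difference between the recovered and original radiance is the quantization error; more bits (higher radiometric resolution) make it smaller.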
A "Push-Broom" Scanner

This type of imaging system is commonly used in optical remote sensing satellites such as SPOT. The imaging system has a linear detector array (usually of the CCD type) consisting of a number of detector elements (6000 elements in the SPOT HRV). Each detector element projects an "instantaneous field of view (IFOV)" on the ground. The signal recorded by a detector element is proportional to the total radiation collected within its IFOV. At any instant, a row of pixels is formed. As the detector array flies along its track, the row of pixels sweeps along to generate a two-dimensional image.

Multilayer Image

Several types of measurement may be made from the ground area covered by a single pixel. Each type of measurement forms an image which carries some specific information about the area. By "stacking" these images from the same area together, a multilayer image is formed. Each component image is a layer in the multilayer image. Multilayer images can also be formed by combining images obtained from different sensors and other subsidiary data. For example, a multilayer image may consist of three layers from a SPOT multispectral image, a layer of ERS synthetic aperture radar image, and perhaps a layer consisting of the digital elevation map of the area being studied.
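As a back-of-envelope sketch (using the 6000-element detector array mentioned above, and assuming a 10 m ground pixel as in the SPOT panchromatic mode), the across-track swath of a push-broom scanner follows directly from the detector count:

```python
# Swath width of a push-broom scanner: each detector element images one
# ground pixel across-track, so swath = n_detectors * pixel_size.
def swath_width_km(n_detectors, pixel_size_m):
    return n_detectors * pixel_size_m / 1000.0

# 6000 detector elements at 10 m ground pixel size -> 60 km swath,
# consistent with the 60 km x 60 km SPOT scenes discussed later.
print(swath_width_km(6000, 10))  # 60.0
```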
An illustration of a multilayer image consisting of five component layers.
A multispectral image consists of a few image layers, each of which represents an image acquired at a particular wavelength band. For example, the SPOT HRV sensor operating in the multispectral mode detects radiation in three wavelength bands: the green (500 - 590 nm), red (610 - 680 nm) and near infrared (790 - 890 nm) bands. A single SPOT multispectral scene consists of three intensity images in the three wavelength bands. In this case, each pixel of the scene has three intensity values corresponding to the three bands. A multispectral IKONOS image consists of four bands: blue, green, red and near infrared, while a Landsat TM multispectral image consists of seven bands: the blue, green, red and near-IR bands, two SWIR bands, and a thermal IR band.
More recent satellite sensors are capable of acquiring images at many more wavelength bands. For example, the MODIS sensor on board NASA's Terra satellite acquires images in 36 spectral bands, covering wavelength regions ranging from the visible and near infrared, through the shortwave infrared, to the thermal infrared. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. The term "superspectral" has been coined to describe such sensors.
A hyperspectral image consists of about a hundred or more contiguous spectral bands. The characteristic spectrum of each target pixel is thus acquired in a hyperspectral image. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplankton, pollution and bathymetry changes).
Currently, hyperspectral imagery is not commercially available from satellites. However, there are experimental satellite sensors that acquire hyperspectral imagery for scientific investigation (e.g. NASA's Hyperion sensor on board the EO1 satellite, and the CHRIS sensor on board ESA's PROBA satellite).
An illustration of a hyperspectral image cube. The hyperspectral image data usually consist of over a hundred contiguous spectral bands, forming a three-dimensional (two spatial dimensions and one spectral dimension) image cube. Each pixel is associated with a complete spectrum of the imaged area. The high spectral resolution of hyperspectral images enables better identification of the land covers.
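To make the cube structure concrete, here is a minimal sketch with a synthetic (all-zero) cube; the dimensions are illustrative only, not those of any real sensor:

```python
import numpy as np

# A synthetic hyperspectral cube: 100 rows x 100 columns x 120 spectral bands.
rows, cols, bands = 100, 100, 120
cube = np.zeros((rows, cols, bands), dtype=np.uint16)

# The complete spectrum of the pixel at row 40, column 25 is simply a
# 1-D slice along the spectral dimension of the cube.
spectrum = cube[40, 25, :]
print(spectrum.shape)  # (120,)
```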
Spatial resolution refers to the size of the smallest object that can be resolved on the ground. In a digital image, the resolution is limited by the pixel size, i.e. the smallest resolvable object cannot be smaller than the pixel size. The intrinsic resolution of an imaging system is determined primarily by the instantaneous field of view (IFOV) of the sensor, which is a measure of the ground area viewed by a single detector element in a given instant in time. However, this intrinsic resolution can often be degraded by other factors which introduce blurring of the image, such as improper focusing, atmospheric scattering and target motion. The pixel size is determined by the sampling distance. A "high resolution" image refers to one with a small resolution size; fine details can be seen in a high resolution image. On the other hand, a "low resolution" image is one with a large resolution size, i.e. only coarse features can be observed in the image.
A low resolution MODIS scene with a wide coverage. This image was received by CRISP's ground station on 3 March 2001. The intrinsic resolution of the image was approximately 1 km, but the image shown here has been resampled to a resolution of about 4 km. The coverage is more than 1000 km from east to west. A large part of Indochina, Peninsular Malaysia, Singapore and Sumatra can be seen in the image.
A browse image of a high resolution SPOT scene. The multispectral SPOT scene has a resolution of 20 m and covers an area of 60 km by 60 km. The browse image has been resampled to 120 m pixel size, and hence the resolution has been reduced. This scene shows Singapore and part of the Johor State of Malaysia.
Part of a high resolution SPOT scene shown at the full resolution of 20 m. The image shown here covers an area of approximately 4.8 km by 3.6 km. At this resolution, roads, vegetation and blocks of buildings can be seen.
Part of a very high resolution image acquired by the IKONOS satellite. The effective resolution of the image is 1 m. This true-colour image was obtained by merging a 4-m multispectral image with a 1-m panchromatic image of the same area acquired simultaneously. A full scene of an IKONOS image has a coverage area of about 10 km by 10 km; a very high spatial resolution image usually has a smaller area of coverage. The image shown here covers an area of about 400 m by 400 m. At this resolution, details of buildings, individual trees, vehicles, shadows and roads can be seen.

Spatial Resolution and Pixel Size

Image resolution and pixel size are often used interchangeably. In reality, they are not equivalent: an image sampled at a small pixel size does not necessarily have a high resolution. The following three images illustrate this point. The first image is a SPOT image of 10 m pixel size, derived by merging a SPOT panchromatic image of 10 m resolution with a SPOT multispectral image of 20 m resolution. The merging procedure "colours" the panchromatic image using the colours derived from the multispectral image. The effective resolution is thus determined by the resolution of the panchromatic image, which is 10 m. This image is further processed to degrade the resolution while maintaining the same pixel size. The next two images are the blurred versions of the image with larger resolution sizes, but still digitized at the same pixel size of 10 m. Even though they have the same pixel size as the first image, they do not have the same resolution.

10 m pixel size, 10 m resolution
10 m pixel size, 30 m resolution
10 m pixel size, 80 m resolution

The following images illustrate the effect of pixel size on the visual appearance of an area, showing the effects of digitizing the same area with larger pixel sizes.
Pixel Size = 10 m, Image Width = 160 pixels, Height = 160 pixels
Pixel Size = 20 m, Image Width = 80 pixels, Height = 80 pixels
Pixel Size = 40 m, Image Width = 40 pixels, Height = 40 pixels
Pixel Size = 80 m, Image Width = 20 pixels, Height = 20 pixels

Radiometric Resolution

Radiometric resolution refers to the smallest change in intensity level that can be detected by the sensing system. The intrinsic radiometric resolution of a sensing system depends on the signal to noise ratio of the detector. In a digital image, the radiometric resolution is limited by the number of discrete quantization levels used to digitize the continuous intensity value. The following images illustrate the effects of the number of quantization levels on the digital image. The first image is a SPOT panchromatic image quantized at 8 bits (i.e. 256 levels) per pixel. The subsequent images show the effects of degrading the radiometric resolution by using fewer quantization levels.
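This degradation can be sketched in a few lines; the function below collapses an 8-bit digital number onto a coarser set of levels (a simple truncation scheme, assumed here for illustration):

```python
def requantize(dn, n_bits):
    """Map an 8-bit digital number (0-255) onto 2**n_bits levels,
    returning the representative 8-bit value of its level."""
    levels = 2 ** n_bits
    step = 256 // levels
    return (dn // step) * step

# 4-bit quantization (16 levels): values collapse onto multiples of 16.
print([requantize(v, 4) for v in (0, 15, 16, 200, 255)])  # [0, 0, 16, 192, 240]
```

Note how neighbouring intensity values (0 and 15) become indistinguishable after requantization, which is exactly the loss of radiometric resolution described above.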
However. The high radiometric resolution enables features under shadow to be recovered. . the accuracy of analysis will be compromised if few quanti ation levels are used. Even 4-bit quanti ation (16 levels) seems acceptable in the examples shown. Part of the running track in this IKONOS image is under cloud shadow. The IKONOS uses 11-bit digiti ation during image acquisition. if the image is to be subjected to numerical analysis.8-bit quanti ation (256 levels) 6-bit quanti ation (64 levels) 4-bit quanti ation (16 levels) 3-bit quanti ation (8 levels) 2-bit quanti ation (4 levels) 1-bit quanti ation (2 levels) Digiti ation using a small number of quanti ation levels does not affect very much the visual quality of the image.
The features under cloud shadow are recovered by applying a simple contrast and brightness enhancement technique.

Data Volume

The volume of digital data can potentially be large for multispectral data, as a given area is covered in many different wavelength bands. For example, a 3-band multispectral SPOT image covers an area of about 60 x 60 km2 on the ground with a pixel separation of 20 m, so there are about 3000 x 3000 pixels per image. Each pixel intensity in each band is coded using an 8-bit (i.e. 1 byte) digital number, giving a total of about 27 million bytes per image. In comparison, a SPOT panchromatic scene has the same coverage of about 60 x 60 km2, but the pixel size is 10 m, giving about 6000 x 6000 pixels and a total of about 36 million bytes per image. The panchromatic data has only one band, as panchromatic systems are normally designed to give a higher spatial resolution than the multispectral systems. If a multispectral SPOT scene were also digitized at 10 m pixel size, the data volume would be 108 million bytes.

For very high spatial resolution imagery, such as that acquired by the IKONOS satellite, the data volume is even more significant. For example, an IKONOS 4-band multispectral image at 4-m pixel size covering an area of 10 km by 10 km, digitized at 11 bits (stored at 16 bits), has a data volume of 4 x 2500 x 2500 x 2 bytes, or 50 million bytes per image. A 1-m resolution panchromatic image covering the same area would have a data volume of 200 million bytes per image.

The images taken by a remote sensing satellite are transmitted to Earth through telecommunication links, and the bandwidth of the telecommunication channel sets a limit to the data volume for a scene taken by the imaging system. Ideally, it is desirable to have a high spatial resolution image with many spectral bands covering a wide area. In reality, depending on the intended application, spatial resolution may have to be compromised to accommodate a larger number of spectral bands or a wide area coverage, or a small number of spectral bands or a smaller area of coverage may be accepted to allow high spatial resolution imaging.
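The data volumes quoted above follow from simple arithmetic (bands x rows x columns x bytes per pixel); a small sketch, with pixel counts idealised to exact divisions:

```python
# Data volume of a satellite scene: bands * rows * cols * bytes_per_pixel.
def scene_megabytes(n_bands, swath_km, pixel_m, bytes_per_pixel=1):
    pixels_per_side = int(swath_km * 1000 / pixel_m)
    return n_bands * pixels_per_side ** 2 * bytes_per_pixel / 1e6

# SPOT multispectral: 3 bands, 60 km swath, 20 m pixels, 1 byte per pixel
print(scene_megabytes(3, 60, 20))    # 27.0
# SPOT panchromatic: 1 band, 60 km swath, 10 m pixels
print(scene_megabytes(1, 60, 10))    # 36.0
# IKONOS multispectral: 4 bands, 10 km swath, 4 m pixels, stored at 2 bytes
print(scene_megabytes(4, 10, 4, 2))  # 50.0
```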
Optical Remote Sensing

Optical remote sensing makes use of visible, near infrared and shortwave infrared sensors to form images of the earth's surface by detecting the solar radiation reflected from targets on the ground. Different materials reflect and absorb differently at different wavelengths. Thus, the targets can be differentiated by their spectral reflectance signatures in the remotely sensed images.

Optical remote sensing systems are classified into the following types, depending on the number of spectral bands used in the imaging process:

Panchromatic imaging system: The sensor is a single channel detector sensitive to radiation within a broad wavelength range. If the wavelength range coincides with the visible range, then the resulting image resembles a "black-and-white" photograph taken from space. The physical quantity being measured is the apparent brightness of the targets. The spectral information or "colour" of the targets is lost. Examples of panchromatic imaging systems are:
o IKONOS PAN
o SPOT HRV-PAN

Multispectral imaging system: The sensor is a multichannel detector with a few spectral bands. Each channel is sensitive to radiation within a narrow wavelength band. The resulting image is a multilayer image which contains both the brightness and spectral (colour) information of the targets being observed. Examples of multispectral systems are:
o LANDSAT MSS
o LANDSAT TM
o SPOT HRV-XS
o IKONOS MS

Superspectral imaging system: A superspectral imaging sensor has many more spectral channels (typically >10) than a multispectral sensor. The bands have narrower bandwidths, enabling the finer spectral characteristics of the targets to be captured by the sensor. Examples of superspectral systems are:
o MODIS
o MERIS

Hyperspectral imaging system: A hyperspectral imaging system is also known as
an "imaging spectrometer"; it acquires images in about a hundred or more contiguous spectral bands. The precise spectral information contained in a hyperspectral image enables better characterisation and identification of targets. Hyperspectral images have potential applications in such fields as precision agriculture (e.g. monitoring the types, health, moisture status and maturity of crops) and coastal management (e.g. monitoring of phytoplankton, pollution and bathymetry changes). An example of a hyperspectral system is:
o Hyperion on the EO1 satellite

Solar Irradiation

Optical remote sensing depends on the sun as the sole source of illumination. The solar irradiation spectrum above the atmosphere can be modeled by a black body radiation spectrum having a source temperature of 5900 K, with a peak irradiation located at about 500 nm wavelength. Physical measurement of the solar irradiance has also been performed using ground based and spaceborne sensors. After passing through the atmosphere, the solar irradiation spectrum at the ground is modulated by the atmospheric transmission windows. Significant energy remains only within the wavelength range from about 0.25 to 3 µm.

Solar Irradiation Spectra above the atmosphere and at sea level.

Spectral Reflectance Signature
When solar radiation hits a target surface, it may be transmitted, absorbed or reflected. Different materials reflect and absorb differently at different wavelengths. The reflectance spectrum of a material is a plot of the fraction of radiation reflected as a function of the incident wavelength, and serves as a unique signature for the material. In principle, a material can be identified from its spectral reflectance signature if the sensing system has sufficient spectral resolution to distinguish its spectrum from those of other materials. This premise provides the basis for multispectral remote sensing.

The following graph shows the typical reflectance spectra of five materials: clear water, turbid water, bare soil and two types of vegetation.

Reflectance Spectrum of Five Types of Landcover

The reflectance of clear water is generally low. However, the reflectance is maximum at the blue end of the spectrum and decreases as wavelength increases. Hence, clear water appears dark-bluish. Turbid water has some sediment suspension which increases the reflectance in the red end of the spectrum, accounting for its brownish appearance. The reflectance of bare soil generally depends on its composition. In the example shown, the reflectance increases monotonically with increasing wavelength, so it should appear yellowish-red to the eye.

Vegetation has a unique spectral signature which enables it to be distinguished readily from other types of land cover in an optical/near-infrared image. The reflectance is low in both the blue and red regions of the spectrum, due to absorption by chlorophyll for photosynthesis. It has a peak in the green region, which gives rise to the green colour of vegetation. In the near infrared (NIR) region, the reflectance is much higher than that in the visible band due to the cellular structure in the leaves. Hence, vegetation can be identified by the high NIR but generally low visible reflectances. This property has been used in early reconnaissance missions during war times for "camouflage detection".
The shape of the reflectance spectrum can be used for identification of vegetation type. For example, the reflectance spectra of vegetation 1 and 2 in the above figure can be distinguished, although they both exhibit the general characteristics of high NIR but low visible reflectances. Vegetation 1 has higher reflectance in the visible region but lower reflectance in the NIR region. For the same vegetation type, the reflectance spectrum also depends on other factors such as the leaf moisture content and health of the plants. This property can be used for identifying tree types and plant conditions from remote sensing images.

Typical Reflectance Spectrum of Vegetation. The labelled arrows indicate the common wavelength bands used in optical remote sensing of vegetation: A: blue band, B: green band, C: red band, D: near IR band, E: short-wave IR band.

The reflectance of vegetation in the SWIR region (e.g. band 5 of Landsat TM and band 4 of SPOT 4 sensors) is more varied, depending on the types of plants and the plant's water content. Water has strong absorption bands around 1.45, 1.95 and 2.50 µm. Outside these absorption bands in the SWIR region, the reflectance of leaves generally increases when leaf liquid water content decreases. The SWIR band can be used in detecting plant drought stress and delineating burnt areas and fire-affected vegetation. The SWIR band is also sensitive to the thermal radiation emitted by intense fires, and hence can be used to detect active fires, especially during night-time when the background interference from SWIR in reflected sunlight is absent.

Interpretation of Optical Images
Four main types of information contained in an optical image are often utilised for image interpretation:
- Radiometric information (i.e. brightness, intensity, tone)
- Spectral information (i.e. colour, hue)
- Textural information
- Geometric and contextual information

They are illustrated in the following examples.

Panchromatic Images

A panchromatic image consists of only one band. It is usually displayed as a grey scale image, i.e. the displayed brightness of a particular pixel is proportional to the pixel digital number, which is related to the intensity of solar radiation reflected by the targets in the pixel and detected by the detector. Thus, a panchromatic image may be interpreted similarly to a black-and-white aerial photograph of the area. The radiometric information is the main information type utilised in the interpretation.
A panchromatic image extracted from a SPOT panchromatic scene at a ground resolution of 10 m. The ground coverage is about 6.5 km (width) by 5.5 km (height). The urban area at the bottom left and a clearing near the top of the image have high reflected intensity, while the vegetated areas on the right part of the image are generally dark. Roads and blocks of buildings in the urban area are visible. A river flowing through the vegetated area, cutting across the top right corner of the image, can be seen. The river appears bright due to sediments, while the sea at the bottom edge of the image appears dark.

Multispectral Images

A multispectral image consists of several bands of data. For visual display, each band of the image may be displayed one band at a time as a grey scale image, or in combination of three bands at a time as a colour composite image. Interpretation of a multispectral colour composite image requires knowledge of the spectral reflectance signatures of the targets in the scene. In this case, the spectral information content of the image is utilised in the interpretation.

The following three images show the three bands of a multispectral image extracted from a SPOT multispectral scene at a ground resolution of 20 m. The area covered is the same as that shown in the above panchromatic image. Note that both the XS1 (green) and XS2 (red) bands look almost identical to the panchromatic image shown above. In contrast, the vegetated areas now appear bright in the XS3 (near infrared) band due to the high reflectance of leaves in the near infrared wavelength region. Several shades of grey can be identified for the vegetated areas, corresponding to different types of vegetation. Water masses (both the river and the sea) appear dark in the XS3 (near IR) band.
SPOT XS1 (green band)
SPOT XS2 (red band)
SPOT XS3 (near IR band)

Colour Composite Images

In displaying a colour composite image, three primary colours (red, green and blue) are used. When these three colours are combined in various proportions, they produce different colours in the visible spectrum. Associating each spectral band (not necessarily a visible band) to a separate primary colour results in a colour composite image.

Many colours can be formed by combining the three primary colours (red, green, blue) in various proportions.

True Colour Composite

If a multispectral image consists of the three visual primary colour bands (red, green, blue), the three bands
In this case.may be combined to produce a "true colour" image. the bands 3 (red band). A very common false colour composite scheme for displaying a SPOT multispectral image is shown below: R = XS3 (NIR band) G = XS2 (red band) B = XS1 (green band) This false colour composite scheme allows vegetation to be detected readily in the image. the colour of a target in the displayed image does not have any resemblance to its actual colour. For example. In this type of false colour composite images. the colours of the resulting colour composite image resemble closely what would be observed by the human eyes. However. vegetation appears in different shades of red depending on the types and conditions of the vegetation. There are many possible schemes of producing false colour composite images. In this way. some scheme may be more suitable for detecting certain objects in the image. The resulting product is known as a false colour composite image. . since it has a high reflectance in the NIR band (as shown in the graph of spectral reflectance signature). G. False Colour Composite The display colour assignment for any band of a multispectral image can be done in an entirely arbitrary manner. and B colours for display. A 1-m resolution true-colour IKONOS image. 2 (green band) and 1 (blue band) of a LANDSAT TM image or an IKONOS multispectral image can be assigned respectively to the R.
Clear water appears dark-bluish (higher green band reflectance), while turbid water appears cyan (higher red reflectance due to sediments) compared to clear water. Bare soils, roads and buildings may appear in various shades of blue, yellow or grey, depending on their composition.

False colour composite multispectral SPOT image: Red: XS3, Green: XS2, Blue: XS1.

Another common false colour composite scheme for displaying an optical image with a short-wave infrared (SWIR) band is shown below:

R = SWIR band (SPOT 4 band 4, Landsat TM band 5)
G = NIR band (SPOT 4 band 3, Landsat TM band 4)
B = Red band (SPOT 4 band 2, Landsat TM band 3)

An example of this false colour composite display is shown below for a SPOT 4 image.
False colour composite of a SPOT 4 multispectral image including the SWIR band: Red: SWIR band, Green: NIR band, Blue: Red band. In this display scheme, vegetation appears in shades of green. Bare soils and clearcut areas appear purplish or magenta. The patch of bright red area on the left is the location of active fires. A smoke plume originating from the active fire site appears faint bluish in colour.

False colour composite of a SPOT 4 multispectral image without displaying the SWIR band: Red: NIR band, Green: Red band, Blue: Green band. Vegetation appears in shades of red. The smoke plume appears bright bluish white.

Natural Colour Composite
However. The three bands. this term is misleading since in many instances the colours are only simulated to look similar to the "true" colours of the targets. The term "natural colour" is preferred. Blue: 0. One such combination is the ratio of the near-infrared band to the red band. Green: 0. soil in brown or grey. the spectral bands (some of which may not be in the visible region) may be combined in such a way that the appearance of the displayed image resembles a visible colour photograph.e. etc. water in blue.For optical images lacking one or more of the three visual primary colour bands (i. green and blue).75 XS2 .25 XS3 Vegetation Indices Different bands of a multispectral image may be combined to accentuate the vegetated areas.25 XS3.XS3)/4 where R. The SPOT HRV multispectral sensor does not have a blue band. and NIR bands respectively. i. vegetation in green.0. Many people refer to this composite as a "true colour" composite. XS2 and XS3 correspond to the green. Another commonly used vegetation index is the Normalised Difference Vegetation Index (NDVI) computed by . This ratio is known as the Ratio Vegetation Index (RVI) RVI = NIR/Red Since vegetation has high NIR reflectance but low red reflectance. XS1.75 XS2 + 0. But a reasonably good natural colour composite can be produced by the following combination of the spectral bands: R = XS2 G = (3 XS1 + XS3)/4 B = (3 XS1 . vegetated areas will have higher RVI values compared to non-vegetated aeras. Natural colour composite multispectral SPOT image: Red: XS2. G and B are the display colour channels. red. red.e.
NDVI = (NIR - Red)/(NIR + Red)

Normalised Difference Vegetation Index (NDVI) derived from the above SPOT image. In the NDVI map shown above, the bright areas are vegetated, while the non-vegetated areas (buildings, clearings, roads, river, sea) are generally dark. Note that the trees lining the roads are clearly visible as grey linear features against the dark background.

The NDVI band may also be combined with other bands of the multispectral image to form a colour composite image which helps to discriminate different types of vegetation. One such example is shown below. In this image, the display colour assignment is:

R = XS3 (near IR band)
G = (XS3 - XS2)/(XS3 + XS2) (NDVI band)
B = XS1 (green band)
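The NDVI computation itself is a one-liner on the band arrays. A sketch with assumed reflectance values loosely typical of vegetation and bare soil (illustrative numbers, not measurements):

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red)/(NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red)

# Assumed reflectances: a vegetated pixel (high NIR, low red)
# and a bare-soil pixel (NIR and red of similar magnitude).
nir = np.array([0.50, 0.30])
red = np.array([0.08, 0.25])
print(ndvi(nir, red))  # vegetation ~0.72, soil ~0.09
```

Because NDVI is a normalised difference, it is bounded between -1 and +1, which makes it easier to threshold and display than the unbounded RVI ratio.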
NDVI colour composite of the SPOT image: Red: XS3, Green: NDVI, Blue: XS1. At least three types of vegetation can be discriminated in this colour composite image: green, bright yellow and golden yellow areas. The green areas consist of dense trees with closed canopy. The bright yellow areas are covered with shrubs or less dense trees. The golden yellow areas are covered with grass. The non-vegetated areas appear in dark blue and magenta.

Textural Information

Texture is an important aid in visual image interpretation, especially for high spatial resolution imagery. It is also possible to characterize the textural features numerically, and algorithms for computer-aided automatic discrimination of different textures in an image are available. An example is shown below.

This is an IKONOS 1-m resolution pan-sharpened colour image of an oil palm plantation. The image is 300 m across. Even though the general colour is green throughout, three distinct land cover types can be identified from the image texture. The triangular patch at the bottom left corner is the oil palm plantation with matured palm trees. Individual trees can be seen; the predominant texture is the regular pattern formed by the tree crowns. Near the top of the image, the trees are closer together, and the tree canopies merge together, forming another distinctive textural pattern. This area is probably inhabited by shrubs or abandoned trees with tall undergrowth and shrubs in between the trees. At the bottom right corner, the colour is more homogeneous, indicating that it is probably an open field with short grass.
Geometric and Contextual Information

Using geometric and contextual features for image interpretation requires some a priori information about the area of interest. The "interpretational keys" commonly employed are: shape, size, pattern, location, and association with other familiar features. Contextual and geometric information plays an important role in the interpretation of very high resolution imagery.

This is an IKONOS image of a container port, evidenced by the presence of ships, cranes, and regular rows of rectangular containers. Familiar features visible in the image, such as the buildings, roads, vehicles and roadside trees, make interpretation of the image straightforward. The port is probably not operating at its maximum capacity, as empty spaces can be seen in between the containers.
This SPOT image shows an oil palm plantation adjacent to a logged-over forest in Riau, Sumatra. The image area is 8.6 km by 6.4 km. The rectangular grid pattern seen here is a main characteristic of large scale oil palm plantations in this region. The dark red regions are the remaining forests. Tracks can be seen intruding into the forests, implicating some logging activities in the forests.

This SPOT image shows land clearing being carried out in a logged-over forest. The logging tracks are also seen in the cleared areas (dark greenish areas). A smoke plume can be seen emanating from a site of active fires. It is obvious that the land clearing activities are carried out with the aid of fires.

Infrared Remote Sensing
Infrared remote sensing makes use of infrared sensors to detect infrared radiation emitted from the Earth's surface. The middle-wave infrared (MWIR) and long-wave infrared (LWIR) bands are within the thermal infrared region. These radiations are emitted from warm objects such as the Earth's surface. They are used in satellite remote sensing for measurements of the earth's land and sea surface temperature. Thermal infrared remote sensing is also often used for detection of forest fires.

Black Body Radiation
The amount of thermal radiation emitted at a particular wavelength from a warm object depends on its temperature. If the earth's surface is regarded as a blackbody emitter, its apparent temperature (known as the brightness temperature) and the spectral radiance are related by Planck's blackbody equation, plotted in the figure below for several temperatures.

Thermal emission from a surface at various temperatures, modeled by Planck's equation for an ideal black body. The two bands around 3.8 µm (e.g. AVHRR band 3) and 10 µm (e.g. AVHRR band 4) commonly available in infrared remote sensing satellite sensors are marked in the figure.

The peak wavelength decreases as the brightness temperature increases. For a surface at a brightness temperature around 300 K, the spectral radiance peaks at a wavelength around 10 µm. For this reason, most satellite sensors for measurement of the earth's surface temperature have a band detecting infrared radiation around 10 µm. Besides the measurement of regular surface temperature, infrared sensors can be used for detection of forest fires or other warm/hot objects. For typical fire temperatures from about 500 K (smouldering fire) to over 1000 K (flaming fire), the radiance versus wavelength curves peak at around 3.8 µm. Sensors such as the NOAA-AVHRR, ERS-ATSR and TERRA-MODIS are equipped with a band around 3.8 µm that can be used for detection of fire hot spots.

This is a true-colour image (at 500 m resolution) acquired by MODIS on 9 July 2001 over the Sumatra and Peninsular Malaysia area. Hot spots detected by the MODIS thermal infrared bands are indicated as red dots in the image. Smoke plumes can be seen spreading northwards from the fire area towards the northern part of Peninsular Malaysia.
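The shift of the emission peak with temperature can be sketched using Wien's displacement law, a corollary of Planck's equation: the peak wavelength is inversely proportional to temperature, with constant b approximately 2898 µm·K:

```python
WIEN_B_UM_K = 2898.0  # Wien's displacement constant in micrometre-kelvins

def peak_wavelength_um(temperature_k):
    """Wavelength (in µm) at which blackbody spectral radiance peaks."""
    return WIEN_B_UM_K / temperature_k

# The Earth's surface (~300 K) peaks near 10 µm, while a flaming fire
# (~1000 K) peaks near 3 µm -- hence the ~10 µm band for surface
# temperature and the ~3.8 µm band for fire hot spot detection.
print(round(peak_wavelength_um(300), 1))   # 9.7
print(round(peak_wavelength_um(1000), 1))  # 2.9
```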
A 50-km resolution Global Sea Surface Temperature (SST) Field for the period 11 to 14 August 2001 derived from NOAA AVHRR thermal infrared data. (Credit: NOAA/NESDIS) Occurrence of abnormal climatic conditions such as El-Nino can be predicted by observations of the SST anomaly, i.e. the deviation of the daily SST from the mean SST.

Microwave Remote Sensing
Electromagnetic radiation in the microwave wavelength region is used in remote sensing to provide useful information about the Earth's atmosphere, land and ocean.

A microwave radiometer is a passive device which records the natural microwave emission from the earth. It can be used to measure the total water content of the atmosphere within its field of view.

A radar altimeter sends out pulses of microwave signals and records the signal scattered back from the earth surface. The height of the surface can be measured from the time delay of the return signals.

A wind scatterometer can be used to measure wind speed and direction over the ocean surface. It sends out pulses of microwaves along several directions and records the magnitude of the signals backscattered from the ocean surface. The magnitude of the backscattered signal is related to the ocean surface roughness, which in turn is dependent on the sea surface wind condition, and hence the wind speed and direction can be derived.

Synthetic Aperture Radar (SAR)
Imaging radars are flown on airborne or spaceborne platforms to generate high resolution images of the earth surface using microwave energy. In synthetic aperture radar (SAR) imaging, microwave pulses are transmitted by an antenna towards the earth surface, and the microwave energy scattered back to the spacecraft is measured. The SAR makes use of the radar principle to form an image by utilising the time delay of the backscattered signals: a radar pulse is transmitted from the antenna to the ground, and the pulse is scattered by the ground targets back to the antenna.

In real aperture radar imaging, the ground resolution is limited by the size of the microwave beam sent out from the antenna. Finer details on the ground can be resolved by using a narrower beam. The beam width is inversely proportional to the size of the antenna: the longer the antenna, the narrower the beam.
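The altimeter principle described above reduces to one line of arithmetic: the antenna-to-surface range follows from the two-way travel time of the pulse. This is a simplified sketch that ignores the atmospheric delay corrections a real altimeter applies; the function name is illustrative.

```python
C = 2.998e8  # speed of light, m/s

def range_from_delay(two_way_delay_s):
    """Antenna-to-surface distance from the two-way pulse travel time."""
    return C * two_way_delay_s / 2.0

# A satellite at roughly 800 km altitude sees a two-way delay of ~5.3 ms.
delay = 2 * 800e3 / C
print(range_from_delay(delay) / 1e3, "km")  # → 800.0 km
```

Surface height is then obtained by subtracting this range from the satellite's precisely known orbital altitude.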
The microwave beam sent out by the antenna illuminates an area on the ground (known as the antenna's "footprint"). In radar imaging, the recorded signal strength depends on the microwave energy backscattered from the ground targets inside this footprint. Increasing the length of the antenna will decrease the width of the footprint. However, it is not feasible for a spacecraft to carry the very long antenna which is required for high resolution imaging of the earth surface. To overcome this limitation, SAR capitalises on the motion of the spacecraft to emulate a large antenna (about 4 km for the ERS SAR) from the small antenna (10 m on the ERS satellite) it actually carries on board.

Imaging geometry for a typical strip-mapping synthetic aperture radar imaging system. The antenna's footprint sweeps out a strip parallel to the direction of the satellite's ground track.
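The resolution gain from the synthetic aperture can be estimated from the beam-width relation stated above (beam width inversely proportional to antenna length). This rough sketch uses an approximate C-band wavelength and slant range for an ERS-like geometry; these numbers are illustrative assumptions, not calibrated sensor parameters.

```python
WAVELENGTH = 0.057   # m, approximate C-band wavelength (assumed)
SLANT_RANGE = 850e3  # m, approximate satellite-to-ground distance (assumed)

def beam_limited_resolution(antenna_length_m):
    """Azimuth resolution set by the antenna beam width: ~ lambda * R / D."""
    return WAVELENGTH * SLANT_RANGE / antenna_length_m

real = beam_limited_resolution(10.0)         # the 10 m antenna actually carried
synthetic = beam_limited_resolution(4000.0)  # the ~4 km aperture SAR emulates
print(f"real aperture: {real:.0f} m, synthetic aperture: {synthetic:.1f} m")
```

With the small physical antenna the beam-limited resolution is several kilometres; synthesising a ~4 km aperture brings it down to the order of ten metres, which is why spaceborne imaging radars are SARs.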
Interaction between Microwaves and Earth's Surface
When microwaves strike a surface, the proportion of energy scattered back to the sensor depends on many factors:

- Physical factors such as the dielectric constant of the surface materials, which also depends strongly on the moisture content;
- Geometric factors such as surface roughness, slopes and orientation of the objects relative to the radar beam direction;
- The types of landcover (soil, vegetation or man-made objects);
- Microwave frequency, polarisation and incident angle.

Click here to read more about microwave frequency, polarisation and incident angle in SAR imaging.

All-Weather Imaging
Due to the cloud penetrating property of microwave, SAR is able to acquire "cloud-free" images in all weather. This is especially useful in the tropical regions which are frequently under cloud cover throughout the year. Being an active remote sensing device, it is also capable of night-time operation.

Microwave Frequency, Polarisation and Incident Angle

Microwave Frequency
The ability of microwave to penetrate clouds, precipitation, or land surface cover depends on its frequency. Generally, the penetration power increases for longer wavelength (lower frequency).

"Roughness" is a relative quantity. Whether a surface is considered rough or not depends on the length scale of the measuring instrument. If a meter-rule is used to measure surface roughness, then any surface fluctuation of the order of 1 cm or less will be considered smooth. On the other hand, if a surface is examined under a microscope, then a fluctuation of the order of a fraction of a millimetre is considered very rough.

In SAR imaging, the reference length scale for surface roughness is the wavelength of the microwave. If the surface fluctuation is less than the microwave wavelength, the surface is considered smooth. The SAR backscattered intensity generally increases with the surface roughness. For example, little radiation is backscattered from a surface with a fluctuation of the order of 5 cm if an L-band (15 to 30 cm wavelength) SAR is used, and the surface will appear dark. However, the same surface will appear bright due to increased backscattering in an X-band (2.4 to 3.8 cm wavelength) SAR image.

The same land surface appears rough to a short wavelength radar: the surface appears bright in the radar image due to increased backscattering. The land surface appears smooth to a long wavelength radar: little radiation is backscattered from the surface.

The L band has a longer wavelength and is more penetrating than the C band. Hence, it is more useful in forest and vegetation study as it is able to penetrate deeper into the vegetation canopy. The C band is useful for imaging ocean and ice features. However, it also finds numerous land applications. Both the ERS and RADARSAT SARs use the C band microwave while the JERS SAR uses the L band. The short wavelength radar interacts mainly with the top layer of the forest canopy while the longer wavelength radar is able to penetrate deeper into the canopy to undergo multiple scattering between the canopy, trunks and soil.
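The wavelength-relative roughness rule above can be sketched as a small function. The simple "fluctuation comparable to the wavelength" threshold used here is a deliberate simplification of the real smoothness criteria, and the band wavelengths are representative mid-range values taken from the text.

```python
# Representative radar wavelengths in cm (mid-range values from the text).
BANDS = {"X": 3.0, "C": 5.6, "L": 23.0}

def appears_rough(fluctuation_cm, band):
    """A surface looks rough (bright) when its height fluctuation is
    comparable to or larger than the radar wavelength (simplified rule)."""
    return fluctuation_cm >= BANDS[band]

# A 5 cm surface fluctuation: bright (rough) in X band, dark (smooth) in L band.
for band in ("X", "C", "L"):
    print(band, "bright" if appears_rough(5.0, band) else "dark")
```

This reproduces the example in the text: the same 5 cm fluctuation appears dark to an L-band SAR but bright to an X-band SAR.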
Microwave Polarisation in Synthetic Aperture Radar
The microwave polarisation refers to the orientation of the electric field vector of the transmitted beam with respect to the horizontal direction. If the electric field vector oscillates along a direction parallel to the horizontal direction, the beam is said to be "H" polarised. If the electric field vector oscillates along a direction perpendicular to the horizontal direction, the beam is "V" polarised.

After interacting with the earth surface, the polarisation state may be altered, so the backscattered microwave energy usually has a mixture of the two polarisation states. The SAR sensor may be designed to detect the H or the V component of the backscattered radiation. Hence, there are four possible polarisation configurations for a SAR system: "HH", "VV", "HV" and "VH", depending on the polarisation states of the transmitted and received microwave signals. For example, the SAR onboard the ERS satellite transmits V polarised and receives only the V polarised microwave pulses, so it is a "VV" polarised SAR. In comparison, the SAR onboard the RADARSAT satellite is a "HH" polarised SAR.

Incident Angle
The incident angle refers to the angle between the incident radar beam and the direction perpendicular to the ground surface. The interaction between microwaves and the surface depends on the incident angle of the radar pulse on the surface. ERS SAR has a constant incident angle of 23° at the scene centre. This incident angle is optimal for detecting ocean waves and other ocean surface features. A larger incident angle may be more suitable for other applications; for example, a large incident angle will increase the contrast between forested and clearcut areas. RADARSAT is the first spaceborne SAR that is equipped with multiple beam modes, enabling microwave imaging at different incident angles and resolutions. Acquisition of SAR images of an area using two different incident angles will also enable the construction of a stereo image for the area.

Interpreting SAR Images
Synthetic Aperture Radar (SAR) images can be obtained from satellites such as ERS, JERS and RADARSAT. Since radar interacts with the ground features in ways different from the optical radiation, special care has to be taken when interpreting radar images. An example of an ERS SAR image is shown below together with a SPOT multispectral natural colour composite image of the same area for comparison.

ERS SAR image (pixel size = 12.5 m)
SPOT multispectral image in natural colour (pixel size = 20 m)

The urban area on the left appears bright in the SAR image while the vegetated areas on the right have intermediate tone. The clearings and water (sea and river) appear dark in the image. The SAR image was acquired in September 1995 while the SPOT image was acquired in February 1994; additional clearings can be seen in the SAR image. These features will be explained in the following sections.

Speckle Noise
Unlike optical images, radar images are formed by coherent interaction of the transmitted microwave with the targets. Hence, a radar image suffers from the effects of speckle noise, which arises from coherent summation of the signals scattered from ground scatterers distributed randomly within each pixel. A radar image appears more noisy than an optical image. The speckle noise is sometimes suppressed by applying a speckle removal filter on the digital image before display and further analysis.
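One simple form of speckle suppression is a moving-window median filter, sketched below in plain NumPy. Operational SAR processing more often uses adaptive filters (e.g. Lee or Frost); this minimal version only illustrates the idea of replacing each pixel by a neighbourhood statistic.

```python
import numpy as np

def median_filter(image, size=3):
    """Replace each pixel by the median of its size x size neighbourhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    rows, cols = image.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A homogeneous area corrupted by a single bright speckle pixel.
img = np.full((5, 5), 100.0)
img[2, 2] = 255.0
print(median_filter(img)[2, 2])  # → 100.0 (the speckle is removed)
```

The cost of any speckle filter is some loss of spatial detail, which is why filtering is applied selectively before display or analysis.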
This image is extracted from the above SAR image, showing the clearing areas between the river and the coastline. The image appears "grainy" due to the presence of speckles. This image shows the effect of applying a speckle removal filter to the SAR image: the vegetated areas and the clearings now appear more homogeneous.

Backscattered Radar Intensity
A single radar image, such as the one shown above, is usually displayed as a grey scale image. The intensity of each pixel represents the proportion of microwave backscattered from that area on the ground, which depends on a variety of factors: the types, sizes, shapes and orientations of the scatterers in the target area; the moisture content of the target area; the frequency and polarisation of the radar pulses; as well as the incident angles of the radar beam. The pixel intensity values are often converted to a physical quantity called the backscattering coefficient or normalised radar cross-section, measured in decibel (dB) units, with values ranging from +5 dB for very bright objects to -40 dB for very dark surfaces.
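The decibel scale mentioned above is simply a base-10 log transform of the linear backscatter ratio. A minimal sketch follows; the sensor-specific calibration that converts raw pixel values into the linear ratio is omitted here, as it differs between missions.

```python
import math

def to_db(sigma0_linear):
    """Convert a linear backscatter ratio to decibels."""
    return 10.0 * math.log10(sigma0_linear)

def from_db(sigma0_db):
    """Convert a backscatter coefficient in dB back to a linear ratio."""
    return 10.0 ** (sigma0_db / 10.0)

print(to_db(1.0))      # → 0.0 dB: backscatter equal to the reference
print(from_db(-40.0))  # a very dark surface returns about 1/10000
```

On this scale the -6 to -7 dB of tropical rainforest (discussed below) corresponds to roughly a fifth to a quarter of the reference backscatter.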
Interpreting SAR Images
Interpreting a radar image is not a straightforward task. It very often requires some familiarity with the ground conditions of the areas imaged. As a useful rule of thumb, the higher the backscattered intensity, the rougher is the surface being imaged.

Specular Reflection
A smooth surface acts like a mirror for the incident radar pulse. Most of the incident radar energy is reflected away according to the law of specular reflection, i.e. the angle of reflection is equal to the angle of incidence. Very little energy is scattered back to the radar sensor.

Diffused Reflection
A rough surface reflects the incident radar pulse in all directions. Part of the radar energy is scattered back to the radar sensor. The amount of energy backscattered depends on the properties of the target on the ground.

Flat surfaces such as paved roads, runways or calm water normally appear as dark areas in a radar image since most of the incident radar pulses are specularly reflected away. Calm sea surfaces appear dark in SAR images. However, rough sea surfaces may appear bright, especially when the incidence angle is small. Under certain conditions when the sea surface is sufficiently rough, oil films can be detected as dark patches against a bright background, because the presence of oil films smoothens out the sea surface. A ship (bright target near the bottom left corner) is seen discharging oil into the sea in this ERS SAR image.

Trees and other vegetation are usually moderately rough on the wavelength scale. Hence, they appear as moderately bright features in the image. The tropical rain forests have a characteristic backscatter coefficient of between -6 and -7 dB, which is spatially homogeneous and remains stable in time. For this reason, the tropical rainforests have been used as calibrating targets in performing radiometric calibration of SAR images.

Corner Reflection
Very bright targets may appear in the image due to the corner-reflector or double-bounce effect, where the radar pulse bounces off the horizontal ground (or the sea) towards the target, and is then reflected from one vertical surface of the target back to the sensor. Examples of such targets are ships on the sea, high-rise buildings and regular metallic objects such as cargo containers. Built-up areas and many man-made features usually appear as bright patches in a radar image due to the corner reflector effect.

When two smooth surfaces form a right angle facing the radar beam, the beam bounces twice off the surfaces and most of the radar energy is reflected back to the radar sensor. This SAR image shows an area of the sea near a busy port. Many ships can be seen as bright spots in this image due to corner reflection. The sea is calm, and hence the ships can be easily detected against the dark background.

The brightness of areas covered by bare soil may vary from very dark to very bright depending on its roughness and moisture content. Typically, rough soil appears bright in the image. For similar soil roughness, the surface with a higher moisture content will appear brighter.
Dry Soil: Some of the incident radar energy is able to penetrate into the soil surface, resulting in less backscattered intensity.

Wet Soil: The large difference in electrical properties between water and air results in higher backscattered radar intensity.

Flooded Soil: Radar is specularly reflected off the water surface, resulting in low backscattered intensity. The flooded area appears dark in the SAR image.

Multitemporal SAR Images
If more than one radar image of the same area acquired at different times is available, they can be combined to give a multitemporal colour composite image of the area. For example, if three images are available, then one image can be assigned to the Red, the second to the Green and the third to the Blue colour channels for display. This technique is especially useful in detecting landcover changes over the period of image acquisition. The areas where no change in landcover occurs will appear in grey, while areas with landcover changes will appear as colourful patches in the image.
Vietnam. Three SAR images acquired by the ERS satellite during 5 May. where the landcovers change rapidly during the rice season. The area shown is part of the rice growing areas in the Mekong River delta. The two towns appear as bright white spots in this image. An area of depression flooded with water during this season is visible as a dark region. The colourful areas are the rice growing areas. The grey patch near the bottom of the image is wetland forest.This image is an example of a multitemporal colour composite SAR image. green and blue channels respectively for display. The greyish linear features are the more permanent trees lining the canals. near the towns of Soc Trang and Phung Hiep. Image Processing and Analysis . 9 June and 14 July in 1996 are assigned to the red.
Image Processing and Analysis
Many image processing and analysis techniques have been developed to aid the interpretation of remote sensing images and to extract as much information as possible from the images. The choice of specific techniques or algorithms to use depends on the goals of each individual project. In this section, we will examine some procedures commonly used in analysing/interpreting remote sensing images.

Pre-Processing
Prior to data analysis, initial processing on the raw data is usually carried out to correct for any distortion due to the characteristics of the imaging system and imaging conditions. Depending on the user's requirement, some standard correction procedures may be carried out by the ground station operators before the data is delivered to the end-user. These procedures include radiometric correction, to correct for uneven sensor response over the whole image, and geometric correction, to correct for geometric distortion due to Earth's rotation and other imaging conditions (such as oblique viewing). Furthermore, if the accurate geographical location of an area on the image needs to be known, ground control points (GCPs) are used to register the image to a precise map (geo-referencing). The image may also be transformed to conform to a specific map projection system.
Image Enhancement
In order to aid visual interpretation, the visual appearance of the objects in the image can be improved by image enhancement techniques such as grey level stretching to improve the contrast and spatial filtering for enhancing the edges. An example of an enhancement procedure is shown here.

Multispectral SPOT image of the same area shown in a previous section, but acquired at a later date. Radiometric and geometric corrections have been done, and the image has been transformed to conform to a certain map projection (UTM projection). This image is displayed without any further enhancement. In the above unenhanced image, a bluish tint can be seen all over the image, producing a hazy appearance. This hazy appearance is due to scattering of sunlight by the atmosphere into the field of view of the sensor. This effect also degrades the contrast between different landcovers.

It is useful to examine the image histograms before performing any image enhancement. The x-axis of the histogram is the range of the available digital numbers, i.e. 0 to 255. The y-axis is the number of pixels in the image having a given digital number. The histograms of the three bands of this image are shown in the following figures.
Histogram of the XS3 (near infrared) band (displayed in red).

Histogram of the XS2 (red) band (displayed in green).
Histogram of the XS1 (green) band (displayed in blue).

Note that the minimum digital number for each band is not zero: each histogram is shifted to the right by a certain amount. This shift is due to the atmospheric scattering component adding to the actual radiation reflected from the ground. The shift is particularly large for the XS1 band compared to the other two bands, due to the higher contribution from Rayleigh scattering at the shorter wavelength. The maximum digital number of each band is also not 255. The sensor's gain factor has been adjusted to anticipate any possibility of encountering a very bright object; hence, most of the pixels in the image have digital numbers well below the maximum value of 255.

The image can be enhanced by a simple linear grey-level stretching. In this method, a lower threshold value is chosen so that all pixel values below this threshold are mapped to zero. An upper threshold value is also chosen so that all pixel values above this threshold are mapped to 255. All other pixel values are linearly interpolated to lie between 0 and 255. The lower and upper thresholds are usually chosen to be values close to the minimum and maximum pixel values of the image. The Grey-Level Transformation Table is shown in the following graph.
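The linear stretch just described can be written compactly. This NumPy sketch takes the two thresholds as arguments; how they are chosen (e.g. from the observed minimum and maximum, or from percentiles) is left to the analyst.

```python
import numpy as np

def linear_stretch(band, low_thresh, high_thresh):
    """Map low_thresh -> 0 and high_thresh -> 255, clipping values outside
    the thresholds and linearly interpolating in between."""
    scaled = (band.astype(float) - low_thresh) / (high_thresh - low_thresh)
    return np.clip(scaled * 255.0, 0, 255).astype(np.uint8)

# A band whose values occupy only 40..180 of the available 0..255 range.
band = np.array([40, 110, 180])
print(linear_stretch(band, 40, 180))  # → [  0 127 255]
```

Subtracting the lower threshold also removes the constant atmospheric-scattering offset noted in the histograms above, which is why the haze largely disappears after stretching.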
Grey-Level Transformation Table for performing linear grey level stretching of the three bands of the image. Red line: XS3 band. Green line: XS2 band. Blue line: XS1 band.

The result of applying the linear stretch is shown in the following image.

Multispectral SPOT image after enhancement by a simple linear grey-level stretching. The contrast between different features has been improved. Note that the hazy appearance has generally been removed, except for some parts near the top of the image.
Image Classification
Different landcover types in an image can be discriminated using image classification algorithms based on spectral features, i.e. the brightness and "colour" information contained in each pixel. The classification procedures can be "supervised" or "unsupervised".

In supervised classification, the spectral features of some areas of known landcover types are extracted from the image. These areas are known as the "training areas". Every pixel in the whole image is then classified as belonging to one of the classes depending on how close its spectral features are to the spectral features of the training areas.

In unsupervised classification, the computer program automatically groups the pixels in the image into separate clusters, depending on their spectral features. Each cluster will then be assigned a landcover type by the analyst.

Each class of landcover is referred to as a "theme" and the product of classification is known as a "thematic map". The following image shows an example of a thematic map. This map was derived from the multispectral SPOT image of the test area shown in a previous section using an unsupervised classification algorithm.

SPOT multispectral image of the test area
Thematic map derived from the SPOT image using an unsupervised classification algorithm. A plausible assignment of landcover types to the thematic classes is shown in the following table. The accuracy of the thematic map derived from remote sensing images should be verified by field observation.

Class No. (Colour in Map)   Landcover Type
1 (black)                   Clear water
2 (green)                   Dense forest with closed canopy
3 (yellow)                  Shrubs, less dense forest
4 (orange)                  Grass
5 (cyan)                    Bare soil, built-up areas
6 (blue)                    Turbid water, bare soil, built-up areas
7 (red)                     Bare soil, built-up areas
8 (white)                   Bare soil, built-up areas

The spectral features of these landcover classes can be exhibited in the two graphs shown below. The first graph is a plot of the mean pixel values of the XS3 (near infrared) band versus the XS2 (red) band for each class. The second graph is a plot of the mean pixel values of the XS2 (red) versus XS1 (green) bands. The standard deviations of the pixel values for each class are also shown.
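Unsupervised classification of the kind described above is commonly done with a clustering algorithm such as k-means. The compact NumPy sketch below clusters two-band pixel spectra; a real workflow would use all bands, more pixels, and more robust initialisation than the random pick used here.

```python
import numpy as np

def kmeans(pixels, k, iters=20, seed=0):
    """Cluster pixel spectra (n_pixels x n_bands) into k spectral classes."""
    rng = np.random.default_rng(seed)
    centres = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest cluster centre.
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pixels[labels == c].mean(axis=0)
    return labels, centres

# Toy 2-band spectra: three dark "water" pixels and three bright "soil" pixels.
pixels = np.array([[10, 12], [12, 11], [11, 10],
                   [200, 180], [210, 190], [205, 185]])
labels, centres = kmeans(pixels, k=2)
print(labels)  # the two spectral groups receive two distinct cluster labels
```

As in the thematic map above, the clusters themselves carry no landcover meaning; the analyst must still label each cluster (e.g. "water", "bare soil") after clustering.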
Scatter plot of the mean pixel values for each landcover class.

In the scatterplot of the class means in the XS3 and XS2 bands, the data points for the non-vegetated landcover classes generally lie on a straight line passing through the origin. This line is called the "soil line". The vegetated landcover classes lie above the soil line due to the higher reflectance in the near infrared region (XS3 band) relative to the visible region. In the XS2 (visible red) versus XS1 (visible green) scatterplot, all the data points generally lie on a straight line. This plot shows that the two visible bands are very highly correlated. The vegetated areas and clear water are generally dark, while the other non-vegetated landcover classes have varying brightness in the visible bands.

Spatial Feature Extraction
In high spatial resolution imagery, details such as buildings and roads can be seen. The amount of detail depends on the image resolution. In very high resolution imagery, even road markings, vehicles, individual tree crowns, and aggregates of people can be seen clearly. Pixel-based methods of image analysis will not work successfully in such imagery. In order to fully exploit the spatial information contained in the imagery, image processing and analysis algorithms utilising the textural, contextual and geometrical properties are required. Such algorithms make use of the relationship between neighbouring pixels for information extraction. Incorporation of a-priori information is sometimes required. A multi-resolutional approach (i.e. analysis at different spatial scales and combining the results) is also a useful strategy when dealing with very high resolution imagery. In this case, pixel-based methods can be used in the lower resolution mode and merged with the contextual and textural methods at higher resolutions.
Building height can be derived from a single image using a simple geometric method if shadows of the buildings can be located in the image. In this case, the solar illumination direction and the satellite sensor viewing direction need to be known. For example, the height of the building shown here can be determined by measuring the distance between a point on the top of the building and the corresponding point of the shadow on the ground, using a simple geometric relation.

Individual trees in very high resolution imagery can be detected based on the tree crown's intensity profile. An automated technique for detecting and counting oil palm trees in IKONOS images, based on differential geometry concepts of edge and curvature, has been developed at CRISP.

Oil palm trees in an IKONOS image. Detected trees (white dots) superimposed on the image.

Measurement of Bio-geophysical Parameters
Specific instruments carried on-board the satellites can be used to make measurements of the bio-geophysical parameters of the earth. Some examples are: atmospheric water vapour content, stratospheric ozone, tropospheric aerosol, sea water chlorophyll concentration, land and sea surface temperature, sea surface wind field, forest biomass, etc. Specific satellite missions have been launched to continuously monitor the global variations of these environmental parameters, which may show the causes or the effects of global climate change and the impacts of human activities on the environment.

Geographical Information System (GIS)
Different forms of imagery such as optical and radar images provide complementary information about the landcover. More detailed information can be derived by combining several different types of images. For example, a radar image can form one of the layers in combination with the visible and near infrared layers when performing classification.
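The shadow method for building height reduces to simple trigonometry when the sun elevation angle is known. The sketch below assumes flat ground and a nadir-viewing (straight-down) sensor; an obliquely viewing sensor, as mentioned in the text, requires an additional correction for the viewing direction that is omitted here.

```python
import math

def building_height(shadow_length_m, sun_elevation_deg):
    """Height of a building from its shadow length, assuming flat ground
    and a nadir-viewing sensor: h = L * tan(sun elevation)."""
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

# A 30 m shadow with the sun 45 degrees above the horizon gives ~30 m height.
print(round(building_height(30.0, 45.0), 1))  # → 30.0
```

The same relation, run in reverse, predicts how long a shadow a building of known height should cast, which is a useful consistency check when interpreting the image.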
The thematic information derived from the remote sensing images is often combined with other auxiliary data to form the basis for a Geographic Information System (GIS). A GIS is a database of different layers, where each layer contains information about a specific aspect of the same area, which is used for analysis by the resource scientists.

End of Tutorial