Corresponding author contact information: arturo.sanchez@ualberta.ca, phone: 1-780-492-1822; Fax: 1-780-492-2030
Winter 2004
© Earth Observation Systems Laboratory (EOSL)
For internal Use – Do not circulate
Disclaimer
The present work reflects the opinion of the authors of this report alone and does not imply endorsement by the University of Alberta. The University of Alberta encourages its staff to carry out and publish scientific and scholarly research of the highest quality and consistent with the best ethical standards. Nevertheless, it does not and cannot endorse or guarantee the results or the products of research performed by its staff or students on or off campus, nor is it responsible for possible inaccuracies, errors or omissions. The authors hold the University harmless in the event of legal action against the company.
Contents
Disclaimer
Figures and Tables
Introduction
Background
AGCC Data
• Definition, history, project methods
QuickBird Data Study Sites
• Specifications, imagery properties, study site locations
Classification Process
• Object-Based Classification
• Object-Based Classification Method
o Segmentation
o Classification
• eCognition Classification Results
• AGCC and eCognition Classification Comparison
Field Work
Discussion and Conclusions
References
Appendix
List of Figures
Fig. 1: Profile of Pixel Value Variability Within a Homogeneous Conifer Stand: Landsat (Fig. 1A) and QuickBird (Fig. 1B)
Fig. 4: Hierarchical Network Example: Schematic and Image Objects
Fig. 7: QuickBird Imagery and eCognition Classification: Images 58738, 58739
Fig. 8: QuickBird Imagery and eCognition Classification: Images 58741, 58742
Fig. 9: Comparison between AGCC and eCognition Classification: Images 58738, 58739
Fig. 10: Comparison between AGCC and eCognition Classification: Images 58741, 58742
Fig. 11: Scatter Plot Comparing eCognition QuickBird and AGCC Landsat ETM+ Classifications

List of Tables:
Table 3: Comparison of extracted area between Landsat ETM+ and QuickBird based on AGCC Land Cover type
Introduction
The successful launch of Digital Globe Corporation’s QuickBird satellite and its very high-
resolution (VHR) sensors have narrowed the gap between satellite and aerial imagery, permitting
easier, faster and potentially cheaper access to high-resolution data. Recently, VHR satellite
imagery has become available at spatial resolutions that rival aerial photography, with the
added advantage of digital multispectral information and without problems caused by aircraft
flight (Goetz et al., 2003). These new data offer an increased ability to map land use, land cover, and land use/cover change in greater detail and with higher accuracy, for use in even finer-scale applications. This study will explore the capability of using QuickBird imagery to classify land
cover in forested regions according to the Alberta Ground Cover Classification (AGCC) scheme
and compare classification results with previous AGCC classifications based on 30m Landsat
images. Traditional image classification methods rely on pixel-based classifying where each
pixel in an image is individually analyzed and grouped with other similar pixels throughout the
entire image. Recently, object-based approaches of classifying have been gaining popularity as
they analyze and identify boundaries between homogenous regions of pixels and classify these
individually defined regions as ‘objects’. This new classification method will be investigated
using the QuickBird imagery in software called eCognition by Definiens Imaging. The project
will be divided into two phases of which the first will be an a priori classification and
methodology investigation and the second will involve more specific classification methods and
inclusion of accurate field data to verify and amend the classification results. The availability of
VHR data will provide for many exciting research opportunities and improve decision making
for numerous aspects of resource management. Usage of this new data source will now make
possible analyses and levels of detail that were previously unattainable. For example, QuickBird satellite data has the ability to differentiate forest at the crown level, greatly expanding its future potential.
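The contrast between per-pixel and object-based labeling sketched above can be illustrated on a toy single-band array. This is a hypothetical example (made-up digital numbers and a hand-drawn two-segment partition); a real object-based workflow derives the segments by automatic segmentation:

```python
import numpy as np

# Toy 1-band image: left half "forest" (dark), right half "clearing" (bright),
# with one noisy pixel on each side. Values are hypothetical DNs.
img = np.full((6, 8), 40.0)
img[:, 4:] = 200.0
img[1, 1] = 210.0   # noise pixel inside the forest half
img[4, 6] = 35.0    # noise pixel inside the clearing half

# Per-pixel classification: each pixel labeled independently by a threshold.
per_pixel = (img > 120).astype(int)

# Object-based classification: whole segments are labeled by their mean value.
# The two segments are given a priori here for illustration.
segments = np.zeros_like(img, dtype=int)
segments[:, 4:] = 1
object_based = np.zeros_like(per_pixel)
for s in (0, 1):
    mask = segments == s
    object_based[mask] = int(img[mask].mean() > 120)

# The per-pixel map inherits the two noise pixels; the object-based map
# does not, because the segment means absorb them.
print(int((per_pixel != object_based).sum()))  # 2: only the noise pixels disagree
```

The two maps differ only at the two noise pixels, which is exactly the "salt & pepper" effect the object-based approach is meant to suppress.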
Background
QuickBird imagery can be used for land cover, forest inventory, decision-making support and
mapping projects that were once limited by larger spatial resolution. For example, Bjorgo (2000)
used high-resolution multispectral imagery to monitor a refugee camp on the Thailand border;
something that requires imagery on the meter scale, especially for population estimations based
upon the number of individual tents. Using imagery from the 3.3m resolution Russian KVR-1000 VHR satellite sensor, Bjorgo (2000) was able to correlate population size and the area of the camp based upon ground data for population density. With <2.5m imagery like IKONOS or
QuickBird, even more accurate refugee population counts can be achieved, providing crucial
information not previously available to decision makers dealing with a humanitarian crisis. A second new application of very high-resolution data is the updating of urban maps with the latest imagery. Herold et al. (2002) used an object-oriented approach to classify complex urban areas and produced an overall
classification accuracy of 78.4%. Distinguishable objects such as vegetation patches, roads and
urban land cover and land use types were clearly identifiable within their small study area.
These are encouraging results for derived land cover mapping in studies with high-resolution data, complex land cover, and object-oriented approaches (Herold et al.,
2002).
Other new applications include improved emergency response (Digital Globe), accurate forest
fire mapping, and damage estimations (Adams et al., 2003). More important functions, however,
are resource management applications such as tree cover mapping, impervious surface areas, and
riparian buffer zones that were investigated with IKONOS imagery by Goetz et al., (2003).
Higher resolution imagery carries a new set of problems relating to the logistical issues of
phenological and atmospheric conditions in the images, shadowing within canopies and between
scene elements, and limited spectral discrimination for cover types (Goetz et al., 2003). Despite
encountering these new problems, findings indicated that use of the VHR data would increase
the accuracy in assessment, monitoring, and management of wetlands, parks and protected areas,
and forest and carbon inventories (Clark et al., 2004). In addition, cost per unit area is much
higher for VHR imagery and footprint size is considerably smaller which can be a concern for
project criteria. Goetz et al., (2003) concluded that Landsat data does not have the necessary
spatial resolution to adequately map riparian buffer vegetation to within the riparian zones, but
successful local scale analysis of the riparian buffers with VHR data provided statistically
meaningful samples from within the zones. Emphasizing this conclusion for resolution
improvement was Mumby & Edwards (2002) with their study of marine environments. Findings
concluded that the IKONOS sensor lacked spectral capability to properly identify distinct marine
habitats, but patches and boundaries were mapped with much greater accuracy than previous
sensors. Kayitakire et al. (2002) investigated the capabilities of IKONOS-2 imagery in mapping
forest stands in a Belgian mixed woods forest. Both per-pixel and per-parcel land cover classifications were tested for discriminating between dominant species types and for producing easier-to-use data. Kayitakire et al. (2002)
concluded that at high spatial resolution, accuracy in classification of forested land could be
improved by taking into account the locational context of a group of similar pixels, as opposed to
the spectral signature alone from each individual pixel. This solution is suggested due to the
radiometric value of an isolated pixel not providing enough information about the larger object of interest, such as a forest stand (Kayitakire et al., 2002). Though direct accuracy ratings were
actually higher for per-pixel classification (88%), compared to the per-parcel (83%), the error in
a parcel classification concerns an entire object and its impact on the overall accuracy is more
pronounced (Kayitakire et al., 2002). Corrections in their classification, then, concern adjusting
individual objects, as opposed to changing a class, or bin, derived from pixel spectral clustering
(Kayitakire et al., 2002). Correcting a misclassified object is much easier than performing a
manual per-pixel recode of a spectral class, which can involve lengthy and tedious manual
adjustments.
Kayitakire et al. (2002) also concluded that more advanced texture analysis would lead to further
improved classification results. A key feature of the new VHR sensors is the ability to provide
within parcel textural data, such as with QuickBird’s 60cm panchromatic band, and at high
spatial resolution, texture analysis becomes just as important as spectral analysis (Kayitakire et al., 2002). QuickBird imagery can be obtained in either 2.4m multi-spectral or 0.6m panchromatic form, and the two can be used together to enhance the information available for classification through a process called resolution merge. eCognition can be used to extract textural statistics from the panchromatic imagery and spectral signatures from the multispectral image.
Spectral signatures use the reflectance values from the sensor's spectral bands, while texture statistics are derived from grey-level measures such as contrast, correlation and standard deviation, which have been found most powerful for forest
stand type discrimination (Kayitakire et al., 2002). Franklin et al. (2001) showed that higher
accuracy was obtained using texture data as opposed to spectral data alone when classifying
forest stands in CASI imagery of a spatial resolution of about 1m and also successfully used 1st
order variance and 2nd order homogeneity texture analysis of IKONOS imagery to divide forest
stands into age classes. Franklin et al. (2001) concluded that the second-order (spatial co-
occurrence homogeneity) texture values were the most effective in distinguishing between age
classes, and also that a window size of less than 25x25 pixels was not as effective. Zhang et al. (2004) also reported an increase in the accuracy of forest classification when image texture was used, and texture information has been shown to improve thematic map accuracy for finer-level habitat discrimination (Mumby &
Edwards, 2002). Even small variations at the textural level can account for significant changes
in stand classification. Increased spatial resolutions allow for textural information to be utilized
even for small land parcels and when properly incorporated can improve accuracy.
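The grey-level texture measures named above (contrast, correlation, homogeneity) are derived from a co-occurrence matrix of neighbouring pixel values. The following is an illustrative sketch, not the report's own procedure: a tiny pure-Python/NumPy co-occurrence computation on made-up 4-level patches, showing that a uniform "stand" scores zero contrast and perfect homogeneity while a noisy one scores worse on both:

```python
import numpy as np

def glcm(gray, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset, normalised."""
    m = np.zeros((levels, levels))
    h, w = gray.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[gray[y, x], gray[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum()

def homogeneity(p):
    i, j = np.indices(p.shape)
    return (p / (1.0 + np.abs(i - j))).sum()

# A smooth patch and a noisy patch, quantised to 4 grey levels.
smooth = np.ones((8, 8), dtype=int)
rng = np.random.default_rng(0)
noisy = rng.integers(0, 4, size=(8, 8))

p_s, p_n = glcm(smooth, 4), glcm(noisy, 4)
# The uniform stand: zero contrast, homogeneity exactly 1.
print(contrast(p_s), homogeneity(p_s))
```

In a real workflow these statistics would be computed per window or per object over the panchromatic band; libraries such as scikit-image provide optimised equivalents.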
Because VHR satellite data contains significantly more detailed information than previous
satellites, such as Landsat TM, problems do arise. The main problem is a “salt & pepper” effect
that hinders the recognition of cover classes (Caprioli & Tarantino, 2003). These effects are not
limited to VHR data, but the associated increase in spatial resolution increases the variability
within land parcels and generates noise in the image. Classifying land cover with current 30m resolution Landsat imagery works sufficiently well because each Landsat pixel is generally uniform, even though it spans many smaller QuickBird pixels. A single 30m Landsat pixel will contain approximately 150 multispectral QuickBird pixels. Increasing
the resolution introduces smaller differences within regions of an otherwise homogeneous patch
(see Figure 1). An area may have many colors and shadings due to smaller differences in the
cover and sun orientation that are not detectable at the lower resolution (Caprioli & Tarantino,
2003). Thus, the size of the objects, like a stand of trees, that need to be differentiated must be
bigger than the size of the noise in the land cover texture.
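The "approximately 150 QuickBird pixels per Landsat pixel" figure above follows directly from the nominal pixel sizes; a quick back-of-envelope check (not from the report itself):

```python
# Nominal ground sample distances: Landsat 30 m, QuickBird multispectral 2.44 m.
landsat_gsd = 30.0
quickbird_gsd = 2.44
ratio = (landsat_gsd / quickbird_gsd) ** 2
print(round(ratio))  # roughly 150 multispectral pixels per Landsat pixel
```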
Even though it is now possible to differentiate spatial entities that in the past were not detectable
due to low-resolution, or that previously were indistinguishable features, very high spatial
resolution imparts an increase in internal variability within land covers and thus the accuracy of
results may decrease on a per-pixel basis (Caprioli & Tarantino, 2003). The high-resolution data
exhibits greater within stand variability than lower resolution data given similar class definitions.
In order for the new image data to be utilized for land use mapping, improved analysis tools and
methods need to be investigated (Lawrence et al., 2004). When standard procedures of per-pixel multispectral classification are applied to VHR data, the increase in spatial resolution leads to noisier classification and a decrease in the accuracy of class identification (Caprioli & Tarantino, 2003).
Looking at VHR images, it is very likely that a neighboring pixel belongs to the same land cover class as the pixel under consideration, but it may not be detected as such using traditional pixel-based
methods (Blaschke et al., 2000). High-resolution does not necessarily lead to higher
classification accuracy and procedures that aim to reduce the negative impacts of the increased
spatial complexity must be investigated to ensure continued accuracy in land cover classification
with the addition of the satellite technologies. One such solution is to incorporate an object-
oriented approach.
Problems in classification are not unique to VHR imagery, but are emphasized by the high
spatial resolution relative to other imaging sensors such as Landsat TM. In the case of early
season deciduous forest Goetz et al., (2003) found that 4m resolution IKONOS imagery includes
data in the inter-canopy space, whereas with Landsat data, the pixels include a mixture of both
canopy and biomass woody stems, which produce higher NDVI values. On a Landsat TM pixel that is classified as deciduous forest, a group of large conifers within the pixel remains undifferentiated, even though higher-resolution imagery reveals greater variation in foliage density and canopy shape (Caprioli & Tarantino, 2003). Such Landsat imagery is, in fact, quite complex when viewed with QuickBird imagery (Figure 1). To emphasize this point, a profile of pixel values across the same plot (Band 4 in both sensors) is compared.
Fig. 1A (Landsat) and Fig. 1B (QuickBird): Band 4 pixel-value profiles across the same plot.

            Mean      Max   Min
Landsat      32.773    36    30
QuickBird   267.303   452   111
As can be clearly observed, over the same profile a substantial increase in detail and a
corresponding increase in within parcel volatility is evident within the QuickBird imagery
(Figure 1b). These patterns of variance can be utilized on a textural level to search for patterns
corresponding to land cover types. However, this effect may also be caused by within-canopy shadowing.
Since the resolution is high enough to detect individual shadows, an increase in variability can
occur due to canopy shadowing. Shadows cast within the forest canopy onto adjacent trees are
now an important issue in classifications (Goetz et al., 2003). Asner & Warner (2003)
investigated the effects of variation of canopy shadow fraction across a broad range of forests
and savannas in Brazil and found that forests have substantial apparent shadow fractions due to
sun and satellite view angles. Observations using both red and near-infrared wavelength regions
were found to be highly sensitive to sub-pixel shadow fractions in tropical forests, accounting for
30–50% of the variance in red and NIR responses (Asner & Warner 2003). Shadowing is closely
linked to the characteristics of plant canopies and is a major contributor to the radiance or
reflectance properties of tropical forests and savannas. VHR data provides an opportunity to
observe plant canopies at spatial scales approaching the size of individual crowns and vegetation
clusters, which are affected by canopy shadowing (Franklin et al., 2001). The maneuverability
of high spatial resolution sensors and differences in daily sun angles leads to substantial variation
in viewing and solar geometry during imaging and can result in changed shadow fractions
between image acquisitions (Asner & Warner 2003). Issues caused by the shadows can be
reduced through the use of multi-temporal imagery by obtaining images from several sun angles,
but this may be difficult depending on the sensor used and the budget of a project (Goetz et al., 2003). Three-dimensionality of the forest structure controls the shadow variations observed in VHR data.
Salajanu & Olson (2001) investigated the significance of spatial resolution in identifying forest
cover with both Landsat and SPOT data. They concluded that the overall accuracy of careful
classification does increase with higher spatial resolution. A point that was repeatedly stressed
in many studies is the importance of accurate ground control points and that improving spatial
resolution is not a substitute for good ground data to correlate with the image (Toutin & Chénier,
2004). Higher spatial resolution satellite data should be coupled with increased detail in both
land cover definitions and accuracy of field data acquired from ground based control sources.
GPS points must be taken with accuracy of 1m or higher and within forest patches that exhibit
representative regions, or on locations that are easily identifiable, or segmented, such as lone
forest patches (Toutin & Chénier, 2004). Field locations can be also located within a patch to
detect internal variations. For example, points can be taken within a clear-cut to identify re-growth levels within a patch that would otherwise appear fairly uniform in a Landsat image.
The first phase of the project will be an a priori classification of the QuickBird imagery using the object-oriented approach applied by eCognition. This approach should minimize effects such as speckling and canopy shadowing while ensuring that differentiation of objects using their
spectral signatures is retained. The resulting classification will be directly compared to the
current AGCC classification of clipped Landsat imagery and tested for correlation. It is expected
that the QuickBird imagery will allow for differentiation of more AGCC classes and that
eCognition overall will provide faster and improved results. Phase two, to be performed in
Spring 2005, will then verify and amend the resulting classification by incorporating data from the field.
Alberta Ground Cover Characterization (AGCC) Data
In its most basic definition, Alberta Ground Cover Characterization (AGCC) is a land use/land
cover classification scheme derived for land cover types in Alberta. Its main purpose is to serve
as the main labeling tool to produce a comprehensive land cover map for the forested regions of
Alberta with the final product used by a variety of partners for planning, conservation, and
development within the province. The AGCC process is based on a hierarchical classification
method that takes into consideration the different spectral and spatial properties of the different
land cover types present in the province. This process combines extensive fieldwork and expert
knowledge to categorize the spectral signatures extracted from the Landsat image into one of
many different land cover classes. The AGCC classification scheme is comprised of broad types
of land cover (open/closed forest, wetland, etc.) and a more detailed sub-category (conifer,
deciduous, bog etc.). To classify imagery, AGCC uses unsupervised pixel based classification to
group like pixels into sets. Known extractable features (such as water) are then removed, reducing the number of image bins, and subsequent classifications are performed on the remaining image. Extracted classes are then further sub-divided into more detailed classes as far
as spectral signatures can be used to differentiate. The process yields an overall accuracy of
~80% at 30m resolution. See Appendix A for classification scheme. Figure 2 presents a
summary of the AGCC classification levels and their overall average accuracy.
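The AGCC "peel-off" strategy described above — cluster, remove a known class, then re-cluster the remainder into finer bins — can be sketched with a toy one-dimensional k-means. The NIR values below are made up for illustration; the real process runs on full multispectral Landsat scenes with expert guidance:

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: returns cluster labels and centres."""
    centres = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = values[labels == c].mean()
    return labels, centres

# Hypothetical NIR values: water pixels are dark, vegetation is bright.
nir = np.array([5., 6., 4., 80., 85., 120., 125., 160., 158., 7.])

# Pass 1: coarse clustering; the cluster with the lowest centre is "water".
labels, centres = kmeans_1d(nir, k=2)
water = labels == np.argmin(centres)

# Pass 2: the known class is removed and the remaining pixels are
# re-clustered into finer bins, mimicking the AGCC peel-off strategy.
veg_labels, veg_centres = kmeans_1d(nir[~water], k=2)
print(int(water.sum()), len(veg_centres))
```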
Figure 2: AGCC Classification Accuracy Pyramid
QuickBird imagery’s greatest advantage is its very high spatial resolution. Panchromatic images
are available in 61~72cm ranges while multispectral imagery ranges from 2.44~2.88m. Spatial
resolution depends on the view angle used during the image acquisition. Spectrally, QuickBird
has only four bands and does not have the band discrimination ranges provided by some other
satellites (Landsat has 7 bands), but can still be used effectively in differentiating spectral
signatures. The band combinations and resolutions for QuickBird are as follows:
Spatial and Spectral Resolution

                      Panchromatic             Multispectral
Spectral range        450 ~ 900nm              Blue: 450 ~ 520nm; Green: 520 ~ 600nm;
                                               Red: 630 ~ 690nm; Near IR: 760 ~ 900nm
Pixel resolution      61 ~ 72cm                2.44 ~ 2.88m
Scene dimensions      27,552 × 27,424 pixels   6,888 × 6,856 pixels
Scene size            272 km2 (nadir) to 435 km2 (25° off-nadir) (105 to 168 mi2)
Swath width           16.5 km (nadir) to 20.8 km (25° off-nadir) (10.3 to 12.9 mi)

Image Accuracy
Positional accuracy: 23 meters (CE 90%), 14 meters (RMSE)
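As a quick cross-check (not part of the original report), the quoted nadir scene footprint can be approximated from the panchromatic pixel grid and pixel size:

```python
# Scene footprint implied by the panchromatic grid (27,552 x 27,424 pixels)
# at the nominal 61 cm pixel size. The result slightly exceeds the quoted
# 272 km2 nadir footprint because the true nadir GSD is a bit finer than 61 cm.
cols, rows = 27552, 27424
gsd_m = 0.61
area_km2 = (cols * gsd_m) * (rows * gsd_m) / 1e6
print(round(area_km2))
```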
For this study, four QuickBird scenes were obtained throughout Alberta (Figure 3). The selected scenes are described below.
The northeastern most image, 58738, is located in NTS block 84A03 and is just east of the
Wabasca Lakes. To the southwest of this image is 58739, in NTS block 83O16, which is
northeast of Lesser Slave Lake and southwest of the Wabasca Lakes. Further to the
southwest is image 58741 that straddles blocks 83J14 and 83O03 and is in the Swan Hills
region. The final study site is image 58742 and is located in the foothills near
Luscar/Cadomin in NTS block 83F03. The study sites represent a broad range of forested Alberta landscape, from wetlands to deciduous to coniferous dominated regions, and were selected accordingly.
Classification Process
Object-Based Classification
Compared to traditional image processing methods that use only pixel value clustering, eCognition software is the first commercially available product for object-oriented and multi-scale image analysis. It is designed to work on high spatial resolution or hyperspectral imagery and includes several useful parameters to develop a knowledge base for elaborate land use classification. Image objects are regions delineated from neighboring pixels, spatial properties, and visual borders, approximating what the human eye can differentiate. Underlying this human-like object differentiation is a hierarchical topological network, which has a considerable advantage as it allows the efficient propagation of many different kinds of relational information (Martin B et al., 2000). The resulting objects have not only the value and statistical information of the pixels that they consist of, but also carry texture, spatial features, and topology information in a common attribute table. The hierarchical network supports both top-down and bottom-up procedures for classifying image objects. A bottom-up procedure starts with the lowest level and merges distinguishable objects into higher levels. For a visual understanding, the hierarchical method is illustrated in Fig. 4.
Fig. 4: Hierarchical Network Example: Schematic and Image Objects. The upper part is the schematic view (from pixels up through image object levels); below are image objects on different levels of the hierarchical network.
Object-Oriented Classification Method
The principal procedure of eCognition is focused on multi-resolution segmentation and a
patented image object extraction. Each level in this hierarchical network is produced by a single
segmentation run. The whole image analysis process can be divided into the two principal
workflow steps: segmentation and classification (Baatz et al. 1996, Willhauck et al. 2000).
Segmentation
Segmentation means the process of grouping like elements by homogeneity and merging them
into distinct regions (Willhauck et al. 2000). To obtain segments suited for the desired
classification, the segmentation process can be manipulated by defining which of the loaded
channels are to be used by what weight and by the following three parameters: scale, color, and
form. For example, band 4 of a QuickBird image can be weighted more heavily so that segments are created with a bias toward that band. The scale parameter is an abstract value with no direct correlation to the object size measured in pixels; rather, it depends on the heterogeneity of the
data material. The color parameter balances the color homogeneity of a segment on one hand
and the homogeneity of shape on the other. A value of one on the color side will result in very
fractal segments with low standard deviation of pixel values. A zero color value would result in
very compact segments with higher color heterogeneity. The form parameter controls the form
features of an object by simultaneously balancing the criteria for smoothness of the object border
and the criteria for object compactness (Willhauck et al. 2000). Smoothness describes the similarity between the image object borders and a perfect square, while compactness describes how tightly the object's pixels are packed around its centre. By running several segmentations with different parameters, a hierarchical network of sensible image objects is built. Each object knows its relationships to its neighbor-, sub- and
super objects, which allows classification of relationships between objects. To ensure the
hierarchical structure of the network, two rules are mandatory. One is that the sub levels inherit
object borders of higher levels and the other is that super object borders restrain the segmentation
process. Supplementary to the normal segmentation two special types of segmentation are
provided. One is the knowledge-based segmentation, the other the construction of sub-objects.
The knowledge-based segmentation is a feature that allows the use of already made
classifications as additional information for the merging of objects. Segments of one class can
be fused on the same level or on a newly constructed higher level. The construction of sub-objects is used for special classification tasks such as texture analysis, or for classifications based on sub-levels. The segmentation process can principally be compared to the construction of a database of image objects.
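The role of the scale parameter can be illustrated with the colour component of the multiresolution merge criterion (after Baatz & Schäpe, on which eCognition's segmentation is based): two adjacent objects are merged only if the size-weighted increase in heterogeneity stays below a threshold tied to the scale parameter. This sketch uses made-up object values and omits the shape (smoothness/compactness) term:

```python
import numpy as np

def colour_heterogeneity(values):
    """Size-weighted standard deviation, as in the multiresolution criterion."""
    return len(values) * np.std(values)

def merge_cost(a, b, scale):
    """Increase in colour heterogeneity if objects a and b are merged.
    The merge is allowed only while the cost stays below scale**2."""
    merged = np.concatenate([a, b])
    cost = (colour_heterogeneity(merged)
            - colour_heterogeneity(a) - colour_heterogeneity(b))
    return cost, cost < scale ** 2

# Two similar grass objects merge cheaply; grass and water do not.
grass_a = np.array([100., 102., 101., 99.])
grass_b = np.array([103., 104., 102., 103.])
water = np.array([10., 12., 11., 13.])

cost_similar, ok_similar = merge_cost(grass_a, grass_b, scale=3)
cost_mixed, ok_mixed = merge_cost(grass_a, water, scale=3)
print(ok_similar, ok_mixed)  # True False
```

Raising the scale parameter raises the threshold, so merging continues longer and the resulting objects are larger — which matches the text's point that scale has no direct correlation to object size but depends on the heterogeneity of the data.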
Classification
In order to compare different objects using features like color and size, as well as to handle uncertain statements, fuzzy logic functions are used for feature extraction and land cover classification (Martin
B. et al., 2000). This allows classification of very complex tasks on one hand, and makes
classification transparent and adjustable in detail on the other. Fuzzy logic is a mathematical
approach to quantify uncertainty between classes (Willhauck, G. et. al., 2000). The basic idea is
to replace the two strict logical statements “yes” and “no” by the continuous range of [0...1],
where 0 means “exactly no” and 1 means “exactly yes”. That is, all values or expressions
between 0 and 1 represent a more or less certain state of yes and no in terms of membership to a
property. To translate the range of many different features into fuzzy logic expressions, the software eCognition uses two kinds of classifiers: membership functions and the nearest neighbor classifier
(Martin B. et al., 2000). All expressions of one class have to be combined to produce a result.
This is done using logical operators such as max, mean, or, and if/else. The features used for
classification can be divided into three categories: 1) object features such as color, texture, form and area; 2) classification related to sub-objects, super-objects and neighbor objects; 3) terms such as the nearest neighbor classifier or similarity to other classes (Willhauck et al., 2000).
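A membership function of the kind described above can be sketched in a few lines. The trapezoidal curves and NDVI thresholds below are hypothetical, chosen only to show how the continuous [0...1] range and the "max" operator combine:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside (a, d), 1 on [b, c],
    linear ramps in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical NDVI membership curves for two classes (illustrative only;
# real curves would come from samples or expert knowledge).
def mu_forest(ndvi):
    # d = 1.01 so an NDVI of exactly 1.0 still has full membership
    return trapezoid(ndvi, 0.4, 0.6, 1.0, 1.01)

def mu_grass(ndvi):
    return trapezoid(ndvi, 0.1, 0.2, 0.4, 0.6)

# An object with NDVI 0.55 belongs partly to both classes; the class
# with the highest membership wins (the "max" operator from the text).
ndvi = 0.55
memberships = {"forest": mu_forest(ndvi), "grass": mu_grass(ndvi)}
print(max(memberships, key=memberships.get))  # forest (0.75 vs 0.25)
```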
Sample selection is the final key process to performing a successful classification in eCognition.
Choosing representative samples is critically important as they define the range of spectral
signatures and image object properties that group other objects into the classification categories.
Fieldwork can then be used to supplement sample selection and should enable very specific discrimination of cover types. Also, eCognition includes a sample selection tool that displays the feature-space position of each sample as it is selected. This allows for a quick visual interpretation of a sample relative to other classes, and to
other samples. Evaluating membership to a class, or the capability of a sample to define a new class of its own, can be performed efficiently and quickly. Then, a nearest neighbor function is applied to the samples, and image objects are sorted into the appropriate category based upon best fit to the selected samples. Revision and reclassification occur until the results are accepted. The quality of the classification is sensitive to variance within the samples selected for the same class. Impure samples will cause objects in one class to stretch the feature space and reduce the ability to discriminate between classes. To improve the accuracy, samples verified by field training sites will be used and outliers removed.
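The nearest neighbor step described above can be sketched as follows. The feature vectors (mean NIR, texture contrast) and the sample values are hypothetical; the real classifier works in eCognition's object feature space:

```python
import numpy as np

# Hypothetical object features per training sample: (mean NIR, texture contrast).
samples = {
    "conifer":   np.array([[60., 8.], [65., 9.]]),
    "deciduous": np.array([[110., 5.], [115., 6.]]),
}

def classify(obj):
    """Assign the class of the closest training sample in feature space."""
    best, best_d = None, float("inf")
    for cls, feats in samples.items():
        d = np.min(np.linalg.norm(feats - obj, axis=1))
        if d < best_d:
            best, best_d = cls, d
    return best

# Two unclassified image objects are sorted by best fit to the samples.
print(classify(np.array([62., 8.5])), classify(np.array([108., 5.5])))
```

An impure sample (say, a "conifer" sample with deciduous-like values) would stretch the conifer region of this feature space toward the deciduous samples, which is exactly the discrimination loss the text warns about.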
[Workflow diagram: Raw Image Data → Multiresolution Segmentation → Create Polygons → (change scale parameters and re-segment until the segmentation is satisfactory) → Classification → Results]
Fig. 7: QuickBird Imagery and eCognition Classification: Images 58738 (03jul26) and 58739 (03jul18).
Fig. 8: QuickBird Imagery and eCognition Classification: Images 58741 (03jul18) and 58742 (03jul23).
Fig. 9: Comparison between AGCC and eCognition Classification: Images 58738 (03jul26) and 58739 (03jul18).
Fig. 10: Comparison between AGCC and eCognition Classification: Images 58741 (03jul18) and 58742 (03jul23).
Table 3: Comparison of extracted area between Landsat ETM+ and QuickBird based on AGCC Land Cover type, for images 58738 (03jul26), 58739 (03jul18), 58741 (03jul18), and 58742 (03jul23). [Table values not reproduced here.]
Comparing the extracted areas between Landsat ETM+ and QuickBird based on AGCC Land Cover type (Table 3), the area of closed coniferous dominated forest from the two different methods was
remarkably consistent. However, the AGCC product was created with manually guided unsupervised classification methods and included the use of vector shapefiles for features such as roads and water (AGCC Class 92), which exaggerate the extractable area. Therefore, to acquire more reliable eCognition results, we must use additional object shape and texture features to classify the road cover more closely (AGCC class 11 and 13), or include the same vector shapefiles.
In the case of 58738-03jul26, similar tendencies were found in AGCC Land Cover types such as mixed woods forest, shrubby wet, and closed coniferous. However, it was difficult to establish coherence between the analytical results of 58739-03jul18 and the AGCC land cover types, mainly because previously burned area vectors were recoded in at the time the classification based on the Landsat ETM+ image was produced. In addition, both images included cloud and shadow, which account for large differences between the classification results. Although the classification results from Landsat ETM+ for 58741-03jul18 include areas of Cloud/Shadow (AGCC class 112/113) and Clearcuts (AGCC class 30 and 32), the vegetated classes compared well. In particular, closed coniferous and closed deciduous forest (AGCC class 55) compared very successfully. In the case of 58742-03jul23, the distribution of closed coniferous forest (AGCC class 54) in Landsat ETM+ was quite similar to the result from the eCognition QuickBird classification. However, the relationship between deciduous forest and shrubland is poorly represented; these differences could be caused by differences in image acquisition dates and can be rectified by the inclusion of field data.
Field Work
Classification of the QuickBird imagery thus far has been performed a priori, and an in situ field
campaign is required to verify and refine these results before final conclusions can be drawn.
Discussion and Conclusion
Using eCognition for image processing is a highly significant approach for an exploratory
AGCC-relevant forest classification project in Alberta, Canada. It is believed that eCognition
has great potential to provide specific and accurate results faster and more efficiently for
forest classification. High spatial resolution satellite data has potential for detecting spatial
characteristics of forest ecology, such as vegetation distributions, and for differentiating
between forest classes with greater capability. Fig. 11 shows the linear correlation between the
AGCC classification from Landsat ETM+ and the eCognition analysis using QuickBird. R-squared (R²)
values were as high as 0.9211 (scene 58741, 03jul18).
Fig. 11. Scatter plots comparing eCognition QuickBird and AGCC Landsat ETM+ classifications.
[Panel: 58738, 03jul26; trendline y = 1.067x, R² = 0.5858; labelled classes include Closed Coniferous and Shrubby Wet.]
[Panel: eCognition for QuickBird versus AGCC classification (58739, 03jul18); trendline y = 0.4006x, R² = -0.2728; labelled classes include Closed Coniferous, Clearcuts, Shrubby Wet, Water, Mixed Wood, Road, Grassland, and Shrubland.]
[Panel: 58741, 03jul18; trendline y = 0.9589x, R² = 0.9211; labelled classes include Closed Coniferous, Closed Deciduous, Water, Shrubland, Mixed Wood, Cloud/Shadow, and Grassland.]
[Panel: eCognition results for QuickBird versus AGCC classification (58742, 03jul23); trendline y = 0.9204x, R² = 0.4415; labelled classes include Shrubland, Grassland, Water, Mixed Wood, and Deciduous.]
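The trendlines in the scatter plots above are zero-intercept fits (y = bx). A sketch of how such a slope and its R² can be computed, with illustrative areas rather than the report's values; note that R² measured against the mean of y can be negative when a forced-origin line fits worse than a horizontal line at the mean, which is one way a value like -0.2728 arises:

```python
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope for y = b*x, plus R^2 computed against the mean
    of y (as spreadsheet trendlines do; it can be negative when a
    forced-origin line fits worse than the mean of y)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = (x * y).sum() / (x * x).sum()
    ss_res = ((y - b * x) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return b, 1 - ss_res / ss_tot

# Illustrative per-class areas (ha); not the report's actual values
x = [100.0, 500.0, 2000.0, 4000.0]   # eCognition (from QuickBird)
y = [120.0, 480.0, 2100.0, 3900.0]   # AGCC (from Landsat ETM+)
b, r2 = fit_through_origin(x, y)
print(round(b, 4), round(r2, 4))
```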
Significant problems can be observed between the legend definitions and the high-resolution detail
observed by the sensor. During the selection of samples in the object-oriented classification, it
is difficult to define exact percentages for mixed wood forest classes such as Coniferous
Dominated Mixed Wood (20 - 80%, AGCC class 56). Above all, it is necessary to use ancillary
ground-truthing data to approach a more detailed classification level for AGCC. In situ field
data are essential to determine mixed-forest classes properly and to compare the results of the
classification methods, and thus must first be obtained before final conclusions. In addition, a
Geographic Information System (GIS)-based database that includes forest age and stand structure
in the mixed wood area should be utilized. While using eCognition, it was difficult to
discriminate between open coniferous and closed coniferous forest and to detect anthropogenic
features like roads and contiguous classes. These issues could probably be resolved by integrating
canopy closure or Leaf Area Index layers into the final classification. For future work, the
eCognition results should be compared against the AGCC Landsat data to assess error and to
construct a spatial distribution of error. These results can then be used to identify limitations
of the classification and to generate a methodology for reducing the error, or for acknowledging
a specific definition of its presence.
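The proposed error analysis amounts to cross-tabulating the two co-registered classifications and mapping where they disagree. A minimal sketch with hypothetical label rasters and class codes:

```python
import numpy as np

def confusion_and_error_map(ref, test, classes):
    """Cross-tabulate two co-registered label rasters and return a
    per-pixel disagreement mask (the spatial distribution of error)."""
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for r, t in zip(ref.ravel(), test.ravel()):
        cm[idx[r], idx[t]] += 1
    return cm, (ref != test)

# Toy rasters with AGCC-style class codes (illustrative)
ref = np.array([[54, 54], [55, 6]])
test = np.array([[54, 55], [55, 6]])
cm, err = confusion_and_error_map(ref, test, [54, 55, 6])
print(cm)
print(err.sum(), "disagreeing pixels")
```

Row sums of `cm` give the reference class totals; off-diagonal cells show which classes are confused, while `err` shows where.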
Several feature groups in eCognition are as yet untested, such as Object features, Class-related
features, and Global features, which were not included in this report but will be explored in the
second phase of this project. Object features such as Layer values, Shape, Texture, and Hierarchy
could enable more reliable results from eCognition. Incorporating panchromatic data and textural
information could also prove to provide better results (Willhauck, 2000). Using field data,
specific textures can be assigned to land cover types, which should improve classification
accuracy as well as the level of classification. Of important note is the extractable material
from haze-covered regions. Since eCognition discriminates objects on a local basis, objects within
these regions can still be correctly classified, as they do not mix spectrally with a
classification bin applied to the entire image. Thus, on a local object basis, haze-covered
regions can be classified faster than by a manual bin recode of an unsupervised classification,
and without the need for complex manual statistical approaches, object-oriented classification can
process much faster. For comparison, an unsupervised classification should be performed on the
QuickBird imagery using the same classification methods and the field data. However, given the
rich information content and high variability of the QuickBird imagery, an object-based approach
to classification is preferred.
References
Adams, Beverley J., Huyck, Charles K., Mansouri, Babak, Eguchi, Ronald T., and Shinozuka,
Masanobu, 2003. Application of High-Resolution Optical Satellite Imagery for Post-Earthquake
Damage Assessment: The 2003 Boumerdes (Algeria) and Bam (Iran) Earthquakes. In:
Multidisciplinary Center for Earthquake Engineering Research: Research Progress and
Accomplishments 2003-2004.
Available: http://mceer.buffalo.edu/publications/resaccom/0304/contents.asp
Asner, Gregory P. and Warner, Amanda S., 2003. Canopy shadow in IKONOS satellite observations of
tropical forests and savannas. Remote Sensing of Environment 87, pp. 521-533.
Bjorgo, Einar, 2000. Using very high spatial resolution multispectral satellite sensor imagery to
monitor refugee camps. International Journal of Remote Sensing 21, pp. 611-616.
Blaschke, T., Lang, S., Lorup, E., Strobl, J., and Zeil, P., 2000. Object-oriented image processing
in an integrated GIS/remote sensing environment and perspectives for environmental
applications. In: Cremers, A., Greve, K. (eds.): Environmental Information for Planning,
Politics and the Public. Metropolis Verlag, Marburg Vol 2: 555-570.
Caprioli, M., and Tarantino, E., 2003. Urban Features Recognition From VHR Satellite Data With An
Object-Oriented Approach.
Available: http://www.iuw.uni-vechta.de/personal/geoinf/jochen/papers/24.pdf
Caprioli, M., and Tarantino, E., 2001. Accuracy assessment of per field classification integrating
very fine spatial resolution satellite sensors imagery with topographic data. Journal of
Geospatial Engineering 3 (2), pp. 127-134.
Clark, David B., Soto Castro, Carlomagno, Alvarado, Luis Diego Alfaro and Read, Jane M.
2004. Quantifying mortality of tropical rain forest trees using high-spatial-resolution
satellite data. Ecology Letters. Vol.7. pp. 52-59.
Clark, David B., Read, Jane M., Clark, Matthew L., Cruz, Ana Murillo, Dotti, Marianela Fallas,
and Clark, Deborah A., 2004. Application of 1-m and 4-m Resolution Satellite Data to Ecological
Studies of Tropical Rain Forests. Ecological Applications 14, pp. 61-74.
Franklin, S. E., Wulder, M. A., and Gerylo, G. R. 2001. Texture analysis of IKONOS
panchromatic data for Douglas-fir forest age class separability in British Columbia.
International Journal of Remote Sensing 22, 2627– 2632.
Franklin, S.E., Hall, R.J., Moskal, L.M., A.J. Maudie and M.B. Lavigne. 2000. Incorporating
texture into classification of forest species composition from airborne multispectral
images. International Journal of Remote Sensing 21, pp. 61-79.
Franklin, S.E., Maudie, A.J. and M.B. Lavigne. 2001. Using spatial co-occurrence texture to
increase forest structure and species composition classification accuracy.
Photogrammetric Engineering and Remote Sensing 67, pp. 849-855.
Goetz, Scott J., Wright, Robb K., Smith, Andrew J., Zinecker, E., and Schau, E., 2003. IKONOS
imagery for resource management: Tree cover, impervious surfaces, and riparian buffer analyses
in the mid-Atlantic region. Remote Sensing of Environment 88, pp. 195-208.
Herold, M., Gardner, M., Hadley, B. and Roberts, D. 2002. The spectral dimension in urban land
cover mapping from high-resolution optical remote sensing data, in: Maktav, D.,
Juergens, C., Sunar-Erbek, F. and Akguen, H., Proceedings of the 3rd Symposium on
Remote Sensing of Urban Areas, June 2002, Istanbul, Turkey, Volume 1, pp. 77-85.
Kayitakire, F., Farcy, C., and Defourny, P., 2002. IKONOS-2 imagery potential for forest stands
mapping. Presented at ForestSAT Symposium, Heriot Watt University, Edinburgh, August 5th-9th.
Available: http://www.enge.ucl.ac.be/staff/curr/kayitaki/forestsat.pdf
Lawrence, R., Bunn, A., Powell, S., and Zambon, M., 2004. Classification of remotely sensed
imagery using stochastic gradient boosting as a refinement of classification tree analysis.
Remote Sensing of Environment 90, pp. 331-336.
Herold, Martin, Guenther, Sylvia, and Clarke, Keith C. Mapping Urban Areas in the Santa Barbara
South Coast. Available: http://www.definiens-imaging.com/documents/an/sb.pdf
Baatz, M., Benz, U., Dehghani, S., Heynen, M., Höltje, A., Hofmann, P., Lingenfelder, I.,
Mimler, M., Sohlbach, M., Weber, M., and Willhauck, G., 2000. eCognition User Guide 4.
Definiens Imaging, Munich.
Mumby, Peter J. and Edwards, Alasdair J., 2002. Mapping marine environments with IKONOS imagery:
enhanced spatial resolution can deliver greater thematic accuracy. Remote Sensing of
Environment 82, pp. 248-257.
Salajanu, D. and Olson, C. E., 2001. The significance of spatial resolution: identifying forest
cover from satellite data. Journal of Forestry 99, pp. 32-38.
Seelan, Santhosh K., Laguette, S., Casady, Grant M., and Seielstad, George A., 2003. Remote
sensing applications for precision agriculture: A learning community approach. Remote Sensing
of Environment 88, pp. 157-169.
Willhauck, G., Schneider, T., De Kok, R., and Ammer, U., 2000. Comparison of object oriented
classification techniques and standard image analysis for the use of change detection between
SPOT multispectral satellite images and aerial photos. In: ISPRS, Vol. XXXIII, Amsterdam.
Zhang, Chengqian, Franklin, Steven E., and Wulder, Michael A., 2004. Geostatistical and texture
analysis of airborne-acquired images used in forest classification. International Journal of
Remote Sensing 25 (4), pp. 859-865.
Appendix A.
A. ANTHROPOGENIC
2. Agriculture
21 (21) Cropland (including cereal crops and forage)
22 (22) Irrigated Land
23 (23) Agricultural Clearing (recently cleared land, often with windrows)
Clearcuts
30 (30) Undifferentiated clearcut
31 (31) Graminoid (grasses/sedges/forbs) dominated clear-cut
32 (32) Tree/shrub dominated clear-cut
33 (33) Tree (replanted - immature trees, <20 years old) dominated clearcut
Burns
40 (40) Undifferentiated burn
41 (41) Graminoid (grasses/sedges/forbs) dominated burn
42 (42) Tree/shrub dominated burn
43 (43) Tree dominated burn
44 (44) New Burn
B. UPLANDS
5453 (253) Closed Se/Sw leads conifer          154153 (173) Open Se/Sw leads conifer
55 (55) Closed Aspen, Balsam Poplar and/or Birch          155 (155) Open Aspen, Balsam Poplar and/or Birch
5 (5) Riparian Poplar
Mixed Wood
Coniferous Dominated Mixed Wood Forest (20 - 80% mixed wood cover based on occurrence)
Leading Species Coniferous Dominated Mixed Wood Forest (20 - 80% mixed wood cover based on occurrence)
5650 (180) Closed Fir mixed wood 60-80%          156150 (190) Open Fir mixed wood 60-80%
5651 (181) Closed Sb mixed wood 60-80%          156151 (191) Open Sb mixed wood 60-80%
5652 (182) Closed Pine mixed wood 60-80%          156152 (192) Open Pine mixed wood 60-80%
5653 (183) Closed Se/Sw mixed wood 60-80%          156153 (193) Open Se/Sw mixed wood 60-80%
6. Shrubland (>25% shrub cover and <6% tree cover)
Open shrubland
7. Grassland (<25% shrub cover and <6% tree cover) and Upland Forbs (<6% graminoid)
Upland Forbs
8. Wetlands
832 (132) Shrubby Wetlands (willow and birch) (less than 6% tree cover)
Shrub closed (crowns touching) (greater than 25% shrub)
85 (85) Lichen Bog (Lichen Understorey) (6 - 25 % tree cover)
89 (137) Wooded Fen (6 - 100 % tree cover) (Larch Drainage Flow Patterns)
891 (138) Wooded Fen (6 - 50 % tree cover)
892 (139) Wooded Fen (51 - 100 % tree cover)
9. Water
D. BARREN LANDS
E. UNCLASSIFIED