
Object-Based and Pixel-Based Classification Comparison of High-

Resolution QuickBird Data in Forested Alberta

Earth Observation Systems Laboratory


Department of Earth and Atmospheric Sciences
University of Alberta

G. Arturo Sánchez-Azofeifa¥, Joel D. D. Sinkwich, Kyung-Kuk Kang

¥ Corresponding author contact information: arturo.sanchez@ualberta.ca; phone: 1-780-492-1822; fax: 1-780-492-2030

Winter 2004
© Earth Observation Systems Laboratory (EOSL)
For internal Use – Do not circulate
Disclaimer

The present work reflects the opinion of the authors of this report alone and does not imply

endorsement by the Department of Earth and Atmospheric Sciences or the University of Alberta.

The University of Alberta encourages its staff to carry out and publish scientific and scholarly

research of the highest quality and consistent with the best ethical standards. Nevertheless, it

does not and cannot endorse or guarantee the results or the products of research performed by its

staff or students on or off campus, nor is it responsible for possible inaccuracies, errors or

misinterpretations. In commercialization of its research by a company, the latter must agree to

hold the University harmless in the event of legal action against the company.

Contents

Disclaimer ....................................................................................................................................... 1
Figures and Tables ...........................................................................................................................3
Introduction .................................................................................................................................... 4
Background .....................................................................................................................................5
AGCC data
• Definition, history, project methods ..................................................................................14
QuickBird Data Study Sites
• Specifications, imagery properties, study site locations ....................................................15
Classification Process
• Object-Based Classification ..............................................................................................19
• Object-Based Classification Method .................................................................................21
o Segmentation ........................................................................................................ 21
o Classification ........................................................................................................ 23
• eCognition Classification Results ...................................................…………...................24
• AGCC and eCognition Classification Comparison…………... ........................................28

Field Work…………………………………………………………………………………….…32
Discussion and Conclusions ..........................................................................................................33
References .....................................................................................................................................38
Appendix .......................................................................................................................................41

List of Figures

Fig. 1: Profile of Pixel Value Variability Within A Homogenous Conifer Stand: Landsat (Fig. 1A),

QuickBird (Fig.1B)…………………………………………………………………………… …...11

Fig. 2: AGCC Classification Accuracy Pyramid. ......….....................…….…...........................................15

Fig. 3: QuickBird Study Site Locations.......................................................................................................17

Fig. 4: Hierarchical network example: Schematic and Image Objects. ......... ......………..........................20

Fig. 5: Object-oriented, knowledge-based analysis with eCognition (Niemeyer, 2001).............. ..............21

Fig. 6: QuickBird eCognition Methodology Flowchart……………… ...............................……...............25

Fig. 7: QuickBird Imagery and eCognition Classification: Images 58738, 58739 .….........................26

Fig. 8: QuickBird Imagery and eCognition Classification: Images 58741, 58742 ..................……....27

Fig. 9: Comparison between AGCC and eCognition Classification: Images 58738, 58739.......................28

Fig. 10: Comparison between AGCC and eCognition Classification: Images 58741, 58742 ....................29

Fig. 11: Scatter Plot Comparing eCognition QuickBird and AGCC Landsat ETM+ Classifications ....... 34

List of Tables:

Table 1: Characteristics of QuickBird Sensor...........................................................................……….......16

Table 2: AGCC Landsat and QuickBird Scene Summary ..........................................................………....18

Table 3: Comparison of extracted area between Landsat ETM+ and QuickBird based on AGCC Land

Cover type ...........................................................................................................................30

Introduction

The successful launch of Digital Globe Corporation’s QuickBird satellite and its very high-

resolution (VHR) sensors have narrowed the gap between satellite and aerial imagery, permitting

easier, faster and potentially cheaper access to high-resolution data. Recently, VHR satellite

imagery has become available at spatial resolutions that rival those of aerial photography with the

added advantage of digital multispectral information and without problems caused by aircraft

flight (Goetz et al., 2003). These new data offer an increased ability to map land use, land

cover, and land use/cover change in higher detail and accuracy for use with even smaller scale

applications. This study will explore the capability of using QuickBird imagery to classify land

cover in forested regions according to the Alberta Ground Cover Classification (AGCC) scheme

and compare classification results with previous AGCC classifications based on 30m Landsat

images. Traditional image classification methods rely on pixel-based classifying where each

pixel in an image is individually analyzed and grouped with other similar pixels throughout the

entire image. Recently, object-based approaches of classifying have been gaining popularity as

they analyze and identify boundaries between homogenous regions of pixels and classify these

individually defined regions as ‘objects’. This new classification method will be investigated

using the QuickBird imagery in software called eCognition by Definiens Imaging. The project

will be divided into two phases of which the first will be an a priori classification and

methodology investigation and the second will involve more specific classification methods and

inclusion of accurate field data to verify and amend the classification results. The availability of

VHR data will provide for many exciting research opportunities and improve decision making

for numerous aspects of resource management. This new data source makes possible analyses

and levels of detail that were previously unattainable. For example, QuickBird

satellite data has the ability to differentiate forest at the crown level making the future potential

of this imagery both promising and exciting (Clark et al., 2004).

Background

QuickBird imagery can be used for land cover, forest inventory, decision-making support and

mapping projects that were once limited by larger spatial resolution. For example, Bjorgo (2000)

used high-resolution multispectral imagery to monitor a refugee camp on the Thailand border;

something that requires imagery on the meter scale, especially for population estimations based

upon the number of individual tents. Using the imagery, Bjorgo (2000) was able to correlate

population size with the area of the camp, based upon ground data for population density, using

the 3.3m resolution Russian KVR-1000 VHR satellite sensor. With <2.5m imagery like IKONOS or

QuickBird, even more accurate refugee population counts can be achieved, providing crucial

information not previously available to decision makers dealing with a humanitarian crisis. A

second new application of very high-resolution data is the update of urban maps with the

assistance of object-oriented software such as eCognition (Herold et al., 2002). Herold et al.

(2002) used an object-oriented approach to classify complex urban areas and produced an overall

classification accuracy of 78.4%. Distinguishable objects such as vegetation patches, roads and

urban land cover and land use types were clearly identifiable within their small study area.

These are encouraging results of the application of derived land cover mapping in studies

analyzing spatial urban structure, socio-economic characteristics and transportation infrastructure

with high-resolution data, complex land cover and with object oriented approaches (Herold et al.,

2002).

Other new applications include improved emergency response (Digital Globe), accurate forest

fire mapping, and damage estimations (Adams et al., 2003). More important functions, however,

are resource management applications such as tree cover mapping, impervious surface areas, and

riparian buffer zones that were investigated with IKONOS imagery by Goetz et al., (2003).

Higher resolution imagery carries a new set of problems relating to the logistical issues of

phenological and atmospheric conditions in the images, shadowing within canopies and between

scene elements, and limited spectral discrimination for cover types (Goetz et al., 2003). Despite

encountering these new problems, findings indicated that use of the VHR data would increase

the accuracy in assessment, monitoring, and management of wetlands, parks and protected areas,

and forest and carbon inventories (Clark et al., 2004). In addition, cost per unit area is much

higher for VHR imagery and footprint size is considerably smaller which can be a concern for

project criteria. Goetz et al. (2003) concluded that Landsat data does not have the necessary

spatial resolution to adequately map riparian buffer vegetation within the riparian zones, but

successful local scale analysis of the riparian buffers with VHR data provided statistically

meaningful samples from within the zones. Mumby & Edwards (2002) reinforced this conclusion

on resolution improvement with their study of marine environments. Findings

concluded that the IKONOS sensor lacked spectral capability to properly identify distinct marine

habitats, but patches and boundaries were mapped with much greater accuracy than previous

sensors. Kayitakire et al. (2002) investigated the capabilities of IKONOS-2 imagery in mapping

forest stands in a Belgian mixed-wood forest. Both per-pixel and per-parcel land cover

classifications were investigated and a per-parcel classification was favoured in discriminating

between dominant species types and for producing easier to use data. Kayitakire et al. (2002)

concluded that at high spatial resolution, accuracy in classification of forested land could be

improved by taking into account the locational context of a group of similar pixels, as opposed to

the spectral signature alone from each individual pixel. This solution is suggested due to the

radiometric value of an isolated pixel not providing enough information about the larger object of

interest, such as a forest stand (Kayitakire et al. 2002). Though direct accuracy ratings were

actually higher for per-pixel classification (88%), compared to the per-parcel (83%), the error in

a parcel classification concerns an entire object and its impact on the overall accuracy is more

pronounced (Kayitakire et al., 2002). Corrections in their classification, then, concern adjusting

individual objects, as opposed to changing a class, or bin, derived from pixel spectral clustering

(Kayitakire et al., 2002). Correcting a misclassified object is much easier than performing a

manual per-pixel recode of a spectral class, which can involve lengthy and tedious manual

adjustments.

Kayitakire et al. (2002) also concluded that more advanced texture analysis would lead to further

improved classification results. A key feature of the new VHR sensors is the ability to provide

within parcel textural data, such as with QuickBird’s 60cm panchromatic band, and at high

spatial resolution, texture analysis becomes just as important as spectral analysis (Kayitakire et

al., 2002). QuickBird imagery can be obtained in either 2.4m multi-spectral or 0.6m

panchromatic form, and the two can be combined to enhance the information available for

classification through a process called resolution merge. eCognition can extract textural

statistics from the panchromatic imagery while drawing spectral signatures from the multi-spectral image.

Remotely sensed data used for spectral signatures consist of using the reflectance values from the

spectral bands of the sensor while the texture statistics are derived from the grey levels such as

contrast, correlation and standard deviation, which have been found most powerful for forest

stand type discrimination (Kayitakire et al., 2002). Franklin et al. (2001) showed that higher

accuracy was obtained using texture data as opposed to spectral data alone when classifying

forest stands in CASI imagery of a spatial resolution of about 1m and also successfully used 1st

order variance and 2nd order homogeneity texture analysis of IKONOS imagery to divide forest

stands into age classes. Franklin et al. (2001) concluded that the second-order (spatial co-

occurrence homogeneity) texture values were the most effective in distinguishing between age

classes and also that using a window size of less than 25x25 pixels was not as effective. Zhang

et al. (2004) also reported an increase in accuracy of forest classification by using image texture

information in CASI images of Kananaskis. When added to a supervised classification, textural

information improves thematic map accuracy for finer level habitat discrimination (Mumby &

Edwards, 2002). Even small variations at the textural level can account for significant changes

in stand classification. Increased spatial resolutions allow for textural information to be utilized

even for small land parcels and when properly incorporated can improve accuracy.
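For illustration, a second-order (grey-level co-occurrence) homogeneity of the kind Franklin et al. (2001) found effective can be sketched as follows. This is a minimal stand-in, not their exact procedure; the quantization level is our own assumption, and the 25×25 window echoes their finding that smaller windows were less effective:

```python
import numpy as np

def glcm_homogeneity(window, levels=16):
    """Second-order (grey-level co-occurrence) homogeneity for one window,
    using a horizontal one-pixel offset. A minimal illustrative sketch,
    not the exact procedure of Franklin et al. (2001)."""
    w = np.asarray(window, dtype=float)
    # Quantize to a small number of grey levels so the matrix stays compact.
    q = np.floor((w - w.min()) / (np.ptp(w) + 1e-9) * levels)
    q = q.clip(0, levels - 1).astype(int)

    # Count co-occurrences of horizontally adjacent grey-level pairs.
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()  # normalize to joint probabilities

    # Homogeneity weights similar grey-level pairs most heavily.
    i, j = np.indices((levels, levels))
    return float((p / (1.0 + np.abs(i - j))).sum())

# A uniform 25x25 stand scores 1.0; a noisy stand of the same size scores lower.
smooth = np.full((25, 25), 100.0)
noisy = np.random.default_rng(0).uniform(0, 255, (25, 25))
print(glcm_homogeneity(smooth), ">", glcm_homogeneity(noisy))
```

A uniform stand concentrates all co-occurrence probability on the matrix diagonal and so scores the maximum homogeneity of 1.0, which is why this statistic separates uniform from textured stands.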

Because VHR satellite data contains significantly more detailed information than previous

satellites, such as Landsat TM, problems do arise. The main problem is a “salt & pepper” effect

that hinders the recognition of cover classes (Caprioli & Tarantino, 2003). These effects are not

limited to VHR data, but the associated increase in spatial resolution increases the variability

within land parcels and generates noise in the image. Classifying land cover with current 30m

resolution Landsat imagery works sufficiently well due to the general uniformity within a pixel,

even though that pixel comprises many more, smaller QuickBird pixels. A single

30m Landsat pixel contains approximately 150 multi-spectral QuickBird pixels. Increasing

the resolution introduces smaller differences within regions of an otherwise homogeneous patch

(see Figure 1). An area may have many colors and shadings due to smaller differences in the

cover and sun orientation that are not detectable at the lower resolution (Caprioli & Tarantino,

2003). Thus, the size of the objects, like a stand of trees, that need to be differentiated must be

bigger than the size of the noise in the land cover texture.
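The pixel-count claim above is easy to verify with back-of-the-envelope arithmetic (assuming ground sample distances of 30 m for Landsat and ~2.4 m for QuickBird multispectral at nadir):

```python
# Approximate number of QuickBird multispectral pixels covered by one
# Landsat pixel: the ratio of ground sample distances, squared.
landsat_gsd = 30.0       # Landsat ground sample distance, metres
quickbird_gsd = 2.4      # QuickBird multispectral GSD at nadir, metres

pixels_per_landsat = (landsat_gsd / quickbird_gsd) ** 2
print(round(pixels_per_landsat))  # ~156, i.e. roughly 150 QuickBird pixels
```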

Even though it is now possible to differentiate spatial entities that in the past were not detectable

due to low-resolution, or that previously were indistinguishable features, very high spatial

resolution imparts an increase in internal variability within land covers and thus the accuracy of

results may decrease on a per-pixel basis (Caprioli & Tarantino, 2003). The high-resolution data

exhibits greater within stand variability than lower resolution data given similar class definitions.

In order for the new image data to be utilized for land use mapping, improved analysis tools and

methods need to be investigated (Lawrence et al., 2004). When standard procedures of per-pixel

multispectral classification are applied to VHR data, the increase in spatial resolution leads to

intensification in uncertainty in the statistical definition of land cover classes during

classification and a decrease of accuracy in class identification (Caprioli & Tarantino, 2003).

Looking at VHR images it is very likely that a neighboring pixel belongs to the same land cover

class as the pixel under consideration, but may not be detected as so using traditional pixel based

methods (Blaschke et al., 2000). High-resolution does not necessarily lead to higher

classification accuracy and procedures that aim to reduce the negative impacts of the increased

spatial complexity must be investigated to ensure continued accuracy in land cover classification

with the addition of the satellite technologies. One such solution is to incorporate an object-

oriented approach.

Problems in classification are not unique to VHR imagery, but are emphasized by the high

spatial resolution relative to other imaging sensors such as Landsat TM. In the case of early

season deciduous forest Goetz et al., (2003) found that 4m resolution IKONOS imagery includes

data in the inter-canopy space, whereas with Landsat data, the pixels include a mixture of both

canopy and biomass woody stems which produce higher NDVI values. On a Landsat TM pixel

that is classified as deciduous forest, a group of large conifers that are within the undifferentiated

TM pixel would be detected independently, and represented by 5 or 10 VHR pixels resulting in

greater variation in foliage density and canopy shape (Caprioli & Tarantino, 2003). This

variation is illustrated in Figure 1. What appears to be a fairly homogeneous conifer stand in

Landsat imagery is, in fact, quite complex when viewed with QuickBird imagery (Figure 1). To

emphasize this point, a profile representing pixel values across the same plot (Band 4 in both

sensors) is compared.

Figure 1: Profile of Pixel Values Within a Homogeneous Conifer Stand: Landsat (Fig. 1A), QuickBird (Fig. 1B)

Landsat (Fig. 1A): Standard Deviation 1.716; Mean 32.773; Max 36; Min 30
QuickBird (Fig. 1B): Standard Deviation 65.646; Mean 267.303; Max 452; Min 111

As can be clearly observed, over the same profile a substantial increase in detail and a

corresponding increase in within parcel volatility is evident within the QuickBird imagery

(Figure 1b). These patterns of variance can be utilized on a textural level to search for patterns

corresponding to land cover types. However, this effect also may be caused by within canopy

shadowing so a cautious approach should be taken.
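One simple way to quantify the within-parcel volatility visible in Figure 1 is the coefficient of variation (standard deviation divided by mean) of each profile, computed here from the summary statistics reported with the figure:

```python
# Coefficient of variation (std/mean) of the Band 4 profiles in Figure 1,
# using the summary statistics reported with the figure.
landsat_std, landsat_mean = 1.716, 32.773
quickbird_std, quickbird_mean = 65.646, 267.303

cv_landsat = landsat_std / landsat_mean        # roughly 5.2%
cv_quickbird = quickbird_std / quickbird_mean  # roughly 24.6%
print(f"Landsat CV: {cv_landsat:.1%}, QuickBird CV: {cv_quickbird:.1%}")
```

Relative variability over the same stand is several times higher in the QuickBird profile, which is the effect the texture-based and object-based approaches discussed in this report are meant to exploit or suppress.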

Since the resolution is high enough to detect individual shadows, an increase in variability can

occur due to canopy shadowing. Shadows cast within the forest canopy onto adjacent trees are

now an important issue in classifications (Goetz et al., 2003). Asner & Warner (2003)

investigated the effects of variation of canopy shadow fraction across a broad range of forests

and savannas in Brazil and found that forests have substantial apparent shadow fractions due to

sun and satellite view angles. Observations using both red and near-infrared wavelength regions

were found to be highly sensitive to sub-pixel shadow fractions in tropical forests, accounting for

30–50% of the variance in red and NIR responses (Asner & Warner 2003). Shadowing is closely

linked to the characteristics of plant canopies and is a major contributor to the radiance or

reflectance properties of tropical forests and savannas. VHR data provides an opportunity to

observe plant canopies at spatial scales approaching the size of individual crowns and vegetation

clusters, which are affected by canopy shadowing (Franklin et al., 2001). The maneuverability

of high spatial resolution sensors and differences in daily sun angles leads to substantial variation

in viewing and solar geometry during imaging and can result in changed shadow fractions

between image acquisitions (Asner & Warner 2003). Issues caused by the shadows can be

reduced through the use of multi-temporal imagery by obtaining images from several sun angles,

but this may be difficult depending on the sensor used and the budget of a project (Goetz et al.,

2003). Three-dimensionality of the forest structure controls the shadow variations and VHR data

can now be used for further study of these factors.

Salajanu & Olson (2001) investigated the significance of spatial resolution in identifying forest

cover with both Landsat and SPOT data. They concluded that the overall accuracy of careful

classification does increase with higher spatial resolution. A point that was repeatedly stressed

in many studies is the importance of accurate ground control points and that improving spatial

resolution is not a substitute for good ground data to correlate with the image (Toutin & Chénier,

2004). Higher spatial resolution satellite data should be coupled with increased detail in both

land cover definitions and accuracy of field data acquired from ground based control sources.

GPS points must be taken with an accuracy of 1m or better, within forest patches that exhibit

representative regions, or on locations that are easily identifiable, or segmented, such as lone

forest patches (Toutin & Chénier, 2004). Field locations can be also located within a patch to

detect internal variations. For example, points can be taken within a clear-cut to identify

regrowth levels within a patch that otherwise would appear fairly uniform in a Landsat image.

The first phase of the project will be an a priori classification of the QuickBird imagery using the

object-oriented approach applied by eCognition. This approach should minimize the effects such

as speckling and canopy shadowing while ensuring that differentiation of objects using their

spectral signatures is retained. The resulting classification will be directly compared to the

current AGCC classification of clipped Landsat imagery and tested for correlation. It is expected

that the QuickBird imagery will allow for differentiation of more AGCC classes and that

eCognition overall will provide faster and improved results. Phase two, to be performed in

Spring 2005, will then verify and amend the resulting classification by incorporating data from

intensive field-work in the four study sites.

Alberta Ground Cover Characterization (AGCC) Data

In its most basic definition, Alberta Ground Cover Characterization (AGCC) is a land use/land

cover classification scheme derived for land cover types in Alberta. Its main purpose is to serve

as the main labeling tool to produce a comprehensive land cover map for the forested regions of

Alberta with the final product used by a variety of partners for planning, conservation, and

development within the province. The AGCC process is based on a hierarchical classification

method that takes into consideration the different spectral and spatial properties of the different

land cover types present in the province. This process combines extensive fieldwork and expert

knowledge to categorize the spectral signatures extracted from the Landsat image into one of

many different land cover classes. The AGCC classification scheme is comprised of broad types

of land cover (open/closed forest, wetland, etc.) and a more detailed sub-category (conifer,

deciduous, bog etc.). To classify imagery, AGCC uses unsupervised pixel based classification to

group like pixels into sets. Known extractable features are then removed (such as water), thus

reducing the number of image bins; subsequent classifications are then performed on the

remaining image. Extracted classes are then further sub-divided into more detailed classes as far

as spectral signatures can be used to differentiate. The process yields an overall accuracy of

~80% at 30m resolution. See Appendix A for classification scheme. Figure 2 presents a

summary of the AGCC classification levels and their overall average accuracy.
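The hierarchical peel-off procedure described above can be sketched with a minimal unsupervised classifier. The synthetic two-band spectra, the k-means stand-in, and the lowest-mean-NIR water rule below are illustrative assumptions, not the actual AGCC workflow:

```python
import numpy as np

def kmeans(pixels, k, iters=20):
    """Minimal unsupervised clustering over pixel spectra, standing in for
    the unsupervised pixel-based classifier that AGCC relies on."""
    # Deterministic init: spread the initial centers across the data.
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = ((pixels[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

# Synthetic two-band ([red, NIR]) spectra: a dark water-like blob and a
# brighter vegetation-like blob. Values are made up for illustration.
rng = np.random.default_rng(1)
pixels = np.vstack([
    rng.normal([20, 15], 3, (200, 2)),     # water-like pixels
    rng.normal([120, 180], 10, (400, 2)),  # vegetation-like pixels
])

# Step 1: coarse unsupervised split. Step 2: peel off the extractable class
# (water, taken here as lowest mean NIR). Step 3: re-cluster the remaining
# pixels at finer detail, as the AGCC hierarchy does.
labels = kmeans(pixels, k=2)
water = int(np.argmin([pixels[labels == c][:, 1].mean() for c in range(2)]))
remaining = pixels[labels != water]
sub_labels = kmeans(remaining, k=3)
print(len(remaining), "non-water pixels re-clustered into finer classes")
```

Each pass removes a class that is spectrally easy to isolate, so later passes cluster a smaller, more homogeneous set of pixels and can resolve finer distinctions.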

Figure 2: AGCC Classification Accuracy Pyramid

QuickBird Data and Study Sites

QuickBird imagery’s greatest advantage is its very high spatial resolution. Panchromatic images

are available at 61~72cm resolution while multispectral imagery ranges from 2.44~2.88m. Spatial

resolution depends on the view angle used during the image acquisition. Spectrally, QuickBird

has only four bands and does not have the band discrimination ranges provided by some other

satellites (Landsat has 7 bands), but can still be used effectively in differentiating spectral

signatures. The band combinations and resolutions for QuickBird are as follows:

Spatial and Spectral Resolution

                     Panchromatic             Multispectral
                     (Black & White)          Blue          Green         Red           Near IR
Spectral Range       450 ~ 900nm              450 ~ 520nm   520 ~ 600nm   630 ~ 690nm   760 ~ 900nm
Pixel Resolution     61cm ~ 72cm              2.44 ~ 2.88m
Scene Dimensions     27,552 × 27,424 pixels   6,888 × 6,856 pixels
Scene Size           272 km² (nadir) to 435 km² (25° off-nadir) (105 to 168 mi²)
Swath Width          16.5 km (nadir) to 20.8 km (25° off-nadir) (10.3 to 12.9 mi)

Image Accuracy

Positional Accuracy: 23 meters (CE 90%); 14 meters (RMSE)

Processing

Radiometric Corrections: relative radiometric response between detectors; non-responsive detector fill; conversion to absolute radiometry
Sensor Corrections: internal detector geometry; optical distortion; scan distortion; any line-rate variations; registration of the multispectral bands
Resampling Options: 4×4 cubic convolution; 2×2 bilinear; nearest neighbor; 8-point sinc; MTF kernel

Order Parameters

Product Type: Panchromatic, Multispectral, or both
Image Bits/Pixel: 8 or 16 bits
File Formats: GeoTIFF 1.0, NITF 2.1, or NITF 2.0

Table 1: Characteristics of the QuickBird Sensor

(Source: Digital Globe Inc., http://www.digitalglobe.com/)

For this study, four QuickBird scenes were obtained throughout Alberta (Figure 3). The selected

images were acquired during the summer of 2003.

Figure 3: QuickBird Study Site Locations

The northeastern most image, 58738, is located in NTS block 84A03 and is just east of the

Wabasca Lakes. To the southwest of this image is 58739, in NTS block 83O16, which is

northeast of Lesser Slave Lake and southwest of the Wabasca Lakes. Further to the

southwest is image 58741 that straddles blocks 83J14 and 83O03 and is in the Swan Hills

region. The final study site is image 58742 and is located in the foothills near

Luscar/Cadomin in NTS block 83F03. The study sites represent a broad range of forested

Alberta landscape from wetlands to deciduous to coniferous dominated regions and were

selected to provide a good overview of terrain for classification. Additional information

regarding each one of these images is presented on Table 2.

QuickBird ID    Lat/Long (Image Center)    Acquisition Time    LANDSAT Path / Row
58738 56.06824 /113.21506 1828.07, July 26, 2003 P42 R21 / P43 R21

58739 55.81320 /114.25690 1838.19, July 18, 2003 P43 R21

58741 54.97351 /115.12474 1838.55, July 18, 2003 P43 R22

58742 53.14711 /117.18673 1844.24, July 23, 2003 P45 R23

Table 2: AGCC Landsat and QuickBird Scene Summary

Classification Process

Object-Based Classification
Compared to traditional image processing methods that use only pixel value clustering,

eCognition software is the first commercially available product for object-oriented and multi-

scale image analysis. It is designed to work on high spatial resolution or hyperspectral imagery

and includes several useful parameters to develop a knowledge base for elaborate land use

classification. eCognition can segment a multispectral image into homogeneous objects, or

regions, based on neighboring pixels, spatial properties, and visual borders approximating what

the human eye can differentiate. This expectation of human-equivalent object differentiation

cannot be fulfilled by common, pixel-based approaches. Instead, the object-oriented resulting

topological network has a considerable advantage as it allows the efficient propagation of many

different kinds of relational information (Martin B. et al., 2000). The resulting image objects

carry not only the values and statistical information of the pixels they consist of, but also

texture, spatial features, and topology information in a common attribute table. The

characteristic of an object-oriented approach is a circular interplay between processing and

classifying image objects. A bottom up procedure starts with the lowest level and merges

distinguishable objects into higher levels. For a visual understanding, the hierarchical methods

and multiple segmentation examples are presented in Figure 4:


Fig. 4 Hierarchical Network Example: Schematic and Image Objects. The upper panel is the schematic view
and the lower panels are image objects on different levels of the hierarchical network.

Object-Oriented Classification Method
The principal procedure of eCognition is focused on multi-resolution segmentation and a

patented image object extraction. Each level in this hierarchical network is produced by a single

segmentation run. The whole image analysis process can be divided into the two principal

workflow steps: segmentation and classification (Baatz et al. 1996, Willhauck et al. 2000).

Fig. 5 Object-oriented, knowledge-based analysis with eCognition (Niemeyer, 2001)

Segmentation

Segmentation means the process of grouping like elements by homogeneity and merging them

into distinct regions (Willhauck et al. 2000). To obtain segments suited for the desired

classification, the segmentation process can be manipulated by defining which of the loaded

channels are to be used by what weight and by the following three parameters: scale, color, and

form. For example, band 4 of a QuickBird image can be weighted more heavily so that segments

are created with a bias to that band. The scale parameter is an abstract value with no direct

correlation to the object size measured in pixels. Rather, it depends on the heterogeneity of the

data material. The color parameter balances the color homogeneity of a segment on one hand

and the homogeneity of shape on the other. A value of one on the color side will result in very

fractal segments with low standard deviation of pixel values. A zero color value would result in

very compact segments with higher color heterogeneity. The form parameter controls the form

features of an object by simultaneously balancing the criteria for smoothness of the object border

and the criteria for object compactness (Willhauck et al. 2000). Smoothness describes the

similarity between the image object borders and a perfect square while compactness describes

the closeness of pixels clustered in an object by comparing it to a circle. Using repeated

segmentations with different parameters, the schematic view for a hierarchical network of

sensible image objects is built. Each object knows its relationships to its neighbor-, sub- and

super objects, which allows classification of relationships between objects. To ensure the

hierarchical structure of the network, two rules are mandatory. One is that the sub levels inherit

object borders of higher levels and the other is that super object borders restrain the segmentation

process. Supplementary to normal segmentation, two special types of segmentation are

provided: knowledge-based segmentation and the construction of sub-objects.

The knowledge-based segmentation is a feature that allows the use of already made

classifications as additional information for the merging of objects. Segments of one class can

be fused on the same level or a higher level than is constructed. The construction of sub-objects

are used for special classification tasks such as texture, or from classifications based on sub-

levels. The segmentation process can principally be compared to the construction of a database

with information of each image object (Willhauck, G. et. al., 2000).
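The color/form weighting described above can be sketched as a simple cost function. The sketch below is illustrative only: the function and variable names are my own, and the heterogeneity measures loosely follow Baatz and Schäpe (1999) rather than eCognition's exact internal formulation.

```python
import math

def merge_cost(w_color, w_compact, std_dev, n_pixels, perimeter, bbox_perimeter):
    """Combined heterogeneity of a candidate segment under the
    color/form weighting described above (illustrative sketch only)."""
    h_color = std_dev                             # spectral heterogeneity
    h_compact = perimeter / math.sqrt(n_pixels)   # deviation from a compact, circle-like shape
    h_smooth = perimeter / bbox_perimeter         # deviation from a smooth, box-like border
    h_shape = w_compact * h_compact + (1 - w_compact) * h_smooth
    return w_color * h_color + (1 - w_color) * h_shape

# A pure color criterion (w_color = 1) ignores shape entirely:
cost = merge_cost(w_color=1.0, w_compact=0.5,
                  std_dev=2.0, n_pixels=100, perimeter=40, bbox_perimeter=40)
```

Segments are grown by repeatedly merging the pair whose merge keeps this cost below a threshold governed by the scale parameter; a larger scale parameter therefore tolerates more heterogeneity and produces larger objects.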

Classification

In order to compare different objects using features such as color and size, as well as uncertain

statements, fuzzy logic functions are used for feature extraction and land cover classification

(Martin et al. 2000). This allows very complex classification tasks on one hand, and makes the

classification transparent and adjustable in detail on the other. Fuzzy logic is a mathematical

approach to quantifying uncertainty between classes (Willhauck et al. 2000). The basic idea is

to replace the two strict logical statements “yes” and “no” with the continuous range [0...1],

where 0 means “exactly no” and 1 means “exactly yes”. That is, all values between 0 and 1

represent a more or less certain state of yes or no in terms of membership to a property. To

translate the ranges of very different features into fuzzy logic expressions, eCognition uses two

kinds of classifiers: membership functions and the nearest neighbor classifier (Martin et al.

2000). All expressions of one class have to be combined to produce a result; this is done using

logical operators such as max, mean, or, and if/else. The features used for classification can be

divided into three categories: 1) object features such as color, texture, form, and area; 2)

classification related to sub-objects, super-objects, and neighbor objects; and 3) terms such as

the nearest neighbor classifier or similarity to other classes (Willhauck et al. 2000).
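The fuzzy mapping of a feature value onto [0...1] and the combination of per-feature expressions can be sketched as follows. This is a minimal illustration, not eCognition's implementation; the function names and the linear membership shape are assumptions.

```python
def fuzzy_membership(value, low, high):
    """Linear fuzzy membership: 0 ("exactly no") at or below `low`,
    1 ("exactly yes") at or above `high`, graded in between."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def combine(memberships, op="min"):
    """Combine the fuzzy expressions of one class with a logical operator:
    fuzzy "and" is commonly taken as min, fuzzy "or" as max, and "mean"
    averages the memberships."""
    if op == "min":
        return min(memberships)
    if op == "max":
        return max(memberships)
    return sum(memberships) / len(memberships)
```

For example, an object whose brightness membership is 0.8 but whose shape membership is only 0.2 receives a class membership of 0.2 under the "min" operator but 0.5 under "mean", which is why the choice of operator is part of the class description.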

Sample selection is the final key process in performing a successful classification in eCognition.

Choosing representative samples is critically important, as they define the range of spectral

signatures and image object properties that group other objects into the classification categories.

Fieldwork can then be used to supplement sample selection and should enable very specific

discrimination of cover types. eCognition also includes a sample selection tool that displays

information on a potential sample as compared to previously selected samples. This allows a

quick visual interpretation of a sample relative to other classes and to other samples, so that

membership to a class, or the capability of a sample to define a new class of its own, can be

assessed efficiently and quickly. A nearest neighbor function is then applied to the samples, and

image objects are sorted into the appropriate category based on best fit to the selected samples.

Revision and reclassification occur until the results are accepted. The classification results

demonstrate that the nearest neighbor classifier is sensitive to outliers caused by variance among

samples selected for the same class: impure samples cause objects in one class to stretch the

feature space and reduce the ability to discriminate classes. To improve accuracy, samples

verified at field training sites will be used, and outlier samples will be identified and removed or

modified.
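The nearest neighbor step described above can be sketched as assigning each image object to the class of its closest training sample in feature space. This is a minimal stand-in, not eCognition's standard nearest neighbor itself; the function name, feature tuples, and class labels are illustrative.

```python
import math

def nearest_neighbor_classify(obj_features, samples):
    """Assign an image object to the class of its nearest training sample
    in feature space (Euclidean distance). `samples` maps a class name to
    a list of feature tuples; names and features are hypothetical."""
    best_class, best_dist = None, float("inf")
    for cls, sample_list in samples.items():
        for feats in sample_list:
            d = math.dist(obj_features, feats)  # Euclidean distance in feature space
            if d < best_dist:
                best_class, best_dist = cls, d
    return best_class

# Two hypothetical classes, each described by one (mean NIR, mean red) sample:
samples = {"closed coniferous": [(0.10, 0.80)], "water": [(0.90, 0.05)]}
```

The sensitivity to outliers noted above follows directly from this rule: a single impure sample added to a class pulls every object near it into that class, stretching the class's effective region of feature space.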

[Flowchart: Raw image data → import QuickBird image and thematic layers → create new project → set scale parameter, shape factor, and compactness/smoothness → multiresolution segmentation → create polygons, changing the scale parameter until the segmentation is satisfactory → input parent classes, create class hierarchy, divide child classes → select samples using Select Samples and the Sample Editor → apply standard nearest neighbor (NN) to classes → classification results.]

Fig. 6 QuickBird eCognition Methodology Flowchart

eCognition Classification Results

58738, 03jul26

QuickBird (Multispectral, 2.4m) eCognition

58739, 03jul18

QuickBird (Multispectral, 2.4m) eCognition

Fig. 7 QuickBird Imagery and eCognition Classification (58738, 58739)

eCognition Classification Results

58741, 03jul18

QuickBird (Multispectral, 2.4m) eCognition

58742, 03jul23

QuickBird (Multispectral, 2.4m) eCognition

Fig. 8 QuickBird Imagery and eCognition Classification (58741, 58742)

AGCC and eCognition Classification Comparison

58738, 03jul26

AGCC (from Landsat ETM+) eCognition (from QuickBird)

58739, 03jul18

AGCC (from Landsat ETM+) eCognition (from QuickBird)

Fig. 9 Comparison between AGCC and eCognition (58738, 58739)

AGCC and eCognition Classification Comparison

58741, 03jul18

AGCC (from Landsat ETM+) eCognition (from QuickBird)

58742, 03jul23

AGCC (from Landsat ETM+) eCognition (from QuickBird)

Fig. 10 Comparison between AGCC and eCognition (58741, 58742)

Table 3: Comparison of extracted area between Landsat ETM+ and QuickBird based on

AGCC Land Cover type

58738, 03jul26

AGCC Land Cover type Landsat ETM+ QuickBird


extracted area (Hectares) extracted area (Hectares)
Roads 183.56(1.68%) 11.05(0.11%)
Water 46.69(0.43%) 11.56(0.12%)
Closed Coniferous 6288.69(57.63%) 3828.13(38.25%)
Closed Deciduous 721.19(6.81%) 338.63(3.38%)
Mixed Wood 689.19(6.32%) 773.13(7.72%)
Grass Wet 1020.13(9.35%) 3384.70(33.82%)
Shrubby Wet 1963.50(17.99%) 1662.20(16.61%)
SUM 10912.9(100.00%) 10009.41(100.00%)

58739, 03jul18

AGCC Land Cover type Landsat ETM+ QuickBird


extracted area (Hectares) extracted area (Hectares)
Roads 135.00(1.18%) 0
Water 647.81(5.65%) 410.75(4.07%)
Closed Coniferous 2712.19(23.66%) 1556.28(15.41%)
Closed Deciduous 1637.56(14.29%) 1688.40(16.71%)
Mixed Wood 540.88(4.72%) 4407.64(43.63%)
Grassland 89.31(0.78%) 233.60(2.31%)
Shrubland 23.25(0.20%) 175.26(1.73%)
Shrubby Wet 745.69(6.51%) 1089.16(10.78%)
Clearcuts 715.06(6.24%) 0
Burns 4215.81(36.78%) 0
Clouds 0 157.52(1.56%)
Shadow 0 383.13(3.79%)
SUM 11462.56(100.00%) 10101.73(100.00%)

(Continued)

58741, 03jul18

AGCC Land Cover type Landsat ETM+ QuickBird


extracted area (Hectares) extracted area (Hectares)
Water 533.94(4.96%) 18.28(0.18%)
Closed Coniferous 5134.44(47.71%) 5056.07(50.02%)
Closed Deciduous 1706.13(15.85%) 1598.19(15.81%)
Mixed Wood 728.31(6.77%) 1722.62(17.04%)
Shrubland 423.69(3.94%) 588.99(5.83%)
Grass Wet 144.13(1.34%) 533.05(5.27%)
Cloud 602.94(5.60%) 303.72(3.00%)
Shadow 423.88(3.94%) 287.32(2.84%)
Roads 477.25(4.43%) 0
Clearcuts 587.69(5.46%) 0
SUM 10762.38(100.00%) 10108.24(100.00%)

58742, 03jul23

AGCC Land Cover type Landsat ETM+ QuickBird


extracted area (Hectares) extracted area (Hectares)
Water 1146.15(9.91%) 146.28(1.45%)
Closed Coniferous 4510.53(39.00%) 4424.14(44.00%)
Deciduous 412.38(3.57%) 1772.44(17.63%)
Mixed Wood 803.52(6.95%) 1107.72(11.02%)
Grassland 996.39(8.61%) 742.03(7.38%)
Shrubland 1880.55(16.26%) 43.53(0.43%)
Grass Wet 170.64(1.48%) 736.88(7.33%)
Shrubby Wet 635.13(5.49%) 1081.74(10.76%)
Roads 63.63(0.55%) 0
Unclassified 947.25(8.19%) 0
SUM 11566.17(100.00%) 10054.75(100.00%)

Comparing the extracted areas between Landsat ETM+ and QuickBird based on AGCC Land Cover type

(Table 3), the extracted area of closed coniferous dominated forest from the two different methods was

remarkably consistent. However, the AGCC, created from an unsupervised Landsat classification, relied

on manual unsupervised classification methods and included vector shapefiles for features such as roads

and water (AGCC class 92) that exaggerate the extractable area. Therefore, to acquire more reliable

eCognition results, we must either use additional object shape and texture features to classify the road

cover (AGCC classes 11 and 13) more closely, or include the same vector shapefiles.

In the case of 58738-03jul26, similar tendencies were found for AGCC Land Cover types such as mixed

wood forest, shrubby wet, and closed coniferous. For 58739-03jul18, however, it was difficult to establish

coherence with the AGCC land cover types, mainly because previously burned area vectors were recoded

into the classification based on the Landsat ETM+ image at the time it was produced. In addition, both

images included cloud and shadow, which account for large differences between the classification

results. Although the Landsat ETM+ classification of 58741-03jul18 includes areas of Cloud/Shadow

(AGCC classes 112/113) and Clearcuts (AGCC classes 30 and 32), the vegetated classes compared well;

in particular, closed coniferous and closed deciduous forest (AGCC class 55) compared very successfully.

In the case of 58742-03jul23, the distribution of closed coniferous forest (AGCC class 54) in Landsat

ETM+ was somewhat similar to the result from the eCognition QuickBird classification. However, the

relationship between deciduous forest and shrubland is poorly represented; these differences could

be caused by differences in image acquisition dates and can be rectified by the inclusion of field data.
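The comparisons above reduce to comparing each class's share of the total extracted area between the two maps, since the mapped totals differ. A minimal sketch (the function names are my own):

```python
def class_share(areas_ha):
    """Per-class share (%) of the total extracted area for one classification,
    given a mapping of class name to area in hectares."""
    total = sum(areas_ha.values())
    return {cls: 100.0 * a / total for cls, a in areas_ha.items()}

def share_difference(shares_a, shares_b):
    """Per-class difference in share (percentage points) between two
    classifications; classes absent from one map count as zero."""
    classes = set(shares_a) | set(shares_b)
    return {c: shares_a.get(c, 0.0) - shares_b.get(c, 0.0) for c in classes}
```

Classes mapped in only one product (e.g. Roads, Clearcuts, or Burns, which appear only in the Landsat ETM+ column of Table 3) then show up as large signed differences rather than silently dropping out of the comparison.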

Field Work

Classification of the QuickBird imagery thus far has been performed a priori; an in situ field

campaign has been planned for spring 2005.

Discussion and Conclusion

Using eCognition for image processing is a promising approach for an exploratory,

AGCC-relevant forest classification project in Alberta, Canada. eCognition appears to have

great potential to provide specific and accurate results for forest classification faster and more

efficiently than pixel-based classification approaches. High spatial resolution satellite data has

potential for detecting spatial characteristics of forest ecology, such as vegetation distributions,

and for differentiating between forest classes with greater capability. Fig. 11 shows the linear

correlation between the AGCC classification from Landsat ETM+ and the eCognition analysis

of QuickBird. The R-squared (R²) value was markedly higher for 58741-03jul18 (0.9211) than

for the other images.
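The trend lines in Fig. 11 are forced through the origin (y = bx), which is also why one panel can report a negative R²: with no intercept, the residual sum of squares can exceed the total sum of squares about the mean. A minimal sketch of this fit (the function name is my own):

```python
def fit_through_origin(x, y):
    """Least-squares slope b for a trend line y = b*x forced through the
    origin, and the associated R^2. With no intercept term, R^2 can be
    negative when residuals exceed the variance about the mean of y."""
    b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    ss_res = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return b, 1.0 - ss_res / ss_tot
```

Here x and y would be the per-class extracted areas from the QuickBird and Landsat ETM+ classifications respectively; a scene where the two maps disagree strongly (such as 58739-03jul18) can therefore yield R² below zero even though a slope is still reported.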

Fig. 11 Scatter Plots Comparing eCognition QuickBird and AGCC Landsat ETM+ Classifications

[Four scatter plots of extracted class area (hectares), AGCC (from Landsat ETM+) versus eCognition (from QuickBird), each with a linear trend line through the origin: 58738-03jul26 (y = 1.067x, R² = 0.5858); 58739-03jul18 (y = 0.4006x, R² = -0.2728); 58741-03jul18 (y = 0.9589x, R² = 0.9211); 58742-03jul23 (y = 0.9204x, R² = 0.4415). Labeled classes include closed coniferous, closed deciduous, mixed wood, shrubland, grassland, shrubby wet, grass wet, water, roads, clearcuts, cloud, and shadow.]

Significant problems can be observed between the legend definitions and the high resolution

observed by the sensor. During sample selection in the object-oriented classification, it is

difficult to define exact percentages for mixed wood forest areas such as Mixed Wood

Coniferous Dominated (20 - 80%, AGCC class 56). Above all, it is necessary to use ancillary

ground-truthing data to approach a more detailed classification level for the AGCC. In situ

field data are essential to determine the mixed-forest classes properly and to compare the results

of the classification methods, and thus must be obtained before final conclusions can be drawn.

In addition, a Geographic Information System (GIS)-based database that includes forest age and

stand structure in the mixed wood area should be utilized. While using eCognition, it was

difficult to discriminate between open coniferous and closed coniferous and to detect

anthropogenic features such as roads and contiguous classes. These issues could probably be

resolved by integrating canopy closure or Leaf Area Index layers into the final classification.

Future work will include a statistically oriented analysis of the significance of differences

between the AGCC Landsat data and the eCognition results, to assess error and construct a

spatial distribution of error. These results can then be used to identify limitations of the

classification and to generate a methodology for reducing the error, or for explicitly

characterizing its presence.

From the viewpoint of object-oriented classification with eCognition, many more specific

eCognition features remain untested, such as Object features, Class-related features, and Global

features, which were not included in this report but will be explored in the second phase of this

project. Object features such as layer value, shape, texture, and hierarchy could enable more

reliable results from eCognition. Incorporating panchromatic data and textural information

could also provide better results (Willhauck et al. 2000). Using field data, specific textures can

be assigned to land cover types, which should improve both classification accuracy and the

level of classification. Of particular note is the information extractable from haze-covered

regions. Since eCognition discriminates objects on a local basis, objects within these regions

can still be correctly classified, as they are not mixed spectrally into a classification bin applied

to the entire image. Thus, on a local object basis, haze-covered regions can be classified faster

than by manually recoding the bins of an unsupervised classification, and without the need to

apply complex atmospheric corrections to the data. Compared with unsupervised classification

by manual statistical approaches, object-oriented classification can proceed much faster using

eCognition's parameters and options. To properly compare the results of the classification

methods, an unsupervised classification should be performed on the QuickBird imagery using

the same classification scheme, and an accuracy assessment performed and compared for both

classification methods using the field data. However, given the rich information content and

high variability of the QuickBird imagery, an object-based approach to classifying with the

AGCC scheme should be implemented and is expected to produce better results.

References

Adams, Beverley J., Huyck, Charles K., Mansouri, Babak, Eguchi, Ronald T. and Shinozuka,
Masanobu, 2003. Multidisciplinary Center for Earthquake Engineering Research:
Research Progress and Accomplishments 2003-2004. Application of High-Resolution
Optical Satellite Imagery for Post-Earthquake Damage Assessment: The 2003
Boumerdes (Algeria) and Bam (Iran) Earthquakes.
Available: http://mceer.buffalo.edu/publications/resaccom/0304/contents.asp

Asner, Gregory P. and Warner, Amanda S., 2003. Canopy shadow in IKONOS satellite
observations of tropical forests and savannas. Remote Sensing of Environment 87, pp.
521–533.

Baatz, M. and Schäpe, A., 1999. Object-Oriented and Multi-Scale Image Analysis in
Semantic Networks. In: Proc. of the 2nd International Symposium on Operationalization
of Remote Sensing, Enschede, ITC, August 16–20.

Bjorgo, Einar, 2000. Using very high spatial resolution multispectral satellite sensor imagery to
monitor refugee camps. International Journal of Remote Sensing 21, pp. 611–616.

Blaschke, T., Lang, S., Lorup, E., Strobl, J., and Zeil, P., 2000. Object-oriented image processing
in an integrated GIS/remote sensing environment and perspectives for environmental
applications. In: Cremers, A., Greve, K. (eds.): Environmental Information for Planning,
Politics and the Public. Metropolis Verlag, Marburg Vol 2: 555-570.

Caprioli, M., and Tarantino, E. 2003. Urban Features Recognition From VHR Satellite Data
With An Object-Oriented Approach. http://www.iuw.uni-
vechta.de/personal/geoinf/jochen/papers/24.pdf

Caprioli, M., and Tarantino, E., 2001. Accuracy assessment of per field classification integrating
very fine spatial resolution satellite sensors imagery with topographic data. Journal of
Geospatial Engineering 3 (2), pp. 127-134.

Clark, David B., Soto Castro, Carlomagno, Alvarado, Luis Diego Alfaro and Read, Jane M.
2004. Quantifying mortality of tropical rain forest trees using high-spatial-resolution
satellite data. Ecology Letters. Vol.7. pp. 52-59.

Clark , David B., Read, Jane M, Clark , Matthew L, Cruz, Ana Murillo, Dotti, Marianela Fallas,
And Clark, Deborah A. 2004. Application Of 1-m and 4-m Resolution Satellite Data To
Ecological Studies Of Tropical Rain Forests. Ecological Applications, 14, pp. 61–74

Digital Globe. Strategic Wildfire Emergency Planning System. 2004. Available:


http://www.digitalglobe.com/applications/fire_risk.shtml

Franklin, S. E., Wulder, M. A., and Gerylo, G. R. 2001. Texture analysis of IKONOS
panchromatic data for Douglas-fir forest age class separability in British Columbia.
International Journal of Remote Sensing 22, 2627– 2632.

Franklin, S.E., Hall, R.J., Moskal, L.M., A.J. Maudie and M.B. Lavigne. 2000. Incorporating
texture into classification of forest species composition from airborne multispectral
images. International Journal of Remote Sensing 21, pp. 61-79.

Franklin, S.E., Maudie, A.J. and M.B. Lavigne. 2001. Using spatial co-occurrence texture to
increase forest structure and species composition classification accuracy.
Photogrammetric Engineering and Remote Sensing 67, pp. 849-855.

Goetz, Scott J., Wright, Robb K., Smith, Andrew J., Zinecker, E., and Schau, E. 2003. IKONOS
imagery for resource management: Tree cover, impervious surfaces, and riparian buffer
analyses in the mid-Atlantic region Remote Sensing of Environment 88, pp.195–208

Herold, M., Gardner, M., Hadley, B. and Roberts, D. 2002. The spectral dimension in urban land
cover mapping from high-resolution optical remote sensing data, in: Maktav, D.,
Juergens, C., Sunar-Erbek, F. and Akguen, H., Proceedings of the 3rd Symposium on
Remote Sensing of Urban Areas, June 2002, Istanbul, Turkey, Volume 1, pp. 77-85.

Kayitakire F., Farcy C., and Defourny, P. 2002. IKONOS-2 imagery potential for forest stands
mapping. Presented at ForestSAT Symposium Heriot Watt University, Edinburgh,
August 5th-9th , Available: http://www.enge.ucl.ac.be/staff/curr/kayitaki/forestsat.pdf

Lawrence, R., Bunna, A., Powell, S., and Zambon, M., 2004. Classification of remotely sensed
imagery using stochastic gradient boosting as a refinement of classification tree analysis.
Remote Sensing of Environment 90, pp. 331–336

Herold, Martin, Guenther, Sylvia, and Clarke, Keith C. Mapping Urban Areas in the Santa
Barbara South, http://www.definiens-imaging.com/documents/an/sb.pdf

Martin B., Ursula B., Seyed D., Markus H., Astrid H., Peter H., Iris L., Matthias M., Malte S.,
Michaela W., and Gregor W., 2000. User Guide 4 in eCognition.

Mumby, Peter J. and Edwards, Alasdair J., 2002. Mapping marine environments with IKONOS
imagery: enhanced spatial resolution can deliver greater thematic accuracy. Remote
Sensing of Environment 82, pp. 248–257.

Congalton, Russell G., 1991. A Review of Assessing the Accuracy of Classifications of
Remotely Sensed Data. Remote Sensing of Environment 37, pp. 35-46.

Salajanu, D. and Olson, C. E., 2001. The significance of spatial resolution: identifying forest
cover from satellite data. Journal of Forestry 99, pp. 32-38.

Seelan, Santhosh K., Laguette, S., Casady, Grant M., and Seielstad, George A., 2003. Remote
sensing applications for precision agriculture: A learning community approach. Remote
Sensing of Environment 88, pp. 157–169.

Willhauck, G., Schneider, T., Dekok, R., and Ammer, U., 2000. Comparison of object oriented
classification techniques and standard image analysis for the use of change detection
between SPOT multispectral satellite images and aerial photos. In: ISPRS, Vol. XXXIII,
Amsterdam.

Zhang, Chengqian, Franklin, Steven E., Wulder , Michael A. 2004. Geostatistical and texture
analysis of airborne-acquired images used in forest classification. International Journal
of Remote Sensing, Vol. 25, Number 4 pp 859-865

Appendix A.

A. ANTHROPOGENIC

1. Urban and Industrial


11 (11) Urban (cities, towns - mostly residential and downtown core areas)
12 (12) Commercial and industrial (industrial parks, heavy oil sand development,
Refineries, hydro generating facilities)
13 (13) Major roads, highway and railways
14 (14) Cutlines and Trails.
15 (15) Surface mines (coal) gravel pits, spoil piles
16 (16) Farmstead and/or ranch (including shelter belts)

2. Agriculture
21 (21) Cropland (including cereal crops and forage)
22 (22) Irrigated Land
23 (23) Agricultural Clearing (recently cleared land, often with windrows)

3. Clearcuts
30 (30) Undifferentiated clearcut
31 (31) Graminoid (grasses/sedges/forbs) dominated clear-cut
32 (32) Tree/shrub dominated clear-cut
33 (33) Tree (replanted - immature trees, <20 years old) dominated clearcut

4. Burns
40 (40) Undifferentiated burn
41 (41) Graminoid (grasses/sedges/forbs) dominated burn
42 (42) Tree/shrub dominated burn
43 (43) Tree dominated burn
44 (44) New Burn

B. UPLANDS

5. Forested Land (>6% tree cover)

Coniferous Dominated Forest (>80% coniferous cover based on occurrence)

Closed (>50% crown closure) Open (6 – 50% crown closure)


50 (50) Closed Fir 150 (150) Open Fir
51 (51) Closed Black Spruce Sb 151 (151) Open Black Spruce Sb
52 (52) Closed Pine 152 (152) Open Pine
53 (53) Closed Se/Sw 153 (153) Open Se/Sw
54 (54) Closed Undifferentiated Con. 154 (154) Open Undifferentiated Conf.
5450 (250) Closed Fir leads conifer 154150 (170) Open Fir leads conifer
5451 (251) Closed Sb leads conifer 154151 (171) Open Sb leads conifer
5452 (252) Closed Pine leads conifer 154152 (172) Open Pine leads conifer

5453 (253) Closed Se/Sw leads con. 154153 (173) Open Se/Sw leads conifer

Deciduous Dominated Forest (>80% deciduous cover based on occurrence)

Closed (>50% crown closure) Open (6 - 50% crown closure)

55 (55) Closed Aspen, Balsam Poplar 155 (155) Open Aspen, Balsam Poplar
and/or Birch and/or Birch
5 (5) Riparian Poplar

Mixed Wood Coniferous Dominated Mixed Wood Forest (20 - 80% mixed wood cover
based on occurrence)

Closed (>50% crown closure) Open (6 - 50% crown closure)

56 (56) Closed Coniferous Dominated 156 (156) Open Coniferous Dominated


Mixedwood (60-80% Coniferous Mixedwood (60-80%
Cover) Coniferous Cover)

Leading Species Coniferous Dominated Mixed Wood Forest (20 - 80% mixed wood
cover based on occurrence)

5650 (180) Closed Fir mixed wood 60-80% 156150 (190) Open Fir mixed wood 60-80%

5651 (181) Closed Sb mixedwood 60-80% 156151 (191) Open Sb mixed wood 60-80%

5652 (182) Closed Pine mixedwood 60-80% 156152 (192) Open Pine mixed wood 60-
80%

5653 (183) Closed Se/Sw mixedwood 60-80% 156153 (193) Open Se/Sw mixed wood 60-
80%

57 (57) Closed Deciduous Dominated 157 (157) Open Deciduous Dominated


Mixedwood (60-80% Deciduous Mixedwood (60-80%
Cover) Deciduous Cover)

58 (58) Closed Coniferous and 158 (158) Open Coniferous and


Deciduous Cover (20-60%) Deciduous Cover (20-60%)

59 (59) Closed Larch 159 (159) Open Larch

6. Shrubland (>25% shrub cover and <6% tree cover)

Closed shrubland (streams and coulees) (Crowns Touching)

61 (61) Closed Riparian Shrub


62 (62) Closed Coulee Shrub Thicket
63 (63) Closed Upland Shrub

Open shrubland

161 (161) Open Riparian Shrub


162 (162) Open Coulee Shrub Thicket (chokecherry/buffaloberry, rose, buckbrush,
saskatoon)
163 (163) Open Upland Shrub
164 (164) Open Sagebrush Flat

7. Grassland (<25% shrub cover and <6% tree cover) and Upland Forbs (<6%
graminoid)

Graminoids (grasses and sedges)

71 (71) Fescue Grassland


72 (72) Mixed Grassland
73 (73) Sandhill Grassland
74 (74) Coulee Grassland (grassland on steep coulee slopes)

Upland Forbs

75 (75) Upland Forb Meadow

C. WETLANDS AND WATER

8. Wetlands

81 (81) Emergent Wetlands (Cattails/Aquatic Plants)


82 (82) Graminoid Wetlands (sedges/grasses/forbs)
- Less than 6% tree cover and less than 25% shrub
83 (83) Shrubby Wetlands (willow and birch)
831 (131) Shrubby Wetlands (willow and birch) (less than 6% tree cover)
Shrub open (crowns not touching) (greater than 25% shrub)

832 (132) Shrubby Wetlands (willow and birch) (less than 6% tree cover)
Shrub closed (crowns touching) (greater than 25% shrub)

84 (84) Sphagnum Bog (Veneer Bogs) (less than 6% tree cover)

85 (85) Lichen Bog (Lichen Understorey) (6 - 25 % tree cover)

86 (86) Black Spruce Bog (sphagnum understorey) (6 - 100 % tree cover)


861 (133) Black Spruce Bog (sphagnum understorey) (6 - 50 % tree cover)
862 (134) Black Spruce Bog (sphagnum understorey) (51- 100 % tree cover)

87 (87) Black Spruce Bog (lichen understorey) (6 - 100 % tree cover)


871 (135) Black Spruce Bog (lichen understorey) (6 - 50 % tree cover)
872 (136) Black Spruce Bog (lichen understorey) (51 - 100 % tree cover)

88 (88) Undifferentiated Wetlands

89 (137) Wooded Fen (6 - 100 % tree cover) (Larch Drainage Flow Patterns)
891 (138) Wooded Fen (6 - 50 % tree cover)
892 (139) Wooded Fen (51 - 100 % tree cover)

90 (90) Open Fen


- less than 6% tree cover and less than 25% shrub cover, patterned

9. Water

91 Lake, pond, reservoir, river and stream

D. BARREN LANDS

10. Barren (<6% Vegetation cover)


101 (101) Permanent Ice and Snow
102 (102) Rock, Talus, and/or Avalanche Chute
103 (103) Exposed Soil
104 (104) Alkali Flat and/or Mud Flat
105 (105) Upland Dune Field
106 (106) Alluvial Deposit
107 (107) Beach
108 (108) Badland
109 (109) Blowout Zone

E. UNCLASSIFIED

112 (112) Cloud / Haze / Shadow

