Version 1.2
©2017 PCI Geomatics Enterprises, Inc.® All rights reserved.
COPYRIGHT NOTICE
Software copyrighted © by PCI Geomatics Enterprises, Inc., 90 Allstate Parkway, Suite 501, Markham, Ontario L3R 6H3,
CANADA Telephone number: (905) 764-0614
The Licensed Software contains material that is protected by international Copyright Law and trade secret law, and by
international treaty provisions, as well as by the laws of the country in which this software is used. All rights not granted to
Licensee herein are reserved to Licensor. Licensee may not remove any proprietary notice of Licensor from any copy of the
Licensed Software.
Classification
Calculating features
Selecting the segmentation file and fields
Viewing a classification
Accuracy Assessment
Evaluating classification accuracy
Notes
Workflow of OBIA
The workflow of performing an OBIA begins with preprocessing your data. If you
are using more than one satellite image, as in this tutorial, you can merge
them into a single PCIDSK file. By doing so, you can more easily apply
operations such as resampling or reprojection to make the data easier to work with.
You can also add extra layers, such as vegetation indices, a DEM, or a layer
representing an area of interest (AOI).
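As a concrete illustration of an extra layer, a vegetation index such as NDVI can be computed from the Landsat-8 near-infrared (Band 5) and red (Band 4) channels. The sketch below uses plain NumPy on toy reflectance arrays; the array values are illustrative, not taken from the tutorial data:

```python
import numpy as np

# Toy reflectance arrays standing in for Landsat-8 Band 5 (NIR) and Band 4 (red).
nir = np.array([[0.40, 0.35], [0.10, 0.30]])
red = np.array([[0.08, 0.07], [0.09, 0.25]])

# NDVI = (NIR - red) / (NIR + red); guard against division by zero.
denom = nir + red
ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)

print(ndvi.round(3))
```

The resulting index layer can then be stored alongside the spectral bands and used later during feature extraction.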
After preprocessing, you can then use Object Analyst to segment the data.
Segmentation is the first step of the supervised OBIA process. It involves selecting
a file and the layers it contains to perform segmentation. When a file contains
many layers, you can achieve better results (better objects) by using a relevant
subset.
Other than the selected layers, segmentation is controlled by three basic
parameters: scale, shape, and compactness. To achieve segmentation that meets
the objectives of your supervised classification, some experimentation may be
necessary.
The objects (polygons) layer created by the segmentation is accompanied by an
attribute table containing a unique identification number (ID) for every object.
After segmentation, you next perform feature extraction. This involves selecting
the source channel or channels from which to compute a series of features
(statistical and geometrical) for each object in the segmentation layer. You can
select which features to compute. The features are used later during supervised
classification.
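Conceptually, feature extraction reduces each object to a vector of statistics. A minimal NumPy sketch of one such statistic, the per-object mean, assuming a toy label image in which each pixel carries its object ID:

```python
import numpy as np

# Label image from segmentation: each pixel carries its object ID (1..n).
labels = np.array([[1, 1, 2],
                   [1, 2, 2],
                   [3, 3, 3]])
# Source channel from which to compute a per-object statistic.
band = np.array([[10., 12., 40.],
                 [11., 42., 44.],
                 [ 7.,  8.,  9.]])

# Per-object mean via bincount: sum of values per ID divided by pixel count.
ids = labels.ravel()
sums = np.bincount(ids, weights=band.ravel())
counts = np.bincount(ids)
mean_per_object = sums[1:] / counts[1:]   # skip index 0 (no object has ID 0)

print(mean_per_object)   # one mean per object ID 1, 2, 3
```

Geometrical features (area, perimeter, compactness) follow the same pattern: one value per object ID, stored as a field in the attribute table.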
Next, you edit training sites. This consists of selecting a series of objects
representative of each land-cover class of interest.
To make editing training sites easier, consider the following:
Select approximately the same number of training sites for each class.
Select a few representative training objects for each class; selecting
too many training objects for a class does not improve the classification
accuracy and may even degrade it. Conversely, the number of verification
objects can be much greater, which improves the reliability of the
accuracy-assessment report.
During supervised classification, you select one of two supervised classifiers:
maximum likelihood (MLC) or Support Vector Machine (SVM). Each
uses as input a selection of calculated features and one training-site field. The
output is stored in a new field, and the output classification is displayed
automatically with its legend in Focus.
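Outside Object Analyst, the same idea can be sketched with scikit-learn's SVM: a table of per-object features plus training labels yields a predicted class per object. The feature values and class names below are illustrative, and the use of scikit-learn is an assumption for demonstration only (the tutorial itself uses the classifier built into Object Analyst):

```python
import numpy as np
from sklearn.svm import SVC

# Rows are objects; columns are extracted features (e.g., per-band means).
features = np.array([
    [0.10, 0.60],   # vegetated
    [0.12, 0.55],
    [0.45, 0.05],   # water-like
    [0.40, 0.08],
])
training = np.array(["Forest", "Forest", "Water", "Water"])

# Fit an RBF-kernel SVM on the training objects.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(features, training)

# Classify all objects; in Object Analyst the result would populate
# a new output field such as SVM_T1.
predicted = clf.predict(features)
print(predicted)
```

The key point is that the classifier never sees the pixels, only the per-object feature vectors produced by feature extraction.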
You next evaluate the output classification by creating an accuracy-assessment
report and by visually inspecting the output classification using the original image
or ancillary data. If the classification is suitable, you can export it. If the
classification is not entirely satisfactory, you can do any of the following:
Reform Shapes (A)
Use this operation to improve the appearance of your classification. You can
choose from two options: Automatic dissolve and Interactive edits.
Each option modifies the shape of the object and the total number of objects.
Therefore, the extracted features (statistical and geometrical) will no longer
be accurate for the edited objects. It is best to use Reform Shapes at the
end of your classification process or use it on a copy of your segmentation
file.
Rule-based Classification (B)
This operation refines an existing supervised classification by reassigning the
class of some objects based on conditions or ranges. You can define and
apply a classification rule that you have created on segments classified
already or on unclassified segments. The prerequisite is the feature extraction
of each segment. After algorithmic classification, you can either remove a
class for some or all segments, or change the membership of certain
segments in a class to improve the overall accuracy of the classification.
Another scenario is when no algorithmic classification is performed and you
want to assign certain segments to a class based on criteria you specify. You
can create an attribute field in a vector layer to store the class information
and create a rule by using the available extracted features.
Run another supervised classification (C)
You can run a second supervised classification by selecting a different set of
features, different training-sites field (see next item), or both if your project
contains more than one.
Project architecture
It is good practice to keep all relevant files of an OBIA project in the same folder
(My_OBIA_project) and to link all files in a Geomatica project file
(My_OBIA_project.gpr). By doing so, you ensure that the project is exportable to
another location and can be reopened quickly, especially if a project contains many
classifications with a legend.
A typical OBIA project contains an image and a segmentation file. The
segmentation is performed on (all or a subset of) the image layers. The project can
also contain ancillary data, raster or vector. You cannot use ancillary raster
data for segmentation; only the layers in the image being segmented can be used.
You can, however, use the ancillary raster data later in feature extraction.
Region of study
The region of study (ROS) is centered on Ottawa (45°25′14.22″N, 75°41′47.30″W),
the capital of Canada.
The northern part of the ROS is rugged and hilly and is part of the Canadian Shield.
This region is mostly forested (deciduous and mixed deciduous) with many lakes.
Delimited by the Ottawa River, the southern part of the ROS is the Great Lakes-St.
Lawrence Lowlands. Where the drainage is good, this region is mostly prime
agricultural land, while areas of poor drainage are mostly covered by a mix of
wetlands and forested areas composed of tree species that tolerate saturated soils.
Figure 5. Landsat-8 (p16r28 and p16r29) image of the ROS, acquired August 26,
2016 (R: Band 6 | G: Band 5 | B: Band 4)
Objective
The objective of this tutorial is to perform a supervised object-based classification
to identify the following land-cover classes:
1. Agricultural areas
2. Urban areas
3. Forested Land
4. Water
5. Wetlands
However, these high-level land-cover classes contain some heterogeneity, and
discriminating them is not a trivial task. This tutorial demonstrates various
strategies to achieve suitable results:
1. Use remote-sensing imagery acquired during various seasons to account
for the dynamic nature of agricultural land and ease discrimination
between coniferous and broad-leaf tree species.
2. Use segmentation; that is, generalize information by regrouping pixels
into meaningful objects.
3. Use a robust nonlinear classifier, such as the Support Vector Machine (SVM).
The following table lists four Landsat-8 OLI images of the Ottawa region
downloaded from the USGS GloVis website (http://glovis.usgs.gov/).
Segmentation is key
The success of an object-based supervised classification starts with a good
segmentation. Unfortunately, there are no objective rules to follow or absolute
criteria to tell if a segmentation is good or not. As a guideline, a trade-off is often
necessary between the mean size of objects (generalization) and their
homogeneity. That is, most objects should, in general, correspond to only one land-
cover class and their shapes should align with the boundaries (edges) observed in
the imagery.
Segmentation also depends on the selected layers (in this case, the spectral bands)
and it is not mandatory to use all available layers.
Shape, scale, and compactness parameters are also assigned to the objects.
Choosing a good combination increases the success of the supervised classification
and requires some experimentation.
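One practical way to organize that experimentation is to enumerate the parameter combinations up front, naming each output after its parameters in the same style as the tutorial's segmentation files. A small Python sketch (the parameter values are illustrative):

```python
from itertools import product

# Candidate segmentation parameters to try (values are illustrative).
scales = [25, 50, 100]
shapes = [0.5, 0.8]
compactness = [0.25, 0.5]

# One output name per combination, mirroring the tutorial's
# SEG_<bands>_<scale>_<shape>_<compactness> naming convention.
runs = [f"L8_Ottawa_SEG_B6B5B4_{sc}_{sp}_{cp}"
        for sc, sp, cp in product(scales, shapes, compactness)]

print(len(runs))   # number of segmentations to compare visually
print(runs[0])
```

Keeping the parameters in the file name makes it easy to compare the resulting object layers side by side and pick the combination that best matches the image boundaries.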
The following series of figures shows the results of various combinations of scale
(SC), shape (SP) and compactness (CP) values from the segmentation of the B6,
B5, and B4 bands (followed by the total number of objects) for May 6 and
August 26, 2016.
Performing segmentation
In this step, you will perform segmentation on a Landsat image.
To perform segmentation
1. In Focus, open the Landsat image provided with this tutorial
(L8_Ottawa_20160506_20160826.pix).
2. On the Analysis menu, click Object Analyst.
The Object Analyst window appears.
3. In the Operation list, select Segmentation.
4. Under Source Image Layers, click Select.
The Layer Selection window appears.
Feature extraction
In this step, you will perform feature extraction.
6. Click OK.
7. Under Feature Attributes, select the Mean check box, but leave the
other check boxes clear.
Figure 10. Object Analyst window with Feature Extraction set up to run
After the process is complete, you can view the selected statistics for image
L8_Ottawa_SEG_B6B5B4_50_0.8_0.5.pix in Attribute Manager.
Class No. /description (samples shown for May 6, 2016 and August 26, 2016):
Wetlands - marsh
Wetlands - peatlands
Forest - coniferous
Forest - deciduous
Water
Urban - dense
Urban - with vegetation
Agriculture - bare (in May)
Agriculture - vegetation (in May)
5. Click Individual Select, click an object in the image, and then in the
Training Sites Editing window, click Assign.
You can select multiple objects to assign by holding down the Shift key
and clicking each object you want. You can also drag a selection square or
rectangle over the objects you want.
6. In the Training Sites Editing window, beside Sample type, click Accuracy
assessment, and repeat step 5 to select objects for accuracy
assessment.
Note: You cannot use the same object simultaneously as a training object
and as a verification object. If an object is already assigned to a class
and you select it again, its assignment is updated to the new state. The
same rule applies to a previously selected class.
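The mutual exclusivity described in the note can be pictured as two disjoint sets of object IDs, with reassignment moving an object between the sets. A toy Python sketch (the IDs are made up):

```python
# Object IDs assigned as training vs. accuracy-assessment (verification) samples.
training_ids = {101, 102, 205, 310}
verification_ids = {102, 400, 512}

# An object cannot serve both roles; the most recent assignment wins.
overlap = training_ids & verification_ids
if overlap:
    # Object 102 was reassigned as a verification object, so drop it
    # from the training set.
    training_ids -= overlap

print(sorted(training_ids), sorted(verification_ids))
```

Keeping the two sets disjoint is what makes the later accuracy assessment meaningful: verification objects must be unseen by the classifier.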
Figure 15. Position of training and verification objects used in this tutorial
Calculating features
The next operation is to classify the data using some calculated features, statistical
or geometrical, in combination with a field containing training (Class Name_T) and
accuracy-assessment (Class Name_A) objects.
To run classification
1. In the Operation list, select Classification.
2. Under Vector Layer and Fields, click Select, and then in the Vector
Layer and Field Selector window, do the following:
Select L8_Ottawa_SEG_B6B5B4_50_0.8_0.5.pix as the
segmentation file.
Select the extracted feature fields to use for the classification.
Click OK.
3. Under Type, click Supervised.
4. Under Training Field, select the field of the segmented layer with the
training and accuracy objects (Training).
5. Under Output Class Field, type SVM_T1 as the name of the field to
which to write the classification result.
The field will be added to the segmentation file in Attribute Manager.
6. Under Classifier, click SVM.
Figure 17. Attribute Manager showing the three new fields added
Viewing a classification
After the classification process is complete, a legend appears on the Maps tab in
Focus. The colors of the classes correspond to those specified during selection
of the training and verification objects. The opacity is set to 25 percent for
quick interpretation of the results.
You can also remove the outline of a classified object: in Advanced mode,
select 2 in the Part list, and then click Remove.
The following series of images show the progression of the image through object-
based image analysis (OBIA).
Landsat-8, May 6, 2016 (R: Band 6 | G: Band 5 | B: Band 4)
Landsat-8, August 26, 2016 (R: Band 6 | G: Band 5 | B: Band 4)
Supervised classification result (detail)
Supervised classification result (detail)
2. In the File Selector window, select a folder, enter a file name for the report,
and then click Save.
No classification is perfect
Even if a high accuracy has been achieved, it is unlikely that the classification is
perfectly suited to the project.
Typically, before exporting a classification, refinement of some land-cover
classes or some manual editing might be necessary.
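For reference, the core numbers behind an accuracy-assessment report, the confusion matrix, overall accuracy, and Cohen's kappa, can be computed directly from the verification objects. A NumPy sketch on made-up reference and predicted labels:

```python
import numpy as np

# Reference (verification) classes vs. classes predicted for the same objects.
reference = np.array([0, 0, 1, 1, 2, 2, 2, 1])
predicted = np.array([0, 0, 1, 2, 2, 2, 2, 1])

n = 3  # number of classes
cm = np.zeros((n, n), dtype=int)
for r, p in zip(reference, predicted):
    cm[r, p] += 1          # rows: reference class, columns: predicted class

overall = np.trace(cm) / cm.sum()
# Cohen's kappa: agreement corrected for chance agreement.
expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / cm.sum() ** 2
kappa = (overall - expected) / (1 - expected)

print(cm)
print(round(overall, 3), round(kappa, 3))
```

Per-class producer's and user's accuracies follow from the same matrix: divide each diagonal entry by its row sum and column sum, respectively.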
While Object Analyst offers several options for refining a classification, this section
will focus on two:
Rule-based classification
Merging and reshaping objects
Rule-based classification
You can use rule-based classification to split a class into subclasses.
In addition to supervised and unsupervised classification algorithms, Object
Analyst lets you create a custom rule to assign class membership to segments.
By creating a custom rule, you, as an analyst, can select the criteria that
determine membership of a sample in a class, based on your understanding of
the domain, the data, or both.
You can use various tools or assessments to better understand the data and
processes, but the decision on class membership is yours. You can then
create a classification rule by using the available features, based on your
understanding of the data and the application domain. The knowledge to construct
a classification rule comes from existing understanding, translated into
the form of an equation. This process is highly user-dependent and involves data
exploration and onscreen interpretation of both the image and the segments.
Object Analyst can define and apply a classification rule that you have created,
on segments already classified or on unclassified segments. The prerequisite is
that features have been extracted for each segment. After algorithmic
classification, you can either remove a class for some or for all segments, or
change the membership of certain segments in a class to improve the overall
accuracy of the classification.
A close inspection of the classification created earlier revealed that the
wetland class is too general; it would be preferable to split this class into two
subclasses:
The following series of images show the variations of the minimum and maximum
values.
In Figure 21, the selected objects are highlighted in pale orange. These are the
candidates for the new open wetland class. Notice that the wetland reference data is
now displayed with a thicker white outline on the polygons so they are more easily
differentiated from the objects classified as wetlands.
Figure 22. Sample new wetland class (too many objects selected)
In Figure 23, the selected objects are highlighted in pale orange. This is a good
range.
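Such a min-max range rule amounts to a boolean condition on an extracted feature. A NumPy sketch, assuming a hypothetical per-object mean-NDVI feature with made-up values and thresholds:

```python
import numpy as np

# Hypothetical per-object feature (e.g., mean NDVI) and current class labels.
mean_ndvi = np.array([0.05, 0.62, 0.30, 0.71, 0.12])
classes = np.array(["Water", "Wetlands", "Wetlands", "Forest", "Water"],
                   dtype=object)

# Rule: wetland objects whose mean NDVI falls inside [0.25, 0.65]
# are reassigned to the new subclass.
lo, hi = 0.25, 0.65
rule = (classes == "Wetlands") & (mean_ndvi >= lo) & (mean_ndvi <= hi)
classes[rule] = "Wetlands - marsh"

print(classes)
```

Tightening or loosening `lo` and `hi` corresponds to the interactive range adjustment shown in Figures 21 to 23: too wide a range selects too many objects; a well-chosen range captures only the intended subclass.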
Caution Only reform shapes after classification and after you have
made a backup of your project. Reforming shapes
modifies object shapes and the total number of objects,
which renders the extracted features (statistical and
geometrical) invalid.
Merging polygons
The selected layer must have one or more polygons with class information.
6. After you have merged the polygons you want, on the Maps tab, right-
click the map layer, and then click Save.
2. To modify the style of a class, in the Style column of the table, double-
click the class.
The Style Selector window appears.
3. Select a style, and then click Apply.
4. Repeat steps 2 and 3 for each class you want to modify, and then click
OK.
6. In the File box, you can type or select the name you want; that is, select
SVM_T1.
7. In the Description box, type a brief description to help identify the
relevance of the file.
5. In the File list, select the file you created previously; that is, select
Classif_SVM_T1.rst, and then click OK.
6. In the Representation Editor window, click Apply, and then click OK.
The styles are now associated with the classification you selected.
4. Under Output Ports, type a path and SVM1_classif.pix as the file name
for the output file.
6. Click Run.
The classification you specified is converted to a single-channel, 8-bit file.
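Conceptually, that conversion maps each class to a small integer code and casts the result to an unsigned 8-bit channel. A NumPy sketch with made-up class names and codes:

```python
import numpy as np

# Hypothetical per-pixel class names rasterized from the classified polygons.
class_names = np.array([["Water", "Forest"],
                        ["Forest", "Urban"]], dtype=object)

# Map each class to a small integer code and cast to an unsigned 8-bit channel.
codes = {"Water": 1, "Forest": 2, "Urban": 3}
raster8 = np.vectorize(codes.get)(class_names).astype(np.uint8)

print(raster8, raster8.dtype)
```

An 8-bit channel supports up to 255 class codes (plus 0 for no-data), which is ample for a land-cover legend of this size.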
5. Click Run.