Short Course Remote Sensing 2009 Part 1: Image Analysis for Remote Sensing

Object-based Image Analysis

Dr. Irmgard Niemeyer · Geomonitoring Group · Institute for Mine-Surveying and Geodesy · TU Bergakademie Freiberg Reiche Zeche, Fuchsmühlenweg 9 · D-09599 Freiberg, Germany Tel./Fax +49 3731 39-3591/-3601 · irmgard.niemeyer@tu-freiberg.de · www.geomonitoring.tu-freiberg.de

1. Introduction: Advantages of object-based satellite imagery analysis
2. Segmentation
3. Object features
4. Feature extraction
5. Classification
6. Processes
7. Change detection
8. Examples


1. Advantages of object-based imagery analysis


1. Advantages of object-based imagery analysis
What if we had
• the borders of real-world objects,
• thematic information from the land register,
• a surface model?


1. Advantages of object-based imagery analysis: Human and computer vision
Human visual system: ⇑ identification and understanding of objects and their context, ⇓ estimation of grey values, distances and areas.
Digital image processing system: ⇓ identification and understanding of objects and their context, ⇑ estimation of grey values, distances and areas.
"Enhancement" of the human eye? Improvement of software techniques towards human image understanding?!

1. Advantages of object-based imagery analysis: Pixel-based image analysis
Multispectral signature: reflection as a function of wavelength. Image matrix X = (x_ijn), with i = 1…I rows, j = 1…J lines/pixels, n = 1…N bands.

1. Advantages of object-based imagery analysis: Pixel-based image analysis
Using the multispectral, N-dimensional feature space for classification: classes such as vegetation, water and soils form clusters in the space spanned by Band 1, Band 2, …, Band N. [Albertz 2001, modified]

1. Advantages of object-based imagery analysis
Pixel-based classification of high resolution images? © Definiens

1. Advantages of object-based imagery analysis: Object-based approaches
Hierarchical image object levels above the pixel level: Pixel → Object level 1 → Object level 2 → Object level 3. [Baatz et al. 2000]

1. Advantages of object-based imagery analysis
http://www.definiens.com

http://www.definiens.com

1. Advantages of object-based imagery analysis
• Beyond purely spectral information, image objects contain a lot of additional attributes which can be used for classification: shape, texture and, operating over the network, a whole set of relational/contextual information.
• Multiresolution segmentation provides the possibility to easily adapt image object resolution to specific requirements, data and tasks.
• Multiresolution segmentation separates adjacent regions in an image as long as they are significantly contrasted, even when the regions themselves are characterized by a certain texture or noise. Thus, even textured image data can be analyzed.
• Each classification task has its specific scale. Only image objects of an appropriate resolution permit analysis of meaningful contextual information.
• Homogeneous image objects provide a significantly increased signal-to-noise ratio, compared to single pixels, as to the attributes to be used for classification.
[eCognition User Guide 4, Definiens 2004]

1. Advantages of object-based imagery analysis
• Thus, the classification is more robust, independent of the multitude of additional information.
• Segmentation drastically reduces the sheer number of units to be handled for classification. Even if a lot of intelligence is applied to the analysis of each single image object, the classification works relatively fast.
• The object-oriented approach, which first extracts homogeneous regions and then classifies them, avoids the annoying salt-and-pepper effect of the more or less spatially finely distributed classification results.
• Using the possibility to produce image objects in different resolutions, a project can contain a hierarchical network with different object levels of different resolutions. This structure represents image information on different scales simultaneously.
• Thus, different object levels can be analyzed in relation to each other. For instance, image objects can be classified as to the detailed composition of sub-objects.
[eCognition User Guide 4, Definiens 2004]

2. Segmentation [Definiens Developer 7 User Guide]

2. Segmentation: Chessboard Segmentation
Splits the pixel domain or an image object domain into square image objects. A square grid of fixed size, aligned to the image's left and top borders, is applied to all objects in the domain, and each object is cut along these grid lines.
Result of chessboard segmentation with scale 20. [Definiens Developer 7 Reference Book]
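The grid-cutting rule above can be sketched in a few lines; the function name is illustrative, not from the Definiens software:

```python
import numpy as np

def chessboard_segmentation(shape, scale):
    """Assign each pixel a label from a square grid of side `scale`,
    aligned to the image's top-left corner (a sketch of the idea,
    not the Definiens implementation)."""
    rows, cols = np.indices(shape)
    grid_r, grid_c = rows // scale, cols // scale
    n_cols = -(-shape[1] // scale)      # grid squares per row (ceil division)
    return grid_r * n_cols + grid_c     # unique label per square

labels = chessboard_segmentation((100, 100), 20)
n_objects = labels.max() + 1            # -> 25: a 5 x 5 grid of squares
```

Objects at the right and bottom borders are simply the clipped remainders of their grid squares, which matches the fixed, image-aligned grid described above.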

2. Segmentation: Quad Tree Based Segmentation
Splits the pixel domain or an image object domain into a quad tree grid formed by square objects. A quad tree grid consists of squares with side lengths that are powers of 2, aligned to the image's left and top borders; it is applied to all objects in the domain, and each object is cut along these grid lines. The quad tree structure is built so that each square first has the maximum possible size and second fulfils the homogeneity criterion as defined by the mode and scale parameters.
[Definiens Developer 7 Reference Book / User Guide]
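A recursive sketch of the maximum-size-first rule. The homogeneity test used here (intensity range at most `scale`) is a simplified stand-in for the mode/scale criterion, and the function name is illustrative:

```python
import numpy as np
from itertools import count

def quadtree_segmentation(img, scale):
    """Split a square image (side a power of 2) into a quad tree of
    homogeneous squares: a square is kept when its intensity range is
    at most `scale`, otherwise it is split into four quadrants."""
    labels = np.zeros(img.shape, dtype=int)
    next_label = count()

    def split(r, c, size):
        block = img[r:r + size, c:c + size]
        if size == 1 or block.max() - block.min() <= scale:
            labels[r:r + size, c:c + size] = next(next_label)
        else:
            half = size // 2
            for dr in (0, half):
                for dc in (0, half):
                    split(r + dr, c + dc, half)

    split(0, 0, img.shape[0])
    return labels

# homogeneous upper half, structured lower half -> large squares on top,
# progressively smaller squares below
img = np.zeros((8, 8))
img[4:, :] = np.arange(32).reshape(4, 8)
labels = quadtree_segmentation(img, scale=2)
```

Because each square is tested at the largest possible size first, homogeneous regions stay as big squares while heterogeneous regions are subdivided down to single pixels.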

2. Segmentation: Quad Tree Based Segmentation
Result of quad tree based segmentation with mode color and scale 100.

2. Segmentation: Multiresolution Segmentation
Applies an optimization procedure which locally minimizes the average heterogeneity of image objects for a given resolution. It can be applied on the pixel level or an image object level domain. The segmentation procedure works according to the following rules, representing a mutual-best-fitting approach:
• The segmentation procedure starts with single image objects of 1 (one) pixel size and merges them in several loops, iteratively in pairs, to larger units as long as an upper threshold of homogeneity is not exceeded locally. This homogeneity criterion is defined as a combination of spectral homogeneity and shape homogeneity. You can influence this calculation by modifying the scale parameter: higher values for the scale parameter result in larger image objects, smaller values in smaller image objects.
• As the first step of the procedure, the seed looks for its best-fitting neighbour for a potential merger.
[Definiens Developer 7 User Guide]

2. Segmentation: Multiresolution Segmentation
• If best fitting is not mutual, the best candidate image object becomes the new seed image object and finds its best-fitting partner.
• When best fitting is mutual, the image objects are merged.
• In each loop, every image object in the image object level will be handled once.
• The loops continue until no further merger is possible.
[Definiens Developer 7 Reference Book]
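The mutual-best-fitting loop can be sketched on a 1-D signal, using only spectral homogeneity (shape homogeneity omitted for brevity); function name and the fusion-cost form are illustrative assumptions, not the Definiens implementation:

```python
import numpy as np

def multiresolution_1d(values, scale):
    """Mutual-best-fitting region merging on a 1-D signal: start from
    single pixels and repeatedly merge mutually best-fitting neighbours
    while the fusion cost (increase in summed spectral variance) stays
    below scale**2. Each object is stored as (sum, sum of squares, n)."""
    objs = [(v, v * v, 1) for v in values]

    def cost(a, b):
        merged = (a[0] + b[0], a[1] + b[1], a[2] + b[2])
        nvar = lambda o: o[1] - o[0] ** 2 / o[2]   # n * variance of object o
        return nvar(merged) - nvar(a) - nvar(b)    # heterogeneity increase

    merged = True
    while merged and len(objs) > 1:
        merged = False
        i = 0
        while i < len(objs) - 1:
            # best neighbour of object i (in 1-D: left or right)
            best = i + 1 if i == 0 or cost(objs[i], objs[i + 1]) <= cost(objs[i], objs[i - 1]) else i - 1
            if best == i + 1:
                # mutual check: i must also be the best neighbour of i+1
                right_best = i if i + 2 >= len(objs) or cost(objs[i + 1], objs[i]) <= cost(objs[i + 1], objs[i + 2]) else i + 2
                if right_best == i and cost(objs[i], objs[i + 1]) < scale ** 2:
                    a, b = objs[i], objs[i + 1]
                    objs[i:i + 2] = [(a[0] + b[0], a[1] + b[1], a[2] + b[2])]
                    merged = True
                    continue
            i += 1
    return objs

segments = multiresolution_1d([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], scale=1.0)
# two homogeneous segments of three pixels each survive
```

As in the slides, merging stops once every remaining pair of neighbours would exceed the homogeneity threshold, so the two contrasted regions are never fused.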

2. Segmentation: Multiresolution Segmentation [Definiens Developer 7 User Guide]

2. Segmentation: Multiresolution Segmentation [Definiens Developer 7 User Guide]

2. Segmentation: Multiresolution Segmentation [Definiens Developer 7 Reference Book]

2. Segmentation: Multiresolution Segmentation
h(s) = w_c · h_c(s) + w_s · h_s(s)
h_c(s) = 1/(n−1) · Σ_{i=1..n} (x_i − x̄)²
h_s(s) = w_smooth · h_smooth(s) + w_comp · h_comp(s)
h_smooth(s) = l / b
h_comp(s) = l / √n
where c = color, s = shape, smooth = smoothness, comp = compactness, l = border length, b = border length of the bounding box, n = number of pixels in the object
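The criterion above can be evaluated directly for a single object. The weights here (shape 0.1, compactness 0.5) match the example settings used on the result slide; the function name is illustrative:

```python
import numpy as np

def homogeneity(pixels, border_len, bbox_border_len,
                w_shape=0.1, w_compact=0.5):
    """Homogeneity criterion from the slide:
    h = w_c*h_c + w_s*h_s, with h_c the spectral variance of the
    object's pixel values and h_s = w_smooth*(l/b) + w_comp*(l/sqrt(n))."""
    x = np.asarray(pixels, dtype=float)
    n = x.size
    h_color = x.var(ddof=1) if n > 1 else 0.0    # 1/(n-1) * sum (x_i - mean)^2
    h_smooth = border_len / bbox_border_len      # l / b
    h_comp = border_len / np.sqrt(n)             # l / sqrt(n)
    h_shape = (1 - w_compact) * h_smooth + w_compact * h_comp
    return (1 - w_shape) * h_color + w_shape * h_shape

# a spectrally uniform 2 x 2 square object: l = 8, bounding box border b = 8, n = 4
h = homogeneity([10.0, 10.0, 10.0, 10.0], border_len=8, bbox_border_len=8)
```

For this uniform square, h_c = 0, h_smooth = 1 and h_comp = 4, so h = 0.1 · (0.5 · 1 + 0.5 · 4) = 0.25: only the shape term contributes.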

2. Segmentation: Multiresolution Segmentation
Result of multiresolution segmentation with scale 50, shape 0.1 and compactness 0.5.

2. Segmentation: Multiresolution Segmentation
• Segmentation procedure should produce highly homogeneous segments for the optimal separation and representation of image regions.
• The average size of image objects must be adaptable to the scale of interest.
• Resulting image objects should be of more or less the same magnitude.
• Segmentation results should be reproducible.
• Segmentation procedure should be universal and applicable to a large number of different data types and problems.
• Segmentation procedure should therefore be as fast as possible.
• Region merge: fusion of pixels by a criterion of homogeneity; minimisation of the weighted heterogeneity of objects.
• Segmentation parameters: scale parameter, homogeneity criterion (color/shape)

2. Segmentation: More Techniques
• Contrast Split Segmentation: Segments an image or an image object into dark and bright regions. The contrast split algorithm segments an image (or image object) based on a threshold that maximizes the contrast between the resulting bright objects (consisting of pixels with values above the threshold) and dark objects (consisting of pixels with values below the threshold).
• Spectral Difference Segmentation: Merges neighbouring objects according to their mean layer intensity values. Neighbouring image objects are merged if the difference between their layer mean intensities is below the value given by the maximum spectral difference. This algorithm is designed to refine existing segmentation results.
• Contrast Filter Segmentation: Uses pixel filters to detect potential objects by contrast and gradient and creates suitable object primitives. Each pixel is classified as one of the following classes: no object, object in first layer, object in second layer, object in both layers, ignored by threshold. The resulting pixel classification is stored in an internal thematic layer. An integrated reshaping operation modifies the shape of image objects to help form coherent and compact image objects. Finally, a chessboard segmentation is used to convert this thematic layer into an image object level.
[Definiens Developer 7 Reference Book]
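The contrast split threshold search can be sketched as follows. The contrast measure used here (difference of the bright and dark group means) is a simplified reading of the idea; Definiens offers several contrast measures, and the function name is illustrative:

```python
import numpy as np

def contrast_split_threshold(pixels):
    """Pick the threshold that maximizes the contrast between bright
    pixels (above the threshold) and dark pixels (at or below it),
    measured here as the difference of the two group means."""
    x = np.asarray(pixels, dtype=float)
    candidates = np.unique(x)[:-1]   # any split leaving both sides non-empty
    best_t, best_contrast = None, -np.inf
    for t in candidates:
        bright, dark = x[x > t], x[x <= t]
        contrast = bright.mean() - dark.mean()
        if contrast > best_contrast:
            best_t, best_contrast = t, contrast
    return best_t

t = contrast_split_threshold([1, 2, 2, 3, 40, 41, 42])
```

On this toy histogram the search lands between the dark cluster (1-3) and the bright cluster (40-42), which is exactly the split a visual inspection would suggest.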

2. Segmentation: Object levels [Definiens Developer 7 User Guide]

2. Segmentation: Object levels [Definiens Developer 7 User Guide]

2. Segmentation: Object levels [Definiens Developer 7 User Guide]

2. Segmentation: Recommendations
Produce image objects that suit the purpose:
• Always produce image objects of the biggest possible scale which still distinguishes different image regions (as large as possible and as fine as necessary). There is a tolerance concerning the scale of the image objects representing an area of a consistent classification, due to the equalization achieved by the classification. The separation of different regions is more important than the scale of image objects.
• Use as much colour criterion as possible while keeping the shape criterion as high as necessary to produce image objects of the best border smoothness and compactness. The reason for this is that a high degree of shape criterion works at the cost of spectral homogeneity. However, the spectral information is, in the end, the primary information contained in image data. Using too much shape criterion can therefore reduce the quality of segmentation results.
[Definiens Developer 7 Reference Book]

3. Object features
Image objects have spectral, shape, and hierarchical characteristics. These characteristic attributes are called features in Definiens software. Features are used as a source of information to define the inclusion-or-exclusion parameters used to classify image objects. There are two major types of features:
• Object features are attributes of image objects, for example the area of an image object.
• Global features are not connected to an individual image object, for example the number of image objects of a certain class.
[Definiens Developer 7 Reference Book]

3. Object features: Membership functions [eCognition User Guide 4, Definiens 2004]

3. Object features
Feature types: Object Features, Class-Related Features, Scene Features, Process-Related Features, Customized, Metadata, Feature Variables. [Definiens Developer 7 User Guide]

3. Object features: Membership functions [eCognition User Guide 4, Definiens 2004]

3. Object features [eCognition User Guide 4, Definiens 2004]

3. Object features [eCognition User Guide 4, Definiens 2004]

3. Object features [eCognition User Guide 4, Definiens 2004]

3. Object features [eCognition User Guide 4, Definiens 2004]

4. Feature extraction: Automation
SEaTH: A semi-automatic feature recognition tool [Nussbaum et al. 2006, Marpu et al. 2008]
Based on training samples, the feature analyzing tool SEaTH (SEparability and THresholds) identifies the significant object features. Bayes' statistical approach: solving p(x|C1) p(C1) = p(x|C2) p(C2) for x.
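Assuming both class likelihoods are Gaussian, as SEaTH does, the equation p(x|C1)p(C1) = p(x|C2)p(C2) becomes a quadratic in x after taking logs. A sketch of that threshold computation (function name illustrative):

```python
import numpy as np

def gaussian_threshold(m1, s1, m2, s2, p1=0.5, p2=0.5):
    """Solve p(x|C1)p(C1) = p(x|C2)p(C2) for x with Gaussian class
    models N(m, s^2). Log form gives a*x^2 + b*x + c = 0; the root
    between the two class means is the decision threshold."""
    a = 1 / (2 * s2 ** 2) - 1 / (2 * s1 ** 2)
    b = m1 / s1 ** 2 - m2 / s2 ** 2
    c = (m2 ** 2 / (2 * s2 ** 2) - m1 ** 2 / (2 * s1 ** 2)
         + np.log((s2 * p1) / (s1 * p2)))
    if abs(a) < 1e-12:                 # equal variances: linear case
        return -c / b
    roots = np.roots([a, b, c])
    # keep the root lying between the class means
    return next(r.real for r in roots if min(m1, m2) <= r.real <= max(m1, m2))

t = gaussian_threshold(m1=10, s1=2, m2=20, s2=2)
# equal variances and priors -> midpoint of the means, t = 15
```

For unequal variances the threshold shifts toward the tighter class, which is why the quadratic (rather than midpoint) solution matters.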

4. Feature extraction: Automation
SEaTH: A semi-automatic feature recognition tool
Output: prominent features and thresholds, i.e.
• object features providing the optimal separability of the object classes (Jeffries-Matusita distance)
• feature thresholds for the optimal separability
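The separability measure named above, for one object feature modelled as a 1-D Gaussian per class, can be computed like this (function name illustrative; the formula is the standard Jeffries-Matusita/Bhattacharyya form for Gaussians):

```python
import numpy as np

def jeffries_matusita(m1, s1, m2, s2):
    """Jeffries-Matusita distance between two classes modelled as 1-D
    Gaussians: J = 2*(1 - exp(-B)), with B the Bhattacharyya distance.
    J ranges from 0 (identical classes) to 2 (fully separable)."""
    v1, v2 = s1 ** 2, s2 ** 2
    b = (0.125 * (m1 - m2) ** 2 / ((v1 + v2) / 2)
         + 0.5 * np.log((v1 + v2) / (2 * s1 * s2)))
    return 2 * (1 - np.exp(-b))

j_same = jeffries_matusita(10, 2, 10, 2)   # identical classes -> 0
j_far = jeffries_matusita(0, 1, 100, 1)    # far-apart classes -> close to 2
```

Ranking all candidate object features by J and keeping the top ones is exactly the feature selection step SEaTH automates.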

5. Classification
Classification algorithms analyze image objects according to defined criteria and assign each of them to the class that best meets these criteria. When editing processes, you can choose from the following classification algorithms:
• Assign class: assigns a class to image objects with certain features.
• Classification: uses the class description to assign a class.
• Hierarchical classification: uses the class description as well as the hierarchical structure of classes.
• Advanced classification algorithms: are designed to perform a specific classification task, such as finding extrema or identifying connections between objects.

5. Classification [eCognition User Guide 4, Definiens 2004]

6. Processes
• Definiens Developer provides an artificial language for developing advanced image analysis algorithms. These algorithms use the principles of object-oriented image analysis and local adaptive processing. This is achieved by processes. Processes are the main working tools for developing rule sets.
• A single process is the elementary unit of a rule set, providing a solution to a specific image analysis task. A single process allows the application of a specific algorithm to a specific region of interest in the image. All conditions for classification as well as region of interest selection may incorporate semantic information.
• The main functional parts of a single process are the algorithm and the image object domain.
• Processes may have an arbitrary number of child processes. Arranging processes containing different types of algorithms allows the user to build a sequential image analysis routine. The process hierarchy formed in this way defines the structure and flow control of the image analysis.
[Definiens Developer 7 User Guide]

6. Processes
The algorithm defines the operation the process will perform. This can be generating image objects, merging or splitting image objects, classifying objects, etc.
• Segmentation algorithms
• Classification algorithms
• Variables operation algorithms
• Reshaping algorithms
• Level operation algorithms
• Interactive operation algorithms
• Sample operation algorithms
• Image layer operation algorithms
• Thematic layer operation algorithms
• Export algorithms
• Workspace automation algorithms
• Process-related operation algorithms
[Definiens Developer 7 User Guide]

6. Processes [Definiens Developer 7 User Guide]

6. Processes [Definiens Developer 7 User Guide]

6. Processes: Image Object Domain
The image object domain describes the region of interest in the image object hierarchy where the algorithm of the process will be executed. Examples of image object domains are the entire image, an image object level, or all image objects of a given class.
[Definiens Developer 7 User Guide]

7. Change Detection
• Change detection is the process of identifying and quantifying temporal differences in the state of an object or phenomenon.
• A variety of digital change detection techniques has been developed in the past three decades.
• When using satellite imagery from two acquisition times, each image pixel or object from the first time is compared with the corresponding pixel or object from the second time in order to derive the degree of change between the two times.
• Most commonly, differences in radiance values are taken as a measure of change.
• Differences in radiance values indicating significant ("real") changes have to be large compared to radiance changes due to other factors. The aim of pre-processing is therefore to correct the radiance differences caused by variations in solar illumination, atmospheric conditions, sensor performance and geometric distortion [Singh 1989], respectively.
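The basic bitemporal comparison can be sketched as image differencing with a simple automatic threshold; the k-sigma rule and the function name are illustrative conventions, not from the slides:

```python
import numpy as np

def change_map(t1, t2, k=2.0):
    """Pixel-based change detection by image differencing: the radiance
    difference between two co-registered acquisitions is flagged as
    change where it deviates from the mean difference by more than
    k standard deviations of the difference image."""
    diff = np.asarray(t2, dtype=float) - np.asarray(t1, dtype=float)
    return np.abs(diff - diff.mean()) > k * diff.std()

rng = np.random.default_rng(0)
t1 = rng.normal(100, 1, (50, 50))
t2 = t1 + rng.normal(0, 1, (50, 50))   # no real change, only noise
t2[:5, :5] += 30                        # one genuinely changed patch
mask = change_map(t1, t2)
```

This also illustrates the pre-processing point above: any uncorrected radiometric offset between the dates would inflate the difference image and swamp the "real" changes.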

7. Change Detection: change of the image pixel
Change measure: grey values, texture, transformed values, class membership, …
Approaches: changes of
• the spectral or texture pixel values: arithmetic procedures, regression, change vector analysis, …
• transformed pixel values: principal component analysis, multivariate alteration detection, …
• the class memberships of the pixels: comparison of classifications, multitemporal classification, …

7. Change Detection: change of the image object (time 1 vs. time 2)
Change measure:
• layer features (mean, stdev, texture, …)
• shape features (area, ratio, position, direction, …)
• relations among neighbouring, sub- and super-objects
• object class membership

7. Change Detection: Object extraction
Given image data from two acquisition times, the segmentation could be carried out
1. by applying the segmentation parameters to the image data of one date and assigning the object borders to the image data of the other date,
2. on the basis of the bitemporal data set, or
3. separately for the two times.
[Niemeyer et al. 2008]

7. Change Detection: Feature extraction
• Common segmentation (1, 2):
- apparently time-invariant object features (e.g. shape features)
- time-variant object features (e.g. layer features)
• Separate segmentation (3):
- time-variant object features
- but: problems in the extraction of no-change objects due to overall variations

7. 2009 from Listner 2008] 53 . Change Detection: Bitemporal segmentation [Niemeyer et al.

7. Change Detection: Bitemporal segmentation
Procedure
• For testing the plausibility of the merges, Listner (2008) suggested two different techniques, the so-called threshold test and the local best fitting test.
• Splitting of the segments was done either by global or universal segment adjustment.
• Implementation: The procedure was programmed and implemented using the development environment Matlab and the Matlab toolbox DIPimage.

7. Change Detection: Bitemporal segmentation
Global segment adjustment vs. universal segment adjustment. [Listner 2008]

7. Change Detection: Bitemporal segmentation [Niemeyer et al. 2009, from Listner 2008]

7. Change Detection: Bitemporal segmentation [Niemeyer et al. 2009, from Listner 2008]

7. Change Detection: Combined pixel-/object-based approach (OrthoEngine)

Input data: QuickBird, Esfahan Nuclear Centre. PAN (0.6 m), MS (2.4 m), July 2002 and July 2003.

7. Change Detection: Combined pixel-/object-based approach
Very-high resolution optical imagery: pre-processing
• image-to-image registration by contour matching or image correlation
• radiometric normalisation using no-change pixels
• pan-sharpening by wavelet transformation

Comparison of different pan-sharpening fusion techniques (2002 and pan-sharpened 2003 scenes): ARSIS, PC Spectral Sharpening, Gram-Schmidt Spectral Sharpening, Resolution Merge.

7. Change Detection: Combined pixel-/object-based approach
Pre-processed QuickBird images, July 2002 and July 2003. [Nussbaum & Niemeyer 2007] (Credit: DigitalGlobe)

7. Change Detection: Combined pixel-/object-based approach
Change detection: Multivariate Alteration Detection (MAD) [Nielsen 2007, Nielsen et al. 1998]
D = a^T X - b^T Y (time 1: X, time 2: Y)
• Canonical correlation; the MAD variates are orthogonal and invariant to linear/affine transformations.
• A fully automatic scheme gives regularized iterated MAD variates (MADs).
• Implemented as an ENVI extension [Canty 2006], download at: http://www.fz-juelich.de/ief/ief-ste/datapool/page/210/canty_7251.zip
(Credit: DigitalGlobe)

MAD components 1 (red), 2 (green), 3 (blue). Automatic threshold, automatic threshold * 2, automatic threshold * 3.

Multiresolution segmentation on the basis of the 2003 pan-sharpened MS data: object levels 1, 3 and 5.

7. Change Detection: Combined pixel-/object-based approach
Object extraction: chessboard segmentation and subsequent multiresolution segmentation. Process flow: AST_07 data, pixel level → AST_07 data, object level.

7. Change Detection: Combined pixel-/object-based approach
MAD transformation: change information given by the pixel's DN vs. change information given by the object's features. MAD 2 (red), MAD 3 (green), MAD 4 (blue). Gray indicates "no change"; different types of changes are represented by different MADs and thus colours.

Class hierarchy: classification of changes.

7. Change Detection: Combined pixel-/object-based approach
ASTER July 2000 and ASTER July 2001. Classification (using SEaTH for "industrial sites") provides thematic information on industrial sites; change detection (using MAD) provides change information. Changes given by MADs 2, 3, 4: gray indicates no change, colours indicate different types of changes within industrial areas. [Niemeyer & Nussbaum 2006]

7. Change Detection: Combined pixel-/object-based approach
Semantic classification using SEaTH [Nussbaum et al. 2006, Marpu et al. 2008]: object classes "buildings" and "streets", July 2002 and July 2003. [Nussbaum & Niemeyer 2007] (Credit: DigitalGlobe)

7. Change Detection: Combined pixel-/object-based approach
Semantic classification of changes. [Nussbaum & Niemeyer 2007] (Credit: DigitalGlobe)

7. Change Detection: Combined pixel-/object-based approach
Semantic classification using SEaTH [Nussbaum et al. 2006, Marpu et al. 2008]

[Nussbaum & Niemeyer 2007]


7. Change Detection: Combined pixel-/object-based approach
MAD transformation and semantic classification of changes

[Nussbaum & Niemeyer 2007]


7. Change Detection based on object features

Workflow: Image data (Time 1, Time 2) → Preprocessing → Object extraction (segmentation levels) → Feature extraction (feature views, i.e. layer features, shape features) → Change detection → Clustering of change objects → Map of changes → Post-classification processing → Improved map of changes

7. Change Detection based on object features
Multivariate Alteration Detection [Nielsen 2007, Nielsen et al. 1998]
• linear combination of the intensities of all N object features in the first image, acquired at time t1 and represented by the random vector F:
U = a^T F = a1 F1 + a2 F2 + … + aN FN
• linear combination of all N object features in the second image, acquired at time t2 and represented by the random vector G:
V = b^T G = b1 G1 + b2 G2 + … + bN GN
• scalar difference image:
D = U - V = a^T F - b^T G

• determination of a and b, so that the positive correlation between U and V is minimized (generalized eigenvalue problem)
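The determination of a and b can be sketched as a canonical correlation analysis followed by differencing. This is the plain, non-iterated MAD (the regularized iterated variant is not shown); function name and test data are illustrative:

```python
import numpy as np
from scipy.linalg import eigh

def mad_variates(F, G):
    """Sketch of Multivariate Alteration Detection: canonical
    correlation analysis of the two feature sets, then differencing
    the canonical variates. F, G: (N features x M samples) arrays.
    Returns the N MAD variates, least-correlated (most change) first."""
    N = F.shape[0]
    S = np.cov(F, G)                          # joint (2N x 2N) covariance
    Sff, Sfg = S[:N, :N], S[:N, N:]
    Sgf, Sgg = S[N:, :N], S[N:, N:]
    # generalized eigenproblem (Sfg Sgg^-1 Sgf) a = rho^2 Sff a;
    # eigh returns eigenvalues ascending, so the first eigenvector
    # pair is the least correlated one
    rho2, A = eigh(Sfg @ np.linalg.solve(Sgg, Sgf), Sff)
    B = np.linalg.solve(Sgg, Sgf @ A)         # b proportional to Sgg^-1 Sgf a
    U = A.T @ (F - F.mean(axis=1, keepdims=True))
    V = B.T @ (G - G.mean(axis=1, keepdims=True))
    # rescale to unit variance and align signs so corr(U_i, V_i) >= 0
    for i in range(N):
        U[i] /= U[i].std()
        V[i] /= V[i].std()
        if np.corrcoef(U[i], V[i])[0, 1] < 0:
            V[i] = -V[i]
    return U - V

rng = np.random.default_rng(1)
F = rng.normal(size=(3, 500))
G = 0.9 * F + 0.1 * rng.normal(size=(3, 500))  # strongly correlated "scenes"
D = mad_variates(F, G)
```

As stated on the next slide, the resulting difference images are mutually uncorrelated, and the variance of each MAD variate decreases with increasing canonical correlation.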

7. Change Detection based on object features
Multivariate Alteration Detection
• For a given number of bands N, the procedure returns N eigenvalues, N pairs of eigenvectors and N orthogonal (uncorrelated) difference images, referred to as the MAD variates.
• As a consequence, the difference image D contains the maximum spread in its pixel intensities and therefore maximum change information.
• Multivariate autocorrelation factor (MAF) transformation (minimum noise fraction transformation)
• Automatic thresholding (probability mixture model)

7. Change Detection based on object features
Change detection: change information given by object MADs. [Niemeyer et al. 2009]

7. Change Detection based on object features
Unsupervised classification of changes
• The MAD transformation enhances different types of changes within the object level rather than classifying them.
• Clustering by fuzzy maximum likelihood estimation (FMLE) (Gath and Geva, 1989).
• The fuzzy cluster membership of an object calculated in FMLE is the a-posteriori probability p(C|f) of an object (change) class C, given the object feature f.
• Advantages of forming elongated clusters and clusters of widely varying memberships.
MAD transformation of objects, with automatic thresholds: MAD 1 (R), MAD 2 (G), MAD 4 (B).
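Gath and Geva's FMLE is closely related to EM for a Gaussian mixture: the posterior p(C|f) plays the role of the fuzzy membership, and per-cluster covariance matrices allow the elongated clusters mentioned above. A minimal EM sketch as a stand-in (function name and test data are illustrative, not the FMLE implementation used in the slides):

```python
import numpy as np

def em_gaussian_mixture(X, k, iters=50):
    """Minimal EM for a Gaussian mixture over object features.
    X: (M samples x d features). Returns the posterior memberships
    p(C|f) (M x k) and the cluster means."""
    M, d = X.shape
    mu = X[np.linspace(0, M - 1, k, dtype=int)].astype(float)  # spread-out init
    cov = np.array([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    pi = np.full(k, 1 / k)
    for _ in range(iters):
        # E step: log posterior membership for every object and class
        logp = np.empty((M, k))
        for c in range(k):
            diff = X - mu[c]
            inv = np.linalg.inv(cov[c])
            logp[:, c] = (np.log(pi[c])
                          - 0.5 * np.linalg.slogdet(cov[c])[1]
                          - 0.5 * np.einsum('ij,jk,ik->i', diff, inv, diff))
        w = np.exp(logp - logp.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M step: update priors, means, covariances
        nk = w.sum(axis=0)
        pi = nk / M
        mu = (w.T @ X) / nk[:, None]
        for c in range(k):
            diff = X - mu[c]
            cov[c] = (w[:, c, None] * diff).T @ diff / nk[c] + 1e-6 * np.eye(d)
    return w, mu

# two elongated point clouds -> two clusters with crisp memberships
rng = np.random.default_rng(42)
a = rng.normal([0, 0], [3.0, 0.3], (200, 2))
b = rng.normal([0, 5], [3.0, 0.3], (200, 2))
w, mu = em_gaussian_mixture(np.vstack([a, b]), k=2)
```

Because each cluster carries its own full covariance, the stretched clouds are modelled without being split, which is the property the slide highlights for FMLE.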

7. Change Detection based on object features
MAD transformation based on object features (layer features, shape features): MAF/MAD components 3 (red), 4 (green) and 5 (blue). Grey indicates no change; different colors indicate different types of changes.

7. Change Detection based on object features
FMLE classification of MAD components (layer features, shape features): clustering for 6 classes.

References
Bachmann, F. 2008: Statistische Detektion und Analyse von Bildobjekt- und Bildelementveränderungen. Master Thesis, Geomonitoring Group, Institute of Mine-Surveying and Geodesy, TU Bergakademie Freiberg.
Bachmann, F., Kristinsdóttir, B., Bratskikh, A. & Niemeyer, I. 2008: Implications of Invariant Moments for Texture Analysis. Research paper, MatGeoS, 1st Workshop on Mathematical Geosciences, Freiberg (Germany), June 2008.
Blaschke, T., Lang, S. & Hay, G.J. (eds.) 2008: Object-Based Image Analysis. Spatial Concepts for Knowledge-Driven Remote Sensing Applications. Series: Lecture Notes in Geoinformation and Cartography. Springer, Berlin.
Canty, M.J. 2006: Image Analysis, Classification and Change Detection in Remote Sensing. With Algorithms for ENVI/IDL. CRC Press, Taylor & Francis.
Definiens 2004: eCognition User Guide 4. Definiens, Munich.
Definiens 2008: Definiens Developer 7 User Guide. Definiens, Munich.
Definiens 2008: Definiens Developer 7 Reference Book. Definiens, Munich.
Listner, C. 2008: Bildsegmentierung für die objektbasierte Änderungsdetektion digitaler Satellitenbilder. Diploma Thesis, Geomonitoring Group, Institute of Mine-Surveying and Geodesy, TU Bergakademie Freiberg.
Marpu, P.R., Niemeyer, I., Nussbaum, S. & Gloaguen, R. 2008a: A Procedure for Automatic Object-based Classification. In: Blaschke, T., Lang, S. & Hay, G.J. (eds.) 2008, pp. 169-184.
Marpu, P.R. et al. 2008b: A class dependent neural network architecture for object-based classification (an architecture based on neural networks for segmentation and classification). GEOBIA 2008, Calgary, 6-7 August 2008.
Nielsen, A.A. 2007: The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyperspectral Data. IEEE Transactions on Image Processing 16(2), 463-472.
Nielsen, A.A., Conradsen, K. & Simpson, J.J. 1998: Multivariate alteration detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: New approaches to change detection studies. Remote Sensing of Environment 64, 1-19.
Niemeyer, I. & Nussbaum, S. 2006: Change detection - the potential for nuclear safeguards. In: Avenhaus, R., Kyriakopoulos, N., Richard, M. & Stein, G. (eds.): Verifying Treaty Compliance. Limiting Weapons of Mass Destruction and Monitoring Kyoto Protocol Provisions. Springer, Berlin, pp. 335-348.
Niemeyer, I., Marpu, P.R. & Nussbaum, S. 2008: Change Detection using Object Features. In: Blaschke, T., Lang, S. & Hay, G.J. (eds.) 2008, pp. 185-201.
Niemeyer, I. et al. 2009: Techniques for object-based image analysis. In: Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation (DGPF) e.V., Band 17 (in print).
Nussbaum, S., Niemeyer, I. & Canty, M.J. 2006: SEATH - A new tool for automated feature extraction in the context of object-based image analysis. In: Proc. 1st International Conference on Object-based Image Analysis (OBIA 2006), Salzburg, 4-5 July 2006. ISPRS Volume No. XXXVI - 4/C42.
Nussbaum, S. & Niemeyer, I. 2007: Automated extraction of change information from multispectral satellite imagery. ESARDA Bulletin 36, 19-25.
Singh, A. 1989: Digital change detection techniques using remotely-sensed data. International Journal of Remote Sensing 10(6), 989-1002.
