
• Ashtech • Mobile Augmented Reality

• Cadastres and Climate Change • Geomarketing

Magazine for Surveying, Mapping & GIS Professionals
March 2010
Volume 13

While You Were Out
When not sitting in front of my desktop PC, I try not to be occupied too much with gadgets
other than my mobile phone or iPod. Although I'm interested in gadgets, I'm not the type of
person who wants to have this week's new revolutionary and life-changing device that can
do even more than the one that came out the week before last. More interesting for me is
how a device is used and by whom (and for how long). I'm amazed by the sort of information
people share on the internet. It may not surprise you that there is a search engine
that tells you which people are not at home the moment you perform a search. The site is
called Please Rob Me. The idea behind the site is quite simple: it combines public Twitter
accounts that also use Foursquare, a location-based web service for leisure purposes. Once
you check in online, in a bar for instance, and publish this information on your public Twitter
account, it follows that you're not at home at the moment. Combine this in a search engine and
you have the perfect burglar tool. (It should be mentioned here that it was not the
intention of the makers to create a burglar tool, but to show people what can be done
with the information they share with thousands of other people.)
That is not all. Municipalities are also discovering the power of individuals with mobile
devices. The recent initiative called NYC BigApps deserves imitation: it is a software application
challenge in keeping with New York City's drive to become more transparent, accessible
and accountable. One of the winning applications is called Taxi Hack, which allows you
to review and share live comments on NYC taxis. Users are encouraged to review the
ride and everything that comes with it, in combination with the medallion number or driver's
license number. I'd like to point out that not only bad reviews are submitted; a
great deal of compliments on good drivers are posted as well. But what surprised me most about this
particular application was that taxi drivers themselves use it to comment on clients who
may have done something wrong.
In this issue of GeoInformatics, you will find more on this app challenge. Florian Fischer
takes a look at what's happening right now in the world of augmented reality and what's to
come. And the news keeps on coming: just as I finish this editorial, I read on a travel blog
that Google Goggles (an application for mobile phones running the Android operating
system) can be used for translation purposes. According to the blogger, who is a fervent
traveler, this is ideal for translating menus in foreign restaurants. What mobile devices
cannot do for you is order the meal. But they do make life much easier for you in the end.
Enjoy your reading!
Eric van Rees
March 2010
GeoInformatics provides coverage, analysis and
commentary with respect to the international surveying,
mapping and GIS industry.
Ruud Groothuis
Eric van Rees
Frank Arts
Florian Fischer
Job van Haaften
Huibert-Jan Lekkerkerk
Remco Takken
Joc Triglav
Contributing Writers
Leah Wood
Remco Takken
Joc Triglav
Simon Cottingham
Paul van der Molen
Rebecca Muhlenkort
Jasper Dekkers
Menno-Jan Kraak
Account Manager
Wilfred Westerhof
GeoInformatics is available at a yearly
subscription rate (8 issues) of €89.00.
To subscribe, fill in and return the electronic reply
card on our website or contact Janneke Bijleveld at
All enquiries should be submitted to
Ruud Groothuis
World Wide Web
GeoInformatics can be found at:
Graphic Design
Sander van der Kolk
ISSN 1387-0858
Copyright 2010. GeoInformatics: no material may
be reproduced without written permission.
GeoInformatics is published by
CMedia Productions BV
Postal address:
P.O. Box 231
8300 AE
The Netherlands
Tel.: +31 (0) 527 619 000
Fax: +31 (0) 527 620 989
Florian Fischer
Luigi Colombo
Barbara Marana
David J. Coleman
Yola Georgiadou
Job van Haaften
1Spatial and High Quality Geospatial Data
1Spatial is an innovator in the field of knowledge engineering. This term
covers geospatial data integration, harmonization and quality control.
With this, the company is putting high quality geospatial data at the
centre of its universe. GeoInformatics asked 1Spatial's Business
Development Director Steven Ramage about the company's current activities in the
geospatial business and how new technologies and concepts influence
the way people think about geospatial data.
Content
March 2010
Fusing with Other Intelligent Data
The Power of Full Motion Video 6
Ever-growing Global Risks
Political Risk Map 2010 10
Taking Spatial ETL Technology to New Heights
FME 2010 18
For Avon Fire & Rescue Service
Making Firefighting Safer with GIS 20
Trimble Mobile Mapping Technology
Belgian Road Sign Inventory Project 30
Multi-purpose Land Administration Systems
Cadastres and Climate Change 34
The Buzzword Explained
Geomarketing 40
New Experiences, Remarks and Prospects
Building Reconstruction and Texturing 44
Mobile Augmented Reality at a Glance
The Digital Sixth Sense 48
Why and What Do Individuals Contribute?
Volunteered Geographic Information 50
Educating Remote Sensing Techniques
Eduspace 54
Professional Grade GNSS Technology
The Rebirth of Ashtech 14
Definiens eCognition Server Software
Object-based Image Analysis 22
Innovating Knowledge Engineering
1Spatial and High Quality Geospatial Data 26
About Maps: Theory and Practice
By Menno-Jan Kraak 53
Page 26
Object-based Image Analysis
Definiens is a company active in image analysis. Not restricted to the
geospatial market only, the company offers solutions for all kinds of
imagery used in life sciences and the medical world.
GeoInformatics interviewed Ralph D. Humberg, Vice President of
Definiens' Earth Sciences division. Mr. Humberg joined Definiens in 2002
and is responsible for Definiens' global Earth Sciences business. He
talks about eCognition, Definiens' image analysis software used for
Earth Sciences. The latest release of the software is eCognition version
8, issued in November last year, along with a new internet portal.
Page 22
Latest News? Visit
March 2010
On the Cover:
Details of a point-textured model of the exteriors (the main façades)
of St Maria Maggiore (Italy). See article on page 44.
The Digital Sixth Sense
In these early days of 2010 Augmented Reality resounds throughout the
land. Smartphones eventually seem capable enough to provide a super-
imposed view of virtual and real worlds through the cameras view. This
vision awakes expectations of the big business. In the future show-own-
ers might stick virtual coupons on their shop-windows to be picked-up by
AR-flaneurs and thus attract new customers. This article will give a short
overview about current mobile Augmented Reality applications and the
expected development in the coming years.
FME 2010
With the release of FME 2010 in January, Safe Software is emphasizing its
stated commitment to improving spatial data access for organizations across
the globe. In fact, the technology has been enhanced in ways that make
spatial data more accessible and potentially more useful than ever.
Page 18
Calendar 58
Advertisers Index 58
Page 48
Page 44
Fusing with Other Intelligent Data
The Power of Full Motion Video
Intergraph's Defense and Intelligence Industry Manager Leah Wood discusses the Motion Video Exploitation solution, which
was shown at the US GeoINT conference in 2009 and the EU DGI 2010 conference in London. This new solution fuses multiple
motion video data streams with other intelligence data in one high-powered analytical environment. Also, the use of visually
displayed telemetry information allows Intergraph's technology to blend motion video into existing architectures and geo-fuse
with a vast amount of intelligence data that resides in other systems.
by Leah Wood
As today's military and intelligence organizations
support foreign military operations and
border security efforts, it is increasingly important
that they expand their data collection and
analytical capabilities beyond traditional
systems and methods. To this end, there is
increased interest in incorporating video data
sources, such as those from unmanned aerial
vehicles (UAVs) and other unmanned aircraft
systems (UASs), into existing analytical environments.
Intergraph has met the challenges of military
and intelligence agencies with proven solu-
tions since 1969, and continues its role as a
worldwide geospatial solutions provider with
innovative technology and products. The company
has an increased focus on expanding its rich
set of geospatial exploitation solutions with
applications that provide improved analysis of
motion video, integration with other forms of
intelligence and geospatial information, and
robust management and dissemination of
imagery and video data collections. These new
applications exploit the power of georefer-
enced video sources to create profound
improvements in analytical and decision-mak-
ing ability, and can be directly applied to the
emerging disciplines of wide-area persistent
surveillance and motion imagery intelligence.
In keeping with its commitment to the military
and intelligence community, Intergraph
launched its Motion Video Exploitation solu-
tion, which was shown at the US GeoINT con-
ference in 2009 and the EU DGI 2010 confer-
ence in London. This new solution fuses mul-
tiple motion video data streams with other
intelligence data in one high-powered analyti-
cal environment, where analysts can place clip
marks and annotations, and generate reports
and static GeoTiff images for broad dissemi-
nation. The company also relies on MISB com-
pliant KLV (key, length and value) data to pro-
vide the customer with accurate telemetry
information which can be visualized as a num-
ber of different geospatial features: aircraft
trackline, camera angle, video path trackline,
video path polygon, etc. The use of visually
displayed telemetry information allows
Intergraph's technology to blend motion video
into existing architectures and geo-fuse with a
vast amount of intelligence data that resides
in other systems.

UAV (unmanned aerial vehicle)
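The MISB-compliant KLV (key, length, value) telemetry the article mentions is, at its lowest level, a simple triplet encoding. As a rough illustration only (not Intergraph's implementation), the sketch below parses a KLV byte stream, assuming 16-byte universal keys and BER-encoded lengths as used in SMPTE 336M-style metadata; the function name `parse_klv` is a hypothetical example:

```python
def parse_klv(buf: bytes):
    """Parse a stream of KLV (key-length-value) triplets.

    Illustrative sketch assuming 16-byte universal keys and
    BER-encoded lengths (SMPTE 336M style). Returns a list of
    (key, value) pairs.
    """
    items = []
    i = 0
    while i + 17 <= len(buf):          # need at least a key + 1 length byte
        key = buf[i:i + 16]            # 16-byte universal key
        i += 16
        first = buf[i]
        i += 1
        if first & 0x80:               # BER long form: low 7 bits give the
            n = first & 0x7F           # number of subsequent length bytes
            length = int.from_bytes(buf[i:i + n], "big")
            i += n
        else:                          # BER short form: length fits in 7 bits
            length = first
        value = buf[i:i + length]
        i += length
        items.append((key, value))
    return items
```

Real MISB streams nest further local sets inside the value field and carry checksums, so a production decoder is considerably more involved; this only shows the triplet framing.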
Not Just for the Military Anymore
Global Positioning Satellites and the Internet
are two important examples of technologies
initially developed for military applications, but
that then transcended to the civilian space.
These two technology sets have dramatically
transformed government, business, and per-
sonal lives. UAV technology is also expected
to have a substantial impact in non-military
sectors and will experience rapid growth in the
coming years. According to a May 2008 report
by the U.S. Government Accountability Office,
the number of UASs being built for civilian uses
is expected to increase from 40 to 160 by 2017.
This includes widespread applications for law
enforcement, firefighting, and numerous envi-
ronmental and scientific purposes. The U.S.
Federal Aviation Administration (FAA) is current-
ly working to establish airspace safety regula-
tions that would allow the systems to achieve
wider-spread usage. Police in the UK are planning
to use UAVs for policing of major events,
including the 2012 Olympic Games.
High-level Workflow
A typical high-level workflow consists of four
key areas that must function in harmony to
properly support the end-to-end requirements
of the mission:
• Automated and manual capture of geospatial information, including imagery, video, and other sensor data
• Management of enterprise geospatial content, including traditional vector data sets (layers, features), imagery, video, and terrain models
• Integration and analysis of multiple overlapping sets of geospatial and non-geospatial data
• Visualization and dissemination through a variety of interfaces
It is important to note that Intergraph embraces
open standards for data storage as well as for
open dissemination of geospatial information,
such as through Open Geospatial Consortium
(OGC) Web services.
The Power of Geospatial Fusion
Video-based data sources provide the most
recent view of the battlefield and can augment
other forms of geospatial intelligence to provide
a richer, more detailed view of the area of
interest. To effectively use video as a source of
intelligence, however, the analyst needs to
seamlessly fuse the video with these other
types of intelligence, such as map features and
annotations. This is highly beneficial, as these
other sources can help orient the analyst's
point of view and improve understanding of
video content by eliminating the tunnel-vision
effect caused by viewing the video in a dedicated
video window. Intergraph developed a
solution that supports this direct fusion and
provides a rich decision-support environment.
Forensic Video Analysis and Real-time Quality Enhancement
In some cases, even with proper content management
tools and powerful data integration
and visualization tools, poor-quality video can
hamper the analytical process. Sometimes
the original captured video is of poor quality
or is unusable due to flight path, altitude, tilting
and buffeting of the aircraft, and other factors.
Therefore, it is important to provide technology
that can work in a modular fashion to perform
real-time enhancements and corrections on the
video, such as removing atmospheric distortion,
correcting for shadows that affect brightness
and contrast, and stabilizing jittery video.
Intergraph provides state-of-the-art patented
technology for performing these types of
enhancements, increasing the usefulness of
UAV video in a real-time and forensic capacity.
Furthermore, Intergraph has a strong history of
deployment of its forensic video technology in
the public safety and law enforcement sectors
through its Video Analyst product, which has
traditionally been used to enhance and analyze
video from closed-circuit video systems and
dashboard cameras in police and other emergency
vehicles. Intergraph enhanced this post-collection
technology to function in real-time
mode to bring these capabilities to
operational environments.
Generation of Georeferenced Imagery from Video
A key step towards achieving the fusion of a
video data source with other forms of intelligence
is generating a georeferenced image as
the result of stitching together or mosaicking
hundreds or thousands of individual video
frames. Intergraph's software automatically generates
this georeferenced image, which can then
be seamlessly integrated with other forms of
static data, such as aerial photos, satellite
imagery, or geospatial layers and features. This
process can dramatically improve the clarity and
accuracy of the video, enabling accurate analysis
of the video. This video mosaic capability
provides a mechanism to glean additional
details from the entire collection sequence that
could not be obtained from individual frames.
Geospatial Content Management
Another important factor in using video for analytical
purposes is the ability to easily query
vast archives of video for specific clips that
meet an analyst's search criteria, and to rapidly
deliver the results to the analyst's exploitation
environment. Intergraph provides technology to
automate the management of large amounts of
satellite imagery, motion video, aerial photos,
elevation data, and other digital files that are
essential to the geospatial intelligence exploitation
workflow. Intergraph's TerraShare, a commercial
off-the-shelf (COTS) product for enterprise
image and elevation management, can
provide multiple users with transparent access
to large amounts of common imagery. This can
greatly reduce the time from collection to
exploitation, while improving efficiency, collab-
oration, and quality.
Automated Geospatial Content Ingest
Although image management systems such as
TerraShare provide a robust solution for the
storage and distribution of data, many organi-
zations still need methods and technologies
that ease the administrative burden of finding,
preparing, and uploading the data into these
archives. These processes can consume a significant
amount of the operators' and analysts'
time. As larger volumes of high-resolution data
are being collected, and as the turnaround time
for results is being compressed, it is essential
that the image ingest part of the overall work-
flow be automated to the greatest extent pos-
sible. Today's fast-paced, dynamic environment
also demands around-the-clock monitoring and
processing of new data. It is clear the only way
to ease this burden and realize the full value
of these expensive and complex collection
assets is to implement a mechanism for
automating the ingest, organization, and pre-
processing of new imagery as it becomes avail-
able. Intergraph's TerraShare Automatic Data
Ingest technology provides these capabilities.
In cases where satellite imagery or aerial photos
are not recent enough, the video mosaicking process described above provides
a more current representation of an area, which
can then be compared to previous images to
form a foundation for change detection from
video sources. This also reduces the workload
for analysts, since they can view the finished
product more quickly and completely than by
viewing the video in sequence. Since the result-
ing image is made up of multiple frames that
overlap to some degree, the mosaic can provide
a clearer representation of the area of coverage,
which is essential for exploitation.
3D Visualization and Motion Video
To achieve successful results, it is important
to work diligently towards simplifying the user
experience. To achieve this, Intergraph has
partnered with Skyline Software to incorporate
its 3D visualization and fly-through technolo-
gy into its motion video solution. Intergraph chose
this partnership specifically due to Skyline's
ability to seamlessly incorporate georeferenced,
real-time video into the 3D environment,
along with satellite imagery draped over
terrain models, 3D models, and the dynamic
location of moving vehicles on the ground and
in the air. This reduces the overall number of
applications required to visualize and analyze
the wide variety of static and dynamic data involved.
The integrated suite of products provides a
rich and intuitive experience, and at the same
time, is built on an architecture that provides
support for direct connections to OGC Web ser-
vices, Oracle spatial databases, and image
libraries. These connections persist during fly-
throughs of the scenes, providing the most
up-to-date representation possible.
As defense and intelligence organizations work
to expand the use of motion video sources for
more widespread purposes, it is important to
appreciate that many civilian organizations are
also incorporating video data sources into their
existing processes and systems. Many civilian
organizations, as well as national and regional
governments, are establishing offices and pro-
grams to address unmanned aerial systems. As
these systems become more reliable and eco-
nomical, and as policies are implemented, a
vast new array of new and innovative applica-
tions will emerge.
To make the most effective use of aerial video
collection in a civilian and military context, it
will be extremely important for these organiza-
tions to implement the types of technologies
that provide reliable enterprise data manage-
ment, fusion with other forms of geospatial
information, cleanup of distorted or jittery
video, and superior analytical abilities.
The combination of these components is the
key to providing the right information at the
right time to solve the problem at hand, achieving
improved analytical quality, performance, and
superior decision-making. Intergraph works with
its partners and customers to provide these
types of essential capabilities.
Military and intelligence agencies are faced with
the need to adapt to wider-reaching demands
and quicker response times than they have in
the past. Furthermore, they are assimilating and
analyzing more available data than ever before,
such as high-resolution imagery, real-time video,
and GPS-tracked objects. Never before has
there been a greater focus worldwide on secu-
rity and emergency preparedness. Todays mili-
tary and intelligence agencies must also meet
the expectations of people and organizations
who are dealing with natural disasters, an
unsettled economy, and devastating global conflicts.
Therefore, they need to quickly and effectively
collect and analyze relevant information that
helps make sense of current situations and
reduce conflict around the world. With the con-
tinued evolution in technology, such as service-
oriented architectures, advanced geospatial
applications, mobile technology, and speed and
method of transmission, now is the time to pro-
vide powerful and intuitive geospatial intelli-
gence solutions that can help military and intelligence
agencies be more effective and cost-efficient.
Leah Wood, Defense and Intelligence Industry
Manager at Intergraph.
Screenshot of the Motion Video Application (MVA)
Ever-growing Global Risks
Political Risk Map 2010
Political and financial instability remain a feature of the business landscape as a result of the recession, according to
Aon Risk Services, the global risk management and insurance brokerage business of Aon Corporation.
The company recently launched its 17th annual Political Risk Map.
By Remco Takken
Aon ranked the political risk of 209 coun-
tries and territories, measuring risk of curren-
cy inconvertibility and transfer; strikes, riots
and civil commotion; war; terrorism; sovereign
non-payment; political interference; supply
chain interruption; legal and regulatory risk.
The risk in each country was ranked Low,
Medium-Low, Medium, Medium-High, High or
Very High. A country with an elevated risk
is defined as any country with a risk ranked
Medium-Low, Medium, Medium-High, High or
Very High.
The results of the analysis are detailed on the
2010 Political Risk Map produced by Aon Risk
Services in partnership with Oxford Analytica,
an international consulting firm. Oxford
Analytica draws its analysis from a global net-
work of more than a thousand experts, includ-
ing senior faculty members at Oxford
University and at major research institutions
worldwide, to make independent judgments
about geopolitical risk.
More Red and Orange Zones
While the subsequent risk maps are not meant
to provide comparisons over time, Professor
Erwin Muller, CEO of Aon-owned COT, asserts
that this year's map does indeed show more
red and orange zones than those of previous
years: "Through the years, you can see the
situation decline." Marc van Nuland, Board
Member of Aon Risk Services, states, "The red
zone exemplifies a situation where it is very
hard or even impossible to insure trade risks."
When asked about the cartographic consequences,
Muller suggested that the introduction
of yet another theme map by Aon is
always a possibility. Indeed, this has happened
before, with Aon's Terrorist Threat Map
emerging out of the Political Risk Map.
This separate map for terrorism threats was
produced in 2006, 2007 and 2009, while general
economic threats were recently eliminated
from both maps. The one symbol on the
Political Risk Map still associated with terrorism
looks like an exploding bomb. It is
labelled "Strike, Riot, Civil Commotion,
Terrorism". Every year, a separate theme is lifted
out of the map's legend. Last year, a
Commodity Crunch Exposure Matrix was presented,
while 2008's Supply Chain Disruption
Risks theme is now permanently featured on
the Political Risk Map.
Food & Water Insecurity
The 2010 map introduces new indices looking
at food, agricultural commodities and
water supplies. Van Nuland: "This is where
we see the most applications for insurance,
which means that most of the trade and
most investments take place there." There are
two new icons on the 2010 map: Food and
Water Insecurity. They have been applied to
the thirty most high-risk countries, that is,
those countries potentially facing the most
severe food and water insecurity in the medium
to long term. These are all developing
countries, mostly in Africa, which is in keeping
with the conventional wisdom that the
impacts of climate change will rebound hardest
on the countries least responsible for
global warming.
Also, Israel now carries a symbol for Water
Insecurity. Its ongoing water issues are
extremely well known, but the situation is
apparently not severe enough to appear in
the Top 20.
Van Nuland picks out Mauritania
to make his point about the Top
10 vulnerable countries when it comes to food
and water risks. "There has been a considerable
amount of food aid, and the country is
already buying a lot of food from abroad.
Furthermore, this country is vulnerable to the
warming of the earth."
Not Meant to be Alarmist
The Food and Water Insecurity Indices are not
meant to be alarmist, though, according to Roger
Schwartz, senior vice-president of Aon Trade
Credit. "They are forward-looking assessments
designed to be an early warning. While the
increasing supply-side pressures of global warming
are more of a long-term issue, there are more
immediate concerns.
"We are already seeing instances of countries
that can't produce enough of certain foods
and in these financially difficult times cannot
afford to import these food supplies. This
places localized pressures on a country's
social balance and can lead to the sort of
geopolitical events we saw in 2007 and 2008.
"With the prospect of real economic recovery
over the next year or so, we are likely to see
increased demand for food and water globally.
With the current supply-side issues being
experienced in some areas, this will only add
to the existing pressures."
Global Agricultural Commodity Supply Risk
The Agricultural Commodity Supply Risk Index
offers a supply-side view, identifying the internationally
traded agricultural commodities at
greatest risk of a supply shock, and thus a
sudden global price spike.
Many of the world's most productive agricultural
regions are expected to see a decline in
productivity if temperatures rise.
"Cocoa tops the 2010 Agricultural Commodity
Supply Risk Index by some margin, as more
than 75 percent of global production is concentrated
in four countries at significant risk
of supply disruption," said Wilkin. "These
threats to cocoa supplies include political
instability, natural disasters, and water supply
insecurity."
For the first time in twenty years, India has
had to import rice for its own population, now that
the country's rice production has declined by
16 percent. That explains why India is mentioned
(with other rice-producing countries)
in the Top 3.

Political Map in detail
Movements on the 2010 Map
Nine countries or territories have been
upgraded to a lower risk level: Albania,
Myanmar/Burma, Colombia, South Africa, Sri
Lanka, East Timor, Vanuatu, Vietnam and the
Hong Kong Special Administrative Region of
the People's Republic of China.
Hong Kong saw its political stability rise in
recent times. Colombia has proved to be a
safer country than before, with better supply
chain quality. Sri Lanka has (at least for now)
won the war against the rebellious Tamil
Tigers. Albania has taken successful steps in
its fight against crime and corruption,
strengthening its case for EU membership
at the same time. Vietnam is an up-and-coming
country thanks to its cheap labor, and it has a
good regulatory system which tries to attract
investors from abroad. Myanmar has profited
from high prices for natural gas, which in
effect paid off its national debt.
Eighteen countries have seen conditions wors-
en, leading to a downgrade: Algeria, Argen -
tina, El Salvador, Equatorial Guinea, Ghana,
Honduras, Kazakhstan, Latvia, Madagascar,
Mauritania, Philippines, Puerto Rico,
Seychelles, Sudan, United Arab Emirates,
Ukraine, Venezuela and Yemen.
Van Nuland: "Early last year, the first
signs of the financial crisis could be seen,
beginning in Latvia. It turned into an economic
crisis and an increase in non-payment by
countries and private companies. The credit
risk is apparent, and it should now be seen
as a combination of political and economic risk."
Very High Risk Countries
Sudan, Venezuela and Yemen have been
added to the Very High category. Muller
asserts, They are joining Afghanistan, Congo
DRC, Iran, Iraq, North Korea, Somalia and
Yemen has been added to the Very High category
because recently it became painfully
apparent that its government system is about
to collapse. There's a risk of civil war, while at
the same time Al-Qaeda-like terrorist groups
have emerged.
In a fair number of instances, the High and
Very High risk countries have been allocated
seven or more symbols on the map. New on
the list is Eritrea, while others have seen an
increase in significant risks. Muller points out
that this is not only the case with countries
which have to deal with a lot of threats.
Ghana, for instance, currently has only three
symbols to its name, but they are all new for
this year. This is in line with the general trend
that Africa is a continent of growing risks.
Good News for Insurance Companies
Muller sees a trend toward more High
Risk countries, and more countries residing in
the Medium-High category. Indeed, looking at
the new map, one has to deal with considerably
more risk than in previous years. Van
Nuland: "Companies should continue to do
their business with flair, but they should also
ask themselves whether higher risks should
be covered in these insecure times." Of
course, this is good news for the insurance
companies Aon works for. This intermingling
within the mapmaker's organization, however,
reveals a possible weak spot in the scientifically
sound presentation of the data.
Van Nuland: "With this map, we want to
establish a growing awareness of higher risks
during the financial crisis. Smaller enterprises
in particular should ask themselves: can we
afford the risk of something going wrong?"
The combination of higher risk and financial
instability worries Van Nuland the most:
"We see it in real life: an increasing number
of claims."
Remco Takken is editor of GeoInformatics.
For more information, have a look at
Food and Water Insecurity
Professional Grade GNSS Technology
In January, Magellan Professional announced that its brand
name has been changed to Ashtech. Joc Triglav asked François
Erceau, Ashtech Vice-President and General Manager, to explain
why this happened and what the company's plans are for the
new year in terms of products and market strategies.
By Joc Triglav
Interview
Question: You decided to hit the GNSS
market with the reborn old
brand name and a totally new logo.
Please explain the reasons for these
changes and their main goals.
François Erceau: Magellan Professional was the brand we used within Magellan, our former corporation and a well known GPS company with a big presence in the consumer market. With the sale of the Magellan Consumer division to the MiTAC Corporation in early 2009, rights to and ownership of the brand name changed. Magellan Professional would only be able to continue using the Magellan name until the end of 2011. We thought, "Why wait?" We felt it was better to immediately clarify and strengthen our commitment to the professional business by moving more quickly to re-identify ourselves with our own brand in the professional market. The equity we had in the Ashtech brand was so incredible that it would have been a waste not to leverage it. The renaming benefits us because of Ashtech's early and deeply rooted presence in the high-precision GPS and later GPS/GLONASS application markets such as surveying, GIS and OEM boards. We see it as the rebirth of Ashtech. The Ashtech brand has long stood for technology, precision and innovation. This is exactly what we offer our customers and what we want to convey with our new logo. It sports a trendy new look that upgrades the renowned Ashtech brand, bringing it firmly into the 21st century.
Q: Please define the main global target high-performance application markets and commercial positioning for Ashtech's products.
FE: High-performance applications exist today across a broad array of markets in land, air and sea applications. We understand high-performance to encompass automation, RTK positioning and centimetre-level accuracy. It also means fast and robust signal processing. Harsh environments also demand specific high-technology features, such as strong multi-path mitigation, multi-constellation tracking, and redundancy of the solution. Heading and relative positioning are also outputs expected from these high-end applications.
François Erceau
The Rebirth of Ashtech
As a leading GNSS manufacturer, we deliver high-performance positioning solutions to OEMs, integrators, value-added resellers, distributors and end-users. Obviously, from a pure GNSS technology perspective, performance has always been linked to the accuracy, the reliability of the data, and the speed of the processing solution. However, performance cannot be defined solely in terms of a product's GNSS performance. Design, connectivity, ease of use and interoperability are integral components of the definition of high-performance. That's our belief and our commercial positioning for each of the solutions we offer to the market.
Q: How is Ashtech segmenting its GPS and GNSS product
line, especially regarding quality and performance, to cover
the needs of these application markets? What are your flagship products in individual application market segments?
FE: Professional grade GNSS technology is the core of every Ashtech solution and our range segmentation is based on customer expectations, as expressed in our motto "Right feature, Right time, Right price". Our portfolio ranges from entry-level budget solutions, mostly based on single-frequency technology, up to fully featured multi-frequency, multi-constellation offerings. Whatever the level of investment made by the customer in his Ashtech product, quality is never at risk.
We segment our GNSS portfolio into three categories: Surveying, GIS and OEM boards. Depending on the application, positioning accuracy varies from the metre level down to the centimetre level in real time, with or without advanced RTK features such as heading, relative positioning, and the ability to work in harsh environments. Our portfolio of technology and solutions complies with those requirements. The accuracy, the real-time capability, the connectivity features and the number of GNSS signals are key elements that differentiate the products within our range.
In Surveying, our flagship products are the ProMark 500 GNSS RTK system, mostly used on land, as well as our ProFlex series, which are very popular for a variety of marine survey and remote sensing applications but are also increasingly adopted by machine guidance integrators.
In GIS applications, the MobileMapper 6 handheld GPS is showing great market success thanks to its sub-metre post-processing capability, a unique offering at that price point.
In OEM boards, the GG12W is very successful in aerospace, while our newly released GPS/GLONASS board, the MB500, has shown promising results with various integrators in navigation, marine and other markets. The DG14 remains very solid as an outstanding SBAS-enabled L1 board offering RTK.
Q: Ashtech probably plans a number of product launches and initiatives this year to provide new and compelling offerings to GNSS professionals. Please outline the main items briefly for our readers.
FE: The 2010 roadmap is very ambitious for Ashtech. We are planning
several major upgrades to enhance our latest generation of products,
as well as the introduction of several completely
new products. The upgrades will impact our GNSS
board offerings as well as our GIS and Surveying
product lines. This January, we released ProMark
500 V4, the newest version of our renowned RTK
GNSS system. We are also launching two entirely
new rugged RTK sensors, the ProFlex Lite and
ProFlex Lite Duo. Last but not least, in February we released the newest generation of our mobile mapping software, MobileMapper Field, which is available on our best-selling MobileMapper 6 handheld GPS for GIS and mapping. We intend
to keep this fast pace of new technology and prod-
uct introductions all year long.
Q: The global process of transition from
old-style national coordinate reference
systems to new ITRS-based national
coordinate reference systems is in various
phases in a large number of countries. In your opinion, what are the crucial considerations and possible dangers in this process of transition, especially regarding the use of GNSS technologies and solutions?
FE: Coordinates provided by GNSS technologies and solutions are ITRS-based coordinates. They can be easily expressed in any new ITRS-based national coordinate reference system without any loss of accuracy. The main issue is the transformation of existing coordinates expressed in "old-style" national coordinate reference systems, as this transformation is always specific and approximated. What will be key during the transition is to provide as many details as possible on the reference system in which any coordinates are expressed, including the reference date of the coordinate system, as ITRS-based systems are time dependent, in order to avoid tremendous loss in accuracy and so preserve complete trust in GNSS solutions. Considering the accuracy we achieve today, telling where a point is on a map means telling where this point was at the time the map was elaborated. It does not matter that the point may have moved since that time, together with the Earth's crust in the area, since GPS, and now GNSS, use appropriate coordinate transformations. This has long been a key area of activity for Ashtech. That being said, Ashtech has dedicated specific resources to ease this transition for the benefit of the end-user. Once the transition is completed, all coordinates will be better harmonized whatever the reference systems used.
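The "specific and approximated" transformation mentioned above is classically a 7-parameter Helmert (similarity) transformation between geocentric frames. The following sketch is illustrative only, not Ashtech's implementation: the `Helmert` class and all parameter values are made up for demonstration, and real parameters come from the national geodetic authority (for ITRS realizations they are additionally time dependent, p(t) = p(t0) + dp/dt · (t − t0)).

```python
import math
from dataclasses import dataclass

@dataclass
class Helmert:
    """7-parameter similarity (Helmert) transformation between two
    geocentric Cartesian frames: translations in metres, scale in
    parts per million, rotations in arc-seconds (position-vector
    convention, small-angle approximation)."""
    tx: float = 0.0
    ty: float = 0.0
    tz: float = 0.0
    ppm: float = 0.0
    rx: float = 0.0
    ry: float = 0.0
    rz: float = 0.0

    def apply(self, x: float, y: float, z: float):
        s = 1.0 + self.ppm * 1e-6
        # arc-seconds -> radians for the small-angle rotation matrix
        a, b, c = (math.radians(v / 3600.0) for v in (self.rx, self.ry, self.rz))
        return (self.tx + s * (x - c * y + b * z),
                self.ty + s * (c * x + y - a * z),
                self.tz + s * (-b * x + a * y + z))

# Illustrative, made-up parameters for an "old-style" datum shift:
old_to_new = Helmert(tx=-89.5, ty=-93.8, tz=-123.1, ppm=-1.2)
x, y, z = old_to_new.apply(3980581.0, -100.0, 4966825.0)
```

Because the rotations use a small-angle approximation and the parameters themselves are empirically fitted, the result is inherently approximate, which is exactly why knowing the reference frame and its reference date matters.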
Q: This year a wealth of new developments in GNSS satellite launches and operations is scheduled and announced, such as the first launch of a GPS IIF satellite with the new L5 signal, the launch of two Galileo IOV satellites, the first launch of a GLONASS-K satellite with new CDMA signals, as well as additional launches of Chinese Compass satellites and first launches of Japanese QZSS and Indian GAGAN, etc. How is Ashtech strategically, technologically and operationally adapting to these novelties and changes in the GNSS business?
FE: As a leader, Ashtech is on top of GNSS technology development. The growing number and evolving nature of satellite systems is a fantastic opportunity for growth for us. We were the first with a GLONASS GNSS board (GG12), and we are the first, with BLADE technology, to truly blend L1/L2 GPS, GLONASS and SBAS signals together. As many customers are telling us, BLADE delivers the best PVT computation as of today. Moreover, BLADE is already tuned to accommodate all new upcoming signals. Ashtech's direction is to keep customers thrilled with what our GNSS technology delivers as real user benefits, and to make sure it is constantly up to date with the evolution of the satellite systems.
Q: Several nations, at least six today, are already operating or initiating their own above-mentioned global or regional
satellite navigation systems. Even more
such systems are expected to develop
and operate in the next ten years. Are
we entering the golden era of GNSS?
How will this abundance of satellite PNT
services influence global society? How
will our daily life and business change?
FE: Survey and GIS data creation will continue to grow, with more layers of information from geographical to demographic data. More and more people in the field will collect and update data over larger territories. Many additional workers will deploy GIS mobile technologies in their everyday work life, in many applications from utilities and homeland security to agriculture, natural resources management, and oil & gas markets. The addition of more satellites expands the operating domain for RTK, with longer ranges and efficient positioning at more challenging sites where complementary techniques were needed in the past. Precise positioning will increasingly bring everyday productivity gains for many applications, in dredging, construction, road building, mining, forestry, in any type of asset management, machine guidance, or fleet tracking. In every one of those fields, Ashtech is a leader and will continue to bring the innovation expected by its customers.
Q: To finish, I definitely haven't asked you everything you wanted to say to our readers. So please take this opportunity to address our readers with your closing remarks.
FE: Ashtech's rebirth is good news for the professional GNSS industry. Our roots are in Silicon Valley, but we also benefit greatly from our Magellan heritage, especially great know-how in the design of lightweight handheld units for Survey and GIS applications. With all its assets, Ashtech is well positioned to innovate in the market. Ashtech will be part of the move to GNSS modernization in the next 10 years. Beyond this, Ashtech will be offering new alternatives to customers within an open market where interoperability and standardization are becoming the rule. The times when you could capture a customer for long with one product or one technology are over. Ashtech is a venture for the 21st century within a new open market.
Joc Triglav is editor of GeoInformatics.
Taking Spatial ETL Technology to New Heights
With the release of FME 2010 in January, Safe Software is emphasizing its stated commitment to improve spatial
data access for organizations across the globe. In fact, the technology has been enhanced in ways that make
spatial data more accessible and potentially more useful than ever before. According to Safe, usability
enhancements in FME 2010 bring greater productivity for existing users, speed the learning curve for people who are new to the product, and enable faster adoption, broader deployment and creative new applications
of FME technology throughout an organization.
By the editors
Greater Data Access and Sharing
Safe Software has reported that nearly 1,000 of
the improvements introduced in FME 2010 are
a direct result of customer feedback. Users iden-
tified specific changes in FME that could help
them to improve spatial data access and
address their data interoperability challenges.
As with previous releases, FME 2010 continues
to expand its support for various data formats
and coordinate systems to provide what Safe calls "unparalleled data accessibility" and facilitate greater sharing. This includes not only
adding support for a wide variety of new for-
mats and coordinate systems, but also enhanc-
ing the way the software handles current popu-
lar formats, including 3D.
For instance, based on customer feedback it
became clear to the Safe Software team that GIS
professionals are facing increasing demands to
share their spatial data with non-GIS users and
decision makers. To better facilitate this shar-
ing, many FME users turn to familiar output for-
mats, such as Google Earth (KML) and Adobe
Acrobat (PDF), and FME 2010 is designed to
make the creation and output of these formats
much faster and easier.
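Under the hood, KML output of this kind is plain XML. As a minimal, hedged sketch of what such an export contains (the `timed_placemark` helper is illustrative and not part of FME; element names follow the OGC KML schema), here is a hand-rolled, time-stamped placemark of the sort a time-aware viewer can animate:

```python
import xml.etree.ElementTree as ET

def timed_placemark(name: str, lon: float, lat: float, when: str) -> str:
    """Build a KML Placemark carrying a <TimeStamp>, the element that
    time-aware viewers such as Google Earth's time slider animate."""
    pm = ET.Element("Placemark")
    ET.SubElement(pm, "name").text = name
    ts = ET.SubElement(pm, "TimeStamp")
    ET.SubElement(ts, "when").text = when  # ISO 8601 date/time
    point = ET.SubElement(pm, "Point")
    # KML orders coordinates as lon,lat,altitude
    ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(pm, encoding="unicode")

kml = timed_placemark("Sensor reading", -123.1, 49.3, "2010-03-01T12:00:00Z")
```

Generating such fragments by hand quickly becomes tedious for thousands of features, which is the gap that dedicated transformation tooling fills.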
According to Safe, FME 2010 now supports
more than 250 different data formats overall,
further empowering GIS professionals to share
information with new user communities. This
includes new support for common statistics for-
mats including IBM PASW (SPSS), R Statistical
Data and SAS (Statistical Analysis System). The
company highlights that this statistical format
support opens up a whole new category of
business applications for use with FME, ideally
allowing improved GIS planning, analysis and
decision making.
With FME 2010, users can integrate a variety of data types to create a realistic, integrated 3D model.
FME 2010
As has been highlighted many times in the
pages of GeoInformatics, there is a growing
market interest in 3D data access and model-
ing. To address this need, FME 2010 has been
enhanced to deliver expanded support for these
3D formats, including vastly enhanced 3D
object texture handling. By enabling users to
create realistic multi-dimensional models that
integrate data from a wide variety of data types,
FME 2010 should facilitate both better data
visualization and improved communication.
Among the changes: FME 2010 now includes
support for Autodesk Civil 3D and 3ds, COLLADA, Google SketchUp and Presagis OpenFlight.
The new release also introduces support for
1,850 additional coordinate systems, bringing
the complete total to more than 5,300. So
whatever coordinate systems end-users require,
they can quickly get usable data.
Dynamic Workflow, Faster Workspace
According to Safe, FME 2010's new dynamic workflows will provide users with greater flexibility and convenience, saving time and improving efficiency under a wide variety of scenarios.
These workflows (called workspaces in FME)
allow GIS professionals to use either the origi-
nal source schema or a separate schema tem-
plate at runtime, creating workspaces that are
entirely schema-independent. Users can thus
design workspaces and perform translations on
any dataset at any time, and can even create a
single workspace to perform quick translations,
and even transformations, on multiple source
datasets whose schema is unpredictable.
Conversely, users can build a template
workspace to quickly repeat the same transla-
tion or transformation task.
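The schema-independent idea described above can be sketched generically in plain Python (this is not FME's actual workspace machinery; the `translate` helper and the CSV format are stand-ins chosen for brevity): the output schema is read from the source at runtime, so one routine handles any dataset.

```python
import csv
import io

def translate(src: str, transform) -> str:
    """Schema-independent translation: the output schema is taken from
    the source at runtime, so the same 'workspace' handles any dataset
    without a pre-defined schema mapping."""
    reader = csv.DictReader(io.StringIO(src))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        writer.writerow(transform(row))  # per-record transformation
    return out.getvalue()

# Works on any schema without prior mapping:
data = "id,name\n1,depot\n2,hydrant\n"
result = translate(data, lambda r: {**r, "name": r["name"].upper()})
```

The design point is that the transformation is expressed against records, not against a fixed schema, which is what lets a single workflow be reused across datasets whose structure is unpredictable.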
In all of these scenarios, building a dynamic
workflow should save design and maintenance
time, while minimizing the number of
workspaces that need to be created. Ulf Månsson of SWECO, a Safe Software reseller in Sweden that's been testing the new approach, has already found dynamic workflows to be a huge benefit. SWECO has been using FME 2010 with the City of Göteborg to perform coordinate system conversions to meet new national requirements. "Our clients can now focus on the tasks that are interesting, not schema mapping," said Månsson.
Dynamic workflows also improve the efficiency of SWECO's teams. "As consultants, this makes us more efficient as we can give our clients their results and feedback immediately. We work on projects with thousands of datasets, and dynamic workflows are now saving us tremendous amounts of time. For example, we can now easily replicate huge datasets, not only files but even entire spatial databases, with a single workspace," explained Månsson.
On a related note, new usability enhancements help users more rapidly design and maintain these workspaces from the start. Layout enhancements in the product's graphical user interface, FME Workbench, improve efficiency by putting a broad spectrum of tools at your fingertips, including automated transformer help. With FME 2010's Quick Add Transformers, a user can type directly in a workspace canvas to add transformers, eliminating the need to search, click and drag. With the new Workspace Search feature, users can quickly find any object in a workspace, from attribute names and transformers to bookmarks, comments, and more.
Additional improvements to FME's data transformers both strengthen existing options and create new transformers that should add flexibility and speed to a user's workspace design experience. For example, FME users working with KML output datasets will benefit from the latest refinements to the KMLStyler, which has been simplified to make output settings more easily understandable.
New KML transformers such as the KMLTourBuilder and the KMLViewSetter also make it easy to create a guided end-user experience through an output dataset, and the new KMLTimeSetter helps users create timeline playbacks for data which is time sensitive, such as analysis for a tsunami or tornado event.
In another response to customer feedback, FME 2010 speeds creation of transformation workflows by offering a new tester family, filter family, styling family, and much more. The redesigned transformer dialogs, grouped into similar tasks, help users more easily set parameters. Tapping into fmepedia, an online encyclopedia of FME technical information and examples, through FME Workbench, users also have instant access to useful resources like a user-submitted custom transformer.
And in keeping with Safe Software's historical emphasis on steady performance improvements for greater usability, FME 2010 is faster and more responsive throughout, with key areas seeing speed increases of more than 20 percent, according to company representatives.
Broader Deployment Options, Powerful Security
Deployment and security were two key areas of focus in the new release, according to Safe. Considerable customer feedback led the company to add a SaaS-based cloud computing option to the existing desktop and server configurations for FME. Safe has also strengthened the security options to match. These changes should allow both greater flexibility and improved control at an enterprise or organizational level.
For example, new security capabilities provide granular data access controls so that GIS professionals can ensure that, despite broad deployment, they are sharing spatial data only with authorized users and preventing unauthorized access. FME Server 2010's flexible security framework should simplify and somewhat automate this process by allowing tighter integration within an existing environment, supporting common directory services such as LDAP-based Microsoft Active Directory integration, and delivering SSL encryption for web access.
FME has long been known for its desktop and server configurations, and for 2010 the company has added support for deployment in the cloud. This should provide considerably greater IT flexibility, for instance by allowing organizations to take advantage of the cloud's resource support during peak periods of FME use.
FME 2010 also provides for greater interoperability and fewer cross-platform restrictions, ensuring that organizations can deploy the software on their preferred system (Windows, Linux or Solaris). And FME 2010 offers a technology preview of Windows 64-bit, for those organizations interested in added horsepower on the Microsoft platform.
Safe Software and Spatial ETL
Thanks to its broad deployment and years of proven reliability, FME represents the gold standard in spatial ETL for many in the industry. Safe Software appears to have taken this responsibility to heart with FME 2010, providing nearly unlimited flexibility in data model transformation and distribution, and delivering the industry's most extensive format support for data translation and integration.
In March, Safe Software co-founders Don Murray and Dale Lutz, along with other Safe team members, will kick off 2010: An FME Odyssey, a 10-city tour across North America geared toward helping users to solve common data access challenges. According to the company, attendees will learn all about the latest enhancements in FME 2010 and, better still from our perspective, have the opportunity to discover how peers are navigating the vast universe of spatial data challenges. The company will also participate in FME User meetings in March and April across Europe, including the UK, Germany and France.
For Avon Fire & Rescue Service
Making Firefighting Safer with GIS
Fire and rescue services need to be armed with accurate risk information if they're to respond to emergencies effectively. Simon Cottingham, Public Safety Strategist at ESRI (UK), looks at how the innovative use of geographic information systems (GIS) is helping one of the UK's Fire & Rescue Services become better informed and therefore safer when responding to incidents.
By Simon Cottingham
Entering a burning building without knowing what's on the other side of the door is one of the dangers that firefighters face every day. Traditional systems used to capture risk information about a building's layout, usage or construction provide a degree of protection but are largely paper-based, with limitations in terms of accuracy, currency and distribution of information. In addressing this challenge, one pioneering fire service has adopted a GIS solution from ESRI (UK), which not only helps record all necessary risk data relating to a building but also greatly improves how information is accessed and shared with every fire station in the area.
Avon Fire & Rescue Service (AF&RS) watches
over almost 1,500 km² and a million people in south-west Britain, covering the areas of Bath,
Bristol, North Somerset and South
Gloucestershire. With headquarters in Bristol
city centre, the service has 23 fire stations,
almost 900 firefighters and last year attended
over 13,000 incidents. While firefighting remains
at the forefront of its work, the service is also
focused on protection and prevention.
Gathering Risk Information
Along with all other UK Fire & Rescue Services,
one of Avon's responsibilities is to provide risk assessments of large or significant buildings within its area, in accordance with UK legislation. Involving routine inspections, these provide
vide vital information about a specific building
and potential hazards within its immediate area,
to reduce the risks if a crew ever has to respond
to an incident at that location.
The majority of fire services store this informa-
tion in a paper-based system which makes it
time consuming to update and difficult to share.
Each of Avon's 23 fire stations had a folder, for
example, containing written information and
pictures relating to premises in the area and
these were updated by annual inspections.
Steve Cornish, Station Manager and Project
Manager for AF&RS, explains: "Manually updating the information would involve crews inspecting buildings, subsequently passing the information to another team member back at base to input all the data. When responding to an incident, the folder would be consulted, but if more than one station was involved in a major incident, they would not have access to this information until they arrived at the scene."
Ar t i cl e
March 2010
Avon Fire and Rescue Service Vehicle
Avon began looking for a different method of
managing its risk information to make respond-
ing to incidents safer for its crews. After evalu-
ating available solutions, AF&RS felt that ESRI
(UK) could provide the most suitable option.
"ESRI (UK) was the only firm that could provide what we wanted, namely a system which could make us more efficient at gathering information and improve our ability to share that data quickly and easily," commented Cornish.
Pioneering Fire and Rescue Service
ESRI (UK)'s DragonMap solution, which uses the
ESRI ArcGIS platform, went live in October 2009
following a short pilot phase. It is believed that
AF&RS is currently the UKs first service to use
GIS in this way.
Now each fire station uses a ruggedised laptop
to conduct risk inspections, inputting all the
necessary data using drag-and-drop icons onto
maps and building plans, stored in an ESRI
ArcGIS server. The information is then uploaded to a central database on return to the station, removing the need for re-keying or transcribing of notes. When any station subsequently connects its laptop to the system,
it simultaneously receives all updates available
for the entire Avon area.
"We wanted to ensure we could provide the latest risk information to crews about individual buildings, so they could look at it on the way to an incident," said Cornish. "Our 23 fire stations now have access to a single view of all the current risk data across our whole area, enabling better preparation and decision-making, especially when attending an area they're not familiar with."
DragonMap provides a simple but accurate method of capturing risk data on to building plans and maps: any information which might help a crew respond in the safest possible way.
Originally developed to help the military cap-
ture intelligence in the field, the software was
designed for non-GIS specialists, where the
gathering of location-based information had to
be as simple as possible and not require a user
to learn a complex piece of software.
The user interface was created to be clear and
intuitive with a simple drag-and-drop system
(hence the name Drag-on-Map), using pre-determined icons to represent hazards, objects or whatever is required. DragonMap lets the user
place the icons at the correct spot on a map of
a building layout or its surroundings, then eas-
ily add notes, web links and even digital pho-
tos. For a fire and rescue service, this provides
a very simple, quick and accurate way to record
all potential risks at a site in a consistent way.
More Efficient Data Capture
Crews have taken well to the new method of data collection, as Cornish highlights: "Data capture has become a lot easier with DragonMap. The new system gives ownership of the whole process to fire crews, which they've responded well to. After some training, all personnel feel at home taking the laptops out for inspections and when attending incidents."
In total, Avon has 37 laptops across 23 stations, one for each of its front-line response vehicles. Every laptop has been configured so that only the designated operators at each respective station can update the map information. When data is uploaded to the central database hosted at the headquarters in Bristol, all information is double-checked to ensure consistency.
The risk assessment process begins with each station identifying the buildings they want to inspect. "And because they know their station areas, they're familiar with which premises might cause potential problems. Over a 12-month period, a rolling programme will then see several premises inspected every month," said Cornish.
On a first inspection, crews use DragonMap to identify the outline of the premises they need to inspect. Once saved onto the system, firefighters start applying the icons. AF&RS uses symbols to represent a wide range of information, including building usage, number of expected occupants day or night, construction methods, location of hazardous materials, plus other details such as access routes or hydrant locations. Notes might be added, such as "storage of 10,000 litres of diesel fuel" or "asbestos roof tiles". All information is added as layers on the map, which can be switched on or off.
Seeing the Bigger Picture
In addition to icons at the plan level, DragonMap also enables different layers of data to be displayed along with maps of different scales. When responding to an incident, crews can use the GIS to zoom out and see a building in the context of its surroundings and locate the best access route, water hydrants and neighbouring properties. "Having DragonMap in the cab on the way to an incident gives crews an overall picture of the situation but also the finer details. Being able to access risk data quickly helps reduce potential harm to firefighters and the public but also limit any damage to property and disruption. The more we know about what we're facing the better," said Cornish.
In early January 2010, only a few weeks after the system went live, DragonMap was used in a major incident at Bristol's BOC (British Oxygen Company) bottling depot. A fire caused by a gas explosion involving a number of acetylene cylinders saw AF&RS attend the scene for eight days using multiple stations. "We used DragonMap to locate the best sources of water, using the hydrant data layer," explained Cornish. "A significant volume of cooling water was required and as several stations from the area were involved, all engines needed to find it easily. Normally crews know where hydrants are but if you're coming in from the surrounding area, being able to see them on a map with their exact location makes it a lot quicker."
Protecting Firefighters and the Public
With the majority of risk assessment at UK fire services still underpinned by paper-based systems, AF&RS is showing how the smart use of new technology can be applied to help create a greater, shared, situational understanding and reduce risk in what is already a dangerous profession. For fire and rescue services the world over, AF&RS serves as a great example of how GIS can be implemented in day-to-day operations to improve safety.
Future plans include expanding the system to give a more in-depth understanding of the area's infrastructure, by incorporating more data layers such as utilities, drainage or open water supplies. This can be a challenge as such data is often held in many different file formats. However, with ESRI's ArcGIS able to read over 70 different spatial data formats, AF&RS should find itself in a good position when it is ready to take this step.
"DragonMap helps us protect the public and our firefighters," concluded Cornish. Improving access to risk data helps increase not only the safety of people in a building, as firefighters can respond more effectively, but also the safety of fire crews, as they're a lot better informed.
Simon Cottingham,
Public Safety Strategist at ESRI (UK)
For more information on DragonMap please visit:
ESRI (UK) would like to thank the team at AF&RS
for their assistance in the production of this article.
Latest News? Visit
Article
March 2010
Definiens eCognition Server Software
Object-based Image Analysis
Definiens is a company active in image analysis. Not restricted to the geospatial market alone, the company offers solutions for all kinds of imagery used in the life sciences and the medical world. GeoInformatics interviewed Ralph D. Humberg, Vice President of Definiens' Earth Sciences division. Mr. Humberg joined Definiens in 2002 and is responsible for Definiens' global Earth Sciences business. He talks about eCognition, Definiens' image analysis software used for Earth Sciences.
The latest release of the software is eCognition version 8, issued in November last year, along with a new internet portal.
By Eric van Rees
Question: With version 8 you re-renamed your software back to eCognition. What was the reason for that back-to-the-roots naming?
Ralph Humberg: Definiens introduced object-based image analysis to
the geo-sciences industry in 2001 with the release of our eCognition
desktop software, see text box 1. We utilized the eCognition brand
name for the first four versions of our desktop software. As imaging
data sets grew in complexity and size, Definiens pioneered client-serv-
er capabilities beginning with eCognition version 5. With a broadened
suite of products associated with our client-server offerings, we intro-
duced additional brand names. Our server has always been known as
eCognition and the brand name possesses a great deal of cachet in
the industry. We believe the majority of the geo-sciences community is
now moving toward client-server processing and with the introduction
of eCognition version 8, we now refer to the complete product suite
under the eCognition brand.
Q: Many people are convinced of the power of the
software, but they find it very complicated to use or to
understand the principles behind it. Have you done any
technical developments to make the world of OBIA
(object-based image analysis, see text box 2) more
understandable for the non-experts?
RH: As the leading provider of OBIA for the geo-sciences, we have a large
user community, with an installed base of 3000 licensees in more than 80
countries. These users encompass a myriad of disciplines, from remote
sensing to geology, hydrology, forestry and urban planning - all of which
require image analysis technology. Traditionally, geo-spatial image analysis
software has been structured around remote sensing domain expertise.
Simplifying that expertise so that it is accessible by increasingly wider
audiences of non-experts across diverse industries is something we take
very seriously.
In the last few years we have noticed a distinction between our advanced and our more casual users, and this has informed our product development.
Figure caption: eCognition 8 comes with full 3D image analysis capabilities. Within the eCognition Labs section of our community, this functionality, which is still in its infancy for LiDAR analysis, is provided to our scientific user base.
In eCognition 8, a new QuickMap start-up mode enables our
casual users to readily complete a number of common image analysis
tasks. A simplified graphical interface, comprising click-and-classify type
tools, allows users to perform image analysis using important segmenta-
tion algorithms and classification standards. This built-in workflow was
developed using findings from training sessions with customers and
remote sensing students and is designed to be intuitive.
Exploring some of the more complex functionality requires an investment
of time, but our advanced users know this investment is justified, as
eCognition enables them to extract features and detect changes like no
other software.
We have also invested heavily in the development of an online communi-
ty to facilitate knowledge sharing and building. A broad range of pre-devel-
oped rule-sets and applications are available online, along with self-learn-
ing materials.
Q: With version 8 you have introduced many new features
to eCognition. Can you outline these and briefly explain
the most important ones?
RH: eCognition 8 represents the first of a new generation of image analy-
sis technologies that takes OBIA into entirely new dimensions. Our rapid
advancement stems in part from Definiens' significant investment in med-
ical imaging research and development. By leveraging technology devel-
oped for 3D MRI and CT image analysis, we have been able to tackle new
and exciting applications in the area of geo-data analysis.
For example, eCognition 8 can load and rasterize native LiDAR (.las) files,
fuse them with stacks of other data, such as high resolution images, and
conduct object-based image analysis in three dimensions. Future versions
of eCognition will significantly expand 3D functionality.
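The rasterization step mentioned here can be illustrated with a generic sketch (pure Python, with invented points and cell size; it is not eCognition's actual LiDAR pipeline): each (x, y, z) return is binned into a regular grid, keeping the highest elevation per cell.

```python
# Generic point-cloud rasterization: bin (x, y, z) points into a grid,
# keeping the maximum elevation per cell (a simple surface model).
def rasterize_max(points, cell_size, width, height):
    grid = [[None] * width for _ in range(height)]
    for x, y, z in points:
        col, row = int(x // cell_size), int(y // cell_size)
        if 0 <= row < height and 0 <= col < width:
            if grid[row][col] is None or z > grid[row][col]:
                grid[row][col] = z
    return grid

points = [(0.5, 0.5, 12.0), (0.7, 0.2, 15.5), (1.5, 0.5, 3.2)]
print(rasterize_max(points, cell_size=1.0, width=2, height=1))
# [[15.5, 3.2]]
```

A real workflow would read the points from a .las file and fuse the resulting grid with other raster bands before segmentation.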
The new QuickMap mode is another important feature because it not
only provides a simple interface for new users, but is also the first of what
we believe will be many standardized applications designed for eCognition.
Performance was another important aspect of this release. The maps and
regions function allows users to save processing time by analyzing differ-
ent geographical features via tailored segmentation approaches. While
other software packages employ the same analysis approach to an entire
image, eCognition segments objects of interest, such as forests or rivers,
using feature-specific approaches within the same image, processing the
remainder of a scene in lower resolution. The result is a more accurate
analysis of data with less processing time.
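The maps-and-regions idea can be caricatured as follows (the thresholds and class names are invented; Definiens' actual segmentation algorithms are far more sophisticated): a tailored rule is applied inside the region of interest, while the remainder of the scene gets a cheaper default treatment.

```python
# Hypothetical region-specific analysis: pixels inside the region of
# interest get a fine-grained rule; everything else is handled coarsely.
def classify(image, region_mask):
    out = []
    for row, mask_row in zip(image, region_mask):
        labels = []
        for value, in_region in zip(row, mask_row):
            if in_region:
                labels.append("water" if value < 50 else "forest")
            else:
                labels.append("unclassified")  # processed at low cost
        out.append(labels)
    return out

image = [[10, 120], [200, 30]]
mask = [[True, True], [False, False]]
print(classify(image, mask))
# [['water', 'forest'], ['unclassified', 'unclassified']]
```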
Finally, a strong focus on production workflows has resulted in a number
of new features designed for building powerful semi-automatic processing
and editing environments including image object generalization which
produces GIS-ready output.
Q: With the introduction of version 8, now again under the eCognition name, you also launched a new internet user portal. Is this portal meant for support purposes only, or is there something more behind it?
RH: While product support is certainly an important element, our eCognition community web portal provides users with a host of opportunities; we designed the portal as a center for knowledge sharing and user collaboration.
In the last few years we have noticed that local eCognition communities
were sprouting up in many countries, organizing their own conferences or
engaging in informal collaboration. We decided to provide our user com-
munity with a centralized online location to enable more streamlined inter-
action and collaboration, launching the eCognition online community in
July 2009. The response has been very encouraging and in just six months
our online community has grown to almost 1900 members.
The eCognition community is equipped with infrastructure for rule-set and
application exchange. So, for example, a forestry professional in Australia
can easily upload a rule-set for a project that can be downloaded and
used by a colleague in the United Kingdom. The community also provides
collaborative learning and teaching space, with user demonstrations, blogs,
message boards and video and image file sharing capabilities.
Text box 2: The Principles of Object-based Image Analysis
In conventional automated image analysis, objects of interest are
identified using a series of pixel-based filters. These filters, such as
intensity thresholds and gradients, compare pixels to their neighbors.
The goal is to transform the original image so that the areas of inter-
est can be extracted by simple threshold measures.
In developing Definiens Cognition Network Technology, our company
founder Gerd Binnig and his team made a radical departure from this
pixel-based approach.
Definiens Cognition Network Technology is object-based: It does not
simply identify the objects of interest but all of the intermediate
objects together with their interrelationships (context). In effect, a model
is built which is represented by Definiens unique Cognition Network.
This stores all of the objects, sub-objects and their semantic relation-
ships in a clear hierarchy.
The difference in approach is profound. It is the contextual information
contained in the Cognition Network that enables the automated extrac-
tion of information in exactly the same way as a human being makes
sense of the image.
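The contrast can be made concrete with a toy sketch (plain Python with invented data; this illustrates the object-based idea, not Cognition Network Technology itself): instead of stopping at a thresholded pixel mask, pixels are grouped into labelled objects whose properties, such as area, later rules can reason about.

```python
# Toy object extraction: flood-fill a binary mask into labelled objects,
# then reason about object properties (here: area) instead of raw pixels.
def extract_objects(mask):
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    objects, next_label = {}, 1
    for i in range(h):
        for j in range(w):
            if mask[i][j] and labels[i][j] == 0:
                stack, cells = [(i, j)], []
                labels[i][j] = next_label
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and labels[ny][nx] == 0:
                            labels[ny][nx] = next_label
                            stack.append((ny, nx))
                objects[next_label] = {"area": len(cells)}
                next_label += 1
    return objects

mask = [[1, 1, 0],
        [0, 0, 0],
        [0, 1, 1]]
print(extract_objects(mask))  # {1: {'area': 2}, 2: {'area': 2}}
```

In a full object-based system, each object would also record its sub-objects and neighbours, giving rules the contextual information the text box describes.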
Text box 1: Definiens eCognition
Definiens eCognition is image analysis software for fast, accurate geo-spatial information extraction from any kind of remote sensing imagery.
It has been used by data providers, value adders and remote sensing
professionals for nearly a decade.
Definiens eCognition offers a comprehensive range of tools to create
powerful image analysis applications that can handle all common data
sources, such as medium to high resolution satellite data, very high res-
olution aerial photography, LiDAR, radar and even hyperspectral data.
The eCognition product suite comprises three different components
which can be used independently or in combination to solve the most
challenging image analysis tasks:
eCognition Developer is the development environment for object-based image analysis. It is used to develop rule sets (or applications for eCognition Architect) for the automatic analysis of remote sensing data.
eCognition Architect enables non-technical professionals such as veg-
etation mapping experts, urban planners or foresters to leverage
Definiens technology. Users can easily configure, calibrate and exe-
cute image analysis workflows created in eCognition Developer.
eCognition Server provides a processing environment for the batch
execution of image analysis jobs.
In terms of product support, the community hosts a range of self-learning materials, with open discussion forums, a frequently updated FAQ section and a knowledge wiki utilized by Definiens consultants, trainers and users.
Over time, we envisage the eCognition community as an environment for
creating, downloading and sharing image analysis applications. The com-
munity is a vital component of the Definiens user experience.
Q: Can you imagine that one day there will be a kind of eCognition solutions market in which the users themselves are in business?
RH: Absolutely, and that day is not far away! A central part of our strate-
gy is to create such an online applications marketplace. OBIA has become
an instrumental tool for many disciplines and these disparate and diverse
projects require a range of applications. While Definiens will continue to
develop applications for users, we also want the whole community
engaged in this process. We anticipate eCognition will host an open appli-
cation marketplace, so that all users can share in the monetization of
application development.
Q: Can you briefly explain how eCognition can be coupled
with other types of Geo-Software in order to gain and
process geo-information from images?
RH: On a file level, eCognition has always been well integrated into the
Geo processing workflow. The software supports a large number of raster
formats, vector and now also point cloud data. There are three main areas
in which eCognition can be coupled with other geo-spatial software. First, the software can be utilized with GIS repositories, such as ArcGIS from ESRI or GE's Smallworld. Second, the software can be used in conjunction with traditional pre-processing remote-sensing software, such as that provided by PCI, Erdas and ENVI. Finally, eCognition can be used with geo-databases and representations of data, such as Oracle's databases or InfoSERVER from ESRI, a feature that is becoming increasingly important.
The most common workflow involves data being exported from the remote-
sensing software and imported into eCognition, but there are instances in
which back-loops are established. For example, Definiens eCognition can
import GIS layers, analyze them and export them to GIS software.
For customers who require more specific workflow integration, the software's development kit allows system integration on all levels.
Q: Your technology is also used in other application fields, mainly in medical imaging and life sciences. Can
you tell us something about the mutual influences of the
different application fields on the development of the
software products (e.g. 3D imaging)?
RH: The synergies between healthcare and geo-spatial industries may not
always be immediately obvious. In the geo-sciences we tend to look at
images of large sections of the earth's surface, while healthcare images
are scrutinized down to the cellular level. Nonetheless, the principles of
object-based image analysis inherent to Definiens technology can be
applied across industries.
For example, the broad adoption of OBIA driven by our Earth Sciences
practice led to the development of client-server capabilities to handle large
data sets. From our Medical Imaging practice area, new approaches for
object-based 3D and 4D image analysis were developed and transferred
to our Earth Sciences business, where they are now being utilized with
LiDAR data in eCognition 8. From our Life Sciences business, the struc-
tured workflows required for cell-based image analysis led to the devel-
opment of our eCognition Architect for solutions development.
Eric van Rees is editor in chief of GeoInformatics.
Many thanks to Peter Hoffman. For more information on Definiens,
have a look at
Screen capture of the eCognition Developer 8 user interface
Screen capture of a LiDAR application in the eCognition Architect 8 user interface
The eCognition community is the number one resource for everything related
to eCognition - for experienced as well as novice users.
Innovating Knowledge Engineering
1Spatial and High Quality
Geospatial Data
1Spatial is an innovator in the field of knowledge engineering. This term covers geospatial data integration,
harmonization and quality control. With this, the company is putting high quality geospatial data at the centre of
its universe. GeoInformatics asked 1Spatial's Business Development Director Steven Ramage about its current activities
in the geospatial business and how new technologies and concepts influence the way people think about geospatial data.
By Eric van Rees
Question: 1Spatial's focus is on geospatial data integration, harmonization and quality control. One of the services 1Spatial offers is online data quality validation. Could you describe exactly how this process works, and what other important solutions 1Spatial offers in its key activities today (the Radius Studio product, for example)?
Steven Ramage: 1Spatial has always placed high qual-
ity geospatial data at the centre of our universe. In
the last five years, advances in service oriented archi-
tectures, digital bandwidth and the advent of grid and
cloud computing have meant that we can now con-
sider providing online services more easily. This
means we can enable a much wider range of users
to access our technology, reducing their investment
in both equipment and expertise required to main-
tain it. A good example of our capabilities in this area
would be our participation in the Tele Atlas and
Oracle Innovation Center for Geospatial and Location
Based Services.
Our contribution to the Innovation Center is in the
form of geospatial data quality checking as an online
service. The target audience is anyone creating, man-
aging or integrating geospatial data with other
geospatial data or non-spatial data. The online
geospatial data quality validation service uses Radius
Studio as a rules-based data integration and quality control platform to
measure data quality in terms of logical and semantic consistency.
Radius Studio is a web-based, middle tier application for transforming,
analyzing, processing and modeling the often complex relationships
between different features in the landscape. It works by uploading a
customer dataset into the Radius Studio object data cache and then
applying spatial and non-spatial rules to determine whether the data
conforms to defined levels of quality. This is a critical activity that must
be undertaken prior to any centralization and/or harmonization of data
to repurpose and reuse the dataset. Radius Studio is hosted on an appli-
cation server such as Oracle Fusion Middleware and can import data
from a wide variety of relational databases and GIS data formats. As an
example, the service can be used to perform change detection for road network data from two different versions of the Tele Atlas MultiNet dataset.
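The rules-based checking pattern can be sketched generically (the rule names and feature attributes are invented; Radius Studio's actual rule language and spatial predicates are much richer): each feature is tested against a set of named predicates, and the failures are reported.

```python
# Generic rules-based data validation: each rule is a named predicate;
# features that fail any rule are reported with the rule name.
rules = {
    "has name": lambda f: bool(f.get("name")),
    "positive length": lambda f: f.get("length", 0) > 0,
}

def validate(features, rules):
    report = []
    for fid, feat in features.items():
        for rule_name, check in rules.items():
            if not check(feat):
                report.append((fid, rule_name))
    return report

features = {
    "road-1": {"name": "High St", "length": 120.0},
    "road-2": {"name": "", "length": 0},
}
print(validate(features, rules))
# [('road-2', 'has name'), ('road-2', 'positive length')]
```

Separating the rules from the checking engine is what lets data quality be measured against "defined levels of quality" rather than hard-coded logic.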
As part of the Innovation Center we demonstrated a use case for classifying change detection between two different versions of the Tele Atlas MultiNet Road Network Dataset, using the topological structuring capabilities of Radius Studio. Once structured, the rules engine detected places where changes in geometry had occurred in the data. These changes were then classified in terms of split and merged features, partially changed, trimmed or extended, and new or deleted features. The
results of the change detection use case were then viewed through
Oracle MapViewer (see Fig 1). As a multi-user, web-based application,
Radius Studio is ideally suited to implementation in a Software as a
Service business model.
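The change classification described above can be caricatured as follows (feature IDs are invented and the categories are simplified to new/deleted/changed; the real Radius Studio workflow is rule-driven and topological):

```python
# Simplified change detection between two versions of a dataset:
# features are compared by ID, then by geometry, and classified.
def detect_changes(old, new):
    changes = {}
    for fid in old:
        if fid not in new:
            changes[fid] = "deleted"
        elif old[fid] != new[fid]:
            changes[fid] = "geometry changed"
    for fid in new:
        if fid not in old:
            changes[fid] = "new"
    return changes

v1 = {"a": [(0, 0), (1, 0)], "b": [(2, 2), (3, 3)]}
v2 = {"a": [(0, 0), (1, 1)], "c": [(5, 5), (6, 6)]}
print(detect_changes(v1, v2))
# {'a': 'geometry changed', 'b': 'deleted', 'c': 'new'}
```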
I nt er vi ew
March 2010
Figure 1: Viewing areas where data has changed, using Radius Studio's rules-based approach
1Spatial was also involved in what Antti Jakobsson, the ESDIN Programme Manager from EuroGeographics, described as "a world first": a quality evaluation web service forming part of ESDIN Work Package 8, Metadata and quality guidelines. These online data quality validation services represent just one activity within our mission to aid economic development
by unlocking geospatial data and empowering business. Everything we
do is designed to achieve increased automation within the geospatial
data supply chain. Our view is that this is key to increasing operational
efficiency and ensuring that our customers can focus their resources on
analyzing and utilizing their data, and not be burdened with collecting
and managing it.
We group geospatial data integration, harmonization and quality control
together under the term Knowledge Engineering; this is a key objective
when automating processes across the geospatial data supply chain as
represented by Figure 2.
The management of knowledge is very much enhanced by using location
effectively. Our skills in geospatial data processing therefore provide a
very effective platform for knowledge engineering. The ultimate aim for
us is Knowledge Access, which is where the real economic benefit is
derived as organizations can share and reuse knowledge. We provide a
range of tools and systems that optimize the query of multiple knowl-
edge bases by using spatial indexes and by organizing information based
on its location.
At the start of the supply chain are the data providers, who collect and
organize the raw geospatial data and then turn it into information prod-
ucts. We provide them with Enterprise Information Architectures based
on mainstream IT workflows for orchestrating service-oriented compo-
nents. These geospatial data management and information processing
solutions support:
maintenance of accurate, consistent and up-to-date production data;
generation of real-world, change-driven products;
automated map generalization to create more products from the same data;
increased delivery frequency of products while benefiting from higher quality;
reduced long-term costs through automation and reduced manual intervention.
Last year at the prestigious CC The Exchange conference in the UK we
presented on 'Generalization: from research to reality.' This was to high-
light that 1Spatial is actively and successfully collaborating with industry
colleagues in this complex area of geospatial data management and help-
ing them to deliver business benefit. We held a workshop which addressed
the business aspects of generalization. It explained how generalization
needs to be considered as part of any geospatial data investment or data
management strategy. There was also a short demonstration of general-
ization in action and a presentation from one of our European generaliza-
tion customers in Germany (Sabine Urbanke from Landesvermessungsamt
Baden-Württemberg). As organizations strive to achieve greater operational efficiency and do more with less, we know generalization offers a solution.
Q: In a recent article*, you wrote about what is happening to
spatial data in the EU, in terms of INSPIRE, data quality and
SDIs. Here the topic of funding is addressed: in normal
circumstances additional funding would be made available for
facilitating labor-intensive work matching classes and
attributes in the source schema against the target schema of
metadata, ensuring data quality. Given that the current
economic times are not normal, did this influence the quality
of current INSPIRE datasets or is this something that could be
a problem in the future? If so, what can be done to counter it
so that data quality can be assured?
SR: It depends on your definition of current INSPIRE datasets, but cer-
tainly there will be issues with the base reference mapping or master
reference data that is being used to deliver INSPIRE compliant datasets.
No geospatial dataset is perfect, at least not in perpetuity. Data is con-
stantly changing and there are increasingly large volumes, so captur-
ing, managing and integrating the change in these master reference
datasets will introduce the same issues that any other non-INSPIRE
geospatial datasets will have to address; now and in the future.
I recently presented a paper on SDIs and regional competitiveness
and also transcribed that presentation into an article for another online
publication. The basic tenet was that the overriding goals of many SDIs
or regional SDIs like INSPIRE are tackling environmental awareness,
emergency preparedness, disaster response and the like. However, eco-
nomic benefits are also accrued and can potentially be used to main-
tain current levels of funding or obtain more funding by proving the
value of GI to the wider economy. So funding can be part of the pic-
ture if the geospatial data are perceived to be adding value.
For the data to be adding value they obviously need to be trusted. One
of the main problems with INSPIRE is the number of data providers
covering the different data themes. Data may be coming in from a num-
ber of organizations that have all collected the data in different ways
and for different purposes. A decision also has to be made regarding
the master data set at a sub-national and national level in order to
comply with the INSPIRE Directive and provide the most suitable, up-to-
date and accurate datasets. The data then have to be repurposed so
they are fit for the purpose of INSPIRE reporting; this includes aspects
like schema transformation from the source data schema to the target
INSPIRE schema, which is currently a manual process.
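Schema transformation of the kind described can be sketched as a simple attribute mapping (the field names here are invented for illustration; real INSPIRE schema mapping is far richer and, as noted, still largely manual):

```python
# Toy schema transformation: rename and convert source fields to a
# target schema using a declarative mapping of (target name, cast).
mapping = {
    "PARCEL_ID": ("inspireId", str),
    "AREA_M2": ("areaValue", float),
}

def transform(record, mapping):
    return {target: cast(record[source])
            for source, (target, cast) in mapping.items()
            if source in record}

source = {"PARCEL_ID": "4711", "AREA_M2": "523.4", "LEGACY": "x"}
print(transform(source, mapping))
# {'inspireId': '4711', 'areaValue': 523.4}
```

Capturing the mapping declaratively, rather than in hand-written conversion code, is one way such a manual process can gradually be automated.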
Steven Ramage
We are involved in two important projects that are working to docu-
ment best practice in geospatial data management and create the tech-
nical guidance for using Network Services to transform data to INSPIRE
schemas. ESDIN is the first, where we are working closely with
EuroGeographics and a number of their member organizations, as well
as others from the geospatial community. The other exercise is being
funded by the JRC as part of a contract with the European Commission.
Investigations undertaken to date suggest that data quality needs to
be viewed as an iterative process that ensures that data conforms to a
set of quality rules at different stages of the supply chain. We expect
that over the period of INSPIRE implementation, these quality rules will
be codified according to ISO quality principles so that quality assur-
ance will become an accepted part of the geospatial data supply chain.
All data can then be shared, queried and trusted.
Q: In addition to the last question, how do you value crowd
sourcing or user-generated data as opposed to traditionally-
generated data sources? Will the high quality data sources
from authoritative organizations that have strong metadata be
found to be more trustworthy because of this in the future?
SR: Again, it depends on the purpose or end use for those data: quality can only be described and measured according to fitness for a particular purpose. There is a great paper dedicated to this topic (Coote and Rackham, AGI 2008), which states:
"Quality is relative; there are no absolutes. Saying something is of high or low quality is meaningless unless it is expressed as a measure against a production specification or a user requirement."
So I think it is fair to say that traditional data sources have well-established and more rigorous production specifications and quality-controlled procedures than newer, user-generated data. However, with the introduction of robust processes and technologies that enable efficient checking of large amounts of data against defined levels of quality, there is no reason why crowd-sourced data can't be used to supplement and enrich traditional data sources. Our technology supports this kind of data integration, enabling users to cherry-pick the data they need, irrespective of the data source. Over time it will be the business model that will ultimately drive and support the value of the data.
Metadata is an issue for all data sets; it is still a fairly manual and
labour intensive (maybe even boring) task. If we can convince all orga-
nizations to maintain extensive, complete, standards-based metadata,
irrespective of their method of data capture, then maybe those data
will be trustworthy and become the gold standard data.
Q: In the same article as mentioned above, you state that the
future direction of SDIs in Europe will change as a result of
data transformation activities that are needed to achieve
INSPIRE interoperability and effective data reuse. Essentially
the data will be in the form of common object models, which
in turn will lead to a new generation of applications and
solution providers in Europe. What kind of applications and
solution providers are you addressing here? Also, do you
see a role for web services and consumer-type applications,
so that people can easily access data sets and combine them?
SR: The new generation of applications and solution providers will be
those who have the capability and flexibility to work directly with the
object models emerging from INSPIRE, as well as data arising from gov-
ernment initiatives to enable PSI (public sector information) reuse.
If you look at the Cadastral Parcels data theme for INSPIRE, then this
refers typically to data linked to land registration activities in EU Member
States. There are companies emerging all the time who are finding
unique ways to take location data that is being made freely available
and reuse it to create new business models around the land registra-
tion area, for example in Norway where Statens kartverk made a recent
announcement in this area regarding freely available data.
A number of activities are already underway to provide greater access to
data sets across Europe. For example, the British Government has
engaged Sir Tim Berners-Lee, the inventor of the World Wide Web, to
promote and support an open data initiative: a government website being used to drive free access to public sector data for reuse; this also includes geospatial data. At a TED conference almost one
year ago Sir Tim chose OpenStreetMap (OSM) to get his point across
about the benefits of sharing data and used OSM to highlight the power
and prevalence of geospatial data, since geospatial data is a large ele-
ment of public sector data.
Berners-Lee is actively promoting the concept of linked data, which is
linked to the Semantic Web and relates to how the Internet can be used
to expose, share and connect data. On 10th March, Ordnance Survey Great Britain is hosting a free seminar on this topic, called Terra Future 2010. It will highlight the important role geographic information has to
play in the development of linked data over the Web. 1Spatial is sup-
porting this event.
Once more work is done to free up the data (and also ensure its quality)
then users will be able to create a wide range of applications and com-
bine or aggregate them from multiple different sources, as well as share
them across the Web. This is also an area where standards from the Open
Geospatial Consortium will be important, but thats another long discus-
sion topic in its own right!
Q: In January 2010, 1Spatial presented a co-authored paper
at Map India 2010, entitled SDI Considerations: A European
Perspective, in which parallels are drawn between the
activities in Europe and India. Could you explain which
parallels you were referring to and what lessons learned
could be of use to India?
SR: Most countries have national, regional and local electronic mapping data collected over many years. While these data were historically acceptable for producing paper maps, their usability is being tested by a new generation of web and enterprise-based, location-specific services that require spatial data to be current, precise and interoperable, internally and with data from other organizations. This is the foundation of Spatial Data Infrastructures (SDIs) today: providing improved access to, and sharing of, spatial data across the Web. Based on our collective experience in Europe, we (1Spatial and RMSI) decided to co-author
a paper (Karandikar, Ramage and Van Linden, Map India 2009).
There are several aspects that are common across most SDIs and those
are institutional, operational and technical challenges. Fundamental
challenges that still remain include the willingness to share data and
the capability for sharing geospatial data with all stakeholders. This
exchange of data can be referred to as part of the geospatial data sup-
ply chain, i.e. from those organizations involved in the data capture
and data maintenance stages through to data sharing, subsequent use
and feedback on currency and quality. The basis of sharing is gold standard data, and India is no stranger to these frameworks, having been intimately involved in one of the greatest surveys ever conducted, the Great Arc:
"The Survey has played an invaluable role in the saga of India's nation building. In spite of sophisticated technology now becoming available, the accuracy of its measurements remains undisputed."
India is one of the BRIC (Brazil, Russia, India and China) economies; in 2003 Goldman Sachs predicted that India's Gross Domestic Product (GDP) in current prices would overtake France and Italy by 2020, Germany, the United Kingdom and Russia by 2025, and Japan by 2035. By 2035, India is projected to be the third largest economy in the world, behind the US and China.
With such growth predicted, it makes sense for India to observe good practice from other parts of the world where the economies are already
well developed. With 28 States and a number of international borders,
India could be compared to Europe from that perspective, since there
are 27 EU Member States and even more borders. With regions of India
the same size or larger than some European countries, there are obvious
similarities between the NSDI of India and INSPIRE, i.e. both are trying
to share valuable geospatial information across a wide, dispersed geo-
graphical area.
From this perspective an understanding of the European Spatial Data
Infrastructure Network (ESDIN) project can highlight industry good prac-
tice around INSPIRE and associated data management challenges. As a
result of a successfully delivered project by 1Spatial and RMSI, there are
also lessons to learn about the effective management of land registra-
tion in Europe. Our award-winning solution at Property Registration
Authority Ireland also provides ideas for good practice around spatial
infrastructure management and how to take control of data for real eco-
nomic development.
Q: In November 2009, 1Spatial celebrated its 40th anniver-
sary. Looking to the future, how do you see the year ahead
for 1Spatial in 2010, and what are your predictions for data
integration, quality and harmonization in general?
SR: 2010 is going to be an exciting year for 1Spatial. It will see the deliv-
ery of five key projects into production environments that will clearly
demonstrate our unique object-oriented and rules-based technologies for
geospatial data processing to help control data and achieve new levels
of operational efficiency. All of these projects are based on our open
enterprise architecture approach and will prove the benefits of our solu-
tions at each stage of the supply chain.
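The rules-based approach described here can be pictured as a set of declarative predicates evaluated against every feature, with violations collected into a quality report. The following Python toy is only an illustration of that general idea, with made-up feature structures and rule names; it is not 1Spatial's rule language or the Radius Studio API:

```python
# Toy rules-based QA sketch (hypothetical structures, not 1Spatial's API).
def no_repeated_vertices(feat):
    """Crude geometry check: every vertex in the ring must be unique."""
    pts = feat["geometry"]
    return len(pts) == len(set(pts))

def has_classification(feat):
    """Attribute completeness check: a 'class' value must be present."""
    return bool(feat["attributes"].get("class"))

# Rules are declared as (description, predicate) pairs.
RULES = [("geometry must not repeat vertices", no_repeated_vertices),
         ("feature must carry a classification", has_classification)]

def audit(features):
    """Return (feature id, violated rule) pairs -- the QA report."""
    return [(f["id"], name)
            for f in features
            for name, rule in RULES if not rule(f)]
```

Running `audit` over a feature set yields the non-conformance report that a downstream flowline could then route to repair or rejection.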
At Ordnance Survey Great Britain (OSGB), the Geospatial Data
Management System will enter into production this year. 1Spatial designed
and implemented the data management environment, the transaction
management environment, the data maintenance environment and the
data lifecycle management environment. We are delivering a series of
components, orchestrated by Business Process Execution Language
(BPEL) that will enable OSGB to respond to real-world changes as effi-
ciently as possible, while maintaining their traditional quality measures.
In Malaysia, two key flowlines will go into
production. These have introduced our
rules-based data quality improvement
and automated generalization technolo-
gies into the Department of Survey and
Mapping (JUPEM). These flowlines will
underpin JUPEM's revolutionary approach
to providing digital mapping on demand.
In Australia, we will further our partner-
ship with PSMA to automate the supply
chain activities that integrate state level
datasets into nationwide digital data
products. PSMA Australia won the APSEA
Innovation & Commercialisation Award
for their joint project with 1Spatial for
technical advances in processing conti-
nental datasets. This project, utilizing 1Spatial's data integration and qual-
ity assurance platform Radius Studio, was deemed groundbreaking for
both the international and domestic spatial communities.
In Scotland, 1Spatial, working with IDOX, will complete the delivery of the
third in a series of projects to provide an ePlanning Infrastructure. The
Online Development Planning system (OLDP) will introduce a groundbreak-
ing approach to knowledge access, enabling plans to be designed interac-
tively, linking user-specified map features to documents and legislation.
Finally, as mentioned previously, 1Spatial will deliver the key technical
guidance document for the Transformation Network Service, a key compo-
nent in the INSPIRE programme. A prototype will be built that will vali-
date this technical guidance. This contract will be a major contribution to
the European geospatial community, effectively delivering the necessary
technical guidance to help institutions in European Member States put
into operation transformation services that are consistent with the INSPIRE
Implementing Rules. The consortium is being led by RSW Geomatics in
conjunction with 1Spatial and Rob Walker Consultancy, all based in
Cambridge, UK. The contract was awarded by the Institute for Environment
and Sustainability at the European Commission's Joint Research Centre.
1Spatial is an innovator in the field of knowledge engineering. We believe
that 2010 will be the year when our efforts, energy and leadership in this
area will come to fruition. We predict that by the end of 2010, rules engines,
object processing and object relational mapping will be seen as the key
enablers in making geospatial data truly operational and adding value
across a wide range of organizations.
Eric van Rees is editor in chief of GeoInformatics.
For more information,
have a look at
Fig. 2. Knowledge engineering across the supply chain
Trimble's Mobile Mapping Technology
Belgian Road Sign
Inventory Project
Flemish Region - Department of Mobility and Public Works (MOW) recently initiated a sizeable road sign inventory project.
The goal of the project is to improve service and maintenance of speed limit, directional, informational, and priority signs
in order to achieve high levels of public safety while also watching costs. Seeking innovative solutions,
the Department accepted a proposal to use mobile mapping technologies. The vehicle-based mobile mapping
system from Trimble is streamlining the project.
By Rebecca Muhlenkort
The country of Belgium has a rich history characterized by diverse
cultures and a strong sense of community that can be organized around
three population groups and languages: Dutch, French and German.
Specifically, the Flemish Region of Belgium is the territorial entity that
comprises the Dutch speaking areas of the country. Within that territo-
ry, Flemish Region - Department of Mobility and Public Works (in Dutch:
Vlaamse Overheid - Departement Mobiliteit en Openbare Werken or
MOW) focuses on all issues related to transportation and infrastructure.
MOW recently initiated a sizeable road sign inventory project with the
goal of improving service and maintenance of speed limit, directional,
informational, and priority signs in order to achieve high levels of pub-
lic safety while also watching costs. The massive project area spans
five provinces and a distance of 5,150 kilometers (3,200 miles).
Mobile Mapping Technologies
Seeking innovative solutions, MOW accepted GeoInvent's proposal to use
mobile mapping technologies. GeoInvent is a European mobile mapping
services company and data producer that provides multifunctional and
multipurpose solutions for the creation of high accuracy spatial invento-
ries. Carl Deroanne, Sales Manager at GeoInvent, and his team recom-
mended a road sign detection and recognition solution that includes a
vehicle-based asset inventory system from the Trimble GeoSpatial Division.
Given the scale and high accuracy requirements of the inventory project,
more traditional manual based survey methods that solely utilize Global
Navigation Satellite System (GNSS) technology and Geographic
Information System (GIS) software were dismissed. According to C.
Deroanne, these methodologies are often plagued with high traffic man-
agement costs, lengthy timelines, as well as increased safety risks for
road crews and field surveyors on the highways.
The Trimble Mobile Data Capture System utilizes advanced photogram-
metric techniques and integrated sensors to achieve georeferenced data
capture, extraction, and analysis. Sensors include high resolution digital
cameras, laser scanners, and positioning systems comprised of GNSS,
inertial, and Distance Measurement Indicator (DMI) systems. The position-
ing systems combine to ensure accurate determination of the vehicle loca-
tion and orientation at all times.

GeoInvent Système d'Acquisition Mobile (SAM), Mobile Mapping System.

Survey images and data are then transferred
to the Trimble Trident-3D Analyst software package where analysts
can extract and analyze asset inventory information for further measure-
ment, analysis and export. In order to survey the major roads located in
the Flemish Region of Belgium, two vans were outfitted with this equipment.
The survey process was managed by approximately 50 people ranging
from data acquisition software operators and drivers to data extraction
analysts, road sign designers, programmers and field surveyors.
The mobile mapping methodology applied to this project was organized
around several phases:
1. Itinerary Planning
At the initial phase of the project, GeoInvent's team compiled a list of
the sections of roads to be surveyed, spanning five provinces and 23
districts across Belgium. Divided up by kilometer markers, this data
was then transferred on to a map to be used in the vehicle by field
survey teams.
2. Mobile Mapping Survey
Mobile mapping survey operations consisted of the collection of inte-
grated imagery, laser scanning data, and positioning information, using
the Trimble Mobile Data Capture Systems. These systems underwent a
series of calibrations in order to guarantee precision and accuracy of
the data collected. Two vans, each having a driver and an operator,
were able to collect the necessary inventory data by surveying approxi-
mately 50 to 100 kilometers (31 to 62 miles) a day while traveling at
traffic speeds. During data collection, the Trimble Trident-3D Capture
software triggered the image and laser scanner capture process and
attached synchronized georeferenced locations from the positioning
system at either fixed distance or time intervals. The operator was
responsible for starting and stopping data capture, verifying the navi-
gation data, adjusting the image quality, providing directions, and per-
forming data backups as needed.
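The trigger logic described in this phase, firing the sensors at either a fixed distance (from the DMI odometer) or a fixed time interval and stamping each capture with a synchronized position, might be sketched roughly as follows. This is a simplified illustration with hypothetical class and field names, not the Trimble Trident-3D Capture interface:

```python
from dataclasses import dataclass

@dataclass
class Fix:
    """A synchronized GNSS/inertial/DMI solution (hypothetical structure)."""
    time_s: float       # seconds since start of the run
    odometer_m: float   # distance travelled, from the DMI
    lat: float
    lon: float

class CaptureTrigger:
    """Fire a capture event every `interval` metres or seconds."""
    def __init__(self, interval: float, mode: str = "distance"):
        assert mode in ("distance", "time")
        self.interval, self.mode = interval, mode
        self._last = None
        self.frames = []    # (trigger value, georeference) per capture

    def update(self, fix: Fix) -> bool:
        """Call for every positioning fix; True means 'capture now'."""
        value = fix.odometer_m if self.mode == "distance" else fix.time_s
        if self._last is None or value - self._last >= self.interval:
            self._last = value
            self.frames.append((value, (fix.lat, fix.lon)))
            return True     # fire cameras / mark a scanner block here
        return False
```

Every image thereby carries the vehicle position at capture time, which is what allows it to be matched later with the laser data in the office.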
3. Detection using Laser Scanning Automation
During the mobile mapping survey, data was collected from laser scan-
ners that emitted beams that were reflected back to the laser scan-
ner once they came into contact with objects such as road signs. Back
in the office, Trimble Trident-3D Analyst detects the road signs found
in the laser scanner point cloud using a batch process that works
with the reflectivity of the road signs' reflective film and customizable
software filters. From this, the software is able to calculate the 3D
coordinates of the signs, as well as measure all of the objects in three
dimensions; calculating height, width, surface area, perimeter and any
other measurement that can be performed on the asset. These mea-
surements are then transferred to the appropriate attribute field,
based on the previously built database (e.g. directional sign, speed
sign, etc.).
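A crude sketch of this reflectivity-driven detection step: keep only the highly reflective returns, group them, and measure each group's extent. The grid-based grouping and the thresholds below are placeholder assumptions, far simpler than the customizable filters in Trident-3D Analyst:

```python
import numpy as np

def detect_signs(points: np.ndarray, intensity: np.ndarray,
                 min_reflect: float = 0.8, grid: float = 0.5):
    """Crude sign detector: keep highly reflective returns, then group
    them by rounding to a coarse 3D grid (a stand-in for proper
    clustering). Returns centre and bounding-box size per group."""
    keep = points[intensity >= min_reflect]     # reflective film only
    cells = np.round(keep / grid).astype(int)   # coarse spatial grouping
    signs = []
    for cell in np.unique(cells, axis=0):
        cluster = keep[(cells == cell).all(axis=1)]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)
        signs.append({"centre": cluster.mean(axis=0),
                      "width_m": hi[0] - lo[0],     # extent along x
                      "height_m": hi[2] - lo[2]})   # extent along z
    return signs
```

The same bounding-box arithmetic is what yields the height, width, surface area and perimeter attributes mentioned above once the cluster is known to be a sign.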
4. Photogrammetric Extraction
Due to MOW's extensive project requirements, GeoInvent also verified
the road sign data using the georeferenced imagery collected during
the mobile mapping survey. During this process, any assets that were
missed by the laser scanners, either because they were obscured by
trees or debris or because the signs were missing reflective film, were
added to the database by a team of analysts using the photogrammetric
capabilities in Trimble Trident-3D Analyst. In total, 350,000 road
signs are in the process of being inventoried using the mobile map-
ping system.
Roadway map with attributes
5. Field Survey
Using field survey techniques, GeoInvent also continues to gather spe-
cific manufacturer data from road signs, including information stored
on a coded sticker on the back of the signs. By combining the ele-
ments collected with the Trimble Mobile Data Capture System and the
field survey, the team will be able to produce a complete inventory of
its mapping area including: road sign XY coordinates, quantity of signs
per section, orientation, dimensions, code, reflective film type, manu-
facturer data and date, as well as the sign's height above ground, the
shape and size of the support used, digital photos and videos, the
roadside condition, and the road name.
6. Road Sign Design
After the mobile mapping and field surveys are complete and the data
is processed, each road sign can be redesigned as needed using CAD
software. Designers utilize the photos collected from the mobile map-
ping system as well as the supporting images taken by field surveyors
to more efficiently and effectively develop new signage.
7. Linear Referencing and Printing
The team then uses GIS software to calculate the linear coordinates of
all signs, display, and produce linework (or place geographic symbols
for the road signage on the maps). Finally, once a detailed list is com-
piled, the maps can be printed.
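Computing a sign's linear coordinate means projecting its XY position onto the road centerline and measuring the distance along the line to that projection point. GIS packages provide this as built-in linear referencing; a minimal planar version can be sketched as:

```python
import math

def chainage(polyline, px, py):
    """Linear coordinate of point (px, py): distance along the polyline
    to the nearest point on it. Planar coordinates are assumed."""
    best = (float("inf"), 0.0)   # (squared offset, chainage so far)
    run = 0.0                    # distance accumulated along the line
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg = math.hypot(dx, dy)
        # parameter of the perpendicular foot, clamped to the segment
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg**2))
        fx, fy = x1 + t * dx, y1 + t * dy
        d2 = (px - fx) ** 2 + (py - fy) ** 2
        if d2 < best[0]:
            best = (d2, run + t * seg)
        run += seg
    return best[1]
```

Given projected coordinates for the centerline and the sign, the result is the kilometer-marker style position used to place the sign symbol on the printed maps.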
Using these advanced asset inventory methodologies in combination
with the Trimble Mobile Data Capture System, 350,000 road signs are
being successfully inventoried. Highly accurate inventory information
that is being produced is absolutely critical in the effective manage-
ment of the regions transportation networks. In addition, precise asset
data is one of the most effective tools for accelerating the repair and
replacement of signs in the event of an accident, a top priority for all
infrastructure departments.
Saving Time
This project is the first large-scale semi-automated road sign inventory
plan in Europe. In fact, the automated sign detection function within
Trimble Trident-3D Analyst software detected 95 percent of the signs,
with less than 5 percent false detections. GeoInvent believes
that the flexibility of sensors (GNSS/inertial/DMI, cameras, and laser
scanners) that are integrated in the Trimble system guarantees higher
levels of data compliance with a diverse range of client projects and
information systems.
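As a rough back-of-envelope reading of those figures (assuming the 95 percent applies to the 350,000 signs and the 5 percent to the automatic detections; the article does not spell this out):

```python
# Back-of-envelope workload split implied by the quoted detection rates.
total_signs = 350_000
detection_rate = 0.95   # share of real signs found automatically (assumed basis)
false_rate = 0.05       # upper bound on spurious detections (assumed basis)

auto_found = int(total_signs * detection_rate)   # signs found automatically
manual_backlog = total_signs - auto_found        # left for photogrammetric analysts
spurious_max = int(auto_found * false_rate)      # detections to review and reject

print(auto_found, manual_backlog, spurious_max)  # 332500 17500 16625
```

In other words, automation leaves roughly 17,500 of the 350,000 signs to be added manually, which is consistent with the photogrammetric extraction phase described above.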
In addition, because road asset inventory data that has been collected
is reliable, accurate and detailed, including graphic representation and
descriptive attributes, it can be used for future infrastructure projects
as well. This advantage is expected to continue to drive the adoption
of mobile mapping technologies, because it means that asset inventory
data can be repackaged and repurposed for multiple projects. That
benefit greatly extends the value of applying geotechnologies and
photogrammetric techniques in asset management projects, because it
has the potential to save time and money over the lifespan of
infrastructure investments, lowering the total cost of ownership of
inventory data and helping clients manage project costs and budgets
more effectively.
Rebecca Muhlenkort, freelance writer. For more information,
have a look at
Roadway map with sign locations
Multi-purpose Land Administration Systems
This paper explores the role of land registers and cadastres in supporting measures that aim at adapting to and mitigating
climate change. To that end, the paper provides a brief introduction to climate change in general and then continues by
analyzing the role of housing, land use, land-use change and forestry with respect to carbon storage and emission
reduction. To promote carbon sequestration and emission reduction, land policy and associated land instruments such
as market regulation, land use planning, land taxation and land reform should include climate-proof goals.
By Paul van der Molen
As climate change affects the livelihoods of peo-
ple on earth, it is most likely that land and
houses will play a role in adapting to and miti-
gating climate change. This paper first aims to
use desk research to identify the role of land
and houses. Then, the elements of such adap-
tation and mitigation are explored, to identify
the role of land owners, land users and land
managers (using policy reports and scientific lit-
erature). Finally, based on the author's earlier
papers (see, some explorative
research is pursued to identify the role of land
registers and cadastres in adapting to and mit-
igating climate change. As far as the author is
aware, this area still represents a wide gap in
our knowledge.
Climate Change in General
The regular Synthesis Reports of the
Intergovernmental Panel on Climate Change
(IPCC) provide observations and analyses con-
cerning (a) changes in climate regardless of
their causes, (b) an assessment of such caus-
es and (c) a projection of future climate change.
The latest report (2007) states that warming of
the climate system is unequivocal, as is now
evident from observations of increases in global
average air and ocean temperatures, widespread
melting of snow and ice, and the rising of the
global average sea level. As a
rough estimate, this could result in more pre-
cipitation in the north, more droughts in the
south, fewer cold days, more hot days, heat
waves and higher sea levels. As a secondary
effect, the IPCC expects many natural systems
to be affected, such as glacial lakes, early spring
events, bird migration, and shifts in plant and
animal species towards the polar regions, salin-
ity and earlier greening of vegetation. Various
scenarios show the impact on human systems
such as crop productivity, coastal zones, flood
plains, health, industry and settlements prone
to extreme weather events and drought.

Land registers and cadastres have a role to play in supporting governments and citizens in their efforts at mitigating climate change and trying to adapt to its impact.
More specifically, Africa is expected to be
exposed to increased water stress, reduced
rain-fed agriculture, affected low-lying coastal
areas and diminished access to food. Asia is
expected to suffer from decreased availability
of fresh water, higher risk for delta areas and
pressure on natural resources. Europe is expect-
ed to be faced with floods and erosion, glacier
retreat, reduced availability of water, worse
weather conditions in the south, and increased
health risks because of heat waves and wild-
fires. The Americas are expected to be prone to
gradual replacement of tropical forests by
savannah, loss of biodiversity, decreased live-
stock and crop production, less precipitation,
heat waves in the north and increased rain-fed
agriculture. Cereal productivity is expected to
increase at mid and high latitudes and to
decrease in lower latitudes, which has a nega-
tive impact on food security and the livelihoods
of small farmers and fisheries.
The drivers for climate change appear to be
both natural and anthropogenic. One example
of a natural driver is solar radiation.
Anthropogenic drivers include greenhouse gas
emissions from human activities. The IPCC
reports that the global increase of carbon diox-
ide (CO2) is due to fossil fuel use and changes
in land use. Global increases in methane levels
(CH4) are very likely due to agriculture and fos-
sil fuel combustion. The increase in nitrous
oxide (N2O) is primarily due to agriculture.
A special report published by the IPCC (2000)
discusses how different land use and forestry
activities affect carbon stocks and greenhouse
gas emissions. Carbon is retained in live
biomass, in organic matter and in the soil.
When human interventions lead to changes in
live biomass, land use and forestry, the carbon
stock also changes, which in turn influences the
global carbon cycle. For example, the report
reveals that substantial amounts of carbon have
been released when forests were cleared.
Greenhouse gas emissions occur as a result of
restoration of wetlands, biomass burning and
fossil fuel combustion, intensive tillage, fertil-
ization of lands and forests, rice cultivation and
enteric fermentation.
Kyoto Protocol
In Article 3.1 of the Kyoto Protocol, parties agreed to limit and
reduce their greenhouse gas emissions between 2008 and 2012.
Furthermore, countries that signed the Protocol can use afforestation,
reforestation and deforestation as potential contributors to the
reduction of emissions (Article 3.3). The same applies explicitly to
measures regarding land use, land-use change and forestry (Article
3.7). This aspect is where we find the link to discuss the role of
cadastres in climate change, as managing lands and forests requires an
active land policy, instruments to implement such policy, and land
tools to facilitate government intervention in private and public
rights to land.

Role of Land Use, Land-use Change and Land Management
The UN Food and Agriculture Organization (FAO) states in its
publication Climate Change and Food Production (2008) that sustainable
agricultural production plays a role in adapting to and mitigating the
impact of climate change, because agriculture (a) is an important
emitter of greenhouse gases, (b) has the highest potential for
reducing emissions through carbon stocks and (c) is the sector most
affected by climate change. FAO is well aware that expanding biofuel
production is likely to lead to greater competition for access to
land. This requires sound land tenure policies and land-use planning;
otherwise, the livelihoods of farmers, pastoralists, fishermen and
forest dwellers without formal land tenure rights will be at risk.
Greater land tenure security is a precondition for applying various
mitigation and adaptation measures.

A study by the International Institute for Environment and Development
(IIED, 2008) elaborates the relation between the two, revealing that
the accelerating expansion of bio-ethanol and bio-diesel production
might offer opportunities for small-scale farmers by revitalizing land
use in rural areas and increasing both yields and incomes. However,
both would depend on land tenure security. Large-scale biofuel
production also might provide employment, skills development and
secondary industry, creating potential for long-term poverty
reduction. To achieve such results, the IIED advises establishing land
policy frameworks that give clearer definitions of concepts of idle,
under-utilized, barren, unproductive, degraded, abandoned and marginal
lands, in order to avoid land allocation to large-scale biofuel
industries to the disadvantage of local livelihoods. Existing land
tenure patterns should be recognized and implemented within a broader
framework of taxation, subsidies, markets and trade.

Research (e.g. Rothamsted, 2005) demonstrates that sound land
management results in lower greenhouse gas emissions from all links in
the food chain, provides carbon sequestration in soil and vegetation,
and replaces fossil fuels with renewable bio-energy crops. Pfister et
al. (2004) discuss the relations between climate change, land-use
change and run-off predictions in the Rhine and Meuse river basins.
The research concerns the influence that changes in land use had on
the hydrological subsystem, which interacts with the climate system.
They found that, in general, field drainage, wetland loss and
urbanization result in more rapid downstream transmission of flood
waves and less floodplain storage. There was no evidence that land-use
changes affected flood frequency and flood magnitude. Whether changes
in the hydrology of the Rhine and Meuse were more strongly influenced
by climate change than by land-use change appeared to be difficult to
say.

Europe is expected to be faced with floods and erosion, glacier
retreat, reduced availability of water, worse weather conditions in
the south, and increased health risks because of heat waves and
wildfires.
Similarly, Juckem et al. (2008) investigate the
effect of land-use change in the Driftless Area
in Wisconsin. Although the increase in precipitation
was significantly higher than in other water-
shed areas, they argue that the changes were
likely linked to changes in soil properties
resulting from agricultural land management practices.
Research by Eve et al. (2002) explains the
background to removing CO2 from the atmo-
sphere by growing plants that are able to store
organic carbon in the soil. The paper shows
that under the US Conservation Reserve
Program about 13 million hectares of highly
erodible croplands were taken out of agricul-
tural production by planting them back to
grass or trees. Because the soil is then not
disturbed and biomass is not removed, these
soils have shown an increase in carbon storage.
Adopting reduced tillage also resulted in
increased soil carbon storage because the soil
is less disturbed, even more so for no-till
land use.
Fertilization by using organic manure also
enhances carbon storage in the soil, because
of both the carbon content of the manure and
the increase in biomass production. Eve's paper
concludes that there is a net effect of land use
and management changes on agricultural lands
resulting in an increase of soil carbon storage.
Cowie et al. (2007) see potential synergies
between existing multilateral environmental
agreements and the implementation of land-
use change and land management to adapt to
and mitigate climate change. The basic idea is
that land-use change and land management
can be used to increase the terrestrial carbon
pool, which at the same time contributes to
the Biodiversity Convention (CBD) and the
Desertification Convention (UNCCD). Measures
taken into account in this study include con-
version from conventional cropping to reduced
tillage, manuring, rotation, irrigation, bio-crops,
plantations, new forests and reforestation,
which appear to reduce greenhouse gas
emissions while also benefiting biodiversity
and countering desertification. The paper
concludes that good land management is
necessary in order to manage forests, cropping
and grazing systems and biofuel production,
and that as long as land managers simply
respond to current market demands, the
environmental externalities are not acknowledged.
The land tenure problem regarding carbon
sequestration becomes manifest in Unruh
(2008/9). This research shows that sequestering
large quantities of atmospheric carbon through
woody biomass increment via tree planting
projects in the tropics has impressive potential.
However, afforestation and reforestation projects
have to be initiated by governments that often
have little to say in areas outside the urban
sphere, because the Western notion of property
rights and land law is often limited to those
particular parts of the country. In remote and
rural areas, customary land management prevails
and is rarely overruled by statutory land tenure
arrangements. Unruh argues that there are five
main obstacles for such projects, namely (1)
the land tenure disconnect between customary
and statutory land rights, (2) legal pluralism,
(3) tree planting as land claim, (4) the function-
ing of treed area expansion in smallholder land-
use systems and (5) the abandoned land prob-
lem. Tree planting projects require improved
governance, which assumes a single land law
for the entire population, through which the
land rights of customary land holders can be
guaranteed. The literature reveals that this is
hardly a realistic way forward, as governments
often neglect the land rights of customary
peoples, and the poor often need to be
protected against the government. Furthermore,
tree planting in Africa often signifies a land
claim, so that tree planting projects are
perceived by local communities as unfair and
unjustified land claims by the government,
conflicting with their own land rights. Unruh
asks whether, given the land tenure obstacles
to the afforestation and reforestation approach,
it will be possible to realize sequestration goals
within the required time.
Harper et al. (2007) investigate the potential of greenhouse sinks to
underwrite improved land management in Western Australia. The problem
is that Australia is faced with salinization of land and water
resources, recurrent wind and water erosion of both cultivated
agricultural lands and rangeland, and the prospect of continued
climate change due to increases in the concentration of greenhouse
gases in the atmosphere. There might be opportunities for the land
management sector arising from greenhouse gas abatement, and in
particular from the development of carbon sinks as a result of
land-use change. The carbon storage can be used to fulfill the Kyoto
obligations and opens opportunities for trading in emission
reductions. The research investigates the possibilities of carbon
farming by planting trees and shrubs on (private) farmland and by
de-stocking rangeland. Carbon farming requires a title, which is made
possible under the Australian Carbon Right legislation of 2003. This
legislation establishes a title for the carbon in a sink, separate
from that of the land, which provides a legal base for ownership and
trading. These carbon credit titles are treated like property titles,
so they also need to be registered. Measures to materialize the
potential of carbon sinks include reforestation, grazing land
management, cropland management and re-vegetation.

Land registers and cadastres have to extend their function beyond
the conventional use for land markets and land taxation.

Role of Houses and Spatial Planning
According to the IPCC (2007), the largest growth in greenhouse gas
emissions between 1970 and 2004 has come from energy supply, transport
and industry. In addition to the land sector, the urban environment
therefore also needs attention. About 30-40% of total energy
consumption in western countries is assigned to buildings, and about
50% of that refers to energy consumption for indoor air conditioning,
i.e. heating and cooling (Pulselli et al., 2009). Regarding the
effects of climate change on the built environment, Roberts (2008)
clarifies that buildings play an important role in both adaptation and
mitigation. Modern building design includes low carbon running costs
while maintaining comfort. Super-insulation, high-performance windows,
heat recovery systems and thermal storage are to be included in
climate-proof design principles. Hamza et al. (2009) report on the
role of building regulations in the UK, which were originally
introduced to safeguard public health and safety but, after revision,
are now seen as a tool for limiting the environmental impact of the
built environment on natural resources. Regarding adaptation to the
effects of climate change, the construction of buildings that are
resistant to weather extremes like flooding and storms requires not
only new construction methods, but also land-use planning that
allocates building construction at the right location (Roberts, 2008).
While various sectors of society, like the transportation, housing and
agricultural sectors, have a role in finding solutions for climate
change, the coordinating mechanism is still spatial planning,
especially at the local level (Biesbroek et al., 2008). That explains
the role of local (or sub-national) governments, as they have control
over areas that crucially affect greenhouse emissions, such as
transportation, energy use, land-use regulation and environmental
education (Puppim, 2008). The role of spatial planning is even more
important as the reduction of transport-related emissions has a direct
relationship with higher density of land use, which results in less
transport activity for both passengers and freight (Grazi et al.,
2008). In order to monitor energy use, several countries have
introduced environmental rating of buildings. As more than 80% of the
energy used in households is dedicated to space heating, large savings
are expected to be gained in the housing stock. Sweden investigates
both an external and an internal factor (Malmqvist et al., 2009),
while Denmark, Belgium, the Netherlands and Germany publish so-called
energy labels in order to create awareness among the populace
concerning the energy use of houses and potential savings. That energy
labeling is not an immediate success is revealed by an investigation
in Denmark, where no significant energy savings were found despite
this being the main goal of the Danish Energy Labelling Scheme
(Kjærby, 2008), and by an investigation by a national real estate
agent association (VBO) in the Netherlands, which revealed that only
38% of house buyers paid attention to whether an energy label was
available for the property they were interested in (Dutch News, 30
January 2009).
Mitigation of and Adaptation to Climate Change
The Kyoto Protocol requires societies to
respond to climate change by reducing green-
house gas emissions (mitigation) and coping
with the changes (adaptation). The IPCC report
specifically summarizes various options.
Regarding mitigation measures related to land
and housing, the report suggests e.g. increased
production and use of biofuels, reduction of
transport needs by means of climate-proof land-
use planning, energy-efficient houses and com-
mercial buildings by the establishment of ener-
gy labeling and building codes, land
management to increase soil carbon storage,
restoration of degraded lands, application of
cultivation methods that improve carbon
sequestration (such as improved rice cultivation,
livestock and manure management), better for-
est management and better land-use manage-
ment. Regarding adaptation measures, the
report suggests e.g. expanded rainwater har-
vesting, water storage, crop variety, improved
land management to achieve erosion control
and soil protection, the construction of seawalls
and storm barriers, dune reinforcement, and land
acquisition and creation of marshlands and
wetlands as a buffer against sea level rise.
Concerning the underlying policy framework,
the report refers to institutional reform, land
tenure and land reform, capacity building, inte-
grated land-use planning, building codes, and
national water policies.
Carbon Credits Market
Articles 3.3 and 3.4 of the Kyoto Protocol pro-
vide for the use of greenhouse sinks (carbon
sequestration in soils and vegetation) to be
used by countries to fulfill their obligation to
reduce greenhouse gases. Articles 6, 12 and 17
establish a market for trading assigned emis-
sion credits. This is known as the compliance
market, structured to facilitate the trade in
emission rights, based on cooperation with
developing countries in carbon sequestration
projects (Clean Development Mechanism).
Article 17 allows countries that have assigned
emission units to spare to sell their surplus
credits to countries that are over their targets.
Since carbon dioxide is the principal green-
house gas, people speak simply of trading car-
bon (UNFCCC website, accessed 30-9-2008).
The Dutch government, for example, under the
Clean Development Mechanism (CDM) of the
Kyoto Protocol and the EU Emission Trading
Scheme (EU-ETS), has a portfolio of 28 projects
in 11 different countries, consisting of various
energy technologies such as wind power pro-
duction, methane gas recovery and biofuel pro-
duction; the total contracted volume is 17.4 mil-
lion tons of carbon dioxide equivalent
(SenterNovem website, accessed 7-11-2008).
The government even created a supervisory
authority for emissions trading: the Dutch
Emissions Authority (NEA).
Apart from the compliance market, a retail off-
set market has also emerged, with a focus on
voluntary participation by parties not bound by
specific caps or regulations. Greenhouse gas
emissions can be offset by investing in projects
that provide emission reductions elsewhere;
critically, the voluntary market is still
unregulated in that it has no market standard
(Harris, 2007).
Here we observe the creation of a new
commodity, in line with the research on land
markets (Wallace et al., 2006a, 2006b), which
describes how land markets increasingly include
more complex commodities. In the carbon credit
case, this concerns a new commodity in the
form of emission reductions or removals.
This leads to opportunities for measures such
as carbon farming (Harper et al., 2007) to
generate tradable carbon credits through, in the
Australian case, reduction of livestock density,
removal of wild grazing animals such as goats
and rabbits, conversion from cropping to grazing,
conversion from conventional to no-till cropping,
re-vegetation (trees, fodder shrubs) and forestry
development. In this situation, marketing carbon
credits requires a title for a carbon sink, which
is separate from the property title for the land
(unbundling of property rights), and which might
also require registration.
To date, it is recognized that transactions in vol-
untary carbon credits such as occur in Australia,
Europe and North America are not formally
recorded. As cited earlier, Harris (2007)
considers the voluntary retail market to be
unregulated; in order to increase market integrity
and to prevent emission rights from being sold
more than once, formal registration should be
implemented. Aside from the credibility gained,
such registration could make the market more
fungible.
It is remarkable that Harris refers to existing
registers such as Triodos Bank's Climate
Clearinghouse register, the Greenhouse Gases
Register of the Environmental Resources Trust
(ERT), and a register managed by the Bank of
New York, while existing land administration
systems could so easily adopt such carbon
credit rights in their registers.
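The registration logic argued for here — recording each carbon credit title exactly once, so an emission right cannot be sold twice — can be illustrated with a minimal sketch. All names (the class, title identifiers, parties) are hypothetical and do not correspond to any existing registry's system:

```python
# Toy carbon credit register: each title is recorded once, so an emission
# right cannot be sold more than once. All names here are hypothetical.
class CarbonRegister:
    def __init__(self):
        self._titles = {}  # title_id -> current holder

    def register(self, title_id, holder):
        # First registration establishes the title; re-registering is refused.
        if title_id in self._titles:
            raise ValueError(f"title {title_id} is already registered")
        self._titles[title_id] = holder

    def transfer(self, title_id, buyer):
        # A transfer only updates an existing entry, never creates one.
        if title_id not in self._titles:
            raise KeyError(f"title {title_id} is not on the register")
        self._titles[title_id] = buyer

    def holder(self, title_id):
        return self._titles[title_id]


reg = CarbonRegister()
reg.register("AU-SINK-001", "Farm A")     # sink title, unbundled from the land title
reg.transfer("AU-SINK-001", "Airline B")  # sale on the carbon credit market
print(reg.holder("AU-SINK-001"))          # Airline B
```

An existing land administration system already enforces exactly this uniqueness for land titles, which is why adopting carbon credit rights into such registers would be a small step.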
Role of Land Policy, Land Instruments
and Land Tools
Adaptation to and mitigation of climate change,
by their very nature, challenge professionals in
the fields of land use, land management, land
reform, land tenure and land administration to
incorporate climate change issues into their
land policies, land policy instruments and facil-
itating land tools. This is similarly applicable to
water and coastal zone professionals. It is clear
that land registers and cadastres in themselves
cannot induce mitigation and adaptation of cli-
mate change. However, they must serve as a
sound information base for the implementation
of land management policies.
This means that in addition to appropriate
registration of land tenure and cadastral
geometry, additional information is required
about the environmental rating of buildings,
energy use, current and potential land use
related to carbon stock potential and
greenhouse gas emissions, clearer definitions
of various land types related to the application
of various legal regimes (for example, what
exactly is idle land), flood- and storm-prone
areas, salinization rates and
An example of flooding.
transport indicators. This information might not
necessarily be recorded in the land registration
and cadastre system itself, but should at least be
connected with it, so that a strong link with
private and public rights to land remains in
existence.
In the case of unbundled property rights, with
the separation of carbon credit titles, these
registers and cadastres should be able to reg-
ister such rights (registration) and to attach
appropriate geometric attributes (see section
10) and to make those titles accessible for
trade in the carbon credit market. Land regis-
ters and cadastres also have to fulfill their
most vital purpose, namely to provide land
tenure security to right holders, with a focus
on the poor, the vulnerable and indigenous
peoples, in order to safeguard their land rights
in case of e.g. demands for land for purposes
of large-scale biofuel production or afforesta-
tion for carbon sequestration and to provide
information about tenure, value and use of
land when governments want to encourage
changes in livestock, crop production, conver-
sion from arable land to grazing land, from
tillage to no-tillage cropping, reforestation and
combating degradation of soils through sound
land-use planning and management.
When governments want to apply taxation as
a measure to achieve such objectives, land
registers and cadastres are supposed to pro-
vide relevant information about taxable
objects, taxable values and taxable persons,
including earlier mentioned indicators regard-
ing energy use etc.
When governments need lands to realize cer-
tain land use (water storage, carbon sinks),
land registers and cadastres should provide
information about right holders to be com-
pensated in the land acquisition process, in
such a way that peoples land rights are
respected and the risk of eviction is avoided.
When land reform is at stake, land registers
and cadastres provide information about the
existing land tenure pattern and provide an
operational process to change from old to
new situations. In summary, land registers and
cadastres have a role to play in supporting
governments and citizens in their efforts at
mitigating climate change and trying to adapt
to its impact.
The Case of the Dutch Kadaster
As one of the signatory parties to the Kyoto
Protocol, the Netherlands government recog-
nizes the urgency and scale of the global cli-
mate challenge: its goal is a 30% reduction
in greenhouse gas emissions by 2020, rela-
tive to the benchmark year of 1990, prefer-
ably as part of a European effort. In view of
Although different approaches are possible, in
many cases land surface areas, above-ground
and below-ground volumes of biomass, canopy
surveys, and geoinformation play a role. The
Greenhouse Office of the Australian Department
of Environment publishes its Full Carbon
Accounting Model on the web (FullCAM,
accessed 13-11-2008) and also provides what
is known as a National Carbon Accounting
Toolbox and Data Viewer to allow land man-
agers to ensure that their projects or regional
emissions accounts are determined on a simi-
lar basis to Australia's official recording of emis-
sions from the land sector.
The methods used for calculating carbon cred-
its demonstrate a remarkable similarity to the
work of quantity surveyors, whose profession
it is to survey land areas and volumes to esti-
mate building and construction costs. To date,
the author is unaware of any publications which
explore the possible extension of the surveying
profession towards the quantification and qual-
ification of carbon credits and emission reduc-
tion rights.
Land registers and cadastres have to extend
their function beyond the conventional use for
land markets and land taxation. The data com-
prised in the land information systems are also
useful to facilitate government policy on adapt-
ing to and mitigating climate change.
Registering new rights in the form of carbon
credit titles would be feasible. With all these
aspects in mind, the idealistic concept of regis-
ters and cadastres as multi-purpose land
administration systems becomes a real possibility.
Paul van der Molen
is currently director of Kadaster International,
holds a chair in land administration and cadastre
at the International Institute for Geo-information
Science and Earth Observation ITC in Enschede
(NL). He is a former chairman of FIG Commission 7
and former FIG Vice President.
This paper has been prepared and presented at the
FIG Working Week in Eilat, Israel, 3-8 May 2009.
Thanks to Markku Villikka.
the fact that 50% of the land area in the
Netherlands is located below sea level, it is
no surprise that coping with the rising aver-
age seawater level, the higher run-off and dis-
charge predictions for the major rivers and
extreme precipitation forecasts is a priority.
However, the government realizes that mea-
sures to cope with water management should
be coupled to measures on land use, nature
conservation, urbanization, transport and recre-
ation. Therefore, the National Adaptation Policy
is based on the concept of integrated land-use
planning, which combines objectives of sustain-
able coastal defense measures, supplemented
by robust river water systems, sustainable
cities and climate-proof buildings.
Since January 1, 2008, legislation has entered
into effect that requires an energy label to be
available at the time of transactions related to
the construction, sale or letting of houses. The
energy label issued for a specific house pro-
vides information about the energy consumed
during its standardized use. These energy labels
form a new category in the land registers. To
date, the Netherlands Cadastre, Land Registry
and Mapping Agency, known as Kadaster, has
registered about 50,000 labels. The energy
labels are open for public inspection, as is all
cadastral data.
Kadaster supports the government in providing
not only all information about land tenure,
value and use of land and houses, but also
about public properties and environmental lim-
itations regarding use, noise, soil pollution and
nuisance. It also supports land acquisition by the
government in order to implement anti-flood-
ing measures.
The land consolidation expertise available at
Kadaster is put into practice when the govern-
ment aims at realizing better climate-proof agri-
cultural business structures as well as sub-
catchments for river water. As a consequence
of sea level rise, seawater will also penetrate
further into the estuaries of the Rhine and
Meuse, causing salt intrusion leading to high
salt concentrations. In this area as well,
Kadaster provides relevant land information to
support land-based anti-salinization spatial planning.
Job Opportunities for Land Surveyors?
A study by the IPCC (2000) reveals widespread
demand for a well-designed carbon accounting
system that provides for the transparent, con-
sistent, comparable, complete, accurate, verifi-
able and efficient recording and reporting of
changes in carbon stocks and/or changes in
greenhouse gas emissions by sources and
removals by sinks from applicable land use,
land-use change and forestry activities.
Space and location are important factors in the study of production, distribution and consumption of wealth.
The adoption of Geo-ICT in economics, however, has been slow and patchy. But things are changing rapidly:
not only is economic science picking up on the possibilities of Geo-ICT for economic research, recent education
programs in the Netherlands show how it can be used to analyze and forecast consumer behavior in time,
showing the continuing integration of the Geo-factor in marketing.
By Jasper Dekkers
The field of economics has developed tremen-
dously since its start in the late eighteenth
century. Although this social science studies
the economic aspects of human behaviour in
its broadest sense, at first it did not address
these social processes in a spatially-explicit
way. More than a century would pass, with
the exception of Johann Heinrich von Thünen
(1826), before economists started to recog-
nize the importance of space and location in
the study of the production, distribution and
consumption of wealth.
The adoption of Geo-ICT in economics has
been slow and patchy, the main application
being GIS software for mapping economic
data. Geo-ICT is also used for other purposes
than mapping, mainly for data exploration,
spatial analysis and modeling in the fields of
spatial economics and marketing. Over the
past few years, we have seen that the
increase in data availability in economics and
in marketing has caused a gradual shift from
exploratory research and mapping towards
explanatory research and modeling. This trend
is expected to persist and might be fuelled
further by an increasing awareness
among scientists that Geo-ICT has the poten-
tial to unite the otherwise opposing scientific
nomothetic and idiographic approaches.
Whilst the former approach focuses on defi-
nite truths and generalizations, the latter tries
to identify and record unique properties of
Fig. 1. Two heat maps of customer movement on a Saturday and on a Tuesday for the ECI book shop in Utrecht.
The Buzzword Explained
places. Using GIS, these two foci can be
combined in a place-based or local analysis
approach, with the goal of identifying properties
that distinguish places within the context of
a general framework. The role of Geo-ICT as
a bridge between opposing scientific
approaches becomes apparent when we con-
sider a GIS as consisting of two parts: the
database part (idiographic in nature), and the
part representing functions, algorithms, meth-
ods and models (nomothetic in nature).
What is Geomarketing?
The word geomarketing has high attention value
nowadays; it is a buzzword. While a lot of
business activities suddenly seem to be, or
somehow should be, related to geomarketing,
the concept of geomarketing still lacks a clear
and unambiguous definition. The term geo-
marketing is used more as a collective term
for various spatial aspects in marketing. That
leads to another question, namely what is
marketing exactly? Definitions that are used
focus on different aspects of marketing. For
example, marketing can be defined as:
- A business function (the Marketing department);
- A set of activities (the Marketing mix, market
research);
- A culture or attitude (Market orientation or
market focus).
In this article we choose to define marketing as
the activity, set of institutions, and processes
for creating, communicating, delivering, and
exchanging offerings that have value for
customers, clients, partners, and society at
large.
Geomarketing then becomes marketing
activities and processes in which the spa-
tial (buying) behavior of consumers and
businesses is explicitly taken into account.
Marketing can be put into effect on different
levels within organizations: operational (design-
ing a brochure), tactical (price policy) or
strategic (changing mission, target group). At the
Vrije Universiteit of Amsterdam, the Marketing
Department focuses on marketing at a strate-
gic level, addressing topics such as market seg-
mentation, the adoption of innovations, brand-
ing, loyalty / Customer Relationship
Management (CRM), distribution channels, etc.
So this is also our focus when we discuss geo-
marketing here. Typical Geomarketing questions
have to do with:
Location choice, e.g.,
- Where are competitors located and what
locations are suitable for a new outlet, and
are they still available?
- How does the internet as a communica-
tions/contact/selling channel influence the
market and where does this influence the
network of outlets?
Where Products Are Sold
Two examples of geomarketing analyses are
presented here. The first example involves a
paper from Marketing Science written by
Garber et al. (2004) and focuses on innova-
tion adoption. It states that successful prod-
ucts are driven by word-of-mouth effects. A
fundamental assumption here is that word-of-
mouth drives innovation and depends on
proximity. The analysis describes the introduc-
tion of a successful product and a not-suc-
cessful product and shows that in the former
case people adopting the new product over
time form a clustering spatial pattern. The
conclusion can be that where new products
are sold can yield interesting additional infor-
mation in an early stage of the introduction
of a new product next to how many products
are sold. Interesting questions are whether
word-of-mouth as such still is a spatial phe-
nomenon in the internet era and whether the
word-of-mouth effect differs per product
category.
The second example is also a nice one. The
scientific paper that was awarded the Paul
Green Award in 2008, the best paper award
granted by one of the top marketing journals
(the Journal of Marketing Research, JMR), got
this honor for being innovative (Bronnenberg
et al., 2007). It took a well-established mar-
keting concept, the modeling of variance in
sales over time, and proved that time has very
little influence compared with the influence of
sales variance over space. The research
showed among others that there are large dis-
parities between national and local market
share (or performance) and that local market
Where do (prospective) customers live, e.g.,
- Distance, radius, service area
- What do the geodemographic attributes of
a customer's zip code tell me?
- How best to reach target groups: Direct
Marketing, Billboarding, Outlets
What is the interaction between (prospective)
customers and locations, e.g.,
- What is the influence of geodemographics
and distance on customer loyalty?
The field of geomarketing can be arranged in
various ways, but we like to discern the fol-
lowing three main areas of study, ranging
from rather old to very new. First, the oldest
research field within geomarketing is location
planning. This is still a topic very much of
interest, also on a strategic level, but the
advent of Geo-ICT in marketing has shifted
research in this field. Previously, location
planning was mainly concerned with a priori
research on market areas, e.g. what are the
range and the threshold of a shop or an office
and is it economically viable? Now, the
research is more post hoc in nature and
investigates the effect of location, e.g. what
is the effect of the presence/location of a
shop or office on innovation-adoption, per-
formance, loyalty, etc. More recently, we have
seen the rise of geodemographics, i.e. the
use of detailed demographic data together
with location information. The latest devel-
opment is the focus on spatial aspects of
marketing concepts in general, for instance
the spatial dimensions of consumer behav-
ior. Much of the development in the field is
driven by new technology (software, hardware).
Fig. 2. Mean range in travel time of tickets sold per week, Utrecht.
structures can be very different from national
level structures (e.g., dominance versus
duopoly). The interesting conclusion of the
paper was that the well-established concept
of national brands was to be adapted. The
authors found that national brands are those
that lead in multiple (local) markets, but they
also found numerous cases of local brands
securing leadership in spite of their small
scale. As potential explanations for these
results, they name consumer differences (e.g.,
regional taste), retailer/distribution differences
and manufacturer differences (e.g., turf divi-
sion or historic order of entry). It is quite strik-
ing to see that a best paper award in 2008
can still be won by merely stating that we need
to take the spatial factor more into account
in our discipline.
Geomarketing in Education
In 2007, two researchers at the Vrije
Universiteit of Amsterdam (prof. dr. Jaap Boter
from the Marketing Department and dr. Jasper
Dekkers from the Spatial Economics
Department / Spatial Information Laboratory
(SPINlab)) started a Geomarketing course
within the Master of Marketing. The birth of
this course fitted in a trend at the university
where the use of Geo-ICT started to transcend
the borders of the traditional Geo-faculties.
Nowadays, Geo-ICT components are taught in
very different disciplines at numerous facul-
ties: health geography, archaeology, land use
modeling, crime analysis, geomarketing and,
most recently, geologistics. The Geomarketing
course also fitted well in the SPINlab educa-
tional strategy. This strategy implies finding
lecturers, specialists in their own disciplines,
who are enthusiastic about Geo-ICT, and cou-
pling them with Geo-ICT specialists to set up
a course together. In this way, both the
knowledge of the respective discipline and of
the use of Geo-ICT are guaranteed.
Geomarketing started as an optional course
that runs for six weeks and takes students
up to 20 hours per week. The course consists
of 50 percent lectures and 50 percent tutori-
als. Students were very enthusiastic about the
course, which led to a large increase in the
number of students taking the course in the
second year (from 9 students in 2007-2008
to 37 students in 2008-2009, which is about
33 percent of all marketing students at the
VU). It was interesting to hear the first
exclamations of 'oooh' and 'aaah' during the
students' first introduction to GIS software;
they could not believe that they had not been
introduced to the possibilities of Geo-ICT
sooner in their studies. Quite a few of them
wrote their master's theses in geomarketing
and are planning to seek a job in this direc-
tion as well.
Now that this course for regular students is
well in place, plans are to introduce a geo-
marketing course for professionals through
the part-time distance-learning Master in GIS
of UNIGIS this fall.
Geomarketing Knowledge Centre
It seems like the time is right and the market
is ready to accept and implement the use of
Geo-ICT in business on a broader scale.
However, it remains to be seen if this will hap-
pen, since there is a clear lack of successful
business cases that demonstrate and prove
the added value of the use of Geo-ICT in mar-
keting. In its desire to fill this gap and deliv-
er a proof-of-concept, the Vrije Universiteit of
Amsterdam and Geo-ICT consultancy firm
Geodan set up a Geomarketing Knowledge
Centre (GKC) in 2009. Within this knowledge
centre, the
Vrije Universiteit will focus on research pro-
jects for marketing master students and on
research projects that can yield interesting sci-
entific publications. Geodan delivers technol-
ogy and data support, pursues R&D together
with the researchers from the VU and will try
to transfer interesting research outcomes and
models into products and services that can
be used in practice. The further integration of
Geo-ICT into Business Intelligence software is
one way to proceed in this respect.
Meanwhile, the first successful business cases
have been carried out by the GKC. The first
case started as a master's thesis project,
supervised by the VU and supported by
Geodan. A student wanted to investigate
shopping behavior for customers of ECI, a
large book reseller and book club in the
Netherlands. ECI has 14 book stores and the
one in Utrecht was chosen as a test site.
Customers and personnel were given a key
cord with an RFID-tag inside (RFID stands for
Radio Frequency Identification and can be
described as a sort of micro-GPS receiv-
er/transmitter chip that communicates through
radio waves and is thus also operational
indoors). Using this technology, which sends
a location signal approximately every second
to a computer that stores this information,
movements of people through the shop were
Fig.3a. Total range of tickets sold
monitored. In this way, heat maps of the shop
could be generated, showing interesting pat-
terns of what parts of the shop were very
crowded, which books and shelves were most
attractive to customers, how customers move
through the shop, and where the interaction
between customers and personnel takes place.
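The heat-map step described here can be sketched by binning the per-second RFID position reports into floor-grid cells, where the count per cell approximates dwell time. This is a minimal sketch; the cell size and coordinates are illustrative, not the actual ECI setup:

```python
from collections import Counter

def heat_map(pings, cell_size=0.5):
    """Bin (x, y) floor positions (in metres) into grid cells.

    Because the tag reports its location roughly once per second,
    the count per cell approximates how long people dwelt there."""
    counts = Counter()
    for x, y in pings:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return counts

# Illustrative pings: three seconds near one shelf, one near the entrance.
pings = [(1.2, 3.4), (1.3, 3.4), (1.3, 3.5), (8.0, 0.5)]
print(heat_map(pings, cell_size=1.0))  # Counter({(1, 3): 3, (8, 0): 1})
```

Rendering such a grid as a colour ramp over the floor plan yields heat maps like those in Figure 1.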
An interesting observation is that the shelves
with CDs are hardly visited by customers at
all. Also, customers disregard the shelves with
discount books when they enter the shop; the
first thing they go to is the area where the lit-
erary fiction is displayed, with books from
popular Dutch authors like Joost Zwagerman
and Heleen van Rooyen. Another interesting
observation is that the space in the book
store is used differently on different days of
the week.
For example, on Saturdays the lounge area
where you can sit down to read something is
not used that much (see Figure 1). This differ-
ence is most likely influenced by consumer
differences: on a Saturday a different type of
consumer frequents the shop than on a typi-
was still several weeks away (see Figure 2).
This means that the market area for this show
actually is not static but dynamic. A conse-
quence for marketing activities is that, for
example, when the theatre wants to do a
direct mailing campaign two weeks before the
show, they need to target a much smaller area
than when the show is still eight weeks away
(see Figures 3a and 3b). The phenomenon of
dynamic market areas may very well apply to
other events, products and services.
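The box-office analysis behind these dynamic market areas can be sketched as a grouping of ticket sales by weeks remaining before the show. The data and field names below are hypothetical, not the Theaters Tilburg records:

```python
from collections import defaultdict
from statistics import mean

def mean_range_per_week(tickets):
    """Mean travel time (the 'range') of ticket buyers, grouped by the
    number of weeks remaining before the show.

    tickets: iterable of (weeks_before_show, travel_time_minutes)."""
    by_week = defaultdict(list)
    for weeks, minutes in tickets:
        by_week[weeks].append(minutes)
    return {weeks: mean(times) for weeks, times in sorted(by_week.items())}

# Hypothetical sales: early buyers travel further than last-minute ones.
tickets = [(8, 40), (8, 30), (5, 25), (2, 10), (2, 14)]
print(mean_range_per_week(tickets))  # {2: 12, 5: 25, 8: 35}
```

A shrinking mean range as the show approaches is exactly the pattern that argues for targeting a smaller mailing area in the final weeks.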
As the Geomarketing Knowledge Centre's first
year came to a close, it organized a confer-
ence on geomarketing on March 3, 2010. In
addition to other presentations by various
companies that are using geomarketing in
their business operations, the ECI case men-
tioned above was presented here. The GKC is
now entering its second year and we have
several new and interesting business cases to
research, with some 30 new and very enthu-
siastic Geomarketing students, several mar-
keting students starting their thesis projects
in a few weeks time, and even the first PhD
proposal being composed. All these activities
reaffirm our conviction that the future is look-
ing bright with regard to the further develop-
ment of geomarketing and the continuing
integration of the Geo-factor in marketing.
Jasper E.C. Dekkers
Jasper Dekkers (1977) graduated with an award-
winning thesis in both Spatial Economics and
Business Economics at the VU University in 2001.
In 2005 he completed a postgraduate study in GIS.
He has been working at the SPINlab at the
Department of Spatial Economics since 2000 as
researcher and recently as assistant professor in
Spatial Economics and Geographical Information
Science (GIS). His research focuses on spatial-
economic modeling of land use (change) and on
modeling business and consumer behavior.
He is coordinator of the UNIGIS Master of Science
in GIS and one of the founders of the
Geomarketing Knowledge Centre.
cal Tuesday. On the basis of the analysis, an
alternative floor plan was developed in which
the expensive floor space is expected to be
used more efficiently. The idea now is to
implement this alternative floor plan and to
test the effect on turnover. By measuring
turnover we are actually able to quantify the
added value of the use of Geo-ICT in this anal-
ysis in monetary terms. This case really
demonstrates the strength of science (VU,
methods and analyses) and business
(Geodan, technology and data support) team-
ing up with each other.
Another interesting master's thesis project
that has recently been finished is the case of
Theaters Tilburg. This case shows that the
static way market areas are currently defined
in practice (a set shop location, with a buffer
of travel time/distance X around it as the
market area) might not be the way
to go about it. By examining box office data
for tickets sold for a particular show in the
eight weeks prior to the show, it was discov-
ered that in the last weeks before the show,
the mean range (travel time) of people buy-
ing tickets was smaller than when the show
Fig. 3b. Range of tickets sold in the week prior to the show
New Experiences, Remarks and Prospects
Building Reconstruction and Photo-texturing
A 3D building model is a useful instrument to guarantee well-detailed documentation of shape, size,
material status, colour, deformations and decay, either under construction or during the life cycle. In recent
years, 3D point or surface models, especially when photo-textured, have become increasingly widespread as a
valid add-on to classic vector plotting, rectified photography, orthophotos and photo-mosaics. Besides, the
recent developments of spatial geo-visualization services, for instance Google Earth and Bing Maps (formerly
Microsoft Virtual Earth), together with GIS instruments, open up further immersive ways of land knowledge
and representation. Drawing on several surveys of historic buildings, the paper provides remarks on model
collection, reconstruction, photo-texturing and specifications
for different levels of detail and precision.
By Luigi Colombo and Barbara Marana
From Laser Scanning to Imaging
Thanks to automated and non-contact procedures, generally based on
laser and imaging sensors and on 3D processing software, it is nowa-
days possible to produce measurable virtual-reality models.
These models consist of point sequences or meshes; their description
capability is more effective when the reconstructed geometry is complet-
ed with surface photo-rendering.
Sampling point density (spatial resolution), object morphology, request-
ed level of detail (scale) and accuracy are linked parameters in the pro-
cess for model reconstruction.
Useful detail levels for building 2D-3D representation are historically: 1:50,
1:100 and 1:200; the corresponding accuracy (standard deviation) can
be assumed equal to 10 mm, 20 mm and 40 mm, respectively.
It is possible to assume (Barber et al., 2003; English Heritage
Specifications, 2006) that the minimum detectable feature size d of the
reconstructed model is nearly 3 times the sampling step s of the
scanning survey (with an expected likelihood of 60%).
The English specifications also state that only a sampling grid with
the same step s in both scanning directions, correlated to the
representation accuracy, can provide a suitable object description; at the
Fig. 1 - On-line (left) and off-line (right) photo-texture collection.
same time, the beam footprint size b over the scanned surfaces should fulfil the condition b ≤ 2s.
Obviously, the sampling step strictly depends on the distance D between sensor and object, and on the instrument's angular attitude with respect to the object surfaces of interest.
A nearly constant value of the step s can be achieved through scan lines as orthogonal as possible to the walls, and by changing the angular acquisition step according to the distance D.
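These rules of thumb lend themselves to a small planning check. The sketch below is illustrative (the function name and the choice of s equal to the accuracy σ are assumptions, not part of the cited specifications):

```python
import math

def plan_scan(sigma_mm, distance_m, beam_footprint_mm):
    """Check a scan plan against the rules quoted above:
    step s <= accuracy sigma, detectable feature d ~ 3s,
    beam footprint b <= 2s, angular step ~ s/D."""
    s_mm = sigma_mm                           # largest admissible sampling step
    d_mm = 3 * s_mm                           # minimum detectable feature size
    footprint_ok = beam_footprint_mm <= 2 * s_mm
    # angular increment (degrees) that keeps step ~ s at distance D
    angular_step_deg = math.degrees((s_mm / 1000.0) / distance_m)
    return s_mm, d_mm, footprint_ok, angular_step_deg

# 1:100 detail level (sigma = 20 mm), 50 m range, 13 mm laser spot:
s, d, ok, dtheta = plan_scan(20, 50, 13)
```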
In photo-realistic modelling, laser scanning records surfaces as point sequences with an assigned origin and a common reference, while photogrammetry adds a good description of feature edges, colours, materials and decay (Debevec et al., 2001). Suitable scan planning is very important: all survey positions must provide the widest visibility of the areas of interest.
It is also advisable to employ panoramic scanners and a multi-scanning approach, in order to reduce the number of point clouds, sight occlusions and shadow effects (Petsa et al., 2007).
Visibility-occlusion problems are usually the first cause of data voids and information loss during scanning; on-line visualization is therefore very important for evaluating errors and further needs at once.
Point-cloud management is computationally heavy: for this reason, simplification of points, meshes and textures is always required.
In particular, a level-of-detail (LoD) procedure makes it possible to decrease the number of meshes and the texture resolution: the level of detail is calculated each time according to surface complexity, visualization distance and observation angle (multi-resolution approach).
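A minimal sketch of such a distance-driven LoD rule follows (the thresholds, and the omission of surface complexity and observation angle, are simplifying assumptions):

```python
def lod_for(distance_m, thresholds=(5, 20, 50)):
    """Pick a level of detail from viewing distance alone:
    0 is the finest level; a full multi-resolution scheme would
    also weigh surface complexity and observation angle."""
    for level, limit in enumerate(thresholds):
        if distance_m <= limit:
            return level
    return len(thresholds)  # coarsest level beyond the last threshold

lod_for(3)    # finest meshes/textures close up
lod_for(100)  # coarsest far away
```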
Building Photo-texturing
In photo-realistic modelling, image-taking is a very important step which needs a correct design: the image pixel footprint should not exceed a maximum size connected to the expected level of detail.
Image radiometry is also important: in exterior surveys, sunlight can create over-exposure and unwanted shadow effects, while, in interior surveys, windows and glass surfaces cause troublesome reflections, shadows, etc. These effects can be avoided by working in daytime without direct sunlight, or at night with diffuse artificial light.
Fig. 2 - Quality of image reprojection: over point models at standard density (a), at high-density (b)
or over mesh (c).
Fig. 3 - Textured model of the Grazie church:
views of interiors from different
observation points.
The acquisition of images for texturing can be carried out on-line or off-line.
On-line or direct collection, while scanning, is possible when the laser instrument is equipped with an internal photo-camera or a jointly mounted external one; this kind of approach guarantees a-priori knowledge of the inner and external orientation parameters, but involves several constraints:
- on-line photos are influenced by possible environmental effects;
- the photographic taking distance cannot be changed with respect to the scanning one;
- single constructive elements may appear in more than one on-line photo; this fact requires linking interventions and radiometric post-processing (fig. 1).
On the other hand, one could record images at different times (off-line acquisition): the photographic coverage has to be realized with taking locations very close to the scanning ones and with the same orientation, so as to avoid detail splitting and perspective inconsistencies.
These geometric constraints between photo-camera and scanner attitudes can be disregarded only if the processing software allows the following workflow: photos and original scans are collected from free, independent locations, but new ad hoc scans (named virtual scans) are subsequently calculated with respect to the photo-taking positions by applying a spatial transformation to the original scanned points.
Image reprojection requires knowledge of the photo-camera's inner orientation and an estimate of the external orientation.
With calibrated photo-cameras, built into or rigidly joined to the scanning device, the external orientation is known a-priori, and so one speaks of automatic reprojection.
On the other hand, with independent photo-taking (photo-camera located at a freely chosen position), the external orientation is unknown and must be estimated analytically: well-distributed, visible 3D points are used for its determination (manual reprojection).
Automatic mapping is a theoretically favourable but not always reliable process: an internal photo-camera usually records images at reduced resolution, while jointly mounted external photo-cameras may acquire images under unsuitable lighting conditions.
Manual reprojection, on the contrary, needs a relevant
Figs. 4 - 5 - Textured model of St Maria Maggiore cathedral: details of exteriors
(a-b, above) and interiors (below).
number of tie points, above all in the case of complex object morphology; the search for tie points over the model and the images can be only partially supported by automatic least-squares digital matching.
The mapping resolution depends on the
so-called reprojection factor, a parameter
which measures the ratio between an
image area and the corresponding one
over the model.
When photos have a higher resolution than the point-cloud sampling (quite possible in highly detailed models), the images' better quality is preserved only over a mesh model; indeed, when photos are back-projected over point clouds, the point density alone sets the photo-texturing results.
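The decision can be sketched numerically; the camera parameters below are illustrative assumptions, not those of the project:

```python
def pixel_footprint_mm(pixel_pitch_um, focal_length_mm, distance_m):
    """Size on the surveyed wall, in mm, covered by one image pixel."""
    return (pixel_pitch_um / 1000.0) * (distance_m * 1000.0) / focal_length_mm

def texture_carrier(pixel_mm, sampling_step_mm):
    """If the photos resolve finer detail than the point cloud,
    only a mesh preserves their quality; otherwise points suffice."""
    return "mesh" if pixel_mm < sampling_step_mm else "point cloud"

# 6 um pixels, 24 mm lens, 20 m away, against a 10 mm scan step:
p = pixel_footprint_mm(6, 24, 20)   # 5 mm of wall per pixel
texture_carrier(p, 10)              # "mesh"
```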
Some Experiences
Photo-textured models have recently been produced by GeoLab (University of Bergamo) to document architectural buildings in Bergamo, a historical city close to Milan (Northern Italy): for instance, the ancient Cathedral of St Maria Maggiore (interiors and exteriors), located on the hill (upper town), and the Grazie Church (interiors) in the heart of the so-called lower town (Colombo et al., 2005, 2007, 2009).
The morphology acquisition for 3D modelling has been carried out at a detail level of 1:100 (accuracy σ = 20 mm) by a panoramic laser sensor, through dense sampling. The sampling step s (s ≤ σ) has been fixed at 10 mm: within the maximum survey range (around 50 m) the laser spot has a diameter b not greater than 13 mm, which agrees with the advised relation b ≤ 2s.
This detail level enables recognition of elements of about 2-3 cm.
These experiences have shown that direct (automated) photo-texturing offers really meaningful advantages only under favourable environmental working conditions; otherwise, with bad light or an adverse environment, the manual process seems more versatile and effective. Nevertheless, manual texturing requires the selection of many tie points.
To optimize the photo-rendering process (see tab. 1), some comparative tests have been carried out to evaluate the quality of image projection over point clouds at different densities and over a mesh.
Figure 2 provides examples of image-texturing over a point model at standard density (a), over a point model at high density (b) and over a mesh (c): the different radiometric quality of the mapping is plain. However, during the projects, because of the large dataset and the big object dimensions, it was decided to proceed with photo-texturing only over a high-density point model (coloured NURBS).
To reduce the computational load, a sub-sampling factor has been applied to the point model; this factor filters the data in order to speed up processing, according to the system's memory resources and visualization capability.
Figure 3 shows the photo-model of the Grazie church interiors, through a sequence of views from different observation points. The best photo-rendering results can be seen on the walls with the highest density and nearly constant sampling step: for instance, those closest to the scanner position or most orthogonal to the scanning direction.
Figures 4 and 5 show details of the textured point model, for the interiors (the dome intrados) and the exteriors (the main façades) of St Maria Maggiore respectively.
Assigned spatial tolerance values (3σ) for the object reconstruction have been: T50 = 30 mm for the 1:50 detail level and likewise T100 = 60 mm for 1:100.
Point acquisition has been carried out at maximum sampling (step s = 10-20 mm), with respect to the building morphology and the level of detail.
The workflow for photo-textured modelling by laser scanning and imaging has shown that the acquisition phase takes no more than 10% of the whole working cycle, the other processing steps sharing the remaining time according to the percentages listed in table 1.
Final Remarks and Prospects
A reduced copy of the reconstructed textured model for St Maria
Maggiore is going to be developed for representation inside Geospatial
Web environments, namely the popular geo-browsers such as Google
Earth and Virtual Earth.
These three-dimensional platforms provide a new exciting way to view
the world and give a quick, efficient and easy-to-use interface to link
imagery, maps, 3D building models and other geo-spatial information.
The Cathedral model has been simplified in geometry, meshed, exported with texture and imported into the CAD tool Google SketchUp.
This package provides a 3D textured and georeferenced model (World Geodetic System 1984) in the KML or KMZ (compressed) file formats, the standards for all geo-browsers.
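The exported file then has roughly the following shape (a hand-written sketch: the coordinates and model path are placeholders, not the project's actual export):

```python
# Minimal KML referencing a textured COLLADA model, as used by geo-browsers.
KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Textured building model</name>
    <Model>
      <Location>
        <longitude>9.66</longitude>
        <latitude>45.70</latitude>
        <altitude>0</altitude>
      </Location>
      <Link><href>models/building.dae</href></Link>
    </Model>
  </Placemark>
</kml>
"""

with open("building.kml", "w") as f:   # a .kmz is simply this file zipped
    f.write(KML)
```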
All this can enable a new exciting and immersive way to view urban
landscape and 3D buildings, worldwide.
Luigi Colombo is a full professor of Surveying & Geomatics and Building-monitoring geo-techniques at the Faculty of Engineering, University of Bergamo (Italy); Barbara Marana works as a researcher at the same faculty.
Thanks to all the team, senior students and apprentices at Geo-technology
Laboratory (GeoLab) of the University of Bergamo, Faculty of Engineering
(Dalmine - BG - Italy).
Tab. 1 - Workflow of the laser-scanning process, with time-consumption analysis.
Augmented Reality Close to the Real
Notes are stuck on houses but not everyone can
see them. Road signs tell everybody the exact
way to get to their individual destinations.
Monsters appear on walls, or you can simply see
through buildings. Whoever can do this is not
hallucinating, but just might be using the right
mobile application. Augmented Reality (AR) is the
term for this technical extension of our senses.
The concept is straightforward. A real-world view on the mobile's camera screen has additional information and graphics superimposed on it. In the past, a GPS module for localization, a powerful computer in a backpack for heavy matrix calculations, and some kind of cyber goggles were necessary to do Augmented Reality. Today more and more smartphones can do it, and even more. Every mobile has a built-in camera.
Nokia immediately became the market leader in digital cameras when it started to integrate cameras into its mobiles. Today's popular smartphones all have integrated GPS modules, large screens and enough computing power to do Augmented Reality. In addition, fast mobile broadband internet allows these smartphones to connect to major geo-referenced information databases such as Bing Maps, Google, Qype and all geo-tagged articles in Wikipedia. Consequently real and virtual worlds can be linked and displayed on the mobile screen to augment reality.
In 1994 researcher Paul Milgram introduced the
reality-virtuality continuum that portrays the link
between the real and virtual worlds and allows
better classification of what Augmented Reality
is. The real world is at one end of the continu-
um and virtual reality, the computer-generated
world, is at the other end. Augmented Reality is
closer to the real-world end. The closer a system
is to the virtual-reality end, the fewer real elements it contains. If the real world can be augmented with virtual objects, it is logical to expect that the virtual world can be augmented with real scenes. Such an environment is called augmented virtuality.

In the early days of 2010 the term Augmented Reality became tremendously omnipresent. Augmented Reality is considered the hottest topic of the year, and it really seems that AR lurks in every corner of our everyday lives. Most AR applications are designed for the iPhone 3GS or Google Android phones, starting with the G1. The possibilities are manifold. In addition to overlaying information on the visible, indicating the invisible opens up opportunities for many applications.
Augmented Skiing and Hiking
A range of skiing and hiking applications make
use of Augmented Reality as, in the mountains,
real signage is rare and objects of interest such
as mountain tops and panoramic views are
sometimes unreachable. REALSKI is dedicated to skiers and snowboarders and their iPhone 3GS. Users can navigate trails and see on-mountain features at selected North American ski resorts. It allows riders to view their surroundings while the app overlays digital graphics showing nearby lifts, runs and resort facilities in real time. Hence users can find the location of these facilities and points of interest such as named runs, lift names with loading and unloading areas, lodges, and special areas such as terrain parks. REALSKI layers information on top of the visuals picked up by the camera, using the current location and elevation detected by the GPS, the compass heading and the device.

Similarly, the application Peak.AR is your best
mobile mate if you have ever wondered about
the name of a prominent peak. It lets you find
the answer simply by taking a look through the
camera of your iPhone. Salzburg Research devel-
oped this Augmented Reality application to
demonstrate a platform that can be used to visu-
alize any kind of geo-referenced data. Using
Peak.AR the most prominent peaks within the
Wikitude Drive - real-time navigation with mobile augmented reality. (Source: Mobilizy GmbH)
In these early days of 2010, Augmented Reality resounds throughout the land. Smartphones finally seem capable enough to provide a superimposed view of virtual and real worlds through the camera's view. This vision awakens expectations of big business. In the future shop-owners might stick virtual coupons on their shop-windows, to be picked up by AR-flaneurs and thus attract new customers. This article gives a short overview of current mobile Augmented Reality applications and the expected developments in the coming years.
Mobile Augmented Reality at a Glance
chosen range are automatically selected in order to produce a clean and uncluttered image. It comes with a database of over 100,000 peaks worldwide. The data, extracted from OpenStreetMap, is held in its own database; in the future, online access to OpenStreetMap will be provided.
The Mobile Phone becomes a HUD
An old hand in Augmented Reality is the start-up Mobilizy. Their application Wikitude-AR, also known as Wikitude World Browser, first appeared on the G1 Google Android in October 2008 in the UK. It is an Augmented Reality browser for geo-referenced articles in Wikipedia, for the geo-coded and multilingual database of the local search provider Qype, and for all data of the photo service Panoramio. Thus Wikitude offers approximately 350,000 articles worldwide. All articles
can be visualized in a map view, a satellite map,
simply as a list or in the camera view. In August
2009 the research and development group
Mobilizy reached the next step and revealed a
preview of an augmented reality navigation sys-
tem. Wikitude Drive combines real-time naviga-
tion with mobile augmented reality by overlay-
ing point-to-point directions on a camera view,
without the need for maps. It will be launched
for Android, iPhone and Symbian phones. The
navigational data is accessed in real time from
the internet. Wikitude Drive is similar to the head-up displays (HUD) for cars that are sometimes integrated into today's high-end models. But this lightweight navigational system is appropriate for pedestrian navigation as well. The "last mile", as engineers call it, often contains the most difficult visual obstacles; metro stations, for example, elude surface visibility.
Wayfinder locates
the nearest New York City subway and PATH (Port
Authority Trans-Hudson) stations via the Android
phone. It also gives walking directions and a view
of the walking directions on a map. This aug-
mented reality application won the New York City
BigApps competition. New York City aims to pro-
vide information and transparency to its citizens.
A similar purpose underlies a new update of the applications London Tube and Métro Paris from the independent French development studio Presselite. It enables iPhone 3GS users to see the nearest stations and POI in Augmented Reality views of London and Paris. Supplementary places of interest, shops and snack bars are also shown. Local search seems to be a noteworthy field for Augmented Reality.
Augmented Local Search
Layar shows apartments to rent, the shortest way back to where you left your car, and doctors associated with your health insurance company. This "Mobile Augmented Reality Browser", as it is called, utilizes a layer concept for additional information instead of connecting to Wikipedia as Wikitude does. The user can choose the additional layers to display. The application then overlays the camera display with data, references and advertisements. Yelp published a similar tool, called Monocle, in 2009. It takes Yelp information and overlays it onto the real world. The feature hangs little icons over Yelp-reviewed establishments, letting you know exactly where and how far away they are.
Meanwhile the application Junaio focuses on social networking to generate content. The application was published by Metaio, a company that specializes in Augmented Reality solutions in a multitude of technological domains. Junaio is an iPhone client that combines mobile Augmented Reality with social networks. Users can geo-code information and publish this content within their network of friends. Graphics and text can be geo-coded, as can Twitter feeds. At the moment Metaio accumulates AR tags to establish a content database. This process currently works without any moderation or censorship, though this might soon change to enable more strategic content development.
Everybody is in the Starting Block
The digital sixth sense appears to be within tangible reach. Developers all around the world are working on computer-aided extensions of our sense of reality. The big players are no exception. Microsoft's Bing Maps is strolling down the Augmented Reality track, as shown in some demo videos at the TED conference in February 2010. Bing Maps' Blaise Aguera y Arcas explained: "We see this space, three-dimensional environment, as being a canvas on which all sorts of applications can play out". The new iPad holds out the possibility of being an Augmented Reality device, but unfortunately Apple forgot to give it a camera. However, a close look into the iPad reveals space reserved for a camera.
Google's Goggles hit the front pages in January 2010. It is an application for visual search on Android smartphones that compares elementary parts of the camera view with Google's database of pictures. If there is a match, all information linked to the picture is shown. Goggles evoked another big debate about privacy issues, as now even pictures of people's faces might be searched and linked to Facebook user profiles.

I would not call it augmented reality yet, because there is no positional link for the superimposition of data. But it is definitely on the verge of happening and might be a solution to mitigate the problem of inaccurate GPS localization.
We Need More Computing Power
The GPS on a mobile phone only gives your position to within around 20 meters. Using the iPhone's compass for orientation is accurate only to around 20 degrees. This can lead to problems in localizing content for the camera's view. Real and augmented objects may also be poorly aligned with each other. Hence virtual objects can end up floating in the view rather than being solidly anchored to real objects. This becomes a big problem if, for example, two restaurants with different review ratings sit side by side. The creation of a signature from a photograph of a building or other landscape object, which is then linked to the object's coordinates, might help decrease these problems of inaccuracy. Metaio, as well as the U.S. chip producer Qualcomm, has identified this kind of environment detection as the long-term solution for problems with accuracy.
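The scale of the problem is easy to estimate. The sketch below (the function names, screen width and field of view are illustrative assumptions) combines the positional and compass errors quoted above:

```python
import math

def overlay_error_deg(gps_error_m, target_distance_m, compass_error_deg):
    """Worst-case angular misplacement of an AR overlay."""
    position_term = math.degrees(math.atan2(gps_error_m, target_distance_m))
    return position_term + compass_error_deg

def error_px(error_deg, screen_width_px=480, horizontal_fov_deg=60):
    """Rough on-screen offset for a given angular error."""
    return error_deg / horizontal_fov_deg * screen_width_px

# A restaurant 50 m away, 20 m GPS error, 20 degree compass error:
total = overlay_error_deg(20, 50, 20)   # about 42 degrees
offset = error_px(total)                # well over half the screen width
```

With numbers like these, a virtual label can easily land on the neighbouring building, which is why image-based signatures of the scene are an attractive complement to GPS and compass.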
It will take some time before we can get skiing directions on ski goggles and immediately send a Facebook message to that unknown girl we met on the slopes. Until then, computing power is another challenge alongside the problem of inaccuracy. If you have seen the way AR applications are presented, you might feel disappointed, as they still appear as very simple dots on a camera view. Expectations are somewhat driven by the way Hollywood's special effects fool us. Chip producers need to meet the demand for computing power in smartphones. Qualcomm, for example, is working on chips specialized for AR applications, as announced in Qmag in November 2009. I expect the first big steps in mobile AR to be taken in 2011. So, if 2010 has been heralded as the year of mobile AR, 2012 might be the year of its eventual breakthrough.
Florian Fischer, GIS Editor and Research Assistant at
the Austrian Academy of Sciences, Institute for
GIScience in Salzburg, Austria.
Presselite's London Tube AR application. (Source: Presselite)
Why And What Do Individuals Contribute?
Volunteered Geographic Information
Whether you think of Google Maps, OpenStreetMap, TomTom, the US Government, or many other sites, the increased presence of volunteer contributors (or 'produsers') is beginning to relocate and redistribute GI production activities from professional mapping agencies to wider networks of actors. In this article, the authors introduce the types of people who volunteer geospatial information, sort through the nature of their contributions, and identify early lessons to be drawn from this research.
By David J. Coleman and Yola Georgiadou
Numerous articles and examples over the past 30 months have introduced us to the terms "neogeography", "produsers" and "volunteered geographic information" or VGI. Collaborative Web-based efforts like OpenStreetMap, Tagzania, People's Map, and Platial all enable amateur enthusiasts to create and share georeferenced point- and line-based data. Citizen inputs from personal GPS receivers and cellphones now strengthen emergency response efforts.
Commercially, firms like Tele Atlas, NAVTEQ and TomTom use Web-based customer input to locate and qualify mapping errors and/or feature updates required in their road network databases. Google Map Maker now provides citizens in over 170 jurisdictions with the ability to help populate and update Google Maps' graphical and attribute data (Google, 2009; see Figure 1). In October 2009, Google announced it was even forgoing relationships with a traditional supplier for its U.S. data coverage as it increasingly relies on its own capabilities and volunteer base.
What motivates people to voluntarily contribute such information? Can
we assume that VGI contributors will be motivated by the same factors
as those of contributors to the Open Source software and Wikipedia com-
munities? VGI updates may represent a real opportunity to keep map-
ping data up to date. If the private sector is already using this, what's
holding back budget-strapped public sector mapping programs?
These questions form the basis of a research program underway at the
University of New Brunswick. They are discussed in depth along with a
more extensive list of references in Coleman et al (2009a). In this article,
the authors introduce the types of people who volunteer geospatial infor-
mation, the nature of their contributions, and early lessons to be drawn
from this research.
Contributors and Their Motivators
Empirical research has classified volunteer contributions in
both the Wikipedia and the Free or Open-Source Software
(or "F/OSS") communities. Closer to our own field, early
lessons may be drawn from VGI research now underway by
research groups in North America, the United Kingdom, and
western Europe. An excellent collection of early research
papers on the subject may be found in GeoJournal (2008).
Jimmy Wales, one of Wikipedia's founders, suggested that 2% of the users do 75% of the work on that site. In-depth analyses of Wikipedia contributions confirm that even smaller percentages of committed, registered contributors -- called "zealots", "insiders" or "elite regulars" -- undertake the vast majority of the individual edits. While occasional "Good Samaritan" content providers may actually make very few unique contributions, they are still very important in terms of contributing new content.
There are parallels with VGI here. For example, two major road-network providers claim that many of their individual contributors may be satisfied to provide only one or two contributions, often concerning new roads or updates in their own immediate neighbourhood. An early analysis of OpenStreetMap contributors suggests that a very small number of individuals contribute the majority of content to that database.
Why Contribute?
To better understand why individuals contribute geographic information,
lessons may again be drawn from experiences in the Wikipedia, F/OSS,
and commercial User Contribution Systems communities. Consolidating
and summarizing these findings yields the following list of motivators to
make constructive contributions:
(1) Altruism;
(2) Professional or Personal Interest;
(3) Intellectual Stimulation;
(4) Protection or enhancement of a personal investment;
(5) Social Reward;
(6) Enhanced Personal Reputation;
(7) An Outlet for Creative and Independent Self-expression; and
(8) Pride of Place.
Pride of Place plays a major role in encouraging individuals to make
updates to road centreline and point-of-interest data in Google Earth,
OpenStreetMap and Tele Atlas or NAVTEQ datasets covering their home
town. Altruism, Professional or Personal Interest, and possibly Social
Reward are strong motivators for citizens engaged in reporting specific
Figure 1. Countries in which Individuals Collect and Edit Their Own Data using
Google Map Maker (Google, 2009)
instances or extents of natural or man-made disasters. Social Reward,
Professional or Personal Interest, Pride of Place, and possibly Intellectual
Stimulation can motivate persons to participate in OpenStreetMap "map-
ping parties", Google's volunteer-based mapping in emerging nations,
and early USGS National Map Corps operations. Protection or
Enhancement of a Personal Investment motivates individuals to use
TomTom's MapShare service to update data on their TomTom personal
navigation unit.
Not all contributors may be interested in providing objective or reliable information. There are negative motivators to consider as well, and such motivations are easy to identify:
(1) Mischief: Mischievous persons or vandals hoping to generate skep-
ticism or confusion by replacing legitimate entries with nonsensical or
overtly offensive content.
(2) Social, Economic or Political Agenda: Independent individuals or repre-
sentatives motivated by beliefs in a given community, organization or
cause; and
(3) Malice and/or Criminal Intent: Individuals possessing malicious (and
possibly criminal) intent in hopes of personal gain.
As one progresses from (1) to (3), it is more difficult to develop automat-
ed approaches to monitoring, identification, editing and overall QA. While
far from tamper-proof, there are tools now being developed that can help
identify the location of the computer from which a contribution is being
made. For example, WikiScanner consists of a publicly searchable database that links millions of anonymous Wikipedia edits to the organizations where those edits apparently originated, and then maps the corresponding geographic location(s) associated with those respective IP addresses.
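The principle behind such tools can be illustrated in a few lines (the organisation table below is hypothetical; WikiScanner built its own from public IP-allocation registries):

```python
import ipaddress

# Hypothetical registry of organisations and their allocated IP ranges.
ORG_NETWORKS = {
    "Example Corp": ipaddress.ip_network("192.0.2.0/24"),
    "Example University": ipaddress.ip_network("198.51.100.0/24"),
}

def organisation_for(ip_string):
    """Return the organisation whose allocated range contains the IP,
    or None if the edit came from an unlisted address."""
    ip = ipaddress.ip_address(ip_string)
    for org, network in ORG_NETWORKS.items():
        if ip in network:
            return org
    return None

organisation_for("192.0.2.42")   # "Example Corp"
```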
Characterizing the Contributions
Drawing from the work of the authors mentioned above, voluntary contributions to Wikipedia may be termed either "Constructive" or "Damaging" and fall into one of ten categories. The constructive categories are:
- Legitimate new content;
- Constructive amendments, clarifications and additions;
- Validation and repair of existing entries; and
- Minor edits and format changes.
The damaging categories are:
- Mass deletes: removal of all or nearly all of an article's content;
- Nonsense: text that is meaningless to the reader and/or irrelevant to the context of the article;
- Spam: advertisements or non-useful links incorporated into the article;
- Partial deletes: removal of some of an article's content, from a few sentences to many paragraphs;
- Offensive content: inclusion of (e.g.) obscenities, hate speech, unwarranted attacks on public figures, or unexpected links to pornography; and
- Misinformation: clearly false information, such as changed dates, or subtle insertions or removals of certain words which change the meaning of a passage.
There are corresponding geographical examples of all four types of Constructive Contributions. In terms of damaging contributions, a Partial Delete from a map database could have serious consequences. Mass Deletes or Nonsense contributions (e.g. GPS art) may occur, but they would likely be detected and corrected easily. Misinformation may fall into two categories. Unintentional misinformation may be provided where someone genuinely believes they are providing reliable new information or updates but, due to procedural errors, innocent misinterpretations, or reliance on false second-hand information, incorrect information is provided.
As mentioned earlier, contributions of deliberate or intentional misinformation are usually driven by a conscious agenda. For example, a group of concerned citizens and organizations may wish to see digital map and attribute data amended to re-route traffic around older village centers, residential neighborhoods and school zones. Again, WikiScanner-type tracking technologies may be useful in identifying logical linkages between the nature and location of contributors and their respective contributions.
Early Lessons Learned
What early lessons may be drawn from these findings? First, VGI need not
necessarily be new graphical information. In many instances, it may be
updated attributes (a dirt road now paved) or even additional information
(the official name and/or purpose of a given building). For example, most
data submitted by TomTom MapShare customers are updated attributes.
Second, volunteer contributors want some recognition of their contribu-
tion. Such recognition may range from early acknowledgement of a contri-
bution via an automatic return e-mail message (a practice adopted by
NAVTEQ's MapReporter site, for example) to more formal recognition on a
website's "List of Contributors" or even in metadata.
Third, contributors want to see their contribution used -- and quickly. Bearden pointed to volunteer discouragement when the US Geological Survey was unable to quickly incorporate map updates made by USGS National Map Corps members.
Fourth, there are ways to assess contributor credibility and validate the
corresponding contributions. There are definite spatial considerations that
make VGI contributions unique. Similarly, the date and time at which a
volunteered contribution is made may have a bearing on its credibility,
especially when trying to assess the reliability of two or more competing
or contradictory contributions.
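How timing and a contributor's track record might be combined in such an assessment can be illustrated with a small sketch. The names, threshold, and weighting rule below are hypothetical illustrations, not part of any production VGI system: given two contradictory edits to the same feature, prefer the contributor with a clearly better acceptance record, and break near-ties by recency, since the newer edit may reflect a recent change on the ground.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Contribution:
    contributor: str
    value: str             # a proposed attribute, e.g. a road surface type
    submitted: datetime    # when the edit was volunteered
    accepted_ratio: float  # share of this contributor's past edits accepted

def resolve(a: Contribution, b: Contribution) -> Contribution:
    """Pick the more credible of two contradictory contributions.

    Hypothetical rule: if one contributor's track record is clearly
    better (difference above 0.2), trust it; otherwise prefer the
    more recent edit.
    """
    if abs(a.accepted_ratio - b.accepted_ratio) > 0.2:
        return a if a.accepted_ratio > b.accepted_ratio else b
    return a if a.submitted > b.submitted else b

older = Contribution("vol_17", "dirt", datetime(2009, 6, 1), 0.80)
newer = Contribution("vol_42", "paved", datetime(2010, 1, 15), 0.75)
print(resolve(older, newer).value)  # records are comparable, so the newer edit wins: "paved"
```

In a real system the weighting would of course be calibrated against ground truth; the point is only that recency and reputation can be combined mechanically when adjudicating competing edits.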
Finally, in an environment where many people have access to inexpensive
means of "production" (a cellphone camera, digital camcorder, or GPS in
a PDA), the emphasis of both consumers and professionals understandably
shifts away from production and towards filtering. In future, there may
even be a mix of responsibilities when it comes to determining who
actually performs such filtering or quality control: trained professionals
or a network of informed consumers.
An Example of Volunteered Geographical Information in New York.
Going Forward
Research in this field is moving ahead quickly. Reports of empirical qual-
ity testing are now emerging that better qualify the strengths and weak-
nesses of volunteered contributions of graphical and attribute data and
where those contributions are most likely to occur.
Going forward, the cultural and process changes involved in shifting the
planning and production focus from a "coverage-based" to a "feature-
based" orientation should not be underestimated. Road network firms like
TomTom, NAVTEQ, and TeleAtlas have already made this shift and realized
quicker turnaround times of updates (TomTom, 2008), but many govern-
ment mapping organizations have not. How three such organizations have
dealt with such change is discussed in Coleman et al. (2009b).
Finally, if a mapping organization wishes to capitalize on a distributed net-
work of volunteer geospatial data produsers, then it must start refocusing
attention across what happens both inside that organization and also in
the new social network of geo-information production. New rules and stan-
dards will be required to take into account the values of these volunteers
(equity, security, community building, privacy) in evaluating the per-
formance of this new production system. Depending on the type of infor-
mation being collected, there may in future even be a mix of responsibili-
ties when it comes to determining who actually performs such filtering or
quality control: trained professionals or a network of informed consumers.
Much will depend upon program design and acceptance criteria. If the
variety of examples already on the Web is any indication, the situation
will likely be different from organization to organization.
Dr. David Coleman is Dean of Engineering and a Professor of
Geomatics Engineering at the University of New Brunswick in Canada. Prior to
obtaining his PhD, he spent 15 years in the Canadian geomatics industry as a
project engineer, executive, and independent consultant.
Yola Georgiadou is Professor in Geo-information for
Governance at the Faculty of Geo-Information Science and Earth Observation
(ITC), University of Twente, The Netherlands. Her research interests include the use
of geo-information in public governance and the governance of Spatial Data
Infrastructures (SDI).
The senior author would like to recognize the Earth Sciences Sector of Natural
Resources Canada, the Natural Sciences and Engineering Research Council of
Canada (NSERC), and the GEOIDE Network of Centres of Excellence for their
financial support of this research.
ESRI International User Conference
July 1216, 2010 | San Diego, CA
One week. One place. Everything GIS.
The registration deadline
is May 21, 2010.
Just before our students start their thesis period, our MSc Geoinformation
Science and Earth Observation program offers a set of optional topics.
So far the students have been confronted with a lot of theoretical sub-
jects. Of course these principles have been illustrated by many practical
examples and exercises. One optional topic is Geo-information processes
for United Nations peacekeeping missions. One of its objectives is to
understand the problems involved in working in a data-poor environ-
ment: data poor because the operational areas of UN peacekeeping mis-
sions tend to be in virtually non-charted locations. And even if maps are
available, they are often outdated. This puts all kinds of constraints on
data gathering, processing and use. From an academic perspective, this
results in interesting data-integration problems. This topic is taught with
the assistance of the United Nations. Staff members of the UN Department
of Field Support, Cartographic Unit visit Enschede to explain in detail how
their organization works. An additional benefit for the students is that
they are meeting one of their potential employers. Of the two hundred
staff members spread over the New York headquarters, the GIS Centre in
Brindisi and the GIS units with the missions, about twenty percent are ITC
alumni.
No Time to Reflect
Within the UN Department of Field Support, the Cartographic Unit is respon-
sible for providing peacekeeping missions around the world with correct
geographic information. This includes up-to-date topographic and relevant
thematic maps. Geographic information systems are used as a peacekeep-
ing decision-support system to analyze and interpret terrain features,
weather, demographics, situations and environment. The most beneficial
aspect of the use of GIS is its capability to integrate information from var-
ious sources. Much of the preparatory work is done at the GIS Centre at
the UN's logistics base in Brindisi, Italy.
The Cartographic Unit also offers services related to the work of the
Secretariat including the Security Council. These could be simple maps
explaining where UN troops are deployed, or maps clarifying specific situ-
ations at international boundaries.
After the UN staff has explained their mode of operation, the students
have to execute tasks that are supposed to mimic practice. The idea is
that the students combine all the knowledge they have recently gained to
arrive at solutions. These tasks can be related to planning (where to estab-
lish a mission headquarters), data collection (how to find available maps;
how to deal with fieldwork data such as annotated GPS data), organiza-
tion of an information system (how to deal with datasets of different spa-
tial and temporal scales; what are the consequences of working with data
of poor quality), dissemination (how to create maps for multiple audiences
such as the UN Security Council and field officers). The results of the stu-
dents' work are presented in New York via video conference. The interest of
the Cartographic Unit in this work is related to the fact that they live day
to day and have to act depending on the current situation. This leaves
virtually no time to reflect. The students might present new ideas and
solutions in relation to their problems, which could be incorporated into
daily operations. Here theory meets practice.
Time Constraints
This idea has expanded our relations, and we now offer our skills to brief
their staff so they can strengthen their background knowledge in topo-
graphic and thematic mapping. To make sure the course content fits their needs,
we recently visited Brindisi's GIS Centre and training facilities. We traveled
there in the week following the Haiti earthquake. Maybe not the most suit-
able moment to discuss strategic training, but it was the best time to see
them in action. We were able to witness how they work and how difficult
a seemingly simple job such as creating a real-time damage map of Port-
au-Prince actually is. But even more impressive, under what kinds of time
constraints they have to operate. And for a moment, you doubt the impor-
tance of lecturing about the use of color in a map. Who cares? These
maps have to be ready! The next moment you realize just how important
it is. The maps have to be understandable.
A scientist doesn't work from nine to five, but I do know what, where and
when I'll be teaching in October 2010. Staff at the UN have to be prepared
to work 24/7 because they never know what is going to happen when or
where. They have no clue what will be keeping them busy in October
2010. It will be an interesting didactical challenge to introduce theory into
such a pragmatic organization. But one thing was confirmed during this
trip: practice needs theory and theory needs practice.
About Maps:
Theory and Practice
Menno-Jan Kraak is head of
ITC's Geo-Information Processing Department.
He is a member of the editorial board of several
international journals in the field of Cartography and GIS.
In school geography lessons, many features and processes of the nat-
ural and human-influenced environment can be documented, observed
and analysed by means of satellite images. The syllabuses of different
school types for 14 to 18 year olds recommend the use of aerial pho-
tographs and satellite images, and many school textbooks and atlases
include satellite representations and visualisations. Nevertheless, all these
illustrations are no more than analogue images of a quasi-reality.
Knowledge, infrastructure (hardware and software) and suitable images
are required in order to bring forth the hidden information from multi-
spectral images.
The Eduspace Website for secondary schools is designed to provide stu-
dents and teachers with a new learning and teaching tool. Eduspace com-
bines teaching and learning material with background material and work-
sheets to put the student in the centre of the learning process. It aims at
inspiring teachers to incorporate Earth observation in their curricula and
provides for in-service training. It encourages teachers to use Earth obser-
vation data by providing ready-made projects and furthermore stimulates
the curiosity of students with attractive spaceborne images and
further resources and tools such as the educational image processing soft-
ware LEOWorks (Lichtenegger et al. 2002 & 2007).
The Eduspace initiative consists of a number of different parts:
- The Eduspace Website itself with general information about remote
sensing, different case studies (including data and additional informa-
tion), LEOWorks software, etc.
- LEOWorks is an image processing tool made available for data analy-
sis and image interpretation to both students/pupils and experienced
teachers. It has been developed by the Eduspace team to display, anal-
yse, enhance and interpret images from Earth observation satellites. It
is available free of charge for registered school classes and can be
downloaded via the Eduspace website.
Ar t i cl e
March 2010
Educating Remote Sensing Techniques
Ever since Google Earth and World Wind have made their entrance in children's rooms and the classroom, aerial photographs
and satellite images have become more and more popular. In print, TV media and even in schools, the general public, teachers
and pupils have become increasingly aware of the possibilities remote sensing brings for obtaining a glimpse of hidden parts
of our Earth and space. But remote sensing can provide more than simply presenting beautiful three-dimensional images of our
environment as seen from above. An example of this is the Eduspace initiative for secondary schools.
By Wolfgang Sulzer
Example of scholars' work with Eduspace/LEOWorks
- The Eduspace Image Catalogue software allows the user to perform
multi-mission inventory searches on the main ESA-supported missions.
Images of almost the whole of Europe can be viewed and downloaded
in different bands (blue, green, infrared, etc.) and processed with
LEOWorks.
- The ESA School Atlas, produced by GEOSPACE in cooperation with ESA,
is based on satellite maps, using data from a great number of Earth
observation satellites. A detailed handbook provides suggestions for
teachers on how to make use of the atlas during lessons, explaining
the motivation and contents of the maps and providing numerous
ideas for exercises with this new teaching material. The maps and in
most cases the original Earth observation data used in the creation
of the maps are supplied in digital form on two DVDs accompanying
the atlas. This allows students/pupils to actively modify the maps and
to apply simple data evaluation procedures. Work with the digital data
is supported by the integration of the atlas into the Eduspace website
(Beckel et al. 2006).
At the Institute for Geography and Regional Science (University of Graz,
Austria), these Eduspace tools have been used
for teacher education for the past 6 years. The lessons are embedded in
the subject Geo-Spatial Technologies and give an applied introduction
to using GIS and remote sensing techniques in secondary school teaching.
Big Brother GIS
The teaching material "Remote Sensing Image" is very suitable for use in
geography classes, and possibly more so than its big brother GIS; as an
old proverb says, a picture is worth a thousand words. Analogue images
have been used in school lessons for many years; the implementation of
image processing as a part of geo-spatial technologies has failed so far
due to limiting education possibilities within teacher training curricula.
The acceptance of geo-spatial technologies is increasing in teacher train-
ing courses, but the acceptance level is still too low for its implementa-
tion in lessons on a regular basis. An intensified presence in school events
(open days, seminars, etc.), closer contact with the teachers involved and coop-
eration with educational authorities can contribute to increased usage of
remote sensing in schools.
The infrastructure and software issues appear to have been solved; train-
ing in remote sensing techniques can be supported by adapting curricula
or intensifying internet use. In addition to the Eduspace data sets, local
data and case studies can also be generated through a cooperation
between local/regional governments, universities and schools.
Designing an appropriate basic lesson in teacher training, one which
includes both GIS and remote sensing techniques, is an idea that makes a
lot of sense in this context. Experience in Graz has demonstrated that
valuable results can be achieved using basic knowledge and applied image
processing with LEOWorks even when only 24 hours (6 ECTS) are avail-
able for this in a geography teacher curriculum.
Dr. Wolfgang Sulzer is Assistant Professor at the Institute for Geography and Regional
Science (University of Graz, Austria). Further information about the author and
his activities can be found under:
Opportunities for Emerging
Geospatial Technologies
2010 Annual Conference
April 2630
Town and Country Hotel
San Diego, California
It's time to register for the year's most important industry event:
the ASPRS 2010 Annual Conference. The program will include
sessions on the evolution and future of geospatial data collection,
processing and analysis, and information derivation in ways that
are useful in making local, national and global decisions.
Nobel laureate Jonathan Overpeck, together with a panel of
experts, will discuss Predicted Consequences of Global Climate
Change on Land Surface Processes and the Role of Remote Sensing
for Detection and Adaptation in the Opening General Session.
Incoming ASPRS President Carolyn Merry will deliver her Presidential
Address at the Thursday General Session. In addition, Mike Renslow,
Renslow Mapping, will give a state-of-the-industry address on
The Impact of Technology Development, Innovation, and
Nontraditional Mapping Applications.
An expansive Exhibit Hall will showcase the latest products
and services. Industry Hot Topics will be discussed and the
program includes numerous opportunities for networking and career
enhancements, plus an evening on the USS Midway.
All of this and San Diego too.
Advertisers Index
Advertiser: Page
ESRI: 17, 52
Geodis: 15
Handheld: 33
Intergeo East: 57
NovAtel: 2
Leica Geosystems: 59
Pacific Crest: 11
Sokkia: 60
Spectra Precision: 9
Vexcel: 13
Topcon: 25
Calendar 2010
27-29 April GEO-Siberia 2010
Novosibirsk, Russia
27-29 April SIBMINING 2010
Novosibirsk, Russia
28 April International Seminar on Early
Warning and Crises Management
Novosibirsk, Russia
Info: Milan Konecny
28-29 April CERGAL 2010
Rostock, Germany
29 April Oracle Spatial User Conference
Phoenix, AZ, U.S.A.
02-08 May XXIII International Geodetic
Students Meeting (IGSM)
Zagreb, Croatia
03-06 May IEEE/ION PLANS 2010
Indian Wells/Palm Springs, CA, Renaissance
Esmeralda Resort & Spa, U.S.A.
Tel: +1 (703) 383-9688
04-06 May Rencontres SIG La Lettre
Marne-la-Valle, ENSG, France
12 May CGS Conference 2010
Ljubljana, Ljubljana Exhibition and
Convention Centre, Slovenia
Tel: +386 1 5301 108
Fax: +386 1 5301 132
12-14 May GEO EXPO China 2010
Beijing, China P.R.
19-21 May INTERGEO East
Istanbul, Istanbul Convention & Exhibition
Centre, Turkey
20-21 May 7th Taipei International Digital
Earth Symposium (TIDES) 2010
Taipei, Taiwan
Tel: +886-2-28619459
Fax: +886-2-28623538
25-29 May BALWOIS Conference
Ohrid, Republic of Macedonia
27-28 May GISCA 2010 - Central Asia GIS
Conference - Water: Life, Risk, Energy and
Bishkek, Kyrgyz Republic
02-04 June ISPRS Commission VI Mid-Term
Symposium: "Cross-Border Education for
Global Geo-information"
Enschede, ITC, The Netherlands
02-05 June ACSM 2010
Baltimore, MD, U.S.A.
Tel: +1 317 637 9200 x141
07-09 June Sensors Expo & Conference
Rosemont, IL, Donald E. Stephens
Convention Center, U.S.A.
Tel: +1 (617) 219 8330
07-10 June 2010 Joint Navigation
Orlando, FL, Wyndham Orlando Resort,
Tel: +1 (703) 383-9688
08-10 June 58th German Cartographers
Day 2010
Berlin and Potsdam, Germany
12-14 June Digital Earth Summit
Nessebar, Bulgaria
Tel: +359 (887) 83 27 02
Fax: +359 (2) 866 22 01
14-16 June 2nd Workshop on Hyperspectral
Image and Signal Processing
Reykjavik, Iceland
Tel: +354 525 4047
Fax: +354 525 4038
14-17 June Intergraph 2010
Nashville, TN, U.S.A.
15-20 June 3rd International Conference on
Cartography and GIS
Nessebar, Bulgaria
Tel: +359 (887) 83 27 02
Fax: +359 (2) 866 22 01
20-25 June 10th International
Multidisciplinary Scientific Geo-Conference
and Expo SGEM 2010 (Surveying
Geology & mining Ecology Management)
Albena sea-side and SPA resort, Congress
Centre Flamingo Grand, Bulgaria
21-22 June 2nd Open Source GIS UK
Nottingham, University of Nottingham, U.K.
22-24 June Mid-Term Symposium of ISPRS
Commission V: Close range image mea-
surement techniques
Newcastle upon Tyne, Newcastle University,
23-25 June INSPIRE Conference 2010
Krakow, Poland
28-30 June ISVD 2010
Quebec City, Canada
29 June-02 July GEOBIA 2010
Ghent, Belgium
02-04 July ISPRS TC VII Symposium '100
Years ISPRS-Advancing Remote Sensing
Vienna, Austria
03-04 July InterCarto - InterGIS 16
Cartography and Geoinformation for
Sustainable Development
Rostov (Don), Russia
16-17 March 9. Internationales 3D-Forum
Lindau, Germany
Tel: +49 (8382) 704293
Fax: +49 (8382) 704 5 293
19-24 March SmartGeometry 2010
Workshop and Conference
Barcelona, Spain
22-25 March 2010 ESRI Developer Summit
Palm Springs, CA, U.S.A.
Tel: +1 909-793-2853, ext. 3743
22-25 March CARIS 2010 Stronger
Together People, Products, Infrastructure
March 22-25, 2010, Miami, FL., U.S.A.
24-25 March GEO-10 The complete GEO
Ricoh Arena, Coventry, U.K.
30 March-02 April Geoform+ 2010
Moscow, Russia
Tel: +7 (495) 995 0594
Fax: +7 (495) 995 0594
11-14 April Geospatial Intelligence Middle
East 2010
Manama, Bahrain
11-16 April XXIV FIG International Congress
2010 Facing the Challenges - Building
Sydney, Sydney Convention & Exhibition
Centre, Australia
Tel: +61 (02) 2 9265 070
Fax: +61 (02) 2 9267 5443
12-16 April SPIE Photonics Europe
Brussels, Belgium
14-16 April IV International Conference
"Remote Sensing - the Synergy of High
Moscow, Atlas Park Hotel, Russia
Tel.: +7 (495) 988 7511
14-18 April AAG Annual Meeting 2010
Washington, DC, U.S.A.
19-23 April BAE Systems GXP International
User Conference and Professional
San Diego, CA, Hilton La Jolla Torrey Pines,
25-29 April GITA 2010 Geospatial
Infrastructure Solutions Conference
Phoenix, AZ, U.S.A.
Phone: +1 (303) 337-0513
Fax: +1 (303) 337-1001
26-30 April 2010 ASPRS Annual Conference
San Diego, CA, Town and Country Hotel,
Please feel free to e-mail your calendar notices
I believe in innovation.
Innovation is intelligence:
tap it and let your work flow.
You want the tools of your trade to be state-of-the-art. That's why
Leica Geosystems is continuously innovating to better meet your
expectations. Our comprehensive spectrum of solutions covers all
your measurement needs for surveying, engineering and geospatial
applications. And they are all backed with world-class service and
support that delivers answers to your questions. When it matters
most. When you are in the field. When it has to be right.
You can count on Leica Geosystems to provide a highly innovative
solution for every facet of your job.
Leica Geosystems AG
Leica TS30, the new total station that redefined
precise surveying by offering unmatched accuracy and
quality, is a fine example of our uncompromising
dedication to your needs. Innovation: Yet another
reason to trust Leica Geosystems.
GNSS Receiver
The entirely new Sokkia GNSS system provides
unsurpassed versatility and usability for
RTK, network RTK and static survey, enhancing
efficiency in all types of field work.
Scalable - Affordable - Triple Wireless Technologies