
Interview

Coming Soon: GeoEye's Next-Generation Color Satellite Imagery

GeoEye-1, shown here in an artist's rendering, will offer the highest resolution and most advanced collection capabilities of any commercial imaging satellite.

In the months ahead a commercial Earth imaging satellite, GeoEye-1, will be launched by GeoEye Inc. It will provide the highest resolution and most advanced collection capabilities of any commercial remote sensing system. The satellite will acquire high-quality panchromatic and multispectral imagery at spatial resolutions of 0.41 meters in the panchromatic mode and 1.64 meters in the multispectral mode, and will collect hundreds of thousands of square kilometers of map-accurate imagery in a single day.

By Kevin Corbley

As a comparison, GeoEye-1 will be much larger than IKONOS, the world's first high-resolution commercial imaging satellite, launched in 1999 by Space Imaging. GeoEye was formed in January 2006 when Orbimage purchased the assets of Colorado-based Space Imaging; the combined company, now known as GeoEye, is headquartered in the Washington, D.C. area. The IKONOS satellite weighs 1,600 pounds, while GeoEye-1 will tip the scales at more than 4,000 pounds, collecting imagery as it moves 425 miles (684 kilometers) above the Earth at about 17,000 miles per hour.

A Team of Partners
To bring about such a major endeavor, GeoEye president and CEO Matthew O'Connell assembled a team of partners to develop and launch GeoEye-1. Gilbert, Arizona-based General Dynamics Advanced Information Systems serves as the prime contractor and integrator for the satellite's bus and telescope. To develop a camera capable of acquiring imagery at 41-centimeter (16-inch) spatial resolution, GeoEye turned to ITT, formerly Kodak Remote Sensing Systems, which also built the IKONOS sensor. The satellite will be lifted into orbit on a Boeing Delta II launch vehicle from Vandenberg Air Force Base in California. Following launch, GeoEye-1 will undergo about 45-60 days of calibration and checkout. Once the satellite is declared operational, it will begin a three-month imaging operation dedicated mostly to meeting the needs of the Pentagon's National Geospatial-Intelligence Agency (NGA). For the most part, imagery collected during this period will also be made available in the company archive for commercial sale. GeoEye will take commercial orders during this timeframe and fulfil them as soon as possible.

While GeoEye-1 will be able to collect imagery at 41-centimeter ground resolution, imagery for commercial customers will be re-sampled to half-meter resolution before sale because of current U.S. government licensing restrictions. GeoEye is, however, seeking a waiver to its license in order to provide the highest-resolution imagery to governmental customers in some countries. In the European region, for example, GeoEye has requested that Poland be given direct access to GeoEye-1 and that governmental customers there be able to use 41-centimeter imagery. On June 29, 2007, the National Oceanic and Atmospheric Administration (NOAA) notified all U.S. commercial imagery providers that the 24-hour hold rule for imagery better than the resolution of the IKONOS satellite (0.82 meter) had been lifted. This licensing restriction was originally created early in the history of the commercial remote sensing industry. Its removal will enable space-based commercial imagery providers to sell imagery from current and next-generation satellites immediately upon collection.



Once launched GeoEye-1 will be equipped with the most advanced technology ever used in a commercial remote sensing system. The satellite will be able to collect images at 0.41-meter panchromatic (black & white) and 1.65-meter multispectral resolution. As shown in the following simulation using IKONOS satellite imagery (a) and aerial photography (b), the detailed half-meter imagery will expand the applications for satellite imagery in every commercial and government market.




Mapping in Orbit
"The GeoEye-1 satellite fundamentally will be a mapping machine in orbit," explains Mark Brender, GeoEye's vice president of communications and marketing. "We will be able to offer commercial customers half-meter resolution color imagery with the best geolocation accuracy ever achieved in a commercial space-based system," he said. GeoEye recently acquired M.J. Harden, an aerial imaging and geospatial firm in Mission, Kansas. The firm flies two aircraft, one with a digital mapping camera and the other with a new LiDAR imaging system. The combination of aerial and satellite imagery will be a powerful tool for mapping and surveying, Brender said. In June of this year GeoEye invested in a privately held company called Spadac, which uses geospatial technologies for predictive analytics. GeoEye is working closely with Spadac to offer customers tools that help them take pixels to the next level. Says Brender, "Spadac helps us extract knowledge from our pixels and aids our customers in better understanding issues before they become problems."

Advanced Capabilities
As a major customer, NGA will receive priority tasking and a substantial discount for agreeing to purchase a large volume of imagery. But ample capacity will be dedicated to commercial customers, allowing the company to build a vast archive of imagery in a relatively short time. "Spatial resolution, geo-location accuracy, and large-area coverage are the three specifications commercial and government customers are most interested in," says Dave Kenyon, GeoEye senior director, space segment engineering. "And those are the key capabilities we focused on when building this satellite." Of course, resolution is the parameter by which most judge and compare imaging satellites. Frank Koester, vice president and director, Commercial and Space Science Program, ITT Space Systems Division, says, "ITT's integrated camera payload, including telescope and sensor subsystem, will provide GeoEye-1 with the highest resolution in commercial remote sensing." Offering 41-centimeter panchromatic and 1.64-meter multispectral imagery in the blue, green, red, and near-infrared bands, the satellite will enable clients to identify small objects and features at a level of detail never before available from commercial imaging satellites. At that resolution, you can count the manholes on a city street or discern home plate on a baseball diamond. Geospatial data users in the defense and intelligence, oil and gas, insurance, urban planning, utility, and cartographic disciplines, all of which traditionally map small features, are expected to expand their use of satellite imagery as a result. It is anticipated that online mapping and search services such as Yahoo!, Google Earth, and Microsoft Virtual Earth also will be eager to import consistent high-resolution color imagery over large areas. In addition, though satellite and aerial images often are complementary, GeoEye expects many traditional users of aerial imagery to jump to satellites for applications requiring half-meter resolution, especially in parts of the world where it is difficult to deploy an aircraft due to weather, political, or security issues. But there is more to good imagery than spatial resolution, notes Lee Demitry, GeoEye's vice president of engineering. "People are going to be stunned with the sharpness and clarity of this imagery," he predicts, explaining that overall image quality, most often defined by the sharpness of feature boundaries, is just as critical as spatial resolution to many applications. The camera builder, ITT, has employed new technological advancements to achieve this level of image quality. The large size of the telescope's primary mirror, the alignment of the camera telescope, and a favorable (high) signal-to-noise ratio are key design elements in ultimately producing high-quality imagery. Geo-location accuracy is another imaging capability GeoEye expects will appeal to end users across all market segments. This refers to the precision with which objects in an image can be mapped relative to their absolute location on the earth's surface. GeoEye-1 will offer three-meter accuracy, which means end users can map natural and manmade features in stereo to within three meters of their actual locations without ground control points.


This level of geo-location accuracy will be achieved with the help of three onboard systems: a GPS receiver, a gyroscope, and a star tracker, which enable the satellite to determine its precise attitude and position at all times. Such ancillary data will be transmitted along with the image data back to earth for the ground segment to use in processing the imagery. Some of these systems, such as the star tracker, have never flown on commercial satellites before and were previously used only on U.S. government imaging satellites. Adds Demitry, "The ability to map features with this level of horizontal accuracy without any ground control is a first for commercial satellites and will be a huge advantage, and an enormous cost savings, for any cartographic application."

Defense and Intelligence


For government applications, especially those involving the defense and intelligence communities, the large-area coverage combined with the 41-centimeter spatial resolution has spurred the greatest anticipation for the new satellite, according to Jim Lewis, director of the Technology and Public Policy Program at the Center for Strategic and International Studies (CSIS) in Washington, D.C. Lewis says GeoEye-1 will help take pressure off U.S. Department of Defense satellites because it will provide data for many of their applications, partly because its spatial resolution and coverage area may approach the same capabilities as classified imaging systems. In the defense/intelligence community, you have competition for assets, so they have to prioritize which mission comes first, he explains. Being able to go outside those government assets and get a commercial system that can provide you things on a much faster basis can only help. And the government rides on the investment made by Wall Street in commercial remote sensing. Lewis adds that commercial imaging systems receive tremendous support from the U.S. Department of Defense because their images aren't classified. Although this may sound counterintuitive, military agencies often favor unclassified information, including satellite imagery, because it can be shared more freely with allies and coalition partners or nongovernmental organizations. Security concerns aren't an issue with commercial satellite imagery. Under the development expertise of GeoEye and its key suppliers, the company has upgraded a centralized command and control facility at its headquarters in Dulles, Virginia. This operations center will send tasking and operating commands to the satellite and receive data downlinks from it. Three other stations will be operated or leased by GeoEye in Alaska, Norway, and Antarctica. Regardless of location, GeoEye-1 customers will be able to see what's available for sale easily. The company recently enhanced its existing archive search tool, known as ImageSearch, to allow clients to perform online searches of the IKONOS archive. They expect to be able to deliver images to a client shortly after collection by the satellite. That scenario should come soon, as the GeoEye-1 team anxiously anticipates the launch of its satellite that should expand the reaches of satellite imagery and remote sensing for the government, commercial customers, and the public.

Prodigious Imagery
The third major technological advancement found in the GeoEye-1 satellite will be its ability to collect an enormous amount of imagery. In the panchromatic mode, the satellite will be capable of collecting up to 700,000 square kilometers in a single day, and in the multispectral mode 350,000 square kilometers per day. This volume of data collection, more than four times that of any other existing commercial imaging platform, will be made possible by the agility of the satellite itself. The entire satellite will be able to turn and swivel quickly in orbit to point the camera telescope at areas of the Earth directly below it, as well as from side to side and front to back, explains GeoEye's Kenyon. This agility will enable it to collect much more imagery during a single pass. According to Mike Greenwood, spokesperson for General Dynamics Advanced Information Systems, the agility is made possible by enhanced reaction wheels that provide the torque required for motion yet inject little jitter or smear into the imagery. The standard image swath width will be 15.2 kilometers, but GeoEye-1 will be able to swivel and collect multiple adjoining swaths on a single pass, meaning that very large contiguous areas can be imaged at one time. This is ideal for large-scale mapping requirements, especially for emergency response and disaster relief. The agility also means GeoEye can satisfy more than one client during a single pass by collecting a variety of individual scenes in the same geographic region. The satellite will swivel up to 40 degrees off nadir, giving it an effective revisit rate of less than three days. GeoEye has already announced plans to put the large-area imaging capability to work in filling its archive. The company says it will collect as much land imagery as possible on every pass and store it in the archive, whether there is a tasking order for the scenes or not.
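As a rough illustration of what these figures imply (a back-of-the-envelope sketch based only on the numbers quoted above, not on GeoEye documentation), dividing the stated daily capacities by the 15.2-kilometer swath width gives roughly 46,000 kilometers of imaged ground track per day in panchromatic mode and about 23,000 kilometers in multispectral mode:

# Illustrative arithmetic only; the input figures are those quoted in this article.
swath_km = 15.2              # standard image swath width
pan_capacity_km2 = 700_000   # stated daily panchromatic collection capacity
ms_capacity_km2 = 350_000    # stated daily multispectral collection capacity

pan_track_km = pan_capacity_km2 / swath_km   # about 46,000 km of ground track per day
ms_track_km = ms_capacity_km2 / swath_km     # about 23,000 km of ground track per day
print(round(pan_track_km), round(ms_track_km))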

As shown in this simulation using IKONOS satellite imagery (a) and aerial photography (b), the half-meter imagery will even be able to discern home plate on a baseball diamond.

Announcement of GeoEye-1's imaging specifications has elicited responses from both the commercial and private-sector imagery markets. Ed Jurkevics, a remote sensing industry consultant and principal analyst at Chesapeake Analytics in Arlington, Virginia, singles out the large-area imaging. "GeoEye-1 will be able to deliver imagery over large areas in a relatively short and reasonable time, so clients can expect to receive a complete image map over a large area such as a country in one season rather than over many months," he said. "This will be especially important for EU countries in mapping out agricultural yields and measuring the size of land parcels," added Jurkevics. He explained that for large-area mapping projects, fast acquisition improves the overall success. If too much time elapses between collections of contiguous scenes, for example, changes in ground conditions such as vegetative growth or soil moisture can adversely impact correlations made among the scenes. The accuracy of digital elevation model extraction from image pairs can degrade if the image pairs were collected at different times under different conditions.

Kevin Corbley is a principal with Corbley Communications in Winchester, Virginia. Graphic illustration and images courtesy of GeoEye. For more information on this subject visit www.geoeye.com.


Article

David Schell Speaks

Open Geospatial Consortium


As a founder of the OpenGIS Foundation (OGF), the OpenGIS Project, and the Open GIS Consortium, later re-christened the Open Geospatial Consortium (OGC), David Schell is a true visionary and a pioneer in the field of spatial information.

By Remco Takken

I am glad we have this opportunity to discuss the current state of our industry. I understand you would like to clarify some issues that were mentioned in the 1Spatial article we published in GeoInformatics 5-2007. I think you feel that some of the complex issues you raised were not precisely stated and need some additional explanation. Maybe that's where we should start.
I, too, am very glad we have a chance to do this interview. Yes, I would like to take this opportunity to clarify some of the statements that were attributed to me in the coverage of my 1Spatial speech in your last issue. I'm afraid that in trying to summarize what I said for the article, many of my ideas were reduced to one-liners that did not do justice to the complex issues I discussed, and I want to correct any possible impression that what I said was meant to be critical or confrontational.

The article, for example, mentioned your concern with the issue of liabilities that could arise from the use of spatial information. Could you explain what you mean by liabilities and what sort of problems you had in mind?
Yes. My meaning concerned the growing popular use of spatial data and online maps: it is inevitable that situations will arise where someone's naive confidence in widely used but uncertified or informally compiled data results in loss of life or property. When people develop and publish such maps in an informal, ad hoc, almost conversational fashion, they are most of the time not thinking of this. Publishers of map data, both recreational and professional, are finding their markets growing rapidly, and some customers are using that data not only for entertainment activities but for serious planning, assuming that the data is fit for purpose and totally reliable, even though it is neither intended nor tested by a credible authority for life-critical projects. I was surprised that the conclusion of the article seemed to suggest that I am concerned only about the part played by OGC standards in this regard, referring of course to the fact that interoperability standards play a part in preserving the reliability of data across applications on the Web. In fact, the potential legal liabilities I was talking about are an industry-wide issue, which to my knowledge has not yet been adequately addressed. My main concern is that providers of software and data typically are not required to warrant their products, and frequently provide disclaimers which essentially amount to "buyer beware." But the problem is that it is no longer practical to think such disclaimers can continue to make suppliers or their customers invulnerable to liability in cases involving serious consequences. This is all complicated of course by the many ways in which value can be added to spatial data. The OGC's Geo Rights Management Working Group and Data Quality Working Group will bear in positive ways on this problem, but there is no panacea, and it would be good to see wider discussion of the issue. In a very real sense, we are all in this together. If I did indeed refer to "all hell breaking loose" over this issue, I would not have been referring to myself or to the consortium; I would most definitely be referring to the much greater visibility of spatial data as mapping enters the IT mainstream, and to the fact that standards of usage and best practices which have traditionally been understood between suppliers and users will almost certainly become less clearly understood, with the result that the quality of both data and spatial services will become less professional and perhaps less reliable.

Another issue I know you want to clarify concerns the relation of Google's KML to OGC's GML standard. Google has positioned KML to be a major enabler of popular mapping on the Web, and Google has recently joined the OGC. Could you comment on the meaning of this relationship and on the future of KML and GML in general?
Yes, I think the OGC-Google relationship is very important, and there is no question that we share an interest in creating a coherent and productive standards environment for information and application interoperability that provides a basis for industry-wide sharing of geospatial information. The issue I have with the phrasing in the article is that I did not say, "We better integrate GML with KML."

David Schell: "It is inevitable that situations will arise where someone's naive confidence in widely used but uncertified or informally compiled data results in loss of life or property."


"Integrate" is the wrong word to use here; it's not a question of integration. The proper word is harmonization. And I am not in the habit of dictating to the members. I did say that Google has joined the OGC and we are pleased with that, and we have, almost everyone has, an interest in the harmonization of these specifications for the purpose of developing efficient and consistent market practices. The two encodings are largely complementary, so we don't expect major difficulties. But the requirement to harmonize such specifications is entirely a member issue. I think it would be a good thing, but I am entirely ready for the market to do what the key players think is best. Oh, and in relation to this it was stated that Microsoft had joined OGC, when in fact my comment related to our discussions with Microsoft concerning their obvious interest in this issue, as they are very likely to wish to participate in the harmonization process to ensure an industry-wide standard of this sort.

I believe you did say that you were concerned that the industry was moving so fast.
That's true. When the market is developing very fast, when it is exploding as it is now with thousands of new developers and millions of consumers, there's a danger that because of commercial pressures there won't be enough time to develop standards before the market is populated again with non-interoperable products.

Different companies naturally tend to put forward different technology approaches for profit, and we all know that profit doesn't always wait for an orderly standards process, so stovepiping develops. Our present situation presents challenges, the kind of challenges that it has taken us 15 years to overcome in evolving the GIS market from its proprietary business models to participation in an open environment of Web services. I'm not at all pessimistic, but I think that one of our greatest challenges is to motivate technology users and providers to be concerned enough about this issue to stay engaged in the standards process, no matter how long it takes to deal with the accelerated change we are experiencing. With the new wave of consumer mapping coming into the market, characterized by lightweight processes, different objectives and a focus on advertising and consumer issues, we face the challenge of integrating a new style of development and geospatial processing into a thirty-year tradition of a highly focused and intricate technology shared by the major GIS companies as well as the research establishment. The concern I expressed actually dealt with the requirement for building on the richness of that tradition in creating the new environment of lightweight spatial processes which characterize the new markets, a concern that I link back to the issue of liability and the need for people to be able to rely on the accuracy of their data products.


There are two different cultures to consider, characterized on the one hand by traditional GIS professionals and on the other by fast-moving mashup enthusiasts. The challenge for the Consortium is to embrace and help enable the new style of development while maintaining the value that's already been produced.


So, looking forward now and taking a wider view of things, what would you say will be the main purpose of the OGC as we look at the future of geoprocessing in general?
The OGC serves as a clearinghouse, or an organisational backbone, for those interested in geographic information. Its primary raison d'être is to ensure that spatial information is used freely. By that I don't mean without pay, but in a barrier-free, easy, accessible way in the IT community. Actually, although from me it may seem like a contradiction in terms, it may be best not to emphasize standards development exclusively when talking about the work of the OGC. Our goal is not just making standards, but to help create successful business models that make the use of geospatial information ubiquitous. Clearly the standards that we develop are a key component of getting to a successful business model. But standards are transitory; they change with the evolution of technology. It is much better to refer to OGC as an organization dedicated to the reconciliation of user requirements with the development community; standards are the vehicle for ensuring that the two cooperate and work together efficiently. In fact, a key reason for the existence of the OGC is to provide a forum in which competitive organisations that have a very significant interest in the development of the geospatial market can meet and discuss issues that affect the user community. They have the opportunity, in a civilized and socially responsible context, to develop norms of behaviour and best practices governing all uses of spatial information in a responsible way, a way that serves the public interest.

You referred to the GIS market as a boutique market?
The GIS market was, in its beginning, a relatively small, specialized market for a very particular group of application developers and researchers that had not yet come into the IT mainstream; this sort of market is often, and non-prejudicially, called a boutique market. The point is that today's market for geospatial information is not the sort of specialty market that it was in the 80s, when spatial information was only accessible by means of proprietary commercial applications or by government and university research laboratories. Now we have the Internet, and a much larger world-wide market of open resources. Of course the pioneers of the geospatial field characterized and promoted the market in the beginning. It's in the nature of things, no matter what you call it, and it was a closed market. The point is that we now live in a world of open standards and active sharing of services and data, one that is getting to be so large and diverse that it definitely no longer serves just the needs of a few specialty shoppers.

You have spoken in the past of demystifying GIS. What does that mean and why is it important?
That's right. We want to demystify GIS. We want people to understand that information about space or location is a necessary dimension of any kind of information processing. It's very unfortunate that for so many years there was resistance to breaking down the barriers around the GIS market. For a long time, the GIS market seemed to enforce the idea that there is an exclusive relationship between data type and application. This made it impossible for GI and its various data types to be used more generally, both to grow a healthy and robust business community and to improve conditions for the benefit of mankind. The non-interoperability that was perpetuated by this situation contributed significantly to society's lack of preparedness for getting and delivering the full benefit of spatial information in dealing with such critical life-threatening issues as climate change, disaster preparedness, and agricultural development. For years it was non-interoperability that was the reason for the artificially high price for spatial data and services, and the resulting high cost of building complex systems that require efficient accessibility and usability of spatial information.

What are your views on national governments' role in providing geospatial data?
If the OGC has done one thing to be proud of, it has been the democratisation of the spatial information process. The OGC has promoted the creation of an open market process that prevents the growth of limiting, monopolistic tendencies involving some of the most vital data on which people's lives depend. Some kinds of data and information should be funded by the public sector, because they are so fundamental to public issues and the welfare of mankind. And I do think, to a degree, that spatial data should be looked at as a necessary commodity, like water. There's such a great dependency on it. The creation and maintenance of spatial information should be viewed by governments as the same kind of investment as any other infrastructure on which people depend, like any other government service which is provided for the safety and the welfare of its citizens as a result of informed policy making.




Article

You began by speaking about new and difficult challenges in the industry. What would be the role of universities in meeting these challenges?
I think it's incredibly important. Now that information about space and time is easily provided as a dimension of any kind of information processing, we are on the verge of something truly exciting. Building on rapid advances in bandwidth, CPU speed, miniaturization and storage capacity, extraordinary things are happening. You see it in consumer devices like the iPod, in Web services, in online virtual reality games and in all sorts of places. The issue for academia is to understand and lead the convergence of these technologies with spatial technologies. In my view, this should be seen as a new science. The rapidly emerging convergence of modeling, semantics, high-performance computing and geospatial technologies is delivering new modes of understanding and inquiry that need to be codified and brought together in a supportive academic environment. Without a concerted global effort of this sort, we will miss the opportunity for the fullest possible use of geospatial data and services in the rapidly evolving ICT environment.

Doesn't the OGC provide such a supportive environment?


Standards developers must, by definition, seek common denominator approaches, approaches that are as simple as possible for good practical reasons. However, this goal frequently runs counter to the need to have standards that are as complex as necessary to maintain rigor in scientific specialties. The world faces difficult challenges that depend on science and decision support being radically empowered by information and communication technologies that have a full measure of spatial enablement. The standards world, the commercial world and the academic world need to work together to make this so.


Do you think that governments are now in general meeting this challenge?
In many cases, commercial opportunism and the lack of good government policy regarding these issues have prevented an appropriate assessment of the infrastructure value of spatial information services, and this has done an injustice to people around the world. The result of this condition is that there is extensive disagreement about the state of the world's natural resources and climate, and a dangerously inadequate capability to assess much of the world's aging built infrastructure. With poor geospatial information it is difficult for policymakers to come to agreement on solutions for maintaining a well-ordered society that might otherwise be obvious.

But with INSPIRE and other programs, don't you find that this is changing?
Yes, the situation does seem to be improving. But geographic information resources are still, in my opinion, generally under-funded by government, although more policymakers are beginning to take notice of the requirement for abundant, easily discovered, easily evaluated, easily accessible, and easily used geospatial information. The Web has been very helpful in forcing the review and modernization of information systems in government agencies around the world, and the idea of loosely coupled open systems has gained widespread acceptance. Such systems require, by definition, that subsystems connect through open interfaces. And this, of course, is what the OGC has been demonstrating for more than ten years with surprising success. In Europe, INSPIRE calls for the use of OGC standards. The Canadian Geospatial Data Infrastructure, a program shared by multiple agencies, is based on OGC standards throughout. In Great Britain, the Ordnance Survey's OS MasterMap supports distribution as GML. The US Census Bureau lists TigerGML as a future product, the US national data portal (The National Map) is based on OGC standards, and OGC standards are written into the US Federal Enterprise Architecture. There is widespread adoption of OGC standards in Australia. GML is specified in e-Government Interoperability Framework (e-GIF) best practices in the UK, New Zealand, Denmark and Hong Kong. There are many others, and the list keeps growing.

Thank you. We've covered a lot of ground today! I look forward to our next interview.
You're welcome. Thank you for the opportunity to bring these issues to your readers.
Remco Takken (rtakken@geoinformatics.com) is an editor at GeoInformatics. More information about the OGC can be found on www.opengeospatial.org.



Review
Reviewed systems set up on dike.

RTK to the Limit

Multi-test UHF RTK sets


RTK systems are commonly used in land surveying, hydrographic surveying and machine control. While the first is switching more and more to GSM telemetry such as NTRIP, the last two still depend almost completely on UHF radio for telemetry of the correction signal. For this review we selected five UHF RTK dGPS systems commonly used in the land survey and/or machine control industries.

By Huibert-Jan Lekkerkerk

The reviewed systems are:

- Leica Geosystems GX1230 GG SmartRover (Leica)
- Magellan Professional Z-Max (Magellan)
- Sokkia GSR2700 ISX (Sokkia)
- Topcon GR-3 (Topcon)
- Trimble R6 GNSS (Trimble)

All systems were tested in the configuration delivered by the Dutch or European reseller or representative, including the recommended controller and software packages. All were requested to include a UHF radio capable of transmitting and receiving correction signals on a permit-free frequency and power setting for the Netherlands.

Test Method
In contrast with other reviews I have performed, I tried to test some of the more objective specifications of the equipment. The problem with performing such tests, though, is that as an editor I cannot afford a highly sophisticated laboratory. Instead I performed the tests in the field and at home, using either the receivers themselves or simple tools that everyone has lying around the home or workshop. The tests performed included a range test, re-initialization tests, weight and volume tests, as well as a limited precision test and an endurance test. In order to be able to compare the results, all tests had to be taken under the same circumstances (as far as possible in the field). Besides these specialized tests I performed a more regular review as well, concentrating on user friendliness. The latter was evaluated during the field tests and no specific survey was performed. The total test time for each system depended on the maximum endurance of the rover and varied from 6.5 hours (Leica) to 14.5 hours (Magellan). This review is divided into two parts: the table and main text describe the results of the more objective tests and comparisons, while the boxes detail the results of the practical tests, including user friendliness.

Cases, tripods and poles as delivered.


System Description
Leica GX1230 GG
The base station tested has a different set-up from the rover and uses a separate geodetic antenna, receiver and correction transmitter. As a result the base is rather bulky, although not exceptionally heavy. The connection between the base receiver and its antennas is made using identical cables, which can therefore easily be switched by accident. The differential antenna arm on both rover and base can be clipped to the receiver so that it points either up or down, depending on the user requirements. The rover receiver is very light at 1.2 kilograms, although this is offset by the weight of the controller and separate correction receiver. The rover battery is relatively small and does not last through a full survey day. Due to its different layout the base station has just enough endurance for a full survey day. For the base a separate battery pack was supplied which extends the endurance by roughly 16 hours, although this pack was not used during the tests. The Bluetooth connection between rover and controller functioned without noticeable problems. Since both base and rover have their own controller, the receiver and controller can be exclusively mated and no switching has to take place.

Topcon GR-3
The GR-3 is the top model of the Topcon range, which shows in a number of clever details. The batteries, for example, can be hot swapped while the receiver is running. The battery charger, together with two batteries, can be used as an additional power pack for the receiver. Finally, standard AA alkaline penlight batteries can be used in a battery casing that holds four penlights. The receiver and base are identical and can be swapped without a problem. The receiver feels very robust and heavy, which it is at 1.9 kilograms. Due to the weight, holding it steady can become tiring after a full survey day. On the other hand, the receiver is built so sturdily that Topcon guarantees it can withstand a fall of two meters. The GR-3 was the only receiver with reception for all current GNSS systems including Galileo. Although theoretically an advantage, there is currently only one Galileo (test) satellite, and few satellites transmitting signals other than the regular L1 and L2. When this changes in the years to come, the GR-3 will be ready and will not require a hardware update.

Magellan Z-Max
The Z-Max is the oldest system in this test; the one we tested was produced in 2003. The system is quite bulky and heavy compared to the other systems and, although the weight distribution is good, working with it for a full day becomes tiring. The base and rover receivers are identical, although the UHF antenna set-up is different. In our test the base had a separate UHF radio module with its own power supply. The Magellan is also the only base in the test for which settings can be made without the use of the controller; all basic settings are accessible using the keypad and LED display on the receiver. The UHF antenna used on the rover is mounted between the receiver and the geodetic antenna using a bayonet/screw type mounting. The receiver has two detachable units. One of the two is the long-life battery, which gives an endurance of over 14 hours. The other unit on the tested system was an optional built-in GSM/GPRS unit. With the receiver/controller combination we tested, the Bluetooth link was constantly lost, requiring a switch to controlling the receiver over a cable.

Trimble R6 GNSS
The R6 GNSS receiver we tested from Trimble is not very different from their top-of-the-range model, the R8 GNSS. The main difference lies with the reception of the L2C and L5 GPS frequencies. Since there are few satellites broadcasting these signals at the moment, the disadvantage in everyday use is small. Apart from the frequencies, the model is similar to the R8 and is very compact. The base and rover are identical, making it easy to swap them. The UHF antenna is located on the underside of the receiver; it therefore does not shield the GPS horizon. The downside of this location is that the UHF reception is degraded, which is especially noticeable at longer distances. The receiver is relatively light at 1.3 kilograms. The downside is that the battery used is very small and has the shortest endurance of the systems in this test. The base will therefore usually be equipped with an optional power pack (not tested). The supplied controller, the TSC2, is relatively heavy but feels very robust. The touch screen is very bright and easily readable. It has three card slots and can, as with the Topcon controller, directly connect to a USB memory stick.

Sokkia GSR2700 ISX


This was the only receiver in the test with non-swappable batteries, although the endurance of the batteries in the receiver is long enough for a single survey day. Due to the large-capacity batteries the receiver is relatively heavy at 1.8 kilograms, making it slightly harder to steady. Because Sokkia did not have two identical systems available at the time of the review, the base tested had a separate radio transmitter that was connected to the receiver using a serial cable. The rover was equipped with an internal UHF radio that uses a very small receiving antenna on the underside of the receiver. Although this set-up makes the receiver very compact without any shielding of the GPS horizon, it is a less optimal configuration for receiving UHF correction signals. The receiver was the only one to have two Bluetooth connections, enabling the user to connect to both an external GSM/GPRS unit and the controller. The base was also equipped with an optional built-in GSM/GPRS unit.


Weight Tests
In the world of land surveying, where GPS is concerned, size does matter: not so much in machine control or for the base station, but mostly for the rover. Land surveyors have to carry the equipment around for hours on end and hold it as steady at the end of the day as they did at the beginning. Of course it is not just the total rover weight that is important, but its distribution along the pole as well. The less weight on top of the pole the better, since this makes steadying the pole easier. A light controller also helps, while the weight of the pole itself has only a limited effect on overall weight distribution. The receiver/controller combination also has to be well balanced. Finally, the smaller and lighter the overall set, the easier it is to install in remote locations. I weighed the various system components using a kitchen scale accurate to within 10 grams and, for those components such as the tripod and the filled cases that were too heavy for the kitchen scale, a body weight scale with a resolution of 500 grams. The weights given are the weights with a single set of receiver batteries as supplied by the manufacturer. Since some manufacturers use smaller batteries than others, this affects not only the weight of the receiver but also the maximum endurance.

Rover Weight
The average pole weight was 4.2 kilograms. The Leica receiver was, at 1.2 kilograms, the lightest in this test (although the Magellan, with its separate antenna/receiver set-up, had the least weight on top of the pole). However, due to its rather large controller, radio and bracket, the total pole weight of the Leica exceeded that of the overall lightest rover in our test (the Trimble, at 3.6 kilograms) by 300 grams. The heaviest receiver in this test was the Magellan, with a total pole weight of 5.7 kilograms. On the plus side, the Magellan can also be used as a backpack receiver, reducing the pole weight by an estimated 1.5 kilograms. Moreover, most of the Magellan's weight sits halfway down the pole, making it relatively easy to steady. In general the heavier rovers ran longer on one set of batteries, a full survey day or more, than the lighter models in this test. The best weight/endurance results were achieved with the Topcon and the Sokkia, which both had a total on-pole weight under 4 kilograms and lasted over 10 hours on a single set of batteries.

Base Weight
Most brochures state only the rover weight, giving the impression that the base and overall weights and sizes are not important. The complete system has to be transported to the site, however, with the last few hundred meters usually covered by hand. We therefore also measured the weight and size of the other components. The total weight of the base was calculated from the measurements, based on the use of a standard tripod weighing 7 kilograms. Again the Trimble came out lightest at only 9.5 kilograms, with the Magellan and Leica the heaviest at 11.5 kilograms. The weight of the Magellan does not include the mandatory 13.5-kilogram battery needed to power the separate UHF radio transmitter.

Overall Weight and Size
The total weight and size of the cases was calculated as well. Have you ever wondered why GPS representatives drive such big cars? It is not so much the result of the profits they make as the immense size of the systems. Excluding the tripod, pole and loose accessories, the storage volume of the cases for a single system varied between 45 litres (Trimble) and 92 litres (Magellan). The total volume of all the cases for the systems tested was 354 litres which, together with the poles and tripods, is enough to fill the back of a medium-sized European station wagon with the back seats folded down. The weight of a single case was always less than the Dutch legal limit for workmen of 25 kilograms, with the two Leica and Sokkia cases the lightest at 8 kilograms apiece and the single Topcon case the heaviest at 15 kilograms. Of course the total volume and weight of the cases depend on the type of case and the options selected by the client. All representatives, however, claimed that the cases and options supplied were those usually selected by their clients.

RTK GPS systems are high-volume products.

Antenna layout on the roof of the car for the range test.


Range Test Set-up


For this test we wanted to see what the maximum achievable range was for each system. This is especially important when using the system over larger survey areas. All manufacturers were requested to supply a system set to a legal frequency and power setting. I meant 439 MHz and 500 mW but did not communicate this explicitly at an early stage. As a result some systems were set to other power settings, the Leica for example being set to 1 W. At the time I thought this was illegal, but Leica corrected me, referring me to the website of the Dutch Telecom agency. The difference in power settings, however, meant that comparing the results would be hard. We did proceed with the range tests, though. In order to test the ranges under comparable circumstances a specific set-up was needed. All five bases were therefore erected five meters apart in a row at a right angle to the range, a road on an unobstructed dike. During this test both the Sokkia and the Magellan were at a slight disadvantage, since their base antennas had to be mounted on the legs of the tripod, resulting in a slightly lower antenna height which can, potentially, reduce the maximum range. The five rovers were then mounted one meter apart on the roof of my car in such a way that almost all antennas (both GPS and telemetry) had a free field of view. The exception was the UHF antenna of the Magellan which, due to its construction, had to be mounted slightly lower than the others to prevent it from shielding other GPS antennas, giving the system a slight disadvantage (see photo). The systems were then set to continuous position logging, with the exception of the Sokkia, which did not have this option in the supplied software. The Sokkia was therefore read manually. With the systems thus set, the car was driven along the dike at speeds never exceeding 10 m/s. At the end of the dike, the car was turned around and the test was repeated in the other direction.

Performing the initialization test with a piece of tinfoil.

Range Results
The results varied greatly and proved hard to compare. On average the range varied from slightly over 2 kilometers to over 7 kilometers. Some systems, however, had trouble maintaining lock during this test, without any obvious reason at the time. One of the problems with a test like this is that there are five transmitters operating at similar, albeit not exactly the same, frequencies. Frequencies close to each other can cause crosstalk, making it harder for the receiver to maintain lock. Further, since the frequency used is line of sight, every obstruction between base and rover will degrade the range. This can be partially solved by elevating the base antenna. The range selected was, however, free of obstacles for 7.5 kilometers, apart from the occasional passing car. Since the results varied and had some unexplained gaps where receivers lost lock, I investigated the measuring conditions when manufacturers reported that the ranges measured were not representative. When I checked the ionospheric conditions during the range test on the morning of July 11, I found that they were truly bad, which was probably the reason some receivers were losing lock. Because of this, the actual results of the range test are not shown here, since they are not representative of the range under more normal conditions. One thing I did notice, though, is that having the UHF antenna on top of the GPS antenna certainly provides an advantage.

Ionospheric conditions (Kp index) during the range test (source: www.sec.noaa.gov)
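To see why elevating the base antenna helps, a standard rule of thumb for line-of-sight UHF range can be used (an illustration added here, not a calculation from the test data): with a 4/3 effective Earth radius, the radio horizon in kilometers is roughly 4.12 times the sum of the square roots of the two antenna heights in meters.

# Rule-of-thumb radio horizon for line-of-sight UHF links; illustrative only,
# not part of the measurements performed in this review.
from math import sqrt

def radio_horizon_km(h_base_m, h_rover_m):
    # Approximate maximum unobstructed distance between two antennas
    # over smooth terrain, assuming a 4/3 effective Earth radius.
    return 4.12 * (sqrt(h_base_m) + sqrt(h_rover_m))

print(radio_horizon_km(2, 2))    # both antennas at pole height: about 11.7 km
print(radio_horizon_km(10, 2))   # base antenna raised to 10 m: about 18.9 km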



User Interface
Leica GX1230 GG
The software on the Leica controller has quite a few options. A first-time user can easily get lost in all the menus and settings. The advantage of all these options, of course, is that the system can be geared towards a specific application. The controller is also the only one that can be fully and easily controlled using the keyboard. The rover was supplied with the new colour touch screen, which is very easy to read, even in bright sunlight. I personally felt that the touch screen did not respond as well to the pen as the greyscale screen on the base controller. Logging data is relatively simple once the unit has been set up. Data can be logged to a CompactFlash card in the receiver. Exporting data is simple once the export format has been defined using the office software. No standard export formats are provided with the controller, although the office software holds a number of format templates that can be used as is or modified. Leica also provides a controller simulator, making it possible to change settings and to export towards specific formats without having to have the physical controller in the office. The operation of the simulator is identical to the controller, and it can even be configured to display in either greyscale or colour.

Topcon GR-3
As with most manufacturers, Topcon uses a single software package for all its land survey instruments. The package has a very simple layout and surveying is relatively easy. I personally find inputting values into the software a bit of a nuisance since only an onscreen keyboard is available with the supplied controller (FC200). The layout of this onscreen keyboard is not QWERTY, which takes some getting used to. Communication between controller and base/rover is usually done using Bluetooth. With this particular setup the controller lost the Bluetooth connection every now and then, even with the controller close to the rover. Exchanging data with the office computer can be via data card, USB connection or with a USB memory stick. However the USB port only takes very slim memory sticks such as the one supplied by Topcon Europe. The ports are very well shielded from dust and moisture by rubber flaps that open and close without a problem. After a full day of testing and one day in storage the battery of the controller was empty and had to be replaced. It seems the controller uses power even when shut off.

Magellan Z-Max
The Z-Max is the only receiver in this test that does not have GLONASS support. Furthermore, because I seem to have had SBAS switched on during the tests, the Magellan consistently received two to three fewer satellites than the other systems. This is a result of two channels being dedicated to receiving SBAS corrections, which means fewer channels are available for satellite tracking. The result is that it was harder for the receiver to get an RTK fix in the re-initialization tests. The software used with the Allegro controller was the commercially available Fast Survey package. This package is very easy to understand and use and has all the features one needs in the field. Data export is mainly towards standard ASCII text files, which can be read by most processing software. If needed, export to shape and dxf formats is also available. Due to the limitations of the controller, data export has to be performed using ActiveSync over a serial cable, which can be a problem since fewer and fewer computers are equipped with a serial port. A serial-to-USB converter or the optional USB dock can be a solution, but some converters work better than others.

Trimble R6 GNSS
Whenever the radio signal was lost during the tests, a computer-generated female voice provided you with information. Similar to the Sokkia, this is something of a gadget, but it makes it easier to detect problems with multiple systems running or when temporarily performing other duties. The Trimble and Leica receivers were the only receivers that give only global status information on the receiver itself, requiring the controller to be connected to the system for more exact information. Both also display the information from the base in the controller display of the rover. Exporting both the position information and the quality information in a simple ASCII file proved impossible with the installed export formats. Additional formats can be easily downloaded from the Trimble website, however, giving a broad range of export possibilities.

Sokkia GSR2700 ISX


Configuring the Sokkia system is relatively easy. The base requires no settings at all: simply switch it on and it will start measuring and transmitting results. All settings can be done in the controller from the rover location, where the base position as transmitted from the base can be overridden. Sokkia has the only talking receiver. Although other manufacturers have a talking controller, none has a receiver that can quite clearly (and in different languages!) tell you that you just lost RTK. It is somewhat of a gadget, but it enables operation without having to look at the controller all the time. The Allegro controller ran Sokkia's own software. The software performs all basic tasks, but has no options for auto logging or extensive attribute information. For this reason no points are displayed in the precision test results. An advantage of the software is that it stores its information in a relational database. This makes adjustment of the results possible on the controller without having to use any office software. Just change the base coordinates, and all points measured from that base will shift with it.



Reacquisition Test Set-up
The reacquisition of the RTK fixed solution after passing under a tree or bridge is important, since every second spent waiting seems to be one too many in the field. The actual reacquisition time depends on various factors, among which are the number of satellites in view, their constellation, and the distance between base and rover. In order to test the reacquisition time as reliably as possible, all rovers were set up within 1.5 meters of each other, with all the bases in the same configuration as for the range test. The average distance between bases and rovers was in the order of 25 meters. With this set-up, each receiver's GPS antenna was in turn shielded using tinfoil. As soon as the rover reported a loss of RTK and the number of satellites in view remained at a steady low value, the foil was removed. The time between removing the foil and the moment the rover reported an RTK fix was taken as the initialization time. The test was performed three times per rover within a short time span (minutes).

Reacquisition Results
Almost all systems re-initialized within, on average, 15 seconds, with the Sokkia slightly faster at 10 seconds. Only the results for the Magellan were higher, but not comparable due to an incorrect setting in the receiver. It seems that I had the SBAS option turned on during the tests, which reduces the number of available channels for GPS measurements by two. Considering that this reduces the number of satellites available for the solution, initialization times increased. I estimate that, on this short baseline, the results would otherwise have been comparable. One can, however, question the effect of these differences in survey practice; all the systems initialized before the average surveyor would have reached the next survey point and steadied the pole.


Endurance Test Set-up

The field endurance of an RTK UHF system is mainly defined by the endurance of the base and therefore by the batteries used in the base. Although all manufacturers can supply additional power packs, in this test only the single set of internal batteries delivered with the system was used. The endurance test was run in parallel with the other tests, with the times of switching on and off being noted. Using the auto-logging function, the time of shutdown was determined to within the closest half hour. Battery results always depend on the conditions under which the batteries are used: the colder it is, the less performance one gets. During these tests the ambient temperature was between 18°C and 22°C. All batteries were charged using the supplied battery chargers until the indicator showed the battery was in the green or no longer charging. The systems were then run until they switched themselves off, a condition that is not optimal for the system and should be avoided in everyday practice.

Endurance Results
Surprisingly, almost all manufacturers are pessimistic when they state the endurance of the system in their brochures. On average the systems ran 1.5 hours longer than stated, the exception being the Topcon, which ran 2.5 hours less than stated in the brochure. The first base to stop functioning was the Trimble; it ran for 5.5 hours, not nearly enough for a full survey day. The longest runner was the Magellan, with a base endurance of around 14.5 hours: more than enough for even a 12-hour survey day. The endurance of the Magellan is largely the result of the separate battery used for the UHF radio and the large 8.8 Ah internal batteries. Almost all rovers ran longer than their corresponding base systems. The exception was the Leica, where the base ran 1 hour longer than the rover; this is the result of a different set-up for the base system, the base having larger batteries. Almost all controllers had a battery that lasted longer than the rover they were coupled to. The Topcon controller only lasted through the first survey day; it seems that the controller does not completely switch off and uses power even in off mode. The battery of the Magellan controller came close to running out, but considering the uptime of the Magellan rover this was no surprise.

Precision Test
An RTK system is bought for its accuracy of centimeters or better. Without a proper laboratory set-up it is not possible, however, to test both the precision (standard deviation) and the reliability of all the systems under exactly the same conditions. Instead we only performed a quick field test to check precision. During this test we left all systems running after the initialization tests. The data was logged for roughly one hour at 30-second intervals for all systems, with the exception of the Sokkia, whose software did not support auto logging. The resulting position plot for each system was then shifted towards an imaginary central point using software, so that the results could be compared visually.

Test results table (systems tested: Leica Geosystems GX1230 GG, Magellan Z-Max, Sokkia GSR2700 ISX, Topcon GR-3, Trimble R6 GNSS). Notes: M = measurement based upon system tested (see text for details); G = as given by manufacturer; ( ) = optional, see additional remarks. 1: Including mounting bracket and radio receiver where applicable. 2: With the pole/pole mount delivered with the system. 3: Excluding optional power packs and including tripod and bracket as delivered. 4: Approximate size/weight of the filled cases delivered with the system, excluding tripod and pole. 5: Maximum initialization time measured during tests/given by manufacturer. 6: With a single set of standard batteries required to operate the system. 7: The controller ran out after the first 6 hours. 8: Model reviewed included GSM/GPRS. 9: Model reviewed included UHF. 10: Excluding 13.5-kilogram required base battery. 11: Results were not comparable due to an incorrect setting in the receiver.



Precision Results
Although the test as we performed it is not a true indication of the precision of the systems, it gives a good idea of the differences between the systems and of the effect of the respective settings made within the software. For example, with the incorrect SBAS=ON setting the Magellan lost RTK lock at some point during the tests and therefore logged fewer points, which in turn lay very close together. The Topcon, on the other hand, had no problem getting into RTK lock but seems to have had some multipath problems during the test, resulting in a larger position spread. The standard deviation for all systems, when locked, was well within the 0.025-meter range and therefore within expectations for such a system. The test did, however, show that specific settings and differences in software can influence the results.

Results of the precision test (Green = Leica; Red = Magellan; Blue = Topcon; Yellow = Trimble)
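For readers who want to reproduce this kind of quick precision check from their own logs, the following sketch shifts a set of logged positions to their mean (the "imaginary central point") and computes a simple pooled horizontal standard deviation. The coordinates are made up, and the statistic is a simplification of what dedicated processing software would report.

```python
import math

# Hypothetical RTK fixes logged at 30-second intervals (easting, northing in metres);
# a real run would read these from the controller's auto-logging output.
positions = [
    (312405.012, 5812760.004), (312405.009, 5812760.007),
    (312405.015, 5812760.001), (312405.011, 5812760.006),
    (312405.008, 5812760.009), (312405.014, 5812760.003),
]

# Shift the plot towards an "imaginary central point" (here simply the mean position)
# so that plots from different receivers can be compared around a common origin.
mean_e = sum(e for e, _ in positions) / len(positions)
mean_n = sum(n for _, n in positions) / len(positions)
residuals = [(e - mean_e, n - mean_n) for e, n in positions]

# Pooled horizontal standard deviation of the logged fixes.
variance = sum(de * de + dn * dn for de, dn in residuals) / (len(residuals) - 1)
print(f"Horizontal standard deviation: {math.sqrt(variance):.4f} m")
```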

Conclusion
I tested five systems that are marketed by their respective manufacturers as comparable. During the tests we found differences between the systems, not so much in their user-friendliness or the applications they could be used for, but in the hardware itself. What is clearly visible from the results, though, is that every manufacturer has to make certain choices in the design phase of the system: some will opt for batteries with long endurance and accept a higher weight, while others value versatility over a simple user interface. As such, selection of a specific system should be based not so much on the type of application the software supports but more on factors such as price, maximum operational range, endurance and weight of the system for the specific application(s) one has in mind.
Huibert-Jan Lekkerkerk (hlekkerkerk@geoinformatics.com) is Editor-in-chief of Geoinformatics. For more information on these receivers: www.leica-geosystems.com; www.pro.MagellanGPS.com; www.sokkia.com; www.topcon.eu; www.trimble.com.

Manufacturers' Remarks on the Results


Leica GX1230 GG
The range performance of Leica might be tempered by the test set-up, but not by ionospheric conditions. The antenna position, radio equipment quality and line of sight are important aspects in guaranteeing reception of the correction signals. Regarding the reacquisition and precision test results: a fast time-to-first-fix alone ignores reliability. The Leica precision test shows the best repeatability, with a small position spread; the absence of outliers supports this reliability. The GPS1200 achieves this by solving the ambiguities twice and independently before providing a fix.

Topcon GR-3
This field review is a good practical test. It proves that Topcon's GR-3 is a leading product and performs well when compared to others. Its unique design helped achieve the longest range at only 0.5 watt radio power. Although the GR-3 is claimed to be heavier than some, it should be remembered that the battery life is sufficient for a full day, so no extras are needed, and it includes built-in GSM/GPRS, which others have to add. The fact that the GR-3 is ready for Galileo means no costly hardware changes or add-ons are needed as the satellite program progresses beyond the current single satellite, making the unit future-proof. As the test proves, the GR-3 is ready for all aspects of current and future use.

Magellan Z-Max
The Magellan Z-Max is a truly ultra-flexible survey system that lets surveyors control their survey their own way. It permits surveyors to select only the modules they want for the most cost-effective survey solution. The Z-Max can survey in NTRIP, VRS or FKP networks, via GPRS or even UHF plus GSM/GPRS. It switches seamlessly from post-processing to RTK, and it is suitable as either a base or a rover. The detachable modules make configuration changes and system upgrades simple. And, if you're looking for a high-precision RTK solution at about half the cost of any of the systems tested in this article, take a good look at the new Magellan ProMark3 RTK with BLADE, the new Magellan GNSS engine.

Trimble R6 GNSS
The Galileo satellite radio navigation system proposed by the European Union offers advantages to Global Navigation Satellite System (GNSS) users by providing additional satellites, additional signals, and compatibility with GPS. Trimble fully supports this advancement in the GNSS market. As we have done with products that capitalize on next-generation GPS capabilities, we are committed to having Galileo-compatible products available for our customers well in advance of Galileo system availability. In the case of GPS Modernization, our compatible products were available a year ahead of the first L2C-capable satellite launch. Trimble has also developed products for the coming L5 GPS signal. Likewise, we will offer equipment with Galileo capability well ahead of the time when production satellites are launched. In the meantime, it is our goal to offer the most productive and competitive equipment that addresses our customers' needs both now and in the future.

Sokkia GSR2700 ISX


The Sokkia GSR2700 ISX has proved to be a user-friendly receiver with excellent environmental specs and strong RTK performance. We would have welcomed a test of long-range RTK performance, since the GSR2700 ISX excels in quick and reliable RTK solutions over long distances, which relates to the reacquisition results. Furthermore, Sokkia's controller software SDR+ has been positively received; its strongest feature is freedom in the field, which is why we built SDR+ on a relational database environment. Sokkia is determined to serve surveying professionals with reliable and accurate positioning solutions such as the GSR2700 ISX, now and in the years to come.

You can also find a movie of the test in our movies section on the website www.geoinformatics.com.


Image Quality Is Critical

Leading the Way to Accurately Visualize the World

Ordnance Survey emphasises the big image-quality improvement of the Designjet Z6100 over the Designjet 5500 in its map printing business, where the clarity and precision needed for data-intensive applications must be combined with photo quality for aerial photographs.

Hotel Metropole in Leeds, 1992.

Ordnance Survey have been providing accurate, reliable and detailed geographic information for more than 200 years. As Great Britain's national mapping agency, it provides the most accurate and up-to-date geographic data from complex digital information to traditional walking maps for Great Britain, relied on by government, business and the public. Although overall performance of their HP Designjet 5500 printer was satisfactory, when HP introduced the HP Designjet Z6100 printer, Ordnance Survey was eager to assess its impact on their map printing business. By Job van Haaften

Ordnance Survey offer a complete mapping process, from flying over the country taking aerial photography to shipping the final printed products through Ordnance Survey Print Services. Its modern and extensive in-plant facility at its headquarters in Southampton (U.K.) allows Ordnance Survey Print Services to provide its clients with a one-source total solution.

Making Deadlines Easy


Its HP Designjet 5500 printer played a critical role at Print Services, producing large-format proofs for the production printing of maps on offset machines, and it was used to print the map covers themselves on cardboard, printed 8-up on a carrier, showing a picture of the area covered by the map and with a surrounding silver spot colour. The HP Designjet 5500 printer was also used extensively for advertising material, from small A2 posters up to billboards, large wall displays for trade shows, and shorter print runs on resistant media for specific products, such as maps used in the field.

"Maps and aerial photographs printed on the HP Designjet Z6100 are perfect on the first print. Colours are really vivid and with greater depth, so subtle shading really stands out. Blacks are dense and lines sharp and crisp, creating professional-quality maps that are increasing our customer satisfaction," says Garry Heaton, Prepress Technical Business Officer, Ordnance Survey.

Garry describes how the new printer was an instant success. "The most obvious improvement using the new Designjet Z6100 printer is the speed of the machine. It's much faster than the Designjet 5500, and the quality is much better. The fast printing takes the sweat out of coming up to deadlines for trade-show displays and other publicity material production."

Clarity and Precision


Image quality is the critical area for map printing businesses. Printed images present a series of unique challenges, demanding range, clarity and precision for data-intensive mapping applications combined with photographic quality for aerial photographs: crisp line drawings and smooth area fills with subtle tones can make all the difference. Garry was delighted with the results from the new printer. "The outstanding print quality from the Designjet Z6100 is a selling point for the map. There are people out there that buy maps because they just love maps. When the maps look that much better, our customers are that much happier." "The difference is marked," Garry adds. "We have a series of touring maps where we highlight areas of Britain that are of great interest to tourists. We put hill shading on these, but on the Designjet 5500 the shading didn't stand out very well. On the Designjet Z6100 we've noticed a marked difference. Shading is much clearer and crisper. The image quality has improved immensely." Thanks to HP Vivera pigment inks with the HP three-black ink set (matte black, photo black, and light gray), the HP Designjet Z6100 achieves millions of colour combinations across a wide colour gamut and a wide range of media, producing a range of blacks and grays with smooth, subtle transitions, true gray neutrality, and rich black density. Garry claims the image quality has greatly improved compared to the old printer: "For a trade show we laid lots of display images out on a black background, and on the Designjet Z6100 we got much more depth of colour, a denser black, and much more vivid colours than on the Designjet 5500."


Printed to Last in the Field

The HP Vivera pigment inks deliver a combination of outstanding photo-image quality, water resistance, and fade resistance for over 200 years.** The HP Designjet Z6100 printer inks also dry much more quickly than on the old printer. The improved resistance and shorter drying times help avoid damage to prints from handling and accelerate finishing processes. In combination with the new HP Vivera pigment ink durability, Garry says this is a major time-saver and allows them to produce maps that can be used in the field. "Prints on plastic-like material from the Designjet 5500 would take quite a while to dry. On the Designjet Z6100, it's a lot quicker." He adds, "Changing from dye-based inks to UV inks on the Designjet 5500 was a bit time-consuming, but with the Designjet Z6100 a job comes in for display purposes and there's no problem. With just one ink set for everything, all prints are long-lasting, so we just go straight ahead and print the job." Garry is also extremely satisfied that colour accuracy and consistency are no longer a trial-and-error process with the HP Designjet Z6100 printer, saving time and eliminating waste. "Using the Designjet 5500, we had to alter colour profiles a number of times before we got a print that was not oversaturated in the dark areas; we were wasting time and material there. On the new Designjet Z6100 the ink density and colours are fantastic first time off. It's perfect from the word go." Their confidence in the reliability and colour accuracy has never been higher. Every map is folded and contained in an individual cardboard cover with a picture of the area concerned, surrounded by a silver spot colour. "The Designjet Z6100 gives us the outstanding image quality we are looking for, with accurate colours straight off." HP DreamColor Technologies featured in the HP Designjet Z6100 printer series, including automatic generation of custom ICC colour profiles, Pantone emulation and the embedded spectrophotometer***, ensure consistent colour across different printers or presses and a wide variety of media. HP's closed-loop colour calibration accounts for changing environmental conditions and media adjustments for the life of the product. Printing businesses can confidently split print runs between two or more printers and get consistent colours from them all, even in changing environmental conditions, and from print to print at different times. HP DreamColor Technologies are designed to streamline workflow through integrated, automated systems, speeding turnaround time and lowering production costs.

Hotel Metropole in Leeds, 2006.


Maximizing Use of Resources

Saving time, maximizing efficiency and minimizing waste are recurring themes for Ordnance Survey Print Services, and according to Garry Heaton the HP Designjet Z6100 printer optimizes productivity and resource utilization in every area. "Our Designjet Z6100 is going all day, every day. We haven't had to change a cartridge yet. They're much larger and the ink usage is a lot more efficient than on the Designjet 5500." The large-capacity ink cartridges, combined with long HP rolls, give Ordnance Survey added confidence in trouble-free overnight printing runs. "We were printing old mapping, 150 or 200 years old, originally hand drawn, covering the country. The Designjet Z6100 had a long roll of paper loaded, so we left it to print overnight. The maps looked stunning when we came in the morning." The outstanding quality and range of media designed for the HP Vivera pigment inks give Ordnance Survey greater peace of mind to explore innovative GIS printing applications. "The Designjet Z6100 has opened up the possibility for us to experiment with different material while always getting accurate colours. We produced a series of 40 aerial photographs on photo paper along with corresponding drawn maps on transparent paper. The drawings sat over the photos to highlight changes on the ground. Our customers were very pleased. It was probably the first time they'd had something like that done," confirms Garry. Ordnance Survey appreciate the flexibility they get from the choice of original HP media substrates for a broad set of applications.


Challenges
- Long ink-drying times
- Need faster printing speeds
- Time-consuming changes: ink cartridges, media, between UV and dye-based inks
- Hill shades not standing out
- Need more vivid colours and deeper blacks
- Need sharper line quality
- Altering profiles to get the right ink coverage
- Needed better overall image quality and easier colour management

Solution
- HP Designjet Z6100 printer
- HP Vivera pigment inks (8 colours)
- HP media: HP Coated Paper, HP Heavyweight Coated, HP Productivity Photo Gloss, HP High-Gloss Photo Paper

Results
- Professional-quality maps that increase customer satisfaction
- Customers happier with faster turnaround
- More confidence on tight product schedules
- Quicker, simpler workflow
- Time and materials saved
- Experimentation with different media for new products



The HP Designjet Z6100 printer delivers more than 1,000 ft² (approximately 100 m²) per hour*. Eight HP 91 printheads provide a wider print swath of up to 1.8 inches, and the first-ever Optical Media Advance Sensor (OMAS) improves paper advance accuracy, so the printer can print at higher speeds, regardless of environmental conditions, without compromising image quality.
* On plain paper in Fast mode.
** Display permanence rating for interior displays/away from direct sunlight by HP Image Permanence Lab, and by Wilhelm Imaging Research, Inc., on a range of HP media. Water resistance and interior in-window display ratings by HP Image Permanence Lab on a range of HP media. For details: www.hp.com/go/supplies/printpermanence.
*** With i1 colour technology.
Job van Haaften (jvanhaaften@geoinformatics.com) is editor of GeoInformatics. With special thanks to Augustin Comadran from HP. For more information: www.hp.com.

Truly Transformed
The HP Designjet Z6100 printer has truly transformed GIS printing capabilities at Ordnance Survey Print Services, allowing them to work more effectively and with increased confidence, improving their productivity and streamlining their workflow. Garry concludes, "Job turnaround with the Designjet Z6100 is much faster compared to our previous printer, giving us more confidence on tight product schedules. Our workflow is now quicker and simpler, and there is no trial and error. The printer efficiency is impressive, saving us time and materials."

Three different data sets of an area of Southend in the UK.


When Will On-Board Processing of Orthophotos be Commercially Available?


While digital image processing for photogrammetry has existed for more than 15 years, it took another 10 years before the first digital aerial cameras became operational. However, two solutions for the production of operational digital aerial cameras, based either on multiple linear arrays, or multiple area arrays, are now available and the market for these cameras is growing.

The advantages of these cameras include, but are not limited to:
- Elimination of the degrading effects of film and improved dynamic range.
- More data acquisition per day and throughout the year, especially at higher latitudes: for example, Aerodata in Belgium have reported that more than 5,000 images were taken during only three days of perfect weather conditions, while KKC (Japan) collected more than 12,000 images in 40 projects over a period of 6 months; and in the USA, orthophotos were produced of over 1 million square kilometers for the USDA Farm Service Agency in a 3-month period.
- High levels of redundancy, leading to a paradigm shift in photogrammetric operations.
- High geometric accuracies.
- Near real orthophotos with little or no need to correct images for relief displacement.
- The highest resolution multi-spectral images for remote sensing applications.
There has been an explosion in the number of images acquired by amateur photographers using consumer-grade cameras. The developers of the first digital cameras at Eastman Kodak foresaw, even in the 1970s, that the applications of digital imaging would be almost limitless. Likewise, as exemplified by the developments of Pictometry International Corp. in the USA, and their recent expansion by joint ventures into Europe, Australia and New Zealand, more aerial imagery, both vertically viewing and oblique, is being acquired than ever before. This is apart from advances in satellite sensing, which also reveal rapid advances in the acquisition of high-resolution images from space. The digital aerial systems are continually improving, with new generations of camera heads with more pixels of smaller sizes, and more efficient data handling. So far the output from digital systems has usually been digital orthophotos. Line mapping still requires manual extraction of features from digital images. Photogrammetric software companies have concentrated on more efficient handling and processing of multiple image types, rather than on developing higher-level information extraction. A great deal of development is still required before efficient high-level systems are available for feature extraction suitable for digital line mapping or GIS databases.

John Trinder, Emeritus Professor, University of NSW, Sydney, Australia; 1st Vice President, ISPRS

These advances in both forms of data acquisition of the terrain surface have significant financial impacts on photogrammetric companies, which are required to update their systems more rapidly than was required for analogue systems. Therefore, since the amortization period of digital aerial cameras is typically quoted as about 3 years, high throughput is required to finance them. Developments in airborne sensing are advancing rapidly. While the high-level processing for feature extraction will take many years to become robust and effective, the next step in the advancement of these technologies is on-board processing of some aspects of imaging and LiDAR data acquisition. It seems that the ultimate extent of the advances of these technologies will be on-board processing for the commercial production of orthophotos, though I would not like to predict when this will occur.


In parallel with these developments in imaging are continued advances in airborne laser scanning or LiDAR systems, with improved power, higher frequencies and, recently, multiple pulses in the air, which enables a new pulse to be emitted before the previous pulse has been received. These latest developments can result in improvements in the efficiency of LiDAR data acquisition of up to 50%. The processing of LiDAR data is being significantly advanced, while the merging and overlaying of LiDAR data and imagery is operational in some digital photogrammetry software packages.


Cognitively More Ergonomic

Route Directions that Communicate


Do you remember a person next to you, or yourself, giving route directions? Although such directions can be highly individual, following them is typically straightforward. They contain the information relevant for reaching the destination, they are descriptive, they tell a story of route following, and they can easily be memorized. The directions generated automatically by in-car navigation systems, location-based services and web-based route planners look different: they are hard to memorize, and their communication is far from perfect. This article discusses the challenges of improving automatically generated route directions, recent progress of research in this area, and a first commercial demonstrator for some of these results. By Stefan Hansen, Stephan Winter and Alexander Klippel

In-car navigation systems, location-based services and web-based route planners all do it: calculate a route and communicate it to human users. This article looks closer at the second aspect: the system's communication to a human user. The questions addressed here are: Do they speak the same language? Can they understand each other? And if so, how much effort is involved in following the given route directions? Finally, can we design systems that communicate better, in the sense of being more intuitive in expressing their directions? As navigation services are about to become a standard feature in our lives, it is worthwhile to look at their history and the direction in which this technology is currently heading. Early map-based interfaces for in-car navigation did not succeed: a car driver studying a screen map while driving was not acceptable. From there it was a short step to verbal route directions. This feature is now quite common also with web-based route planners, which provide both maps and route directions. Many mobile location-based services suffer from small screen sizes and have so far come up with neither convincing maps nor convincing voice-based solutions. This is probably one of the reasons why, despite recent high sales growth of portable navigation devices, many user experiences are unsatisfactory. Interestingly, newer in-car navigation services outplay each other again with visual interfaces such as perspective views of maps or of textured 3D representations of cities. This happens in spite of the long-standing realization by cognitive psychologists and cartographers that more is not always better. So, where is the balance? What is best for the user? And how can research here support commercial product development?

Figure 1 Examples of reducing the number of instructions: A) A common technique is to combine segments with the same street name: "Follow Main Road". B) An extension of example A) is to use highway numbers, which allows larger distances to be covered in one instruction. C) Landmarks can often be used to indicate the end of a chunk: "Go straight until you reach the gas station".

Figure 2 Examples of integrating landmarks in directions: A) A classic example of a landmark: "Turn right before the church". B) Salient intersections along the route can also function as landmarks: "Take the 3rd exit at the next roundabout". C) Multiple similar salient features of the environment can be identified by ordering them: "Turn right at the 2nd traffic light".

Figure 3 Right turns at different types of intersections: At a t-intersection (A) a simple "Turn right" is sufficient. Several options to turn right at an intersection (B) require a more differentiated instruction: "Take the 1st exit on your right". A roundabout (C) requires, for example, "Take the 4th exit at the roundabout".


Table 1 shows directions for the route in Figure 4. On the left are directions as generated by a standard web-based navigation service; on the right are directions generated according to the principles of cognitively ergonomic route directions.

Standard Instructions
1. Start at MELVILLE RD head towards DAWSON ST
2. Turn right at DAWSON ST
3. Continue along DEAN ST
4. Turn left at ASCOT VALE RD
5. At the ROUNDABOUT - take the 2nd exit onto EPSOM RD
6. Turn right at PRINCES HWY
7. Turn left at MOORE ST
8. Turn right at HOPKINS ST
9. Stop: Stop at HOPKINS ST

Cognitively Ergonomic Instructions
1. Head South on Melville RD towards Dean ST.
2. Turn right at the traffic light onto Dean St.
3. At the next traffic light, take the 2nd exit on your left onto 35/Ascot Vale RD.
4. Follow 35 until, after 5 km, you reach the intersection of 35/Moor St and Hopkins St. Turn right.
5. After 320 m on Hopkins St you reach your destination.

Fewer directions
The fewer directions that are given to the traveler, the easier they are to memorize. In particular, simple, obvious instructions can easily be merged into one single instruction without omitting any information required to successfully follow the directions. For example, "Go straight at the next intersection" and "Turn left at the following intersection" can be combined to "Turn left at the second intersection". However, a navigation service should offer the user access to the merged directions in case they require clarification. Figure 1 shows three examples of possibilities to reduce the number of generated route directions.
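A minimal sketch of this kind of instruction merging ("chunking") is given below. The instruction representation and the merge rule are simplified assumptions for illustration, not the grammar used by the directions engine discussed in this article.

```python
from dataclasses import dataclass

@dataclass
class Instruction:
    action: str          # "straight", "left" or "right"
    intersections: int   # decision points consumed by this instruction

ORDINALS = {1: "next", 2: "second", 3: "third", 4: "fourth"}

def merge_straight_runs(instructions):
    """Fold runs of 'go straight' into the following turn, e.g.
    straight + left -> 'Turn left at the second intersection'."""
    merged, pending = [], 0
    for ins in instructions:
        if ins.action == "straight":
            pending += ins.intersections
        else:
            merged.append(Instruction(ins.action, pending + ins.intersections))
            pending = 0
    return merged

def verbalize(ins: Instruction) -> str:
    where = ORDINALS.get(ins.intersections, f"{ins.intersections}th")
    return f"Turn {ins.action} at the {where} intersection"

route = [Instruction("straight", 1), Instruction("left", 1), Instruction("right", 1)]
print([verbalize(i) for i in merge_straight_runs(route)])
# ['Turn left at the second intersection', 'Turn right at the next intersection']
```

Because the merge happens purely at the symbolic instruction level, the original, unmerged directions remain available if the user asks for clarification.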

Giving Descriptive Directions


Route directions are easier to memorize and to follow if they describe the environment where the required action takes place. This prepares the traveler for what to expect at the next decision point. Additionally, describing the environment helps to reassure the traveler they are still on the route. Integrating landmarks and intersection categories (e.g. roundabout or t-intersection) in the directions helps the traveler to picture the situation at an intersection and to recognize it in the environment more easily, compared to instructions that rely on street names and abstract concepts like distances. See Figure 2 for examples of possible landmarks.

Automatically Generated
The current forms of route directions were derived more from the sort of data available than from a user perspective. Street network data, available in several levels of detail and rich in attributes, suggested a structure of directions based on sequences of street network segments. Instead of referring to each single segment, a simple grammar allows the amalgamation of segments between turns, leading to standard turn-by-turn directions. Turn-by-turn directions are tabular directions, all of the same grammatical form and level of detail. What can vary between different navigation services is the chosen vocabulary and the set of attributes referred to in the directions. There is, however, a common denominator in current route directions, and that is the street name and the distance. So, what's wrong with standard route directions? Well, they are not always easy to follow, which means that travelers concentrate less than desired on traffic while following the directions:
- Humans are not necessarily good at estimating distances, so the only way to follow distance-based directions exactly would be to constantly keep an eye on the odometer: neither a practical nor a desirable condition. Distances between turns can also be of ridiculous granularity: "turn in 11 m", or "turn in 486 km".
- Street signs are not always easy to spot; they can be hidden by obstacles, invisible from a specific approach direction, or absent completely.
- Salient and easily recognizable features in the environment (i.e. landmarks) play an important part in the human navigation process. Integrating this aspect leads to directions which are easier to follow and to memorize.
- By neglecting these aspects that support the way humans mentally process spatial information, users easily feel patronized or confused, which means that communication may fail.
What standard turn-by-turn style directions fail to achieve is relating to the way in which travelers experience the urban environment as they travel through it. Accordingly, we will argue for significant modifications of these directions. In this paper we concentrate on how principles of cognitive ergonomics can be used to design better structured route directions. These principles will be applied to verbal route directions, but other modes, e.g. by sketch, could profit as well.

Giving Unambiguous Directions


Route directions can be realized with low cognitive workload (and stress) if they are unambiguous. For example, a direction "Turn right" is perfectly clear at a simple t-intersection, but insufficient at an intersection where two options to turn right are available (compare Figure 3). This problem can simply be resolved by giving more precise directions than left, right or straight. Introducing an order of the competing branches at an intersection is helpful in more complex situations. "Take the second exit on your left" points out clearly which street has to be taken if there are two options available to turn left.
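This disambiguation rule can be sketched in a few lines: when more than one branch leaves the intersection on the chosen side, an ordinal is attached. The intersection model used here (exit bearings relative to the approach direction, with the branch closest to straight ahead counted first) is an illustrative assumption, not the scheme of any particular navigation service.

```python
def describe_turn(branch_bearings, chosen):
    """branch_bearings: exit bearings in degrees relative to the travel direction
    (0 = straight on, positive = right, negative = left).
    chosen: the bearing of the branch the route takes (a turn, not straight)."""
    side = "right" if chosen > 0 else "left"
    same_side = sorted(b for b in branch_bearings if b != 0 and (b > 0) == (chosen > 0))
    if len(same_side) <= 1:
        return f"Turn {side}"                      # simple case: only one option
    # Number the competing branches starting from the one closest to straight ahead.
    if side == "right":
        rank = same_side.index(chosen) + 1
    else:
        rank = len(same_side) - same_side.index(chosen)
    ordinal = {1: "1st", 2: "2nd", 3: "3rd"}.get(rank, f"{rank}th")
    return f"Take the {ordinal} exit on your {side}"

# Simple t-intersection: only one way to turn right.
print(describe_turn([-90, 90], chosen=90))        # Turn right
# Complex intersection with two right-hand branches.
print(describe_turn([-90, 45, 110], chosen=110))  # Take the 2nd exit on your right
```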

Criteria for Better Route Directions

Modifications should be based on clear criteria depending on what is appropriate for the user. In short, good route directions guide the traveler along the route and describe the required action clearly and unambiguously, even in difficult situations. Apart from providing all necessary information, they are simple, understandable and memorable. In order to generate automatic route directions with these characteristics, three principles should be realized: fewer directions, descriptive directions and unambiguous directions, as discussed in the sections above.

Figure 4: A route from start to end (see Table 1 for the route directions).


Example
It is one art (or science) to postulate criteria for cognitively ergonomic route instructions. Another art is to exploit various available data sources and apply these criteria to aggregate and combine route elements and other elements of the city into cognitively ergonomic route directions. An implementation of the above criteria, extending the standard grammars to produce directions for a given route, helps to demonstrate the potential of this approach. The chosen example is shown in Figure 4. A standard navigation service, as it can be found in various forms on the Internet, produces nine instructions for the given route. All instructions are rather short and do not give much more information than the names of the roads and the direction of the turn at each decision point. In contrast, our extended directions engine integrates additional information that helps the traveler to follow the route. It provides clear directions even at complex intersections (e.g., in the 3rd instruction) and integrates salient features as landmarks (e.g., the traffic lights in the second and third instructions). Using the highway number rather than the normal street names allows the number of given directions to be reduced considerably (compare direction 4). Since highways are usually clearly marked with signs, it is easy to follow them. However, our directions generator also generates more detailed directions for such segments of the route, which the user can access if required.

Figure 5: The generation of a response in a routing engine. The module for turn-by-turn directions (red) receives all required data from the preprocessing modules and produces route directions based on this data.

Conclusions and Outlook


This article addresses the challenges of improving automatically generated route directions to make them shorter, more descriptive and unambiguous, or what we call cognitively more ergonomic. The proposed criteria can be implemented as additional rules in a direction generation grammar that only requires access to additional data to capture a route's context. For a user of improved route directions, the advantages of such an extension have been demonstrated in a characteristic example above and result in a cognitively richer experience for the end user. Human spatial cognition and communication is an active research area with dedicated scientific conferences such as COSIT (see text box). The specific foundations of the implementation described in this article were laid out in others' work (e.g. Klippel 2003; Richter 2007) and further developed in a project in the Cooperative Research Centre for Spatial Information (CRCSI), a major research initiative of the Australian Government. One of the industry partners of the project, LISAsoft Pty Ltd, implemented the extension of standard route grammars as a Java library usable in any routing engine and with any navigable spatial data set. Alan Tyson, General Manager of LISAsoft, is convinced: "Our ergonomic route directions will give next-generation navigation services a clear competitive advantage." A crucial component for route directions that communicate is the availability of rich data sets. Dan Paull, CEO of PSMA Australia Limited, says: "To generate cognitively motivated route directions requires more than a navigable street data set. This additional information plays a crucial part, and PSMA Australia is proud to be the provider of the rich data source for this research project via our commercial partner, LISAsoft."

References
Klippel, A., Tappe, H., Kulik, L., & Lee, P. U., 2005: Wayfinding choremes - A language for modeling conceptual route knowledge. Journal of Visual Languages and Computing, 16(4), 311-329.
Richter, K.-F., 2007: Context-Specific Route Directions. PhD thesis, Faculty of Mathematics and Informatics, University of Bremen, Bremen, Germany.

The Authors
Stefan Hansen holds a Master's in Computer Science from the University of Bremen, Germany. For his thesis he did joint research with the Cooperative Research Centre for Spatial Information (CRCSI) and then joined the project to develop a prototype implementation of the research work together with the project partner LISAsoft Pty Ltd. He is now employed by LISAsoft.

Stephan Winter is lead researcher in the CRCSI and Senior Lecturer at the University of Melbourne. His research focuses on cognitive engineering, complex spatial systems, and interoperability. He will chair the Eighth International Conference on Spatial Information Theory, COSIT'07 (see text box), which will be held in Melbourne later this year and will bring the world's most distinguished researchers in this area to Australia. Alexander Klippel is Assistant Professor at the GeoVISTA Center, Department of Geography, Pennsylvania State University, PA, USA. From 2004-06 he was a research fellow in the CRCSI. He has a PhD in informatics from the University of Bremen, Germany. Alexander's research interests are in spatial cognition, visual representations, and formal semantics for dynamic processes in geographic space.

Project Links
Here you can find more information on current research in cognitive engineering in the context of wayfinding and navigation:
- CRCSI project on accessibility of spatial data: www.crcsi.com.au/pages/project.asp?projectid=70
- Conference on Spatial Information Theory 2007 (COSIT'07), Melbourne: www.cosit.info
- CORAL: www.ics.mq.edu.au/~coral/
- Transregional Collaborative Research Center on Spatial Cognition: www.sfbtr8.uni-bremen.de/
- LISAsoft: www.LISAsoft.com


Further Strengthen Our Expertise

Two Partners Join the Group


Since the Management Buy Out a few years ago, 1Spatial has grown profitably through self-funding. In little over three years the company has made three acquisitions in Europe and firmly established itself as a leader in spatial database management. Graham Stickler, 1Spatial's Product and Marketing Director, talks about the company's future direction, the role of the 1Spatial Community and how the spatial industry is moving forward. By Job van Haaften


Alan Douglas (left), former MD of IME now MD of 1Spatial Scotland, Crispin Hoult formerly of IME now 1Spatial Ireland, Peter Bullock and Duncan Guthrie both 1Spatial.

A re-brand, a high profile conference, and now two acquisitions. It's been an exciting and busy year for 1Spatial. What has been your thinking behind these changes?
These changes were not taken lightly. The company has evolved over time and with that become a world leader in spatial database management, as opposed to GIS applications, with a focus on spatial data quality control. The name change was planned over two years ago; our mission had changed over the years and we wanted the name to reflect that. This then culminated in the 1Spatial Conference in May 2007, with the theme 'Fit for Purpose', which raised the profile of spatial data management and quality control across the industry. Spatial data have been collected over a long period of time and used extensively for analysis, planning and decision making; valuable and massive benefits can be obtained through their re-use. The modern era is demanding the unconstrained sharing of spatial data between systems, business areas, organisations and the public. There is little point in providing widespread access to, and sharing of, data that are not fit for purpose and for which there are few or no quality measures. We are always looking to further strengthen our existing expertise, and the acquisitions of IME and Proteus, two companies we have had strong relationships with in the past, have allowed us to do this. We believe this merging of knowledge will ensure that, as organisations look to make more use of their spatial databases, share data and build common frameworks and spatial data infrastructures (SDI), 1Spatial will possess the wide range of skills necessary to manage, repurpose and distribute spatial data. Joining forces strengthens all three parties' abilities to deliver spatial data infrastructures and data quality products throughout the UK, Ireland and beyond. It also firmly establishes 1Spatial's local presence in both Ireland and Scotland to complement existing offices in Cambridge, England and Kongsberg, Norway.


1Spatial is known to have a very strong partnering ethic. How will the acquisitions affect your Partner Community?
If anything it strengthens the partner network. We have built our partner network over the last five years as we developed the Radius Programme, i.e. made 1Spatial technology available in an Oracle environment. Both IME and Proteus have experience in this area from which other partners can benefit. For example, IME has an Oracle Spatial Checking Utility that complements other tools available from 1Spatial across the Group. There are also other synergies with partners worldwide, such as Open Spatial in Australia, Credent Technology in Singapore, Geodan in the Netherlands, Spatial Technology in Sweden and Geofoto in Croatia. 1Spatial and all these international partners have complementary skills and solutions based on Autodesk, Bentley, ESRI, Intergraph or PB MapInfo software. IME and Proteus both belonged to the 1Spatial Community prior to the acquisitions, so they have also helped shape our partner network. It has developed into a group of like-minded organisations committed to redefining spatial data through quality, standards and interoperability. Nothing should change from that perspective; now we will work even more closely as part of the same organisation. Probably the most exciting thing about this development, apart from providing local support in Ireland and Scotland, is the fact that we now have additional expertise within the Group regarding CAD-to-GIS migrations and spatial data integration. Again, this is expertise that other members of the 1Spatial Community, such as system integrators, will be able to exploit.

1Spatial won the AGI award for Innovation & Best Practice (Private Sector) with Proteus and IME for the combined effort on the Property Registration Authority in Ireland. Clearly this must have been a major factor in the acquisitions?
Yes, we have successfully worked with IME and Proteus on a number of projects over the last two to three years and have always had a good relationship. It was fantastic to gain recognition from the AGI for the combined effort that went into the Property Registration Authority in Ireland project. This project was an upgrade to the Digital Mapping System (DMapS), a web-based system for recording and accessing spatial information relating to Land Title in Ireland. The project added value, improved levels of service and reduced costs, which all contributed to a satisfied customer. All three companies brought different skills to the project: Proteus, as an Autodesk partner, built the application, whereas IME, with its technical expertise, was able to develop the web-based viewing system. 1Spatial designed and implemented the system. Working together enabled us to recognise that Proteus and IME have the same company philosophy as the staff at 1Spatial when it comes to teamwork. We at 1Spatial are passionate about the concept of working together; in fact it is part of the company ethos and reflected in our partnering strategy. Proteus and IME clearly share these values. In the future this merger, and therefore the influx of skills within the 1Spatial Group, will give us added strength to go after bigger European SDI projects.
Seamus Gilroy former MD of Proteus and now MD of 1Spatial Ireland (left) and Mike Sanderson CEO 1Spatial (right).

How do you see the Spatial Industry changing and moving forward? And what role do you see the new 1Spatial Group playing in this market?
We see the industry continuing to change rapidly as spatial data is increasingly adopted within mainstream IT. Oracle and Google have seen to that. With this change will come an increasing awareness of the issue of managing the spatial data, separated from the use of the data in what we have traditionally called GIS. Engineering in the form of CAD and surveying will become an increasing source of high-quality data, some of which will be new to us, such as 3D city landscapes. Integrating these new data with the existing, and rapidly growing, traditional spatial data and then allowing for meaningful sharing of these data is a management challenge. We firmly believe that one of the key aspects will be the management and communication of quality. This is reflected by our involvement with ePSIplus around Information Management Quality and Standards, with the Open Geospatial Consortium (OGC) as Chair of the Data Quality Working Group, and by our research work on 3D spatial data. We have changed over the past few years to position ourselves firmly in this spatial data management space, away from the traditional (GIS/CAD) applications; the re-brand and acquisitions are steps along this path. We firmly see 1Spatial as being the key player in the management of spatial data and in providing guidance and leadership in this area. Currently the market has many small players, both in Europe and across the world, and with the recent acquisitions we now have added strength and an ability to increase our visibility, especially in Europe. Europe is our key focus at the moment; that is not to say we don't do significant business in other parts of the world, but currently we want to focus on supporting INSPIRE and the impact that will have as part of a wider SDI philosophy. To do this we are looking at the opportunities available for underlying technology to re-engineer and re-use existing data collected across Europe. In terms of where we plan to go in the future, in the short term it's quite simple really: IME and Proteus will become 1Spatial Ireland and 1Spatial Scotland. In addition, our acquisition of Sysdeco a few years ago is now established as 1Spatial Norway. This means we are starting to increase our coverage with local offices in control of their regions, so that when local opportunities occur they can respond. A good example of this is the current opportunity for a planning portal in Scotland with the Scottish Executive. IME were already a lead integrator on a bid but now feel hugely strengthened as they are backed by a much larger entity. Across Europe and globally, the 1Spatial Group is now even more capable of bringing together all the relevant expertise to address the spatial data management issues that organisations face on a daily basis as they attempt to make best use of spatial data for whatever their businesses demand. As a company we feel we are now more 'Fit for Purpose' and we believe this will bring about new opportunities for everyone involved with the 1Spatial Group.
Job van Haaften (jvanhaaften@geoinformatics.com) is editor of GeoInformatics. For additional information visit www.1spatial.com.


Automated High-Accuracy Orthorectification and Mosaicking of

PALSAR Data without Ground Control Points


Imagine a fully automated, highly reliable system that produces high-accuracy orthos and mosaics of radar data anywhere in the world. Time-sensitive applications such as oil spill and flood monitoring can now access high-accuracy radar orthos as soon as the data is available. These applications and more are now possible with the successful operation of the ALOS satellite. By Philip Cheng


Figure 1. Orthorectified PALSAR L1.5 SGF data overlaid with USGS 1:24000 scale vectors.

The ALOS satellite was launched successfully on January 26, 2006. It has three remote-sensing instruments: the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) for digital elevation mapping, the Advanced Visible and Near Infrared Radiometer type 2 (AVNIR-2) for precise land coverage observation, and the Phased Array type L-band Synthetic Aperture Radar (PALSAR) for day-and-night and all-weather land observation. In order to fully utilize the data obtained by these sensors, ALOS was designed with two advanced technologies: the first is high-speed, large-capacity mission data handling, and the second is precision spacecraft position and attitude determination. These technologies will be essential to high-resolution remote sensing satellites in the next decade. This article focuses on the PALSAR sensor.

PALSAR Observation Modes

PALSAR provides higher performance than previous SAR sensors such as the SAR on JERS-1. The high-resolution mode is used most commonly in regular operation. Its maximum ground resolution of 7 meters is one of the highest among Synthetic Aperture Radar (SAR) sensors. In addition to the conventional high-resolution mode, PALSAR has a ScanSAR observation mode, which enables it to switch off-nadir angles three to five times (each scan covering a swath of 70 kilometers) to cover wide areas: from 250 kilometers (3 scans) to 350 kilometers (5 scans). For comparison, the off-nadir angle of the SAR sensor on board JERS-1 was fixed at 35 degrees, and its swath width was about 75 kilometers. However, the ScanSAR resolution is inferior to that of the high-resolution mode. Another advantage of PALSAR is its polarimetry mode. The SAR sensor on JERS-1 was equipped only with a horizontal polarization transmission/receipt function, whereas PALSAR realizes both horizontal and vertical polarization. PALSAR can also simultaneously receive horizontal and vertical polarization for each polarized transmission, called multi-polarimetry. In addition, PALSAR can switch from horizontal to vertical polarization and vice versa at each transmission pulse, enabling four polarizations by double simultaneous polarization, a function called full polarimetry.

PALSAR Applications
There are numerous applications for PALSAR data. Examples include land area basin mapping, coastal area basin mapping, monitoring of the environment, and tracking of natural disasters such as oil spills. One prominent and recent example is the monitoring of an August 2006 oil spill caused by a sunken tanker off Negros Island in the Philippines. A PALSAR image taken two weeks after the initial spill showed an expanding oil slick, suggesting that heavy oil was still leaking from the tanker two weeks after the initial incident. Polarimetric applications are also showing promise in fields such as forest fire monitoring, classification of vegetation (yields and heights), vegetation and soil moisture studies, monitoring of snow cover, tracking of ice conditions, and flood monitoring. Another recent example was an emergency request to monitor flooding in Indonesia. Jakarta, the Indonesian capital, was heavily flooded on February 2, 2007, after weeks of massive rainfall. The news reported that city traffic was seriously impaired and that some parts of the city were under three meters of water. Based on a request from Sentinel Asia, JAXA, the Japanese Aerospace Exploration Agency, decided to activate the ALOS/PALSAR sensor for rapid observation. They proceeded to image the area on February 5, 2007, using PALSAR to measure the brightness of the land and the surrounding ocean. With color composites and HH-polarimetry, images were created as soon as possible to show detailed land surface changes due to the flooding.

Figure 2. Orthorectified PALSAR L4.1 SGP data overlaid with USGS 1:24000 scale vectors.


Orthorectification of PALSAR Data
For most SAR applications, the data must be corrected to a map projection before it becomes useful. This correction process is called orthorectification or geometric correction. The process requires the use of a rigorous geometric model, ground control points (GCPs), and a digital elevation model (DEM). The collection of GCPs presents a significant problem for SAR orthorectification. First, an existing source of GCPs may not be available, and it is often prohibitively expensive to collect new points, especially for areas inaccessible by road. In some cases, the collection of GCPs is made almost impossible by local conditions, such as floods or oil spill monitoring. Second, unlike optical satellite images, it can be very difficult to identify GCPs on a SAR image, a problem exacerbated in mountainous areas by foreshortening and layover effects. The collection of GCPs was the main reason why it was impossible to generate high-accuracy radar orthos automatically in the past. Since the ALOS satellite has the advanced technologies of precision spacecraft position and attitude determination, this information could potentially be used to orthorectify the PALSAR data accurately to any map projection without the need for GCPs. This would be an immense benefit to the many applications where accurately corrected orthos are needed. In this article, we use different PALSAR data to test and explore orthorectification accuracy without the use of GCPs.

PALSAR Test Data
There are two formats currently available for PALSAR data: JAXA PALSAR and ERSDAC PALSAR CEOS. The ERSDAC CEOS format data was used in this test because it is similar to the RADARSAT CEOS format, for which previous testing showed that minimal effort was required to support the data. The product is classified into five levels according to processing grade and observation mode. Details of these levels can be found at www.palsar.ersdac.or.jp/e/product/p_product.html. Each product can be ordered online for 20,000 Japanese Yen. Four high-accuracy orbit PALSAR data sets were acquired from ERSDAC, consisting of one L1.5, one L4.1 and two L4.2 data sets: the L1.5 data covers an area over California, U.S.A., and the L4.2 data covers a much larger area, including California, Nevada and Arizona. This region has an approximate elevation range of -100 m to 4400 m. There are two products available for each level: geo-reference and geo-code. The geo-reference product was chosen because it preserves the satellite geometry for high-accuracy geometric modeling.
The L1.5 SGF (SAR Georeference Fine Beam) version is a multi-look amplitude image, generated after SAR recovery processing to the level 1.0 product, acquired in single-polarization high-resolution mode. The data is equally spaced on the ground range. Pixel spacing is selectable from 5, 6.25, 12.5 and 25 meters depending on the observation mode. For these tests, L1.5 data with 6.25-meter pixel spacing was chosen.
The L4.1 SGP (SAR Georeference Polarization) data is a SAR recovery processing image rendered to the level 1.0 product observed in polarimetry mode (two or four polarizations). This is a cross-product (such as HH*HH and HH*HV) value of the observed multi-polarizations (HH, HV, VV and VH). Pixel spacing is selectable from 12.5, 25 and 50 meters for map geocoded products in map projection coordinate systems, with pixel and line spacings of 9.37 meters and 13.96 meters for path geo-reference products in the slant range coordinate system. The L4.1 two-polarization path product data covering 70 kilometers x 70 kilometers was chosen for these tests. The two-polarization data is composed of four integer bands, i.e. two unsigned integer bands and two signed integer bands (one real and one imaginary).
The L4.2 SCN data set is a SAR recovery processing image rendered to the level 1.0 product observed in ScanSAR mode (two or four polarizations). ScanSAR mode uses single polarization. The product is a row of amplitude data equally spaced on the ground range. Pixel spacing is selectable from 12.5, 25 and 50 meters. Two L4.2 SCN data sets were used for these tests.

Testing Software
PCI OrthoEngine V10.1 software was used for the testing. Part of the Geomatica suite of products, OrthoEngine supports reading of the data, manual or automatic GCP/tie point (TP) collection, geometric modeling of different satellites using Toutin's rigorous model, the RPC correction method, a radar-specific model, automatic DEM generation and editing, orthorectification, and either manual or automatic mosaicking with different color balance methods. For these PALSAR data tests, the radar-specific modeling method was used. It allows the computation of a geometric model with or without GCPs.

Figure 3. Orthorectified PALSAR L4.2 SCN data overlaid with USGS 1:24000 scale vectors.


Testing Results
To test the accuracy of the orthorectification of the data without GCPs, independent check points (ICPs) were collected from USGS 1:24000 scale maps and vectors for each image. The USGS NED 1 arc second (~30m) resolution DEM was used to extract the elevation for each check point. Table 1 shows a summary of the results. It can be seen from the table that all images have root mean square (RMS) errors within two pixels (or one resolution of the sensor). Figures 1, 2 and 3 show full resolution examples of L1.5, L4.1 and L4.2 corrected data overlaid with the USGS 1:24000 vectors (in red). The reference vectors aligned almost perfectly with the orthos in all cases.

Automatic Mosaicking
The successful generation of high accuracy PALSAR orthos means that it is possible to create seamless mosaics of PALSAR data without GCPs. However, mosaicking and color balancing are usually extremely time-consuming processes. PCI OrthoEngine's tools for automatic cutline searching, mosaicking and color balancing can be used to perform the entire process automatically. No human intervention is required during the process. To test the automatic mosaicking of PALSAR data, two L4.2 ScanSAR data sets were used, one acquired in descending pass and the other acquired in ascending pass. In general, images acquired with the same passes are preferable to minimize the radiometric differences. Each image has coverage of approximately 350 kilometers in the X direction and 350 kilometers in the Y direction, with approximately 50 kilometer overlap in the Y direction. Since the images were acquired in different passes, a local adaptive contrast stretch filter was used to correct the images before mosaicking. Figure 4 shows an overview of the mosaicked image, while Figure 5 shows a full resolution subset of the mosaicked image overlaid with the cutline (in red). It can be seen from Figure 5 that the roads are aligned to each other perfectly along the cutline between the two images.

Figure 4. Automatic mosaicking of two PALSAR L4.2 SCN images.

Figure 5. Full resolution mosaic of two PALSAR SCN images overlaid with cutline.

Automated Batch Processing
Since these tests prove that high-accuracy PALSAR orthos and mosaics can be generated automatically without GCPs, it is possible to integrate all the processes in a fully-automated batch system. PCI Geomatics software encompasses all the programs required to perform the necessary steps by using either Python or PCI EASI scripts. The advantages of automated processing are (1) maximizing productivity, (2) automating repetitive, time-consuming tasks while producing consistent results, (3) gaining operational efficiencies, (4) reducing labor costs, and (5) shortening throughput time for the delivery cycle. The generation of a large quantity of high-accuracy orthos or mosaics, such as a mosaic of an entire country, can be easily accomplished with such an automated system. As a scalable system, multiple computers can be leveraged to speed up the processing. The availability of this fully-automated process makes it easy to generate PALSAR orthos/mosaics for applications that require rapid results, such as disaster monitoring.
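
The batch workflow described above lends itself to a simple script driver. The sketch below is only an outline of such a driver, assuming hypothetical orthorectify and mosaic wrappers (placeholders, not the actual PCI Geomatics Python or EASI functions) and an invented directory layout:

```python
import os
from glob import glob

# Hypothetical wrappers: in a real installation these steps would call the
# vendor's own Python or EASI interfaces instead of printing messages.
def orthorectify(scene: str, dem: str, out_dir: str) -> str:
    """Orthorectify one PALSAR scene against a DEM and return the ortho path."""
    out = os.path.join(out_dir, os.path.basename(scene) + "_ortho.tif")
    print(f"orthorectifying {scene} with DEM {dem} -> {out}")
    return out

def mosaic(orthos: list[str], out_file: str) -> None:
    """Mosaic the orthos with automatic cutlines and color balancing."""
    print(f"mosaicking {len(orthos)} orthos -> {out_file}")

def run_batch(scene_dir: str, dem: str, out_dir: str) -> None:
    os.makedirs(out_dir, exist_ok=True)
    scenes = sorted(glob(os.path.join(scene_dir, "*.ceos")))
    # No GCP collection step: the rigorous model is driven by the orbit data alone.
    orthos = [orthorectify(s, dem, out_dir) for s in scenes]
    mosaic(orthos, os.path.join(out_dir, "palsar_mosaic.tif"))

if __name__ == "__main__":
    run_batch("scenes", "ned_dem.tif", "output")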

Conclusions
It is possible to generate high-accuracy orthos and mosaics of PALSAR data without ground control points. Test results show RMS errors consistently within one pixel resolution of the data. The fact that GCPs are not required for PALSAR orthorectification translates to very significant cost and time savings for the user. In addition, automated batch processing for generating a large quantity of PALSAR orthos and mosaics is now possible using single computers or multi-processor systems.
Dr. Philip Cheng (cheng@pcigeomatics.com) is a senior scientist at PCI Geomatics. Acknowledgements: The author would like to thank the Earth Remote Sensing Data Analysis Center (ERSDAC) for providing the test data sets. More applications of PALSAR data can be found in www.eorc.jaxa.jp/en/index.html and www.palsar.ersdac.or.jp/e/index.shtml.

Product     Number of ICPs    RMS Error (m)        Maximum Error (m)
                              X        Y           X        Y
L1.5 SGF    11                13.8     12.5        22.8     21.0
L4.1 SGP    8                 9.6      12.6        16.0     21.3
L4.2 SCN    12                55.2     37.4        93.4     76.4

Table 1: Orthorectification accuracy results for three different PALSAR data sets using ICPs only, with elevation extracted from USGS NED DEM data.
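
For reference, the RMS and maximum errors reported in Table 1 follow directly from the check-point residuals. A minimal illustration (the residuals below are made up, not the values behind the table):

```python
import math

def accuracy_stats(residuals):
    """residuals: list of (dx, dy) differences in metres between the
    orthorectified image position and the independent check point (ICP)."""
    n = len(residuals)
    rms_x = math.sqrt(sum(dx * dx for dx, _ in residuals) / n)
    rms_y = math.sqrt(sum(dy * dy for _, dy in residuals) / n)
    max_x = max(abs(dx) for dx, _ in residuals)
    max_y = max(abs(dy) for _, dy in residuals)
    return rms_x, rms_y, max_x, max_y

# Illustrative residuals only.
print(accuracy_stats([(10.2, -8.5), (-14.1, 12.3), (12.9, -11.0)]))
```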


Supporting Multi-Vendor Applications

Using an Open Spatial Database


Since 2002 GITA in the US has conducted an annual survey of the North American utility industry. One of the questions that is asked is whether the organization uses more than one GIS. As you can see, most of the organizations reported using more than one GIS and sharing data between them. The survey also asked what tools were used to share spatial data, and Safe Software's Feature Manipulation Engine (FME) was named most often as the third-party application for data sharing. The reality for many organizations is that sharing data between different vendors' applications has involved redundant data: multiple copies in the different formats supported by the different GIS vendors. By Geoff Zeiss
Single Point of Truth

Many people have been looking at open spatial databases as a way of replacing multiple files with a single point of truth. The promise is becoming a reality. At GITA last year, two US municipalities reported how they had implemented multi-vendor interoperability based on an open spatially-enabled relational database management system (RDBMS). Throughout the world, utilities and telecommunications firms manage infrastructure in basically the same way and are facing similar challenges. If you look at the information flow in these organizations, the most obvious thing that strikes you is the problem of silos or islands of information. The second thing is that the information flow in these organizations is for the most part based on paper. For example, the Engineering group uses CAD, the Records (sometimes called Network Documentation) group uses GIS, and the flow of information between these two groups is paper. The result is redundant processes and backlogs. In addition, the aging of the work force exacerbates what is already a critical problem because there is no effective mechanism for transferring the knowledge in the heads of experienced workers to the facilities database where it can be accessed by younger, less experienced workers. Three components are critical to addressing this problem: a single point of truth implemented as a centralized, spatially-enabled RDBMS; a way to exchange electronic as-builts between Engineering and Records; and what I call field force enfranchisement.

Liberating

Widespread support of spatially-enabled relational database management systems by GIS vendors has enabled customers to begin to adopt geospatially-enabled RDBMSs as a shared enterprise datastore. The business benefit of choosing an open geospatially-enabled RDBMS as the enterprise single point of truth is that applications from different vendors can share spatial information in a common repository. This approach to spatial data management liberates customers from vendor lock-in, enabling them to buy best-of-breed applications. Managing spatial data has evolved over the years from proprietary files, through proprietary schemas for storing spatial data in an RDBMS, to the current state where you can store just about any spatial data, including topology, in a modern object-relational database management system (ORDBMS). The advantage of an ORDBMS is that spatial data is accessible through an open query language, SQL, and open interface standards such as ODBC, JDBC, and OLEDB.

All-relational

In the area of network infrastructure management, beginning in the early 90s solution vendors such as GeoVision pioneered the use of relational database management systems for storing geospatial data. Such systems were deployed by early adopters, typically large utility and telecommunications firms and municipal governments, to manage their network infrastructure. These solutions were marketed as all-relational to distinguish them from traditional GIS applications, which used relational technology for feature properties but invariably stored geospatial data in proprietary files external to the RDBMS. All-relational systems were remarkably successful and are still deployed at major utility and telecommunications firms and municipalities around the world. However, one of the disadvantages of these systems is that the data model or schema used to store geospatial data was specific to the solution vendor and required reverse engineering to enable the sharing of geospatial data with other vendors' products.

Geospatial Architectures.


Object-relational
Michael Stonebraker and pioneering RDBMS vendors such as Illustra introduced what was referred to at the time as object-relational database management systems. ORDBMSs differentiated themselves from traditional relational database management systems in their ability to store complicated data structures in each cell of a table. These data types included geospatial data, time series, and other structures that were difficult to store in a simple two-dimensional table. This advance in RDBMS technology meant that the storage of geospatial data and the SQL for manipulating geospatial data were now defined by the RDBMS vendor, not by the geospatial application vendor. The important implication was that two GIS vendors supporting the same RDBMS could share data and avoid the data redundancy associated with file import/export.

Developing a Standard
A major breakthrough in sharing spatial data was the development of the Open Geospatial Consortium's (OGC) Simple Feature Specification (SFS) for SQL, which is incorporated by most RDBMS systems, both closed and open source. Although the SFS only supports simple features - points, lines, and closed polygons - it has become widely adopted, and specific implementations by commercial RDBMS vendors are now supported by most geospatial vendors. However, there are some wrinkles. Sharing the basic geospatial data types covered by the SFS standard, such as points, lines, and closed polygons, is supported by most geospatial vendors. But things get more complicated when you want to share text, symbolization, layer definitions, topology, and long transactions. The OGC is making progress in addressing these issues. For example, an extension to the SFS to support text was approved recently.

No Paper-based Infrastructure
At GITA 2006 two municipalities, Tacoma and San Jose, gave presentations that described operational systems that shared spatial data between applications from multiple vendors, based on a spatially-enabled RDBMS. As I remember, the vendors involved included ESRI, Autodesk, Intergraph, MapInfo, and possibly others. The most important implications of what these municipalities have done are 1) organizations are recognizing the business value of having a single point of truth for spatial data and 2) it is technically feasible to implement a central spatially-enabled RDBMS. To put it simply, technology is no longer the excuse for maintaining a slow and expensive paper-based infrastructure management system. In both municipalities, the major challenge involved managing text, metadata, and stylization, and each municipality addressed this in somewhat different ways. At Tacoma, RDBMS tools such as triggers were used to maintain the metadata required for each vendor's application. At San Jose, a third-party text management tool was used to share text. The important conclusion is that by either using built-in RDBMS tools or third-party add-ons, it is feasible to address the problems associated with managing metadata, stylization, and text.

Infrastructure Management Lifecycle Paper.

Final Challenge
Spatially-enabled RDBMS technology and support by geospatial application vendors have advanced to the point where spatial data can be shared in a secure, highly available environment between applications from different geospatial vendors. This provides a tremendous business benefit to utility and telecommunications firms as well as municipal government organizations in managing their network infrastructure, because these organizations can now build solutions by integrating best-of-breed applications and solutions from multiple vendors. At the present time this permits telecommunications and utility firms to address the paper-based flow of information between Engineering, Records, and Operations, thereby improving customer responsiveness and reducing costs. There are even signs that the final challenge, improving the flow of information between Construction and Engineering, may also be beginning to be addressed.

Geoff Zeiss (geoff.zeiss@autodesk.com) is Director of Technology at Autodesk Inc.

Infrastructure Management Lifecycle New.
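
To illustrate the kind of vendor-neutral access that the SFS for SQL described under "Developing a Standard" makes possible, the sketch below runs a spatial query from Python against a spatially-enabled RDBMS. It assumes a PostgreSQL/PostGIS database; the connection parameters, the parcels table and its geom column are invented for the example.

```python
import psycopg2  # any DB-API driver for an SFS-compliant database would do

# Connection parameters are placeholders.
conn = psycopg2.connect(host="dbserver", dbname="enterprise",
                        user="gis", password="secret")

# ST_GeomFromText, ST_Intersects and ST_AsText are standard SFS/SQL functions,
# so any client - regardless of GIS vendor - can run the same query.
sql = """
    SELECT id, ST_AsText(geom)
    FROM parcels
    WHERE ST_Intersects(
        geom,
        ST_GeomFromText('POLYGON((0 0, 0 100, 100 100, 100 0, 0 0))', 4326));
"""

with conn, conn.cursor() as cur:
    cur.execute(sql)
    for parcel_id, wkt in cur.fetchall():
        print(parcel_id, wkt)
```

Because the geometry lives in the database rather than in a vendor file format, the same table can be read and edited by applications from different vendors without import/export.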


Earth Observation from Space & the Air

Jena-Optronik's Imaging Scanners


[a] [b] [c]

Fig. 1 (a) - The Carl Zeiss Jena MFK-6 multi-spectral film camera that was used on various Soviet spacecraft had six lenses and six separate film magazines. (Source: Deutsche Museum) (b) - The Carl Zeiss Jena MSK-4 four channel airborne multi-spectral film camera on its mount with its viewfinder and navigation/tracking device at the right. (Source: MGGP Aero) (c) - A Carl Zeiss Jena LMK-1000 photogrammetric film camera mounted in a Piper Navajo aerial photographic aircraft. (Source: MGGP Aero)

The Jena-Optronik company has come to the fore in recent years as a supplier of a wide range of imaging scanners that can be mounted on spaceborne and airborne platforms for Earth observation purposes. In many ways, this development marks the revival of a traditional name that, for a long time, was revered in the area of aerial photographic, photogrammetric and surveying instrumentation. However there is nothing too traditional in its range of new scanner products that make use of the latest opto-electronic imaging technologies. By Gordon Petrie

Background
The area around Jena is, by tradition, Germany's Optical Valley. Its famous optical industrial companies include Carl Zeiss Jena - renowned for its range of precision optical instruments - and Schott, the famous glassmaker. During the period of communist rule of East Germany, these companies became government-controlled people's enterprises. After the collapse of the communist government in 1989, followed by German re-unification in 1990, these companies were gradually returned to private ownership. However, with the loss of their main captive markets in Eastern Europe, including Russia, these large state enterprises had to be much reduced in size in order to be competitive in the global market. Out of the traumatic period of downsizing and re-organization that followed, a number of new companies have arisen.

Jena-Optronik
One of these companies is Jena-Optronik GmbH, founded in 1991 jointly by DASA (now part of EADS) and Jenoptik - the latter company having taken over a substantial part of the former VEB Carl Zeiss Jena people's enterprise. Since 2005, Jena-Optronik has been wholly owned by Jenoptik AG. When it was first established, the core of the then new Jena-Optronik company came from the space engineering department of the former VEB Carl Zeiss Jena enterprise. This had supplied many components, instruments and systems for use in the Soviet space programme. These included the MFK-6 multi-spectral film cameras that were used to acquire imagery from the Soyuz-22, Salyut-6 and -7 and MIR spacecraft [Fig. 1 (a)]. A somewhat similar MSK-4 multi-spectral film camera was also produced for airborne use [Fig. 1 (b)]. Besides which, the photogrammetric department of Carl Zeiss Jena had produced its range of LMK metric film cameras that has been used widely for aerial mapping purposes in many parts of the world [Fig. 1 (c)]. So there is a lot of tradition, knowledge and experience lying behind these newest developments in spaceborne and airborne imagers that are coming from the Jena area. As well as the new imaging devices that are being produced by Jena-Optronik, it is worth noting that the company is also a major supplier to Boeing, ESA and DLR of rendezvous and docking sensors, as well as the star and Sun sensors that are used for attitude determination and orbit control purposes on satellites and spacecraft.

[a]

[b]

Fig. 2 (a) - A CAD drawing showing the main components of the JSS56 spaceborne scanner. (b) - A photograph of a JSS56 spaceborne scanner.

[a] [b]

Fig. 3 (a) - The three-mirror anastigmatic (TMA) telescope showing the mirrors (M1, M2 & M3) and the focal plane array (FPA) and their relationship to the overall construction of the JSS-56 spaceborne scanner on which it is mounted. (b) - The five spectral bands or channels that are imaged by the JSS56 spaceborne scanner.

I - Spaceborne Scanners
Jena Spaceborne Scanners (JSS)
Currently the Jena Spaceborne Scanners (JSS) line of pushbroom line scanners comprises four different models, each designed for a different application. The number given after the abbreviation (JSS) indicates firstly the number of spectral channels and secondly the magnitude of the ground sampling distance (GSD) in metres.
(a) The JSS-54 has been designed to produce images from space in five different spectral channels in the visible and near infra-red (VIS/NIR) parts of the spectrum. It will feature five CCD linear arrays, each containing 5,000 detectors, and will use a Mangin-type optical system combining both reflective (mirror) and refractive (lens) optical elements with a focal length of 980 mm and an aperture of f/5. The JSS-54 will provide multi-spectral linescan images of the Earth's land surface having a ground sampled distance (GSD) of 4.2 m and a swath width of 21 km from an orbital altitude of 600 km. (b) The JSS-56 is the model of which five examples have already been built for installation in the forthcoming RapidEye constellation of satellites. Like the JSS-54, it is a very lightweight and compact five channel VIS/NIR design intended for use on small satellites or microsatellites [Fig. 2 (a)]. However it is designed to cover a much wider swath over the ground than the JSS-54, using five CCD linear arrays, each with 12,000 detectors, in combination with a three-mirror anastigmatic (TMA) telescope having a focal length of 633 mm and an aperture of f/4.3 [Fig. 2 (b)]. Using this combination of components, the JSS-56 scanners will produce multi-spectral linescan images with a ground sampled distance (GSD) of 6.5 m and a swath width of 78 km from the operating altitude of 600 to 620 km that will be used by the RapidEye satellites.

(c) The JSS-61 model features six channels, comprising (i) a single high-resolution panchromatic channel producing linescan images with a 1.5 m GSD from an orbital altitude of 600 km, and (ii) five medium-resolution channels providing multi-spectral linescan images with a 4.5 m GSD and a swath width of 18 km from an orbital altitude of 600 km. To obtain the high-resolution pan image, the optical telescope is of a Richey-Chretien design with a long focal length of 2.58 m and an aperture of f/4.6. (d) The JSS-95 design will provide an extension to the spectral range into the shortwave infra-red (SWIR) part of the spectrum employing a total of nine channels - comprising six VIS/NIR channels and three SWIR channels.
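
The JSS-56 figures quoted in (b) above can be roughly cross-checked with a simple pinhole-camera approximation (GSD ≈ detector pitch × altitude / focal length). The 6.5 µm detector pitch assumed below is not a published JSS-56 figure; it is borrowed from the pixel size later quoted for the related JAS 150:

```python
def gsd_metres(pixel_pitch_um: float, focal_length_m: float, altitude_km: float) -> float:
    """Ground sampling distance from a simple pinhole-camera approximation."""
    return pixel_pitch_um * 1e-6 * altitude_km * 1e3 / focal_length_m

# Assumed 6.5 micrometre detector pitch, 633 mm focal length, 620 km altitude.
gsd = gsd_metres(6.5, 0.633, 620)   # ~6.4 m, close to the quoted 6.5 m GSD
swath = 12000 * gsd / 1000          # ~77 km, close to the quoted 78 km swath
print(f"GSD ~ {gsd:.1f} m, swath ~ {swath:.0f} km")
```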

JSS-56 on RapidEye
As noted above, Jena-Optronik is supplying the JSS-56 multi-spectral pushbroom line scanners that will be used to acquire continuous strip images of the Earth from each of the five polar-orbiting satellites in the constellation that will be operated by the RapidEye AG company based in Brandenburg, Germany. All five satellites are scheduled to be launched together early in 2008 using a Russian Dnepr rocket. The RapidEye concept was developed originally by the Kayser-Threde company based in Munich, supported by the German Aerospace Center (DLR). The RapidEye company that resulted from this development is now being financed by a group of German and Canadian financial institutions supported by the German state of Brandenburg and by DLR. Jena-Optronik has just completed the supply of the five JSS-56 scanners to Surrey Satellite Technology Ltd. (SSTL) in the U.K. which has built the five satellite busses (platforms) for the RapidEye project. Once SSTL has completed the mounting and integration of each of the JSS-56 scanners on to their respective platforms [Fig. 4], they will be taken over by MacDonald Dettwiler & Associates Ltd. (MDA) from Canada - which is the prime contractor for the overall RapidEye project, including the provision of its ground segment.

JSS-56 Scanner Construction


The three-mirror anastigmatic (TMA) telescope used in the JSS-56 model is of especial interest in that it has an all-aluminium construction with a reflective silver coating instead of utilizing optical glass [Fig. 3 (a)]. The required optical quality of the aluminium surface has been achieved using novel ultra high-precision metal milling and polishing techniques devised and implemented by the Fraunhofer Institute of Applied Optics & Precision Engineering (IOF) - which is also based in Jena. The 12,000 pixel linear arrays that are being used in the JSS-56 scanners have been supplied by Atmel in France. Each of the five linear arrays is equipped with the appropriate spectral filter to produce continuous strip images in the blue (λ = 440 to 510 nm); green (520 to 590 nm); red (630 to 685 nm); red edge (690 to 730 nm) and near infra-red (760 to 850 nm) parts of the spectrum [Fig. 3 (b)]. Off-nadir pointing of the scanner at angles of up to 25° from the vertical can be achieved through the rotation of the actual satellite itself.

Fig. 4 - A JSS56 scanner being mounted on and integrated with a RapidEye spacecraft at Surrey Satellite Technology Ltd. (SSTL) in the U.K. (Source: SSTL)

[a] [b]

Fig. 5 (a) - The METimage scanning radiometer that is being developed by Jena-Optronik for future use on European polar-orbiting meteorological satellites. (b) - The optics and focal plane of the METimage scanning radiometer. The two telescopes are used to acquire images in the VIS/NIR and LWIR/TIR parts of the spectrum respectively.

METimage
Another spaceborne scanner that is being developed as a future meteorological imager by Jena-Optronik is METimage. This is a multi-spectral (indeed super-spectral) scanning radiometer that is being developed as a replacement for the AVHRR scanning radiometers that are currently in use in the American NOAA and European METOP polar-orbiting weather satellites that operate at an orbital height of ~800 km. The METimage scanner will utilize a rotating TMA telescope equipped with two cross-track scanning units to image the Earth's land and ocean surfaces (and its cloud patterns!) with a swath width of 2,300 km at a basic GSD of 250 m [Fig. 5 (a)]. The instrument will also cover all the main spectral channels from the visible (VIS) to the long-wave or thermal infra-red (LWIR/TIR) [Fig. 5 (b)] - i.e. over the wavelength range 400 nm to 13.4 µm - using a number of separate focal plane arrays (FPAs) to generate the required images in the different parts of the spectrum. The scanner will also have a built-in flexibility regarding the number of spectral channels (up to 30) and the GSD of individual channels. The pre-development of METimage is being financed by DLR. The later phases B, C and D are in the scope of the German Federal Ministry of Transport, Building & Urban Affairs.

Sentinel-2
At present, Jena-Optronik also has a strong involvement in the definition and development of the design of the super-spectral imager intended for deployment in the ESA Sentinel-2 space imaging mission. This is the second of the five Sentinel missions that are being developed for the EU and ESA under their joint Global Monitoring of the Environment & Security (GMES) programme. The Sentinel-2 polar-orbiting satellite is intended to carry out the optical imaging of the land surfaces of the Earth at medium resolution values. This would provide enhanced continuity of the image data that have been provided by the SPOT and Landsat programmes. Various possible scenarios are being explored - especially with regard to the number of spectral channels and the GSD values of the resulting imagery. The outcome seems likely to be a pushbroom scanner with ten VIS/NIR bands and three SWIR bands producing images with GSD values in the range 10 to 30 metres.

II - Airborne Scanners

Heritage
The Jena Airborne Scanner (JAS) was first announced in July 2005. As with the spaceborne scanners, there is a certain amount of previous heritage in the form of the High Resolution Stereo Camera (HRSC) [Fig. 6 (a)]. Originally the HRSC pushbroom line scanner was built by a team from the DLR Institute of Space Sensor Technology & Planetary Exploration in Berlin under Prof. Gerhard Neukum for use on the Russian Mars 96 mission which failed during its launch in November 1996. Since then, a second HRSC scanner has been used very successfully in the ESA Mars Express mission that was launched in June 2003 and went into operation early in 2004. In the context of the present account, one notes that the optical lens systems for the HRSC scanners were developed and manufactured by Jena-Optronik, as was the optical test equipment (OGSE) for the scanner. During the seven year time period between these two Mars missions, development of airborne versions of the HRSC was undertaken by DLR. This resulted first of all in the HRSC-A (= Airborne) model equipped with a 5,000 pixel CCD linear array as used in the HRSC. This was followed by two wider angled versions using 12,000 pixel CCD linear arrays and wider-angled lenses - called the HRSC-AX (with an f = 151 mm lens) and the HRSC-AXW (with an f = 47 mm lens) [Fig. 6 (b)]. These various airborne models were used extensively by DLR for scientific research flights and by the French ISTAR company (now part of Infoterra) for commercial aerial mapping operations.
[a] [b]

Fig. 6 (a) - The HRSC pushbroom line scanner that has been used on the ESA Mars Express mission. (Source: DLR) (b) - An HRSC-AX airborne pushbroom line scanner. (Source: DLR)


Fig. 7 - The geometry of the nine-line arrangement of the linear arrays used in the HRSC-AX and JAS 150 pushbroom line scanners showing their angular positions and ground coverage. [Pan = Panchromatic; B = Blue; G = Green; R = Red; NIR = Near Infra-red] (Drawn by M. Shand)

Fig. 9 - The uncorrected and corrected versions of a JAS 150 pushbroom line scanner image of the area around the railway bridge crossing the River Rhine in the city of Cologne.

JAS 150 Specification
Several recent presentations by Jena-Optronik have mentioned that, in many ways, the HRSC-AX has served as a template for the new JAS 150 airborne pushbroom line scanner. Indeed it shares a very similar specification, having an f = 151 mm lens; 12,000 pixel CCD linear arrays using a 6.5 µm pixel size; 12-bit radiometric resolution; etc. Furthermore, the JAS 150 has a similar number (nine) and geometric arrangement of its linear arrays to those used in the HRSC-AX. Five of these linear arrays provide stereo panchromatic channels with two forward and two backward pointing arrays at 20° and 12° from the vertical, together with a fifth array being nadir pointing [Fig. 7]. The remaining four (of the nine) linear arrays provide multi-spectral coverage of the ground in the red, green, blue (RGB) and near infra-red (NIR) spectral channels around the nadir position using the appropriate filters placed in front of each of the linear arrays. The cross-track angular coverage of all nine lines is 29.1°.

JAS 150 Construction


[a]

Although there are these basic similarities between the overall system specifications of the two pushbroom scanners, the actual construction of the new JAS 150 is of course substantially different [Fig. 8 (a)]. Internally the JAS 150 has a specially designed Jenoptik Aenar achromatic lens; new electronics; a ruggedized lightweight carbon-fibre housing; a high-speed Firewire interface to the camera control unit; etc. [Fig. 8 (b)] Externally the scanner has a mass memory capability comprising either (i) a RAID disk unit with a capacity of 1 to 2 Terabytes, or (ii) a unit with hot swappable disks with a capacity of 2 to 5 Terabytes that can be changed in-flight [Fig. 8 (c)]. Besides which, a quick look capability to view the stored image data can also be provided for off-line operation during the flight - e.g. during the turns at the end of each flight line. The inertial measurement unit (IMU) coupled to the JAS 150 can either be the IGI AEROcontrol system or the POS/AV 510 system from Applanix. At the present time, flight management and the control and operation of the scanner in-flight can be carried out using either Jena-Optronik's own scanner control unit or an IGI CCNS-4 system. In the future, the Applanix POSTrack system will also be offered. Any one of the standard gyro-stabilized mounts for aerial cameras - the GSM 3000 (from Somag in Jena); Leica Geosystems PAV30; or Intergraph T-AS - can be used with the JAS 150 scanner.

JenaStereo
Closely associated with the JAS 150 scanner is the JenaStereo photogrammetric software suite. This runs on PCs under the Windows or Linux operating systems. The software is a modular system comprising a core module (CORE) and a so-called JAS 150 sensor module (JSM). The JSM/ASM module is also available as a standalone piece of software that carries out the preliminary processing of the raw JAS 150 image data [Fig. 9] and has an interface to allow the processed data to be passed to a BAE Systems SOCET SET or an Inpho digital photogrammetric workstation (DPW). This allows the image data that is generated by a JAS 150 scanner to be used by the numerous commercial companies and government mapping organizations who utilize SOCET SET or Inpho's software to generate their map and GIS data. For triangulation purposes, the BINGO software from Dr. Erwin Kruck can also be used with JAS 150 linescan image data.

Conclusion
From the above account, it can be seen that Jena-Optronik has become solidly established as a developer and supplier of spaceborne imaging scanners. However the airborne imaging market is very much larger. Furthermore it is experiencing a boom as commercial companies and government mapping organizations change from aerial photographic film cameras to the equivalent large-format airborne digital imaging systems. It will be very interesting to follow the progress of the new JAS 150 airborne pushbroom scanner within this market.
[c]
Fig. 8 (a) - A JAS 150 pushbroom line scanner mounted in an aircraft operated by the ILV aerial imaging company. (b) - The internal design of the JAS 150 pushbroom line scanner, showing the main components. (c) - The two-piece system rack of the JAS 150, including the control electronics and the mass memory used for the storage of the image data acquired in-flight.

[b]

Gordon Petrie (Gordon.Petrie@ges.gla.ac.uk) is Emeritus Professor of Topographic Science in the Dept. of Geographical & Earth Sciences of the University of Glasgow, Scotland, U.K.


Part 5: Chart Projections (2)

Practical Geodesy
In the previous article the importance of chart projections was described. An important aspect of projecting information from an ellipsoid on a flat surface is that the information will be distorted. In this article some of these distortions and their effect on calculations will be described. By Huibert-Jan Lekkerkerk
We have already seen that the major distortions take place in distance and direction. Especially for land survey work, these are important parameters. When working on smaller projects distance or scale distortion will not pose a great problem and can generally be ignored. On larger scale projects, such as the laying of a pipeline or construction of a road, scale distortion will pose a problem. Heading or direction distortion is important for all types of projects. Within every chart projection there are one or more points where the distortion will be nonexistent. This is the point or line where the projection intersects with the ellipsoid. For the longitudinal Mercator this is the equator, while for the UTM projection it is the central meridian.

UTM Projection
A commonly used projection is the Universal Transverse Mercator or UTM projection. There is no geodetic datum associated with the UTM projection, so whenever it is used the geodetic datum from which the coordinates were projected has to be stated as well. A UTM projection can, for example, be based on WGS84 or ED50. The UTM projection is derived from the Transverse Mercator or TM projection. A number of countries use the TM projection with an underlying geodetic datum. This combination is identified by name. Examples include the German Gauss-Kruger projection and the American State Plane Coordinate System (SPCS).

State plane coordinate system for the state of Georgia (2 TM zones).

State Plane Coordinate System (SPCS)


The SPCS is a somewhat peculiar system since it uses three different projections depending on which state is selected. For east-west lying states the Lambert conformal projection is used; for states lying generally north-south the Transverse Mercator is used; and for the Alaska panhandle the Oblique Mercator projection is used. Most states are further divided into Federal Information Processing Standard (FIPS) zones that minimize distortion even further. The aim of the SPCS is to minimize scale distortion to a maximum of 1:10000, which at the time of design in the 1930s was supposed to be the maximum survey accuracy. The geodetic datum underlying the SPCS is always North American Datum 1983 (NAD1983).

North Directions
Headings are always referenced to north. This however is an ambiguous reference since there are a number of norths. The following north references are in common use:
- True or geodetic north: this is the northern location of the axis that the earth revolves around.
- Magnetic north: the location of the magnetic North Pole. The latter slowly moves around true north.
- Chart north: the direction on a chart indicated by the northing or Y-axis. This axis is by definition at right angles to the easting or X-axis.

Origin of the UTM System


As discussed in the previous article, UTM divides the world into strips or zones that are 6° wide. The line where the projection touches the earth runs north-south and is called the Central Meridian or CM. Since both a north/south and an east/west reference are needed for a position, an additional reference line is necessary. With UTM this is the equator. Therefore all positions in UTM are calculated referenced to the intersection point of the CM and the equator. With the selected intersection point, negative coordinates (south, west) would exist. Since the minus sign is a common source of error when noting coordinates, a solution was found using a so-called false easting and northing. The rules are simple: to all computed eastings from the projection formula 500,000 meters are added. For the northing it is slightly more complicated: if the positions are to the north of the equator then no false northing is added, but if the positions are to the south then 10,000,000 is added to the coordinates found from the projection. In other words, for positions in the southern hemisphere the equator has a northing of 10,000,000, while for positions in the northern hemisphere it has a northing of 0.
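
These rules translate directly into a small helper. A minimal sketch, assuming the raw easting and northing have already been computed from the TM projection formulas relative to the CM/equator origin:

```python
def apply_false_origin(raw_easting_m: float, raw_northing_m: float) -> tuple[float, float]:
    """Apply the UTM false easting and, in the southern hemisphere,
    the false northing to raw projected coordinates."""
    easting = raw_easting_m + 500_000.0
    northing = raw_northing_m if raw_northing_m >= 0 else raw_northing_m + 10_000_000.0
    return easting, northing

# A point 2 km west of the CM and 3 km south of the equator:
print(apply_false_origin(-2_000.0, -3_000.0))   # (498000.0, 9997000.0)
```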

Scale Factor
In the origin of the projection the so-called scale factor equals exactly 1. One meter in reality will show as 1 projected meter. As we move out from the origin (line) the scale factor will increase. This effect is most evident in the longitudinal Mercator projection where the distance between the parallels increases from the equator towards either the North or the South Pole. Since the distortion near the edges of the projection will become disproportionate with respect to the origin, the scale factor in the origin is decreased. The actual result is that the projection no longer touches the ellipsoid but rather intersects it. As a result the line(s) where the scale factor equals exactly 1 will shift outward. When this shift is well selected, the result will be that the scale factor is equally divided across the projection. The UTM projection has, for example, a scale factor of 0.9996 at the Central Meridian. The result is that every kilometer on the CM is portrayed with a length of 999.6 meters. For every kilometer we calculate the distance to be 40 centimeters too short. As we move east or west from the CM, the scale factor will increase until it becomes 1 again. If we go even further east or west the distortion will increase again until it reaches a maximum of 1.000981 or almost a meter per kilometer too long at the edges of the projection.

Convergence
In the previous article it was mentioned that, as a rule, a single aspect of reality is projected, relatively undistorted, in the chart projection. For a UTM projection these are the angles measured. This is an advantage on larger-scale construction jobs. A common mistake, though, is that people think that since angles are undistorted, measured headings are also identical to the true heading. This is simply not true (with the exception of the CM). Headings on the chart cannot be transferred to the real world without some sort of correction. The main reason is that headings in reality are always referenced to true north while headings on the chart are always referenced to chart north (see cadre). The difference between these two norths depends on the position with respect to the origin of the projection and is therefore a variable. The difference between chart north and true north, namely the meridian convergence, varies between 0° and 3° for the UTM projection.
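
A minimal sketch of the two corrections discussed here, assuming the point scale factor and the meridian convergence at the site are already known (read from projection tables or software), and noting that the sign convention for convergence varies between sources:

```python
def grid_to_ground_distance(grid_distance_m: float, scale_factor: float) -> float:
    """Convert a distance measured on the chart/grid to the distance on the ground."""
    return grid_distance_m / scale_factor

def chart_to_true_heading(chart_heading_deg: float, convergence_deg: float) -> float:
    """Convert a chart (grid) heading to a true heading using the meridian
    convergence; check the sign convention of your projection software."""
    return (chart_heading_deg + convergence_deg) % 360.0

# On the UTM central meridian (k = 0.9996) a 1000 m grid distance is 1000.4 m on the ground.
print(grid_to_ground_distance(1000.0, 0.9996))
# With 2 degrees of convergence a chart heading of 45 degrees is 47 degrees true.
print(chart_to_true_heading(45.0, 2.0))
```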

Single UTM zone with parameters showing as well the difference between chart north (NC) and true north (NT)


Worldwide magnetic variation or declination for the epoch 1995-2000 (source: www.gly.fsu.edu)

Variation and Deviation

Historically headings are set out using a magnetic compass. With these not only the convergence is important, but the difference between magnetic north and true north, also called the variation or declination, is important as well. This difference is not a constant but will, due to the shifting of the magnetic Poles, vary slowly. Depending on the location, the variation can be several times larger than the convergence. Another factor that influences magnetic heading measurements is the deviation. This is the effect of local magnetic disturbances on the compass heading. Depending on the amount of (magnetized) iron in the neighbourhood of the compass, the deviation error can be as much as tens of degrees. The deviation is one of the reasons why compasses are mainly built of copper or, nowadays, plastic since these materials cannot be magnetized.

Setting out Distances

Depending on the projection selected, the scale factor usually does not pose a problem in small projects. With UTM, the effect will become readily noticeable over distances of just a few kilometers. Other projections such as the stereographic RD projection in the Netherlands have much smaller scale distortions (RD scale factor: 0.9999079), resulting in errors that are much smaller. As a result the scale factor can be ignored for much greater distances.

Setting out Headings

For every projection and project one needs to determine how much the effect of scale factor and convergence will influence project results. Even if the local effect is small, however, reading headings from the chart and setting them out in reality will probably result in an error. A single GPS receiver cannot be used for setting out headings. It is, of course, possible to measure two positions and compute the heading between them. Depending on whether one works with projected positions or geographic positions, the answer will be referenced to chart north or true north. There are special GPS compasses (having two or more antennas) that will indicate heading referenced to true north. When using such a system for setting out headings, one needs to take the convergence into account whenever a heading from the chart is set out. For a small area this error can be assumed constant. When using a magnetic compass, one needs to correct for variation and deviation as well as convergence. As a result, a magnetic heading can differ several degrees from the true heading.

Conclusion
In general one may assume the distortions mentioned (with the exception of deviation) to be constant over an area of a few square kilometers. Since a constant error is not identical to having no error at all, one needs to determine the convergence and scale factor (and variation if applicable) for every single project.
Huibert-Jan Lekkerkerk (hlekkerkerk@geoinformatics.com) is Editor-in-chief of GeoInformatics.

Applying convergence and variation when determining directions from a chart. - Np = projection or chart north - Nt = true north - Nm = magnetic north - c = convergence - d = declination / variation


How ArcGIS Server 9.2 Can Contribute to a Service Oriented Architecture

A Silent Servant for the Spatial Enterprise


ArcGIS Server 9.2 is announced to be the first GIS enterprise application server that implements GIS business logic in an information technology standards-based server environment. Therefore it draws upon the full spatial enablement of an enterprise. It is an answer to the integration of GIS functionality in business processes where the spatial view is critical as well as for the creation of added-value from existing information within business processes. By Florian Fischer

Service Oriented Architecture (SOA)
SOA enables IT departments to make the transition from an application-centric view of the world to a process-centric one. IT departments then have the freedom to combine business services from multiple applications to deliver true end-to-end support for business processes. This is achieved by utilizing integration mechanisms such as Web services that are loosely coupled SOA services. ESRI has responded to this fundamental shift in the technology landscape with full Web service integration. Jack Dangermond says: "This technology will produce a new group of spatial information consumers, knowledge workers who are not trained GIS professionals, who will benefit from access to the information provided by custom-tailored GIS-powered applications." ArcGIS Server enables the integration with other enterprise systems such as customer relationship management (CRM) or enterprise resource planning (ERP) systems using industry-standard software. While often cited, an overall SOA is not yet a reality. It is more a vision, or an objective worth achieving. Whoever is responsible for information management wants to define which parts of an application shall be transparent. A Web service acts as a black box that is only as transparent as necessary. One can build applications as if from Lego bricks, where single bricks can easily be replaced without the fear of a collapsing application. "The biggest autonomy remains at the small scope of the Web service," remarks Günter Doerffel, and adds: "At a more abstract level this allows for the smarter combination, creation and advancement of applications." The concept of a SOA can also be considered as the IT translation of the continuing shift from a business organisation stamped by vertical structures to more horizontal and networked structures.

The three tier architecture of ArcGIS Server. It allows common GIS functions to be delivered as services throughout the enterprise.

Once again I visited Günter Doerffel from ESRI Germany to talk about ArcGIS Server 9.2. This article will briefly show how ArcGIS Server 9.2 helps to exploit the locational context of the corporate information assets and will glance at the future of enterprise GIS, where more spatial context possibly means less GIS!

More Than Maps


So far web-mapping, that is serving digital maps via the internet, is the predominant paradigm of web-based geographical information systems. But that is only a part of GIS. Maps are the classical final product of a GIS workflow, but actually they are only one possible final product. Furthermore, web-based spatial data management and web-based geo-processing are gaining more and more interest in the enterprise sphere. There are many reasons to work server-based, and there is one important fact about ArcGIS Server 9.2 I would like to mention beforehand: everything that can be done with ArcGIS Desktop can be done server-based as well by ArcGIS Server 9.2. While using server-based processing, a desktop client is not occupied. Therefore a shift of workload from ArcGIS clients to ArcGIS Server is possible. For enterprises with many clients a shift like this can replace high investments in clients by a low investment for a server upgrade. Furthermore, even an ArcView licensed desktop can use server-based functions delivered by ArcGIS Server. ArcGIS Server 9.2 introduces out-of-the-box web-based editing functionality and serves Map Services, OGC WMS, KML, the Mobile ADF and many more. Moreover, web-service standards like SOAP and UDDI are supported to enable every developer to connect to ArcGIS Server. ESRI definitely has its sights on contributing building blocks for serving the spatial context within a service oriented architecture.

The Business Sphere


How can ArcGIS Server 9.2 transform a SOA into a geospatial service oriented architecture? As a matter of course, Web mapping services are included that support 2D dynamic maps as well as 3D globes. Based on the geodatabase model, it includes both workgroup- and enterprise-level spatial data management. Spatial data services allow administrators to publish geographic data for extraction, checkout/check-in and replication. Furthermore it offers server-based analysis and geoprocessing. This includes vector, raster, 3D, and network analytics; models, scripts, and tools; desktop authoring; and synchronous and asynchronous processing. But GIS is not an end in itself. ArcGIS Server 9.2 is not only about composing GIS applications out of the parts mentioned above. Non-GIS requirements have a presumably bigger share than GIS requirements outside of a pure geospatial domain. Of course there are GIS-centric applications, but in most SOAs ArcGIS Server will simply add the geospatial perspective to outline the bigger picture. Possibly here and there it will have a decisive function, but mostly it will provide additional functions to make use of spatial information within an enterprise environment where many other services run in parallel and shall be combined.

Authoring Services
Generally all ESRI Desktop products can be used for authoring services. For the creation of services GIS know-how is important. From the point of view of Günter Doerffel this is an area where the strengths of GIS professionals come into play: "Concerning the matter of chaining geospatial analyses to assemble a service, GIS professionals will find their feet. Possibly even in close collaboration with experts who are capable of implementing these services within an IT environment." These services may be consumed by GIS applications or by any applications which have the ability to integrate them within their IT. For instance, the Locator Service may be requested by a SAP application. Therefore SAP simply has to know what to tell the service and what and how it will deliver. Everything that is beneath is a black box. The service provider may decide how to access the service and may decide if data, maps or simply a yes-no answer is delivered.

Meaningful Relocation
Every geoprocessing function which is typically a local tool on your desktop GIS is available as a server tool in ArcGIS Server 9.2. The server can handle geoprocessing requests in an asynchronous mode, that is, the client is released after submitting a request. Other tasks can be performed while the geoprocessing service is handled in the background. By contrast, the synchronous mode, which is classical for web mapping applications, can continue only once a processing step has finished. This ability of server-side geoprocessing is very meaningful for both desktop and mobile spatial information processing. Savings in desktop licenses, desktop computing power and therefore labour time may be achieved by relocating computing power and functionality. However, James Fee asks in his GIS blog: "Are we beginning to see a shift away from ESRI Server backend to Open Source solutions?" He justifies this by experience with his customers, who were desperately looking for fast, cheap and reliable web clients. Finally most of them went out on their own to work with open source solutions. The other issue is that the added functionality of ArcGIS Server does not give any value for most customers. "Some of the functions of ArcGIS Server are impressive, but in the real world they have almost no applicability," James Fee argues.
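
The asynchronous pattern described under Meaningful Relocation (submit a job, receive a ticket, poll for the result) can be sketched independently of any particular vendor interface. The endpoint and JSON fields below are invented purely for illustration; they are not ArcGIS Server's actual API:

```python
import time
import json
import urllib.request

BASE = "http://example.org/geoprocessing"   # hypothetical job-submission service

def submit_job(payload: dict) -> str:
    """Submit a geoprocessing request and return a job id; the client is then free."""
    req = urllib.request.Request(f"{BASE}/jobs", data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["jobId"]

def wait_for_result(job_id: str, poll_seconds: float = 5.0) -> dict:
    """Poll the server until the asynchronous job has finished."""
    while True:
        with urllib.request.urlopen(f"{BASE}/jobs/{job_id}") as resp:
            status = json.load(resp)
        if status["state"] in ("succeeded", "failed"):
            return status
        time.sleep(poll_seconds)   # the client can do other work instead of blocking

if __name__ == "__main__":
    job = submit_job({"tool": "viewshed", "observer": [6.95, 50.94]})
    print(wait_for_result(job))
```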

Open-source Challenge
If that is true then it will be a hard bash for ESRI, but there is more than maps, and most likely James Fee's customers are pure geospatial evangelists. ESRI's ArcGIS Server 9.2 is rather about exploiting the locational context within an IT environment than about providing a GIS-only environment. It is more likely that people in hardcore geo-science will continue to work with desktop software for a long time and may use open-source web mapping only for presenting and querying their results. However, the power of open source is obvious when looking at the breadth of products that have been released during the last years, and the focus is clearly on web-based geospatial products. A reason for this development is the maturity of the OGC specifications by the Open Geospatial Consortium. These specifications give a proper framework for the development of geo web services. Furthermore they form a common framework for open-source projects to orient themselves on, and have given these projects a push. On the other hand, OGC standards are designed to hit the needs of a pure geospatial domain, and most open-source projects originated exactly there. Günter Doerffel remarks that the market for service infrastructures will experience an enormous development. Not only viewing will be of note; server-based geoprocessing will also be demanded in the future. It will be important to provide appropriate services to serve this demand. And it doesn't matter whether it is about a web-GIS application or any web application calling a spatial query. But ESRI is not incurious about the open-source adventure. In October 2006 ESRI joined the 52North Initiative as a founding member. 52North is an open initiative whose purpose is the development of open source software for Spatial Data Infrastructures (SDI). The current focus of development is Sensor Web Enablement, Security and Digital Rights Management. Products will be available using two licensing models: the GNU General Public License (GPL) and a commercial use license. So far ESRI is unsure where this adventure will lead, but is committed to trying it out.

Bird's eye view processed on the fly by ArcGIS Image Server

Mobile Services and Caching


Mostly, mobile applications are deployed for recording a spatial situation and creating a context. With server-side geoprocessing, complex models and simulations may be requested by a mobile device supplying local input data. These simulations can then be executed server-side and the result is sent back to the mobile device. That is a really interesting field of application, but in the majority of cases data visualisation on mobile devices is the predominant service. A mobile device is therefore not considered as a device to save spatial data on, and only in the worst case for intermediate data storage. Eventually, spatial data shall be stored on the server sooner or later. The mobile application in turn never accesses the server itself but the cache. ArcGIS Server 9.2 offers many strategies for cache management, while the overall aim is an application that is never hampered if there is no current connection to the server. Conversely, a mobile application should at least have a connection every now and then; otherwise an independent mobile application is the better solution. For temporally highly dynamic data, caching is a critical procedure. Therefore ArcGIS Server offers caching of selected layers next to ex ante caching, caching on runtime, partial caching with updates and caching by degrees, amongst others.
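
The "caching on runtime" strategy mentioned above can be illustrated with a very small on-demand cache; the fetch callable is a stand-in for whatever server request the mobile application actually makes:

```python
class TileCache:
    """On-demand cache: tiles are fetched from the server only once, and the
    mobile application keeps working from the cache when the connection drops."""

    def __init__(self, fetch_from_server):
        self._fetch = fetch_from_server      # callable (layer, x, y) -> bytes
        self._store = {}

    def get(self, layer: str, x: int, y: int) -> bytes:
        key = (layer, x, y)
        if key not in self._store:           # cache miss: go to the server once
            self._store[key] = self._fetch(layer, x, y)
        return self._store[key]              # cache hit: no connection needed

# Usage with a dummy fetch function standing in for the real service call.
cache = TileCache(lambda layer, x, y: f"{layer}/{x}/{y}".encode())
print(cache.get("parcels", 10, 12))
print(cache.get("parcels", 10, 12))          # served from the cache
```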

Catch a Glimpse

My visit at ESRI Germany would not have been complete without a short look at the ArcGIS Image Server. ESRI integrated this image server technology into its own product portfolio from MAPS geosystems, a leading producer of geospatial imaging solutions and a long-time ESRI business partner. The ArcGIS Image Server is a high-performance server for image data such as aerial photos. The image server even allows the visualisation of bird's-eye views like the ones from Pictometry, which are integrated in Microsoft's Virtual Earth platform. These images show a postcard view of a landscape and therefore do not have a uniform pixel resolution. Furthermore the images are overlapping, as they were made from five different directions: North, South, West, East and orthogonal. Thus the ArcGIS Image Server must handle a varying resolution, overlapping images and a huge amount of image data on the fly and on demand! And it does. The image data files may be stored unsorted and the Image Server will sort them on demand according to the requested attributes. It is not an easy task to present these bird's-eye views, but Image Server really does a good job, even if there are some small mismatches every now and then. This is of special interest for security-relevant applications. The police force or the fire brigade may get a quick overview of the local situation: Where are the doors of a building? Where are the windows? What does the backyard look like? ArcGIS Image Server gives an impression of what might be possible in the future of displaying image data. The performance is solely up to the speed of the hard disk.

Maybe the Best Service is the One You Never Notice

Finally, with ArcGIS Server 9.2 a twofold shift takes place. First of all, a shift from an application-centric perspective to a process-centric perspective. This is common for the whole IT landscape nowadays. Another shift takes place from a pure GIS-centric application focus to the idea of serving the geospatial aspects for a non-GIS SOA. Depending on the level of integration, users may not even realize they are implementing GIS techniques and processes. And ArcGIS Server somehow transforms into a silent geospatial servant for many business and management processes.

Florian Fischer (ffischer@geoinformatics.com) is a Contributing Editor GIS of GeoInformatics. Links: ESRI What's New in ArcGIS 9.2: www.esri.com/software/arcgis/about/whats-new.html; David Maguire's weblog GIS Matters: http://gismatters.blogspot.com/; James Fee GIS Blog: www.spatiallyadjusted.com/2006/10/09/.


Part 3: Data Model

Standards in Practice
In order for it to be shared, information needs to be defined in such a way that no confusion about its content is possible. Within the geo-sector it is a matter of information about geographic objects, also called features or geo-features. Not only the type of object is recorded, but also its properties or attributes and its relations with other features. By Huibert-Jan Lekkerkerk
Data Models
What they are for: Defining information elements, their attributes and their relations with other information elements.
Relevant standards:
- ISO 19103: Use of UML for data models
- ISO 19107: Description of geographic and topologic attributes
- ISO 19108: Description of temporal attributes
- INSPIRE: draft implementing rules for data harmonization
Technical implementation: Unified Modeling Language (UML)
Legal basis: None yet; the first INSPIRE inquiry is currently underway.

A so-called data model is often used to describe information. In everyday practice the use of data models is frequently coupled with relational databases. The difference between a data model and the structure of a database, however, is that the latter is a technical implementation of a previously defined, logical data model.

Data Models
Within the regular ICT industry, designing data models has been common practice for the last decade or so; every database development warrants the development of a data model. Within the geographic information (or geo-ICT) industry, however, the creation of data models was long either not possible or simply not performed. A layer structure was usually defined within the software, and specific attributes were then coupled to a specific layer. With the introduction of object-oriented data acquisition and processing, the storage of information within layers is becoming harder and harder, and therefore many organizations are switching to object-oriented data models.

Simple data model for exchanging surface water information.

Generic Data Model


If a certain group (or groups) of people can decide on the definition and content of a shared set of objects, a generic data model that makes data exchange between parties possible can be defined. In the Netherlands such a generic geographic data model was developed with NEN, the Dutch national standardization body.

The resulting standard, NEN3610: Base model for geography, now serves as an anchor for sector-specific models. Sector-specific models use it as the basis from which their own model is extended. Currently over ten different sector-specific models have been developed, including the Information Model Water (IMWA), the Information Model Spatial Planning (IMRO) and the Information Model Large-scale Topography (IMGEO).

Code list with water types.

How Does it Work?

There are international standards for the creation of geographic data models. The main standard used is UML, the Unified Modeling Language. This is a very powerful language that can be used to create XML and GML schemas. Using a simple example (see figure) I will now try to explain some of the basics of UML data modeling.

Classes
The basis of any UML model is the objects or classes, of which three are pictured in the example (GeoFeature, Water and Waterpart). GeoFeature is a base class of which all other classes are subclasses or specializations (closed arrow). This class contains all the generic properties (attributes), such as identification and name, that are common to the other classes. The class Water is an example of a subclass. This class may seem to have no attributes but in fact inherits all attributes from GeoFeature. The class GeoFeature is an abstract class (its name is in italics) which is never found in the real world. In order to use the attributes of GeoFeature, one needs to create a specialization such as Water. The class Waterpart in turn inherits all the attributes of GeoFeature (via Water) and adds its own attributes.

Attributes
Different types of attributes can be found in the example. There are general attribute types such as text [CharacterString] and [DateTime]. There are also specific types for geometries such as PointGeometry [GM_Point] and SurfaceGeometry [GM_Surface]. Further, there are attributes that have a sort of pull-down list attached. Such a list is called an enumeration list (fixed) or a code list (extendible). An example of a code list is given in the figure on the left. Finally, we can see compound attributes. These are complex attributes that are built around their own attributes. They look like classes but are in fact part of a class. In the example this is the case for the value attribute.

Compound data type value with accompanying Codelist.

Relations and Cardinality
In simply defining classes and attributes, the relationship between classes or the number of times a certain attribute may be used within a class is not defined. With the help of associations, however, the relations between classes can be defined. In the example we see that the class Water is built from Waterparts. This is indicated with an open arrow that may or may not be combined with an open or closed diamond. Another association is demonstrated with GeoFeature, which has associations to indicate that it is derived from another GeoFeature or that it is lying above or below another GeoFeature. If, for example, we decide to build a certain water from other waterparts, then that specific water changes but is still derived from the original water. The exact time at which the change was made can be indicated in the example with the temporal attributes objectBeginTime and objectEndTime. Finally, the number of times an association or attribute may be used is denoted with cardinality:
[1] or no notation: mandatory attribute or association.
[0..1]: optional attribute or association; it may be used, but does not have to be.
[0..*]: optional attribute or association that may be used multiple times.

Legalization
At the moment there is no legal basis for creating data models, although it is common practice. Within the INSPIRE directive, a drafting team is currently investigating whether a common data model such as the Dutch NEN3610 can be developed for Europe. If such a data model is conceived, it will probably limit itself to the class GeoFeature and will for that class define attributes such as identification as well as temporal attributes.
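To make the class hierarchy and cardinality notation above concrete, here is a small illustrative sketch in Python. It is not generated from NEN3610 or IMWA; apart from the attribute names the article mentions (identification, name, objectBeginTime, objectEndTime), the code-list values and the geometry placeholders are simplified assumptions.

```python
# Illustrative sketch only: a hand-written Python rendering of the example
# UML model (GeoFeature, Water, Waterpart). Attribute names follow the article
# where given; the water-type code list values are invented placeholders.
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum
from typing import List, Optional


class WaterType(Enum):
    # A code list is extendible in the standard; an Enum is a fixed stand-in.
    RIVER = "river"
    CANAL = "canal"
    LAKE = "lake"


@dataclass
class GeoFeature:
    # Abstract base class: only specializations are found in the real world.
    identification: str                          # [1]    mandatory
    name: Optional[str] = None                   # [0..1] optional
    objectBeginTime: Optional[datetime] = None   # temporal attributes
    objectEndTime: Optional[datetime] = None


@dataclass
class Waterpart(GeoFeature):
    # Specialization: inherits all GeoFeature attributes and adds its own.
    surfaceGeometry: Optional[str] = None        # stand-in for a GM_Surface value


@dataclass
class Water(GeoFeature):
    waterType: Optional[WaterType] = None                  # attribute with a code list
    parts: List[Waterpart] = field(default_factory=list)   # [0..*] built from Waterparts


# Usage: a Water built from one Waterpart (identifiers are invented examples).
canal = Water(identification="W-001", name="Town canal", waterType=WaterType.CANAL,
              parts=[Waterpart(identification="WP-001")])
```

In a real exchange the same model would be expressed as a GML application schema rather than program code; the point here is only to show how the inheritance and cardinality of the UML diagram translate into structure.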
Huibert-Jan Lekkerkerk (hlekkerkerk@geoinformatics.com) is project manager Standards at IDsW and editor-in-chief of GeoInformatics.

Building surface waters from water parts.


Product News

TopoGX 2D to 3D DXF Converter


TopoGX is a software application that converts a 2D DXF drawing into a useful 3D DXF drawing. It is intended for surveyor-produced drawings, but is equally suited to any 2D drawing where the Z levels are represented as text items. Additionally, TopoGX contains a powerful triangulation engine which can produce a constrained Delaunay triangulation (viewable surface) of the 3D drawing almost instantaneously. TopoGX converts hundreds of thousands of points in a matter of seconds with one click of a button, typically five seconds on a modern computer system. TopoGX converts numeric text to Z levels, with the X and Y values being taken from an assigned cross, block or line end. Automatic level-range correction filters out incorrect levels and provides interpolation along polylines where levels may be missing. An intuitive 2D viewer enables the user to zoom, pan, window and centre the imported DXF in plan view. Colour shading of the converted 3D DXF according to level height provides visual feedback on the terrain for quick and easy site appraisal. The free Google Earth application allows for easy movement around the 3D DXF, and using a clear surface provides quick viewing of any survey errors. The TopoGX viewer provides additional functionality to display and control contour lines and directional arrows to fully understand the undulations of the surveyed area. TopoGX includes a simple set of DXF tools to insert and edit both 2D and 3D points. A breakline/constraint tool provides further DXF editing where the original data may have been missing or incorrect. The output file formats are: 3D DXF file, 3D polyline mesh in a DXF file, ASCII X,Y,Z file, MicroDrainage triangulated surface *.pwf file, Google Earth Placemark (for UK OS Grid) *.kml file and TopoGX project *.erd file. Internet: www.cabs-cad.com
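The core idea of such a conversion can be sketched with open-source tools. The snippet below is an independent illustration using the ezdxf and SciPy packages, not TopoGX code, and it performs only a plain (unconstrained) Delaunay triangulation; TopoGX's constrained triangulation, polyline interpolation and cross/block matching go well beyond this:

```python
# Sketch: read numeric TEXT entities from a 2D DXF, treat their values as Z
# levels at the text insertion points, filter implausible levels, and build a
# plan-view Delaunay triangulation of the resulting 3D points.
import ezdxf
from scipy.spatial import Delaunay


def text_levels_to_points(dxf_path, z_min=0.0, z_max=1000.0):
    """Collect (x, y, z) tuples from numeric text items, with crude level-range correction."""
    doc = ezdxf.readfile(dxf_path)
    points = []
    for text in doc.modelspace().query("TEXT"):
        try:
            z = float(text.dxf.text)      # the annotation itself is the spot level
        except ValueError:
            continue                      # non-numeric annotation: skip it
        if not z_min <= z <= z_max:
            continue                      # outside the plausible level range
        pos = text.dxf.insert             # X, Y taken from the text insertion point
        points.append((pos.x, pos.y, z))
    return points


def triangulate(points):
    """Return triangle vertex indices for a plan-view (2D) Delaunay triangulation."""
    tri = Delaunay([(x, y) for x, y, _ in points])
    return tri.simplices


# Example call (hypothetical file name):
# pts = text_levels_to_points("survey_2d.dxf", z_min=5.0, z_max=250.0)
# triangles = triangulate(pts)
```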

Leica Geosystems Introduces GMX902 GG Receiver


Leica's new GMX902 GG is a high-performance GPS + GLONASS receiver, specially developed to monitor sensitive structures such as bridges, mines or high-rise buildings and critical topographies such as landslides or volcanoes. It provides precise dual-frequency code and phase data at up to 20 Hz, enabling precise data capture as the basis for highly accurate position calculation and motion analysis. As with the other receivers in the GMX900 family, the GMX902 GG has been designed and built purely for monitoring applications. The key characteristics of the GMX900 family are low power consumption, high-quality measurement, simplicity and durability. The Leica GMX902 GG is an ideal receiver for deformation monitoring, with superior tracking of satellites from both the GPS and GLONASS constellations. The GMX902 GG is also well suited to atmospheric studies and ionospheric scintillation research, with 20 Hz measurement of high-precision dual-frequency code, phase and signal-to-noise ratio. Internet: www.leica-geosystems.com

Océ TCS300/500: Different Price


The July/August issue of GeoInformatics contained tables comparing different large-format printers. Two of these were the Océ TCS300 and TCS500; the list price of these printers starts at 8,700 euro, not at 3,200 euro as stated. Internet: www.oce.com


Leica Geosystems Announces ScanStation 2


Leica Geosystems announced ScanStation 2, a major advance in the capabilities of pulsed (or time-of-flight) laser scanners for as-built and topographic surveys. The maximum instantaneous scan speed of ScanStation 2 is 50,000 points/second, more than ten times that of its ScanStation predecessor (4,000 points/second) and the highest in the industry for pulsed scanners. Leica ScanStation 2 retains the four fundamental total station features that defined ScanStation as a new category of laser scanner:
- Full 360° x 270° field-of-view (FOV)
- Survey-grade dual-axis tilt compensation for traversing and re-sectioning
- Survey-grade accuracy for each measurement
- Excellent measuring distance (300m at 90% albedo)

In addition to field productivity gains for many applications, ScanStation 2's bar-raising scan speed also lets users:
- Collect data in tighter time windows
- Reduce time spent in hazardous locations
- Provide project results even faster
- Collect even more complete as-built data
- Squeeze in additional service requests from clients

Internet: www.leica-geosystems.com/hds

Sokkia Introduces Robotic 3-D Station NET1


Sokkia BV released the NET1 robotic 3-D station, offering enhanced measurement efficiency for industrial applications. The NET1 incorporates the latest total station technologies (auto-pointing, auto-tracking, reflectorless measurement and wireless control) to increase efficiency in a wide range of applications. Sokkia's NET series are 3-D industrial stations which can be used for measurements in shipbuilding, large-scale building construction, general steel construction, wagon construction and wind energy projects, but also for deformation monitoring of tunnels, dams, buildings and landslides. The new robotic 3-D station can automatically search for and point to prisms and reflective sheets, with an auto-pointing range of up to 1,000m using prisms. A dedicated auto-pointing algorithm allows it to sight the target closest to the telescope center, even if other reflective objects are in the telescope's field of view. This algorithm is indispensable for automatic deformation monitoring applications where fixed targets are repeatedly measured at pre-determined intervals. Internet: www.sokkia.net

BAE Systems Software Link-up with Google Earth

A new version of BAE Systems' image analysis and mapping software enables analysts to evaluate and share intelligence data more effectively by integrating with Google Earth and the ESRI geodatabase. SOCET GXP v2.3 interacts with Google Earth in real time for quick 3D color visualization and gives geospatial context to objects of interest, resulting in enhanced intelligence for mission planning. With additional tools for detecting changes from one day to the next, analysts can anticipate conditions such as rough terrain or collapsed bridges and pinpoint operational routes more accurately. It also provides a direct connection to the ESRI geodatabase, the Environmental Systems Research Institute's common data storage and management framework. Connection with the database allows users to work with data over secure networks for accurate, timely analysis. SOCET GXP v2.3 is available on Microsoft Windows and UNIX Solaris 8, 9 and 10 operating systems and supports ground space graphics for a wide range of government and commercial sources. Internet: www.baesystems.com

Enhanced Software and Bluetooth for Sokkia's Series 30RK


Sokkia BV announced that its Series 30RK now incorporates communication functions designed to increase work efficiency, with Bluetooth wireless communication and SFX Internet Data Transmission functions. The SFX function, fitted as standard, enables data transfer via the Internet using mobile phones. The Bluetooth wireless communication function is now available for the Series 30RK as a factory option, providing cable-free communication with data collectors (via integrated Bluetooth technology). New is that the Series 30RK's Bluetooth wireless communication modules have a dial-up function: SFX can be used without cables if the mobile phone also incorporates Bluetooth wireless technology. Sending and receiving data from the Series 30RK can be done immediately in the field by connecting to a mobile phone with a modem. This latest version of the Series 30RK is equipped with enhanced software and surveying programs. The Series 30RK has a robust IP66 level of dust and water resistance and a reflectorless distance measuring range of up to 350m (starting from 30cm). Internet: www.sokkia.net


Industry News

Satellite Delivered to Vandenberg Air Force Base

Launch Date for WorldView-1


Ball Aerospace & Technologies Corp., ITT Corporation and DigitalGlobe have delivered the WorldView-1 satellite to Vandenberg Air Force Base in California for its scheduled launch on Tuesday, September 18, 2007. WorldView-1 is the first of two new next-generation satellites DigitalGlobe plans to launch.

The WorldView-1 satellite is delivered to Vandenberg Air Force Base in California.

Upon launch on September 18, WorldView-1 will undergo a calibration and check-out period and will deliver imagery soon after. First imagery from WorldView-1 is expected to be available prior to October 18, the sixth anniversary of the launch of QuickBird, DigitalGlobe's current satellite. WorldView-1 will have an average revisit time of 1.7 days and will be capable of collecting up to 750,000 square kilometers (290,000 square miles) of half-meter imagery per day. The satellite will also be equipped with geo-location accuracy capabilities and will exhibit stunning agility, with rapid targeting and efficient in-track stereo collection. The addition of WorldView-1 and WorldView-2 in the coming months will bring the total number of satellites DigitalGlobe has in orbit to three, completing a constellation of spacecraft that will offer the highest collection capacity: more than 1 million square kilometers per day.

Part of the NGA's Program
WorldView-1 is part of the National Geospatial-Intelligence Agency (NGA) NextView program. The NextView program is designed to ensure that the NGA has access to commercial imagery in support of its mission to provide timely, relevant and accurate geospatial intelligence in support of national security. The majority of the imagery captured by WorldView-1 for the NGA will also be available for sale through DigitalGlobe's archive. Additionally, the launch of WorldView-1 immediately frees up capacity on DigitalGlobe's QuickBird satellite to meet the growing commercial demand for multi-spectral geospatial imagery.

Side-by-side
"Ball Aerospace and DigitalGlobe have worked side-by-side on commercial remote sensing satellites for more than a decade to create one of the most capable systems in orbit," said David L. Taylor, president and CEO of Ball Aerospace. "The next-generation WorldView-1 and WorldView-2 satellites will capture more imagery than ever before due to the flexibility afforded by the Control Moment Gyro-based system designed by Ball Aerospace."
"Not only will ITT's digital imaging sensor for WorldView-1 boast half-meter resolution with three-meter geo-location, it'll do so using less space, weight and power than any previously launched system," said Frank Koester, vice president and director, Commercial & Space Sciences Programs, ITT Space Systems Division, based in Rochester, New York. "ITT looks forward to the successful test and launch of WorldView-1, followed by further success providing the sensor system for DigitalGlobe's WorldView-2."

www.digitalglobe.com

1Spatial Part of Preferred Supplier Team for Ordnance Survey

1Spatial announced that it is part of Intergraph's Preferred Supplier Team, chosen by Ordnance Survey of Great Britain, for the provision of its new Geospatial Database and Data Management System (DDMS). Ordnance Survey's decision marks an important stage in the selection process although it does not yet constitute a contract award. The system will provide centralised planning and management of Ordnance Survey's production activities, in addition to managing the large-scale data holdings that are used to generate market-leading products, such as OS MasterMap.
www.1spatial.com

Septentrio Strengthens Activities in North America and Opens US Office

Septentrio appointed J. Christopher Litton as Business Development Manager to start up and run its North American operations.
www.septentrio.com

Intermap Technologies and GAF AG Partner

Intermap Technologies Corp. and GAF AG have signed an agreement to allow GAF AG to immediately begin distributing Intermap's high-resolution 3D digital elevation data and geometric images throughout Germany and the rest of Europe.
www.intermap.com www.gaf.de

TomTom Makes Cash Offer for Tele Atlas

TomTom N.V. made a cash offer of €21.25 per ordinary share for Tele Atlas N.V. The Offer Price represents a 32% premium over Tele Atlas' average closing share price for the three months prior to 20 July 2007. The Supervisory Board and Management Board of Tele Atlas support the Offer and will, when the Offer is made by TomTom, recommend the Offer to Tele Atlas shareholders.
www.tomtom.com

Leica Geosystems Extends SmartNet Service to Ireland

Leica Geosystems has extended its SmartNet RTK correction network to Ireland, making it the first commercial RTK network fully operational in the region. Leica Geosystems has been working in partnership with Ordnance Survey Ireland (OSI) and Ordnance Survey Northern Ireland (OSNI) over the past year to extend the popular commercial network to cover the entirety of mainland Ireland. Dublin-based Survey Instrument Services (SIS) has been appointed as a distribution partner for Leica SmartNet in Ireland. SmartNet Ireland is enabled by a network of 19 OSI and OSNI base stations across mainland Ireland.
http://smartnet.leica-geosystems.co.uk

3D Laser Mapping in DARPA Challenge

Nottingham-based 3D Laser Mapping is playing a key role in the development of robotic vehicles with the latest laser guidance technology. The company distributes Riegl laser scanners, which have been selected by seven of the teams hoping to compete in a $2 million prize competition for driverless cars being held in America. The vehicle-mounted laser scanners provide a critical view of the street environment, helping the robots negotiate obstacles and other road users. Sponsored by the Defense Advanced Research Projects Agency (DARPA), the central research organisation of the United States Department of Defense, the challenge offers a total prize fund of $3.5 million, with the winning robotic vehicle receiving $2 million.
www.3dlasermapping.com

TopoSys North America Opens Denver Office and Adds Industry Veteran to Staff

TopoSys GmbH has opened a US office in Denver and appointed industry veteran Roland Mangold as its director of business development. For the past two years, Mangold was the organizer of the International LIDAR Mapping Forum (ILMF) and business development manager at Spectrum Mapping, LLC. Prior to that, Mr. Mangold was the founder and publisher of Earth Observation Magazine. He has 17 years of sales, marketing and communications experience in the geospatial industry.
www.toposys.com

Cadcorp SIS-based Application from Vicrea Solutions for Dutch Municipalities

Cadcorp has announced that its business partner in the Netherlands, Vicrea Solutions BV, has recently won contracts from several Dutch municipalities to supply and implement Vicrea's Geo Vastgoed Registraties (GVR) application, which has been developed using the Cadcorp SIS Spatial Information System.
www.cadcorp.com

Definiens Expands in the North American Market

Definiens is preparing for further growth and has formally appointed Greg Calaman as Vice President of its operations in North America. Mr. Calaman has been a member of Definiens' management team since April 2006. His promotion to the role of Vice President of the North American Operations reflects Definiens' success in this market, as the company continues to grow its revenue and customer base.
www.definiens.com

Manila Water Chooses Bentley's WaterGEMS V8 XM Edition

Manila Water, one of the largest water and wastewater service providers in the Philippines, has selected WaterGEMS V8 XM Edition, Bentley's water distribution modeling solution, to manage the water network serving more than five million people in the East Zone. This area includes eastern Metro Manila and portions of Rizal province. WaterGEMS V8 XM will enable Manila Water's staff to exchange data among MapInfo, ArcGIS, AutoCAD and existing water models.
www.bentley.com

Intermap Technologies Announces 3D Map Products

Intermap Technologies Corp. launched AccuTerra, the company's newest product offering, which provides existing outdoor GPS and PND products with 3D maps and off-road points-of-interest (POI) integrated with interactive 3D rendering software. The product addresses a market that is currently limited to two-dimensional data and provides limited or no map coverage once you leave paved roads. The user interface includes realistic 3D views; accurate elevation information; clearly identified and classified trails, paths and roads (overlaid on the 3D terrain); outdoor-specific points of interest such as campgrounds, service facilities and trail heads; the ability to route to points of interest and track progress; easy-to-reference visualization tools to improve trip planning and safety; and a land use display that depicts the location of public and private property, including areas of restricted use.
www.Intermap.com

DigitalGlobe Expands Distribution Network in Australia and New Zealand

DigitalGlobe announced the addition of Geoimage Pty Ltd of Brisbane, Australia to its network of distribution partners. Under terms of the agreement, Geoimage will resell DigitalGlobe's high-resolution satellite imagery throughout Australia, New Zealand, Papua New Guinea and the islands of the South West Pacific. Geoimage joins Sinclair Knight Merz (SKM), a long-standing DigitalGlobe partner, in servicing the geospatial needs of the region.
www.digitalglobe.com

Optech Plays Key Role in NASA's Phoenix Mars Mission

Optech LIDAR technology is scheduled to be launched toward Mars aboard NASA's Phoenix Mars Lander on August 3rd. Canada is playing an important role in this mission by contributing a meteorological station to track the weather and climate on Mars. The main sensor of the meteorological station is a lidar instrument designed by Optech and built in collaboration with MDA Space Missions, the Canadian Space Agency, and leading scientists from across Canada and the US.
www.optech.ca

ESRI Health Conference Explores GIS Solutions

ESRI's Health GIS Conference will be held October 7-10, 2007, at the FireSky Resort and Spa in Scottsdale, Arizona. The conference will provide a global forum for discussing how geographic information system (GIS) applications combine the power of location and information technology (IT) to analyze and communicate health and human services issues and challenges.
www.esri.com/events/health

