Unlike today’s standalone geological and geophysical (G&G) software environments, where data
management is essentially unstructured, a number of more structured approaches to upstream data
management enable asset teams to access information of known quality, share vital data and collaborate
across technical domains, while preserving hard-won knowledge and results for reuse in future projects.
In the quest to find and produce oil and gas resources more efficiently, operators depend on a variety
of specialized software applications for analysis, interpretation and visualization of digital exploration
and production (E&P) data. In recent years, standalone workstation-based applications for geological,
geophysical, petrophysical and reservoir modeling workflows have become increasingly widespread.
E&P organizations initially bring in standalone packages to help meet a specific, urgent business
need. Typically, standalone applications are chosen based on specific functionality, price/performance,
or ease of use. Many share a similar underlying architecture. They have relatively
simple data storage and access mechanisms designed to optimize performance for a single G&G
professional working on a single project. Project data are stored either in a flat file structure on the
user’s hard drive, in a local relational database (such as Microsoft SQL Server) or on a shared network drive.
LANDMARK SERVICES WHITE PAPER
Operators want a data management environment that allows asset teams to access high-quality data,
share information across domains and preserve results for reuse in future projects.
After highlighting the risks and costs inherent in today’s multi-vendor standalone application
environments, this paper presents three potential ways of taming the data management chaos.
We outline the variations, advantages and disadvantages of each alternative. Then we offer
recommendations for basic and advanced implementations, based on our experience resolving
upstream data management challenges.
Unstructured, standalone environments typically create several recurring problems:
• Missing data items. Adding new data to one project means current data are missing from
others. Over time, it becomes increasingly difficult to determine the origin and validity of
particular items.
• Loss of data and knowledge. Users often discard previous projects and begin again from
scratch since they lack confidence in existing information. In the process, value-added
knowledge and actual data may be lost.
• Repurchase of existing data. As projects and data proliferate, intelligent reuse of data
becomes challenging or impossible. Original data sets may be lost or misplaced, requiring the
company to repurchase data it already owns.
Each potential solution requires, at minimum, the implementation of two components: (1) a central
or common G&G data repository, and (2) a consistent form of data transfer between multiple applications
and the central repository. Consider the advantages and disadvantages of each alternative.
Several E&P software providers today actually have such a solution—a relatively unified set of
applications with an optional database designed solely for those applications. Within this closed,
multi-domain environment, consistent data transfer may exist, providing real improvements in
data sharing, handling and storage. However, there are significant limitations. For one thing, if the
optional data management system is relatively new, it may be immature or unproven. It may not
cover the full range of G&G data types required. It may lack robust utilities for data loading, quality
control, user access and security. The biggest disadvantage, however, is that a database designed
for just one vendor’s applications will not exchange data with applications from other vendors.
A. Common data repository. Two well-established alternatives are available today for the storage
and management of upstream industry data. One is an open E&P data model from an industry
consortium, such as the Public Petroleum Data Model (PPDM) Association, which requires custom
development prior to implementation. For example, it must be wrapped with loaders, data QC tools,
other utilities and connectors to move data to and from applications. The other alternative is a
proven commercial, vendor-neutral E&P data repository, such as the OpenWorks® system from
Landmark, which comes with significant data management tools and capabilities out of the box.
The range of data types covered by either of these structured approaches to data management is
extensive. Either can be deployed as a common data repository for standalone applications from
multiple vendors. The major difference lies in the amount of internal effort and expertise required
to customize, install, connect and maintain the system.
At minimum, a common repository must provide adequate security so users can access data
without inadvertently modifying or degrading it. The system must support concurrent, multi-user
access so teams can easily work with the same data. It must handle multiple coordinate reference
systems (CRS), so data loaded in one format can be converted to the CRS required by another
standalone application. Data must be validated so users know they meet sufficient quality standards.
Finally, users need access both to original data and to project results.
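The CRS requirement is worth a concrete illustration. The sketch below handles only the simplest case, a local grid defined by scale, rotation and offset; real repositories delegate full geodetic reprojection to a dedicated library, and the function and parameter names here are hypothetical.

```python
import math

def affine_transform(scale, rotation_deg, tx, ty):
    """Build a function mapping (x, y) in a local grid CRS to a scaled,
    rotated and shifted target CRS.  This covers only flat, local grid
    transforms -- full geodetic conversion needs a dedicated library."""
    theta = math.radians(rotation_deg)
    c, s = math.cos(theta), math.sin(theta)
    def transform(x, y):
        return (scale * (c * x - s * y) + tx,
                scale * (s * x + c * y) + ty)
    return transform

# Map a well location from a local survey grid into project coordinates:
# rotate 90 degrees, then shift by (1000, 2000).
to_project = affine_transform(scale=1.0, rotation_deg=90.0, tx=1000.0, ty=2000.0)
x, y = to_project(100.0, 0.0)  # -> approximately (1000.0, 2100.0)
```

A production transfer layer would instead look up source and target CRS definitions from the repository and apply a datum-aware transform.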
B. Consistent data transfer. Even if an operator implements one of these upstream data
management systems, it is still necessary to transfer data into and out of existing standalone G&G
applications. Again, a number of options are available.
(1) A horizontal industry extract transform load (ETL) engine could be used to consolidate data
from existing data stores in the central repository. However, generic ETL systems lack preconfigured
connections for the unique G&G applications and data types operators want to integrate. Often a
third-party firm or consultant must create appropriate data connectors, which can be costly.
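To make the connector problem concrete, a single ETL pass can be sketched as follows, using SQLite as a stand-in for both the application store and the central repository; the table schemas (app_wells, repo_wells) and the unit conversion are hypothetical examples of the per-application mapping a connector must encode.

```python
import sqlite3

def etl_wells(source_conn, target_conn):
    """One extract-transform-load pass: pull well headers from a standalone
    application's store, normalize units, and load them into the central
    repository.  Both schemas here are hypothetical."""
    rows = source_conn.execute(
        "SELECT name, depth_ft FROM app_wells").fetchall()       # extract
    normalized = [(name, depth_ft * 0.3048)                      # transform: ft -> m
                  for name, depth_ft in rows]
    target_conn.executemany(
        "INSERT INTO repo_wells (name, depth_m) VALUES (?, ?)",  # load
        normalized)
    target_conn.commit()
    return len(normalized)
```

The expensive part in practice is not this skeleton but the mapping logic: every application exposes its own schema, units and naming conventions, and each needs its own connector.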
(2) Another option is for internal IT staff to write their own custom links. One simple method would
be a programming level script that automates the manual data import/export process. However,
this does nothing to resolve data redundancy, consistency, quality and collaboration issues. In the
past, many large oil and gas companies developed efficient, hardwired, point-to-point connectors
between standalone applications and databases using application programming interfaces (APIs)
and software developer’s kits (SDKs) from the various vendors. These proprietary links could, in fact,
move data very quickly. But IT departments found it slow, painful and expensive to develop, support,
maintain and upgrade them over time. Every new release of an application or database required a
rewrite of the link, often from scratch, which operators found impractical and unsustainable.
(3) A third, more viable option would be to deploy a commercial E&P middleware platform
specifically designed to transfer data to and from the most common standalone G&G applications.
Such a system should automate data transfers and provide users easy access to data, ideally
through their own application interfaces. No one wants to log into a separate data management
tool. Users should be able to select data they want to preserve and publish it to the central
database from within their own applications. One example of a widely accepted middleware
system is TIBCO’s OpenSpirit® application, which provides preconfigured data connectors and
copy tools out of the box, as well as regular updates whenever vendors release new versions of
their technology. It also maintains connectors to both PPDM and OpenWorks® repositories.
There are potential advantages in developing a custom, multi-vendor data management system by
combining various commercial and proprietary components. But the smaller the operator, the less
likely sufficient IT resources or expertise would be available for such an undertaking. Even large oil
and gas companies today are minimizing non-core activities such as IT integration.
Nevertheless, there are several issues operators need to be aware of. More sophisticated data
management systems may require additional personnel and budget. However, adding a few more
data management staff frees up G&G professionals to spend more time on higher-value analysis and
interpretation. To ensure a uniform environment across the organization, it is necessary to establish
and enforce data standards, governance and validation processes. This may require guidance from
outside experts. It can also take substantial time and effort to clean up, establish quality control (QC)
and load data into the new central repository. This, too, may require third-party services.
Operators looking for immediate benefits from a structured approach to data management can begin
simply by installing a robust data repository and data transfer layer and migrating data from one or
two existing standalone applications. Connections to more applications can be added over time.
(1) More sophisticated data validation and QC tools empower data managers to provide better
service to users. Even after a central repository is implemented, users may still question the
completeness or integrity of data unless proper quality assurance is in place. Data managers
need to compare multiple data sets at once, detect missing or duplicated information and clean up
increasingly large volumes of data. They need to visualize complex data of many kinds—seismic,
well logs, numerical data and text.
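The core of such a comparison can be sketched in a few lines; the well identifiers and the two-store layout below are hypothetical.

```python
from collections import Counter

def compare_datasets(corporate, project):
    """Compare a project load against the corporate store: flag identifiers
    duplicated within the project and identifiers the corporate store does
    not recognize.  The well names are hypothetical."""
    counts = Counter(project)
    return {
        "duplicates": sorted(w for w, n in counts.items() if n > 1),
        "unrecognized": sorted(set(project) - set(corporate)),
    }

report = compare_datasets(
    corporate=["W-01", "W-02", "W-03"],
    project=["W-02", "W-02", "W-04"])
# report flags "W-02" as duplicated and "W-04" as unrecognized
```

Commercial QC tools extend the same idea with attribute-level comparison, fuzzy name matching and bulk cleanup across much larger volumes.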
Sophisticated commercial QC tools can divide the central repository into two separate partitions:
one for standard corporate data of known quality, and one for project results saved by users.
When users modify data in some way and save them to the results partition, other users can be
assured no changes have been made to corporate data stored in the standard partition. Advanced
validation tools help establish business processes and quality criteria that must be applied to
all data before they can be loaded or published to the corporate repository. Adding these QC
components to a growing enterprise data management system can improve user confidence and
prevent the loss, corruption or unnecessary repurchase of data.
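The publish-time gate described above might look like the following sketch, where the quality rules and record fields are hypothetical examples.

```python
def publish_to_corporate(record, corporate_store, rules):
    """Apply every quality rule before a record moves from the results
    partition to the corporate partition; reject on the first failure.
    The rules and record fields below are hypothetical examples."""
    for rule in rules:
        passed, reason = rule(record)
        if not passed:
            return False, reason
    corporate_store.append(record)
    return True, "published"

rules = [
    lambda w: (bool(w.get("name")), "missing well name"),
    lambda w: (0 < w.get("depth_m", -1.0) < 12000, "depth out of range"),
]

corporate = []
accepted, _ = publish_to_corporate({"name": "W-07", "depth_m": 2450.0}, corporate, rules)
rejected, why = publish_to_corporate({"name": "", "depth_m": 2450.0}, corporate, rules)
# only the first record reaches the corporate partition
```

Because every rule must pass before anything is appended, users of the corporate partition can trust that published data met the agreed criteria.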
(2) Incorporating a project data archive also enables an operator to store completed projects
offline without losing access to valuable knowledge. Generic file-system backups can only restore
local data from recent activities, and they provide no means of searching or viewing contents
without reloading the project into an application. Without a project data archive, E&P professionals
may find it too difficult to reuse data and interpretations from former projects, forcing them to start
over from scratch.
A good project archive must document projects at appropriate milestones and store them
in neutral formats so they can be reconstructed later without requiring a specific software
application or version. In addition to individual data items, an archive should save complete sets of
elements used to justify a particular decision; for example, all the data used to evaluate an asset
for a lease sale or calculate reserve estimates. Ideally, the system automatically creates web
pages with “snapshots” of interpretations, lists of data and other vital information about a project.
Thus users can view results online long after the project is taken offline. This additional component
effectively preserves hard-won corporate knowledge.
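A minimal version of such a bundle, a zip file holding the neutral-format data alongside an HTML snapshot and a JSON manifest, can be sketched as follows; the file names and manifest fields are assumptions, not the format of any particular product.

```python
import datetime
import io
import json
import zipfile

def archive_project(name, files, snapshot_html):
    """Bundle a project's neutral-format files with a JSON manifest and an
    HTML snapshot, so the archive can be browsed and reconstructed without
    the original application.  File names and manifest fields are assumed."""
    manifest = {
        "project": name,
        "archived": datetime.date.today().isoformat(),
        "contents": sorted(files),
    }
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
        zf.writestr("snapshot.html", snapshot_html)
        for path, data in files.items():
            zf.writestr(path, data)
    return buf.getvalue()

archive = archive_project(
    "North Field 2011",
    {"wells/W-01.las": b"~Version ...", "horizons/top.txt": b"x y z ..."},
    "<html><body>North Field 2011 snapshot</body></html>")
```

Because the manifest and snapshot are plain JSON and HTML, they remain readable years later, even if the applications that produced the data have changed.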
(3) One final element is a web portal that allows users to search, browse, view and access all
data in the environment through a single familiar interface. Asset teams can perform Google-like
searches across both live data in the central repository and historical data in project archives. The
web portal can be implemented first in a local office, and then across similar data management
environments in remote locations anywhere in the world. Federated views of multiple data stores
give users and authorized partners greater awareness of all available information, encourage
reuse and foster collaboration even across geographic boundaries.
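At its simplest, a federated search reduces to fanning one query out across independent stores and labelling each hit with its origin, as this sketch (with hypothetical store contents) shows.

```python
def federated_search(term, stores):
    """Fan one query out across several independent stores (live repository,
    project archives) and merge the hits, labelling each with its origin.
    Each store is modelled as a dict of item name -> description."""
    term = term.lower()
    hits = []
    for store_name, items in stores.items():
        for item, description in items.items():
            if term in item.lower() or term in description.lower():
                hits.append({"store": store_name, "item": item})
    return sorted(hits, key=lambda h: (h["store"], h["item"]))

stores = {
    "corporate": {"W-01": "exploration well, North Field"},
    "archive-2011": {"North Field seismic": "archived 3D survey"},
}
hits = federated_search("north field", stores)
# both the live well and the archived survey match the one query
```

A production portal would add indexing, access control and remote stores, but the user-facing behavior is the same: one query, one merged view of everything available.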
1. Common data repository. For the central data repository, we recommend Landmark’s OpenWorks®
R5000 database, the de facto industry standard for G&G data in oil and gas companies worldwide. While
it has long been the underlying project database for Landmark’s classic G&G applications, operators
also use it as a shared, vendor-neutral repository for non-Landmark applications. In addition to results
data, the R5000 version incorporates an elegant solution for managing quality-controlled corporate
standard data in the same system. An advanced security model allows data managers to flag every
piece of information as either public or private, thereby differentiating between the two types of data.
The OpenWorks database is built on an Oracle® database, providing full concurrent multi-user access
and user-level security. It incorporates many PPDM standards, but it packages them in a complete,
commercially supported product. We provide an open API and SDK, which some of our larger customers
use to link proprietary applications they have developed in-house. The OpenWorks database comes with
data management tools for loading and reformatting data, converting coordinate reference systems, as
well as editing, deleting and inserting data. It also links efficiently to the Engineer’s Data Model™ (EDM™)
software, a Landmark data repository designed for the drilling and operations engineering domain.
2. Consistent data transfer mechanism. For the data transfer layer, one solution we recommend is
TIBCO’s OpenSpirit® middleware platform. It uses our SDK to maintain a commercial connector to
the OpenWorks R5000 database. It also provides preconfigured data connectors for most standalone
G&G applications on the market today.
3. Advanced data QC tools. For data administrators interested in ensuring the highest possible data
quality, we recommend Landmark’s PowerExplorer® software. It provides sophisticated tools for the
validation, visualization, comparison and cleanup of data from multiple databases. PowerExplorer
software can identify missing or duplicate data sets, determine whether data values are within
specified limits and correctly populate all required data attributes in the corporate repository.
Add-ons such as Advanced Data Transfer™ (ADT™) and Reference Data Manager™ applications
can automate workflows based on company-specific business rules for cleaning and publishing data
from the results partition to the standard partition—without manual intervention.
4. Project data archive. Landmark’s Corporate Data Archiver™ (CDA™) system provides an integrated
solution for data archiving. Data managers can take snapshots of OpenWorks projects at any point
in time, document those projects and store a rich set of metadata as HTML pages. The CDA system
stores data elements in application-neutral formats, such as SEG-Y for seismic data, LAS for well
logs, and ASCII for interpretation results. Data are bundled in the context of a particular project for
browsing, retrieval and reuse in the future—long after the original project has been altered or deleted.
5. Web portal. For user web portals, Landmark works with a variety of familiar and emerging
horizontal portal systems. We also provide the unique Web OpenWorks software , which can
integrate with any web portal to allow users to browse E&P data in OpenWorks projects. In addition,
we provide all the services necessary for operators to deploy an effective G&G user portal within
their existing company web portal.
6. Data management services. While all the technologies in our ideal solution are proven and
relatively straightforward, we recognize that implementing any new information management solution
requires careful planning, deployment, training and change management. Landmark provides a wide
range of expert data management and consulting services to help ensure success and optimize
return on investment. Our consultants have extensive experience in diverse E&P and IT systems, and
deliver both hardware- and software-neutral services.
Landmark provides a range of services that help clients maximize the use of technology assets. Our consultants deliver
application implementation, deployment, onsite mentoring and education programs. In addition, innovative technologies, key
industry partnerships and highly experienced domain experts allow Landmark Services to deliver solutions that optimize clients’
existing assets and enable anywhere, anytime collaboration. These services include intelligent operations solutions, IT/data
management and cloud hosting services to support clients’ national or global workforces. For more information, contact your
Landmark account representative or send an inquiry to Landmark@Halliburton.com.
Landmark Software
& Services
www.halliburton.com/landmark
© 2012 Halliburton. All rights reserved. Sales of Halliburton products and services will be in accord solely with the terms and conditions contained in the
contract between Halliburton and the customer that is applicable to the sale.
H09380 07/2012