EDW Assessment

This assessment tool is easy to use. Simply answer each of the survey’s 11 questions.
When you’ve completed the survey, a custom scorecard will be generated. This scorecard
will rank your readiness in specific technology areas as well as in EDW as a whole. It will
provide a gap analysis and recommendations tailored to you. The survey should take
approximately 10 minutes to complete. Your individual results will remain confidential
and will not be published.

1. What is the state of your Enterprise Data Warehousing Program?
- We're still trying to figure out what EDW is, or we're just exploring EDW now.
- We are in the process of developing and implementing an EDW program.
- We have an active EDW program and have already reaped benefits from it in production environments.
2. How do you ensure a consistent means of accessing all enterprise data?
- We have multiple enterprise application systems and data warehouses/data marts that are loosely integrated. To integrate them, we typically create new interfaces or hand-code scripts to tie the systems together for an enterprise view of data.
- We have a consistent method for accessing all types of data, regardless of the system or format, across the enterprise.
- Each group or department has its own means for accessing data. Integrating the data across different groups is a manual, ad hoc process often relying heavily on spreadsheets.
3. How do you make data available to multiple consuming applications and systems?
- We use various integration technologies, such as messaging, EDI, EAI, XML, ETL, etc., depending on the application or process.
- We have implemented reusable data services, leveraging a service-oriented architecture to exchange and update data between databases, applications, or processes.
- We use hand-coding or scripts to move data between applications and processes or to load databases.
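
The "reusable data services" option above contrasts with point-to-point scripts: every consumer calls one shared access path rather than maintaining its own extraction logic. A minimal sketch, with a hypothetical in-memory store standing in for the real back-end systems:

```python
# Stand-in for the underlying databases behind the service.
CUSTOMERS = {42: {"id": 42, "name": "Acme Corp", "status": "active"}}

def get_customer(customer_id: int) -> dict:
    """Single, shared access path reused by every consuming application."""
    record = CUSTOMERS.get(customer_id)
    if record is None:
        raise KeyError(f"unknown customer {customer_id}")
    return dict(record)  # return a copy so callers cannot mutate shared state

# Billing, CRM sync, and reporting would all call the same service:
print(get_customer(42))
```
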
4. How do you capture the business context for data and share that information across the enterprise?
- Business users have online resources to examine what data is available, how it is transformed (from data sources to report), and where it is used, because this information is automatically captured in the data integration system.
- Business users can obtain report descriptions and associated lists of data definitions, but understanding where the data comes from or how it has changed requires a lot of manual work.
- Business users ask IT or power users to identify, gather, and explain what data is used for reports or specific analyses.
5. How do you define, measure, and monitor the quality of your data?
- We check data quality only on an ad hoc basis, in a reactive mode; there are no formal metrics or ongoing monitoring.
- We have established a few data quality metrics, but we are not systematically capturing or monitoring data quality.
- We define and monitor key data quality metrics on an ongoing basis and receive alerts on items that fall outside acceptable ranges as part of a continuous improvement process.
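
For illustration, a minimal sketch of the continuous-monitoring practice described in the last option above: each metric carries an acceptable range, and anything outside it raises an alert. The metric names, values, and thresholds are hypothetical, not part of the assessment.

```python
from dataclasses import dataclass

@dataclass
class QualityMetric:
    name: str      # what is being measured, e.g. completeness of a column
    value: float   # measured on the latest load
    low: float     # lower bound of the acceptable range
    high: float    # upper bound of the acceptable range

    def in_range(self) -> bool:
        return self.low <= self.value <= self.high

def check_metrics(metrics: list[QualityMetric]) -> list[str]:
    """Return alert messages for metrics outside their acceptable range."""
    return [
        f"ALERT: {m.name} = {m.value:.2%}, acceptable range [{m.low:.0%}, {m.high:.0%}]"
        for m in metrics
        if not m.in_range()
    ]

# Hypothetical metrics captured from a nightly load:
nightly = [
    QualityMetric("customer_email completeness", 0.97, 0.95, 1.00),
    QualityMetric("order_total validity",        0.88, 0.99, 1.00),
]
for alert in check_metrics(nightly):
    print(alert)  # in practice, route to e-mail, a pager, or a dashboard
```
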
6. How do you manage users and security across global teams?
- We have established enterprise-wide policies, practices, and processes; however, we have no systematic way to monitor, control, or analyze their implementation.
- Each project or application creates and administers its own user management and security framework as best it can. Policies, practices, and processes are not shared.
- Our user management and security are implemented in a global, team-based environment. We are able to monitor, analyze, and enforce user and group privileges and activities.
7. How do you establish an audit trail on data, or track down where it came from and where it's going?
- We have a metadata management tool that can automatically generate a visual representation of data lineage across multiple systems, with drill-down and search capabilities that track data from its original source to its final end use by the business.
- To document what has happened to the data, we interview the different system owners and review documentation.
- We attempt to infer data lineage by examining multiple tool-specific repositories as well as the documentation for any custom-coded applications.
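
At its core, the automated lineage described in the first option above is a graph walk: each target records its immediate sources, and the full trail is recovered by following those links back to the origin. A minimal sketch with hypothetical table names:

```python
# Each entry maps a dataset to the sources that feed it directly.
LINEAGE = {
    "revenue_report":   ["sales_mart"],
    "sales_mart":       ["warehouse.orders", "warehouse.customers"],
    "warehouse.orders": ["erp.order_lines"],
}

def trace_upstream(target: str, depth: int = 0) -> None:
    """Print every upstream source feeding `target`, indented by distance."""
    print("  " * depth + target)
    for source in LINEAGE.get(target, []):  # datasets with no entry are original sources
        trace_upstream(source, depth + 1)

trace_upstream("revenue_report")
# revenue_report
#   sales_mart
#     warehouse.orders
#       erp.order_lines
#     warehouse.customers
```
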
8. How do you enable end users to access data in real time with minimum latency?
- Our data integration processing occurs on a scheduled, batch basis, generally nightly or less frequently. Accessing real-time data from operational systems requires a lot of manual, one-off work, followed by reconciliation with the data in the warehouse.
- We have different tools that support batch versus message-based delivery. Our processes have to be recoded and/or recompiled to work in different latency modes.
- We have one common tool that supports both real-time access and batch processing. The tool can process and deliver operational data in real time for end users.
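
The "one common tool" option above amounts to reusing the same transformation logic in both latency modes. A minimal sketch, with hypothetical records, of one transformation serving a nightly batch and a message-at-a-time feed without recoding:

```python
from typing import Iterable, Iterator

def standardize(record: dict) -> dict:
    """Shared transformation, identical in both latency modes."""
    return {**record, "country": record["country"].upper()}

def run_batch(records: list) -> list:
    """Scheduled mode: process a whole extract at once, e.g. nightly."""
    return [standardize(r) for r in records]

def run_realtime(stream: Iterable) -> Iterator:
    """Real-time mode: process each message as it arrives."""
    for record in stream:
        yield standardize(record)

rows = [{"id": 1, "country": "us"}, {"id": 2, "country": "de"}]
print(run_batch(rows))                 # batch result
print(list(run_realtime(iter(rows))))  # same result, one message at a time
```
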
9. How do you support high availability requirements for your data integration environment?
- Our tool supports high availability using multiple nodes, but it is time-consuming and difficult to configure and manage.
- We do our best to keep our hardware and software up and running, but we are not configured for true high availability across multiple nodes on a grid.
- Our tool enables us to easily configure high availability, including built-in resiliency, failover, and recovery, as well as multi-node/grid deployment to leverage existing hardware investments.
10. How do you ensure deployment flexibility in integrating data, for example, the ability to choose between Extract, Transform, and Load (ETL) and Extract, Load, Transform (ELT)?
- Data integration and transformations are scripted or hand-coded as the need arises; generally we do not use tool-based approaches.
- We use a data integration platform to encapsulate business transformations and reuse them across all data and application integration. Our tool can operate in both ETL and ELT modes, depending on processing needs.
- We use an ETL tool to encapsulate business transformations and reuse them for any ETL processes.
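
The ETL/ELT distinction in this question comes down to where the transformation runs. A minimal sketch, with hypothetical data and placeholder functions, contrasting the two modes:

```python
def extract(source: str) -> list:
    # Stand-in for reading from a real source system.
    return [{"amount": "12.50"}, {"amount": "7.25"}]

def transform(rows: list) -> list:
    # ETL: this step runs in the integration tool's own engine.
    return [{"amount": float(r["amount"])} for r in rows]

def etl(source, load):
    """Extract, transform in flight, then load the finished result."""
    load(transform(extract(source)))

def elt(source, load, run_sql):
    """Load raw data first, then push the transformation to the target database."""
    load(extract(source))
    run_sql("UPDATE staging SET amount = CAST(amount AS DECIMAL(10,2))")

etl("orders.csv", load=print)                 # prints transformed rows
elt("orders.csv", load=print, run_sql=print)  # prints raw rows, then the pushed-down SQL
```
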
11. How do you address the sharing of data integration resources, best practices, and processes across your enterprise?
- We have implemented an enterprise-wide integration competency center (ICC) that each project or application leverages for its integration work.
- We have some common integration standards, processes, and tools; however, it is the responsibility of each project to implement and staff the necessary resources.
- Each project or application is responsible for staffing and implementing any data integration work required, and for making its own technology and architecture decisions.
