
Oracle Internals

Volume 1, Number 12 March 2000

Adaptive Strategies for Disk Performance in Oracle


Bruce Rodgers

In This Issue

Features

Adaptive Strategies for Disk Performance in Oracle — Bruce Rodgers
Building a Java Applet for Monitoring Oracle: Part 2 — Serg Shestakov
Alert Mechanisms for RMAN Backups — Guang Sheng Wan
Enterprise Transformation and Data Management — Richard Lee

Editor: Don Burleson

During the past couple of years, we have witnessed an explosion in the use of database engines (e.g., Oracle) at both the enterprise and department levels of a wide range of businesses. The Internet environment has provided additional impetus for more dependence on these engines.

As user loads grow and data storage demands expand, a primary issue for many DBAs has been extracting maximum performance from the application, especially as performance relates to disk drive I/O. Increases in disk drive densities and spindle speeds have not kept pace with ever-growing user performance demands. Database applications, once viewed as a necessary discipline of business management, have for many evolved into a primary tool for competitive advantage.

As the use of database engines has evolved, a number of behavioral issues arise when maximizing system performance. Common problems include:

• under-estimating the size of the data storage needs
• under-estimating the rate of growth of the data storage needs
• DBAs often get little help in predicting the growth of their user community (translation: increased load on the application)
• users typically do not throw anything away — there are also legal reasons for data retention
• DBAs often do not know their read/write mix
• DBAs often do not know the random/repetitive content of their applications

The software approach — a fair number of offerings, in position papers as well as software utilities, exist to assist DBAs in indexing disk drive activity and load balancing the application to squeeze out more performance. Application tuning by the DBA (focusing on changing the application software) has long fueled a growing consultative marketplace. Tuning can be an effective approach to gaining I/O performance, but the process can be long and tedious and requires thoughtful documentation.

The traditional hardware approach — for many years, solid state disk (SSD) platforms have existed, first for mainframe environments and subsequently for server environments. SSDs (implemented purely in silicon) have provided the greatest performance leverage, but at a very high price. In addition, using SSD platforms has required the DBA to have accurate knowledge of what data to migrate to the SSD platform in order to generate the desired performance gains.
Traditional SSD platforms have not been "adaptive" in managing what data is resident in the silicon — if the application content changes, the SSD content also needs to change to continue to maximize the performance benefit. Many users do not know what specific application content changes during a "normal" business day; this shifts the burden back to the DBA to continually isolate and migrate frequently used data to the SSD platform.

The other traditional hardware approach has been to stuff prodigious quantities of cache memory into the server to enlarge the staging buffer for I/O instructions. Such an approach can be effective in the short term but can also prove quite expensive.

CPU vendors often suggest buying more main memory, dedicated to internal data caching in the hope of improving the performance of I/O-intensive applications. The theory is that data will be available at high main-memory throughput rates. Performance gains are realized only if the correct, or "hot," data is available from main memory; the actual hit ratio for main memory must be high or the improvement will be marginal.

Common pitfalls of main memory caching include:

• RDBMS caching schemes may not cache some files key to performance increases.
• Large main memory support is needed for OLTP systems. Some database servers can exceed a 32-bit system's memory address space limit of 2GB or 4GB.
• CPU memory caching may have limited tunability and may not adapt to dynamic changes in system requirements.
• OS and application demands for main memory are not static. Performance suffers if pageout is invoked. Swapping does not use aging to determine which pages are moved to disk: hot data may be moved back to disk, or worse, less critical data may be held in memory while data more critical to system performance is moved to the paging device.
• Some CPU vendors map main memory to disk in case of a shutdown. The additional overhead incurred by mapping a large memory cache consumes extra CPU cycles and I/O resources.
• Hot data can be flushed from CPU main memory during large reads, leaving the main memory cache full of inactive data.

There Is Another Way …

Adaptive caching technology represents a new approach to the management of data. Combining a unique architectural design with advanced algorithms, adaptive caching and dynamic RAID assignment deliver exceptional disk I/O performance improvements for database applications at a fraction of the cost of conventional SSD. Adaptive caching technology — without user intervention — automatically adjusts its contents to always contain the most frequently used information.

Cache Management and Dynamic RAID Level Assignment

(Cache here, cache there, everywhere cache, cache …)

Most storage providers have stuffed their systems with cache because of the potential performance leverage and relative cost position. Users will find cache on the disk drives, cache on the servers, and cache in the external RAID controllers (some as small as 32 megabytes, some as large as 8 gigabytes). The presence of cache is far less important than how the cache is implemented for the user.

Exhibit 1 illustrates how data patterns and block sizes in SEEK Systems' architecture are identified for cache retention. The dynamic space serves as a FIFO buffer, while the protected space section is where frequently used data is relocated and retained until more frequently used data bumps previously held information from the hierarchy.

Exhibit 1. SEEK Systems' Architecture
[Figure: read/write activity from the hosts enters the SEEK adaptive cache, which is divided into a dynamic space holding small blocks and a protected space (SSD); large sequential data streams pass through to the magnetic disks.]

The superior performance of adaptive caching can be traced to advanced memory management algorithms. Data in solid state memory can be retrieved 20 times faster than data on magnetic disk, so it is critical that active data stays in memory and inactive data is moved to magnetic disk. To accomplish this, SEEK's adaptive caching technology uses a combination of least recently used and least frequently used algorithms to maximize performance.

Normal Space: Least Recently Used Data. SEEK's controller architecture divides solid state memory into normal (or dynamic) space and protected space. Similar to the operation of host-based cache, normal space uses a least recently used algorithm: as memory is needed for reads and writes, the least recently used data in the normal space is replaced with the new data. The operation can be compared to a stack of cafeteria trays, in which trays are added to the top and removed from the bottom.

The least recently used approach is the most common type of caching, and for buffering I/O it is highly effective. However, it suffers from a problem known as cache pollution. Cache pollution results when reads or writes come through with data that will be used only once, or perhaps a few times. This data displaces data in cache that is being used repeatedly by the host — data that is much more important to keep in solid state memory.

Protected Space: Least Frequently Used Data. To eliminate the problem of cache pollution, SEEK's RAID controller uses protected space. Whereas normal space uses a least recently used caching algorithm, the protected space uses a least frequently used caching algorithm. The controller tracks how often each data block is accessed and moves the most heavily used data from normal space to protected space.

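The division of labor between the two spaces can be sketched in a few lines of code. The following is a minimal illustration with invented names and thresholds, not SEEK's controller code: the normal space behaves as an LRU map, access counts are tracked per block, and blocks that cross a hit threshold are promoted into a protected space that evicts by lowest frequency.

import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy model of a two-tier cache: an LRU "normal" space feeding
// an LFU "protected" space. All names and thresholds are invented.
class AdaptiveCache<K, V> {
    final int normalCapacity, protectedCapacity, promoteThreshold;
    final Map<K, Integer> hits = new HashMap<K, Integer>();   // access count per block
    final Map<K, V> protectedSpace = new HashMap<K, V>();     // evicts least frequently used
    final LinkedHashMap<K, V> normalSpace;                    // evicts least recently used

    AdaptiveCache(int normalCap, int protectedCap, int threshold) {
        normalCapacity = normalCap;
        protectedCapacity = protectedCap;
        promoteThreshold = threshold;
        // accessOrder=true keeps entries ordered by recency of use
        normalSpace = new LinkedHashMap<K, V>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > normalCapacity;   // LRU eviction, like the tray stack
            }
        };
    }

    V get(K key) {
        V v = protectedSpace.containsKey(key) ? protectedSpace.get(key)
                                              : normalSpace.get(key);
        if (v != null) recordHit(key, v);
        return v;                                 // null means: go to magnetic disk
    }

    void put(K key, V value) {                    // new data enters the normal space
        normalSpace.put(key, value);
        recordHit(key, value);
    }

    private void recordHit(K key, V value) {
        Integer n = hits.get(key);
        int count = (n == null) ? 1 : n + 1;
        hits.put(key, count);
        if (count < promoteThreshold || protectedSpace.containsKey(key)) return;
        if (protectedSpace.size() >= protectedCapacity) {
            K coldest = null;                     // find the least frequently used block
            for (K k : protectedSpace.keySet())
                if (coldest == null || hits.get(k) < hits.get(coldest)) coldest = k;
            if (hits.get(coldest) >= count) return;  // incoming block is not hot enough
            protectedSpace.remove(coldest);          // bump it from the hierarchy
        }
        normalSpace.remove(key);                  // promotion: safe from cache pollution
        protectedSpace.put(key, value);
    }
}

A real controller works on disk blocks and bounded counters rather than Java maps, but the promotion and demotion flow is the same in spirit.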

At any given moment, protected space looks just like conventional SSD. The data is safe from any cache pollution and will remain in solid state memory until the controller algorithms detect that other data has become more active. By continually monitoring data activity, only the most active data is kept in protected space, ensuring SSD performance for most I/O operations regardless of application activity.

A common question about dynamic RAID assignment concerns the overhead needed to manage the process. Because of the efficiency of the architecture, adaptive technologies approach 90+ percent of the raw performance of traditional SSDs, which is of very high value given that the price points are 50 percent of traditional SSDs.

Other Acceleration Techniques

SEEK's RAID controller uses protected space and normal space to make sure that only active data is kept in solid state memory. It also employs algorithms to optimize performance for disk reads and writes. For read operations, a common caching algorithm known as prefetch is performed. When the host requests data not in memory, the controller reads not only the requested data from disk but also the data that immediately follows it, because there is a good chance that the host will request that data next, and one large disk operation is much more efficient than two smaller ones. Prefetch may also be easily adjusted during operation for optimal performance. For online transaction processing a value of two is generally optimal (for every block of data requested, an additional two prefetch blocks are returned), while for more sequential operations (backups, decision support) a prefetch multiple of four or five is preferred.
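A sketch of the prefetch idea follows; all names are invented, and the controller's real heuristics are proprietary. A prefetch multiple simply widens every cache-miss read into one larger sequential transfer.

import java.util.Arrays;

// Toy illustration of a tunable prefetch multiple on cache-miss reads.
class PrefetchReader {
    static final int BLOCK_SIZE = 4096;   // bytes per block (assumed)
    int prefetchMultiple = 2;             // 2 suits OLTP; 4 or 5 suits sequential work

    byte[] read(long blockNumber) {
        // On a miss, fetch the requested block plus prefetchMultiple blocks
        // that immediately follow it, in one large disk operation.
        int blocks = 1 + prefetchMultiple;
        byte[] buffer = readFromDisk(blockNumber, blocks * BLOCK_SIZE);
        stageInCache(blockNumber, buffer);                 // keep the extra blocks
        return Arrays.copyOfRange(buffer, 0, BLOCK_SIZE);  // return what was asked for
    }

    byte[] readFromDisk(long firstBlock, int byteCount) {  // stub for the physical read
        return new byte[byteCount];
    }

    void stageInCache(long firstBlock, byte[] data) {      // stub: insert into normal space
    }
}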
For writes to disk, a method known as concatenation of writes increases performance in much the same way that prefetch works for reads. When the controller writes information to disk, it pre-organizes the data according to where it resides on the physical disk, and writes it out accordingly. The result is fewer, larger, and more efficient writes to disk. Every disk write that is obviated saves the user precious milliseconds of disk seek time. Carried to the extreme, the logic of this architecture says: never go to the drives if possible — perform as many operations in silicon as economically possible.
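Concatenation of writes can be pictured the same way. In this minimal sketch, again with invented names, pending writes are sorted by physical block address and adjacent ranges are merged, so several logical writes collapse into fewer physical ones.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Toy model of write concatenation: sort pending writes by physical
// block address and merge adjacent ranges into fewer, larger writes.
class WriteConcatenator {
    static class PendingWrite {
        final long firstBlock;
        final int blockCount;
        PendingWrite(long firstBlock, int blockCount) {
            this.firstBlock = firstBlock;
            this.blockCount = blockCount;
        }
    }

    // Returns the merged list; each entry becomes one physical disk write.
    static List<PendingWrite> coalesce(List<PendingWrite> pending) {
        List<PendingWrite> sorted = new ArrayList<PendingWrite>(pending);
        sorted.sort(Comparator.comparingLong(w -> w.firstBlock));  // seek order
        List<PendingWrite> merged = new ArrayList<PendingWrite>();
        for (PendingWrite w : sorted) {
            PendingWrite last = merged.isEmpty() ? null : merged.get(merged.size() - 1);
            if (last != null && last.firstBlock + last.blockCount >= w.firstBlock) {
                // adjacent or overlapping: extend the previous write, no extra seek
                long end = Math.max(last.firstBlock + last.blockCount,
                                    w.firstBlock + w.blockCount);
                merged.set(merged.size() - 1,
                           new PendingWrite(last.firstBlock, (int) (end - last.firstBlock)));
            } else {
                merged.add(w);
            }
        }
        return merged;  // e.g., blocks 7-8, 9, and 30 flush as two writes, not three
    }
}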

Actual Acceleration Results

Relational database applications see some of the most impressive performance gains from adaptive caching and dynamic RAID assignment. Standard files that are moved into protected space include the temporary tablespace (or workspace), transaction log files, and heavily used indexes and tablespaces.

Temporary tablespaces are where intermediary processing is carried out. For complex queries the activity in these tempspaces is particularly write intensive, as data is written and updated repeatedly. Table sorts, updates, joins, and similar commands are particularly taxing on this tempspace. Following are some of the performance improvements seen with various relational database benchmarks when the tempspace is moved behind the adaptive SSD platform.

Exhibit 2 displays a benchmark that was a create/sort of 100,000 records, with one primary key and five alternate keys. Three simulated users performed this concurrently, with over 62,000 operations per user. Application performance increased by 78 percent.

The benchmark in Exhibit 3 consists of a complex update involving three tables and over 72,000 operations. Adaptive caching increased performance by 98 percent.

Transaction log files can also be fairly I/O intensive, with a large percentage of writes. These files are where the database engine records transactions, so that it can roll back tables to previous states should complications arise. How these files are used varies by database engine, but queries utilizing such commands as Rollback, Begin, Save, and Commit are particularly heavy in their use of the transaction log files.

The example in Exhibit 4 shows the performance gain for a particular database when the transaction log files were placed on the SEEK RAID platform. The performance gain was 25 percent for this benchmark, but jumped to 170 percent when a truncate was initially performed.

More Performance in Less Memory

The advanced algorithms of SEEK's adaptive caching architecture ensure that active data is kept in solid state memory, where it may be accessed twenty times faster than if it resided on magnetic disk. In addition, I/O for inactive data on magnetic disk benefits from prefetch of reads and concatenation of writes.

Dynamic RAID Level Assignment — How It Works

RAID disk array systems suffer from a number of well-understood and fairly well publicized problems. These include difficulties with customer education, configuration, and storage management, as well as a price premium over raw disk capacity. Additional performance penalties, such as multiple write updates to maintain redundancy and double copies of data through the array controller memory, compound these problems. Existing disk array controllers add complications because they are expensive and not very scalable. One way to resolve this problem is with a platform combining RAID and intelligent caching technologies.


Exhibit 2. Create/Sort of 100,000 Records
[Bar chart: elapsed seconds for the Seek Xcelerator versus magnetic disk, on a scale of 0 to 3000 seconds.]

Within the last few years, disk systems based on RAID technology have proliferated to the point of becoming commonplace. Significant problems still exist in terms of education, configuration, and management of these disk array systems. Effective deployment and use of a RAID array require an understanding of system parameters such as I/O rates and request sizes under normal and maximized conditions. These parameters are generally either unknown or vary so widely that there exists no "typical" load for system tuning.

RAID manufacturers often do not allow dynamic changes to the RAID configuration once the disk array is initialized and in operation. Changing the number of disks used and the levels of protection provided at each target address often requires that data be copied to a backup device before the configuration changes. After the configuration changes, the managed disks must be re-initialized and the data copied again from the backup device. This process takes many hours or even days, and while it is in progress, the disk array is offline and the host data is not available.

Adaptive caching technology provides an ongoing dynamic balance between the demands for maximum I/O performance and for minimum disk capacity lost to RAID data protection. It allows the online addition of disk capacity to an existing disk array without backing up and restoring the existing data. By hiding configuration details from the user, it also allows the user to view the disk array as if it were just a bunch of disks (JBOD).

Exhibit 3. A Complex Update Involving Three Tables and Over 72,000 Operations
[Bar chart: elapsed seconds for the Seek Xcelerator versus magnetic disk, on a scale of 0 to 3500 seconds.]


Exhibit 4. Transaction Log Benchmark
[Two bar charts comparing the Seek Xcelerator with magnetic disk: the transaction log benchmark (0 to 250 seconds) and the same benchmark performed with a truncate (0 to 700 seconds).]

Adaptive caching technology exists primarily to minimize and, in most cases, eliminate the infamous RAID 5 write penalty. Because RAID 5 is the most frequently implemented and commercially successful of the basic RAID variations, this write penalty has caused customers great disappointment. The disk array software dynamically adjusts the storage method used for each host write request in order to eliminate the write penalty. The controller chooses the best RAID format for a request based on proprietary heuristics involving a number of variables. The architecture utilizes a block pool: a section of reserved disk space on three or more of the managed disks, available strictly to the memory manager. The controller allocates sections from the block pool to hold write data in temporary storage. The block pool allocation produces a holding area for the write data, formatted for the optimum RAID performance for a particular request. The memory manager uses the allocated space to perform the write operation at optimal speed.

Write operations from the host that would normally result in a RAID 5 write (with a write penalty) take place in this allocated space in the most efficient manner, without changing the host's logical space. By choosing a block from the block pool with the correct depth and width parameters, the write takes place without a write penalty. As the memory manager allocates blocks from the block pool and inserts them into the mapped logical space, it adds the replaced disk blocks to the block pool for subsequent use (see Exhibit 5).
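The penalty that this design avoids is easy to see in the parity arithmetic. A small RAID 5 write must read the old data and the old parity, XOR both against the new data, and then write the new data and new parity back: four disk operations, where a mirrored (RAID 1+0) write from the block pool needs only two. A sketch, not SEEK's code:

// Why a small RAID 5 write costs four I/Os: the parity update.
class Raid5WritePenalty {
    // new parity = old parity XOR old data XOR new data
    static byte[] updateParity(byte[] oldParity, byte[] oldData, byte[] newData) {
        byte[] newParity = new byte[oldParity.length];
        for (int i = 0; i < newParity.length; i++) {
            newParity[i] = (byte) (oldParity[i] ^ oldData[i] ^ newData[i]);
        }
        return newParity;
    }

    static void smallWrite(Disk dataDisk, Disk parityDisk, long block, byte[] newData) {
        byte[] oldData   = dataDisk.read(block);    // I/O 1: read old data
        byte[] oldParity = parityDisk.read(block);  // I/O 2: read old parity
        dataDisk.write(block, newData);             // I/O 3: write new data
        parityDisk.write(block,
                updateParity(oldParity, oldData, newData)); // I/O 4: write new parity
        // A RAID 1+0 block from the pool needs only two writes (data plus mirror).
    }

    interface Disk {             // minimal stand-in for a member drive
        byte[] read(long block);
        void write(long block, byte[] data);
    }
}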


Exhibit 5. SEEK Systems' Controller Technology Maps Correctly Sized Blocks into the Existing Logical Host Space
[Figure: dynamic RAID mapping. The host-defined address space maps onto partitions of the managed disk space (a set number of blocks, protected or unprotected) with data and parity striped, hot spares, dynamic online additions of disks, and RAID 5-type storage. A block pool, mapped from the managed disk space, provides RAID 1+0 writes to eliminate the RAID write penalty.]

The block pool is a limited resource (the dynamic and protected spaces together can share memory from 128MB to 1024MB). The block pool allocation manager constantly evaluates the state of the block pool, and it acts to free entries based on garbage collection and least-recent access. In essence, the block pool manager acts very much like a secondary cache memory manager between the RAM cache and the RAID 5 disk space.

Memory Management

One hundred percent memory storage is the purest form of I/O performance (e.g., SSDs), but it is very expensive — $70 per megabyte, versus $1 per megabyte for rotating media. Adaptive technologies optimize this trade-off by analyzing how frequently the host or hosts need data. Requested data is cached. To eliminate cache pollution, the algorithm separates out the active cache contents and stores them in protected space. This discrimination technique bridges the disk/memory barrier: users receive memory-storage performance at a significantly reduced price. The process of dynamically staging "hot data" is known as adaptive caching.

Other traditional cache techniques are used as well: write thru and write back. Both offer a performance improvement for all peripherals on the SCSI bus, but write-back cache delivers the most significant I/O performance improvement. The difference between the two is the role the CPU plays. When writing to disk, a controller with write-back cache holds the write in a cache buffer but signals to the CPU that the write has already been completed. This frees the CPU to go on to other tasks instead of waiting for the write to actually be written to the disk. With write-thru cache, the controller stores the write in a cache buffer, but the CPU still knows the write has not been completed; in this case the efficiency comes from the write concatenation process, in which writes going to the same sector of the disk are accumulated and sent as a larger block transfer.
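The contrast between the two techniques fits in a few lines. This is a toy sketch with invented interfaces; real controllers also protect the write-back buffer, typically with battery-backed memory:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;

// Toy contrast between write-thru and write-back handling of disk writes.
class CachedController {
    private final Map<Long, byte[]> cacheBuffer = new HashMap<Long, byte[]>();
    private final boolean writeBack;

    CachedController(boolean writeBack) { this.writeBack = writeBack; }

    // Returns when the CPU may be told the write is "done".
    void write(long block, byte[] data) {
        cacheBuffer.put(block, data);   // both modes buffer the write
        if (!writeBack) {
            flush(block);               // write-thru: the CPU waits for the disk;
        }                               // efficiency comes from concatenating writes
        // write-back: return immediately and destage later, freeing the CPU
    }

    void flushAll() {                   // background destaging for write-back mode
        for (Long block : new ArrayList<Long>(cacheBuffer.keySet())) {
            flush(block);
        }
    }

    private void flush(long block) {
        byte[] data = cacheBuffer.remove(block);
        if (data != null) {
            writeToDisk(block, data);
        }
    }

    private void writeToDisk(long block, byte[] data) { /* stub: physical write */ }
}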
The Bottom Line

The combination of adaptive caching and dynamic RAID assignment provides an easy-to-install alternative strategy for improving database disk I/O performance.

The technology is host independent (attaching to UNIX, NT, AIX, Linux, SCO, and other operating systems) and application independent (no code changes are needed in the application software). This powerful hardware technology has proven to be a very cost-effective strategy for immediately improving disk performance in database applications. Because of the unique caching architecture, the net effect of implementing the technology is increased CPU utilization (as I/O tasks are offloaded to the external controller) and decreased disk I/O activity.

Bruce D. Rodgers is vice president of domestic sales for SEEK Systems, Inc., Seattle, Washington. He can be contacted at 800-790-7335, extension 2212.

Building a Java Applet for Monitoring Oracle: Part 2

Serg Shestakov

One successful example of Java and Oracle integration is the monitoring applet presented in this two-part article. It is relatively easy to implement and can be very useful for monitoring production systems. The first part of this article (see Oracle Internals, February 2000) covered the data schema definition and the applet's core structure and methods. This part explains how the applet displays runtime statistics. All scripts and code mentioned in both parts of the article are listed in Oracle Internals' Code Depot on the World Wide Web at www.auerbach-publications.com/scripts/monitor.html.

Presenting the Results

To display the results, we create a custom addItem method for caching display information in the applet's arrays, and we redefine the default paint method of the Applet class. Section six, listed below, describes the addItem method used for fetching the information we want to display in the browser's window with the paint method. The addItem method is called with two parameters — the status change code for a network module, newStatusChange, and the message text, newString. The status change code is fetched into the stat_buffer array, and the message text is fetched into the str_buffer array. We interpret the cur_row variable as an index for both arrays. When we are done fetching, we increment the index. Then we call the applet's default repaint method, which refreshes the picture, adding the next information string, which can be one of the following: header, module name with status change indicator, footer, or error message.

void addItem(int newStatusChange, String newString)
{
  stat_buffer[cur_row]=newStatusChange;
  str_buffer[cur_row]=newString;
  cur_row++;
  repaint();
}

The paint method is described in section seven. It takes one input parameter, the default Graphics object of the java.awt package. Remember that we imported the Graphics class at the very beginning of our applet. We start the paint method by initializing pointers to the current row and column in the applet's window; their values are given in pixels. Next we define a loop over all rows fetched into the applet's display arrays (str_buffer and stat_buffer) according to the cur_row index. Each time we enter the loop, we increment the cur_y pointer by 15, moving it to the next row. Next we store the message text in the tmpstr variable and the status change code in the stat variable.

public void paint(Graphics g)
{
  int cur_y=15;
  int cur_x=15;
  for (int m=0;m<cur_row;m++)
  {
    cur_y=cur_y+15;
    String tmpstr = str_buffer[m];
    int stat=stat_buffer[m];

If the status change code is different from none, the applet has some painting to do. Before we draw an alarm, we have to choose the color. This can be done with the setColor method of the Graphics object, which takes one input parameter indicating the color we want to set. This is a foreground color, so later we will have to restore the default black color for printing messages. If the status change code indicates the current state is down, we choose red; if it indicates the current state is up, we choose green.

if(stat!=none)
{
  if(stat==up_to_down || stat==down_to_down)
    g.setColor(Color.red);
  if(stat==down_to_up || stat==up_to_up)
    g.setColor(Color.green);

Now we have the color and want to draw a mark indicating the status change. There are two situations: when the status has not changed since the previous refresh (the up_to_up and down_to_down codes correspond to this state), and when the status has changed to the opposite (the up_to_down and down_to_up codes correspond to this state). In the first case, we want to draw just a small filled rectangle next to the module name. (That is why earlier we left some space before the module name.) In the second case, we paint the entire line with the alarm color. In both cases we do the painting by calling the fillRect method of the Graphics object. Rectangle coordinates can be adjusted to fit a particular user's needs. What is more, for each network module we could collect statistics on module availability, display this indicator, and put a color mark indicating whether today's module availability is decreasing or growing. For example, this may help operators quickly find out which modules are now OK but nonetheless have had some problems.

if(stat==up_to_up || stat==down_to_down)
  g.fillRect(cur_x,cur_y-9,12,10);
if(stat==up_to_down || stat==down_to_up)
  g.fillRect(cur_x,cur_y-9,150,10);

When we are done drawing color alarms, we should return the foreground color to the default (black). Next we close the if block opened to process situations where the status change code is different from none. Then we print the message to the browser's window, calling the drawString method of the Graphics object and passing it three parameters: the message text, the current row, and the current column.

    g.setColor(Color.black);
    }
    g.drawString(tmpstr, cur_x, cur_y);
  }
}

The last piece of code is section eight, defining the myplay method, which is in fact a handy wrapper around Java's play method for playing sound files in .au format. (This is true for JDK 1.1; later versions support more audio formats.) The myplay method takes two input parameters — the audio file name and a delay in milliseconds. First we read from the current directory and play the sound file using the standard play method. (To read the current directory name, we call the getCodeBase method.) Then we suspend the applet's thread according to the delay parameter. This (1) helps to avoid conflicts when we have to play several sound alarms in a row, and (2) improves the quality of the sound alarms, making them clearer to understand. To suspend the current thread, we call the sleep method, which requires the InterruptedException to be caught, so we put the method code inside a try block.

void myplay (String aufile, long pausetime)
{
  try
  {
    play(getCodeBase(), aufile);
    Thread.currentThread().sleep(pausetime);
  }
  catch(InterruptedException e) {}
}
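Pulling sections six through eight together, the display side of the applet looks roughly as follows. The buffer size and the status-code values shown here are assumptions for illustration; part one of the article contains the actual declarations, along with the JDBC code that feeds addItem.

import java.applet.Applet;
import java.awt.Color;
import java.awt.Graphics;

public class netmon extends Applet {
    // Assumed declarations; part one defines the real sizes and code values.
    static final int MAX_ROWS = 100;
    static final int none = 0, up_to_up = 1, up_to_down = 2,
                     down_to_up = 3, down_to_down = 4;
    int[] stat_buffer = new int[MAX_ROWS];
    String[] str_buffer = new String[MAX_ROWS];
    int cur_row = 0;

    // Section six: cache one display row and ask AWT to repaint.
    void addItem(int newStatusChange, String newString) {
        stat_buffer[cur_row] = newStatusChange;
        str_buffer[cur_row] = newString;
        cur_row++;
        repaint();
    }

    // Section seven: draw every cached row, 15 pixels per line.
    public void paint(Graphics g) {
        int cur_y = 15;
        int cur_x = 15;
        for (int m = 0; m < cur_row; m++) {
            cur_y = cur_y + 15;
            String tmpstr = str_buffer[m];
            int stat = stat_buffer[m];
            if (stat != none) {
                if (stat == up_to_down || stat == down_to_down)
                    g.setColor(Color.red);
                if (stat == down_to_up || stat == up_to_up)
                    g.setColor(Color.green);
                if (stat == up_to_up || stat == down_to_down)
                    g.fillRect(cur_x, cur_y - 9, 12, 10);   // small mark: state unchanged
                if (stat == up_to_down || stat == down_to_up)
                    g.fillRect(cur_x, cur_y - 9, 150, 10);  // full line: state flipped
                g.setColor(Color.black);
            }
            g.drawString(tmpstr, cur_x, cur_y);
        }
    }

    // Section eight: play an .au alarm, then pause so alarms do not overlap.
    void myplay(String aufile, long pausetime) {
        try {
            play(getCodeBase(), aufile);
            Thread.currentThread().sleep(pausetime);
        } catch (InterruptedException e) {}
    }
}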
Compiling and Running the Applet

At last we come to the point where we can put together all the code segments into the netmon.java file and use the javac netmon.java command to create a file of bytecodes. The resulting file will have the .class extension. Next we should copy the netmon.class file to a directory accessible by an HTTP daemon. To the same directory we may copy the .au files if we intend to use the sound alarm mechanism. Remember that for security reasons we should keep the Java sources in a separate directory with restricted permissions (not viewable on the Web).


To put the Java applet into the browser, we write the netmon.html file, which we can create with any text editor. The netmon.html file defines the window title and contains a special applet tag that tells the browser where to get the .class file and the frame's height and width (in pixels):

<html>
<title>Network monitoring applet</title>
<body>
<applet code=netmon.class width=500 height=500>
</applet>
</body>
</html>

To try our new applet, we need a statistics-gathering system — but we can at least pretend we already have one. To fill the Oracle schema with test data, issue the SQL statements listed below. They imitate a network of two modules and the results of two statistics-gathering passes; both passes reported that the first module's status was up and the second module's status was down.

INSERT INTO MODULE (MODULE_ID, MODULE_NAME)
VALUES (1, 'Demo module 1');
INSERT INTO MODULE (MODULE_ID, MODULE_NAME)
VALUES (2, 'Demo module 2');

INSERT INTO STAT (STAT_TIME, MODULE_ID, MODULE_STATUS)
VALUES (TO_DATE('OCT 01 1999 14:00','MON DD YYYY HH24:MI'), 1, 1);

INSERT INTO STAT (STAT_TIME, MODULE_ID, MODULE_STATUS)
VALUES (TO_DATE('OCT 01 1999 14:00','MON DD YYYY HH24:MI'), 2, -1);

INSERT INTO STAT (STAT_TIME, MODULE_ID, MODULE_STATUS)
VALUES (TO_DATE('OCT 01 1999 14:30','MON DD YYYY HH24:MI'), 1, 1);

INSERT INTO STAT (STAT_TIME, MODULE_ID, MODULE_STATUS)
VALUES (TO_DATE('OCT 01 1999 14:30','MON DD YYYY HH24:MI'), 2, -1);

COMMIT;
browser, and using JDBC we can efficiently inter-
Now we can test the applet. The best way to do this is to issue an appletviewer netmon.html command on the server. The appletviewer is a graphical tool, and it can be very useful for debugging because we can read diagnostic messages from its standard output.

When the applet is thoroughly tested, we can try to load and run it in the browser's window: we start the browser on a client machine, go to the URL of the netmon.html file, and run the applet. Given the test data, our applet will display a header saying that the last refresh happened on October 1, 1999 at 14:30:00 and that the average network availability is 50 percent, followed by a list of two modules indicating that the first module is up and the second module is down, with green and red marks, respectively. We can manually insert rows into the STAT table and watch how the applet tracks the changes in network availability and module status.

Conclusion

We implemented a simple but very useful monitoring tool. We discussed in detail how to write efficient Java code for working with an Oracle database. Our applet can run in any Web browser that supports JDK 1.1. We considered how to use the thin JDBC driver to access Oracle, what problems can arise, and the workarounds.

One problem with deploying applets integrated with databases is delay, which matters particularly for geographically distributed networks. We have to think about making our applet more user-friendly; for example, we can print messages in the browser's window telling the operator when a connection is taking too long. Another problem is security, which can be addressed only by protecting all hardware and software components of our information system.

The strength of our monitoring applet is its access to the statistics database. The more statistics we have, the better we can control the network, so we will need to store and process large volumes of data, and we need thin client software with a graphical interface and support for multimedia. An Oracle server provides a robust and scalable database, Java applets can run in any browser, and with JDBC we can interact with the database efficiently. The future looks good for Oracle and Java in tandem.


Serg Shestakov works with Oracle technology in Russia's banking industry. He can be contacted at shestakov@icb.spb.su.

Alert Mechanisms for RMAN Backups

Guang Sheng Wan

Recovery Manager (RMAN) is an Oracle utility used by the database administrator (DBA) to back up, restore, and recover database files. RMAN manages the processes of creating backups of database files, archived redo log files, and the control file, and of restoring or recovering from backups. It greatly simplifies the tasks DBAs perform during these processes. It can detect many types of corruption problems and makes sure that the backup does not include corrupted blocks. RMAN provides true incremental backups and automatic parallelization of backups and restores. Its efficiency is very important for any 24x7 database environment.

To make use of RMAN functionality and ease normal operations, Oracle DBAs normally automate resync (resynchronize) catalog operations and daily backup operations based on their backup strategies. To ensure their jobs have completed successfully, DBAs check messages in the log files. There are many RMAN messages in the log files, and DBAs need to identify which are informational messages and which are error messages. This process tends to be tedious, especially when there is a considerable number of database servers to manage.

Is there any way to automate the process? Is it possible to build an Oracle database package that tells the DBA if there was any problem with the RMAN backup? The alert mechanisms introduced below provide one possibility. Before going into details, it is necessary to discuss the RMAN recovery catalog.

RMAN Recovery Catalog

The recovery catalog is a repository of information used and maintained by RMAN. RMAN uses the information in the recovery catalog to determine how to execute requested backup and restore actions. RMAN can work without the recovery catalog under certain conditions, but its usage is limited without one: for example, RMAN cannot run stored scripts, and it is not possible to perform tablespace point-in-time recovery or to recover when the control file is lost or damaged. To utilize the full functionality of RMAN, the use of the recovery catalog is strongly recommended.

The recovery catalog contains the following information:

• the recovery catalog version and all checkpoints
• registered target databases and their incarnations (a database incarnation is used by RMAN to identify the different "versions" of the same physical database)
• data file, archive log, and control file backup sets and backup pieces
• data file and control file copies
• archived redo logs, redo log ranges, and all redo log history
• tablespace and data file attributes
• redo log files, data files, and tablespaces at the target database
• corrupted block ranges in data file backups or data file copies
• RMAN stored scripts (a sequence of RMAN commands stored in the recovery catalog)


The recovery catalog table definitions and their primary/unique keys are listed in Exhibit 1. For the column definitions and their descriptions, please refer to the Oracle8 Backup and Recovery Guide, Release 8.0 (December 1997).

The RMAN Backup Alert System

Because the RMAN recovery catalog stores all data for database backups and recoveries, it is not difficult to obtain an answer on the status of a specific database backup. The package CHECK_RMAN_BACKUP (discussed in detail below) is the product of this idea, and with the package it is easy to set up an alert system for RMAN backups. Exhibit 2 shows the architecture of the RMAN backup alert system.

Oracle DBAs may schedule daily RMAN backup jobs and resync catalog jobs using RMAN stored scripts in the recovery catalog (i.e., in the tables SCR and SCRL). These jobs are controlled by RMAN and performed by the target database server processes. An incremental level 0 backup backs up all blocks that have ever been used; incremental backups at levels greater than 0 back up only blocks that have changed since previous incremental backups. Typically, level 0 database backups are scheduled once a week, level 1 backups twice a week, and level 2 backups three times a week or more. Resync catalog jobs are scheduled more often than the backup jobs; how often depends on the number of archived redo log files generated each day. For example, they may run every 15 to 30 minutes. Note that DBAs should manually resynchronize the recovery catalog whenever structural changes have been made to the target database. It is critical for the DBAs to be alerted if anything unexpected happens.

By installing the CHECK_RMAN_BACKUP package in the RMAN database under the recovery catalog schema and adding a few lines that call the package's functions in an ALERT program (for example, alrtpatrol.tcl, written in Oratcl, which checks all database servers for exceptions and performance problems and is used at Highmark Life & Casualty Group), the DBA will be alerted or paged if any exception occurs. It is also possible to build a small Tcl script in Oracle Enterprise Manager and schedule it to run periodically. Because DATAFILE_BACKUP, REDOLOG_BACKUP, and CTRLFILE_BACKUP are functions in the CHECK_RMAN_BACKUP package, SQL*Plus or any other Oracle program interface can be used to retrieve the return values. The following is an example:

SELECT check_rman_backup.datafile_backup ('sig','02:30:00', 0),
       check_rman_backup.redolog_backup ('sig','02:30:00', 0),
       check_rman_backup.ctrlfile_backup ('sig','02:30:00', 0)
FROM DUAL;

Here "sig" is the target database name, "02:30:00" is the backup start time, and "0" means the check is for the current date.
CHECK_RMAN_BACKUP Package

CHECK_RMAN_BACKUP is a stored package that performs checking on RMAN backup and resync catalog operations based on the data stored in the RMAN recovery catalog. It was developed on Oracle 8.0.5 for Windows NT and implemented on Oracle 8.0.5.1.0 for IBM AIX. There are four main functions in the package:

1. DATAFILE_BACKUP
2. REDOLOG_BACKUP
3. CTRLFILE_BACKUP
4. RESYNC_RCVCAT

Each of the first three functions accepts a database name, a backup start time, and an offset (to the system date) as input parameters, and returns "1" if the specific backup completed successfully. If the backup has not completed, or there was no backup at all, the function returns "0." The RESYNC_RCVCAT function accepts a database name and a database link as input parameters, and returns "1" if the recovery catalog and the control file of the target database are in synchronization; otherwise it returns "0."


Exhibit 1. RMAN Recovery Catalog Tables and Their Primary/Unique Keys

AL — Primary key: AL_KEY. Unique key: DBINC_KEY, AL_RECID, AL_STAMP. Archived redo logs; corresponds to the V$ARCHIVED_LOG fixed view in the control file.

BCB — Unique key: BDF_KEY, BCB_RECID, BCB_STAMP. Corrupted block ranges in data file backups; corresponds to the V$BACKUP_CORRUPTION fixed view in the control file.

BCF — Primary key: BCF_KEY. Unique keys: (1) DBINC_KEY, BCF_RECID, BCF_STAMP; (2) BS_KEY. Control file backups in backup sets. (A backup data file record with file# 0 is used to represent the backup control file in the V$BACKUP_DATAFILE view.)

BDF — Primary key: BDF_KEY. Unique key: DBINC_KEY, BDF_RECID, BDF_STAMP. All data file backups in backup sets.

BP — Primary key: BP_KEY. Unique key: BS_KEY, BP_RECID, BP_STAMP. All backup pieces of backup sets. (A backup piece is a physical file in an RMAN-specific format that belongs to one and only one backup set.)

BRL — Primary key: BRL_KEY. Unique key: DBINC_KEY, BRL_RECID, BRL_STAMP. Backup redo logs; corresponds to the V$BACKUP_REDOLOG fixed view in the control file.

BS — Primary key: BS_KEY. Unique keys: (1) DB_KEY, BS_RECID, BS_STAMP; (2) DB_KEY, SET_STAMP, SET_COUNT. All backup sets for all database incarnations. (An RMAN-specific logical grouping of one or more backup pieces.)

CCB — Unique key: CDF_KEY, CCB_RECID, CCB_STAMP. Corrupt block ranges in data file copies; corresponds to the V$COPY_CORRUPTION fixed view in the control file.

CCF — Primary key: CCF_KEY. Unique key: DBINC_KEY, CCF_RECID, CCF_STAMP. Control file copies. (A data file copy record with file# 0 is used to represent the control file copy in the V$DATAFILE_COPY view.)

CDF — Primary key: CDF_KEY. Unique key: DBINC_KEY, CDF_RECID, CDF_STAMP. All data file copies.

CKP — Primary key: CKP_KEY. Unique key: DBINC_KEY, CKP_SCN, CKP_TYPE, CKP_CF_SEQ, CF_CREATE_TIME. All recovery catalog checkpoints.

DB — Primary key: DB_KEY. Unique key: DB_ID. All target databases that have been registered in this recovery catalog.

DBINC — Primary key: DBINC_KEY. Unique key: DB_KEY, RESET_SCN, RESET_TIME. All incarnations of the target databases registered in this recovery catalog.

DF — Primary key: DBINC_KEY, FILE#, CREATE_SCN. Unique keys: (1) DBINC_KEY, FILE#, DROP_SCN; (2) DBINC_KEY, TS#, TS_CREATE_SCN, FILE#. All data files of all database incarnations.

DFATT — Primary key: DBINC_KEY, FILE#, CREATE_SCN, END_CKP_KEY. Data file attributes that change over time.


Exhibit 1. RMAN Recovery Catalog Tables and Their Primary/Unique Keys (Continued)

OFFR — Primary key: OFFR_KEY. Unique keys: (1) DBINC_KEY, OFFR_RECID, OFFR_STAMP; (2) DBINC_KEY, FILE#, OFFLINE_SCN, CF_CREATE_TIME. Data file offline ranges.

ORL — Primary key: DBINC_KEY, FNAME. All redo log files for all database incarnations.

RCVER — Recovery catalog version.

RLH — Primary key: RLH_KEY. Unique keys: (1) DBINC_KEY, THREAD#, SEQUENCE#, LOW_SCN; (2) DBINC_KEY, RLH_RECID, RLH_STAMP. All redo log history for all threads.

RR — Primary key: RR_KEY. Redo log ranges for all database incarnations.

RT — Primary key: DBINC_KEY, THREAD#. All redo threads for all database incarnations.

SCR — Primary key: SCR_KEY. Unique key: DB_KEY, SCR_NAME. RMAN stored scripts.

SCRL — Primary key: SCR_KEY, LINENUM. RMAN stored script lines.

TS — Primary key: DBINC_KEY, TS#, CREATE_SCN. Unique keys: (1) DBINC_KEY, TS#, DROP_SCN; (2) DBINC_KEY, TS_NAME, CREATE_SCN. All tablespaces of all database incarnations.

TSATT — Primary key: DBINC_KEY, TS#, CREATE_SCN, END_CKP_KEY. Tablespace attributes that change over time.

First of all, it is important to be certain that the recovery catalog and the control file of the target database are in synchronization. Once that has been confirmed, it makes sense to verify that the backup completed successfully. There are two types of resync catalog operations performed by RMAN: partial resynchronization and full resynchronization. Partial resynchronizations transfer information to the recovery catalog about archived redo logs, backup sets, and data file copies. They will not transfer information such as new data files, new or removed tablespaces, or new or removed online log groups and members. Partial resynchronizations are initiated by RMAN before it performs backup, copy, restore, recovery, list, and report operations, if RMAN determines that resynchronization is necessary. They are also initiated by RMAN after it performs backup, copy, restore, switch, register, reset, and catalog operations. A full catalog resynchronization is an RMAN operation that refreshes the recovery catalog with all changed information in the database's control file. Full resynchronizations are normally initiated by DBAs, and can also be initiated by RMAN if it determines this action is necessary before executing certain commands.


Exhibit 2. The Architecture of the RMAN Backup Alert System
[Figure: Oracle server processes connect the target Oracle database (data files, online redo log files, control file) to Oracle Recovery Manager (RMAN), which runs the resync catalog job and the level 0, level 1, and level 2 backup jobs against the RMAN Oracle database containing the recovery catalog and the CHECK_RMAN_BACKUP package. An alert workstation running Oracle Enterprise Manager, SQL*Plus, and the alert program (alrtpatrol.tcl) queries the package.]

To verify that the recovery catalog is up-to-date, the most important thing is to ensure that the information about archived redo log files in the AL table is the same as that in the target database's control file (i.e., in the V$ARCHIVED_LOG view). In addition, it is important to verify that the data files, tablespaces, and online redo log files tally with those in the control file of the target database. RESYNC_RCVCAT accesses DF and V$DATAFILE to compare data files, TS and V$TABLESPACE to compare tablespaces, and ORL and V$LOGFILE to compare online redo log files. If any of the comparisons fails, the recovery catalog is not up-to-date.

In order to check whether the data file RMAN backup was successful, it is necessary to verify that all "normal" (not read-only and not dropped) data files have been backed up. In other words, the data files listed in the DF table should be included in the backup data file list (i.e., in the BDF table) for the checking period. DATAFILE_BACKUP, REDOLOG_BACKUP, and CTRLFILE_BACKUP use a one-day period (from the specified backup start date and time) as the checking period.


It is a little tricky to verify that the archived redo log backup completed successfully, because the names, the sequence numbers, and the number of archived redo log files change every day. The function REDOLOG_BACKUP_DATE is used to get the latest successful archived redo log backup date based on the specified backup start time. To do so, it retrieves the last two different dates based on the backup start time specified (from the BS table, which contains the completion time for each backup set), and obtains the maximum redo log sequence number backed up in the previous backup, treating the later date as the current one (from the BRL table). It then counts the number of archived redo log files to be backed up in the current backup, based on the maximum redo log sequence number and the current backup completion time (from the AL table). Finally, it gets the number of archived redo log files actually backed up during that period of time from the BRL table under the above conditions. If the two counts are the same, the archived redo log files have been successfully backed up, and the function returns the backup date to the caller. The REDOLOG_BACKUP function figures out whether the backup was successful using the date and time in question and the date returned by REDOLOG_BACKUP_DATE.

Note that as long as FILE #1 has been backed up, the control file was backed up too. It is easy for CTRLFILE_BACKUP to find the control file backup records from the BCF and BS tables (the latter contains the backup time) for the specified database and backup start time.

As mentioned earlier, RMAN uses the term "database incarnation" to identify the different "versions" of the same physical database. The incarnation of the database changes once it has been opened with the RESETLOGS option. To be certain the accessed data is related to the current database version, the GET_DBINC function can be used to return the current database incarnation according to the database name specified.

For flexibility, some functions in the CHECK_RMAN_BACKUP package accept an input parameter called "offset" (to the current date). Zero (0) indicates the current date, or SYSDATE; 1 indicates one day ago; and so on.

For the detailed logic, please refer to the PL/SQL source code in Oracle Internals' Code Depot on the World Wide Web at www.auerbach-publications.com/scripts.

Conclusion

By plugging in the CHECK_RMAN_BACKUP package, the DBA no longer needs to manually check the RMAN backup log files, nor to manually verify that the RMAN recovery catalog is up-to-date. It also makes sense to change the way the DBA handles RMAN backup verification: instead of pulling the messages from the backup log files, the DBA can run the package and get the results from the RMAN database. Furthermore, by constructing a small program, an alert system for RMAN backups can be set up so that the DBA is alerted only when something needs attention. This is especially helpful if there are many database systems in a 24x7 environment.
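As a sketch of such a program (the connection details and the alert hook are placeholders, not the author's code), a few lines of JDBC suffice to poll the package and page the DBA:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal poller for the CHECK_RMAN_BACKUP package. The host, SID,
// account, and alert hook below are placeholders, not the author's values.
public class RmanBackupAlert {
    public static void main(String[] args) throws Exception {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@rmanhost:1521:rman", "rcvcat", "rcvcat");
        String sql =
            "SELECT check_rman_backup.datafile_backup('sig','02:30:00',0), " +
            "check_rman_backup.redolog_backup('sig','02:30:00',0), " +
            "check_rman_backup.ctrlfile_backup('sig','02:30:00',0) FROM DUAL";
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(sql);
        rs.next();
        // Each function returns 1 on success and 0 on a missing or incomplete backup.
        if (rs.getInt(1) == 0) alert("data file backup failed or is missing");
        if (rs.getInt(2) == 0) alert("archived redo log backup failed or is missing");
        if (rs.getInt(3) == 0) alert("control file backup failed or is missing");
        rs.close();
        stmt.close();
        conn.close();
    }

    static void alert(String message) {
        System.err.println("RMAN ALERT: " + message);  // placeholder: page or e-mail the DBA
    }
}

Passing an offset of 1 instead of 0 in each call would check yesterday's backups instead of today's.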

Acknowledgment

I would like to thank Daniel Clamage for his contributions and comments.

Guang Sheng Wan is an Oracle DBA consultant. He may be contacted at wan.guang-sheng@highmark.com.

Enterprise Transformation and Data Management

Richard Lee

Globalization and rapid technological change have forever changed the competitive landscape. There have been great advances in information technology and telecommunications that accelerate productivity and supply chain integration.

This increasing sophistication, and the expectations of customers around the world, have implications for the way companies manage processes and data.

Whether it is through the Internet, information kiosks, or some other means, there now exists the "virtual customer," who decides what, when, where, and how they will purchase goods and services. Customers have virtual access through "cyberspace" to products and services to which they previously could not gain access. Not only do customers now have access; they will demand products and services "online" and almost in "zero-time." Consequently, this will have dramatic effects on the way companies manage, process, and organize data. Organizations that traditionally retrieve and use "old data" to plan, forecast, and execute will have a difficult time meeting customers' needs in the future.

Leading organizations have shifted their focus from cost to growth strategies. These companies are building flexibility and rapid-response capabilities into their products and services. They are redesigning business processes and leveraging technology to develop innovative, integrated solutions. Studies conducted by Deloitte Consulting conclude that the speed of adaptability to customers — not incumbency, size, or technological elegance — has become the chief determinant of success. In all regions and sizes of organizations, the ability to innovate and respond quickly to changing market conditions was cited as the most critical advantage. The most profitable companies recognize the power of twenty-first-century customers, are adapting to a new customer value paradigm, and are proactively changing the basis of competition.

Organizations that collect, leverage, and utilize data effectively will have a distinct advantage over their competitors. Organizations that excel at data management will be more efficient at rolling out their key capabilities into new markets. Success depends on linking the organization's strategic objectives with data management. There must be clear strategic decision making for data sharing, and the appropriate infrastructure — both technical and nontechnical. As well, organizations must address a number of issues, including the mindset to share data, the resources to capture and analyze the data, and the ability to find the right people and data.

Improvement on Processes

Almost all enterprise transformations have involved some degree of reengineering, whether minor incremental improvement or major dramatic change. Regardless of the degree of change, any transformation of an enterprise necessitates a reevaluation of the management of data.

Many enterprise transformations have improved processes through information technology integration and centralized data management. Integration allows a worldwide organization to run as a small business. Standardizing business processes all over the world with one system allows an organization to manage data in multiple languages and multiple currencies more effectively. Most IT managers cite standardizing business processes as the primary advantage of integrating computer systems. Organizations also benefit from the generation of better data through more effective utilization of resources. As well, the data that can be monitored through integrated systems allows an organization to know how it is actually producing, instead of knowing only the forecasted or theoretical capacity of an operation. Once an integrated system is in place, it can also be less expensive to operate.


Finding Profits in Data

Companies must recognize the potential for utilizing data for profit. Improved processes and integrated systems do not by themselves make enterprises more successful. To sustain growth and take advantage of enterprise transformation improvements, companies are consolidating internal databases, purchasing market research data, and retaining data longer in an effort to better focus their marketing. To serve their customers better, organizations are analyzing customers' behavioral characteristics. If an organization can track a customer's purchasing behavior — the kinds of things he or she likes and doesn't like — it can use its data mining tools and data warehouses to target-market the customer. There would be a lot less "junk mail," and better deals on what the customer really cares about.

However, building and managing data through centralized or decentralized databases is not an easy task. Many organizations have had difficulty finding adequate tools to manage such an undertaking; many IT departments have found that the tools required to manage these databases are inadequate, immature, or simply nonexistent.

The business process should dictate the tools and data required for users to perform the responsibilities that enhance the enterprise. Business processes should be developed to maximize the value of the enterprise to customers, employees, and shareholders. Identifying the correct tools and data translates into specific business goals that match the organization's unique objectives.

Critical Success Factors

Enterprise transformation impacts not only technical and organizational structures, but data structures as well. Companies must recognize the importance of managing information as an asset. Successful companies recognize the need to manage their data as carefully as they manage other valued assets. Critical success factors include managing data on an enterprisewide basis, managing data quality, assigning data ownership and empowerment, and developing long-term data strategies to support the enterprise.

Managing Data from an Enterprisewide Basis. An effective approach to managing data on an enterprise scale is to transform data from a "functional-silo" view to a business-unitwide and enterprisewide view. The data should address not only functional areas, such as sales, financials, and products, but also how their relationships with enterprise processes, such as marketing, finance, and distribution, are mapped.

The culture of an organization usually dictates how data is distributed across its business units. A large multinational organization whose business units operate autonomously may have disparate technology architectures and distributed local databases. This environment presents more of a challenge than that of companies that centralize current and detailed historical data while allowing each business unit to retain corresponding summary data.

The purpose of an enterprisewide view of data is to:

• Share data between multiple organizations or among business units that are critical to the enterprise.
• Identify and control data that have dependencies on other systems or subsystems.
• Improve the quality of data resources.
• Establish effective data change management processes that maximize the value of information while ensuring data quality.
• Minimize data duplication in collecting and processing information.
• Facilitate external partner data access and sharing.

Many companies can benefit from managing data on an enterprisewide basis. Data just sitting in a local database is just data. However, when data is shared on an enterprisewide scale, other business units may be able to take advantage of that data.


From a customer relationship perspective, when demographics and psychographics are added to customer data, the company can gain extensive knowledge about that customer. Business intelligence is gained when the results of a competitive analysis are added to base customer data. When this intelligence is utilized within the enterprise's planning process, it will result in value-added ways to manage and expand the customer base.

Managing Data Quality. One of the many benefits of improved data management through enterprise transformation is improved data quality through the elimination of data duplication. Most companies have many individuals throughout different parts of the organization who enter data relating to a certain process or business function. Consequently, it is very difficult to verify the integrity of the various types of data that are entered.

Data error can be attributed to a number of factors, including:

• multiple data entries from multiple users
• lack of corporate standards
• data distributed across disparate sources and legacy systems
• data redundancy between different applications
• data entry errors

An example of this was cited in an article in Automotive Manufacturing & Production. A first-tier automotive supplier with seven divisions that shared some of the same vendors did not have common vendor codes. Each division not only assigned its own unique identification number to each of those vendors; each division also had its own descriptions for the components supplied by those vendors. The seven divisions also had a mix of legacy hardware platforms, software applications, and financial systems. To say the least, monthly roll-ups of divisional product sales, as well as component and supplier costs, were difficult for the parent company to obtain.

In order to improve the quality of data, a number of approaches have been utilized throughout industry. Companies have reduced labor time and cost by achieving gains in the speed and accuracy of data entry. Directly related to this is a higher degree of work satisfaction among staff, because users do not have to re-key information. As a result, companies that invest upfront in improving labor and data entry processes will save a substantial amount of the effort and cost of providing batch work solutions later.

Assigning Data Ownership and Empowerment. Enterprise transformation typically leads to a change of responsibility for users. As a result, it is very important for organizations to manage data effectively. A number of roles involved in creating and distributing data are important for effective data management. The roles identified in this article are:

• Business Process Owners: The business process owner defines and maintains the processes and subprocesses across multiple business functions. Operational business processes may include marketing products/services, performing order management, procuring materials/services, managing logistics and distribution, and providing customer support. Infrastructure business processes may include performing financial management, managing human resources, managing information systems, and providing support services. Business process owners have the responsibility of evaluating and managing the impact that proposed changes to data have on their business processes.


• Data Owner: The data owner is usually a business-function manager who is responsible for the data resource. For example, a person in the finance department should own the tax-rate data. A data owner should be able to assess the validity of the data from a business point of view, and need not be a computer programmer. Data owners should drive and review proposed changes to the data and assess the impact on their own data.

Editor's Note: To read the conclusion of this article, please visit our Web site at www.auerbach-publications.com/scripts.

Richard Lee is a senior consultant, Operations Reengineering, Deloitte & Touche Consulting Group, Toronto, Ontario, Canada.
