
Oracle LogMiner

Carl Dudley

Staffordshire University, UK

EOUG SIG Director


UKOUG SIG Director
cdudley@staffs.ac.uk



Oracle8i LogMiner

Used to analyze redo log (including archived log) information

Can help to solve Database Management issues by supporting


Fine-grained recovery
Auditing
Tuning
Application Debugging

Provides similar functionality to the established 3rd party offerings


Much better than ALTER SYSTEM DUMP LOGFILE;



The LogMiner Environment

[Diagram: LogMiner takes the online redo log files and archived log files as input, together with a dictionary file extracted from the data dictionary, and presents its results through a fixed (v$) view]



Redo Log Analysis

DBAs can pinpoint when a logical corruption has occurred


Can be used to find limits (stop points) for incomplete
recovery

Fine-grained recovery can be used to perform table level undo


Prevents the need for a restore of the table to a previous state
followed by a roll forward

SQL statements are reconstructed and can be seen in v$logmnr_contents
  sql_undo holds the SQL statement which would undo the change
  sql_redo holds the SQL statement which would redo the change
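A minimal sketch of the idea (owner, table and operation are illustrative) - pulling out the undo SQL for an accidental delete so it can be spooled and replayed:

SELECT sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'SCOTT'
AND    seg_name  = 'EMP'
AND    operation = 'DELETE';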



The LogMiner Dictionary

The LogMiner requires a dictionary file to be built


Must first set a value for UTL_FILE_DIR in the parameter file
It uses the UTL_FILE package to build the dictionary

Dictionary file is created by the dbms_logmnr_d package


Extracts the data dictionary into an external (dictionary) file in
the directory pointed to by UTL_FILE_DIR
Has only one public procedure called build

PROCEDURE build (dictionary_filename IN VARCHAR2
                ,dictionary_location IN VARCHAR2);

The dbms_logmnr_d package is built by running the script


dbmslogmnrd.sql as sys (dbmslmd.sql on 8.1.6)



Building the LogMiner Dictionary

Open the database whose logs you wish to analyse

EXECUTE dbms_logmnr_d.build
  (dictionary_filename => 'orcldict.ora'
  ,dictionary_location => 'c:\lmnr');

To monitor the dictionary build, you can issue :-


SET SERVEROUTPUT ON



The Log of the Dictionary Creation
SQL> execute dbms_logmnr_d.build('test_dictionary7.ora','c:\rmanback');
LogMnr Dictionary Procedure started
LogMnr Dictionary File Opened
TABLE: OBJ$ recorded in LogMnr Dictionary File
TABLE: TAB$ recorded in LogMnr Dictionary File
TABLE: COL$ recorded in LogMnr Dictionary File
TABLE: SEG$ recorded in LogMnr Dictionary File
TABLE: UNDO$ recorded in LogMnr Dictionary File
TABLE: UGROUP$ recorded in LogMnr Dictionary File
TABLE: TS$ recorded in LogMnr Dictionary File
TABLE: CLU$ recorded in LogMnr Dictionary File
TABLE: IND$ recorded in LogMnr Dictionary File
TABLE: ICOL$ recorded in LogMnr Dictionary File
TABLE: LOB$ recorded in LogMnr Dictionary File
TABLE: USER$ recorded in LogMnr Dictionary File
TABLE: FILE$ recorded in LogMnr Dictionary File
TABLE: PARTOBJ$ recorded in LogMnr Dictionary File
TABLE: PARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABPART$ recorded in LogMnr Dictionary File
TABLE: INDPART$ recorded in LogMnr Dictionary File
TABLE: SUBPARTCOL$ recorded in LogMnr Dictionary File
TABLE: TABSUBPART$ recorded in LogMnr Dictionary File
TABLE: INDSUBPART$ recorded in LogMnr Dictionary File
TABLE: TABCOMPART$ recorded in LogMnr Dictionary File
TABLE: INDCOMPART$ recorded in LogMnr Dictionary File
Procedure executed successfully - LogMnr Dictionary Created
The Need for the Dictionary File

The dictionary file is used to translate internal object identifiers and data types into object names and external data formats
  Without it, LogMiner shows only internal object_ids and hexadecimal data in the sql_undo and sql_redo columns of v$logmnr_contents

insert into UNKNOWN.Objn:2875(Col[1],Col[2],Col[3],Col[4],


Col[5],Col[6],Col[7],Col[8]) values (HEXTORAW('c24f28'),
HEXTORAW('4b494e47202020202020'),HEXTORAW('505245534944454e54'),NULL
,HEXTORAW('77b50b11010101'),HEXTORAW('c233'),NULL,HEXTORAW('c10b'));

Changes to database schema require a new dictionary file to be built



Contents of the Dictionary File

CREATE_TABLE DICTIONARY_TABLE ( DB_NAME VARCHAR2(9), DB_ID NUMBER(20),

INSERT_INTO DICTIONARY_TABLE VALUES ('ORAC',665102398,'05/14/2000 20:54

CREATE_TABLE OBJ$_TABLE (OBJ# NUMBER(22), DATAOBJ# NUMBER(22),

INSERT_INTO OBJ$_TABLE VALUES (47,47,0,'I_CDEF1',4,'',1,to_date('0...


INSERT_INTO OBJ$_TABLE VALUES (17,17,0,'FILE$',1,'',2,to_date('02/...
INSERT_INTO OBJ$_TABLE VALUES (15,15,0,'UNDO$',1,'',2,to_date('02/...
INSERT_INTO OBJ$_TABLE VALUES (45,45,0,'I_CON1',4,'',1,to_date('02...
INSERT_INTO OBJ$_TABLE VALUES (5,2,0,'CLU$',1,'',2,to_date('02/27/...
INSERT_INTO OBJ$_TABLE VALUES (38,38,0,'I_FILE1',4,'',1,to_date('0...
INSERT_INTO OBJ$_TABLE VALUES (37,37,0,'I_ICOL1',4,'',1,to_date('0...
INSERT_INTO OBJ$_TABLE VALUES (40,40,0,'I_TS1',4,'',1,to_date('02/...
INSERT_INTO OBJ$_TABLE VALUES (53,53,0,'BOOTSTRAP$',1,'',2,to_date...

Note the underscore between the first two keywords


Prevents tables being created by inadvertent execution of the script



The dbms_logmnr Package

Created by dbmslogmnr.sql
(dbmslm.sql/prvtlm.plb on 8.1.6)

Has three procedures


add_logfile
-- Includes or removes a logfile in the list of logs to be
analyzed
start_logmnr
-- Initiates the analysis of the logs
end_logmnr
-- Closes the logminer session



Examining the Redo Log Files
The LogMiner must first be given a list of log files to analyze
EXECUTE dbms_logmnr.add_logfile
  (Options => dbms_logmnr.NEW
  ,logfilename => 'log1orcl.ora');

This action clears any existing list of logs to be analyzed and starts a new list, with log1orcl.ora first in the list

EXECUTE dbms_logmnr.add_logfile
  (Options => dbms_logmnr.ADDFILE
  ,logfilename => 'log2orcl.ora');

This action adds log2orcl.ora to the list of logs to be analyzed

EXECUTE dbms_logmnr.add_logfile
  (Options => dbms_logmnr.REMOVEFILE
  ,logfilename => 'log1orcl.ora');

This action removes log1orcl.ora from the list of logs to be analyzed



Examining the Redo Log Files (2)
Include all (three) online redo log files for analysis
Database can be in OPEN, MOUNT or NOMOUNT state

EXECUTE dbms_logmnr.add_logfile
  (Options => dbms_logmnr.NEW
  ,logfilename => 'log1orcl.ora');

EXECUTE dbms_logmnr.add_logfile
  (Options => dbms_logmnr.ADDFILE
  ,logfilename => 'log2orcl.ora');

EXECUTE dbms_logmnr.add_logfile
  (Options => dbms_logmnr.ADDFILE
  ,logfilename => 'log3orcl.ora');

Evidence of logs chosen for analysis can be found in v$logmnr_logs



Automatic Log List Generation (1)
Procedure to generate list of logs between specified periods
Important to limit analysis to conserve memory in UGA

CREATE OR REPLACE PROCEDURE log_spec
  (log_dir      IN VARCHAR2,
   log_list     IN VARCHAR2,
   start_period IN VARCHAR2,
   end_period   IN VARCHAR2)
AS
sql_stmt VARCHAR2(4000);
file_out UTL_FILE.FILE_TYPE;
counter NUMBER(4) := 1;
file_buff VARCHAR2(2000);
CURSOR log_cur IS SELECT name
FROM v$archived_log
WHERE first_time BETWEEN
TO_DATE(start_period,'dd-mon-yyyy hh24:mi:ss')
AND TO_DATE(end_period,'dd-mon-yyyy hh24:mi:ss')
OR next_time BETWEEN
TO_DATE(start_period,'dd-mon-yyyy hh24:mi:ss')
AND TO_DATE(end_period,'dd-mon-yyyy hh24:mi:ss');



Automatic Log List Generation (2)

BEGIN
file_out := UTL_FILE.FOPEN(log_dir,log_list,'w');
FOR log_rec IN log_cur LOOP
IF counter = 1 THEN
file_buff := 'DBMS_LOGMNR.ADD_LOGFILE
('''||LOG_REC.NAME||''',DBMS_LOGMNR.NEW);';
ELSE
file_buff := 'DBMS_LOGMNR.ADD_LOGFILE
('''||LOG_REC.NAME||''',DBMS_LOGMNR.ADDFILE);';
END IF;
UTL_FILE.PUT_LINE(file_out,'BEGIN');
UTL_FILE.PUT_LINE(file_out,file_buff);
UTL_FILE.PUT_LINE(file_out,'END;');
UTL_FILE.PUT_LINE(file_out,'/');
counter := counter + 1;
END LOOP;
UTL_FILE.FCLOSE(file_out);
END;



Automatic Log List Generation (3)
Generate a list of the target logs in listlogs.sql in the
UTL_FILE_DIR directory
BEGIN
  log_spec(log_dir => 'c:\logminer', log_list => 'listlogs.sql'
          ,start_period => '21-JAN-2001 13:45:00'
          ,end_period   => '21-JAN-2001 14:45:00');
END;

Contents of listlogs.sql

BEGIN
DBMS_LOGMNR.ADD_LOGFILE('D:\ORACLE\ORADATA\ORAC\ARCHIVE\ORACT001S00192.ARC'
,DBMS_LOGMNR.NEW);
END;
/
BEGIN
DBMS_LOGMNR.ADD_LOGFILE('D:\ORACLE\ORADATA\ORAC\ARCHIVE\ORACT001S00193.ARC'
,DBMS_LOGMNR.ADDFILE);
END;
/
...
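The generated script can then simply be run from SQL*Plus before starting the analysis (the path shown is illustrative and assumes c:\logminer is the UTL_FILE_DIR used above):

SQL> @c:\logminer\listlogs.sql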



Starting the Logminer

When conducting an analysis, the LogMiner can be given a list of log files or a period of activity to analyze

EXECUTE dbms_logmnr.start_logmnr
  (DictFileName => 'orcldict.ora'
  ,StartTime    => '01-dec-1999 09:00:00'
  ,EndTime      => '01-dec-1999 09:30:00');

This action populates v$logmnr_contents with information on activity between 09:00 and 09:30 on 1st Dec 1999



Viewing LogMiner Information
A simple example

SELECT sql_redo,sql_undo FROM v$logmnr_contents;

SQL_REDO
--------
delete from EMP where EMPNO = 7777 and ROWID = 'AAACOOAEBAADPCACA';
SQL_UNDO
--------
insert into EMP(EMPNO, SAL) values (7777,1500)

Without a dictionary file, you can expect to see this kind of output

insert into UNKNOWN.Objn:2875(Col[1],Col[2],Col[3],Col[4],


Col[5],Col[6],Col[7],Col[8]) values (HEXTORAW('c24f28'),
HEXTORAW('4b494e47202020202020'),HEXTORAW('505245534944454e54'),NULL
,HEXTORAW('77b50b11010101'),HEXTORAW('c233'),NULL,HEXTORAW('c10b'));



Viewing LogMiner Information (2)

Tracking Fred's activity at a particular time

SELECT username
,scn
,TO_CHAR(timestamp
,'dd-mon-yyyy hh24:mi:ss') time
,sql_redo
,sql_undo
FROM v$logmnr_contents
WHERE username = 'FRED'
AND    timestamp BETWEEN TO_DATE('01-DEC-1999 09:02:00','dd-mon-yyyy hh24:mi:ss')
                     AND TO_DATE('01-DEC-1999 09:04:00','dd-mon-yyyy hh24:mi:ss');



Limitations/Features of the Logminer (1)
Cannot cope with Objects and chained/migrated rows
SQL_UNDO
---------------------------------
Unsupported (Chained Row)

LogMiner generates low-level SQL, not what was actually issued
Cannot fully trace an application
-- If an update statement updates three rows, three separate row-level update statements are shown in the SQL_UNDO column

UPDATE emp SET sal = 4000 WHERE deptno = 10;

3 rows updated.

SQL_UNDO
------------------------------------------------------------
update EMP set SAL = 2450 where rowid = 'AAACOOAEBAADPCACI';
update EMP set SAL = 1300 where rowid = 'AAACOOAEBAADPCACN';
update EMP set SAL = 5000 where rowid = 'AAACOOAEBAADPCACG';



Limitations/Features of the Logminer (2)
Requires an up to date dictionary file to produce meaningful output

DDL is not supported (e.g. CREATE TABLE)


Cannot access SQL on the data dictionary in a visible form
Shows updates to base dictionary tables due to DDL operations
-- Evidence of CREATE TABLE can be seen as an insert into tab$

Reconstructed SQL is generated only for


DELETE, UPDATE, INSERT and COMMIT

ROLLBACK does not generate any data for sql_undo


A rollback flag is set instead

SQL_REDO       SQL_UNDO      ROLLBACK
-------------  ------------  --------
insert ...     delete ...           0
delete ...     <null>               1



Limitations/Features of the LogMiner (3)

All LogMiner memory is in the PGA so all information is lost when the LogMiner session is closed
LogMiner cannot be used in an MTS environment
v$logmnr_contents can be seen only by the LogMiner session
Make permanent with CTAS
-- Avoids having to reload the information when in a new session

CREATE TABLE logmnr_tab01 AS
  SELECT * FROM v$logmnr_contents;

v$logmnr_contents is not currently indexed
  May be efficient to CTAS and build indexes on the table
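For example (the choice of columns to index is purely illustrative), a search index could then be added to the copied table:

CREATE INDEX logmnr_tab01_i1
  ON logmnr_tab01 (username, timestamp);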



Analysing Oracle 8.0 Logfiles

Logminer can build a dictionary file from an Oracle 8.0 database using
dbms_logmnr_d.build

The dictionary file can be used to analyse Oracle 8.0 logs


All analysis must be done while connected to an Oracle 8.1 database

LogMiner can be used against ANY foreign version 8.x database with the appropriate foreign dictionary file
  Database must have the same character set and hardware platform



Calculating Table Access Statistics
Start LogMiner to determine activity during a two week period in August

EXECUTE dbms_logmnr.start_logmnr(
   StartTime => '07-Aug-99', EndTime => '15-Aug-99'
  ,DictFileName => '/usr/local/dict.ora');

Query v$logmnr_contents to determine activity during the period

SELECT seg_owner, seg_name, COUNT(*) AS Hits
FROM   v$logmnr_contents
WHERE  seg_name NOT LIKE '%$'
GROUP BY seg_owner, seg_name;

SEG_OWNER  SEG_NAME  Hits
---------  --------  ----
FINANCE    ACCNT      906
SCOTT      EMP         54
FINANCE    ORD        276
SCOTT      DEPT        39



The Logminer Views
v$logmnr_dictionary
Information on dictionary file in use

v$logmnr_logs
Information on log files under analysis
Log sequence numbers for each log
-- If the log files do not have consecutive sequence
numbers, an entry is generated signifying a gap
High and low SCNs, high and low times of all currently
registered logs

v$logmnr_parameters
Current LogMiner session parameters
High and low SCNs, high and low times, info, status

v$logmnr_contents
Results of analysis
Contains masses of information ( > 50 columns! )
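As a quick check that the expected logs have been registered, v$logmnr_logs can be queried; the column list below is an assumption based on the bullets above and may vary by release:

SELECT log_id, filename, low_time, high_time
FROM   v$logmnr_logs;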



Columns in v$logmnr_contents
operation
Shows type of SQL operation
-- INSERT, UPDATE, DELETE, COMMIT, BEGIN_TRANSACTION
-- All other operations are reported as UNSUPPORTED or
INTERNAL_OPERATION

seg_type_name
Only tables are supported in the first release

rollback
1 represents a rollback operation otherwise 0

xidusn, xidslt, xidsqn
  Combination of these three columns identifies a transaction

row_id
The ROWID of the affected row
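A minimal sketch of grouping the mined rows into transactions using the three identifier columns:

SELECT xidusn, xidslt, xidsqn, COUNT(*) AS changes
FROM   v$logmnr_contents
GROUP  BY xidusn, xidslt, xidsqn;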



Tracking Changes made by a Transaction

Typical scenario - you have found an anomaly in a column

Use the column mapping facility to find operations which have affected that column

Use a query on the relevant placeholder column in v$logmnr_contents to find the values of xidusn, xidslt, and xidsqn for the offending operation

Search v$logmnr_contents for operations containing matching transaction identifiers
-- Will show operations of any firing database triggers
-- This will enable you to gather the necessary undo to roll the entire transaction back
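A sketch of that final step, assuming the offending transaction was identified as xidusn=5, xidslt=12, xidsqn=801 (illustrative values); applying the sql_undo statements in reverse SCN order rolls the work back:

SELECT sql_undo
FROM   v$logmnr_contents
WHERE  xidusn = 5
AND    xidslt = 12
AND    xidsqn = 801
ORDER  BY scn DESC;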



Tracking Changes to Specific Columns(1)
LogMiner can be used to monitor changes on specific columns
Need to create a file called logmnr.opt
MUST be held in the same directory as the dictionary file

LogMiner must be started with the Options parameter


set to dbms_logmnr.USE_COLMAP

EXECUTE dbms_logmnr.start_logmnr(
   DictFileName => '/usr/local/dict.ora'
  ,Options      => dbms_logmnr.USE_COLMAP);



Tracking Changes to Specific Columns(2)
logmnr.opt has a very rigid format
Each entry has to be syntactically and lexically correct
colmap = SCOTT EMP (1, ENAME);

v$logmnr_contents has five sets of placeholder columns showing the name of the column, the original value and the new value

PH1_NAME, PH1_REDO, PH1_UNDO
PH2_NAME, PH2_REDO, PH2_UNDO
PH3_NAME, PH3_REDO, PH3_UNDO
PH4_NAME, PH4_REDO, PH4_UNDO
PH5_NAME, PH5_REDO, PH5_UNDO

Changes to the ename column in Fred's emp table will be shown in the first set (PH1...) of placeholder columns
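A minimal query sketch against the placeholder columns (assuming the colmap entry shown above):

SELECT scn, ph1_name, ph1_undo, ph1_redo
FROM   v$logmnr_contents
WHERE  seg_name = 'EMP'
AND    ph1_name = 'ENAME';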



Tracking Changes to a Schema
Scripts can be written to generate entries for logmnr.opt

The following package populates placeholder columns with changes on any table in a given schema
  Must be created under SYS

CREATE OR REPLACE PACKAGE filesave AS
  FUNCTION logmnr(directory  VARCHAR2
                 ,schemaname VARCHAR2
                 ,filename   VARCHAR2 DEFAULT 'logmnr.opt')
  RETURN INTEGER;
END filesave;



Schema Changes - Package Body (1)
CREATE OR REPLACE PACKAGE BODY filesave AS
FUNCTION logmnr( directory VARCHAR2,
schemaname VARCHAR2,
filename VARCHAR2 DEFAULT 'logmnr.opt')
RETURN INTEGER IS
CURSOR curtab IS
SELECT object_name table_name
FROM dba_objects
WHERE owner=UPPER(schemaname) AND object_type='TABLE'
ORDER BY object_id ASC;
t_tab curtab%ROWTYPE;
CURSOR curcol (v_schemaname VARCHAR2,v_table_name VARCHAR2) IS
SELECT column_name FROM dba_tab_columns
WHERE owner=UPPER(v_schemaname) AND table_name=v_table_name
AND column_id < 6 ORDER BY column_id ASC;
t_col curcol%ROWTYPE;
i NUMBER;
v_errorcode NUMBER;
v_errormsg VARCHAR2(255);
v_line VARCHAR2(255);
v_sep VARCHAR2(5);
v_map VARCHAR2(30) := 'colmap = ' || UPPER(schemaname) ||' ';
v_return INTEGER:=0;
v_handle UTL_FILE.FILE_TYPE;



Schema Changes - Package Body (2)
BEGIN
v_handle := UTL_FILE.FOPEN(Directory,filename,'w');
FOR t_tab IN curtab LOOP
v_line := NULL; v_sep := NULL; i:=1;
FOR t_col IN curcol(SchemaName,t_tab.table_name) LOOP
v_line := v_line||v_sep||TO_CHAR(i)||', '||t_col.column_name;
v_sep := ', ';
i:= i+1;
END LOOP;
UTL_FILE.PUT_LINE( v_handle,v_map||t_tab.table_name||' ('||v_line||');');
UTL_FILE.FFLUSH(v_handle);
dbms_output.put_line(v_map||t_tab.table_name||' ('||v_line||');');
END LOOP;
UTL_FILE.FFLUSH(v_handle); UTL_FILE.FCLOSE(v_handle);
RETURN(v_return);
EXCEPTION
WHEN OTHERS THEN
v_errorcode := SQLCODE;
v_errormsg := SUBSTR(RTRIM(LTRIM(SQLERRM)),1,200);
dbms_output.put_line('error : >');
dbms_output.put_line(v_errormsg);
IF UTL_FILE.IS_OPEN(v_handle) then
UTL_FILE.FFLUSH(v_handle); UTL_FILE.FCLOSE(v_handle);
END IF;
RETURN(-1);
END logmnr;
END filesave;



Construction of logmnr.opt

SET SERVEROUT ON SIZE 20000

DECLARE
  v INTEGER;
BEGIN
  v := filesave.logmnr('c:\temp','SCOTT');
END;
/

colmap = SCOTT DEPT (1, DEPTNO, 2, DNAME, 3, LOC);
colmap = SCOTT EMP (1, EMPNO, 2, ENAME, 3, JOB, 4, MGR, 5, HIREDATE);
colmap = SCOTT BONUS (1, ENAME, 2, JOB, 3, SAL, 4, COMM);
colmap = SCOTT SALGRADE (1, GRADE, 2, LOSAL, 3, HISAL);
colmap = SCOTT TEST (1, COL1);
colmap = SCOTT TEST2000 (1, COL1);

Only the first five columns in each table can be accommodated



Placeholder Data

Changes to the ename column in Fred's emp table will be shown in the
first set (PH1...) of placeholder columns
Undo and redo values for any transaction making changes to ename in
the emp table can be seen in the ph1_undo and ph1_redo columns
The SQL can also be seen in sql_redo and sql_undo, but the
placeholders make searching for the information much easier

  SCN SQL_REDO                                  PH1_UNDO  PH1_REDO
----- ----------------------------------------- --------  --------
12763 UPDATE emp SET ename = 'JACKSON' WHERE..  WARD      JACKSON
12763 UPDATE emp SET ename = 'JACKSON' WHERE..  FORD      JACKSON
12764 UPDATE dept SET dname = ..                <<NULL>>  <<NULL>>
12765 DELETE FROM dept WHERE ..                 <<NULL>>  <<NULL>>
12765 INSERT INTO emp VALUES(1111,'REES'..      <<NULL>>  REES
12765 DELETE FROM emp WHERE ename = 'COX'       COX       <<NULL>>



Tracking DDL Statements

DDL is not directly shown

The effect of a DDL statement can be seen by looking for changes made by the SYS user to the base dictionary tables
The timestamp column in v$logmnr_contents can be used to find the time at which the DDL was issued
Useful for finding the timing of a DROP TABLE statement
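As a sketch (the table name and format mask are illustrative), the delete from obj$ that marks a dropped table could be located like this:

SELECT TO_CHAR(timestamp,'dd-mon-yyyy hh24:mi:ss') drop_time
      ,sql_redo
FROM   v$logmnr_contents
WHERE  seg_owner = 'SYS'
AND    seg_name  = 'OBJ$'
AND    operation = 'DELETE'
AND    sql_redo LIKE '%''EMPLOYEES''%';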



Table Creation

Create a new table

CREATE TABLE employees (...

Evidence of table creation can be seen in sql_redo

insert into SYS.OBJ$(OBJ#,DATAOBJ#,OWNER#,NAME,NAMESPACE
  ,SUBNAME,TYPE#,CTIME,MTIME,STIME
  ,STATUS,REMOTEOWNER,LINKNAME,FLAGS)
values (2812,2812,19,'EMPLOYEES',1,NULL,2
,TO_DATE('09-MAR-2000 19:08:12','DD-MON-YYYY HH24:MI:SS')
,TO_DATE('09-MAR-2000 19:08:12','DD-MON-YYYY HH24:MI:SS')
,TO_DATE('09-MAR-2000 19:08:12','DD-MON-YYYY HH24:MI:SS')
,1,NULL,NULL,0);

Note obj# = dataobj#



Trigger Creation
Evidence of trigger creation can be seen in sql_redo

set transaction read write;


insert into SYS.TRIGGER$(OBJ#,TYPE#,UPDATE$,INSERT$,
DELETE$,BASEOBJECT,REFOLDNAME,REFNEWNAME,DEFINITION,
WHENCLAUSE,REFPRTNAME,ACTIONSIZE,ENABLED,PROPERTY,
SYS_EVTS,NTTRIGCOL,NTTRIGATT,ACTION#)
values (2811,0,0,0,0,0,NULL,NULL,NULL,NULL,NULL,73,NULL,
0,0,0,0,
'BEGIN
IF :NEW.SAL > 2*:OLD.COMM
THEN :NEW.SAL := :OLD.SAL;
END IF;
END;



Oracle9i LogMiner - Release 1
Full support for analysis of DDL statements

Use of online data dictionary

Use of dictionary definitions extracted into the redo stream

Automatic detection of dictionary staleness

Can now skip corrupted blocks in the redo stream

Limit analysis to committed transactions only

Analyse (MINE) changes to particular columns

GUI frontend - Oracle LogMiner Viewer (within OEM)


Easier to learn and use



Oracle9i LogMiner Dictionary Usage (1)
Data dictionary can be extracted into the redo stream with new option

EXECUTE dbms_logmnr_d.build
(options => dbms_logmnr_d.STORE_IN_REDO_LOGS);
No dictionary file to be managed
Dictionary is backed up via the redo logs
-- Produces lots of redo (minimum 12Mb) - but faster than flat file
Database must be in archivelog mode
No DDL allowed during extraction, so dictionary is consistent

To use in the analysis session, specify


EXECUTE dbms_logmnr.start_logmnr
(options => dbms_logmnr.DICT_FROM_REDO_LOGS);

The logmnr... dictionary views are populated


-- The objv# columns track version changes



Oracle9i LogMiner Dictionary Usage (2)

The online data dictionary can also be used to analyze the logs
Available when database is open
Specify the following in dbms_logmnr.start_logmnr procedure

EXECUTE dbms_logmnr.start_logmnr
(options => dbms_logmnr.DICT_FROM_ONLINE_CATALOG);

Use this to analyze recent redo logs or when the structure of objects
under test has not changed
Absence of mismatches between online dictionary and redo
log contents is assumed



Oracle9i LogMiner Dictionary Usage (3)

Tracking the Dictionary Changes


Update internal dictionary to keep track of changes in the redo stream
Can concatenate multiple options in the options clause

EXECUTE dbms_logmnr.start_logmnr
(options => dbms_logmnr.DICT_FROM_REDO_LOGS
+ dbms_logmnr.DDL_DICT_TRACKING);

Dictionary staleness can be detected


LogMiner knows about different object versions
Relevant when the dictionary is in a flat file or in the redo logs



Oracle9i LogMiner Support for DDL
Now shows DDL (as originally typed) plus details of user and process
issuing the statement

SELECT sql_redo FROM v$logmnr_contents
WHERE  operation = 'DDL'
AND    seg_owner = 'FRED';

create table fred.dept(deptno number, dname varchar2(20));

SELECT sql_redo FROM v$logmnr_contents
WHERE  operation = 'DDL'
AND    username  = 'SYS';

create user fred identified by values 'C422E820763B4D55';



Oracle9i Supplemental Logging

Columns which are not actually changed can be logged


Use this to identify rows via primary keys
-- Not dependent on ROWID - gives portability
-- Invalidates all DML cursors in the cursor cache

Database supplemental logging


Columns involved in primary and unique keys can be logged
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA
(PRIMARY KEY, UNIQUE INDEX) COLUMNS;

To turn off supplemental logging


ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;



Oracle9i Supplemental Logging Example

Example update statement

UPDATE emp SET sal = 1500 WHERE deptno = 10;

sql_redo without supplemental logging

update SCOTT.EMP set SAL = 1500
  where DEPTNO = 10 and ROWID = 'AAACOOAEBAADPCACI'

sql_redo with supplemental logging

update SCOTT.EMP set SAL = 1500
  where DEPTNO = 10 and EMPNO = 7782
  and ROWID = 'AAACOOAEBAADPCACI'



Table Level Supplemental Logging
Specify groups of columns to be logged
ALTER TABLE emp
ADD SUPPLEMENTAL LOG GROUP emp_log (ename,sal,comm);

ALTER TABLE emp
  ADD SUPPLEMENTAL LOG GROUP emp_log (ename,sal,comm) ALWAYS;

For each changed row in a specific table
  If ALWAYS is used
  -- All columns in the group are logged if ANY column in the row is changed
  If ALWAYS is not used
  -- All columns in the group are logged only if at least one column in the group is changed
Observe in dba_log_groups, dba_log_group_columns
Remove log groups using a DROP statement
ALTER TABLE emp DROP SUPPLEMENTAL LOG GROUP emp_log;
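A quick way to check what has been defined (a sketch against the dba_log_groups view mentioned above; column names assumed):

SELECT log_group_name, table_name, always
FROM   dba_log_groups
WHERE  owner = 'SCOTT';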



Oracle9i Query by Data Value
mine_value
Returns actual value of data being changed
Can mine data using the redo_value and undo_value columns in v$logmnr_contents

SELECT dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL')
FROM   v$logmnr_contents;

Returns the new value of sal from each redo record
  If the column is not changed, it returns NULL
  Could be used to find how many salaries were updated to > $50K
  Which employees have received new salaries
  Who had more than a 10% increase

SELECT sql_redo
FROM   v$logmnr_contents
WHERE  dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') >
       dbms_logmnr.mine_value(undo_value,'FRED.EMP.SAL') * 1.1
AND    operation = 'UPDATE';
Oracle9i Query by Data Value (continued)
column_present
  Distinguishes between the different types of NULL in redo_value and undo_value by returning :
  0 if the column has not been changed
  1 if the column is present in the redo record (so a NULL from mine_value means it was genuinely set to NULL)

SELECT dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL')
      ,sql_redo
FROM   v$logmnr_contents
WHERE  dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') IS NOT NULL
OR    (dbms_logmnr.mine_value(redo_value,'FRED.EMP.SAL') IS NULL
       AND dbms_logmnr.column_present(redo_value,'FRED.EMP.SAL') = 1);

No need for placeholder columns any more



Oracle9i LogMiner - New Features

Display information from committed transactions only


Rows in v$logmnr_contents are grouped by transaction identifier
EXECUTE dbms_logmnr.start_logmnr(options => ...
  + dbms_logmnr.COMMITTED_DATA_ONLY);

Populates the cscn column and can show rolled back transactions
SCN CSCN SQL_REDO
---- ---- -------------------------------------------------------------
2012 2017 insert into FRED.DEPT values(50,'ARTS','PARIS');
2016 2017 delete from FRED.DEPT where ROWID = 'AAACOOAEBAADPCACI';

Continue past corruptions in the redo log


The info column in v$logmnr_contents shows 'Log file corruption encountered'
EXECUTE dbms_logmnr.start_logmnr(options => ...
  + dbms_logmnr.SKIP_CORRUPTION);
Oracle9i LogMiner Viewer [screenshots of the OEM-based GUI]





Oracle9i Release 2 LogMiner Logical Standby

Logical Hot Standby


LogMiner is used to generate (imitate) SQL statements to keep the
standby in step
-- Automatic gathering of sections of redo logs used in logical
standbys
Allows the standby to be open for normal use
Can accommodate extra objects such as materialized views and
indexes



Oracle9i Release 2 LogMiner dbms_logmnr_session

Allows persistent sessions (LOGMNR_MAX_PERSISTENT_SESSIONS)


LogMiner core architecture has been redesigned

Support of long-running applications based on the redo streams


Allows the saving of the mining context and the resumption
of mining at some later time

Can be used to tune the performance of mining


Employ multiple processes
Memory usage (how much of the SGA)
Disk usage (fraction of tablespace staged for large transactions)
Log read frequency



Oracle9i Release 2 LogMiner Persistent Sessions

Can be programmed to run continuously and wait for new redo logs
Set wait_for_log flag in
dbms_logmnr_session.create_session

The Remote File Server can be set up to automatically add archived


log files to a persistent LogMiner session
Set auto_add_archived flag in
dbms_logmnr_session.create_session

Continuous analysis requires listener.ora, init.ora and


tnsnames.ora files to be configured
Similar to the requirements for hot standby



Oracle9i LogMiner (continued)

LogMiner now supports
  Index-Organized Tables (NO)    Clustered tables
  Chained rows                   LOBs and LONGs (NO)
  Direct loads                   Scalar object types
  DDL statements (triggers?)     Parallel DML

Still no support for
  Objects (again!) - nested tables, VARRAYs, REFs



Oracle LogMiner

Carl Dudley

Staffordshire University, UK

EOUG SIG Director


UKOUG SIG Director
cdudley@staffs.ac.uk

