
ORACLE DATABASE ADMINISTRATION

HANDBOOK

BY

Y.SREENIVASA REDDY
JTO(DOTSOFT)
BASICS OF THE ORACLE DATABASE ARCHITECTURE

Topics

1. The Oracle Architecture


2. Starting and Stopping the Oracle Instance
3. Creating an Oracle Database.

The Oracle Architecture contains the following:

1. Oracle memory Structures


2. Oracle background Processes
3. Oracle disk utilization structures

Oracle Instance: All three of the above structures of the Oracle database server, running together to
allow users to read and modify data, are referred to as an Oracle Instance.

ORACLE MEMORY STRUCTURES:

This set of memory components represents a “living” version of Oracle that is available
only when the instance is running.

There are 2 basic memory structures in the Oracle Instance:

1. System Global Area (SGA)


2. Program Global Area (PGA)

The Oracle SGA: This contains

1. The Buffer Cache


2. The Shared Pool
3. The redo log buffer
4. Large Pool (from Oracle 8 onwards) (*)
5. Java Pool (from Oracle 8 onwards)

Buffer Cache: The Buffer Cache consists of buffers that are the size of database blocks.

The buffers are designed to store data blocks recently used by user SQL statements, in
order to improve performance for subsequent selects and data changes.

Shared Pool: This has 2 required components and one optional component. The required
components are:

1. Library Cache (Shared SQL Area)
2. Dictionary Cache (Row Cache)

Optional Component: Session information for user processes connecting to the Oracle
Instance.

Redo Log Buffer: This stores online redo log entries in memory until they can be written to
disk.

(*) Usage of Shared Pool:

-Library Cache (Shared SQL Area)

This caches:

1. SQL/PLSQL statements
2. The parsed form of SQL statements
3. Execution plans

- Oracle uses a hash value to search for SQL and PL/SQL statements in the cache area of the
Library Cache.
- There is a limit on the no. of statements that can be hashed.
- The LRU (Least Recently Used) algorithm is used to flush out SQL statements from the
cache.

-Row Cache (Dictionary Cache) :

- Caches the data dictionary to improve response time on DD queries.

The benefit of caching the DD is:

- Since all user processes and the Oracle database internal processes use the DD, the
database as a whole benefits in terms of performance from the presence of the cached
DD in memory.

(*) The size of the Library Cache and Row Cache is managed internally by
Oracle and the user has no control over it.
(*) If there is contention for memory between the library cache and the row cache, the row
cache is given priority.

Java Pool (8i) :

- Used for caching Java objects.
- An ordinary Java file can be stored in the Java Pool as a database object.
- The minimum size of the Java Pool is 20 MB.

Usage of Database Buffer Cache: The size of the db buffer cache is controlled by the INIT.ORA
parameter DB_BLOCK_BUFFERS = integer value.

DB_BLOCK_BUFFERS specifies the no. of database blocks that will be held in the db
buffer cache.
The size of the database buffer cache is the value of DB_BLOCK_BUFFERS *
DB_BLOCK_SIZE.

Eg :- If DB_BLOCK_SIZE = 8192 (8K) and DB_BLOCK_BUFFERS is set to 1000, then the
db buffer cache portion of the SGA = 8192 x 1000 = 8,192,000 bytes.

Functionality:

- When a user process needs data from database Oracle first checks to see if the required
block is already in the db buffer cache. If it is, it is retrieved from the cache and a disk
I/O operation is avoided.
- Oracle maintains an LRU list of blocks in the cache; when a block is read, its identifier is
moved to the end of the list, making it the last block to be purged from the cache.
- One exception to this rule is that blocks read as the result of a full table scan are placed at
the top of the LRU list. The assumption is that they are unlikely to be required again
soon.
- This behavior may be overridden by using the CACHE clause when creating or altering a
table.

(*) If the requested block is not resident in the db buffer cache, then a free block must be found.
If the cache is full, the block at the top of the LRU list is removed and the requested block is
read from disk.

- If the block at the top of the LRU list is a “dirty block” – meaning that its data has been
modified – then the identifier for that block is moved to the dirty list to await processing by the
DBWR process.
- If a free buffer block cannot be located, the DBWR process is notified that its services are
required, and the dirty blocks are written to disk.

HIT / MISS Ratio:

This is an important metric for database tuning.

- When a block of data is not available in the database buffer cache and must be read from
disk, it is a cache miss.
- When the block is available in the buffer cache, it is a cache hit.

The ratio of these two occurrences is the hit/miss ratio.

- The larger the db buffer cache, the better the hit/miss ratio will be. Since memory access is
orders of magnitude faster than disk access, most Oracle databases are configured with the
largest possible database buffer cache.

(*) Care must be taken not to set DB_BLOCK_BUFFERS so high that paging results, since
paging degrades the performance of the database much more than an Oracle I/O operation from
disk.
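The hit ratio can be estimated from the cumulative counters in V$SYSSTAT. A minimal
sketch, using the conventional approximation hit ratio = 1 - physical reads / (db block gets +
consistent gets):

SQL> SELECT 1 - (phy.value / (db.value + con.value)) "Cache Hit Ratio"
     FROM v$sysstat phy, v$sysstat db, v$sysstat con
     WHERE phy.name = 'physical reads'
     AND db.name = 'db block gets'
     AND con.name = 'consistent gets';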

Redo Log Buffer: The redo log buffer holds information about changes to the database, called
redo log entries. These entries are used if database recovery is necessary, and they contain the
information required to reconstruct changes made by INSERT, UPDATE, DELETE, CREATE,
DROP or ALTER statements.

Functionality:

- It is circular, i.e., when it is full, entries are written to it from the beginning.
- The LGWR process writes the contents of the redo log buffer to the active redo log file
on disk.
- The size of the redo log buffer is determined by the INIT.ORA parameter LOG_BUFFER,
in bytes.

(*) The default value of this is 4 times DB_BLOCK_SIZE, but it is often desirable to set
the value higher, particularly if there are many or long transactions generating high rates of redo.
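For illustration, a minimal INIT.ORA fragment pulling together the sizing parameters discussed
so far (the values are hypothetical examples, not recommendations):

# init.ora fragment (example values only)
DB_BLOCK_SIZE    = 8192      # fixed at database creation
DB_BLOCK_BUFFERS = 1000      # buffer cache = 1000 x 8192 = 8,192,000 bytes
LOG_BUFFER       = 163840    # redo log buffer size, in bytes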

Private SQL Area:

- This is part of the PGA.

- It contains information unique to each user, such as bind variables and runtime buffers,
and is in turn associated with the Library Cache (i.e. the shared SQL area) that contains the actual
parsed SQL code.

When a SQL statement is executed by user process, the following steps take place :

1. Oracle checks to see if the SQL statement is already in the shared SQL area. If so, the
parsed version is used for execution. If not, the statement is parsed and placed in the
shared SQL area.
2. A private SQL area is allocated and assigned to the user process.

The user process maintains the private SQL area. The no. of SQL statements that can be
maintained in the private SQL area is limited by the INIT.ORA parameter OPEN_CURSORS,
which has a default value of 50.

NOTE:

- If a user process is connected to the instance via a dedicated server process (i.e. MTS is
not in use), the private SQL area is allocated in the user’s PGA, not in the SGA.
- However, if the process is connected via a shared server process, the private SQL area
will be allocated in the SGA.
CONTROL STRUCTURES: The SGA contains other data required for the operation and control
of the Oracle Instance, including information about the state of database, the instance, locks and
individual process.

This information is shared across all processes for the instance. No user data is stored in
this area.

The Program Global Area (PGA) :

• Not shared and not writable
• Contains
- Sort area
- Session information
- Cursor state
- Stack space

- The PGA is a memory region that contains data and control information for a single server
process or a single background process.
- It contrasts with the SGA, which is shared and written to by several processes.

The PGA is also known as the Process Global Area; it is used by only one process.

When using the dedicated server configuration, the PGA contains the sort area, session
information, cursor state and stack space listed above.

ORACLE BACKGROUND PROCESSES :

DBWR :

- Database Writer process.

- Handles all data writes to disk.
- It works in conjunction with the Oracle database buffer cache memory structure.
- It prevents users from ever having to access a disk directly to perform a data change
such as an update, insert or delete.

LGWR :

- Log Writer process.

- This background process handles the writing of redo log entries from the redo log
buffer to the online redo log files on disk.
- This process also writes the log sequence number of the current online redo log to
the data file headers and to the control file.
- Finally, LGWR handles initiating the process of clearing the dirty buffer write
queue.

At various times, depending on database configuration, those updated blocks are written
to disk by DBWR. These events are called checkpoints. LGWR handles telling DBWR to write
the changes.

NOTE: The LGWR performs sequential writes to the redo log file under the following situations

- When the redo log buffer is one-third full.


- When the timeout occurs (every 3 seconds)
- Before DBWR writes modified blocks in the database buffer cache to the data
files.
- When a transaction commits

SYSTEM MONITOR (SMON): The System Monitor is responsible for performing the following
operations:

- Instance recovery at startup


- Cleanup of unneeded temporary segments.
- Coalescing of contiguous free extents.
- Instance recovery for a failed CPU or instance in a parallel server environment.

The SMON process wakes up on a regular basis to check whether any of these operations
are required. In addition, other background processes can wake up SMON if they require one of
these services.

The Process Monitor (PMON): PMON is responsible for:

- Process recovery when a user process fails, including cache cleanup and
freeing of resources the process was using.
- Rollback of uncommitted transactions.
- Release of locks held by failed or terminated processes.
- Restart of failed dispatcher and shared server processes.

The PMON process wakes up on a regular basis to check whether any of these operations
are required. In addition, other background processes can wake up PMON if they require one of
these services.

Other Background Processes: (Optional Processes) : These other background processes may be
active, depending on the Oracle options installed and the specific configuration of the instance.

Check Point Process (CKPT): Always exists in Oracle 8; optional in Oracle 7.

- CKPT is responsible for updating the headers of all Oracle data files at the time of a
checkpoint.

In Oracle 7 this CKPT task may be performed by LGWR, but if there are many data files in
the database then LGWR performance may be reduced. In such a case, the CKPT process can be
enabled to perform this task through the use of the CHECKPOINT_PROCESS parameter in
INIT.ORA.

NOTE : The CKPT process does not actually write the updated header records to disk; that is still the
responsibility of the DBWR process. In high-activity databases, CKPT can be used to handle
writing log sequence numbers to the data file headers and control file, relieving LGWR of that
responsibility.

The Archiver Process (ARCH): -

- The ARCH process is used to copy the contents of an online log file to another location,
typically a disk file, when that log file becomes full.
- When all available online log files become full, the first file is reused.
- The mode of operation whereby the contents of each file are saved prior to reuse is called
archive log mode and is controlled by the ARCHIVELOG parameter in the ALTER
DATABASE statement.
- The ARCH process runs only when the instance is running in archive log mode.

The Recoverer Process (RECO): The RECO process is used by the Oracle distributed transaction
facility to recover from failures involving distributed transactions by:

- Connecting to other databases involved in the distributed transaction.


- Removing rows corresponding to any in-doubt transaction from each database’s
transaction table.

If a database is not available when RECO attempts to connect, it automatically attempts to
connect again after waiting a predetermined interval.

Lock Process (LCKn) – One or more lock processes (named LCK0, LCK1,………) are used by
Oracle running with the Parallel Server Option to provide inter-instance locking. For most
Oracle instances, a single process, LCK0 is sufficient.

Some other processes are

Snnn -- The Shared Server Process


Dnnn -- The Dispatcher Process
Pnnn -- The Parallel Query Process
QMNn -- The Queue Monitor Process
SNPn -- The Job Queue Process
Oracle Disk Utilization Structures

There are two views through which the DBA can understand the storage structures of Oracle.

1. The DBA sees the disk utilization of the Oracle database as consisting of logical data
structures. These structures include tablespaces, segments and extents.
2. Through another view, the DBA sees the physical database files that store these
logical database structures.

Logical View : tablespaces, segments, extents, data blocks
Physical View : datafiles

Logical :

1. Tablespace: A tablespace is a logical database structure that is designed to store other
logical database structures.
2. Segment: A segment is a logical data object that stores the data of a table, index or series of
rollback entries.
3. Extent: An extent is similar to a segment in that an extent stores information corresponding
to a table.

However, the difference is that an extent handles table growth: when the row data for a
table exceeds the space allocated to it by the segment, the table acquires an extent to place the
additional data in.

Oracle Blocks:

When a data file is assigned to a tablespace, Oracle formats the file into blocks for the
actual storage of data. The size of each Oracle block is determined by the DB_BLOCK_SIZE
parameter. The Oracle block is divided into three distinct segments :

• The fixed block header
• The variable block header
• The data storage area (rows R1, R2, ... Rn, followed by free space)
FIXED BLOCK HEADER
The fixed block header will be exactly the same size for every block of a table,
regardless of the Oracle block size.

The size of the fixed block header is 57 + (23 x INITRANS) bytes,

where INITRANS is the value specified for the INITRANS parameter in the
CREATE TABLE statement when the table is created (the default value is 1).

VARIABLE BLOCK HEADER

This immediately follows the fixed block header in the block and is dependent on the
no. of rows stored in the block.

Its size is 4 + (2 x no. of rows in the block) bytes.

The initial 4 bytes contain the table directory, and there is a 2-byte row directory entry for each
stored row.
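As a worked illustration of the two header formulas above (INITRANS 1 and a block holding
50 rows are example values only):

Eg : Fixed block header = 57 + (23 x 1) = 80 bytes
     Variable block header = 4 + (2 x 50) = 104 bytes
     Total header overhead for the block = 80 + 104 = 184 bytes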

NOTE:

The FBH remains a constant size for every block of a particular table (80 bytes for most tables
in reality, unless INITRANS is specified); the VBH is dynamic and depends on the no. of rows
stored.

Large block sizes may be more efficient.

The reason : e.g., a database having an Oracle block size of 2048 (2K) will lose about
3.9% of each block (57 + 23 = 80 bytes) to the FBH, while a database having an 8K block size
will lose only 0.97% of each block.

More, smaller rows in a table will result in less available space in the remainder of the
block than will fewer, larger rows.

Data Storage Area : The balance of the block is used for data storage, for example to store the
actual rows of a table. The calculation of the available storage in each block is not
straightforward, since it is dependent on several factors including :

• Oracle DB_BLOCK_SIZE
• Percent free space (PCTFREE)
• Average row length
• No. of rows stored per block
The average row length can be estimated as

3 bytes row header
+ 1 byte per non-LONG column
+ 3 bytes per LONG column
+ average length of all table columns

The header space per row can be calculated as

3 + (number of non-LONG columns) + 3 x (no. of LONG columns)

The no. of rows per block can then be calculated as

rows per block = ( (blocksize - (57 + 23 x INITRANS))
                 - ((blocksize - (57 + 23 x INITRANS)) x (PCTFREE / 100))
                 - 4 - (2 x rows per block) ) / (avg. row length + header)

Finally, the available space in the block can be calculated as

available space = (blocksize - (57 + 23 x INITRANS))
                - ((blocksize - (57 + 23 x INITRANS)) x (PCTFREE / 100))
                - 4 - (2 x rows per block)

These calculations do not take into account the possibility of trailing NULLs, LONG strings
and so on, which may have an impact on the exact figures.
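As a worked illustration of these formulas (2K blocks, INITRANS 1, PCTFREE 10 and an
average row length of 100 bytes including the per-row header are example values only):

Eg : blocksize - (57 + 23 x INITRANS) = 2048 - 80 = 1968 bytes
     space reserved by PCTFREE = 1968 x (10 / 100) = 197 bytes (approx.)
     usable space = 1968 - 197 - 4 = 1767 bytes
     rows per block = 1767 / (100 + 2 bytes row directory) = 17 rows (approx.)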

EXTENT: An extent is an amount of storage that is rounded up to the next multiple of the Oracle
block size.

Storage Allocation Parameters :

A typical storage clause contains

STORAGE (INITIAL n NEXT n MINEXTENTS n PCTINCREASE p)

INITIAL: This defines how much space will be allocated to the first extent when an object is
created. This may be specified in bytes, K (kilobytes) or M (megabytes).

EG : INITIAL 1048576
INITIAL 1024K
INITIAL 1M

The default value for INITIAL is the INITIAL value established in the DEFAULT
STORAGE for the tablespace in which the object is being created.
NOTE: Extent sizes should be specified as integer multiples of the Oracle block size; otherwise
the allocation is rounded up to the next Oracle block. For eg., with a 2K (2048) block size, a
request for INITIAL 4097 results in three Oracle blocks being allocated.

NEXT: The NEXT parameter in the storage clause determines how much space will be allocated
for the second and subsequent extents allocated for an object. The NEXT parameter is specified
in the same way as the INITIAL parameter.

Although this parameter may be set to a specific value, that value may be modified
dynamically if the PCTINCREASE parameter is set to a non-zero value.

The default value for NEXT is the NEXT value established in the DEFAULT
STORAGE for the tablespace in which the object is being created.

MINEXTENTS:

The MINEXTENTS parameter in the STORAGE clause determines how many storage
extents will be allocated to the object at the time of initial creation. Typically this parameter is
set to 1, but if it is set to a value greater than 1, the second and subsequent extents will use the
NEXT parameter and will be subject to dynamic modification if PCTINCREASE is set to a non-
zero value.

The default value for MINEXTENTS is the MINEXTENTS value established in the
DEFAULT STORAGE for the tablespace in which the object is being created.

PCTINCREASE:

The PCTINCREASE parameter in the STORAGE clause determines the degree to which
Oracle will automatically increase the size of subsequent extent allocations.

This value is expressed as an integer percentage and is applied to the then current value
of NEXT parameter.

Eg : If an object is created with INITIAL 81920, NEXT 81920 and PCTINCREASE 10,
this means that after each new extent is allocated, Oracle will dynamically increase the
value of the NEXT parameter for that object by 10%. See the following table:

EXTENT     Size in Bytes   Size in Blocks   Comments
Initial    81920           40               Uses INITIAL parameter
2nd        81920           40               Uses specified NEXT parameter
3rd        90112           44               NEXT increased by 10%
4th        100352          49               NEXT increased by another 10% and rounded to next block

NOTE : If the default PCTINCREASE for a tablespace is set to 0, Oracle will not automatically
coalesce smaller but contiguous extents back into larger extents in that tablespace.

As a result, Oracle may be unable to allocate a particular extent even when sufficient
contiguous blocks are available. Setting PCTINCREASE to a value of 1 will overcome this
problem.

A space Allocation Example :

Consider the SQL statement:

CREATE TABLE dept
( deptno    NUMBER(4),
  deptname  VARCHAR(30),
  location  VARCHAR(20) )
STORAGE (INITIAL 8K NEXT 4K MINEXTENTS 1 PCTINCREASE 10)
PCTFREE 10
TABLESPACE users;

Oracle will allocate space for the DEPT table as follows :

1. Oracle will look in the free space pool for 8k of space to assign as the INITIAL extent for
the table. Assuming a database with 4k-block size, two blocks would be allocated.
2. After providing space for block headers, 10% of remaining space would be reserved for
growth of data rows, as indicated by the PCTFREE 10 parameter.
3. Once the INITIAL extent is filled (in this case, both blocks are filled) oracle looks to the
NEXT extent parameter to determine how much additional space to allocate.

In this case a value of 4K is specified. Since the database has a block size of 4K, a single
block will be allocated and added to the table's space allocation. After this extent is
allocated, the value of NEXT will be dynamically increased to 4506, because
PCTINCREASE is set to 10 (i.e. 4096 x 1.10).
4. When the 3rd block is filled, additional space is required. This time Oracle allocates two
blocks, since the current value of NEXT is 4506 and this value is rounded up to the next
Oracle block.

Object Storage Sizing:

Free Lists: When an object (e.g. a table) is created, one or more Oracle blocks are allocated to
that object (here each allocation is called an extent).

Storage is allocated according to the current STORAGE parameters.

Oracle maintains a list of blocks available in each tablespace called the “free block list”.
As blocks are added to a table, either through the allocation of an additional extent or by deleting
data from an existing block, they are added to the end of the free block list.

Since Oracle allocates data to blocks by searching for a free block starting at the
beginning of the free block list, these newly freed or allocated blocks will be the last blocks used.

THE HIGH WATER MARK:

For each object, oracle also maintains a record of the highest relative block of the table
used to hold data. This high water mark is maintained in multiples of five blocks and is not reset
unless the TRUNCATE command is executed.
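A minimal illustration of the difference (dept is the example table used earlier):

SQL> DELETE FROM dept;      -- rows removed, high water mark unchanged
SQL> TRUNCATE TABLE dept;   -- space deallocated, high water mark reset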
MANAGING THE PHYSICAL DATABASE STRUCTURE :

• Accessing and updating data


• Managing transaction concurrency
• Managing the database structure
• Managing storage allocation.

Accessing and updating data:


- Steps in SQL statement processing
- The benefits of the shared SQL area
- The function and contents of buffer cache
- Role of the server process
- Role of the DBWR process
- Events triggering DBWR activity

The flow of operation in processing a SQL statement is as follows :

- Oracle opens the statement: Oracle first obtains a cursor or memory address in the library
cache of the shared pool, where the statement will execute. Then oracle obtains some
space in the PGA called a private SQL area, where the statement return values will be
stored.

- Oracle parses the statement: Oracle creates a parse tree or execution plan for the
statement and places it in the shared pool. The execution plan is a list of operations used
to obtain data.

- Oracle creates bind variables: For SELECT statements, bind variables for return values are
created in the parsed SQL statement. This allows Oracle to share the parsed operation, but
not the data, in the shared SQL area in the library cache of the shared pool. For update, insert
and delete commands this step is omitted.

- Oracle will execute the statement:

At this point, Oracle performs all processing to complete the specified operation.
Row Migration: If PCTFREE is set to a low value, there may be insufficient space in a block to
accommodate a row that grows as a result of an update. When this happens, the Oracle server
moves the entire row to a new block and leaves a pointer from the original block to the new
location. This is known as row migration.

When a row is migrated, I/O performance associated with this row decreases because the
Oracle server must scan two data blocks to retrieve the row.

Row Chaining: This occurs when a row is too large to fit into any block. This might occur when
the row contains columns that are very long. In this case, the Oracle server divides the row into
smaller chunks called row pieces.

Each row piece is stored in a block along with the necessary pointers to retrieve and
assemble the entire row. Row chaining can be minimized by choosing a higher block size or by
splitting the table into multiple tables with a smaller number of columns, if possible.
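Migrated and chained rows can be located with the ANALYZE command. A minimal sketch,
assuming the CHAINED_ROWS table has already been created by running the utlchain.sql
script:

SQL> ANALYZE TABLE dept LIST CHAINED ROWS INTO chained_rows;
SQL> SELECT owner_name, table_name, head_rowid FROM chained_rows;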
MANAGING INDEXES

Classification of indexes :

- Logical
• Single column or concatenated
• Unique or non unique

- Physical
• Partitioned or non partitioned
• B-tree or bitmap
- Normal or reverse Key (B-tree only)

Single column or concatenated Indexes :

- A single column index has only one column in the index key, for ex. an index on the
employee number column of an employee table.

- A concatenated index, also known as a composite index, is created on multiple columns in a
table. Columns in a concatenated index do not need to be in the same order as the columns in the
table, nor do they need to be adjacent – for ex. an index on the department and job columns of an
employee table.

NOTE : The maximum number of columns in a composite key index is 32. However, the combined size of
all the columns cannot exceed roughly one-third of the data block size.

B-TREE INDEX :

[Diagram: an index entry consists of a key column value and a ROWID]

Although all indexes use a B-tree structure, the term B-tree index is usually associated
with an index that stores a list of ROWIDs for each key.
Structure of a B-Tree Index:

At the top of the index is the root, which contains entries that point to the next level in the
index. At the next level are branch blocks, which in turn point to blocks at the next level in the
index. At the lowest level are the leaf nodes, which contain the index entries that point to rows
in the table. The leaf blocks are doubly linked to facilitate scanning the index in ascending as
well as descending order of key value.

Format of Index Leaf Entries:

• An entry header, which stores the number of columns and locking information.
• Key column length-value pairs, which define the size of a column in the key followed
by the value for the column (the number of such pairs is at most the number of columns in the
index).
• The ROWID of a row that contains the key values.
The Oracle Server

Database Users can connect to the Oracle server by:

1. Logging directly on to the host
2. Using a two-tiered (client-server) connection
3. Using a three-tiered connection

If the instance is not available, connections fail with Oracle error ORA-01034: ORACLE not
available.

PERFORMANCE TUNING

TUNING MEMORY ALLOCATION

- Proper sizing of these structures greatly improves database performance

> Understanding Memory Allocation Issues.


> Detecting memory allocation requirements
> Solving memory allocation problems

 tuning operating system memory requirements


 tuning the redo log buffer
 tuning private SQL and PL/SQL areas
 tuning shared pool
 tuning buffer cache
 tuning multiple buffer pools
 tuning sort areas
 reallocating memory
 reducing total memory usage.

Understanding Memory Allocation Issues :

- For best performance, store as much data as possible in memory.

- Tuning memory allocation involves distributing available memory to Oracle memory
structures.
- Oracle memory requirements depend on your application.
- So tune memory allocation after tuning the application and SQL statements; this may
require resizing some Oracle memory structures to meet the needs of your modified statements
and application.
- Also tune memory allocation before tuning I/O. Allocating memory establishes the
amount of I/O necessary for Oracle to operate.
Detecting memory allocation problems :

When operating system tools such as ps -ef or ps aux are used to examine the size of
Oracle processes, it can be noticed that the processes seem large. To interpret the statistics
shown, determine how much of the process size is attributable to the shared memory heap and
executable stack, and how much is the actual amount of memory the given process consumes.

The SZ statistic is given in units of page size (normally 4 KB) and normally includes the
shared overhead.

To calculate the private, or per-process, memory usage, subtract the shared memory and
executable stack figures from the value of SZ.

Eg. SZ                         20,000
    (-) SHM                   -15,000
    (-) executable             -1,000
                              -------
    Actual per-process memory   4,000

i.e. the individual process consumes only 4,000 pages; the other 16,000 pages are shared
by all processes.

Solving memory allocation problems :

For best results, resolve memory issues in the following order

⇒ Tuning operating system memory requirements


⇒ Tuning the redo log buffer
⇒ tuning private SQL and PL/SQL areas
⇒ tuning shared pool
⇒ tuning buffer cache
⇒ tuning multiple buffer pools
⇒ tuning sort areas
⇒ reallocating memory
⇒ Reducing total memory usage.
Tuning OS Memory Requirements :

Begin this tuning with the goals

⇒ Reducing paging and swapping


⇒ Fitting the SGA into main memory
⇒ Allocating adequate memory to individual users

The above goals apply in general to most operating systems, but the details of tuning an OS
vary.

Reducing paging and swapping :

The operating system may store information in these places :

⇒ Real memory
⇒ Virtual memory
⇒ Expanded storage
⇒ Disk

The operating system may also move information from one storage location to another.
This process, known as “paging” or “swapping”, moves information that does not fit into real
memory. However, excessive swapping or paging can reduce the performance of many
operating systems.

- Monitor the OS behavior with operating system utilities.

- Excessive paging or swapping indicates that new information is often moved into
memory. It indicates that the total memory may not be large enough to hold everything for
which the memory is allocated.
- Either increase the total memory on your system or decrease the amount of memory
allocated.

FITTING THE SGA INTO MAIN MEMORY:

- Because the purpose of the SGA is to store data in memory for fast access, the SGA
should always be within main memory.
- If pages of the SGA are swapped to disk, its data is no longer quickly accessible. On
most operating systems the disadvantage of excessive paging significantly outweighs the
advantage of a large SGA.

- Although it is best to keep the entire SGA in memory, the contents of the SGA will be
split logically between “hot” & “Cold” parts.

- The hot parts are always in memory because they are always referenced.
- Some cold parts may be paged out and a performance penalty may result
from bringing them back in.
- A performance problem likely occurs, however, when the hot part of the
SGA cannot remain in memory.

- Data is swapped to disk because it is not referenced. You can cause Oracle to read the
entire SGA into memory when you start your instance by setting the value of the
initialization parameter PRE_PAGE_SGA.

- Operating system page table entries are then prebuilt for each page of the SGA. This
setting may increase the amount of time necessary for instance startup, but it is likely to
decrease the amount of time necessary for Oracle to reach its full performance capacity
after startup.

NOTE : This setting does not prevent operating system from paging or swapping of SGA after
it is initially read into memory.

By issuing the SQL*Plus command SHOW SGA, we can see how much memory is allocated to
the SGA and each of its internal structures.

Eg . SHOW SGA gives output such as:

Total System Global Area   18847360 bytes
Fixed Size                    63104 bytes
Variable Size              14155776 bytes
Database Buffers            4096000 bytes
Redo Buffers                 532480 bytes

Expanded storage : Some IBM mainframe operating systems have expanded storage, or special
memory in addition to main memory, to which paging can be performed very quickly. The O/S
may be able to page data between main memory and expanded storage faster than Oracle can
read and write data between the SGA and disk.

For this reason allowing a larger SGA to be swapped may lead to better performance than
ensuring that a smaller SGA remains in main memory. If your OS has expanded storage, take
advantage of it by allocating a larger SGA despite the resulting paging.

Allocating adequate memory to individual users :

On some O/S you may have control over the amount of physical memory allocated to
each user. Be sure all users are allocated enough memory to accommodate the resources they
need to use their application with Oracle.

Depending on your O/S these resources may include

⇒ The Oracle executable image


⇒ The SGA
⇒ Oracle application tools
⇒ Application Specific data

On some O/S, Oracle software can be installed so that a single executable image can be
shared by many users. By sharing executable images among users, you can reduce the amount of
memory required by each user.

TUNING THE REDO LOG BUFFER :

The LOG_BUFFER parameter reserves space for a redo log buffer that is fixed in size. On
machines with fast processors and relatively slow disks, the processors may be filling the rest of
the buffer in the time it takes the redo log writer to move a portion of the buffer to disk. The log
writer (LGWR) always starts writing when the buffer begins to fill. For this reason, a larger
buffer makes it less likely that new entries collide with the part of the buffer still being written.

The log buffer is normally small when compared with the total SGA size, and a modest
increase can significantly enhance throughput.

- A key ratio is the space request ratio:

                          redo log space requests
  space request ratio  =  -----------------------
                               redo entries

If this ratio is greater than 1:5000, increase the size of the redo log buffer until the
space request ratio stops falling.
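The two statistics in this ratio can be read directly from V$SYSSTAT; a minimal sketch:

SQL> SELECT name, value FROM v$sysstat
     WHERE name IN ('redo log space requests', 'redo entries');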

TUNING PRIVATE SQL AND PL/SQL AREAS:

This consists of

⇒ Identifying unnecessary parse calls
⇒ Reducing unnecessary parse calls

- A trade-off exists between memory and reparsing.

- With significant amounts of reparsing, less memory is needed.
- If creating more SQL statements reduces reparsing, then client memory
requirements increase. This is due to an increase in the no. of open cursors.

- Tuning private SQL areas entails identifying unnecessary parse calls made by your
application and then reducing them. To reduce parse calls, you may have to increase the number of
private SQL areas that your application can have allocated at once.
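One way to spot statements that are parsed repeatedly is to compare parse calls with executions
in V$SQLAREA. A minimal sketch (the threshold of 100 is an arbitrary example):

SQL> SELECT sql_text, parse_calls, executions
     FROM v$sqlarea
     WHERE parse_calls > 100
     ORDER BY parse_calls DESC;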
TUNING THE SHARED POOL:

This section explains how to allocate memory for key memory structures of the shared
pool. Structures are listed in order of importance for tuning.

⇒ TUNING THE LIBRARY CACHE
⇒ TUNING THE DATA DICTIONARY CACHE
⇒ TUNING THE LARGE POOL AND SHARED POOL FOR THE MTS ARCHITECTURE
⇒ TUNING RESERVED SPACE FROM THE SHARED POOL

NOTE: If you are using a reserved size for the shared pool, refer to “SHARED_POOL_SIZE
Too small”.
Oracle Alert, Trace Files & Events

Objective :

- Identify the location and usefulness of the alert log file.


- Identify the location and usefulness of the background & user process trace files.
- Retrieve and display wait events.
- Set events through OEM to be alerted about predefined situations.

Diagnostic Information :

• Trace Files

- Alert log files


- Background process Trace files
- User Trace Files

• Events

- Oracle wait events


- OEM Events

Alert Log File:

If an error occurs while your Oracle instance is running, the messages are written to the alert
log file. During startup of the database, if the alert log file does not exist, the Oracle server
creates one.

The alert log file of a database is a chronological log of messages and errors. The Oracle
server uses the alert log file as an alternative to displaying such information.

Background Process Trace File:


If a background process detects an error, the information is dumped into a trace file.

User Trace File:

Trace files can also be generated by server processes at user request to display resource
consumption during statement processing.

Oracle Wait Events:

If you are troubleshooting, you need to know when a process has waited for any
resource. Lists of wait events are present in the Oracle server.

Some dictionary views display the events for which sessions had to wait.

OEM Events:

The event manager of OEM allows systematic problem definitions by registering event
sets.

ALERT LOG FILE

- This consists of a chronological log of messages and errors.

- Check the alert log file regularly to:
o Detect internal errors (ORA-600) and block corruption errors (ORA-1578)
o Monitor database operations
o View the non-default initialization parameters
- Remove or trim it regularly after checking.

Controlling the Alert Log File :

The initialization parameter BACKGROUND_DUMP_DEST controls the location of the alert
log file.

On UNIX, the default value is $ORACLE_HOME/rdbms/log and the name of the file is
alert_<SID>.log.

On NT, the default value is %ORACLE_HOME%\Rdbms80\Trace and the name of the
file is <SID>ALRT.LOG.

CONTROLLING THE BACKGROUND PROCESS TRACE FILES

The following init.ora parameter controls the location of the background process trace
files: BACKGROUND_DUMP_DEST.

On UNIX the location is the same as the alert log and the name is <SID>_<processname>_<PID>.trc.

On NT the name is <SID><PROCESSNAME>.TRC.


USER TRACE FILES :

User trace files can also be generated by server processes at user or DBA request.

• Server process tracing is enabled or disabled at the session or instance level by

- The ALTER SESSION Command


- The SET_SQL_TRACE_IN_SESSION Procedure.
- The initialization parameter SQL_TRACE (set it to TRUE or FALSE). The
default is false.
• A user trace file contains

- Statistics for traced SQL statements for that session.


• A user trace file is useful for SQL tuning.

Session-level tracing :

The following statement enables writing to a trace file for a particular session:

SQL> EXECUTE dbms_system.set_sql_trace_in_session(8, 12, TRUE);

where 8 and 12 are the SID and SERIAL# of the connected user.

- The DBMS_SYSTEM package is created by
$ORACLE_HOME/rdbms/admin/dbmsutil.sql on UNIX.

The above script is automatically executed when catproc.sql is run.

The following statement enables writing to a trace file for the session of the
connected user:

SQL> ALTER SESSION SET sql_trace = TRUE;

Controlling the User Trace Files :

The following initialization parameters control the location and size of the user trace
files:

USER_DUMP_DEST defines where trace files will be created at the request of the user
or DBA.

MAX_DUMP_FILE_SIZE, specified in O/S blocks, limits the size of user trace files.

On UNIX the default value is $ORACLE_HOME/rdbms/log and the name of the file is
<SID>_ora_<PID>.trc.

On NT, the default value is %ORACLE_HOME%\Rdbms80\Trace and the name of the file is
ora<PID>.TRC.

The MAX_DUMP_FILE_SIZE and USER_DUMP_DEST parameters are dynamic
initialization parameters.

ORACLE WAIT EVENTS :

A collection of wait events provides information on the sessions that had to wait or must
wait for different resources.
• V$EVENT_NAME View :

- EVENT#
- NAME
- PARAMETER1
- PARAMETER2
- PARAMETER3

There are more than 100 wait events in the Oracle server. These events are listed in the
V$EVENT_NAME view.

⇒ Warning : EVENT initialization parameter

- Be careful not to confuse the wait events with the EVENT init.ora parameter, nor
with the ALTER SESSION SET EVENTS command, used to debug the system by Oracle
technical support personnel.

EG :- The actual value of a bind variable is not normally displayed in a trace file. By setting the
following event, however, it becomes possible to see it:

SQL> ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';

Level 0 is the default; 4 = binds, 8 = waits, 12 = both. The higher the level, the larger the trace
files are; level 4 should be used normally.

SQL> SELECT name, parameter1, parameter2, parameter3 FROM v$event_name;

STATISTICS EVENT VIEWS :

The statistic results of the sessions that had to wait or are currently waiting for a resource are
stored in the V$SESSION_EVENT and V$SESSION_WAIT views.

Cumulative statistics for all sessions are stored in V$SYSTEM_EVENT.

V$SYSTEM_EVENT View :

This view shows the total waits for a particular event since instance startup.

- If you are troubleshooting, you need to know when a process has waited for any resource.
Therefore, it is useful to query this view each time the system undergoes a slowdown.

It contains the following columns :

• EVENT : Name of the wait event
• TOTAL_WAITS : Total no. of waits for the event
• TOTAL_TIMEOUTS : Total no. of timeouts for the event
• TIME_WAITED : Total amount of time waited for this event, in hundredths of a second
• AVERAGE_WAIT : The average amount of time waited for this event, in hundredths of a
second

The V$SESSION_EVENT view shows the same information by session. It includes the
columns listed above, with an extra SID column to identify the session. You can join the SID
column to V$SESSION.SID to find user details.

NOTE : You can query these views directly to find out about all system waits since startup.
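A minimal sketch of such a query, listing the events with the most accumulated wait time first:

SQL> SELECT event, total_waits, time_waited
     FROM v$system_event
     ORDER BY time_waited DESC;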

V$SESSION_WAIT View : This view lists the resources for which active sessions are waiting.

Columns description :

• SID : Session identifier
• SEQ# : Sequence no. identifying the wait
• EVENT : Resource or event waited for
• P1TEXT : Description of the first additional parameter, which corresponds to the
PARAMETER1 described for the V$EVENT_NAME view
• P1 : First additional parameter value
• P1RAW : First additional parameter value in hex
• P2TEXT : Description of the second additional parameter, which corresponds to
the PARAMETER2 described for the V$EVENT_NAME view
• P2 : Second additional parameter value
• P2RAW : Second additional parameter value in hex

Similarly, P3 has P3TEXT, P3 and P3RAW.

• WAIT_TIME :

Value   Explanation
> 0     The session's last wait time
= 0     The session is currently waiting
= -1    The value was less than 1/100th of a second
= -2    The system cannot provide timing information

• SECONDS_IN_WAIT : Number of seconds the event waited
• STATE : WAITING, WAITED UNKNOWN TIME, WAITED SHORT TIME (less
than 1/100th of a second), WAITED KNOWN TIME (the value is stored in the
WAIT_TIME column)

NOTE : Not all the parameter columns are used for all events.

TIMED_STATISTICS initialization parameter : Set the TIMED_STATISTICS parameter
to TRUE to retrieve values in the WAIT_TIME column. It is a dynamic initialization parameter.

EVENT MANAGEMENT SYSTEM

- Monitors for unusual conditions in databases, nodes and networks by creating event sets
- Automates problem detection by registering event sets
- Automates problem correction by applying fix-it jobs

eg.

1. When a database shuts down, it can generate an alert (such as an alert message sent to the
console, your pager or your e-mail).
2. When a tablespace is about to run out of free space, it can generate an alert.
3. When a particular table approaches or reaches its maximum limit, it can generate a
warning or an alert.

• Automate problem detection by registering event sets


• Automate problem correction by applying “Fix it jobs” that will be run automatically
when the event occurs.

EVENTS SETS (OEM)

• Predefined event sets :


- Through the event set library page
- The installed event sets
• Event sets created by the administrator.
Collecting System-Wide Statistics

SQL> SELECT name, class, value FROM v$sysstat;

This gives a listing of the form:

Name                Class    Value
----                -----    -----

All kinds of system-wide statistics are catalogued in the V$STATNAME view: around 180
statistics are available.

The Oracle server displays all calculated system statistics in the V$SYSSTAT view. You can
query this view to find cumulative totals since the instance started.

Class 1 refers to general instance activity
Class 2 refers to redo log buffer activity
Class 4 refers to locking
Class 8 refers to database buffer cache activity
Class 16 refers to OS activity
Class 32 refers to parallelization
Class 64 refers to table access
Class 128 refers to debugging

SGA Global Statistics :

SQL>SELECT * FROM V$SGASTAT;

General Session-Related Statistics

SQL> SELECT sid, username, type, server FROM v$session;

Eg : Determine the sessions that consume more than 30,000 bytes of PGA memory:

SQL> SELECT s.username, n.name, t.value
     FROM v$statname n, v$session s, v$sesstat t
     WHERE s.sid = t.sid
     AND n.statistic# = t.statistic#
     AND s.type = 'USER'
     AND s.username IS NOT NULL
     AND n.name = 'session pga memory'
     AND t.value > 30000;
Session Waiting Events Statistics :

SQL> SELECT sid, event FROM v$session_wait WHERE wait_time = 0;

After running the scripts utlbstat.sql and utlestat.sql, the resulting statistics report
contains:

• Library cache statistics


• System statistics
• Wait event statistics
• Latch statistics
• Rollback contention statistics
• Buffer busy wait statistics
• Dictionary cache statistics
• I/O statistics per data file / table space
• Period of measurement

Report.txt

The report generated by the utlestat.sql script contains a sequence of SELECT statements on
the difference tables:

SQL> SPOOL report.txt;
SQL> SELECT …… FROM stats$lib;

The above topics are discussed in detail later, but here is a brief discussion.

Library cache statistics

The library cache contains shared SQL and PL/SQL areas. Tuning this area means reducing
misses on either the parse or execute steps in the processing of a SQL or PL/SQL statement,
whenever you detect that library cache misses affect the performance of Oracle.

System statistics

This section of the report provides, for each system-wide statistic, the total number of
operations, the number per transaction and the number per logon. This helps you tune several
areas.

Ex :

Statistic                         Total   Per Transac   Per Logon
DBWR checkpoint buffers written   219     .46           9.95
DBWR checkpoint write requests    159     .33           7.23

DBWR checkpoints indicate the no. of checkpoint messages that were sent to DBWR.
The increase in I/O during a checkpoint can result in a decrease in performance.

Reduce the number and/or frequency of checkpoints by increasing the init.ora parameter
LOG_CHECKPOINT_INTERVAL.

However, be aware that infrequent checkpoints increase database recovery time.

Ex 2 : Consistent gets, db block gets, physical reads

Statistic         Total    Per Transac   Per Logon
Consistent gets   559715   1168.51       25441.59
Db block gets     9949     20.77         452.23
Physical reads    419280   875.32        19058.18

Consistent gets is the no. of blocks accessed in the buffer cache for queries without the FOR
UPDATE clause.

DB block gets is the no. of blocks accessed in the buffer cache for INSERT, UPDATE and
SELECT FOR UPDATE statements.

Physical reads is the no. of requests for a block that caused a physical I/O.

You calculate the hit ratio to detect whether the size of the database buffer cache is large
enough, or whether it cannot keep often-read blocks in memory.
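Using the figures from the example above, the conventional calculation would be:

hit ratio = 1 - physical reads / (db block gets + consistent gets)
          = 1 - 419280 / (9949 + 559715)
          = 1 - 0.74 = 0.26 (approx.)

A hit ratio this low would normally suggest enlarging the database buffer cache.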

Wait event statistics :

Each system wait event is a context switch that costs CPU time. By looking at the total
time, you can often determine the bottleneck that processes are waiting for.

Latch statistics

Oracle uses latches to protect access to internal structures, such as the library cache for
shared cursors or the LRU list for data buffers in the buffer cache.

Tuning the latch area consists of reducing contention for latch allocation.

Rollback contention statistics

If transactions have to wait to get a slot in a rollback segment header before continuing,
performance decreases.

This section helps you in determining contention for undo headers.


Buffer busy wait statistics

This small section indicates whether the “buffer busy wait” wait event is high and, if so, which
class of blocks has high contention: data block, segment header or undo header.

Dictionary cache statistics

Every SQL or PL/SQL statement implies access to dictionary objects and therefore to
the dictionary cache. Misses in the dictionary cache cause an increase in I/O and a
corresponding decrease in performance.

This section displays the gets and misses for each type of item cached.

I/O statistics per data file / tablespace

This section displays how file I/O is spread across multiple disk drives, by counting the
number of physical reads/writes, the number of physical blocks read/written and the amount of
time spent on these operations for each data file and tablespace.

Period of measurement

This section displays the time when utlbstat started to collect the begin statistics and
when utlestat started to collect the end statistics.
TUNING CONSIDERATIONS FOR DIFFERENT APPLICATIONS :

Objectives :

• Use the available data access methods to tune the logical design of the database.
• Identify the demands of online transaction processing systems (OLTP)
• Identify the demands of decision support (DSS) systems
• Reconfigure systems on a temporary basis for particular needs.

OVERVIEW :

1. Data Design phase


2. Logical structure of the database

Data Design phase

The database design process undergoes a normalization stage in which data is analysed
to ensure that no redundant data is held anywhere. However, you may need to denormalise it for
performance reasons.

Another consideration is the avoidance of contention on data. E.g., consider a database 1
terabyte in size on which a thousand users access only 0.5% of the data. This hotspot could cause
performance problems.

Logical structure of the database : This primarily concerns:

The index design, ensuring that the data is neither over- nor under-indexed (this does not
cover the PK indexes, designed in the previous stage, but the additional indexes needed to
support the application).

The correct use of different types of indexes:

- B-tree
- Bitmap
- Reverse

The appropriate use of sequences, clusters and index-organized tables
The necessity of histogram collection for the CBO
The use of parallelised queries
The optional use of partitioning data

OLTP :

• These are high-throughput, insert/update-intensive systems
• They contain large volumes of data that
- grow continuously
- are accessed concurrently by hundreds of users
• The tuning goals are

- Availability
- Speed
- Concurrency
- Recoverability

DSS

- Perform queries on large amounts of data
- Make heavy use of full table scans

The tuning goal is

- Fast response time

When you design a DSS you must ensure that queries on large amounts of data can be
performed within a reasonable time.

The parallel query option is particularly designed for DSS.

Multipurpose Applications :

• A combination of OLTP and DSS
• Hybrid systems rely on several configurations

Data Access Method :

To enhance performance, you can use the following data access methods:

⇒ Indexes
- Bitmap
- Reverse key
- B-tree
⇒ Index-organized tables
⇒ Clusters
⇒ Histograms
B-Tree Index :

When to use B-tree indexes:

B-tree indexes improve the performance of queries that select a small percentage of
rows from a table. As a general guideline, you should create indexes on tables that are often
queried for less than 10% to 15% of the table's rows. This value may be higher in situations
where all data can be retrieved from an index, or where the indexed columns can be used for
joining to other tables.

INDEX-ORGANIZED TABLES :

An IOT is like a regular table with an index on one or more of its columns, but instead of
maintaining two separate segments for the table and the B-tree index, the database system
maintains one single B-tree index which contains both:

• The primary key value
• The other column values for the corresponding row

Performance benefits :

- There is no duplication of the values for the primary key column (index
and table column in indexed tables), so there are lower storage requirements.
- Index-organized tables provide faster key-based access for queries
involving exact match or range search or both.

When to create index-organized tables:

IOTs are suitable for frequent data access through the primary key or through any key that is a
prefix of the primary key, such as in applications using inverted indexes for text searches.
These indexes keep a value and all its locations together; each word has one entry
that records all the places where the word occurs. Index-organized tables are very
efficient mechanisms for modeling inverted indexes.

Row Overflow :

PCTTHRESHOLD CLAUSE
INCLUDING CLAUSE
OVERFLOW CLAUSE

Syntax for Index_organized Tables

SQL> CREATE TABLE scott.sales
     ( office_cd  NUMBER(3),
       qtr_end    DATE,
       revenue    NUMBER(10,2),
       review     VARCHAR2(1000),
       CONSTRAINT sales_pk PRIMARY KEY (office_cd, qtr_end) )
     ORGANIZATION INDEX TABLESPACE indx
     PCTTHRESHOLD 20
     INCLUDING review
     OVERFLOW TABLESPACE user_data;

PCTTHRESHOLD CLAUSE :

This clause specifies the percentage of space reserved in the index block for an
index-organized table row. If a row exceeds the size calculated based on this value, all columns
after the column named in the INCLUDING clause are moved to the overflow segment. If
OVERFLOW is not specified, then rows exceeding the threshold are rejected. PCTTHRESHOLD
defaults to 50 and must be a value from 0 to 50.

Including clause : This clause specifies a column at which to divide an index-organized table
row into index and overflow portions. All columns that follow the INCLUDING keyword are
stored in the overflow data segment. If this is not specified and a row's size exceeds
PCTTHRESHOLD, all columns except the primary key columns will be moved to the overflow
area. The column is either the name of the last primary key column or any non-primary-key
column.

OVERFLOW CLAUSE AND SEGMENT : This clause specifies that index-organized table data
rows exceeding the specified threshold are placed in the data segment defined by the segment
attributes, which specify the tablespace, storage and block utilization parameters.

DICTIONARY VIEWS ON IOTs

SQL> SELECT table_name, tablespace_name, iot_name, iot_type FROM dba_tables;
SQL> SELECT index_name, index_type, tablespace_name, table_name FROM dba_indexes;
SQL> SELECT segment_name, tablespace_name, segment_type FROM dba_segments;

CLUSTERS

Definition : A cluster is a group of one or more tables that share the same data blocks because
they share common columns and are often used together in join queries.

Performance Benefits :

- Disk I/O is reduced and access time is improved for joins of clustered tables.
- Each cluster key value is stored once for all the rows of the same key value; therefore it
uses less storage.

Performance Consideration : Full table scans are generally slower on clustered tables than on
non-clustered tables.
Cluster Type :

- Index cluster
- Hash cluster

Index cluster : An index cluster uses an index, known as the cluster index, to maintain the data within
the cluster.

• The cluster index must be available to store, access or maintain data in an index cluster.
• The cluster index is used to point to the block that contains the rows with a given key
value.
• The structure of a cluster index is similar to that of a normal index. Although a normal
index does not store a NULL key value, cluster indexes store NULL keys. There is only
one entry for each key value in the cluster index; therefore a cluster index is likely to be
smaller than a normal index on the same set of key values.
• To store or retrieve rows from a cluster, the Oracle server uses the cluster index to locate
the first row that corresponds to the given key value and then retrieves the rows for the given
key.
• If several rows in an index cluster have the same cluster key, the cluster key is not
repeated for each row. In a table with a large no. of rows per key value, use of an index
cluster may reduce the amount of space needed to store data.

Hash cluster : A hash cluster uses a function to calculate the location of a row. The hash
function uses the cluster key and can be either user-defined or system-generated.

- When a row is inserted into a table in a hash cluster:

• The hash key columns are used to compute a hash value.
• The row is stored based on the hash value.

The hash function is used to locate the row when retrieving the data from a hashed table.

For equality searches that use the cluster key, a hash cluster can provide greater
performance gains than an index cluster:

• Only one segment to scan
• Very fast access to the row, with the hash function providing the row address
immediately.
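For illustration, a minimal sketch of creating a hash cluster and placing a table in it (the names
and the SIZE/HASHKEYS values are example figures only):

SQL> CREATE CLUSTER personnel (deptno NUMBER(4))
     SIZE 512 HASHKEYS 500;

SQL> CREATE TABLE emp_in_cluster
     ( empno  NUMBER(4),
       ename  VARCHAR2(10),
       deptno NUMBER(4) )
     CLUSTER personnel (deptno);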

When not to use clusters :

• A full scan is often executed on one of the clustered tables. Such a table is stored on
more blocks than if it had been created alone.
• The data from all tables with the same cluster key value exceeds more than one
or two Oracle blocks; to access a row in a clustered table, Oracle reads all blocks
containing rows with the same value.
• Partitioning is not compatible with clustering.

When not to use hash clusters :

• If the table is constantly growing and if it is impractical to rebuild a new large hash
cluster.
• If your application often performs full table scans and you had to allocate a great deal of
space to the hash cluster in anticipation of the table growing.

OLTP Requirements :

- Space Allocation :

• Avoid the performance load of dynamic space allocation allocate space explicitly to
tables clusters and indexes.
• Check growth patterns regularly to find the rate at which extents are being allocated so
that you can plan extents creation.

- Indexing :

• Indexing is critical to data retrieval in OLTP systems. DML statements on indexed
tables need index maintenance, and this is a significant performance overhead, so your
indexing strategy must be closely geared to the real needs of the application.
• Indexing a foreign key helps child data to be modified w/o locking the parent data.
• B-Tree indexing is preferred to bitmap indexing because of locking issues affecting
DML operations : when a B-Tree index entry is locked, a single row is locked.
• Reverse key indexes avoid frequent B-Tree block splits for sequence columns. They
also make Oracle Parallel Server applications faster (see the sketch below).
• You need to rebuild indexes regularly.
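A minimal sketch of the reverse key index and the regular rebuild mentioned above (index and table names are illustrative):

-- key bytes are stored reversed, spreading sequential values across leaf blocks
CREATE INDEX scott.ord_no_idx ON scott.ord (ord_no) REVERSE;

-- periodic rebuild
ALTER INDEX scott.ord_no_idx REBUILD;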
OLTP Requirements : Roll back segments :

Short Transactions :

Transactions are likely to be short, which has consequences for Roll back segment
configuration.

• They are unlikely to run out of rollback segment space.
• They need enough rollback segments to prevent contention for transaction tables.

To get the correct number of roll back segments, you need to know about the transaction
pattern. For ex. Consider a system with the following characteristics

• 170 users logged on


• each user executing three transactions per minute, on average
• each transaction lasting one second on average

There is only a very small mathematical probability that at any given time more than 8
transactions are active. So 8 roll back segments are enough.

The roll back segment extent size can be relatively small (10k if these small transactions
are the only ones on the system)

You will need to set MINEXTENTS to at least 10 for small databases and 20 for large
databases because

• Dynamic extension of rollback segments is just as much of a performance issue as
dynamic extension of tables.
• It also reduces the risk of hitting extents in use when wrapping rollback segments.

OLTP Application Issues.

- Use constraints instead of application code
- Make sure that code is shared
- Use bind variables rather than literals

Integrity constraints :

If there is a choice between keeping application logic in procedural code or using declarative
constraints, bear in mind that constraints are always less expensive to process. Referential
integrity and CHECK constraints are the main types to consider here (a sketch follows).
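A minimal sketch of declaring such rules instead of coding them (table and constraint names are illustrative):

-- referential integrity enforced by the server, not by application code
ALTER TABLE scott.emp
      ADD CONSTRAINT emp_deptno_fk FOREIGN KEY (deptno)
      REFERENCES scott.dept (deptno);

ALTER TABLE scott.emp
      ADD CONSTRAINT emp_sal_ck CHECK (sal > 0);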

Shared Code :
Make certain that code is shared by using stored procedural objects, such as
packages, procedures and functions.
Bind Variables :

You want to keep the overhead of parsing to a minimum. Try to ensure that the application code
uses bind variables rather than literals, as contrasted below.
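An illustrative contrast (table and values are hypothetical):

-- literals: each distinct value produces a new statement to parse
SELECT ename FROM scott.emp WHERE empno = 7369;
SELECT ename FROM scott.emp WHERE empno = 7499;

-- bind variable: one shared cursor in the library cache
SELECT ename FROM scott.emp WHERE empno = :empno;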

DSS Requirement

Storage Allocation :

• Set DB_BLOCK_SIZE to the maximum
• Set DB_FILE_MULTIBLOCK_READ_COUNT carefully
• Ensure that extent sizes are multiples of this number
• Run ANALYZE regularly

1. DB_BLOCK_SIZE

You should normally set the DB_BLOCK_SIZE parameter to the maximum value
supported for your platform. Even if this means recreating a large database, it almost
certainly pays off, because a large block size facilitates the read-intensive operations that are
characteristic of DSS applications.

2. DB_FILE_MULTIBLOCK_READ_COUNT parameter :

It determines, during full table scans, how many database blocks are read with a single
operating system read call (for example, 8 for an 8K block size, 4 for a 16K block size).

A large value gives a cheaper table scan cost and favours table scans over indexes.
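A minimal init.ora sketch under these guidelines (values are illustrative only, assuming a 64K OS I/O size):

# 8K blocks; 8 blocks per read = one 64K read call
DB_BLOCK_SIZE = 8192
DB_FILE_MULTIBLOCK_READ_COUNT = 8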

DSS Requirement continued :

• Indexing

- Evaluate the need for indexes
- use bitmap indexes when possible
- use index-organized tables for large data retrieval by primary key
- generate histograms for indexed data that is distributed non-uniformly
• Clustering
- Hash clusters for performance access
• Partitioning

Indexing : Consider how you can minimize the space and performance overhead of index
maintenance. Since most queries use full table scans, you could :

• Dispense with indexes altogether
• Maintain them only for the few tables that are accessed selectively
• Regularly generate histograms for indexed data that is distributed non-uniformly
• Choose bitmap indexes for queries on columns with few distinct values (see the sketch below)
They offer much faster retrieval access
For bulk inserts and updates, you must set the sorting init.ora parameters appropriately :
SORT_AREA_SIZE, BITMAP_MERGE_AREA_SIZE,
CREATE_BITMAP_AREA_SIZE
• Use index-organized tables for faster key-based access to table data, for queries
involving exact match or range search and for complete row data retrieval
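Minimal sketches of the two options above (table, column and index names are illustrative):

-- bitmap index on a low-cardinality column
CREATE BITMAP INDEX scott.sales_region_bix ON scott.sales (region);

-- index-organized table: the table data is stored in the primary key index itself
CREATE TABLE scott.sales_summary
       (sale_id NUMBER PRIMARY KEY, total NUMBER)
       ORGANIZATION INDEX;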

Clustering : Both types of clusters are to be considered, especially hash clusters for their best
access performance, excluding tables that grow regularly during bulk loads, except if you
have the possibility to recreate the cluster.

Partitioning : Though an option, this feature must be considered for large amounts of data for
which queries concentrate access on rows that were generated recently. It offers partition scans
instead of full table scans.

DSS Application Issues :

• Parse time is less important


• Execution plan must be optimal

- use parallel query feature


- tune carefully, using hints if appropriate
- test on realistic amounts of data.
• Consider using PL/SQL functions to code logic into queries
• Bind variables are problematic.

Parse Time :

The time taken to parse SELECT statements is likely to be a very small proportion of the
time taken to execute the query. Tuning the library cache is much less of an issue for DSS than
for OLTP.

Your priority is an optimal access path in the execution plan; small variations can cost
minutes or hours. Developers must

• Use parallelized queries, which enable multiple processes to work together simultaneously
to process a single SQL statement. Symmetric multiprocessor (SMP), clustered or
massively parallel processing (MPP) configurations gain the largest performance benefits
because the operation can be effectively split among many CPUs on a single system.
• Use the EXPLAIN PLAN command to tune SQL statements, and hints to control access paths.

If your application logic uses bind variables, you lose the benefit of this feature : the
optimizer makes a blanket assumption about the selectivity.

Ex : if you use the following statement, the value of :1 is not known when the optimizer
determines the execution plan.

SQL>SELECT * FROM big_table
    WHERE number_column = :1;

The optimizer cannot calculate the no. of rows returned by this step exactly. This may
mean that the server uses a sub-optimal execution plan.

So although it is strongly recommended to use bind variables in OLTP systems with index
access, they are not ideal for decision support systems, since there is little chance that the
same statement is executed twice.

Parameters for hybrid systems

• Memory use

- SHARED_POOL_SIZE
- DB_BLOCK_BUFFERS
- SORT_AREA_SIZE
These parameters will have higher values for day time (i.e. OLTP).

• Parallel query

- reconfigure parameters for DSS.

Parallel query

• During the day, both PARALLEL_MIN_SERVERS and PARALLEL_MAX_SERVERS
could be set to zero to prevent parallelization.
• Full table scans of large tables can be restricted : use different day-time profiles to limit
LOGICAL_READS_PER_CALL or CPU_PER_CALL and assign these profiles to users.
• In off-peak time, you can reset these parameters to the appropriate number and reassign
night-time profiles to the DSS querying users.
SQL Tuning:

Objectives

• Use Oracle tools to diagnose SQL statement performance


• Track and register module usage for packages, procedures and triggers
• Identify alternative SQL statements to enhance performance

Overview:

• Application tuning is the most important part of tuning


• Database administrators :

- may not be directly involved in application tuning
- must be familiar with the impact that poorly written SQL statements can
have upon database performance

DIAGNOSTIC TOOLS

• EXPLAIN PLAN
• SQLTRACE
• TKPROF
• AUTOTRACE

EXPLAIN PLAN

• can be used without tracing
• needs PLAN_TABLE, created by utlxplan.sql
• changes may invalidate the plan

SQL>@$ORACLE_HOME/rdbms/admin/utlxplan.sql
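A minimal sketch of using EXPLAIN PLAN once PLAN_TABLE exists (the statement and statement ID are illustrative):

SQL>EXPLAIN PLAN SET STATEMENT_ID = 'demo' FOR
    SELECT ename FROM scott.emp WHERE empno = 7369;

SQL>SELECT id, parent_id, operation, options, object_name
    FROM plan_table
    WHERE statement_id = 'demo'
    ORDER BY id;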

Diagnosing SQL statement performance:

1. Set initialization parameters


2. switch on SQL trace
3. run the application
4. switch off SQL Trace
5. Format the trace file
6. Interpret the output

1. The first step in diagnosing SQL statement performance is to ensure that appropriate
initialization parameters are set. These may be set at the instance level; the setting of some
parameters is also supported at the system and session levels.
2. SQL trace must be invoked at either the instance or session level. Generally, it is better
from an overall performance standpoint if it is invoked at the session level.

3. Run the application or SQL statement you want to diagnose.

4. Switch OFF SQL trace. This is necessary to properly close the trace file at the operating
system level (or the TKPROF utility will not provide valid output)

5. Use TKPROF to format the trace file generated during the trace session. Unless the
output file is formatted, it will be very difficult to interpret the results.

6. Use the O/p from TKPROF to diagnose the performance of the SQL statement.

Important Initialization parameters:

• MAX_DUMP_FILE_SIZE
• USER_DUMP_DEST
• TIMED_STATISTICS

Two parameters in the init.ora file control the size and destination of the output file from
the SQL trace facility.

MAX_DUMP_FILE_SIZE = n (default 500; measured in bytes if K or M is specified,
otherwise the number represents operating system blocks)

USER_DUMP_DEST = directory

You must set a third parameter to get timing information

TIMED_STATISTICS = TRUE

The timing statistics have a resolution of one hundredth of a second. This means that any
operation that takes a hundredth of a second or less may not be timed accurately, especially for
simple queries that execute quickly.

In Oracle the TIMED_STATISTICS parameter can be set dynamically at the session


level using the ALTER SESSION command.

SWITCHING TRACE ON AND OFF

• Instance level :

SQL_TRACE = {TRUE / FALSE}

Setting the SQL_TRACE parameter at the instance level is one method to enable tracing.
However, it requires that the instance be shut down and then restarted when tracing is no longer
needed. It also incurs a significant performance hit, because all sessions for the instance will be
traced.

• Session level : Session-level tracing results in less of an overall performance hit, because
specific sessions can be traced. Three methods for enabling or disabling SQL Trace are :

a) using the ALTER SESSION command, which results in tracing for the duration of
the session or until the value is set to FALSE
b) using the PL/SQL procedure DBMS_SESSION for the current session
c) using the PL/SQL procedure DBMS_SYSTEM to enable tracing for a user
other than the logged-in user. Syntax :

a) ALTER SESSION SET sql_trace = {TRUE | FALSE};
b) EXECUTE sys.dbms_session.set_sql_trace({TRUE | FALSE});
c) EXECUTE sys.dbms_system.set_sql_trace_in_session(session_id, serial_id, {TRUE |
FALSE});
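A minimal sketch of tracing another user's session (the SID and SERIAL# values are illustrative; look them up first):

SQL>SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';

SQL>EXECUTE sys.dbms_system.set_sql_trace_in_session(9, 190, TRUE);
-- ... let the session run its workload ...
SQL>EXECUTE sys.dbms_system.set_sql_trace_in_session(9, 190, FALSE);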

FORMATTING THE TRACE FILE :


$ tkprof ora_19999.trc myfile.txt explain=scott/tiger

Use TKPROF to format the trace file into readable output :

tkprof tracefile outputfile [SORT=option] [PRINT=n] [EXPLAIN=username/password]
       [INSERT=filename] [SYS=NO] [RECORD=filename] [TABLE=schema.table]

The trace file is created in the directory specified by the USER_DUMP_DEST parameter
and the output is placed in the directory specified by the output file name.

TKPROF

Trace Statistic   Meaning

COUNT     The number of times the statement was parsed or executed, and the no. of fetch
          calls issued for the statement
CPU       Processing time for each phase, in seconds (if the statement was found in the
          shared pool, this is 0 for the parse phase)
ELAPSED   Elapsed time in seconds (not usually very helpful, because other processes
          affect elapsed time)
DISK      Physical database blocks read from the database files (this statistic may be
          quite low if the data was buffered)
QUERY     Logical buffers retrieved for consistent read (usually for SELECT statements)
CURRENT   Logical buffers retrieved in current mode (usually for DML statements)
ROWS      Rows processed by the outer statement (for SELECT statements this is shown
          for the fetch phase; for DML statements it is shown for the execute phase)

The sum of QUERY and CURRENT is the total no. of logical buffers accessed.
ORACLE INSTANCE

OBJECTIVES

• Setting up operating system and password file authentication


• Creating the parameter file
• Starting up an instance and opening the database closing a database and shutting down
the instance
• Getting and setting parameter values
• Managing sessions
• Monitoring ALERT and trace files


Managing Oracle Instance:

Two DBA users, SYS and SYSTEM, are automatically created and granted the DBA role.

SYS : all the base tables and DD views are stored in the SYS schema
SYSTEM : additional tables and views are created that contain administrative information used
by the Oracle tools

Password authentication methods:

- operating system authentication


- password file authentication

Operating system authentication:

• set up the user to be authenticated by the OS
• set REMOTE_LOGIN_PASSWORDFILE to NONE
• use the following commands to connect to a database

CONNECT / AS SYSDBA
CONNECT / AS SYSOPER

• The command CONNECT INTERNAL used with earlier versions of Oracle has
been replaced by the new syntax :

CONNECT INTERNAL/PW AS SYSDBA


• Oracle no longer supports setting the DBA_AUTHORIZATION parameter in the registry to
BYPASS to enable connections w/o the use of a password.
USING PASSWORD FILE AUTHENTICATION:

• Create the password file using the password utility


• Set REMOTE_LOGIN_PASSWORDFILE = EXCLUSIVE (only one instance) or
SHARED (more than one instance)
• Use the following command to connect to a database : CONNECT INTERNAL / ADMIN

CHANGING THE INTERNAL PASSWORD:

• Use the password utility on NT and UNIX to delete and create the password file OR
• Use the ORADIM80 utility on NT to delete and create a new password file.

STARTUP / SHUTDOWN IN STAGES :

SHUTDOWN
NOMOUNT – instance started
MOUNT – control file opened for this instance
OPEN – all files opened as described by the control file for this instance

- Only during creation of the db or recreation of the control file do we start an instance w/o
mounting a database.
- The DB must be mounted but not opened during :

• Renaming data files


• Enabling and disabling redo log archiving options
• Performing full db recovery.

- Startup Command

STARTUP [FORCE] [RESTRICT] [PFILE=filename]
        [EXCLUSIVE | PARALLEL | SHARED]
        [OPEN [RECOVER] [database]
        | MOUNT
        | NOMOUNT]

Changing database availability:

To take the database from the NOMOUNT stage to MOUNT, or from MOUNT to OPEN,
use the ALTER DATABASE command :

ALTER DATABASE {MOUNT | OPEN}

Shutdown options : SHUTDOWN {NORMAL | TRANSACTIONAL | IMMEDIATE | ABORT}
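A minimal sketch of stepping through the stages by hand (the parameter file name initDB01.ora is illustrative):

SVRMGR>STARTUP NOMOUNT PFILE=initDB01.ora
SVRMGR>ALTER DATABASE MOUNT;
SVRMGR>ALTER DATABASE OPEN;

SVRMGR>SHUTDOWN IMMEDIATE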

Dynamic performance views :

- Once the instance is started (NOMOUNT stage), the V$ views that read from memory
are accessible.
- Views that read data from the control file require the database to be mounted. The view
V$FIXED_TABLE displays all dynamic performance views.

NOMOUNT – dynamic performance views reading from memory :

V$PARAMETER
V$SGA
V$OPTION
V$PROCESS
V$SESSION
V$VERSION
V$INSTANCE

MOUNT – views reading data from the control file :

V$THREAD
V$CONTROLFILE
V$DATABASE
V$DATAFILE
V$DATAFILE_HEADER
V$LOGFILE

OPEN – data dictionary views.
DYNAMIC INITIALISATION PARAMETERS :

ALTER SESSION SET SQL_TRACE = TRUE


ALTER SYSTEM SET TIMED_STATISTICS = TRUE
ALTER SYSTEM SET SORT_AREA_SIZE = 131072 DEFERRED

- The MAX_DUMP_FILE_SIZE and USER_DUMP_DEST parameters are dynamic
initialization parameters.

- Check the ALERT file periodically to :

Detect internal errors (ORA-600) and block corruption errors
Monitor db operations
View the non-default initialization parameters
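The non-default parameters can also be listed from memory; a minimal query sketch:

SQL>SELECT name, value FROM v$parameter WHERE isdefault = 'FALSE';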

DD Views :

- Created during database creation by the script sql.bsq
- The DD is located in the SYSTEM TS and owned by SYS :
Base tables
Data dictionary views

After db creation, catalog.sql & catproc.sql must be run as the user SYS. dbmspool.sql
enables displaying the sizes of the objects in the shared pool and marking them for keeping or
unkeeping in order to reduce shared pool fragmentation.

Redo log files :

Used to minimize the loss of data.

The Oracle server needs a minimum of 2 online redo log file groups.

All members of a group have the same log sequence number, which is incremented at
log switch.

The log sequence number is stored in the control file and in the header of all data files.

Create db command :

MAXLOGFILES -- maximum limit is 255
MAXLOGMEMBERS -- determines the max no. of members per group

Init.ora : LOG_FILES should not exceed MAXLOGFILES * MAXLOGMEMBERS


LGWR writes the redo log buffer to the current online redo log group :

• When a commit occurs
• When the redo log buffer becomes 1/3 full
• When an LGWR timeout occurs
• Before DBWR writes dirty buffers in the buffer cache to datafiles

A log switch occurs when the current group is full or when a switch is requested manually.
When a log switch occurs, a checkpoint is initiated.

A checkpoint occurs :

• At every log switch
• When an instance has been shut down with the NORMAL, TRANSACTIONAL or
IMMEDIATE option
• When forced by the init.ora parameters LOG_CHECKPOINT_INTERVAL (in terms of
OS blocks) and LOG_CHECKPOINT_TIMEOUT
• When manually requested by the DBA

Information about each checkpoint goes to the ALERT file if LOG_CHECKPOINTS_TO_ALERT
= TRUE in init.ora.

There are two ways in which online redo log file can be archived

- manually
- automatically

Init.ora parameter LOG_ARCHIVE_START = TRUE  -- automatic
                                     = FALSE -- manual

Obtaining information about archiving:

SVRMGR>ARCHIVE LOG LIST

or query V$DATABASE
- NAME
- LOG_MODE

or V$INSTANCE
- ARCHIVER
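Minimal query sketches for the same information:

SQL>SELECT name, log_mode FROM v$database;

SQL>SELECT archiver FROM v$instance;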

Information about log groups – V$THREAD :

 GROUPS
 CURRENT_GROUP#
 SEQUENCE#

Info about groups and members – V$LOG :

 GROUP#
 MEMBERS
 SEQUENCE#
 STATUS
 BYTES

V$LOGFILE

 GROUP
 STATUS
 MEMBER

STATUS - UNUSED
- CURRENT
- ACTIVE
- INACTIVE

LOG SWITCHES AND CHECKPOINTS

SVRMGR>ALTER SYSTEM SWITCH LOGFILE;

- The init.ora LOG_CHECKPOINT_INTERVAL=0 makes checkpoints initiate frequently, since
a request will be initiated even if a single redo log buffer has been written since the last request.
- LOG_CHECKPOINT_TIMEOUT=0 disables timed checkpoints of the redo buffer.

ADDING GROUPS :  ALTER DATABASE ADD LOGFILE
                 ('/DISK3/log3a.rdo',
                  '/DISK4/log3b.rdo') SIZE 1M;

MEMBERS :        ALTER DATABASE ADD LOGFILE MEMBER
                 'filespec' TO GROUP 1,
                 'filespec' TO GROUP 2;

DROPPING :       ALTER DATABASE DROP LOGFILE GROUP 3;

When an online redo log group is dropped, the OS files are not deleted.

ALTER DATABASE DROP LOGFILE MEMBER '/DISK4/log2b.rdo';

CLEARING :       ALTER DATABASE CLEAR LOGFILE '/DISK4/log2a.rdo';

This command is equivalent to adding and dropping a redo log file, and can be issued
even if there are only 2 log groups.

SIZE : The minimum size of a redo log file is 50K; the max size is OS specific.
Indexes

• Logical
- Single column
- Concatenated / composite
- Unique or non unique
• Physical
- Partitioned
- Non-partitioned
- B-Tree or Bitmap
- Normal or reverse key (B-Tree only)

Type : B-Tree
Reverse key
Bitmap

B-Tree leaf entry format :

- Entry header : no. of columns and locking info
- Key column length/value pairs : the size of each column followed by its value
- ROWID of the row

- The max no. of columns in a composite key index is 32.
- However, the combined size of all columns cannot exceed 1/3 of the block size.
- There will not be any index entry for a row if all key columns are NULL.
- Reverse key indexes – NOSORT is used while creating the index.
- Reverse key indexes – are not used for range searches.
- PCTFREE – maximum for indexes on randomly inserted values;
  low for system-generated sequences.
• BITMAP INDEX

The init.ora parameter CREATE_BITMAP_AREA_SIZE gives the memory area size used to
build a bitmap index – the default value is 8 MB.

As a general rule : for high cardinality the value should be larger (in MBs);
for low cardinality it should be smaller (in KBs).

ALLOCATING AND DEALLOCATING INDEX SPACE

ALTER INDEX scott.ord
ALLOCATE EXTENT (SIZE 200K DATAFILE '/DISK6/indx01.dbf');

ALTER INDEX scott.ord
DEALLOCATE UNUSED;
Views : DBA_INDEXES
DBA_IND_COLUMNS
DBA_OBJECTS
Managing Table spaces and Data Files

- Except for the SYSTEM TS or a TS with active rollback segments, TSs can be taken offline
leaving the db running.
- TSs can be switched between read-write and read-only status.

Creating TS :
CREATE TABLESPACE dotsoft
DATAFILE 'C:\DOTSOFT\dotsoft1.dbf' SIZE 100M,
         'D:\DOTSOFT\dotsoft2.dbf' SIZE 200M
MINIMUM EXTENT 500k
DEFAULT STORAGE (INITIAL 500k NEXT 500k
MAXEXTENTS 500 PCTINCREASE 0);

- The max no. of TSs per database is 64K, but the no. of TSs cannot exceed the no. of data files.
- The max no. of datafiles per TS is 1023.
- The MINIMUM EXTENT option can only be specified for a TS, not for the storage of
individual objects; with this option the DBA controls fragmentation.
Size of extent n = NEXT x (1 + PCTINCREASE/100)^(n-2)
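For example (values illustrative) : with NEXT = 500K and PCTINCREASE = 50, extent 2 is
500K, extent 3 is 500K x 1.5 = 750K, and extent 4 is 500K x 1.5^2 = 1,125K.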

PCTINCREASE : DEFAULT VALUE 50


Min value = 0

- If MINIMUM EXTENT is specified for a TS, the extents allocated in this TS are multiples of
this parameter.

ADDING DATAFILES :  ALTER TABLESPACE app_data
                    ADD DATAFILE 'd:\dotsoft.dbf' SIZE 200M;

Two ways of enlarging a TS :

- 1st way – add a new datafile :

ALTER TABLESPACE dotsoft
ADD DATAFILE '/DOTSOFT2.DBF' SIZE 200M
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

- 2nd way – change the size of a datafile which already exists :

ALTER DATABASE
DATAFILE 'D:\DOTSOFT2.DBF'
AUTOEXTEND ON NEXT 10M MAXSIZE 500M;

Manual resizing :   ALTER DATABASE DATAFILE
                    'D:\DOTSOFT2.DBF' RESIZE 200M;

OFFLINE - NORMAL
- TEMPORARY
- IMMEDIATE

MOVING DATAFILES :
ALTER TABLESPACE dotsoft
RENAME DATAFILE 'filename' TO 'filename';

NOTE : The source file names should match the names stored in the control file.

For all types of datafiles : ALTER DATABASE RENAME FILE 'filename' TO 'filename';

DROPPING A TABLESPACE : DROP TABLESPACE dotsoft INCLUDING CONTENTS;

VIEWS : DBA_TABLESPACES
        DBA_DATA_FILES

Information from the control file : V$DATAFILE
                                    V$TABLESPACE

STORAGE STRUCTURE AND RELATIONSHIPS

Storage clause precedence :

Any storage parameter specified at the segment level overrides the corresponding option
set at the TS level, except for the MINIMUM EXTENT TS parameter.

Coalescing free space : this happens

• When SMON initiates a space transaction to merge adjacent free extents (it happens
when PCTINCREASE > 0)
• When the Oracle server needs to allocate an extent that needs space from more than one
adjacent free extent
• On demand, when requested by the DBA :

ALTER TABLESPACE dotsoft COALESCE;

VIEW : DBA_FREE_SPACE_COALESCED

INITRANS : default value = 1 for a data segment
                         = 2 for an index segment

If more transactions are needed, slots can be allocated from the free space in the block.
MAXTRANS default value is 255.

Data Dictionary Views :

DBA_TABLESPACES
DBA_SEGMENTS
DBA_DATA_FILES
DBA_EXTENTS
DBA_FREE_SPACE
Managing Rollback Segments:

An RBS stores block information such as file and block ID, and the data as it existed before
being modified (the before image).

Types of RBS : System
               Non-System – Private (default)
                          – Public

Shrinkage of RBS : Deallocation of extents is not done as soon as transactions are completed.
The process of deallocating extents is performed only when the head moves from one extent to
the other. Extents are deallocated if

• The current size of RBS exceeds OPTIMAL


• There are contiguous inactive extents

Read Consistency : SNAPSHOT TOO OLD error, ORA-01555

Creating an RBS with MINEXTENTS = 20 is recommended to reduce the possibility of extension.
OPTIMAL should be set to at least the initial size of the RBS (the size of the MINEXTENTS
extents).

AVOID Setting MAXEXTENTS = UNLIMITED

ALTER ROLLBACK SEGMENT RBS01 ONLINE

Init.ora parameters : TRANSACTIONS (T)
                      TRANSACTIONS_PER_ROLLBACK_SEGMENT (TPR)

N = T / TPR, where N = the no. of RBSs acquired at startup
Storage Setting :

ALTER ROLLBACK SEGMENT rbs01 STORAGE
      (NEXT integer K/M
       MINEXTENTS integer
       MAXEXTENTS integer / UNLIMITED
       OPTIMAL integer K/M / NULL);

ALTER ROLLBACK SEGMENT RBS01 SHRINK TO 4m

Owner of RBS : SYS    -- private RBS
               PUBLIC -- public RBS

RBS statistics : join DBA_ROLLBACK_SEGS with V$ROLLNAME and V$ROLLSTAT
(V$ROLLSTAT also shows a pending-offline status).

Info about current RBS activity : join V$SESSION with V$TRANSACTION.

Read Consistency errors can be minimized with


• Higher MINEXTENTS
• Larger EXTENT SIZES
• Higher OPTIMAL VALUE

Blocking Sessions :
Use V$ROLLSTAT, V$SESSION and V$TRANSACTION;
get the SID and SERIAL# (and username) and kill the session.

V$SESSION – session information
V$TRANSACTION – transactions currently using an RBS
V$ROLLSTAT – statistics of RBSs

Managing Tables

Structure of a row :
ROW HEADER -- no. of columns in the row, chaining info, row lock status
ROW DATA -- column length (1 byte if up to 250 bytes, 3 bytes if more than
250 bytes), column value

ROWID data type :

Pseudo column, format OOOOOO FFF BBBBBB RRR :

              Data object no.  Relative file no.  Block no.  Row no.
Size in bits  32               10                 22         16

A ROWID needs 10 bytes of storage (80 bits) and is displayed using a base-64
encoding scheme.
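A minimal sketch of examining ROWIDs with the DBMS_ROWID package (the table name is illustrative):

SQL>SELECT rowid, empno FROM scott.emp WHERE ROWNUM <= 3;

SQL>SELECT dbms_rowid.rowid_relative_fno(rowid) AS file#,
           dbms_rowid.rowid_block_number(rowid) AS block#
    FROM scott.emp WHERE ROWNUM <= 3;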

Creating a table :
• If MINIMUM EXTENT has been defined for the TS, the extent sizes for the table will get
rounded up to the next higher multiple of the MINIMUM EXTENT value.
• If MINEXTENTS > 1 and the TS contains more than one data file, the extents will be spread
across the different files in the TS.

- To improve full table scans – init.ora DB_FILE_MULTIBLOCK_READ_COUNT

Setting PCTFREE, PCTUSED :

            average row size - initial row size
PCTFREE  = ------------------------------------- x 100
                    average row size

                             average row size x 100
PCTUSED  = 100 - PCTFREE - --------------------------
                             available data space
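For illustration (numbers hypothetical) : with an average row size of 100 bytes, an initial row
size of 80 bytes and about 1,900 bytes of available data space per block,
PCTFREE = (100 - 80)/100 x 100 = 20 and PCTUSED = 100 - 20 - (100 x 100/1900), i.e. about 75.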
Changing Storage and block utilization parameters :

Any of the block utilization parameters and some of storage parameters can be modified.

Storage

INITIAL      - cannot be modified
NEXT         - will be rounded to a value that is a multiple of the block
               size, greater than or equal to the value specified
PCTINCREASE  - affects subsequently allocated extents
MINEXTENTS   - can be changed to any value <= the extents currently used in the table
MAXEXTENTS   - can be changed to any value >= the extents currently used in the table

Block utilization :

• PCTFREE - affects future inserts


• PCTUSED - affects all the blocks in the table
• INITRANS - affects only new blocks
• MAXTRANS - affects all blocks in the table

Manual allocation of extents :

ALTER TABLE scott.employees
ALLOCATE EXTENT (SIZE 500K
DATAFILE '/DISK3/DOTSOFT3.DBF');

If SIZE is omitted, the NEXT_EXTENT value for the table from DBA_TABLES is used to
allocate the new extent.

High Water Mark : the HWM is stored in the segment header of the table.

Finding the HWM – DBMS_SPACE.UNUSED_SPACE,
created when the scripts dbmsutil.sql and prvtutil.plb are invoked by catproc.sql.

Deallocation of unused space (above the HWM) :

ALTER TABLE emp DEALLOCATE UNUSED [KEEP integer K/M];

To release all the space above the HWM, even if the HWM is below MINEXTENTS, use KEEP 0.

Truncating a table : DELETE triggers do not fire while truncating.

• Specify the REUSE clause to retain all the space used by the table
• The effect of REUSE or DROP cascades to indexes
Validating Table Structure :

The ANALYZE command is used for :

• Deleting statistics from the DD
• Validating the structure of the table
• Identifying migrated and chained rows of the table

Init.ora : DB_BLOCK_CHECKSUM = TRUE to calculate a checksum and store it in the
header of every data block when writing it to disk.

ANALYZE TABLE emp
VALIDATE STRUCTURE CASCADE;

NOTE : Use the following PL/SQL procedures to analyze objects :

• DBMS_DDL.ANALYZE_OBJECT – for analyzing a single object
• DBMS_UTILITY.ANALYZE_SCHEMA – for analyzing objects owned by a user
• DBMS_UTILITY.ANALYZE_DATABASE – for analyzing all objects in the DB

Detecting row migration : use the ANALYZE command to find it :

ANALYZE TABLE emp
{COMPUTE STATISTICS | ESTIMATE STATISTICS [SAMPLE integer {ROWS |
PERCENT}]};

Retrieving Table Information :

DBA_OBJECTS DBA_TABLES DBA_SEGMENTS

Info about EXTENTS DBA_EXTENTS

DBMS_ROWID package : available when dbmsutil.sql is run.

Temporary Table Space

Init.ora : SORT_AREA_SIZE

CREATE TABLESPACE dotsoft DATAFILE 'filespec' TEMPORARY;

- If a permanent TS is used for sorting, an instance may have more than one temporary
segment in the TS.
TEMP SEGS IN TEMP TS :
• Known as SORT SEGMENTS
• Only one segment per TS per instance
• Created when first disk sort occurs in the instance after startup
• Released on instance shutdown

Storage clause : INITIAL = NEXT = (a multiple of SORT_AREA_SIZE) + DB_BLOCK_SIZE
                 PCTINCREASE = 0

The MAXEXTENTS clause only affects a temporary segment if the tablespace is a permanent
tablespace.

VIEWS : DBA_SEGMENTS (info about both types of temporary segments)
        V$SORT_SEGMENT (gives the sort extent pool used by the instance)
        V$SORT_USAGE (current active sorts for the instance)

Join V$SESSION with V$SORT_USAGE for information about temporary segment activity.

Using NLS : Different types of encoding schemes :

- Single-byte : 7-bit (e.g. US7ASCII) or 8-bit (e.g. EBCDIC)
- Varying-width multibyte
- Fixed-width multibyte
- Unicode

Character set of the DB :
- defined at creation time; cannot be recreated
- stores data columns of type CHAR, VARCHAR2, CLOB, LONG
- can store varying-width character sets

National character set of the DB :
- defined at creation time; cannot be recreated
- stores data columns of type NCHAR, NVARCHAR2 & NCLOB
- can store fixed-width and varying-width multibyte character sets

- The database character set must be a superset of 7-bit ASCII or 8-bit EBCDIC
- Oracle does not support the NCS on the LONG data type

Specifying language-dependent behaviour on the server side – init.ora :

NLS_LANGUAGE
• Language used for Oracle messages
• Language used for day and month names and their abbreviations
• Symbols for AD, BC, AM, PM
• The default sorting mechanism

NLS_TERRITORY
• Default date format, decimal character, group separator and
default ISO & local currency symbols
• Day and week numbering

Dependent language and territory default values :

NLS_LANGUAGE = AMERICAN determines the defaults for :
• NLS_DATE_LANGUAGE (AMERICAN)
• NLS_SORT (BINARY)

NLS_TERRITORY = AMERICA determines the defaults for :
• NLS_CURRENCY ($)
• NLS_ISO_CURRENCY (AMERICA)
• NLS_DATE_FORMAT (DD-MON-YY)
• NLS_NUMERIC_CHARACTERS (.,)

- Specifying language-dependent behaviour for the session :

Environment variable NLS_LANG = <language>_<territory>.<charset>
The value of NLS_LANG overrides any values of the NLS initialization parameters.

- Data passed between client and server is converted automatically between the two
encoding schemes.
- The db encoding scheme should be a superset of, or equivalent to, all the client encoding
schemes. The conversion is transparent to client applications.
- If the environment variable ORA_NLS33 is not set, it is only possible to create a db with
the default character set US7ASCII.

Specifying language-dependent behaviour for the session :

- ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YY';
- DBMS_SESSION.SET_NLS('NLS_DATE_FORMAT','DD-MM-YY');

SORTING

- Provides a linguistic sort :
  set the parameter NLS_SORT = GERMAN,
  or use ALTER SESSION SET NLS_SORT = GERMAN;

- Using NLS parameters in SQL functions :

SELECT TO_CHAR (hiredate, 'DD-MM-YYYY', 'NLS_DATE_LANGUAGE = GERMAN')
FROM emp;
- Importing and loading data using NLS :

• Data will be converted from the NLS_LANG character set to the db character set during
import.

SQL*Loader :

Conventional - data is converted into the session character set specified by
               NLS_LANG, then into the db character set
Direct       - data is converted directly into the db character set

VIEWS : NLS_DATABASE_PARAMETERS
- PARAMETER (info about character sets)
- VALUE

Info about NLs settings :

• NLS_INSTANCE_PARAMETERS (parameters that were explicitly set)


• NLS_SESSIONS_PARAMETERS

Obtaining info about NLS Settings:

• V$NLS_VALID_VALUES
• V$NLS_PARAMETERS
Managing Users :

Authentication Mechanism

• Database
• Operating System
• Network

Creating a new user :

CREATE USER username
IDENTIFIED {BY password | EXTERNALLY}
[DEFAULT TABLESPACE tablespace]
[TEMPORARY TABLESPACE tablespace]
[QUOTA {integer K/M | UNLIMITED} ON tablespace]
[PASSWORD EXPIRE]
[ACCOUNT {LOCK | UNLOCK}]
[PROFILE profile]
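A minimal sketch (user name, password and tablespace names are illustrative):

CREATE USER scott IDENTIFIED BY tiger
       DEFAULT TABLESPACE data01
       TEMPORARY TABLESPACE temp
       QUOTA 10M ON data01
       PASSWORD EXPIRE;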

OS AUTHENTICATION : OS_AUTHENT_PREFIX can be set to "OS_", "" (empty) or "OPS$"

Changing a user's quota on a TS :

ALTER USER scott QUOTA 0 ON tablespace_name;

CONTROLLING ACCOUNT LOCK & PASSWORD

ALTER USER scott IDENTIFIED {BY password | EXTERNALLY}
[PASSWORD EXPIRE]
[ACCOUNT {LOCK | UNLOCK}];

- DROP USER with the CASCADE option
- Monitoring users :

Views : DBA_USERS (user account status)
        DBA_TS_QUOTAS (TS quotas for a user)

If the user has unlimited quota, MAX_BLOCKS = -1 and MAX_BYTES = -1 will be set.

Privileges are of two types : SYSTEM and OBJECT.

System privileges : the ANY keyword in a privilege signifies that the user has the privilege in
every schema.

System privilege examples :

Index :       CREATE ANY INDEX, ALTER ANY INDEX, DROP ANY INDEX
              (* there is no CREATE INDEX privilege)
Table :       CREATE TABLE, CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE,
              SELECT ANY TABLE, UPDATE ANY TABLE, DELETE ANY TABLE
              (CREATE ⇒ DROP also)
Session :     CREATE SESSION, ALTER SESSION, RESTRICTED SESSION
Tablespace :  CREATE TABLESPACE, ALTER TABLESPACE, DROP TABLESPACE,
              UNLIMITED TABLESPACE

NOTE :
1. CREATE TABLE includes CREATE INDEX and the ANALYZE commands. But
the user must have a quota on TS or UNLIMITED TS.
2. UNLIMITED TS cannot be granted to a role
3. For truncating table, the DROP ANY TABLE privilege is necessary

Granting system privileges :

GRANT {system_privilege | role}
TO {user | role | PUBLIC} [, user | role | PUBLIC]
[WITH ADMIN OPTION];

SYSDBA and SYSOPER can also be granted as privileges.
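A minimal sketch (the grantee and privileges are illustrative):

GRANT CREATE SESSION, CREATE TABLE
TO scott
WITH ADMIN OPTION;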

SYSDBA and SYSOPER privileges : connecting as SYSDBA with OS authentication gives a
user unrestricted privileges to perform any operation on the DB or the objects in the DB.

Displaying system privileges :

Database level : DBA_SYS_PRIVS
• GRANTEE
• PRIVILEGE
• ADMIN_OPTION
Lists system privileges granted to users and roles.

Session level : SESSION_PRIVS
• PRIVILEGE
Lists the privileges available to the current session.


System privilege restrictions (Oracle 7 dictionary protection) :

O7_DICTIONARY_ACCESSIBILITY = TRUE (defaults to TRUE)
removes the restrictions on system privileges with the ANY keyword.

Revoking system privileges : REVOKE CREATE TABLE FROM user1;

Revoking system privileges granted WITH ADMIN OPTION : if A grants a privilege to B, and
B grants it to C, revoking the privilege from B does not revoke it from C – there is no
cascading effect.

OBJECT PRIVILEGES

Granted on : TABLE, VIEW, SEQUENCE, PROCEDURE

For object privileges granted WITH GRANT OPTION, there IS a cascading effect on revoke.

Displaying object privileges : to return all object privileges granted to the specified user, use
DBA_TAB_PRIVS and DBA_COL_PRIVS.
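A minimal sketch (names illustrative):

GRANT SELECT ON scott.emp TO blake WITH GRANT OPTION;

-- column-level object privilege
GRANT UPDATE (ename, sal) ON scott.emp TO blake;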

PROFILES :
1. Are assigned to users by the CREATE / ALTER USER command
2. Can limit system resources at the session level, the call level or both
3. Enforcing resource limits is not required for enabling Oracle password management

CREATE PROFILE developer_prof LIMIT

[SESSIONS_PER_USER max_value]
[CPU_PER_SESSION max_value]
[CPU_PER_CALL max_value]
[CONNECT_TIME max_value]
[IDLE_TIME max_value]
[LOGICAL_READS_PER_SESSION max_value]
[COMPOSITE_LIMIT max_value]
[PRIVATE_SGA max_bytes]
COMPOSITE_LIMIT is the weighted sum of

CPU_PER_SESSION
CONNECT_TIME
LOGICAL_READS_PER_SESSION
PRIVATE_SGA

The DD view RESOURCE_COST provides the weights assigned to the different resources.

ALTER RESOURCE COST : used to specify the weight of each session resource (see the
sketch below).

SESSION LEVEL                     CALL LEVEL
CPU_PER_SESSION                   CPU_PER_CALL
SESSIONS_PER_USER                 LOGICAL_READS_PER_CALL
CONNECT_TIME
IDLE_TIME
LOGICAL_READS_PER_SESSION
PRIVATE_SGA
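A minimal sketch of weighting resources for COMPOSITE_LIMIT (the weights are illustrative):

ALTER RESOURCE COST
      CPU_PER_SESSION 100
      CONNECT_TIME 1;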

ASSIGNING PROFILES TO USERS :

CREATE USER scott IDENTIFIED BY scott
DEFAULT TABLESPACE employee_data01
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON employee_data01
PROFILE developer_prof;

OR

ALTER USER scott
PROFILE developer_prof;

- Affects subsequent sessions only
- Profiles are assigned only to users, not to roles

Enabling resource limits : set Init.ora parameter

RESOURCE_LIMIT = TRUE
OR
ALTER SYSTEM SET RESOURCE_LIMIT=TRUE enables enforcement
FALSE disables enforcement (default)
Altering a profile : ALTER PROFILE default LIMIT (specs …)

DROPPING : DROP PROFILE developer_prof [CASCADE];

The DEFAULT profile cannot be dropped. CASCADE affects subsequent sessions only.

To display the resource limits for a user, use the views DBA_USERS and DBA_PROFILES.

Password Management :
Features : ACCOUNT LOCKING
           PASSWORD AGING AND EXPIRY
           PASSWORD HISTORY
           PASSWORD COMPLEXITY VERIFICATION

Enabling pwd mgmt : similar to resource limit settings – create a profile with the password
settings and assign it to users with the CREATE USER or ALTER USER command.

Creating a profile with password settings (a concrete sketch follows the list) :

CREATE PROFILE grace_5 LIMIT
[FAILED_LOGIN_ATTEMPTS]
[PASSWORD_LOCK_TIME]
[PASSWORD_LIFE_TIME]
[PASSWORD_GRACE_TIME]
[PASSWORD_REUSE_TIME]    if one of these two is set to a value or DEFAULT,
[PASSWORD_REUSE_MAX]     the other must be UNLIMITED
[PASSWORD_VERIFY_FUNCTION]
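A minimal sketch with illustrative values:

CREATE PROFILE grace_5 LIMIT
       FAILED_LOGIN_ATTEMPTS 3
       PASSWORD_LOCK_TIME UNLIMITED
       PASSWORD_LIFE_TIME 30
       PASSWORD_REUSE_TIME 30
       PASSWORD_REUSE_MAX UNLIMITED
       PASSWORD_GRACE_TIME 5;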

Password verification function : VERIFY_FUNCTION

Viewing pwd info : - use DBA_USERS to get info about expiration and locking dates and
account status
- query the DBA_PROFILES view to display password profile info.

Loading and reorganizing data :

Conventional path loading : needs redo log entries and commits
Direct load insert : uses direct data saves

Direct load insert :

INSERT /*+ APPEND */ INTO scott.emp
NOLOGGING
SELECT * FROM scott.old_emp;

This option is not available when the INSERT INTO ... VALUES command is used.
It is available when the INSERT INTO ... SELECT command is used.
The operation maintains indexes and also enforces all enabled constraints.
It also enables other users to concurrently modify other rows in the table.
All data loaded in this method is placed above the HWM; if the table contains many blocks
where rows have been deleted, space may be wasted and full table scans may be slower.

Parallel direct load :

The ALTER SESSION ENABLE PARALLEL DML command must be executed at the beginning
of the transaction. An object that is modified using parallel direct insert cannot be queried or
modified again within the same transaction.

INSERT /*+ PARALLEL(scott.emp, 2) */ INTO scott.emp NOLOGGING
SELECT * FROM scott.old_emp;

SQL*Loader uses : - control file


- data file
- parameter file
- bad file
- log file
- discard file

Conventional path load :
- builds arrays of rows and uses INSERT statements
- can be used for both clustered and non-clustered tables
- redo entries are always generated
- all constraints are enforced
- other users can make changes
- INSERT triggers are fired

Direct path load :
- builds blocks of data and saves them directly into the extents allocated for the table
  being loaded
- redo entries are generated only in particular situations
- only UNIQUE, NOT NULL and PK constraints are enforced
- cannot load into clustered tables
- other users cannot make changes
- INSERT triggers do not fire

Parallel direct loads :

Indexes are not maintained by the load : drop them before the parallel load and recreate them later.
Referential integrity, check constraints and triggers must be disabled and have to be re-enabled
manually.
Rows can only be appended to existing data. If existing data is to be replaced, manually truncate
the table before the parallel load.

BACKUP AND RECOVERY

B & R CONSIDERATIONS :

BACKUP AND RECOVERY GOALS :

• Protect the database from numerous types of failures
• Increase the mean time between failures (MTBF)
• Decrease the mean time to recover (MTTR)
• Minimize data loss

Defining a Backup and recovery strategy :


• Management concurrence (obtaining agreement from all levels of management)
• Business requirements
• Technical requirements
• Operational requirements

Business requirement :

• Mean_Time_To_Recover (MTTR)
• Mean time between failures (MTBF)
• Evolving process

Operational Requirements :

• 24 hour operations
• testing and validating backups
• database volatility
- are tables frequently updated
- is data highly volatile (so you need more frequent backups than a business
where data is static)
- does the structure of the database change often
- how often do you add data files.

Technical requirements :

• physical image copies of the o/s files


• logical copies of the objects in the database
• database configuration
• transaction volume affects the desired frequency of backups.

Oracle recovery and backup configuration :

• Identify the recovery implications of operating in noarchivelog mode
• Describe the difference between archivelog mode and noarchivelog mode
• Configure a database for archivelog mode and automatic archiving
• Use init.ora parameters to duplex archive log files

Overview :

• Noarchivelog or archivelog mode
• The decision to use one or the other depends upon how much data can be lost due
to a failure
• By default a db is configured for noarchivelog mode. The characteristics of this
mode are :
- redo log files are used in a circular fashion
- a redo log file can be reused immediately after a checkpoint has taken
place
- once redo logs are overwritten, media recovery is only possible to the last
full backup
Implications of noarchivelog mode :

• If a tablespace becomes unavailable because of a failure, you cannot continue to
operate the database until the TS has been dropped or the entire database has
been restored from backups.
• You may only perform OS backups of the DB when the DB is shut down.
• You must back up the entire set of data, redo and control files during each backup.
• You will lose all the data since the last full backup.
• You cannot perform online backups.

Media recovery options in noarchivelog mode :

• You must restore the data files, redo log files and control files from an earlier copy
of a full database backup.
• If you used the Export utility to back up the database, you can use the Import utility
to restore lost data. However, this results in an incomplete recovery and
transactions may be lost.

6 PHYSICAL BACKUPS W/O ORACLE RMAN

OBJECTIVES :

• Perform DB backups using OS commands
• Describe the recovery implications of closed and open DB backups
• Perform closed and open DB backups
• Identify the backup implications of the LOGGING and NOLOGGING options
• Identify the different types of control file backups
• Discuss backup issues associated with read-only DBs
• List the DD views useful for backup operations

Backup methods : closed database or open database (physical backup).

A db backup is an OS backup of the data files while the database is open or closed.

Physical backup methods :

• OS backup w/o archiving
• OS backup with archiving
- Use the OS backup method w/o archiving to recover to the point of the last backup after a
media failure.
- Use the OS backup method with archiving to recover to the point of failure after a media
failure.
failure.

Closed DB backup :

- A closed db backup is an OS backup of all the datafiles, redo log files, control files,
parameter files, the password file and the Net8 configuration files that constitute an Oracle db.
- It is not necessary to include the online redo log files as part of a whole database backup.
However, you may want to do this as a safeguard in the event you have to restore the database,
since this simplifies the process.

Advantages of Closed Backups:


• Conceptually simple
• Easy to perform
• Requires little operator interaction
• Reliable

Disadvantages :

• For online business operations the DB should always be available, so a closed backup
cannot be performed, as it needs shutting down of the db.
• The amount of time that the db is unavailable is affected by the size of the db, the no. of
data files and the speed with which the copy operations on the datafiles can be performed.
• A recovery is only as good as the last full closed database backup, and lost transactions
may have to be manually entered following a recovery operation.

10. Complete Oracle recovery with archiving :

Use archivelog mode if

- data loss bet' backups cannot be tolerated
- recovery to the time of failure is required (complete recovery)
- recovery is faster by reapplying data from archive logs than by batch jobs

Media failure and recovery with archiving :

Failure
- loss of disk, data file or corruption
Recovery
- data files for restore must be offline
- restore only lost or damaged data files
- do not restore the control files, redo logs, password or parameter files
- recover the data files
When a media failure occurs with a database in archivelog mode, to completely recover up to
the time of failure you must have the following :

• A valid backup containing the lost or damaged datafiles, taken after the database was set
in archivelog mode.
• All archived logs from the backup you are restoring to the present time.

Recovery steps :

• Ensure that files to be overwritten are not open during the restore
• Make sure you only restore the lost or damaged files from the backup
• Place the DB in either mount or open mode
• Recover the datafiles using the RECOVER command

Recovery with archiving (complete recovery)

Advantages :
o Only need to restore the lost files
o No committed data is lost. Restoring the files, then applying archived and redo
logs, brings the database to the current point in time.
o Recovery time = time to restore lost files and apply all archived logs

Disadvantages :

You must have all archived logs from the time of your last backup to the current time.

Complete Recovery methods :

1. closed DB recovery : For system, rollback segment data files or whole database
2. open database recovery, with DB initially open : for file loss
3. Open DB recovery with DB initially closed : for hardware failure
4. Recover a data file with no backup.
Recovery syntax : when the DB is closed :
SVRMGR>RECOVER DATABASE
SVRMGR>RECOVER DATAFILE '/disk1/data/df2.dbf'
SVRMGR>ALTER DATABASE RECOVER DATABASE
When the DB is open :
SVRMGR>RECOVER TABLESPACE user_data
SVRMGR>RECOVER DATAFILE 2
SVRMGR>ALTER DATABASE RECOVER DATAFILE 2
Recovery using archived logs :

• To change the archive location, use the ALTER SYSTEM ARCHIVE LOG ... command
• To automatically apply redo log files :
- issue SET AUTORECOVERY ON before starting media recovery
- enter AUTO when prompted for an archived log file
- use the RECOVER AUTOMATIC ... command

Files needed for recovery :

• View V$RECOVER_FILE to locate data files needing recovery
• View V$ARCHIVED_LOG for a list of all archived logs for the database
• View V$RECOVERY_LOG for a list of all archived logs required for recovery
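A minimal query sketch:

SQL>SELECT file#, error FROM v$recover_file;

SQL>SELECT archive_name FROM v$recovery_log;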

HOT BACKUP FILE RECOVERY :

• Data files are not synchronized if they were in hot backup mode at the time of failure.
• Recover the database using one of the methods already discussed.
• Determine which file(s) were in hot backup mode at the time of failure.
• End the backup on those data files.
• Open the DB.

Clearing redo log files :

Using the UNARCHIVED option :

SQL>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1;

Using the UNRECOVERABLE DATAFILE option :

SQL>ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 1 UNRECOVERABLE DATAFILE;

Loss of a redo log file :

• No recovery is required
• Archiving still occurs using a good online redo log file
• Loss of all redo log files for a group should never occur if the DB is configured properly
with mirrored redo log files
INCOMPLETE RECOVERY WITH ARCHIVELOG

• Identify the situations in which to use an incomplete recovery to recover the system
• Perform an incomplete database recovery
• Recover after losing current and active logs
• Use RMAN in an incomplete recovery
• Work with TS point-in-time recovery

This is for situations where dbs in ARCHIVELOG mode need to be recovered to a point before
the time of failure.

INCOMPLETE RECOVERY :
Reconstructs the database to a prior point in time (before the time of failure).

NOTE : This situation results in the loss of data from transactions committed after the time of
recovery. This data needs to be manually re-entered, so only perform this recovery when
absolutely necessary. Incomplete recovery is therefore a difficult and time-consuming operation.
Performing incomplete recovery :

• Requires a valid offline or online backup of all db files
• Needs all archived logs from the backup until the specified time of recovery
• Used when a server failure occurs, such as
- a failed complete recovery operation
- important tables in the db accidentally dropped

Reasons for incomplete recovery :

• user error
• complete recovery fails because an archived log is lost
• recovery using a backup control file
• loss of all unarchived redo logs and the data file

Recovery with archiving (incomplete recovery)

Advantages :

• recovers to a point prior to the time of failure
• provides recovery in situations where complete recovery is not possible

Disadvantages :

• the DB is taken back in time (data is lost)
• recovery time = time to restore all data files + time to apply the required archived logs
Types of Incomplete recovery :

• Time based recovery


• Cancel based recovery
• Recovery using a backup control file
• Change based recovery

Recovery steps :

1. Shut down and back up the DB
2. Restore all data files
3. Do not restore the control files, redo logs, password or parameter files
4. Mount the database and recover the data files to a point before the time of failure
5. Open the DB with RESETLOGS
6. Perform a closed DB backup
