
Study Material


11g R2
 

1. Oracle Architecture
 
Objectives
 
These notes introduce the Oracle server architecture.  The architecture includes
physical components, memory components, processes, and logical structures.
 
Primary Architecture Components
 

 
 
[Figure: primary components of the Oracle architecture]

The figure above details the Oracle architecture.
 
Oracle server:  An Oracle server includes an Oracle Instance and an Oracle
database.  
        An Oracle database includes several different types of files:  datafiles, control
files, redo log files and archive redo log files.  The Oracle server also accesses
parameter files and password files. 
        This set of files has several purposes. 
o   One is to enable system users to process SQL statements. 
o   Another is to improve system performance. 
o   Still another is to ensure the database can be recovered if there is a
software/hardware failure.
        The database server must manage large amounts of data in a multi-user
environment. 
        The server must manage concurrent access to the same data. 
        The server must deliver high performance.  This generally means fast response
times.
 
Oracle instance:  An Oracle Instance consists of two different sets of components:
        The first component set is the set of background processes (PMON, SMON,
RECO, DBW0, LGWR, CKPT, D000 and others). 
o   These will be covered later in detail – each background process is a
computer program. 
o   These processes perform input/output and monitor other Oracle processes
to provide good performance and database reliability. 
        The second component set includes the memory structures that comprise the
Oracle instance. 
o   When an instance starts up, a memory structure called the System Global
Area (SGA) is allocated. 
o   At this point the background processes also start. 
        An Oracle Instance provides access to one and only one Oracle database.   
 
Oracle database: An Oracle database consists of files. 
        Sometimes these are referred to as operating system files, but they are
actually database files that store the database information that a firm or
organization needs in order to operate. 
        The redo log files are used to recover the database in the event of application
program failures, instance failures and other minor failures.
        The archived redo log files are used to recover the database if a disk fails. 
        Other files not shown in the figure include:
o   The required parameter file that is used to specify parameters for
configuring an Oracle instance when it starts up. 

o   The optional password file authenticates special users of the database –
these are termed privileged users and include database administrators. 
o   Alert and Trace Log Files – these files store information about errors and
actions taken that affect the configuration of the database.
 
User and server processes:  The processes shown in the figure are
called user and server processes.  These processes are used to manage the
execution of SQL statements.
        A Shared Server Process can share memory and variable processing for
multiple user processes.
        A Dedicated Server Process manages memory and variables for a single user
process.
 
 
This figure from the Oracle Database Administration Guide provides another way of
viewing the SGA.
 

 
 
Connecting to an Oracle Instance – Creating a Session
 

 
 
System users can connect to an Oracle database through SQL*Plus or through an
application program like the Internet Developer Suite (the program becomes the
system user).  This connection enables users to execute SQL statements.
 
The act of connecting creates a communication pathway between a user process and
an Oracle Server.  As is shown in the figure above, the User Process communicates
with the Oracle Server through a Server Process.  The User Process executes on the
client computer.  The Server Process executes on the server computer, and actually
executes SQL statements submitted by the system user.
 
The figure shows a one-to-one correspondence between the User and Server
Processes.  This is called a Dedicated Server connection.  An alternative
configuration is to use a Shared Server where more than one User Process shares a
Server Process. 
 
Sessions:  When a user connects to an Oracle server, this is termed a
session.  The User Global Area is session memory and these memory structures are
described later in this document.  The session starts when the Oracle server validates
the user for connection.  The session ends when the user logs out (disconnects) or if
the connection terminates abnormally (network failure or client computer failure). 
 
A user can typically have more than one concurrent session, e.g., the user may
connect using SQL*Plus and also connect using Internet Developer Suite tools at the
same time.  The limit on concurrent session connections is controlled by the DBA. 
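
For example, a DBA can list the sessions currently connected by querying the
V$SESSION dynamic performance view (a sketch; the account must have access to the
V$ views):

```sql
-- List current user sessions; background process sessions have a NULL username.
SELECT username, sid, serial#, status
FROM   v$session
WHERE  username IS NOT NULL;
```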
 
If a system user attempts to connect and the Oracle Server is not running, the system
user receives the Oracle Not Available error message.
 
 
Physical Structure – Database Files
 
As was noted above, an Oracle database consists of physical files.  The database itself
has:
        Datafiles – these contain the organization's actual data.
        Redo log files – these contain a chronological record of changes made to the
database, and enable recovery when failures occur.
        Control files – these are used to synchronize all database activities and are
covered in more detail in a later module.
 

 
Other key files as noted above include: 
        Parameter file – there are two types of parameter files. 
o   The init.ora file (also called the PFILE) is a static parameter file.  It
contains parameters that specify how the database instance is to start
up.  For example, some parameters will specify how to allocate memory to
the various parts of the system global area.
o   The spfile.ora is a dynamic parameter file.  It also stores parameters to
specify how to start up a database; however, its parameters can be
modified while the database is running.
        Password file – specifies which *special* users are authenticated to start up/shut
down an Oracle Instance.
        Archived redo log files – these are copies of the redo log files and are necessary
for recovery in an online, transaction-processing environment in the event of a
disk failure.
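
As a sketch of how the two parameter file types relate, a privileged user can
create one from the other (default file locations are assumed):

```sql
-- Create a dynamic SPFILE from the static PFILE (init.ora).
CREATE SPFILE FROM PFILE;

-- The reverse is also possible, e.g. to edit parameters by hand:
CREATE PFILE FROM SPFILE;
```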
 
 
Memory Management and Memory Structures
 
Oracle Database Memory Management
 
Memory management - focus is to maintain optimal sizes for memory structures.
        Memory is managed based on  memory-related initialization parameters.  
        These values are stored in the init.ora file for each database.
 
Three basic options for memory management are as follows:
        Automatic memory management:
o   DBA specifies the target size for instance memory.
o   The database instance automatically tunes to the target memory size.
o   Database redistributes memory as needed between the SGA and the
instance PGA.
 
        Automatic shared memory management:
o   This management mode is partially automated.
o   DBA specifies the target size for the SGA.
o   DBA can optionally set an aggregate target size for the PGA or manage
PGA work areas individually.
 
        Manual memory management:
o   Instead of setting the total memory size, the DBA sets many initialization
parameters to manage components of the SGA and instance PGA
individually.
 
If you create a database with Database Configuration Assistant (DBCA) and choose
the basic installation option, then automatic memory management is the default.
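
Under automatic memory management, the DBA sets a single target with one
parameter; a hedged example (the 1G value is illustrative, and
MEMORY_MAX_TARGET must be large enough to accommodate it):

```sql
-- Let the instance automatically tune SGA and PGA within one overall target.
ALTER SYSTEM SET MEMORY_TARGET = 1G SCOPE = SPFILE;
```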
 
The memory structures include three areas of memory: 
        System Global Area (SGA) – this is allocated when an Oracle Instance starts up.
        Program Global Area (PGA) – this is allocated when a Server Process starts up.
        User Global Area (UGA) – this is allocated when a user connects to create a
session.
 
System Global Area
 
The SGA is a read/write memory area that stores information shared by all database
processes and by all users of the database (sometimes it is called the Shared Global
Area). 
o   This information includes both organizational data and control information used by
the Oracle Server. 
o   The SGA is allocated in memory and virtual memory. 
o   The size of the SGA can be established by a DBA by assigning a value to the
parameter SGA_MAX_SIZE in the parameter file—this is an optional parameter. 
 
The SGA is allocated when an Oracle instance (database) is started up based on
values specified in the initialization parameter file (either PFILE or SPFILE). 
 
The SGA has the following mandatory memory structures:
        Database Buffer Cache
        Redo Log Buffer
        Java Pool
        Streams Pool
        Shared Pool – includes two components:
o   Library Cache
o   Data Dictionary Cache
        Other structures (for example, lock and latch management, statistical data)
 Additional optional memory structures in the SGA include:
        Large Pool
The SHOW SGA SQL command will show you the SGA memory allocations. 
        This is a recent clip of the SGA for the DBORCL database at SIUE. 
        In order to execute SHOW SGA you must be connected with the special
privilege SYSDBA (which is only available to user accounts that are members of
the DBA Linux group).
 
SQL> connect / as sysdba
Connected.
SQL> show sga
 
Total System Global Area 1610612736 bytes
Fixed Size                  2084296 bytes
Variable Size            1006633528 bytes
Database Buffers          587202560 bytes
Redo Buffers               14692352 bytes

 Early versions of Oracle used a Static SGA.  This meant that if modifications to
memory management were required, the database had to be shutdown, modifications
were made to the init.ora parameter file, and then the database had to be restarted.  
 
Oracle 11g uses a Dynamic SGA.   Memory configurations for the system global area
can be made without shutting down the database instance.  The DBA can resize the
Database Buffer Cache and Shared Pool dynamically. 
 
Several initialization parameters are set that affect the amount of random access
memory dedicated to the SGA of an Oracle Instance.  These are:
 
        SGA_MAX_SIZE:  This optional parameter is used to set a limit on the amount
of virtual memory allocated to the SGA – a typical setting might be 1 GB;
however, if the value for SGA_MAX_SIZE in the initialization parameter file or
server parameter file is less than the sum of the memory allocated for all
components, either explicitly in the parameter file or by default, at the time the
instance is initialized, then the database ignores the setting for
SGA_MAX_SIZE.  For optimal performance, the entire SGA should fit in real
memory to eliminate paging to/from disk by the operating system.
        DB_CACHE_SIZE:  This optional parameter is used to tune the amount of memory
allocated to the Database Buffer Cache in standard database blocks. Block sizes
vary among operating systems.  The DBORCL database uses 8 KB blocks.  The
total blocks in the cache defaults to 48 MB on LINUX/UNIX and 52 MB on
Windows operating systems.
        LOG_BUFFER:   This optional parameter specifies the number of bytes
allocated for the Redo Log Buffer. 
        SHARED_POOL_SIZE:  This optional parameter specifies the number of bytes
of memory allocated to shared SQL and PL/SQL.  The default is 16 MB.  If the
operating system is based on a 64 bit configuration, then the default size is 64
MB.
        LARGE_POOL_SIZE:  This is an optional memory object – the size of the Large
Pool defaults to zero.  If the init.ora
parameter PARALLEL_AUTOMATIC_TUNING is set to TRUE, then the default
size is automatically calculated.
        JAVA_POOL_SIZE:   This is another optional memory object.  The default is 24
MB of memory.
 
The combined sizes of the components set by the
parameters DB_CACHE_SIZE, LOG_BUFFER, SHARED_POOL_SIZE, LARGE_POOL_SIZE,
and JAVA_POOL_SIZE cannot exceed the value of the SGA_MAX_SIZE parameter.
 
Memory is allocated to the SGA as contiguous virtual memory in units termed
granules.  Granule size depends on the estimated total size of the SGA, which as was
noted above, depends on the SGA_MAX_SIZE parameter.  Granules are sized as
follows:
        If the SGA is less than 1 GB in total, each granule is 4 MB.
        If the SGA is greater than 1 GB in total, each granule is 16 MB.
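
You can confirm the granule size an instance is actually using by querying
V$SGAINFO:

```sql
-- Display the granule size currently in use by this instance.
SELECT name, bytes
FROM   v$sgainfo
WHERE  name = 'Granule Size';
```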
 
Granules are assigned to the Database Buffer Cache, Shared Pool, Java Pool, and
other memory structures, and these memory components can dynamically grow and
shrink.  Using contiguous memory improves system performance.  The actual number
of granules assigned to one of these memory components can be determined by
querying the database view named V$BUFFER_POOL. 
 
Granules are allocated when the Oracle server starts a database instance in order to
provide memory addressing space to meet the SGA_MAX_SIZE parameter.  The
minimum is 3 granules:  one each for the fixed SGA, Database Buffer Cache, and
Shared Pool.  In practice, you'll find the SGA is allocated much more memory than
this.  The SELECT statement shown below reports the current size of the default
buffer pool.
 
SELECT name, block_size, current_size, prev_size, prev_buffers
FROM v$buffer_pool;
 
NAME                 BLOCK_SIZE CURRENT_SIZE  PREV_SIZE PREV_BUFFERS
-------------------- ---------- ------------ ---------- ------------
DEFAULT                    8192          560        576        71244
 
For additional information on dynamic SGA sizing, enroll in Oracle's Oracle 11g
Database Performance Tuning course.

Program Global Area (PGA)


 
A PGA is:
        a nonshared memory region that contains data and control information
exclusively for use by an Oracle process.
        A PGA is created by Oracle Database when an Oracle process is started.
        One PGA exists for each Server Process and each Background Process.  It
stores data and control information for a single Server Process or a
single Background Process. 

        It is allocated when a process is created and the memory is scavenged by the
operating system when the process terminates.  This is NOT a shared part of
memory – one PGA to each process only.
        The collection of individual PGAs is termed the total instance PGA, or simply the instance PGA.
        Database initialization parameters set the size of the instance PGA, not
individual PGAs.
 
The Program Global Area is also termed the Process Global Area (PGA) and is
memory that is allocated outside of the SGA. 
 

 
 

 
The content of the PGA varies, but as shown in the figure above, generally includes
the following:
 
        Private SQL Area:  Stores information for a parsed SQL statement – stores bind
variable values and runtime memory allocations.  A user session issuing SQL
statements has a Private SQL Area that may be associated with a Shared SQL
Area if the same SQL statement is being executed by more than one system
user.  This often happens in OLTP environments where many users are executing
and using the same application program.
o   Dedicated Server environment – the Private SQL Area is located in the
Program Global Area.
o   Shared Server environment – the Private SQL Area is located in the System
Global Area.
 
        Session Memory:  Memory that holds session variables and other session
information.
 
        SQL Work Areas:  Memory allocated for sort, hash-join, bitmap merge, and bitmap
create types of operations. 
o   Oracle 9i and later versions enable automatic sizing of the SQL Work Areas by
setting the WORKAREA_SIZE_POLICY = AUTO parameter (this is the
default!) and PGA_AGGREGATE_TARGET = n (where n is some amount of
memory established by the DBA).  However, the DBA can let the Oracle
DBMS determine the appropriate amount of memory.
 
 
User Global Area
The User Global Area is session memory.

 
 

A session that loads a PL/SQL package into memory has the package state stored in
the UGA.  The package state is the set of values stored in all the package variables at
a specific time.  The state changes as program code modifies the variables.  By default,
package variables are unique to and persist for the life of the session.

The OLAP page pool is also stored in the UGA. This pool manages OLAP data
pages, which are equivalent to data blocks. The page pool is allocated at the start of
an OLAP session and released at the end of the session.  An OLAP session opens
automatically whenever a user queries a dimensional object such as a cube. 

Note:  Oracle OLAP is a multidimensional analytic engine embedded in Oracle
Database 11g.  Oracle OLAP cubes deliver sophisticated calculations using
simple SQL queries – producing results with speed-of-thought response times.

The UGA must be available to a database session for the life of the session.  For this
reason, the UGA cannot be stored in the PGA when using a shared server connection
because the PGA is specific to a single process.  Therefore, the UGA is stored in the
SGA when using shared server connections, enabling any shared server process to
access it.  When using a dedicated server connection, the UGA is stored in the PGA.

 
Automatic Shared Memory Management
 
Prior to Oracle 10g, a DBA had to manually specify SGA component sizes through
initialization parameters such as the SHARED_POOL_SIZE, DB_CACHE_SIZE,
JAVA_POOL_SIZE, and LARGE_POOL_SIZE parameters.
 
Automatic Shared Memory Management enables a DBA to specify the total SGA
memory available through the SGA_TARGET initialization parameter.  The Oracle
Database automatically distributes this memory among various subcomponents to
ensure most effective memory utilization.
 
The DBORCL database SGA_TARGET is set in the initDBORCL.ora file:
 
sga_target=1610612736
 
With automatic SGA memory management, the different SGA components are flexibly
sized to adapt to the SGA available.
 
Setting a single parameter simplifies the administration task – the DBA only specifies
the amount of SGA memory available to an instance – the DBA can forget about the
sizes of individual components. No out of memory errors are generated unless the
system has actually run out of memory.  No manual tuning effort is needed.
 
The SGA_TARGET initialization parameter reflects the total size of the SGA and
includes memory for the following components:
 Fixed SGA and other internal allocations needed by the Oracle Database
instance
 The log buffer
 The shared pool
 The Java pool
 The buffer cache
 The keep and recycle buffer caches (if specified)
 Nonstandard block size buffer caches (if specified)
 The Streams Pool
 
If SGA_TARGET is set to a value greater than SGA_MAX_SIZE at startup, then the
SGA_MAX_SIZE value is bumped up to accommodate SGA_TARGET.  

When you set a value for SGA_TARGET, Oracle Database 11g automatically sizes the
most commonly configured components, including:
 The shared pool (for SQL and PL/SQL execution)
 The Java pool (for Java execution state)
 The large pool (for large allocations such as RMAN backup buffers)
 The buffer cache
 
There are a few SGA components whose sizes are not automatically adjusted. The
DBA must specify the sizes of these components explicitly, if they are needed by an
application. Such components are:
 Keep/Recycle buffer caches (controlled
by DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE)
 Additional buffer caches for non-standard block sizes (controlled
by DB_nK_CACHE_SIZE, n = {2, 4, 8, 16, 32})
 Streams Pool (controlled by the new parameter STREAMS_POOL_SIZE)
 
The granule size that is currently being used for the SGA for each component can be
viewed in the view V$SGAINFO. The size of each component and the time and type of
the last resize operation performed on each component can be viewed in the
view V$SGA_DYNAMIC_COMPONENTS.
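
For example, the current size and the most recent resize operation for each
component can be listed with a query such as:

```sql
-- Current size and last resize operation for each SGA component.
SELECT component, current_size, last_oper_type, last_oper_mode
FROM   v$sga_dynamic_components;
```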
 

SQL> select * from v$sgainfo;
 
NAME                                  BYTES RES
-------------------------------- ---------- ---
Fixed SGA Size                      2084296 No
Redo Buffers                       14692352 No
Buffer Cache Size                 587202560 Yes
Shared Pool Size                  956301312 Yes
Large Pool Size                    16777216 Yes
Java Pool Size                     33554432 Yes
Streams Pool Size                         0 Yes
Granule Size                       16777216 No
Maximum SGA Size                 1610612736 No
Startup overhead in Shared Pool    67108864 No
Free SGA Memory Available                 0
 
11 rows selected.
 
Shared Pool
 

 
 
The Shared Pool is a memory structure that is shared by all system users. 
        It caches various types of program data. For example, the shared pool stores
parsed SQL, PL/SQL code, system parameters, and data dictionary information.
        The shared pool is involved in almost every operation that occurs in the
database. For example, if a user executes a SQL statement, then Oracle
Database accesses the shared pool.
        It consists of both fixed and variable structures. 
        The variable component grows and shrinks depending on the demands placed
on memory size by system users and application programs.
 
Memory can be allocated to the Shared Pool by the
parameter SHARED_POOL_SIZE in the parameter file.  The default value of this
parameter is 8MB on 32-bit platforms and 64MB on 64-bit platforms. Increasing the
value of this parameter increases the amount of memory reserved for the shared pool.
 
You can alter the size of the shared pool dynamically with the ALTER SYSTEM
SET command.  You must keep in mind that the total memory allocated to the SGA
is set by the SGA_TARGET parameter (and may also be limited by SGA_MAX_SIZE
if it is set), and since the Shared Pool is part of the SGA, you cannot exceed the
maximum size of the SGA.  It is recommended to let Oracle optimize the Shared
Pool size.
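
A hedged example of such a resize (the 64M value is illustrative only; the command
fails if the new total would exceed the SGA limits):

```sql
-- Dynamically resize the Shared Pool while the instance remains open.
ALTER SYSTEM SET SHARED_POOL_SIZE = 64M;
```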
 
The Shared Pool stores the most recently executed SQL statements and used data
definitions.  This is because some system users and application programs will tend to
execute the same SQL statements often.  Saving this information in memory can
improve system performance.
 
The Shared Pool includes several cache areas described below.
 
Library Cache
 
Memory is allocated to the Library Cache whenever an SQL statement is parsed or a
program unit is called.  This enables storage of the most recently used SQL and
PL/SQL statements.
 
If the Library Cache is too small, the Library Cache must purge statement definitions in
order to have space to load new SQL and PL/SQL statements.  Actual management of
this memory structure is through a Least-Recently-Used (LRU) algorithm.  This
means that the SQL and PL/SQL statements that are oldest and least recently used
are purged when more storage space is needed. 
 
The Library Cache is composed of two memory subcomponents:
        Shared SQL:  This stores/shares the execution plan and parse tree for SQL
statements, as well as PL/SQL statements such as functions, packages, and
triggers.  If a system user executes an identical statement, then the statement
does not have to be parsed again in order to execute the statement.
        Private SQL Area:  With a shared server, each session issuing a SQL statement
has a private SQL area in its UGA. 
o   Each user that submits the same statement has a private SQL area
pointing to the same shared SQL area.
o   Many private SQL areas in separate PGAs can be associated with the
same shared SQL area.
o   This figure depicts two different client processes issuing the same SQL
statement – the parsed solution is already in the Shared SQL Area.
 
 

 
Data Dictionary Cache
 
The Data Dictionary Cache is a memory structure that caches data dictionary
information that has been recently used. 
        This cache is necessary because the data dictionary is accessed so often.
        Information accessed includes user account information, datafile names, table
descriptions, user privileges, and other information.
 
The database server manages the size of the Data Dictionary Cache internally and the
size depends on the size of the Shared Pool in which the Data Dictionary Cache
resides.  If the size is too small, then the data dictionary tables that reside on disk must
be queried often for information and this will slow down performance.
 
Server Result Cache
 
The Server Result Cache holds result sets and not data blocks. The server result
cache contains the SQL query result cache and PL/SQL function result cache, which
share the same infrastructure.
 
SQL Query Result Cache
 
This cache stores the results of queries and query fragments. 
        Using the cache results for future queries tends to improve performance. 
        For example, suppose an application runs the same SELECT statement
repeatedly. If the results are cached, then the database returns them
immediately.
        In this way, the database avoids the expensive operation of rereading blocks and
recomputing results.
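
A query can request result caching explicitly with the RESULT_CACHE hint; the
table here is from Oracle's sample HR schema and is illustrative:

```sql
-- Ask the server to store this query's result set in the SQL query result
-- cache; repeated executions can then be answered from the cache.
SELECT /*+ RESULT_CACHE */ department_id, COUNT(*)
FROM   employees
GROUP BY department_id;
```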
 
PL/SQL Function Result Cache
 
The PL/SQL Function Result Cache stores function result sets. 
        Without caching, 1000 calls of a function at 1 second per call would take 1000
seconds.
        With caching, 1000 function calls with the same inputs could take 1 second total.
        Good candidates for result caching are frequently invoked functions that depend
on relatively static data.
        PL/SQL function code can specify that results be cached.
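
As a sketch, the RESULT_CACHE clause marks a function for result caching (the
table and columns here are hypothetical):

```sql
-- Hypothetical lookup function; the result for a given p_code is cached and
-- reused across sessions until the cached result is invalidated.
CREATE OR REPLACE FUNCTION get_country_name (p_code IN VARCHAR2)
  RETURN VARCHAR2
  RESULT_CACHE
IS
  v_name VARCHAR2(100);
BEGIN
  SELECT country_name INTO v_name
    FROM countries
   WHERE country_code = p_code;
  RETURN v_name;
END;
/
```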
 
 

Buffer Caches
 
A number of buffer caches are maintained in memory in order to improve system
response time.
 
Database Buffer Cache
 
The Database Buffer Cache is a fairly large memory object that stores the actual data
blocks that are retrieved from datafiles by system queries and other data manipulation
language commands.
 
The purpose is to optimize physical input/output of data.
 
When Database Smart Flash Cache (flash cache) is enabled, part of the buffer
cache can reside in the flash cache.
        This buffer cache extension is stored on a flash disk device, which is a solid
state storage device that uses flash memory.
        The database can improve performance by caching buffers in flash memory
instead of reading from magnetic disk.
        Database Smart Flash Cache is available only on Solaris and Oracle Enterprise
Linux.
 
A query causes a Server Process to look for data.
        The first look is in the Database Buffer Cache to determine if the requested
information happens to already be located in memory – thus the information
would not need to be retrieved from disk and this would speed up performance. 
        If the information is not in the Database Buffer Cache, the Server Process
retrieves the information from disk and stores it to the cache.
        Keep in mind that information read from disk is read a block at a
time, NOT a row at a time, because a database block is the smallest
addressable storage space on disk. 
 
Database blocks are kept in the Database Buffer Cache according to a Least Recently
Used (LRU) algorithm and are aged out of memory if a buffer cache block is not used
in order to provide space for the insertion of newly needed database blocks.
 
There are three buffer states:
        Unused - a buffer is available for use - it has never been used or is currently
unused.
        Clean - a buffer that was used earlier - the data has been written to disk.
        Dirty - a buffer that has modified data that has not been written to disk.
 
Each buffer has one of two access modes:
        Pinned - a buffer is pinned so it does not age out of memory.
        Free (unpinned).
 
The buffers in the cache are organized in two lists:
        the write list and,
        the least recently used (LRU) list.
 
The write list (also called a write queue) holds dirty buffers – these are buffers that
hold data that has been modified but not yet written back to disk.
 
The LRU list holds unused, free clean buffers, pinned buffers, and free dirty buffers
that have not yet been moved to the write list.  Free clean buffers do not contain any
useful data and are available for use.  Pinned buffers are currently being accessed.
 
When an Oracle process accesses a buffer, the process moves the buffer to the most
recently used (MRU) end of the LRU list – this causes dirty buffers to age toward the
LRU end of the LRU list. 
 
When an Oracle user process needs a data row, it searches for the data in the
database buffer cache because memory can be searched more quickly than hard disk
can be accessed.  If the data row is already in the cache (a cache hit), the process
reads the data from memory; otherwise a cache miss occurs and data must be read
from hard disk into the database buffer cache. 
 
Before reading a data block into the cache, the process must first find a free buffer.
The process searches the LRU list, starting at the LRU end of the list.  The search
continues until a free buffer is found or until the search reaches the threshold limit of
buffers. 
 
Each time a user process finds a dirty buffer as it searches the LRU, that buffer is
moved to the write list and the search for a free buffer continues. 
 
When a user process finds a free buffer, it reads the data block from disk into the buffer
and moves the buffer to the MRU end of the LRU list.
 
If an Oracle user process searches the threshold limit of buffers without finding a free
buffer, the process stops searching the LRU list and signals the DBWn background
process to write some of the dirty buffers to disk.  This frees up some buffers.
 

Database Buffer Cache Block Size
 
The block size for a database is set when a database is created and is determined by
the init.ora parameter file parameter named DB_BLOCK_SIZE. 
        Typical block sizes are 2KB, 4KB, 8KB, 16KB, and 32KB. 
        The size of blocks in the Database Buffer Cache matches the block size for the
database. 
        The DBORCL database uses an 8KB block size. 
        This figure shows that the use of non-standard block sizes results in multiple
database buffer cache memory allocations.
 

 
 
Because tablespaces that store Oracle tables can use different (non-standard) block
sizes, there can be more than one Database Buffer Cache allocated to match block
sizes in the cache with the block sizes in the non-standard tablespaces.
 
The size of the Database Buffer Caches can be controlled by the
parameters DB_CACHE_SIZE and DB_nK_CACHE_SIZE to dynamically change the
memory allocated to the caches without restarting the Oracle instance.
 
You can dynamically change the size of the Database Buffer Cache with the ALTER
SYSTEM command like the one shown here:
 
ALTER SYSTEM SET DB_CACHE_SIZE = 96M;
 
You can have the Oracle Server gather statistics about the Database Buffer Cache to
help you size it to achieve an optimal workload for the memory allocation.  This
information is displayed in the V$DB_CACHE_ADVICE view.  In order for statistics
to be gathered, you can dynamically alter the system by using the ALTER SYSTEM
SET DB_CACHE_ADVICE (OFF, ON, READY) command.  However, gathering
statistics on system performance always incurs some overhead that will slow down
system performance.
 
SQL> ALTER SYSTEM SET db_cache_advice = ON;
 
System altered.
 

SQL> DESC V$DB_cache_advice;


 Name                                      Null?    Type
 ----------------------------------------- -------- ------------
 ID                                                 NUMBER
 NAME                                               VARCHAR2(20)
 BLOCK_SIZE                                         NUMBER
 ADVICE_STATUS                                      VARCHAR2(3)
 SIZE_FOR_ESTIMATE                                  NUMBER
 SIZE_FACTOR                                        NUMBER
 BUFFERS_FOR_ESTIMATE                               NUMBER
 ESTD_PHYSICAL_READ_FACTOR                          NUMBER
 ESTD_PHYSICAL_READS                                NUMBER
 ESTD_PHYSICAL_READ_TIME                            NUMBER
 ESTD_PCT_OF_DB_TIME_FOR_READS                      NUMBER
 ESTD_CLUSTER_READS                                 NUMBER
 ESTD_CLUSTER_READ_TIME                             NUMBER
 
SQL> SELECT name, block_size, advice_status FROM
v$db_cache_advice;
 
NAME                 BLOCK_SIZE ADV
-------------------- ---------- ---
DEFAULT                    8192 ON
<more rows will display>
21 rows selected.
 
SQL> ALTER SYSTEM SET db_cache_advice = OFF;
 
System altered.
 
 

KEEP Buffer Pool
 
This pool retains blocks in memory (data from tables) that are likely to be reused
throughout daily processing.  An example might be a table containing user names and
passwords or a validation table of some type.
 
The DB_KEEP_CACHE_SIZE parameter sizes the KEEP Buffer Pool.
 
RECYCLE Buffer Pool
 
This pool is used to store table data that is unlikely to be reused throughout daily
processing – thus the data blocks are quickly removed from memory when not needed.
 
The DB_RECYCLE_CACHE_SIZE parameter sizes the Recycle Buffer Pool. 
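As a sketch of how these two pools are put to use, the pools are sized with their parameters and individual tables are directed to a pool through the STORAGE clause.  The table names used here (app_user.validation_codes and app_user.audit_history) are hypothetical:

```sql
-- Size the KEEP and RECYCLE buffer pools (dynamic parameters):
ALTER SYSTEM SET DB_KEEP_CACHE_SIZE = 16M;
ALTER SYSTEM SET DB_RECYCLE_CACHE_SIZE = 8M;

-- Cache a small, frequently reused validation table in the KEEP pool
-- (app_user.validation_codes is a hypothetical table):
ALTER TABLE app_user.validation_codes STORAGE (BUFFER_POOL KEEP);

-- Direct a large, rarely re-read table to the RECYCLE pool so its
-- blocks are aged out quickly (app_user.audit_history is hypothetical):
ALTER TABLE app_user.audit_history STORAGE (BUFFER_POOL RECYCLE);
```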
 
Redo Log Buffer
 

 
The Redo Log Buffer memory object stores images of all changes made to database
blocks. 
        Database blocks typically store several table rows of organizational data.  This
means that if a single column value from one row in a block is changed, the block
image is stored.  Changes include INSERT, UPDATE, DELETE, CREATE,
ALTER, or DROP.
        LGWR writes redo sequentially to disk while DBWn performs scattered writes of
data blocks to disk.
o   Scattered writes tend to be much slower than sequential writes.
o   Because LGWR enables users to avoid waiting for DBWn to complete its
slow writes, the database delivers better performance.
 
The Redo Log Buffer is a circular buffer that is reused over and over.  As the buffer
fills up, copies of the images are stored to the Redo Log Files that are covered in
more detail in a later module.
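The Redo Log Buffer itself is sized by the LOG_BUFFER initialization parameter (a static parameter).  A sketch of inspecting its size and related activity:

```sql
-- Current size of the Redo Log Buffer in bytes:
SELECT name, value
FROM   v$parameter
WHERE  name = 'log_buffer';

-- Redo activity statistics; a growing 'redo log space requests' count
-- suggests sessions are waiting for space in the buffer:
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('redo entries', 'redo log space requests');
```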
 
 
Large Pool
 
The Large Pool is an optional memory structure that primarily relieves the memory
burden placed on the Shared Pool.  The Large Pool is used for the following tasks if it
is allocated:
        Allocating space for session memory requirements from the User Global Area
where a Shared Server is in use. 
        Transactions that interact with more than one database, e.g., a distributed
database scenario.
        Backup and restore operations by the Recovery Manager (RMAN) process.
o   RMAN uses this only if the BACKUP_DISK_IO =
n and BACKUP_TAPE_IO_SLAVES = TRUE parameters are set. 
o   If the Large Pool is too small, memory allocation for backup will fail and
memory will be allocated from the Shared Pool.
        Parallel execution message buffers for parallel server
operations.  The PARALLEL_AUTOMATIC_TUNING = TRUE parameter must
be set.
 
The Large Pool size is set with the LARGE_POOL_SIZE parameter – this is not a
dynamic parameter.  It does not use an LRU list to manage memory.
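If the Large Pool has been allocated, you can check how its memory is being consumed through the V$SGASTAT view; a sketch:

```sql
-- Memory allocations within the Large Pool, largest first:
SELECT pool, name, bytes
FROM   v$sgastat
WHERE  pool = 'large pool'
ORDER BY bytes DESC;
```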
 
 

Java Pool
 
The Java Pool is an optional memory object, but is required if the database has
Oracle Java installed and in use for Oracle JVM (Java Virtual Machine). 
        The size is set with the JAVA_POOL_SIZE parameter that defaults to 24MB.
        The Java Pool is used for memory allocation to parse Java commands and to
store data associated with Java commands.
        Storing Java code and data in the Java Pool is analogous to SQL and PL/SQL
code cached in the Shared Pool.
 
 Streams Pool
 
This pool stores data and control structures to support the Oracle Streams feature of
Oracle Enterprise Edition. 
        Oracle Streams manages sharing of data and events in a distributed
environment.
        It is sized with the parameter STREAMS_POOL_SIZE.
        If STREAMS_POOL_SIZE is not set or is zero, the size of the pool grows
dynamically.
 Processes
 
You need to understand three different types of Processes:
        User Process:  Starts when a database user requests to connect to an Oracle
Server.
        Server Process:  Establishes the Connection to an Oracle Instance when a
User Process requests connection – makes the connection for the User Process.
        Background Processes:  These start when an Oracle Instance is started up.
 
Client Process
 
In order to use Oracle, you must connect to the database.  This must occur whether
you're using SQLPlus, an Oracle tool such as Designer or Forms, or an application
program.  The client process is also termed the user process in some Oracle
documentation.

 
 
Connecting generates a User Process (a memory object) that issues programmatic calls
through your user interface (SQLPlus, Integrated Developer Suite, or application
program); this creates a session and causes the generation of a Server Process that is
either dedicated or shared.
 
Server Process
 

 
A Server Process is the go-between for a Client Process and the Oracle Instance.  
        Dedicated Server environment – there is a single Server Process to serve each
Client Process. 
        Shared Server environment – a Server Process can serve several User
Processes, although with some performance reduction. 
        Allocation of server process in a dedicated environment versus a shared
environment is covered in further detail in the Oracle11g Database Performance
Tuning course offered by Oracle Education.
 
 
Background Processes
 
As is shown here, there are mandatory, optional, and slave background
processes that are started whenever an Oracle Instance starts up.  These background
processes serve all system users.  We will cover the mandatory processes in detail.
 
          Mandatory Background Processes
        Process Monitor Process (PMON)
        System Monitor Process (SMON)
        Database Writer Process (DBWn)
        Log Writer Process (LGWR)
        Checkpoint Process (CKPT)
        Manageability Monitor Processes (MMON and MMNL)
        Recoverer Process (RECO)
 
Optional Processes
        Archiver Process (ARCn)
        Coordinator Job Queue (CJQ0)
        Dispatcher (number “nnn”) (Dnnn)
        Others
 
This query will display all background processes running to serve a database:
 
SELECT PNAME
FROM   V$PROCESS
WHERE  PNAME IS NOT NULL
ORDER BY PNAME;
  

PMON
 
The Process Monitor (PMON) monitors other background processes. 
        It is a cleanup type of process that cleans up after failed processes.
        Examples include the dropping of a user connection due to a network failure or
the abnormal termination (ABEND) of a user application program. 
        It cleans up the database buffer cache and releases resources that were used by
a failed user process.
        It does the tasks shown in the figure below.
 

  
SMON
 
The System Monitor (SMON) does system-level cleanup duties. 
        It is responsible for instance recovery by applying entries in the online redo log
files to the datafiles. 
        Other processes can call SMON when it is needed.
        It also performs other activities as outlined in the figure shown below.
 

 
If an Oracle Instance fails, all information in memory not written to disk is lost.  SMON
is responsible for recovering the instance when the database is started up again.  It
does the following:
        Rolls forward to recover data that was recorded in a Redo Log File, but that had
not yet been recorded to a datafile by DBWn.  SMON reads the Redo Log Files
and applies the changes to the data blocks.  This recovers all transactions that
were committed because these were written to the Redo Log Files prior to
system failure.
        Opens the database to allow system users to logon.
        Rolls back uncommitted transactions.
 
SMON also does limited space management.  It combines (coalesces) adjacent areas
of free space in the database's datafiles for tablespaces that are dictionary managed. 
 
It also deallocates temporary segments to create free space in the datafiles.
 
DBWn (also called DBWR in earlier Oracle Versions)
 
The Database Writer writes modified blocks from the database buffer cache to the
datafiles.
 
        One database writer process (DBW0) is sufficient for most systems.
        A DBA can configure up to 20 DBWn processes (DBW0 through DBW9 and
DBWa through DBWj) in order to improve write performance for a system that
modifies data heavily.
        The initialization parameter DB_WRITER_PROCESSES specifies the number of
DBWn processes. 
 
The purpose of DBWn is to improve system performance by caching writes of
database blocks from the Database Buffer Cache back to datafiles. 
        Blocks that have been modified and that need to be written back to disk are
termed "dirty blocks." 
        The DBWn also ensures that there are enough free buffers in the Database
Buffer Cache to service Server Processes that may be reading data from
datafiles into the Database Buffer Cache. 
        Performance improves because by delaying writing changed database blocks
back to disk, a Server Process may find the data that is needed to meet a User
Process request already residing in memory!
 
        DBWn writes to datafiles when one of these events occurs that is illustrated in
the figure below.
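Assuming an SPFILE-based instance, the number of writers can be checked and changed as sketched here.  DB_WRITER_PROCESSES is a static parameter, so the change takes effect at the next restart:

```sql
-- How many database writers are configured?
SHOW PARAMETER db_writer_processes

-- Configure four writers for a write-intensive system;
-- effective after the instance is restarted:
ALTER SYSTEM SET DB_WRITER_PROCESSES = 4 SCOPE = SPFILE;
```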
 
 
LGWR
 
The Log Writer (LGWR) writes contents from the Redo Log Buffer to the Redo Log
File that is in use. 
        These are sequential writes since the Redo Log Files record database
modifications based on the actual time that the modification takes place. 
        LGWR actually writes before the DBWn writes and only confirms that a COMMIT
operation has succeeded when the Redo Log Buffer contents are successfully
written to disk. 
        LGWR can also call the DBWn to write contents of the Database Buffer Cache to
disk. 
        The LGWR writes according to the events illustrated in the figure shown below.
 

 
CKPT
 
The Checkpoint (CKPT) process writes information to update the database control files
and headers of datafiles.
        A checkpoint identifies a point in time with regard to the Redo Log Files where
instance recovery is to begin should it be necessary. 
        It can tell DBWn to write blocks to disk.
        A checkpoint is taken, at a minimum, once every three seconds. 
 

 
Think of a checkpoint record as a starting point for recovery.  DBWn will have
completed writing all buffers from the Database Buffer Cache to disk prior to the
checkpoint, thus those records will not require recovery.  This does the following:
        Ensures modified data blocks in memory are regularly written to disk – CKPT
can call the DBWn process in order to ensure this and does so when writing a
checkpoint record.
        Reduces Instance Recovery time by minimizing the amount of work needed for
recovery since only Redo Log File entries processed since the last checkpoint
require recovery.
        Causes all committed data to be written to datafiles during database shutdown.
 

 
If a Redo Log File fills up and a switch is made to a new Redo Log File (this is covered
in more detail in a later module), the CKPT process also writes checkpoint information
into the headers of the datafiles. 
 
Checkpoint information written to control files includes the system change number (the
SCN is a number stored in the control file and in the headers of the database files that
are used to ensure that all files in the system are synchronized), location of which
Redo Log File is to be used for recovery, and other information.
 
CKPT does not write data blocks or redo blocks to disk – it calls DBWn and LGWR as
necessary.
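A sketch of observing checkpoint activity: you can view the SCN recorded at the most recent checkpoint, or request a checkpoint manually:

```sql
-- SCN recorded by the last checkpoint:
SELECT checkpoint_change#
FROM   v$database;

-- Force a full checkpoint (CKPT signals DBWn to write dirty buffers):
ALTER SYSTEM CHECKPOINT;
```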
 
MMON and MMNL
The Manageability Monitor Process (MMON) performs tasks related to
the Automatic Workload Repository (AWR) – a repository of statistical data in the
SYSAUX tablespace (see figure below) – for example, MMON writes to the AWR when
a metric violates its threshold value, takes snapshots, and captures statistics values
for recently modified SQL objects.
 

 
The Manageability Monitor Lite Process (MMNL) writes statistics from the Active
Session History (ASH) buffer in the SGA to disk. MMNL writes to disk when the ASH
buffer is full.
 
The information stored by these processes is used for performance tuning – we survey
performance tuning in a later module.
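MMON's AWR snapshots can also be taken on demand through the DBMS_WORKLOAD_REPOSITORY package (note that use of AWR requires the appropriate Oracle licensing); a sketch:

```sql
-- Take an AWR snapshot on demand:
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

-- List recent snapshots stored in the SYSAUX tablespace:
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER BY snap_id;
```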
 
RECO
 
The Recoverer Process (RECO) is used to resolve failures of distributed transactions
in a distributed database.
        Consider a database that is distributed on two servers – one in St. Louis and one
in Chicago.
        Further, the database may be distributed on servers of two different operating
systems, e.g. LINUX and Windows.
        The RECO process of a node automatically connects to other databases
involved in an in-doubt distributed transaction.

        When RECO reestablishes a connection between the databases, it automatically
resolves all in-doubt transactions, removing from each database's pending
transaction table any rows that correspond to the resolved transactions.
 
Optional Background Processes
 
Optional Background Process Definition:
        ARCn: Archiver – One or more archiver processes copy the online redo log files to
archival storage when they are full or a log switch occurs.
        CJQ0:  Coordinator Job Queue – This is the coordinator of job queue processes for
an instance. It monitors the JOB$ table (table of jobs in the job queue) and starts job
queue processes (Jnnn) as needed to execute jobs. The Jnnn processes execute
job requests created by the DBMS_JOB package.
        Dnnn:  Dispatcher number "nnn", for example, D000 would be the first dispatcher
process – Dispatchers are optional background processes, present only when the
shared server configuration is used. Shared server is discussed in your readings on
the topic "Configuring Oracle for the Shared Server". 
        FBDA: Flashback Data Archiver Process – This archives historical rows of tracked
tables into Flashback Data Archives. When a transaction containing DML on a
tracked table commits, this process stores the pre-image of the rows into the
Flashback Data Archive. It also keeps metadata on the current rows.  FBDA
automatically manages the flashback data archive for space, organization, and
retention.
 
Of these, you will most often use ARCn (archiver) when you automatically archive redo
log file information (covered in a later module).
 
 
ARCn
 
While the Archiver (ARCn) is an optional background process, we cover it in more
detail because it is almost always used for production systems storing mission critical
information.  
        The ARCn process must be used to recover from loss of a physical disk drive for
systems that are "busy" with lots of transactions being completed.
        It performs the tasks listed below.
 

 
When a Redo Log File fills up, Oracle switches to the next Redo Log File. 
        The DBA creates several of these and the details of creating them are covered in
a later module. 
        If all Redo Log Files fill up, then Oracle switches back to the first one and uses
them in a round-robin fashion by overwriting ones that have already been used.
        Overwritten Redo Log Files have information that, once overwritten, is lost
forever.
 
ARCHIVELOG Mode:
        If ARCn is in what is termed ARCHIVELOG mode, then as the Redo Log Files fill
up, they are individually written to Archived Redo Log Files.
        LGWR does not overwrite a Redo Log File until archiving has completed. 
        Committed data is not lost forever and can be recovered in the event of a disk
failure. 
        Only the contents of the SGA will be lost if an Instance fails.
 
In NOARCHIVELOG Mode:
        The Redo Log Files are overwritten and not archived. 
        Recovery can only be made to the last full backup of the database files. 
        All committed transactions after the last full backup are lost, and you can see
that this could cost the firm a lot of $$$.
 
When running in ARCHIVELOG mode, the DBA is responsible for ensuring that the
Archived Redo Log Files do not consume all available disk space!  Usually after two
complete backups are made, any Archived Redo Log Files for prior backups are
deleted.
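A sketch of checking the current mode and enabling ARCHIVELOG mode; switching modes requires the database to be mounted but not open, and a SYSDBA connection:

```sql
-- Which mode is the database in?
SELECT log_mode FROM v$database;

-- Enable ARCHIVELOG mode:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- SQL*Plus command summarizing the archiving configuration:
ARCHIVE LOG LIST
```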
 
Slave Processes
 
Slave processes are background processes that perform work on behalf of other
processes.
 
Innn: I/O slave processes -- simulate asynchronous I/O for systems and devices that
do not support it. In asynchronous I/O, there is no timing requirement for
transmission, enabling other processes to start before the transmission has finished.
        For example, assume that an application writes 1000 blocks to a disk on an
operating system that does not support asynchronous I/O.
        Each write occurs sequentially and waits for a confirmation that the write was
successful.
        With asynchronous I/O, the application can write the blocks in bulk and perform
other work while waiting for a response from the operating system that all blocks
were written.
 
Parallel Query Slaves -- In parallel execution or parallel processing, multiple
processes work together simultaneously to run a single SQL statement.
        By dividing the work among multiple processes, Oracle Database can run the
statement more quickly.
        For example, four processes handle four different quarters in a year instead of
one process handling all four quarters by itself.
        Parallel execution reduces response time for data-intensive operations on large
databases such as data warehouses. Symmetric multiprocessing (SMP) and
clustered systems gain the largest performance benefits from parallel execution
because statement processing can be split up among multiple CPUs. Parallel
execution can also benefit certain types of OLTP and hybrid systems.
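As a sketch (the sales table here is hypothetical), a statement can request parallel execution with a hint, and Oracle divides the scan among parallel query slaves:

```sql
-- Ask for four parallel execution servers to scan the table
-- (sales is a hypothetical table):
SELECT /*+ PARALLEL(s, 4) */ COUNT(*)
FROM   sales s;
```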

 
Logical Structure
 
It is helpful to understand how an Oracle database is organized in terms of a logical
structure that is used to organize physical objects. 
 

 
Tablespace:  An Oracle database must always consist of at least
two tablespaces (SYSTEM and SYSAUX), although a typical Oracle database will have
multiple tablespaces.  
        A tablespace is a logical storage facility (a logical container) for storing objects
such as tables, indexes, sequences, clusters, and other database objects. 
        Each tablespace has at least one physical datafile that actually stores the
tablespace at the operating system level.  A large tablespace may have more
than one datafile allocated for storing objects assigned to that tablespace. 
        A tablespace belongs to only one database.
        Tablespaces can be brought online and taken offline for purposes of backup and
management, except for the SYSTEM tablespace that must always be online.
        Tablespaces can be in either read-only or read-write status.
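A sketch of these management operations, assuming a tablespace named USERS exists:

```sql
-- Take a tablespace offline for maintenance, then bring it back:
ALTER TABLESPACE users OFFLINE;
ALTER TABLESPACE users ONLINE;

-- Switch between read-only and read-write status:
ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE users READ WRITE;
```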
 
Datafile:  Tablespaces are stored in datafiles which are physical disk objects. 
        A datafile can only store objects for a single tablespace, but a tablespace may
have more than one datafile – this happens when a disk drive device fills up and
a tablespace needs to be expanded, then it is expanded to a new disk drive. 
        The DBA can change the size of a datafile to make it smaller or larger.  The file
can also grow in size dynamically as the tablespace grows.
 
Segment:  When logical storage objects are created within a tablespace, for example,
an employee table, a segment is allocated to the object. 
        Obviously a tablespace typically has many segments.
        A segment cannot span tablespaces but can span datafiles that belong to a
single tablespace.
 
Extent:  Each object has one segment which is a physical collection of extents. 
        Extents are simply collections of contiguous disk storage blocks.  A logical
storage object such as a table or index always consists of at least one extent –
ideally the initial extent allocated to an object will be large enough to store all
data that is initially loaded.
        As a table or index grows, additional extents are added to the segment. 
        A DBA can add extents to segments in order to tune performance of the system.
        An extent cannot span a datafile.
 
Block:  The Oracle Server manages data at the smallest unit in what is termed
a block or data block.  Data are actually stored in blocks.

 
A physical block is the smallest addressable location on a disk drive for read/write
operations. 
 
An Oracle data block consists of one or more physical blocks (operating system
blocks) so the data block, if larger than an operating system block, should be an even
multiple of the operating system block size, e.g., if the Linux operating system block
size is 2K or 4K, then the Oracle data block should be 2K, 4K, 8K, 16K, etc in
size.  This optimizes I/O.
 
The data block size is set at the time the database is created and cannot be
changed.  It is set with the DB_BLOCK_SIZE parameter.  The maximum data block
size depends on the operating system.
 
Thus, the Oracle database architecture includes both logical and physical structures as
follows:
        Physical:  Control files; Redo Log Files; Datafiles; Operating System Blocks.
        Logical:  Tablespaces; Segments; Extents; Data Blocks.
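The data dictionary exposes this segment/extent hierarchy.  A sketch, assuming a schema named DBOCK owns an EMPLOYEE table (both hypothetical here):

```sql
-- Segments (one per table or index) and the tablespaces holding them:
SELECT segment_name, segment_type, tablespace_name, extents, blocks
FROM   dba_segments
WHERE  owner = 'DBOCK';

-- The extents allocated to one segment, and the datafile each is in:
SELECT extent_id, file_id, block_id, blocks
FROM   dba_extents
WHERE  owner = 'DBOCK'
AND    segment_name = 'EMPLOYEE';
```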
 
 
SQL Statement Processing
 
SQL Statements are processed differently depending on whether the statement is a
query, data manipulation language (DML) to update, insert, or delete a row, or data
definition language (DDL) to write information to the data dictionary. 
 

 
Processing a query:
        Parse:
o   Search for identical statement in the Shared SQL Area.

o   Check syntax, object names, and privileges.
o   Lock objects used during parse.
o   Create and store execution plan.
        Bind: Obtains values for variables.
        Execute: Process statement.
        Fetch: Return rows to user process.
 
Processing a DML statement:
        Parse: Same as the parse phase used for processing a query.
        Bind: Same as the bind phase used for processing a query.
        Execute:
o   If the data and undo blocks are not already in the Database Buffer Cache,
the server process reads them from the datafiles into the Database Buffer
Cache.
o   The server process places locks on the rows that are to be modified. The
undo block is used to store the before image of the data, so that the DML
statements can be rolled back if necessary.
o   The data blocks record the new values of the data.
o   The server process records the before image to the undo block and updates
the data block.  Both of these changes are made in the Database Buffer
Cache.  Any changed blocks in the Database Buffer Cache are marked as
dirty buffers.  That is, buffers that are not the same as the corresponding
blocks on the disk.
o   The processing of a DELETE or INSERT command uses similar steps.  The
before image for a DELETE contains the column values in the deleted row,
and the before image of an INSERT contains the row location information.
 
Processing a DDL statement:
        The execution of DDL (Data Definition Language) statements differs from the
execution of DML (Data Manipulation Language) statements and queries,
because the success of a DDL statement requires write access to the data
dictionary.
        For these statements, parsing actually includes parsing, data dictionary lookup,
and execution.  Transaction management, session management, and system
management SQL statements are processed using the parse and execute
stages.  To re-execute them, simply perform another execute.
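The parse-once, execute-many behavior described above can be observed in the Shared SQL Area; a sketch:

```sql
-- Statements cached in the Shared SQL Area, with parse vs. execute
-- counts; executions well above parse_calls indicates good reuse:
SELECT sql_text, parse_calls, executions
FROM   v$sql
WHERE  executions > 0
ORDER BY executions DESC;
```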
 
 
 
 

2. Oracle Server
 

Objectives
 
These notes familiarize you with database administration software used by a DBA
including:
        Oracle Universal Installer (OUI)
        Oracle SQL*Plus
        Oracle Database Configuration Assistant (DBCA)
        Oracle Enterprise Manager (OEM)
        Database Upgrade Assistant (DBUA)
 

Database Administration Software


 
This table details the Oracle Database Administration Software. 
 
DBA Software Software Description
Oracle Universal This software is the standard software used to install, modify
Installer (upgrade), and remove Oracle software components for all
Oracle products.
Oracle Database This is a GUI tool that can be used to create, delete, or modify
Configuration databases; however, it does not provide a lot of control on the
Assistant database creation process.
SQL*Plus Used by the DBA and system users to access data in an Oracle
database.
Oracle Enterprise A GUI tool for administering one or more databases.
Manager
Database Upgrade Can be started in command line mode (command is dbua) for
Assistant (DBUA) LINUX, or by selecting the DBUA from the Oracle Configuration
and Migrations Tools menu option – this upgrades Oracle
databases to version 11g.
 
Special Database Administrative Users
 
Database administrators require extra privileges in order to administer an Oracle
database.  In a LINUX and Windows environment, some of these privileges are
granted by assigning user accounts to special groups.  On our LINUX server, you'll find
that your account is assigned to the DBA group.
 
There are two special database user accounts named SYS and SYSTEM that are
always created automatically whenever an Oracle database is created.  These
accounts are granted the role DBA which is a special role that is predefined with every
database and has all of the system privileges needed to perform DBA activities. 
 
SYS: 
        In the past, the user SYS was identified initially with the
password change_on_install; however, now the Oracle Universal Installer (OUI)
and Database Configuration Assistant (DBCA) both prompt for a password
during software installation.
        SYS is the owner of the data dictionary.
        SYSDBA and SYSOPER Privileges. 
o   When you connect to a database as SYS, you must specify that
the connection is made as either SYSDBA or SYSOPER. 
o   These are two special privilege classifications used to identify DBAs;
privileged connections are enabled through use of a password file that is
discussed in detail in a later module.
        Here is a sample connection script for a connection to
the sobora2.siue.edu server at SIUE by the user dbock, and then to
the DBORCL database as the user SYS with the special privileged connection
as SYSDBA using SQLPlus.  Note that when queried for the ORACLE_SID
(Oracle system identifier that identifies the name of the database),
that dbock responded with DBORCL.
 
<invoke a PuTTY window from MS Windows by starting up PuTTY>
login as: dbock
dbock@sobora2.isg.siue.edu's password: <the password does not
display>
Last login: Mon May 20 12:25:16 2013 from 24-207-183-
37.dhcp.stls.mo.charter.com
ORACLE_SID = [dbock] ? DBORCL          <Note: This sets the
ORACLE_SID for the database>
ORACLE_SID  = DBORCL                   <The ORACLE_SID and
ORACLE_HOME echo>
ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_1
 
 
/home/dbock
dbock/@sobora2.isg.siue.edu=>
 
<here the user connects to sqlplus in nolog mode>
dbock/@sobora2.isg.siue.edu=>sqlplus /nolog
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon May 20 23:25:33
2013
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
login.sql loaded.
SQL>
 
<here the user connects as the user SYS with privileges as
SYSDBA>
 
SQL> connect sys as sysdba

Enter password: <no password is needed by a user validated as a
member of the Linux DBA group>
Connected.
login.sql loaded.
SQL>
<here the user is executing an SQL SELECT statement to retrieve
database data>
SQL> select table_name from dba_tables
  2  where owner='DBOCK';
More...
 
TABLE_NAME
------------------------------
ROOM
BEDCLASSIFICATION
BED
PATIENT
PATIENTNOTE
DEPARTMENT
EMPLOYEE
DEPENDENT
SPECIALTY
EMPLOYEESPECIALTY
PROJECT
More...
 
SYSTEM:

        The user SYSTEM was identified initially by the password manager in the past,
but now the OUI and DBCA both prompt for passwords during software
installation.
        Tables and views created/owned by the user SYSTEM contain administrative
information used by Oracle tools and administrative scripts used to track
database usage.
        Here the database user connects as SYSTEM with the SYSDBA privilege.
 
SQL> connect system as sysdba
Enter password:
Connected.
login.sql loaded.
SQL>
 
Additional DBA accounts may be created for a database to perform routine day-to-day
administrative duties. 
 
The passwords for SYS and SYSTEM should be immediately changed after creating
an Oracle database.
 
 

SQLPlus
 
As you saw above, you can connect to SQLPlus in order to do the following:
        Work with a database.
        Startup and shutdown a database.
        Create and run queries, modify row data, add rows, etc.
 

SQLPlus includes standard SQL plus additional add-on commands, such as
the DESCribe command that Oracle provides to make it easier to work with
databases. 
 
When you use SQLPlus for startup and shutdown of your own database, you will
connect using /nolog mode, then connect as SYSDBA.  The following sequence fails
for databases (such as DBORCL) that are protected by a password file that authorizes
special accounts to connect with SYSDBA privileges.
 
dbockstd/@sobora2=>sqlplus /nolog
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon May 20 23:32:58
2013
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
login.sql loaded.
SQL> connect / as sysdba
ERROR:
ORA-01031: insufficient privileges
 
The sequence of commands shown above WILL work in an environment where
password files are not used to authorize special account connections – where the DBA
has decided the environment is secure enough to rely on belonging to the operating
system DBA group to validate database administrator access to the database. 

Oracle Universal Installer


 
The Installation Manual for the Oracle Universal Installer (OUI) is available online at:
http://docs.oracle.com/cd/E11857_01/em.111/e12255/oui1_introduction.htm
 
For users with the Oracle Enterprise Server software on CD-ROM, the OUI is part of
the CD-ROM bundle.
 
The OUI is Java-based and enables installation for all Java-enabled operating system
platforms – this makes the installation process common across platforms.
It requires about 200Mb for OUI files on Windows, and 116Mb for Unix and Linux
installations.
 Oracle Universal Installer 11g Release 2 (11.2) offers the following features:
 
         An XML-based centralized inventory.  The XML format enables third-party
Java applications to query the inventory for information about installed software.
 
         Cloning of existing Oracle home. 
o    Enables copying an existing Oracle home to another location and "fix it up"
by updating the installation configuration to be specific to the new
environment.
o    Cloning makes it easy to propagate a standard setup without having to
install and configure after installation.
 
         Better support for cluster environments
o    Oracle Universal Installer now replicates its inventory to all nodes that
participate in a cluster-based installation.
o    You can invoke Oracle Universal Installer from any node on the cluster that
is part of the installation.
o    You can then upgrade, remove, or patch existing software from any node.
 
         True silent capability
o    When running Oracle Universal Installer in silent mode on a character mode
console, you no longer need to specify an X-server or set the DISPLAY
environment variable on UNIX.
o    No GUI classes are instantiated, making the silent mode truly silent.
 
         Ability to record your Oracle Universal Installer session to a response file
o    This feature makes it easy to duplicate the results of a successful
installation on multiple systems.
o    All the options you selected during the installation are saved in the resulting
response file.
 
         More accurate disk space calculations

o    Oracle Universal Installer now uses a more accurate method of calculating
the disk space your Oracle products require.
o    This feature reduces the risk of running out of disk space during an
installation.
 
         Automatically launched software after installation
o    Some Oracle products now take advantage of a new feature that enables
the software to launch automatically immediately after the installation.
 
         Cleaner deinstallation and upgrades
o    Deinstallation completely removes all software, leaving no "bits" behind.
o    This also completely removes files associated with configuration assistants
and patchsets.
o    Oracle homes can also be removed from the inventory and registry.
o    For deinstalling 11.2 Oracle Clusterware, Database, and client homes, OUI
prompts you to run the deinstall/deconfig utility from the home.
 
         Integrated prerequisite checking
o    Provides a prerequisite checking tool to diagnose the readiness of an
environment for installation.
o    The prerequisite checks are run as part of the installation process, but can
also be run as a separate application.
 
         Support for Desktop Class and Server Class.  The following installation types
are available for the database:
o    Desktop Class
  Choose this option if you are installing on a laptop or desktop class
system.
  This option includes a starter database and provides minimal
configuration.
  This option is designed for users that want to quickly bring up and run
the database.
 
o    Server Class
  Choose this option if you are installing on a server class system, such
as what you would use when deploying Oracle in a production data
center.
  This option provides more advanced configuration options.
  Advanced configuration options available using this installation
type include Oracle RAC, Automatic Storage Management,
backup and recovery configuration, integration with Enterprise
Manager Grid Control, and more fine-grained memory tuning,
as well as other options.
  For the Server Class option, the Typical Installation method is
selected by default.
  It enables you to quickly install the Oracle Database using
minimal input.
  This method installs the software and optionally creates a
general-purpose database using the information that you
specify in this dialog.
 Utilities
Oracle offers two utilities for software deployment:
         Oracle Universal Installer to install Oracle products
 
         OPatch to apply interim patches.
o    OPatch is an Oracle-supplied utility that assists you with the process of
applying interim patches to Oracle's software.
o    OPatch 11.2 is a Java-based utility that can run on either OUI-based Oracle
homes or standalone homes.
o    It works on all operating systems for which Oracle releases software.
o    For more information on OPatch, see the Oracle OPatch User's Guide.
 
Oracle Home
An Oracle home is the system context in which the Oracle products run.
 
The Oracle Universal Installer supports the installation of several active Oracle
homes on the same host.
         An Oracle home is a directory into which all Oracle software is installed.
         This is pointed to by an environment variable named ORACLE_HOME.
 
This context consists of the following:
 
         Directory location where the products are installed
         Corresponding system path setup
         Program groups associated with the products installed in that home (where
applicable)
         Services running from that home
Oracle Base
The Oracle base location is the location where Oracle Database binaries are stored.
 
         During installation, you are prompted for the Oracle base path.
         Typically, an Oracle base path for the database is created during Oracle Grid
Infrastructure installation.
         To prepare for installation, Oracle recommends that you only set the
ORACLE_BASE environment variable to define paths for Oracle binaries and
configuration files.
         Oracle Universal Installer (OUI) creates other necessary paths and environment
variables in accordance with the Optimal Flexible Architecture (OFA) rules for
well-structured Oracle software environments.
 
For example, with Oracle Database 11g, Oracle recommends that you do not set an
Oracle home environment variable; instead, allow OUI to create it.
If the Oracle base path is /u01/app/oracle, then by default, OUI
creates /u01/app/oracle/product/11.2.0/dbhome_1
 as the Oracle home path
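Following the OFA convention described above, the Oracle home path can be derived from the Oracle base in a shell session.  This is only a sketch using the example paths from this section, not universal defaults:

```shell
# Derive the default OFA-style Oracle home from the Oracle base.
# Paths are the example values used above, not a universal default.
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_BASE ORACLE_HOME
echo "$ORACLE_HOME"
```

This prints /u01/app/oracle/product/11.2.0/dbhome_1, matching the Oracle home path OUI creates by default for this Oracle base.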
 The ORAPARAM.INI File
The oraparam.ini file is used to provide initialization parameters for the OUI.  These
parameters specify the behavior of specific OUI parameters, and each product
installation has a unique oraparam.ini file.
 Generally you will not need to edit the oraparam.ini file, but understanding its contents
can help you to troubleshoot problems that may occur.  For example:
        OUI provides a default value for most installations on the File Locations page
that points to the location of the product's installation kit or stage. This default
value is stored in the oraparam.ini file.
        The oraparam.ini file also identifies the location of the Java Runtime Environment
(JRE) required for the installation.
        In the staging area, it is located in the same directory as the executable file.
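A hypothetical oraparam.ini fragment is sketched below.  The exact key names and values vary by product and release, so treat these entries as illustrative only:

```
[Oracle]
# Hypothetical entries for illustration - actual keys vary by release.
# Default stage location shown on the File Locations page:
SOURCE=../stage/products.xml
# Location of the JRE required for the installation (path is illustrative):
JRE_LOCATION=../stage/Components/jre
```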
 
Installation Modes – OUI supports installation in 3 modes
        Interactive:
  Use the graphical user interface to walk through the installation by
responding to dialog prompts.
  A good mode for installing a small number of products on a small number of
computers
 
Note: At SIUE, we use the vncserver and vncviewer products for UNIX/LINUX to
provide a graphical user interface environment.  We will not be using these in class;
however, they are readily available for use if needed as free downloads from the web.
         Suppressed:
  Provide installation information by using a combination of a response file or
command line entries with certain interactive dialogs.
  You can choose which dialogs to suppress by supplying the information at
the command line.
  This method is most useful when an installation has a common set of
parameters that can be captured in a response file, in addition to custom
information that must be input by hand.
         Silent:
  Use OUI's silent installation mode to bypass the graphical user interface and
supply the necessary information in a response file.
  This method is most useful when installing the same product multiple times
on multiple machines.
 Startup
 Initially the OUI performs environment checks to see if the environment meets the
requirements of the software to be installed.  Results of the prerequisite checks are
logged in the installActions<timestamp>.log file
 On a UNIX/LINUX server, the installation program is runInstaller in
the INSTALL\install\linux directory of the CD-ROM provided by Oracle when you
negotiate a lease for the software.  The command to install Oracle is:
 $ ./runInstaller
 On a Windows Server, the installation program is setup.exe on the CD-ROM.  The
command to install Oracle is:
 D:\> setup.exe
 If a response file approach is desired, file templates are available for LINUX in the
stage/response directory and for Windows in the Response directory.  The commands
to install using a response file for LINUX and Windows are:
 
$ ./runInstaller –responsefile filename [-silent] [-nowelcome]
D:\> setup.exe –responsefile filename [-silent]
 where filename = the name of the response file; silent runs the installer in silent mode
without feedback; and nowelcome means the Welcome window does not display.
 A sample response file for LINUX is shown here—you do not need to memorize
these commands—they are provided so that you will have some idea of what a
response file looks like.
 [General]
RESPONSEFILE_VERSION=1.7.0
[Session]
LINUX_GROUP_NAME="dba"
FROM_LOCATION="/u01/app/oracle/product/11g/inventory/Scripts/install1.jar"
ORACLE_HOME="/u01/app/oracle/product/11g"
ORACLE_HOME_NAME="Ora10g"
TOPLEVEL_COMPONENT={"oracle.server", "11.2.0.2.0"}
SHOW_COMPONENT_LOCATIONS_PAGE=false
SHOW_SUMMARY_PAGE=false
SHOW_INSTALL_PROGRESS_PAGE=false
SHOW_REQUIRED_CONFIG_TOOL_PAGE=false
SHOW_OPTIONAL_CONFIG_TOOL_PAGE=false
SHOW_END_SESSION_PAGE=false
NEXT_SESSION=true
SHOW_SPLASH_SCREEN=true
SHOW_WELCOME_PAGE=false
SHOW_ROOTSH_CONFIRMATION=true #Causes the root.sh script to run.
SHOW_EXIT_CONFIRMATION=true
INSTALL_TYPE="Typical"
s_GlobalDBName="DBORCL.isg.siue.edu"
s_mountPoint="/u01/app/oracle/11g/dbs"
s_dbSid="DBORCL"
b_createDB=true
Sample Screen Shots - Oracle Database 11gR2 Installer
 This section provides screen shots of the installation of Oracle Database version
11gR2 software.  The installation is accomplished through a series of GUI screens
that take you through 11 steps.  We skip some of the steps.
 Step 1 is used to specify your email so you can receive security updates from My
Oracle Support.  My Oracle Support is NOT a free site - you must have a paid up
license to access the site.
 
 
Step 2 enables downloading the Zip files that contain the binaries for Oracle 11gR2.
 
 
In Step 3 you have three options as shown by the radio buttons.  Here the database
software only was installed.  The other two options enable you to create and configure
a database or to upgrade an existing database.
 
 
Step 4 enables you to select the type of installation.  Here the single instance database
installation option was selected.
 
 
We skipped Step 5--it is used to select the product language. 
Here in Step 6 the edition of the RDBMS software is selected.  Notice the space
requirements are given as estimates next to the options.
 
 
Step 7 enables you to specify the values for ORACLE_BASE and ORACLE_HOME in
terms of the directories that serve as those values. 
 
 
In Step 8 you can name the DBA administrative group at the operating system
level.  Normally you might name the group "dba" - here the administrator installing the
software named the group "dba1".
 
 
In Step 9 the installer checks the operating system to determine if the minimum
requirements for installation are met.  Mostly it is checking for available memory.
 
 
In Step 10, the screen shows a recap of the installing options selected. 
 
 
Step 11 shows the actual progress of the installation of the product.  For the single
instance installation, this took about 40 minutes to install.
 
 
As Step 11 progressed, various script windows would pop up directing the
administrator to perform various tasks.  Here the task displayed is to run a script
named root.sh.  At SIUE a different group (the operating system folks) has to run this
script as the database administrator group does not have "root" level operating system
permissions (nor do they want such permissions).
 
 
The Finish screen is shown last—it simply confirms that the installation is complete.
 
Oracle Database Configuration Assistant
 
This assistant is covered in more detail in a later lesson.  It allows you to:
        Create a database
        Configure database options
        Delete a database
        Manage templates used for these tasks.
 
Oracle Enterprise Manager
 
The Oracle 11g Oracle Enterprise Manager (OEM) is a GUI, Internet-based product
that executes inside a web browser such as Internet Explorer.  The OEM enables you to:
        Manage a number of Oracle tools and services
        Manage the network of management servers and intelligent agents used to track
and manage Oracle databases
        Manage multiple databases from a single client platform.
 
The OEM is a Web-based, Grid Control Console with multiple tabs that looks like the
following:
        This is the Home screen. 
        Note the different tabs to access different components of the console.
        You can choose the targets (databases, application servers, etc.) to monitor. 
        The management server is monitoring 40 targets.  Of these, 17 are up, 15 are
down, and 7 are unknown.  The unknown targets are probably student databases that no
longer exist.
 
Information that OEM needs in order for a DBA to manage databases is stored in the
OEM repository.
        The OEM repository is a database itself of information about databases. 
        You can install OEM as a separate database on a server, or as a tablespace
within an existing database. 
        We have installed OEM at SIUE in a separate database.
 
The OEM architecture is illustrated here.
 
 
 
OEM uses the n-tier architecture shown in the figure above. 
 
First Tier:  The first tier includes client computers that provide a graphical user
interface for DBAs. 
 
Second Tier:  The second tier includes the Management Service (a J2EE Web
App) and the accompanying database repository.  The Management Service is a
program that executes on the server where the OEM repository/database is
located.   The Management Service is started on our LINUX machine as shown in
these commands.  In order to stop the service, you must be a privileged user of the
Enterprise Manager repository (which you as students are not).
 
$ oemctl
Usage: oemctl start  oms
       oemctl stop   oms <EM Username>/<EM Password>
       oemctl status oms <EM Username>/<EM Password>[@<OMS-
hostname>]
 
Third Tier:  A group of Oracle Management Agents manage various targets such as
databases, application servers, listeners, and hosts that can be on different network
nodes in the organizational network, and these agents execute tasks from the
Management Server.   The ID for the intelligent agent is dbsnmp.
 
The above is the Targets screen.  This shows two target servers
– sobora1.isg.siue.edu and sobora2.isg.siue.edu. 
 
  
 
This screen within Targets shows the Databases (note the menu options on the blue
bar).
        The DBORCL.siue.edu database has a status of up with 0 critical alerts. 
        The version of Oracle is 11.2.0.3.0. 
        The ORACLE.siue.edu database also has a status of up with 0 critical alerts and
9 warnings, and it also runs on Oracle RDBMS 11.2.0.3.0. 
        DBORCL is located on SOBORA2 while the ORACLE database is on
SOBORA1.
 
 
 
This is the Deployments screen. It shows that all critical patch advisories for the Oracle
RDBMS installations are up to date.  There are two being monitored:  an Oracle 10g
version and an Oracle 11g version. 
 
 
 
The Enterprise Manager is a very complex tool.  Oracle corporation offers a 3 to 4 day
course of study to teach the detailed usage of the Enterprise Manager product.
 
 
 
2. Database Startup
 

Objectives
 
These notes cover the use of environment variables and Oracle naming conventions
for files.  You will:
        learn to create and understand initialization parameter files and several of the
specified parameters.
        learn to startup and shutdown an Oracle Instance.
        become familiar with and use diagnostic files.
 
 
Environment Variables
 

Operating System Environment Variables
 
Oracle makes use of environment variables on the server and client computers in
both LINUX and Windows operating systems in order to:
        establish standard locations for files, and
        make it easier for you to use Oracle. 
 
On LINUX, environment variables values can be displayed by typing the
command env at the operating system prompt.  It is common to have quite a few
environment variables.  This example highlights those variables associated with the
logged on user and with the Oracle database and software:  
 
dbock/@sobora2.isg.siue.edu=>env
_=/bin/env
SSH_CONNECTION=::ffff:24.207.183.37 25568 ::ffff:146.163.252.102
22
PATH=/bin:/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin:.:/u01/app/oracle/product/11.2.0/dbhome_1/bin
SHELL=/bin/ksh
HOSTNAME=sobora2.isg.siue.edu
USER=dbock
ORACLE_BASE=/u01/app/oracle/
SSH_CLIENT=::ffff:24.207.183.37 25568 22
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
TERM=xterm
ORACLE_SID=DBORCL
LANG=en_US.UTF-8
SSH_TTY=/dev/pts/2
LOGNAME=dbock
MAIL=/var/spool/mail/oracle1
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib
HOME=/u01/home/dbock
ORACLE_TERM=vt100
 
To create or set an environment variable value, the syntax is:
 
VARIABLE_NAME = value
export VARIABLE_NAME
 
An example of setting the ORACLE_SID database system identifier is shown here:
 
dbock/@sobora2.isg.siue.edu=> ORACLE_SID=USER350
dbock/@sobora2.isg.siue.edu=> export ORACLE_SID
 
This can be combined into a single command as shown here:
 
dbock/@sobora2.isg.siue.edu=> export ORACLE_SID=USER350
 
The following environment variables in a LINUX environment are used for the server.
 
HOME
Command:  HOME=/u01/student/dbock
Use:  Stores the location of the home directory for your files for your assigned LINUX
account.  You can always easily change directories to your HOME by typing the
command:   cd $HOME
 
Note:  The $ is used as the first character of the environment variable so that LINUX
uses the value of the variable as opposed to the actual variable name.
 
LD_LIBRARY_PATH
Command:  LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1
/lib
Use:  Stores the path to the library products used most commonly by you.  Here the
first entry in the path points to the library products for Oracle that are located in the
directory /u01/app/oracle/product/11.2.0/dbhome_1/lib.   For multiple
entries, you can separate Path entries by a colon.
 
ORACLE_BASE
Command:  ORACLE_BASE=/u01/app/oracle
Use:  Stores the base directory for the installation of Oracle products.  Useful if more
than one version of Oracle is loaded on a server.  Other than that, this variable does
not have much use.  We are not using it at SIUE.
 
ORACLE_HOME
Command:  ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
Use:  Enables easy changing to the home directory for Oracle products.  All directories
that you will use are hierarchically below this one.  The most commonly used
subdirectories are named dbs and rdbms. 
 
ORACLE_SID
Command:  ORACLE_SID=USER350  (or the name of your database)
Use:  Tells the operating system the system identifier for the database.  One of the
databases on the SOBORA2 server is named DBORCL – when you create your own
database, you will use a database name assigned by your instructor as
the ORACLE_SID system identifier for your database.
 
ORACLE_TERM
Command:  ORACLE_TERM=vt100
Use:  In LINUX, this specifies the terminal emulation type.  The vt100 is a very old type
of emulation for keyboard character input.
 
PATH
Command:  PATH=/u01/app/oracle/product/11.2.0/dbhome_1/bin:/bin:/
usr/bin:/usr/local/bin:.
Use:  This specifies path pointers to the most commonly used binary files.  A critical
entry for using Oracle is the /u01/app/oracle/product/11.2.0/dbhome_1/bin
entry that points to the
Oracle binaries.  If you upgrade to a new version of Oracle, you will need to upgrade
this path entry to point to the new binaries.
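Taken together, the variables above are commonly set in a login profile.  The following sketch uses the example values from this section (site-specific paths and SID, not defaults):

```shell
# Sketch of a ksh/bash login profile for this Oracle environment.
# All paths and the SID are the example values used in this section.
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1
export ORACLE_SID=USER350
export ORACLE_TERM=vt100
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/local/bin:.
```

After such a profile is sourced, Oracle executables are found through PATH, and tools that connect locally use ORACLE_SID to pick the target database.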
 
Windows Variables
 
In a Windows operating system environment, environment variables are established by
storing entries into the system registry.  Your concern here is primarily with the
installation of Oracle tools software on a client computer. 
 
Windows and Oracle allow and recommend the creation of more than
one ORACLE_HOME directory (folder) on a Windows client computer.  This is
explained in more detail in the installation manuals for the various Oracle software
products. 
 
Basically, you should use one folder as an Oracle Home for Oracle Enterprise Manager
software and a different folder as an Oracle Home for Oracle's Internet Developer Suite
– this suite of software includes Oracle's Forms, Reports, Designer, and other tools for
developing internet-based applications.
 
 
Initialization Parameter Files
 
When an Oracle Instance is started, the characteristics of the Instance are established
by parameters specified within the initialization parameter file that is read during
startup.  In the figure shown below, the initialization parameter file is
named spfiledb01.ora; however, you can select any name for the parameter file—the
database here has an ORACLE_SID value of db01.   
 
 
 
There are two types of initialization parameter files:
        Static parameter file:  This has always existed and is known as the PFILE and
is commonly referred to as the init.ora file.  The actual naming convention used
is to name the file initSID.ora where SID is the system identifier (database
name) assigned to the database. 
        Server (persistent) parameter file:  This is the SPFILE (also termed the server
parameter file) and is commonly referred to as the spfileSID.ora. 
 
There are two types of parameters:
        Explicit parameters.  These have entries in the parameter file.
        Implicit parameters.  These have no entries in the parameter file and Oracle
uses default values.
 
Initialization parameter files include the following:
        Instance parameters.
        A parameter to name the database associated with the file.
        SGA memory allocation parameters.
        Instructions for handling online redo log files.
        Names and locations of control files.
        Undo segment information.
 
 
PFILE
 
This is a plain text file.  It is common to maintain this file either by editing it with
the vi editor, or by FTPing it to a client computer, modifying it with Notepad, and
then FTPing it back to the SOBORA2 server.
 
The file is only read during database startup so any modifications take effect the next
time the database is started up.  This is an obvious limitation since shutting down and
starting up an Oracle database is not desirable in a 24/7 operating environment.
 
 
The naming convention followed is to name the file initSID.ora where SID is the
system identifier.  For example, the PFILE for the departmental SOBORA2 server for
the database named DBORCL is named initDBORCL.ora.
 
When Oracle software is installed, a sample init.ora file is created.  You can create
one for your database by simply copying the init.ora sample file and renaming it.  The
sample command shown here creates an init.ora file for a database
named USER350.  Here the file was copied to the user's HOME directory and
named initUSER350.ora.
 
$ cp $ORACLE_HOME/dbs/init.ora  $HOME/initUSER350.ora
 
You can also create an init.ora file by typing commands into a plain text file using an
editor such as Notepad. 
 
NOTE:  For a Windows operating system, the default location for the init.ora file
is C:\Oracle_Home\database.
 
This is a listing of the initDBORCL.ora file for the database named DBORCL.  We will
cover these parameters in our discussion below.
 
# Copyright (c) 1991, 1997, 1998 by Oracle Corp.
 
db_name='DBORCL'
audit_file_dest='/u01/oradata/DBORCL/adump'
audit_trail ='db'  
compatible ='11.2.0' 
control_files=(/u01/oradata/DBORCL/DBORCLcontrol01.ctl, 
  /u02/oradata/DBORCL/DBORCLcontrol02.ctl,
  /u03/oradata/DBORCL/DBORCLcontrol03.ctl)
db_block_size=8192 
db_domain='siue.edu' 
db_recovery_file_dest='/u01/app/oracle/fast_recovery_area' 
db_recovery_file_dest_size=1G 
diagnostic_dest='/u01/app/oracle' 
dispatchers='(PROTOCOL=TCP) (SERVICE=ORCLXDB)'  
log_archive_dest_1='LOCATION=/u01/oradata/DBORCL/arch' 
log_archive_format='DBORCL_%t_%s_%r.arc' 
memory_target=1G  
open_cursors=300  
processes = 150   
remote_login_passwordfile='EXCLUSIVE' 
#UNDO_Management is Auto by default
undo_tablespace='UNDOTBS1'  
# End of file
 
        The example above shows the format for specifying values:  keyword = value.
        Each parameter has a default value that is often operating system dependent. 
        Generally parameters can be specified in any order.
        Comment lines can be entered and marked with the # symbol at the beginning of
the comment.
        Enclose parameters in quotation marks to include literals.
        Usually operating systems such as LINUX are case sensitive so remember this
in specifying file names.
 
The basic initialization parameters – there are about 255 parameters – the actual
number changes with each version of Oracle.  Most are optional and Oracle will use
default settings for them if you do not assign values to them.  Here the most commonly
specified parameters are sorted according to their category.
 
         DB_NAME (mandatory) – specifies the local portion of a database name.
o   Maximum name size is 8 characters.
o   Must begin with alphanumeric character.
o   Once set it cannot be changed without recreating the database.
o   DB_NAME is recorded in the header portion of each datafile, redo log file,
and control file.
 
        DB_BLOCK_SIZE (mandatory) – specifies the size of the default Oracle block
in the database.  At database creation time, the SYSTEM, TEMP, and SYSAUX
tablespaces are created with this block size.  An 8KB block size is about the
smallest you should use for any database although 2KB and 4KB block sizes are
legal values.
 
        DB_CACHE_SIZE and DB_nK_CACHE_SIZE (recommended, optional):
o   DB_CACHE_SIZE – specifies the size of the area the SGA allocates to hold
blocks of the default size.  If the parameter is not specified, then
the default is 0 (internally determined by the Oracle Database). If the
parameter is specified, then the user-specified value indicates a minimum
value for the memory pool.
o   DB_nK_CACHE_SIZE –  specifies up to four other non-default block sizes,
and is useful when transporting a tablespace from another database with a
block size other than DB_BLOCK_SIZE.  This parameter is only used
when you have a tablespace(s) that is a non-standard size.
o   This parameter is NOT in the initDBORCL.ora parameter file - it was used
often in the past, but is now usually allowed to default. 
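As an illustration of these two parameters working together (the values are hypothetical and not taken from the DBORCL file), a database with a standard 8K block size that also hosts a transported 16K-block tablespace might specify:

```
# Hypothetical sizing for illustration only.
db_block_size=8192
db_cache_size=256M        # cache for the default 8K blocks
db_16k_cache_size=64M     # extra cache for a non-default 16K tablespace
```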
 
        DB_FILE_MULTIBLOCK_READ_COUNT = 16 (recommended) – used to
minimize I/O during table scans.
o   It specifies the maximum number of blocks read in one I/O operation during
a sequential scan (in this example the value is set to 16).
o   The total number of I/Os needed to perform a full table scan depends on
such factors as the size of the table, the multiblock read count, and whether
parallel execution is being utilized for the operation. 
o   Online transaction processing (OLTP) and batch environments typically
have values in the range of 4 to 16 for this parameter.
o   This parameter is NOT in the initDBORCL.ora parameter file.
 
        DB_RECOVERY_FILE_DEST and DB_RECOVERY_FILE_DEST_SIZE (recom
mended) – specifies the default location for the flash recovery area.
o   The flash recovery area contains multiplexed copies of current control files
and online redo logs, as well as archived redo logs, flashback logs, and
RMAN backups. 
o   Specifying this parameter without also specifying
the DB_RECOVERY_FILE_DEST_SIZE initialization parameter is not allowed.
 
        CURSOR_SHARING (optional) – setting this to FORCE or SIMILAR allows
similar SQL statements to share the Shared SQL area in the
SGA.  The SIMILAR specification doesn't result in a deterioration in execution
plans for the SQL statements.  A setting of EXACT allows SQL statements to
share the SQL area only if their text matches exactly.
 
        OPEN_CURSORS (recommended) – a cursor is a handle or name for
a private SQL area—an area in memory in which a parsed statement and other
information for processing the statement are kept. 
o   Each user session can open multiple cursors up to the limit set by the
initialization parameter OPEN_CURSORS.  OPEN_CURSORS specifies the
maximum number of open cursors (handles to private SQL areas) a
session can have at once.
o   You can use this parameter to prevent a session from opening an excessive
number of cursors. 
 
        AUDIT_FILE_DEST (recommended) – specifies the operating system directory
into which the audit trail is written when the AUDIT_TRAIL initialization
parameter is set to os, xml, or xml, extended. 
o   The audit records will be written in XML format if the AUDIT_TRAIL
initialization parameter is set to xml or xml, extended.
o   It is also the location to which mandatory auditing information is written
and, if so specified by the AUDIT_SYS_OPERATIONS initialization
parameter, audit records for user SYS.
o   The first default value is: ORACLE_BASE/admin/ORACLE_SID/adump
o   The second default value (used if the first default value does not exist or
is unusable, is: ORACLE_HOME/rdbms/audit
        TIMED_STATISTICS (optional) – a setting of TRUE causes Oracle to collect
and store information about system performance in trace files or for display in
the V$SESSTAT and V$SYSSTAT dynamic performance views.  Normally
the setting is FALSE to avoid the overhead of collecting these statistics. Leaving
this on can cause unnecessary overhead for the system.
 
        CONTROL_FILES (mandatory) – tells Oracle the location of the control files to
be read during database startup and operation.  The control files are typically
multiplexed (multiple copies).
 
#Control File Configuration
CONTROL_FILES =
("/u01/student/dbockstd/oradata/USER350control01.ctl",
"/u02/student/dbockstd/oradata/USER350control02.ctl")
 
        DIAGNOSTIC_DEST (recommended) – this parameter specifies where Oracle
places "dump" files caused by actions such as the failure of a user or background
process. 
o   This parameter is new to Oracle 11g.
o   It specifies an alternative location for the "diag" directory contents.
o   It is part of the new ADR (Automatic Diagnostic Repository) and Incident
Packaging System -- these allow quick access to alert and diagnostic
information.
o   The default value of $ADR_HOME is $ORACLE_BASE/diag. 
o   This replaced the older udump, bdump, and cdump (user dump, background
dump, core dump) directories used up to version Oracle 10g.
 
diagnostic_dest='/u01/student/dbockstd/diag'
 
        LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_n (mandatory if running
in archive mode):
o   You choose whether to archive redo logs to a single destination
or multiplex the archives. 
o   If you want to archive only to a single destination, you specify that
destination in the LOG_ARCHIVE_DEST initialization parameter.
o   If you want to multiplex the archived logs, you can choose whether to
archive to up to ten locations (using the LOG_ARCHIVE_DEST_n parameters) or
to archive only to a primary and secondary destination
(using LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST).
 
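A sketch of multiplexed archiving using the LOG_ARCHIVE_DEST_n form is shown below; the directories follow the course examples and are illustrative only:

```
# Two archive destinations - losing one disk does not lose the archives.
log_archive_dest_1='LOCATION=/u01/student/dbockstd/oradata/arch'
log_archive_dest_2='LOCATION=/u02/student/dbockstd/oradata/arch'
```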
        LOG_ARCHIVE_FORMAT (optional, but recommended if running in archive
mode) – specifies the format used to name the system generated archive log
files so they can be read by Recovery Manager to automate recovery.
 
#Archive
log_archive_dest_1='LOCATION=/u01/student/dbockstd/oradat
a/arch' 
log_archive_format='USER350_%t_%s_%r.arc'
 
        SHARED_SERVERS (optional) – this parameter specifies the number of server
processes to create when an instance is started. If system load decreases, then
this minimum number of servers is maintained. Therefore, you should take care
not to set SHARED_SERVERS too high at system startup.
 
        DISPATCHERS (optional) – this parameter configures dispatcher processes in
the shared server architecture.
 
#Shared Server Only use these parameters for a Shared
Server
# installation – the parameter starts shared server if
set > 0
SHARED_SERVERS=2
#Uncomment and use first DISPATCHERS parameter if the
listener
#is configured for SSL security
#(listener.ora and sqlnet.ora)
#DISPATCHERS='(PROTOCOL=TCPS)(SER=MODOSE)',
#            '(PROTOCOL=TCPS)
(PRE=oracle.aurora.server.SGiopServer)'
DISPATCHERS='(PROTOCOL=TCP)(SER=MODOSE)',
            '(PROTOCOL=TCP)
(PRE=oracle.aurora.server.SGiopServer)',
            '(PROTOCOL=TCP)'
 
        COMPATIBLE (optional) – allows a newer version of the Oracle binaries to be
installed while restricting the feature set as if an older version were installed –
used to move forward with a database upgrade while remaining compatible with
applications that might fail if run with newer software versions.  The parameter
can be increased as applications are reworked.
 
        INSTANCE_NAME (optional) – in a Real Application Clusters environment,
multiple instances can be associated with a single database service. Clients can
override Oracle's connection load balancing by specifying a particular instance by
which to connect to the database.  INSTANCE_NAME specifies the unique name of
this instance.  In a single-instance database system, the instance name is usually
the same as the database name.
 
#Miscellaneous
COMPATIBLE='11.2.0'
INSTANCE_NAME=USER350
 
        DB_DOMAIN (recommended) – this parameter is used in a distributed
database system.  DB_DOMAIN specifies the logical location of the database
within the network structure. You should set this parameter if this database is or
ever will be part of a distributed system.
 
#Distributed, Replication, and SnapShot
DB_DOMAIN='isg.siue.edu'
 
        REMOTE_LOGIN_PASSWORDFILE (recommended) – specifies the name of
the password file that stores user names and passwords for privileged
(DBAs, SYS, and SYSTEM) users of the database.
 
#Security and Auditing
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
 
        MEMORY_TARGET (recommended) – the amount of shared memory
available for Oracle to use when dynamically controlling the SGA and PGA. This
parameter is dynamic, so the total amount of memory available to Oracle can be
increased or decreased, provided it does not exceed
the MEMORY_MAX_TARGET limit. The default value is "0".
 
  #Memory sizing
  MEMORY_TARGET=1G
 
        PGA_AGGREGATE_TARGET (recommended, but not needed if
MEMORY_TARGET is set) and SORT_AREA_SIZE (no longer recommended) –
PGA_AGGREGATE_TARGET specifies the target aggregate PGA memory available
to all server processes attached to the instance.
o   When managing memory manually, the Oracle RDBMS tries to ensure the total
PGA memory allocated for all database server processes and background
processes does not exceed this target.
o   In the past, SORT_AREA_SIZE was an often used parameter for improving sort
performance; it specifies (in bytes) the maximum amount of memory Oracle will
use for a sort.
o   Oracle now recommends against using SORT_AREA_SIZE unless the instance
is configured with the shared server option.  Use the
PGA_AGGREGATE_TARGET parameter instead (use a minimum of 10MB; the
default Oracle setting is 20% of the size of the SGA).
 
        JAVA_POOL_SIZE, LARGE_POOL_SIZE and SHARED_POOL_SIZE (optional)
– these parameters size the Java pool, large pool, and shared pool. These are
automatically sized by Automatic Shared Memory Management (ASMM) if
you set the MEMORY_TARGET or SGA_TARGET initialization parameter.
o   To let Oracle manage memory, set the SGA_TARGET parameter to the
total amount of memory for all SGA components.
o   Even if SGA_TARGET is set, you can also set these parameters when you
want to manage the cache sizes manually.
o   The total of the parameters cannot exceed the
parameter SGA_MAX_SIZE which specifies a hard upper limit for the
entire SGA.
 
        SGA_TARGET (recommended, but not needed if MEMORY_TARGET is
set) – SGA_TARGET specifies the total size of all SGA
components.  If SGA_TARGET is specified, then the following memory pools are
automatically sized:
o   Buffer cache (DB_CACHE_SIZE)
o   Shared pool (SHARED_POOL_SIZE)
o   Large pool (LARGE_POOL_SIZE)
o   Java pool (JAVA_POOL_SIZE)
 
#Pool sizing
SGA_TARGET=134217728
 
#Alternatively you can set these individually to establish
#minimum sizes for these caches, but this is not recommended
DB_CACHE_SIZE=1207959552
JAVA_POOL_SIZE=31457280
LARGE_POOL_SIZE=1048576
SHARED_POOL_SIZE=123232153   #This is the minimum for 10g
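When ASMM is active, you can watch how Oracle has actually carved up the SGA. A sketch using the V$SGA_DYNAMIC_COMPONENTS view; the sizes vary by instance, so no particular output is implied:

```sql
-- Current sizes of the automatically tuned SGA components, in bytes
SELECT component, current_size
FROM   v$sga_dynamic_components
WHERE  current_size > 0;
```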
 
        PROCESSES (recommended) – this parameter represents the total number of
processes that can simultaneously connect to the database, including
background and user processes. 
o   The number of background processes is generally about 15, to which you add
the maximum number of concurrent users.
o   There is little or no overhead associated with making PROCESSES too big.
 
        JOB_QUEUE_PROCESSES (recommended, especially to update
materialized views) – specifies the maximum number of processes that can be
created for the execution of jobs per instance.
o   Advanced queuing uses job queues for message propagation.
o   You can create user job requests through the DBMS_JOB package. 
o   Some job queue requests are created automatically. An example is refresh
support for materialized views. If you wish to have your materialized views
updated automatically, you must set JOB_QUEUE_PROCESSES to a value of one
or higher.
 
#Processes and Sessions
PROCESSES=150
JOB_QUEUE_PROCESSES=10
 
        FAST_START_MTTR_TARGET (optional) – this specifies the target number of
seconds for the database to perform crash recovery of a single instance.
 
#Redo Log and Recovery
FAST_START_MTTR_TARGET=300
 
        RESOURCE_MANAGER_PLAN (optional) – this specifies the top-level
resource plan to use for an instance.
o   The resource manager will load this top-level plan along with all its
descendants (subplans, directives, and consumer groups).
o   If you do not specify this parameter, the resource manager is off by default. 
o   If you specify a plan name that does not exist within the data dictionary,
Oracle will return an error message.
 
#Resource Manager
RESOURCE_MANAGER_PLAN=SYSTEM_PLAN
 
        UNDO_MANAGEMENT and UNDO_TABLESPACE (recommended but
actually required for most installations) – Automatic Undo Management
automates the recovery of segments that handle undo information for
transactions.
o   It is recommended to set the UNDO_MANAGEMENT parameter
to AUTO.  This is the default value.
o   Specify the name of the UNDO tablespace with
the UNDO_TABLESPACE parameter.
o   Only one UNDO tablespace can be active at a time.
 
#Automatic Undo Management
#UNDO_MANAGEMENT is AUTO by default
UNDO_TABLESPACE=undo1
 
So, which parameters should you include in your PFILE when you create a
database?  I suggest a simple init.ora file initially - you can add to it as time goes on in
this course. 
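As a starting point, a minimal init.ora might look like the sketch below. The values are pulled from the examples in this section; db_name is an assumption (it must match your actual database name) and every path must be adjusted for your environment:

```
#Minimal init.ora sketch - assumed values, adjust for your database
db_name='USER350'              #assumption: matches the instance name
instance_name=USER350
compatible='11.2.0'
memory_target=1G
processes=150
undo_tablespace=undo1
remote_login_passwordfile=EXCLUSIVE
diagnostic_dest='/u01/student/dbockstd/diag'
```

A working PFILE will also need entries such as CONTROL_FILES; those are covered elsewhere in these notes.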
  
SPFILE
 
The SPFILE is a binary file.  You must NOT manually modify the file and it must
always reside on the server.  After the file is created, it is maintained by the Oracle
server.
 
The SPFILE enables you to make changes that are termed persistent across startup
and shutdown operations.  You can make dynamic changes to Oracle while the
database is running and this is the main advantage of using this file. 
 
The default location is in the $ORACLE_HOME/dbs directory with a default name
of spfileSID.ora.  For example, a database named USER350 would have
an SPFILE with a name of spfileUSER350.ora. 
 
You can create an SPFILE from an existing PFILE by issuing the CREATE SPFILE
command while using SQL*Plus.  Note that the filenames are enclosed in
single-quote marks. 
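A sketch of the command, run as a privileged user in SQL*Plus; the paths are assumptions following the naming used in these notes:

```sql
-- Build the binary SPFILE from an edited text PFILE (run as SYSDBA)
CREATE SPFILE='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/spfileUSER350.ora'
  FROM PFILE='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initUSER350.ora';
```

If both files are in the default $ORACLE_HOME/dbs location, the filenames can be omitted entirely.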
 
Recreating a PFILE
 
You can also create a PFILE from an SPFILE by exporting the contents through use of
the CREATE command.   You do not have to specify file names as Oracle will use the
spfile associated with the ORACLE_SID for the database to which you are connected.
 
CREATE PFILE FROM SPFILE;
 
You would then edit the PFILE and use the CREATE command to create a
new SPFILE from the edited PFILE.
 
 
The STARTUP Command
 
The STARTUP command is used to startup an Oracle database.  You have learned
about two different initialization parameter files.  There is a precedence to which
initialization parameter file is read when an Oracle database starts up as only one of
them is used.
 
These priorities are used when you simply issue the STARTUP command within
SQL*Plus to start up a database.
        Oracle knows which database to startup based on the value of ORACLE_SID.
        Oracle uses the priorities listed below to decide which parameter file to use
during startup.
 
STARTUP
 
        First Priority:  the spfileSID.ora on the server side is used to start up the
instance.
        Second Priority:  If the spfileSID.ora is not found, the default SPFILE on the
server side is used to start the instance.
        Third Priority:  If the default SPFILE is not found, the initSID.ora on the server
side will be used to start the instance.
 
A specified PFILE can override the use of the default SPFILE to start an
instance.  Examples:
 
STARTUP PFILE=$ORACLE_HOME/dbs/initUSER350.ora
 
Or
 
STARTUP PFILE=$HOME/initUSER350.ora
 
        A PFILE can optionally contain a definition to indicate use of an SPFILE. 
        This is the only way to start the instance with an SPFILE in a non-default
location.  
        To start the database with an SPFILE not in the default location, SPFILE=<full
path and filename> must be placed in the PFILE.
 
Example PFILE parameter:
 
 SPFILE=$HOME/initUSER350.ora
 
Modifying SPFILE Parameters
 
Earlier you read that an advantage of the SPFILE is that certain dynamic parameters
can be changed without shutting down the Oracle database.  These changes are made
by using the ALTER SYSTEM command.  Modifications made in this way change the
contents of the SPFILE.  If you shutdown the database and startup again, the
modifications you previously made will take effect because the SPFILE was modified.
 
The ALTER SYSTEM SET command is used to change the value of instance
parameters and has a number of different options as shown here.
 
ALTER SYSTEM SET parameter_name = parameter_value
[COMMENT 'text'] [SCOPE = MEMORY|SPFILE|BOTH]
[SID= 'sid'|'*']
 
where
 
        parameter_name:  Name of the parameter to be changed
        parameter_value:  Value the parameter is being changed to
        COMMENT:  A comment to be added into the SPFILE next to the parameter
being altered
        SCOPE:  Determines if change should be made in memory, SPFILE, or in both
areas
        MEMORY:  Changes the parameter value only in the currently running instance
        SPFILE:  Changes the parameter value in the SPFILE only
        BOTH:  Changes the parameter value in the currently running instance and the
SPFILE
        SID:  Identifies the ORACLE_SID of the instance to which the change applies
        'sid':  A specific SID to be used in altering the SPFILE
        '*':  Applies the change to all instances (the default)
 
Here is an example script within SQL*Plus that demonstrates how to display
current parameter values and how to alter those values. 
 
SQL> SHOW PARAMETERS timed_statistics
 
 
NAME                 TYPE        VALUE
------------------   ----------- -----
timed_statistics     boolean     FALSE
 
 
SQL> ALTER SYSTEM SET timed_statistics = TRUE
  2  COMMENT = 'temporary setting' SCOPE=BOTH
  3  SID='USER350';
 
System altered.
 
You can also use the ALTER SYSTEM RESET command to delete a parameter setting
or revert to a default value for a parameter.
 
SQL> ALTER SYSTEM RESET timed_statistics
  2  SCOPE=BOTH
  3  SID='USER350';
 
System altered.
 
SQL> SHOW PARAMETERS timed_statistics
 
NAME                 TYPE        VALUE
------------------   ----------- -----
timed_statistics     boolean     FALSE
  
Starting Up a Database
Instance Stages
 
Databases can be started up in various states or stages:  NOMOUNT, MOUNT, and
OPEN.  A database passes through these stages during startup and shutdown.
 
 
NOMOUNT:  This stage is only used when first creating a database or when it is
necessary to recreate a database's control files.  Startup includes the following tasks.
        Read the spfileSID.ora or spfile.ora or initSID.ora.
        Allocate the SGA.
        Start the background processes.
        Open a log file named alert_SID.log and any trace files specified in the
initialization parameter file.
        Example startup commands for creating the Oracle database and for the
database belonging to USER350 are shown here.
 
SQL> STARTUP NOMOUNT PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP NOMOUNT PFILE=$HOME/initUSER350.ora
 
MOUNT:  This stage is used for specific maintenance operations.  The database is
mounted, but not open.  You can use this option if you need to:
        Rename datafiles.
        Enable/disable redo log archiving options.
        Perform full database recovery.
        When a database is mounted it
o   is associated with the instance that was started during NOMOUNT stage.
o   locates and opens the control files specified in the parameter file.
o   reads the control file to obtain the names/status of datafiles and redo log
files, but it does not check to verify the existence of these files.
        Example startup commands for maintaining the Oracle database and for the
database belonging to USER350 are shown here.
 
SQL> STARTUP MOUNT PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP MOUNT PFILE=$HOME/initUSER350.ora
 
OPEN:  This stage is used for normal database operations.  Any valid user can
connect to the database.  Opening the database includes opening datafiles and redo
log files.  If any of these files are missing, Oracle will return an error.  If errors occurred
during the previous database shutdown, the SMON background process will initiate
instance recovery.  An example command to startup the database in OPEN stage is
shown here.
 
SQL> STARTUP PFILE=$ORACLE_HOME/dbs/initDBORCL.ora
SQL> STARTUP PFILE=$HOME/initUSER350.ora
 
If the database initialization parameter file is in the default location
at $ORACLE_HOME/dbs, then you can simply type the command STARTUP and the
database associated with the current value of ORACLE_SID will startup.
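At any point you can confirm which stage the instance has reached. A sketch; STATUS reports STARTED after NOMOUNT, MOUNTED after MOUNT, and OPEN once the database is fully open:

```sql
-- Check the current startup stage of the instance
SELECT instance_name, status FROM v$instance;
```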
 
Startup Command Options:
 
You can force a restart of a running database that aborts the current Instance and
starts a new normal instance with the FORCE option.
 
SQL> STARTUP FORCE PFILE=$HOME/initUSER350.ora
 
Sometimes you will want to startup the database, but restrict connection to users with
the RESTRICTED SESSION privilege so that you can perform certain maintenance
activities such as exporting or importing part of the database. 
 
SQL> STARTUP RESTRICT PFILE=$HOME/initUSER350.ora
 
You may also want to begin media recovery when starting a database after your
system has suffered a disk crash.
 
SQL> STARTUP RECOVER PFILE=$HOME/initUSER350.ora
 
On a LINUX server, you can automate startup/shutdown of an Oracle database by
making entries in a special operating system file named oratab located in
the /var/opt/oracle directory. 
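Each oratab entry has three colon-separated fields: the SID, the Oracle home, and a Y/N flag telling the startup/shutdown scripts whether to handle this database automatically.  A sketch; the Oracle home path is an assumption:

```
# SID:ORACLE_HOME:autostart flag (Y = start at boot)
USER350:/u01/app/oracle/product/11.2.0/dbhome_1:Y
```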
 
IMPORTANT NOTE:  If an error occurs during a STARTUP command, you must issue
a SHUTDOWN command prior to issuing another STARTUP command.
  
ALTER DATABASE Command
 
You can change the stage of a database.  This example changes the database from
OPEN to READ ONLY.
 
SQL> startup mount pfile=$HOME/initUSER350.ora
ORACLE instance started.
 
Total System Global Area   25535380 bytes
Fixed Size                   279444 bytes
Variable Size              20971520 bytes
Database Buffers            4194304 bytes
Redo Buffers                  90112 bytes
Database mounted.
 
SQL> ALTER DATABASE user350 OPEN READ ONLY;
 
Database altered.
 
 
Restricted Mode
 
Earlier you learned to startup the database in a restricted mode with
the RESTRICT option.   If the database is open, you can change to a restricted mode
with the ALTER SYSTEM command as shown here.  The first command restricts logon
to users with restricted privileges.  The second command enables all users to connect.
 
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
 
One of the tasks you may perform during restricted session is to kill current user
sessions prior to performing a task such as the export of objects (tables, indexes,
etc.).  The ALTER SYSTEM KILL SESSION 'integer1, integer2' command is used to
do this.  The values of integer1 and integer2 are obtained from the SID and SERIAL#
columns in the V$SESSION view.  SID values belonging to background processes
should be left alone!  In this example we kill the session for the user account
named DBOCK.
 
SQL> SELECT sid, serial#, status, username FROM v$session WHERE
username='DBOCK';
 
       SID    SERIAL# STATUS   USERNAME
---------- ---------- -------- ------------------------------
       260       1352 INACTIVE DBOCK
 
SQL> ALTER SYSTEM KILL SESSION '260,1352';
 
System altered.
 
Now when DBOCK attempts to select data, the following message is received.
 
SQL> select patientid, lastname, firstname, bedno
  2  from patient
  3  where bedno=1;
*
ERROR at line 1:
ORA-00028: your session has been killed
 
When a session is killed, PMON rolls back the user's current transaction, releases
all table and row locks held, and frees all resources reserved for the user.
 
 
READ ONLY Mode
 
You can open a database as read-only provided it is not already open in read-write
mode.  This is useful when you have a standby database that you want to use to
enable system users to execute queries while the production database is being
maintained. 
 
 
 
Database Shutdown
 
The SHUTDOWN command is used to shutdown a database instance.  You must be
connected as either SYSOPER or SYSDBA to shutdown a database. 
 
Shutdown Normal:  This is the default shutdown mode. 
        No new connections are allowed.
        The server waits for all users to disconnect before completing the shutdown.
        Database and redo buffers are written to disk.
        The SGA memory allocation is released and background processes terminate.
        The database is closed and dismounted.
        The shutdown command is:
 
Shutdown
  
Or
 
Shutdown Normal
 
Shutdown Transactional:  This prevents client computers from losing work.
        No new connections are allowed.
        No connected client can start a new transaction.
        Clients are disconnected as soon as the current transaction ends.
        Shutdown proceeds when all transactions are finished.
        The shutdown command is:
 
Shutdown Transactional
 
Shutdown Immediate:  This can cause client computers to lose work.
        No new connections are allowed.
        Connected clients are disconnected and SQL statements in process are not
completed.
        Oracle rolls back active transactions.
        Oracle closes/dismounts the database.
        The shutdown command is:
 
Shutdown Immediate
 
Shutdown Abort:  This is used if the normal, transactional, or immediate options
fail.  This is the LEAST favored option because the next startup will require instance
recovery and you CANNOT back up a database that has been shut down with the
ABORT option.
        Current SQL statements are immediately terminated.
        Users are disconnected.
        Database and redo buffers are NOT written to disk.
        Uncommitted transactions are NOT rolled back.
        The Instance is terminated without closing files.
        The database is NOT closed or dismounted.
        Database recovery by SMON must occur on the next startup.
        The shutdown command is:
 
Shutdown Abort
 
 
Diagnostic Files
 
These files are used to store information about database activities and are useful tools
for troubleshooting and managing a database.  There are several types of diagnostic
files.
 
Starting with Oracle 11g, the $ORACLE_BASE parameter value is the anchor for
diagnostic and alert files.  New in Oracle 11g are the ADR (Automatic Diagnostic
Repository) and the Incident Packaging System, designed to allow quick access to
alert and diagnostic information.
        The new $ADR_HOME directory is located by default
at $ORACLE_BASE/diag. 
        There are directories for each instance
at $ORACLE_BASE/diag/rdbms/$ORACLE_SID.   
        The new initialization parameter DIAGNOSTIC_DEST can be used to specify an
alternative location for the diag directory contents.
 
In 11g, each $ORACLE_BASE/diag/rdbms/$ORACLE_SID directory may contain
these new directories:
        alert - A new alert directory for the plain text and XML versions of the alert log.
        incident - A new directory for the incident packaging software.
        incpkg - A directory for packaging an incident into a bundle.
        trace - A replacement for the ancient background dump (bdump) and user dump
(udump) destination.  This is where the alert_SID.log is stored.
        cdump - The old core dump directory retains its Oracle 10g name.
 
Oracle 11g writes two alert logs. 
        One is written as a plain text file and is named alert_SID.log (for example, a
database named USER350 would have an alert log named alert_USER350.log).
        The other alert log is formatted as XML and is named log.xml. 
        The alert log files are stored by default
to:  $ORACLE_BASE/diag/rdbms/$ORACLE_SID.
        It will be stored to the location specified by DIAGNOSTIC_DEST if you set that
parameter.  I found the DBORCL alert log named alert_DBORCL.log located
at /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace.   This location directory
was generated based on a setting of DIAGNOSTIC_DEST = '/u01/app/oracle'.
 
You can access the alert log locations via standard SQL using the
new V$DIAG_INFO view:
 
column name format a22;
column value format a55;
select name, value from v$diag_info;
 
NAME                   VALUE
---------------------- -----------------------------------------
Diag Enabled           TRUE
ADR Base               /u01/app/oracle
ADR Home               /u01/app/oracle/diag/rdbms/dborcl/DBORCL
Diag Trace             /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace
Diag Alert             /u01/app/oracle/diag/rdbms/dborcl/DBORCL/alert
Diag Incident          /u01/app/oracle/diag/rdbms/dborcl/DBORCL/incident
Diag Cdump             /u01/app/oracle/diag/rdbms/dborcl/DBORCL/cdump
Health Monitor         /u01/app/oracle/diag/rdbms/dborcl/DBORCL/hm
Default Trace File     /u01/app/oracle/diag/rdbms/dborcl/DBORCL/trace/DBORCL_ora_25119.trc
Active Problem Count   1
Active Incident Count  2
 
11 rows selected.
 
You can enable or disable user tracing with the ALTER SESSION command as shown
here.
 
 ALTER SESSION SET SQL_TRACE = TRUE
 
        You can also set the SQL_TRACE = TRUE parameter in the initialization
parameter files.
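Tracing adds overhead, so it is worth switching off when you are done. A sketch; the resulting trace file lands in the Diag Trace directory reported by V$DIAG_INFO:

```sql
-- Disable SQL tracing for the current session when finished
ALTER SESSION SET SQL_TRACE = FALSE;
```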
 
 
4. Tablespaces and Datafiles
 
Objectives
 
These notes cover the creation and management of tablespaces and their associated
datafiles.  You will learn how to create both locally managed and dictionary managed
tablespaces.
 
Tablespaces vs. Datafiles
 
An Oracle database is composed of tablespaces. 
 
Tablespaces logically organize data that are physically stored in datafiles. 
        A tablespace belongs to only one database, and has at least one datafile that is
used to store data for the associated tablespace. 
        The term "tablespaces" is misleading because a tablespace can store tables, but
can also store many other database objects such as indexes, views, sequences,
etc.
        Because disk drives have a finite size, a tablespace can span disk drives when
datafiles from more than one disk drive are assigned to a tablespace.  This
enables systems to be very, very large.
        Datafiles are always assigned to only one tablespace and, therefore, to only one
database. 
 
A tablespace can span datafiles. 
 
 
Tablespace Types
 
There are three types of tablespaces:  (1) permanent, (2) undo, and (3) temporary.
        Permanent – These tablespaces store objects in segments that are permanent –
that persist beyond the duration of a session or transaction.
        Undo – These tablespaces store segments that may be retained beyond a
transaction, but are basically used to:
o   Provide read consistency for SELECT statements that access tables that
have rows that are in the process of being modified.
o   Provide the ability to rollback a transaction that fails to commit.
        Temporary – This tablespace stores segments that are transient and only exist
for the duration of a session or a transaction.  Mostly, a temporary tablespace
stores rows for sort and join operations.
 
How Many Tablespaces Are Needed for a Database?
 
Beginning with Oracle 10g, the smallest possible Oracle database consists of two
tablespaces.  This also applies to Oracle 11g.
o   SYSTEM – stores the data dictionary.
o   SYSAUX – stores data for auxiliary applications (covered in more detail later in
these notes).
 
In reality, a typical production database has numerous tablespaces.  These
include SYSTEM and NON-SYSTEM tablespaces.
 
SYSTEM – a tablespace that is always used to store SYSTEM data that includes data
about tables, indexes, sequences, and other objects – this metadata comprises the
data dictionary.
        Every Oracle database has to have a SYSTEM tablespace—it is the first
tablespace created when a database is created.
        Accessing it requires a higher level of privilege.
        You cannot rename or drop a SYSTEM tablespace.
        You cannot take a SYSTEM tablespace offline. 
        The SYSTEM tablespace could store user data, but this is not normally done—a
good rule to follow is to never allow the storage of user segments in
the SYSTEM tablespace.
        This tablespace always has a SYSTEM Undo segment.
 
The SYSAUX tablespace stores data for auxiliary applications such as the LogMiner,
Workspace Manager, Oracle Data Mining, Oracle Streams, and many other Oracle
tools.
        This tablespace is automatically created if you use the Database Creation
Assistant software to build an Oracle database.
        Like the SYSTEM tablespace, SYSAUX requires a higher level of security and it
cannot be dropped or renamed.
        Do not allow user objects to be stored in SYSAUX.  This tablespace should only
store system specific objects.
        This is a permanent tablespace.
 
All other tablespaces are referred to as Non-SYSTEM.  A different tablespace is used
to store organizational data in tables accessed by application programs, and still a
different one for undo information storage, and so on.  There are several reasons for
having more than one tablespace:
        Flexibility in database administration.
        Separate data by backup requirements.
        Separate dynamic and static data to enable database tuning.
        Control space allocation for both applications and system users.
        Reduce contention for input/output path access (to/from memory/disk).
 
 
CREATE TABLESPACE Command
 
To create a tablespace you must have the CREATE TABLESPACE privilege. 
 
The full CREATE TABLESPACE (and CREATE TEMPORARY TABLESPACE)
command syntax is shown here. 
 
CREATE TABLESPACE tablespace
  [DATAFILE clause]
  [MINIMUM EXTENT integer[K|M]]
  [BLOCKSIZE integer [K]]
  [LOGGING|NOLOGGING]
  [DEFAULT storage_clause ]
  [ONLINE|OFFLINE]
  [PERMANENT|TEMPORARY]
  [extent_management_clause]
  [segment_management_clause]
 
As you can see, almost all of the clauses are optional.  The clauses are defined as
follows:
 
        TABLESPACE: This clause specifies the tablespace name. 
        DATAFILE: This clause names the one or more datafiles that will comprise the
tablespace and includes the full path, example:
 
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE 10M
 
        MINIMUM EXTENT:  Every used extent for the tablespace will be a multiple of
this integer value.  Use either T, G, M or K to specify terabytes, gigabytes,
megabytes, or kilobytes.
        BLOCKSIZE:  This specifies a nonstandard block size – this clause can only be
used if the DB_CACHE_SIZE parameter is used and at least one
DB_nK_CACHE_SIZE parameter is set, and the integer value for BLOCKSIZE
must correspond with one of the DB_nK_CACHE_SIZE parameter settings.
        LOGGING:  This is the default – all tables, indexes, and partitions within a
tablespace have modifications written to Online Redo Logs.
        NOLOGGING:  This option is the opposite of LOGGING and is used most often
when large direct loads of clean data are done during database creation for
systems that are being ported from another file system or DBMS to Oracle. 
        DEFAULT storage_clause:  This specifies default parameters for objects
created inside the tablespace.  Individual storage clauses can be used when
objects are created to override the specified DEFAULT.
        OFFLINE:  This parameter causes a tablespace to be unavailable after creation.
        PERMANENT:  A permanent tablespace can hold permanent database objects.
        TEMPORARY:  A temporary tablespace can hold temporary database objects,
e.g., segments created during sorts as a result of ORDER BY clauses or JOIN
views of multiple tables.  A temporary tablespace cannot be specified for
EXTENT MANAGEMENT LOCAL or have the BLOCKSIZE clause specified.
        extent_management_clause: This clause specifies how the extents of the
tablespace are managed and is covered in detail later in these notes.
        segment_management_clause:  This specifies how Oracle will track used and
free space in segments in a tablespace that is using free lists or bitmap objects. 
        datafile_clause:  filename [SIZE integer [K|M]] [REUSE]
               [AUTOEXTEND ON|OFF]
               filename:  includes the path, filename, and file size.
               REUSE:  specified to reuse an existing file. 
        NEXT:  Specifies the size of the next extent.
        MAXSIZE:  Specifies the maximum disk space allocated to the
tablespace.  Usually set in megabytes, e.g., 400M or specified as UNLIMITED. 
 
 
Tablespace Space Management
 
Tablespaces can be either Locally Managed or Dictionary Managed.  Dictionary
managed tablespaces have been deprecated (are no longer used—are obsolete) with
Oracle 11g; however, you may encounter them when working at a site that is using
Oracle 10g.
 
When you create a tablespace, if you do not specify extent management, the default is
locally managed.
 
Locally Managed 
The extents allocated to a locally managed tablespace are managed through the use
of bitmaps. 
        Each bit corresponds to a block or group of blocks (an extent). 
        The bitmap value (on or off) corresponds to whether or not an extent is allocated
or free for reuse. 
 
 
        Local management is the default for the SYSTEM tablespace beginning with
Oracle 10g.
        When the SYSTEM tablespace is locally managed, the other tablespaces in the
database must also be either locally managed or read-only.
        Local management reduces contention for the SYSTEM tablespace because
space allocation and deallocation operations for other tablespaces do not need to
use data dictionary tables.
        The LOCAL option is the default so it is normally not specified. 
 
        With the LOCAL option, you cannot specify any DEFAULT
STORAGE, MINIMUM EXTENT, or TEMPORARY clauses. 
 
Extent Management
        UNIFORM – a specification of UNIFORM means that the tablespace is managed
in uniform extents of the SIZE specified.
o   use UNIFORM to enable exact control over unused space and when you can
predict the space that needs to be allocated for an object or objects.
o   Use K, M, G, T, etc  to specify the extent size in kilobytes, megabytes,
gigabytes, terabytes, etc.  The default is 1M; however, you can specify the
extent size with the SIZE clause of the UNIFORM clause.
o   For our small student databases, a good SIZE clause value is 128K.
o   You must ensure with this setting that each extent has at least 5 database
blocks.
        AUTOALLOCATE – if AUTOALLOCATE is specified instead of UNIFORM, the
tablespace is system managed and you cannot specify extent sizes. 
o   AUTOALLOCATE is the default. 
  This simplifies disk space allocation because the database automatically
selects the appropriate extent size.
  This does waste some space but simplifies management of the tablespace.
o   Tablespaces with AUTOALLOCATE are allocated minimum extent sizes
of 64K with a minimum of 5 database blocks per extent.
 
Advantages of Local Management:  Basically all of these advantages lead to
improved system performance in terms of response time, particularly the elimination of
the need to coalesce free extents.
 
        Local management avoids recursive space management operations.  This can
occur in dictionary managed tablespaces if consuming or releasing space in an
extent results in another operation that consumes or releases space in an undo
segment or data dictionary table.
        Because locally managed tablespaces do not record free space in data
dictionary tables, they reduce contention on these tables.
        Local management of extents automatically tracks adjacent free space,
eliminating the need to coalesce free extents.
        The sizes of extents that are managed locally can be determined automatically
by the system.

        Changes to the extent bitmaps do not generate undo information because they
do not update tables in the data dictionary (except for special cases such as
tablespace quota information).
 
Example CREATE TABLESPACE command – this creates a locally
managed Inventory tablespace with AUTOALLOCATE management of extents.
 
CREATE TABLESPACE inventory
    DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf'
        SIZE 50M
    EXTENT MANAGEMENT LOCAL AUTOALLOCATE;
 
Example CREATE TABLESPACE command – this creates a locally
managed Inventory tablespace with UNIFORM management of extents with extent
sizes of 128K.
 
CREATE TABLESPACE inventory
    DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf'
        SIZE 50M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K;
 
Possible Errors
You cannot specify the following clauses when you explicitly specify EXTENT
MANAGEMENT LOCAL:
o   DEFAULT storage clause
o   MINIMUM EXTENT
o   TEMPORARY
 

Segment Space Management in Locally Managed Tablespaces


Use the SEGMENT SPACE MANAGEMENT clause to specify how free and used
space within a segment is to be managed.  Once established, you cannot alter the
segment space management method for a tablespace.
 
MANUAL:  This setting uses free lists to manage free space within segments.
o   Free lists are lists of data blocks that have space available for inserting
rows.
o   You must specify and tune the PCTUSED, FREELISTS, and FREELIST
GROUPS storage parameters.
o   MANUAL is usually NOT a good choice.
AUTO:  This uses bitmaps to manage free space within segments.
o   This is the default.
o   A bitmap describes the status of each data block within a segment with
regard to the data block's ability to have additional rows inserted.
o   Bitmaps allow Oracle to manage free space automatically.
o   Specify automatic segment-space management only for permanent, locally
managed tablespaces.
o   Automatic generally delivers better space utilization than manual, and it is
self-tuning.
 
Example CREATE TABLESPACE command – this creates a locally
managed Inventory tablespace with AUTO segment space management.
 
CREATE TABLESPACE inventory
    DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf'
        SIZE 50M
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;
 
 
Dictionary Managed 
With this approach the data dictionary contains tables that store information that is
used to manage extent allocation and deallocation manually. 
 
NOTE:  Keep in mind you will NOT be able to create any tablespaces of this type
in your 11g database.  This information is provided in the event you have to work
with older databases.
 

 
The DEFAULT STORAGE clause enables you to customize the allocation of
extents.  This provides increased flexibility, but less efficiency than locally managed
tablespaces.
 
Example – this example creates a tablespace using all DEFAULT STORAGE clauses.
 
CREATE TABLESPACE inventory
  DATAFILE '/u02/student/dbockstd/oradata/USER350invent01.dbf'
      SIZE 50M
  EXTENT MANAGEMENT DICTIONARY
  DEFAULT STORAGE (
    INITIAL 50K

    NEXT 50K
    MINEXTENTS 2
    MAXEXTENTS 50
    PCTINCREASE 0);   
 
        The tablespace will be stored in a single, 50M datafile.
        The EXTENT MANAGEMENT DICTIONARY clause specifies the management.
        All segments created in the tablespace will inherit the default storage parameters
unless their storage parameters are specified explicitly to override the default.
 
The storage parameters specify the following:
        INITIAL – size in bytes of the first extent in a segment.
        NEXT – size in bytes of second and subsequent segment extents.
        PCTINCREASE – percent by which each extent after the second extent grows.
o   SMON periodically coalesces free space in a dictionary-managed
tablespace, but only if the PCTINCREASE setting is NOT zero.
o   Use ALTER TABLESPACE <tablespacename> COALESCE to manually
coalesce adjacent free extents.
        MINEXTENTS – number of extents allocated at a minimum to each segment
upon creation of a segment.
        MAXEXTENTS – number of extents allocated at a maximum to a segment – you
can specify UNLIMITED.
 
 

UNDO Tablespace
 
The Undo tablespace is used for automatic undo management.  Note that the UNDO
keyword is required in the CREATE TABLESPACE command for an undo tablespace.

 

 
More than one UNDO tablespace can exist, but only one can be active at a time.
 
A later set of notes will cover UNDO management in detail.
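As a sketch, a minimal CREATE UNDO TABLESPACE command looks like the following (the tablespace name and datafile path are illustrative):

```sql
-- The UNDO keyword is required when creating an undo tablespace
CREATE UNDO TABLESPACE undo01
    DATAFILE '/u02/student/dbockstd/oradata/USER350undo01.dbf'
        SIZE 40M;
```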
 
 

TEMPORARY Tablespace
 
A TEMPORARY tablespace is used to manage space for sort operations.  Sort
operations generate segments, sometimes large segments or lots of them depending
on the sort required to satisfy the specification in
a SELECT statement's WHERE clause. 
 
Sort operations are also generated by SELECT statements that join rows from within
tables and between tables. 
 
Note that a temporary tablespace is created with a TEMPFILE specification instead of
a DATAFILE specification. 
 

 
        Tempfiles are also in a NOLOGGING mode. 
        Tempfiles cannot be made read only or be renamed. 
        Tempfiles are required for read-only databases. 
        Tempfiles are not recovered during database recovery operations. 
        The UNIFORM SIZE parameter needs to be a multiple of
the SORT_AREA_SIZE to optimize sort performance.
        The AUTOALLOCATE clause is not allowed for temporary tablespaces.
        The default extent SIZE parameter is 1M.
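A minimal CREATE TEMPORARY TABLESPACE sketch (the tempfile path is illustrative) shows the TEMPFILE specification:

```sql
-- TEMPFILE replaces the DATAFILE keyword for a temporary tablespace
CREATE TEMPORARY TABLESPACE temp
    TEMPFILE '/u02/student/dbockstd/oradata/USER350temp01.dbf'
        SIZE 20M
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
```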
 
 
Default Temporary Tablespace
 
Each database needs to have a specified default temporary tablespace.  If one is not
specified, then any user account created without specifying a TEMPORARY
TABLESPACE clause is assigned a temporary tablespace in
the SYSTEM tablespace! 
 
This should raise a red flag as you don't want system users to execute SELECT
commands that cause sort operations to take place within the SYSTEM tablespace.
 
If a default temporary tablespace is not specified at the time a database is created, a
DBA can create one by altering the database.
 
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
 
After this, new system user accounts are automatically allocated temp as their
temporary tablespace.  If you ALTER DATABASE to assign a new default temporary
tablespace, all system users are automatically reassigned to the new default
tablespace for temporary operations.
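You can verify the current default temporary tablespace by querying the DATABASE_PROPERTIES view:

```sql
SELECT property_value
  FROM database_properties
 WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';
```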
 
Limitations: 
        A default temporary tablespace cannot be dropped unless a replacement is
created.  This is usually only done if you were moving the tablespace from one
disk drive to another.
        You cannot take a default temporary tablespace offline – this is done only for
system maintenance or to restrict access to a tablespace temporarily.  None of
these activities apply to default temporary tablespaces.
        You cannot alter a default temporary tablespace to make it permanent.
 
Temporary Tablespace Groups
You can have more than one temporary tablespace online and active.  Oracle supports
this through the use of temporary tablespace groups – this is a synonym for a list of
temporary tablespaces.
        A single user can have more than one temporary tablespace in use by assigning
the temporary tablespace group as the default to the user instead of a single
temporary tablespace.
        Example:  Suppose two temporary tablespaces
named TEMP01 and TEMP02 have been created.  This code assigns the
tablespaces to a group named TEMPGRP.
 
SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP tempgrp;
Tablespace altered.
 

SQL> ALTER TABLESPACE temp02 TABLESPACE GROUP tempgrp;
Tablespace altered.
 
        Example continued:  This code changes the database's default temporary
tablespace to TEMPGRP – you use the same command that would be used to
assign a temporary tablespace as the default because temporary tablespace
groups are treated logically the same as an individual temporary tablespace.
 
SQL> ALTER DATABASE DEFAULT TEMPORARY TABLESPACE tempgrp;
Database altered.
 
        To drop a tablespace group, first drop all of its members.  Drop a member by
assigning the temporary tablespace to a group with an empty string.
 
SQL> ALTER TABLESPACE temp01 TABLESPACE GROUP '';
Tablespace altered.
 
        To assign a temporary tablespace group to a user, the CREATE USER
SQL command is the same as for an individual tablespace.  In this
example user350 is assigned the temporary tablespace group TEMPGRP.
 
SQL> CREATE USER user350 IDENTIFIED BY secret_password
2           DEFAULT TABLESPACE users
3     TEMPORARY TABLESPACE tempgrp;
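To see which temporary tablespaces belong to which group, query the DBA_TABLESPACE_GROUPS view:

```sql
SELECT group_name, tablespace_name
  FROM dba_tablespace_groups;
```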
 
 

USERS, DATA and INDEXES Tablespaces


 
Most Oracle databases will have a USERS permanent tablespace. 
        This tablespace is used to store objects created by individual users of the
database. 
        At SIUE we use the USERS tablespace as a storage location for tables, indexes,
views, and other objects created by students.
        All students share the same USERS tablespace.
 
Many Oracle databases will have one or more DATA tablespaces. 
        A DATA tablespace is also permanent and is used to store application data
tables such as ORDER ENTRY or INVENTORY MANAGEMENT applications. 
        For large applications, it is often a practice to create a special DATA tablespace
to store data for the application.   In this case the tablespace may be named
whatever name is appropriate to describe the objects stored in the tablespace
accurately. 
 
Oracle databases having a DATA (or more than one DATA) tablespace will also have
an accompanying INDEXES tablespace.
        The purpose of separating tables from their associated indexes is to improve I/O
efficiency. 
        The DATA and INDEXES tablespaces will typically be placed on different disk
drives thereby providing an I/O path for each so that as tables are updated, the
indexes can also be updated simultaneously.
 

Bigfile Tablespaces
 
A Bigfile tablespace is best used with a server that uses a RAID storage device with
disk striping – a single datafile is allocated and it can be up to 8 EB (exabytes – a
million terabytes) in size with up to 4G blocks.
 
Normal tablespaces are referred to as Smallfile tablespaces.
 
Why are Bigfile tablespaces important?
        The maximum number of datafiles in an Oracle database is limited (usually
to 64K files) – think big here – think about a database for the Internal
Revenue Service.
o   A Bigfile tablespace with 8K blocks can contain a 32 terabyte datafile.
o   A Bigfile tablespace with 32K blocks can contain a 128 terabyte datafile.
o   These sizes enhance the storage capacity of an Oracle database.
o   These sizes can also reduce the number of datafiles to be managed.
 
        Bigfile tablespaces can only be locally managed with automatic segment space
management except for locally managed undo tablespaces, temporary
tablespaces, and the SYSTEM tablespace.
        If a Bigfile tablespace is used for automatic undo or temporary segments, the
segment space management must be set to MANUAL.
        Bigfile tablespaces save space in the SGA and control file because fewer
datafiles need to be tracked.
        ALTER TABLESPACE commands on a Bigfile tablespace do not reference a
datafile because only one datafile is associated with each Bigfile tablespace.
 
Example – this example creates a Bigfile tablespace named Graph01 (to store data
that is graphical in nature and that consumes a lot of space).  Note the use of
the BIGFILE keyword.
 
CREATE BIGFILE TABLESPACE graph01
  DATAFILE '/u03/student/dbockstd/oradata/USER350graph01.dbf'
      SIZE 10G;
 
        Example continued:  This resizes the Bigfile tablespace to increase the capacity
from 10 gigabytes to 40 gigabytes.
 
SQL> ALTER TABLESPACE graph01 RESIZE 40G;

Tablespace altered.
 
        Example continued:  This sets the AUTOEXTEND option on to enable the
tablespace to extend in size 10 gigabytes at a time.
 
SQL> ALTER TABLESPACE graph01 AUTOEXTEND ON NEXT 10g;
Tablespace altered.
 
Notice in the above two examples that there was no need to refer to the datafile by
name since the Bigfile tablespace has only a single datafile.
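You can confirm whether a tablespace is a Bigfile tablespace by checking the BIGFILE column of DBA_TABLESPACES:

```sql
-- BIGFILE is YES for a Bigfile tablespace, NO for a Smallfile tablespace
SELECT tablespace_name, bigfile
  FROM dba_tablespaces;
```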
 

Compressed Tablespaces
 
This type of tablespace is used to compress all tables stored in the tablespace.
        The keyword DEFAULT is used to specify compression when followed by the
compression type. 
        You can override the type of compression used when creating a table in the
tablespace.
 
Compression has these advantages:
        Compression saves disk space, reduces memory use in the database buffer
cache, and can significantly speed query execution during reads.
        Compression has a cost in CPU overhead for data loading and DML. However,
this cost might be offset by reduced I/O requirements.
 
This example creates a compressed tablespace named COMP_DATA.  Here
the COMPRESS FOR OLTP clause specifies the type of compression.  You can study
the other types of compression on your own from your readings.
 
CREATE TABLESPACE comp_data
    DATAFILE '/u02/oradata/DBORCL/DBORCLcomp_data.dbf' SIZE 50M

    DEFAULT COMPRESS FOR OLTP
    EXTENT MANAGEMENT LOCAL
    SEGMENT SPACE MANAGEMENT AUTO;
 
Tablespace created.
 
 

Encrypted Tablespaces
 
Only permanent tablespaces can be encrypted.
        Purpose is to protect sensitive data from unauthorized access through the
operating system file system.
        Tablespace encryption is transparent to applications.
        All tablespace blocks are encrypted including all segment types. 
        Data from an encrypted tablespace is automatically encrypted when written to an
undo tablespace, redo logs, and temporary tablespaces.
        Partitioned tables/indexes can have both encrypted and non-encrypted
segments in different tablespaces.
        The database must have the COMPATIBLE parameter set to 11.1.0 or higher.
        There is no disk space overhead for encrypting a tablespace.
 
Encryption requires creation of an Oracle wallet to store the master encryption key.
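As a sketch, the 11g commands for setting the master key and opening the wallet look like the following (the wallet location itself is configured in sqlnet.ora, and the password here is illustrative):

```sql
-- Creates the wallet (if needed) and sets the master encryption key
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_password";

-- After an instance restart, the wallet must be reopened
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_password";
```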
 
Transparent data encryption supports industry-standard encryption algorithms.  The
default is AES128 algorithm that uses 128-bit keys.
 
This example creates an encrypted tablespace named SECURE_DATA that uses 256-
bit keys.
 
CREATE TABLESPACE secure_data

    DATAFILE '/u02/oradata/DBORCL/DBORCLsecure_data.dbf' SIZE 50M
    ENCRYPTION USING 'AES256' EXTENT MANAGEMENT LOCAL
    DEFAULT STORAGE(ENCRYPT);
 
Tablespace created.
 
You cannot encrypt an existing tablespace with the ALTER
TABLESPACE statement.  You would need to export the data from an unencrypted
tablespace and then import it into an encrypted tablespace.
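One way to perform such a migration is with Data Pump; this sketch assumes the schema, directory object, and tablespace names (appuser, DATA_PUMP_DIR, plain_data, secure_data) purely for illustration:

```shell
# Export the schema that owns the unencrypted data
expdp system schemas=appuser directory=DATA_PUMP_DIR dumpfile=appuser.dmp

# Import it, remapping segments into the encrypted tablespace
impdp system schemas=appuser directory=DATA_PUMP_DIR dumpfile=appuser.dmp \
      remap_tablespace=plain_data:secure_data
```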
 
 

Read Only Tablespaces


 
A tablespace may be made read only.  One purpose for this action is to enable system
maintenance that involves dropping tables and associated indexes stored in the
tablespace.  This can be accomplished while a tablespace is in read only mode
because the DROP command affects only information in the Data Dictionary which is in
the SYSTEM tablespace, and the SYSTEM tablespace is not read only. 
 
The command to make a tablespace read only is:
 
ALTER TABLESPACE tablespace_name READ ONLY;
 
This also causes an automatic checkpoint of the tablespace.
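You can confirm the change by checking the tablespace status in the data dictionary:

```sql
-- STATUS shows ONLINE, OFFLINE, or READ ONLY
SELECT tablespace_name, status
  FROM dba_tablespaces;
```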
 
If the tablespace being modified is locally managed, the segments associated with the
dropped tables and indexes are changed to temporary segments so that the
bitmap is not updated. 
 
To change a tablespace from read only to read/write, all datafiles for the tablespace
must be online. 
 
ALTER TABLESPACE tablespace_name READ WRITE;
 
Another reason for making a tablespace read only is to support the movement of the
data to read only media such as CD-ROM.  This type of change would probably be
permanent.  This approach is sometimes used for the storage of large quantities of
static data that doesn’t change.  This also eliminates the need to perform system
backups of the read only tablespaces.  To move the datafiles to read only media, first
alter the tablespace to read only, then rename the datafiles to the new location by
using the ALTER TABLESPACE RENAME DATAFILE option. 
 
 

Offline Tablespaces
 
Most tablespaces are online all of the time; however, a DBA can take a
tablespace offline.  This enables part of the database to be available – the
tablespaces that are online – while enabling maintenance on the offline
tablespace.  Typical activities include:
        Offline tablespace backup – a tablespace can be backed up while online, but
offline backup is faster.
        Recover an individual tablespace or datafile.
        Move a datafile without closing the database.
 
You cannot use SQL to reference offline tablespaces – this simply generates a system
error.  Additionally, the action of taking a tablespace offline/online is always recorded in
the data dictionary and control file(s).  Tablespaces that are offline when you shutdown
a database are offline when the database is again opened.
 
The commands to take a tablespace offline and online are simple ALTER
TABLESPACE commands.  These also take the associated datafiles offline.
 
ALTER TABLESPACE application_data OFFLINE;
ALTER TABLESPACE application_data ONLINE;
 
The full syntax is:
 
ALTER TABLESPACE tablespace
{ONLINE |OFFLINE [NORMAL|TEMPORARY|IMMEDIATE|FOR RECOVER]}
 

NORMAL:  All data blocks for all datafiles that form the tablespace are written from the
SGA to the datafiles.  A tablespace that is offline NORMAL does not require any type
of recovery when it is brought back online.
 
TEMPORARY:  A checkpoint is performed for all datafiles in the tablespace.  Any
offline files may require media recovery. 
 
IMMEDIATE:  A checkpoint is NOT performed.  Media recovery on the tablespace is
required before it is brought back online to synchronize the database objects.
 
FOR RECOVER:  Used to place a tablespace in offline status to enable point-in-time
recovery. 
 
Errors and Restrictions: 
        If DBWn fails to write to a datafile after several attempts, Oracle will
automatically take the associated tablespace offline – the DBA will then recover
the datafile.
        The SYSTEM tablespace cannot be taken offline. 
        Tablespaces with active undo segments or temporary segments cannot be
taken offline. 
 

 

Tablespace Storage Settings


 
Note: You will not be able to practice the commands in this section because
Dictionary-Managed tablespaces cannot be created in Oracle 11g.
 
Any of the storage settings for Dictionary-Managed tablespaces can be modified with
the ALTER TABLESPACE command.  This only alters the default settings for future
segment allocations.
 

Tablespace Sizing
 
Normally over time tablespaces need to have additional space allocated.  This can be
accomplished by setting the AUTOEXTEND option to enable a tablespace to increase
automatically in size.
        This can be dangerous if a “runaway” process or application generates data and
consumes all available storage space. 
        An advantage is that applications will not ABEND because a tablespace runs out
of storage capacity.

        This can be accomplished when the tablespace is initially created or by using
the ALTER TABLESPACE command at a later time.
 
CREATE TABLESPACE application_data
  DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
      SIZE 200M
  AUTOEXTEND ON NEXT 48K MAXSIZE 500M;
 
This query uses the DBA_DATA_FILES view to determine if AUTOEXTEND is
enabled for selected tablespaces in the SIUE DBORCL database.
 
SELECT tablespace_name, autoextensible
FROM dba_data_files;
 
TABLESPACE_NAME                AUT
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       YES
USERS                          NO
 
        Use the ALTER DATABASE command to manually change datafile settings.
 
ALTER DATABASE
  DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
  AUTOEXTEND ON MAXSIZE 600M;
 
This command looks similar to the above command, but this one resizes a datafile
while the above command sets the maxsize of the datafile.   
 
ALTER DATABASE
  DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
  RESIZE 600M;
 
        Add a new datafile to a tablespace with the ALTER TABLESPACE command. 
 
ALTER TABLESPACE application_data
  ADD DATAFILE '/u01/student/dbockstd/oradata/USER350data02.dbf'
  SIZE 200M;
 
 

Moving/Relocating Tablespaces/Datafiles
 
The ALTER TABLESPACE command can be used to move datafiles by renaming
them.  This cannot be used if the tablespace is the SYSTEM tablespace or contains
active undo or temporary segments. 
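As a sketch (the paths are illustrative), the sequence for renaming a datafile in a non-SYSTEM tablespace is: take the tablespace offline, move the file with an operating system command, rename it in the database, and bring the tablespace back online:

```sql
ALTER TABLESPACE application_data OFFLINE;

-- At the operating system level:
--   mv /u01/student/dbockstd/oradata/USER350data01.dbf \
--      /u02/student/dbockstd/oradata/USER350data01.dbf

ALTER TABLESPACE application_data
  RENAME DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
              TO  '/u02/student/dbockstd/oradata/USER350data01.dbf';

ALTER TABLESPACE application_data ONLINE;
```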
 

 
The ALTER DATABASE command can also be used with the RENAME option.   This
is the method that must be used to move the SYSTEM tablespace because it cannot
be taken offline.  The steps are: 
    1. Shut down the database.
    2. Use an operating system command to move the files.
    3. Mount the database.
    4. Execute the ALTER DATABASE RENAME FILE command.
 
ALTER DATABASE RENAME
  FILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
    TO '/u02/student/dbockstd/oradata/USER350data01.dbf';
 
    5. Open the database.
 
 

Dropping Tablespaces
 
Occasionally tablespaces are dropped due to database reorganization.  A tablespace
that contains data cannot be dropped unless the INCLUDING CONTENTS clause is
added to the DROP command.  Since tablespaces will almost always contain data, this
clause is almost always used. 
 
A DBA cannot drop the SYSTEM tablespace or any tablespace with active
segments.  Normally you should take a tablespace offline to ensure no active
transactions are being processed. 
 
An example command set that drops the compressed
tablespace COMP_DATA created earlier is:
 
ALTER TABLESPACE comp_data OFFLINE;
 
DROP TABLESPACE comp_data
  INCLUDING CONTENTS AND DATAFILES
  CASCADE CONSTRAINTS;
 
The AND DATAFILES clause causes the datafiles to also be deleted.  Otherwise, the
tablespace is removed from the database as a logical unit, and the datafiles must be
deleted with operating system commands. 
 
The CASCADE CONSTRAINTS clause drops all referential integrity constraints where
objects in one tablespace are constrained/related to objects in another tablespace. 
 
 

Non-Standard Block Sizes


It may be advantageous to create a tablespace with a nonstandard block size in order
to import data efficiently from another database.  This also enables transporting
tablespaces with unlike block sizes between databases.
        A block size is nonstandard if it differs from the size specified by
the DB_BLOCK_SIZE initialization parameter.
        The BLOCKSIZE clause of the CREATE TABLESPACE statement is used to
specify nonstandard block sizes.
        In order for this to work, you must have already set DB_CACHE_SIZE and at
least one DB_nK_CACHE_SIZE initialization parameter values to correspond to
the nonstandard block size to be used.
        The DB_nK_CACHE_SIZE initialization parameters that can be used are:
o   DB_2K_CACHE_SIZE
o   DB_4K_CACHE_SIZE
o   DB_8K_CACHE_SIZE
o   DB_16K_CACHE_SIZE
o   DB_32K_CACHE_SIZE
 
        Note that the DB_nK_CACHE_SIZE parameter corresponding to the standard
block size cannot be used – it will be invalid – instead use
the DB_CACHE_SIZE parameter for the standard block size.
 
Example – these parameters specify a standard block size of 8K with a cache for
standard block size buffers of 12M.  The 2K and 16K caches will be configured with
cache buffers of 8M each.
 
DB_BLOCK_SIZE=8192
DB_CACHE_SIZE=12M
DB_2K_CACHE_SIZE=8M
DB_16K_CACHE_SIZE=8M
 
Example – this creates a tablespace with a blocksize of 2K (assume the standard block
size for the database was 8K).
 
CREATE TABLESPACE inventory
  DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
      SIZE 50M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
  BLOCKSIZE 2K;
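You can verify the block size assigned to each tablespace through the BLOCK_SIZE column of DBA_TABLESPACES:

```sql
SELECT tablespace_name, block_size
  FROM dba_tablespaces;
```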
 
 
 

Managing Tablespaces with Oracle Managed Files


 
As you learned earlier, when you use an OMF approach,
the DB_CREATE_FILE_DEST parameter in the parameter file specifies that datafiles
are to be created and defines their location.  The DATAFILE clause to name files is not
used because filenames are automatically generated by the Oracle Server, for
example, ora_tbs1_2xfh990x.dbf.
 
You can also use the ALTER SYSTEM command to dynamically set this parameter in
the SPFILE parameter file.
 
ALTER SYSTEM SET
  DB_CREATE_FILE_DEST = '/u02/student/dbockstd/oradata';
 
Additional tablespaces are specified with the CREATE TABLESPACE command
shown here that specifies not the datafile name, but the datafile size.  You can also
add datafiles with the ALTER TABLESPACE command.
 
CREATE TABLESPACE application_data DATAFILE SIZE 100M;
 
ALTER TABLESPACE application_data ADD DATAFILE;
 
Setting the DB_CREATE_ONLINE_LOG_DEST_n parameter prevents log files and
control files from being located with datafiles – this will reduce I/O contention. 
 
When OMF tablespaces are dropped, their associated datafiles are also deleted at the
operating system level. 
 

Tablespace Information in the Data Dictionary


 

The following data dictionary views can be queried to display information about
tablespaces.
        Tablespaces:  DBA_TABLESPACES, V$TABLESPACE
        Datafiles:  DBA_DATA_FILES, V$DATAFILE
        Temp files:  DBA_TEMP_FILES, V$TEMPFILE
 
You should examine these views in order to familiarize yourself with the information
stored in them.
 
This is an example query that will display free and used space for each tablespace in a
database.
SELECT /*+ RULE */ df.tablespace_name "Tablespace",
       df.bytes / (1024 * 1024) "Size (MB)",
       SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
       Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
       Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
  FROM dba_free_space fs,
       (SELECT tablespace_name,SUM(bytes) bytes
          FROM dba_data_files
         GROUP BY tablespace_name) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,df.bytes
UNION ALL
SELECT /*+ RULE */ df.tablespace_name tspace,
       fs.bytes / (1024 * 1024),
       SUM(df.bytes_free) / (1024 * 1024),
       Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
       Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
  FROM dba_temp_files fs,
       (SELECT tablespace_name,bytes_free,bytes_used
          FROM v$temp_space_header

         GROUP BY tablespace_name,bytes_free,bytes_used) df
 WHERE fs.tablespace_name (+)  = df.tablespace_name
 GROUP BY df.tablespace_name,fs.bytes,df.bytes_free,df.bytes_used
 ORDER BY 4 DESC;
 

This shows output for the DBORCL database located on the SOBORA2 server.


 
Tablespace          Size (MB)  Free (MB)     % Free     % Used
------------------- ---------- ---------- ---------- --------
UNDOTBS1                   179     166.75         93          7
TEMP                        50         44         88         12
USERS                       25    21.6875         87         13
SYSAUX                     325     141.25         43         57
SYSTEM                     325     65.625         20         80
 
 
 

Module 5 – Create a Database
 

Objectives
 
 Learn the Optimal Flexible Architecture (OFA) for database planning. 
 Create an Oracle database manually. 
 Become familiar with the Oracle Database Configuration Assistant.
 Become familiar with creating an Oracle database using Oracle Managed Files.
 

Database Planning
 
Database planning is based on your understanding of the purpose and type of
database needed.  Many organizations have multiple databases that serve different
purposes such as online-transaction-processing (OLTP), data warehousing, decision
support systems, and general purpose usage.
 
Database planning also requires you to outline the database architecture in terms of:
        How many data files will be needed? 
o   Let's assume we will start with a basic, general purpose usage database.
o   We will need database files for the different tablespaces:
1.   SYSTEM: data dictionary.
2.   SYSAUX: tables and objects for various Oracle software products.
3.   UNDOTBS: undo tablespace for undo recovery.
4.   TEMP: storage for temporary objects, such as sorting.
5.   DATA: permanent organizational data objects, e.g., Orders,
Customers, Products, etc.
6.   INDEXES: permanent indexes for tables stored in the DATA
tablespace.

7.   USERS: storage for user generated objects that are not part of the
organizational data.
8.   Any special usage tablespaces, such as dedicated tablespaces for
large applications.
        Control file location? 
o   To maximize recoverability, 2 or more copies of the control file are needed.
o   Any disk drive will do.
        Redo Log file location and usage?
o   To maximize recoverability, 2 or more groups with multiple redo log files per
group are needed.
o    Select a disk drive to minimize I/O contention.
        What size system global area will the database need?
o   Start out with about 1 G. 
o   Let Oracle optimize the memory allocation for the various caches.
        Is database archiving necessary?
o   Production systems - archiving is essential.
o   Archiving requires storage capacity for archive redo log files and backups of
the database.
o   You must monitor the disk storage usage.
        How many disk drives are available?
o   For SOBORA2, you have 3 disk drives available.
o   Additional disk drives cost $$$ - the purchase of more space would need to
be justified.
        How many instances and/or databases will run?  This is situation
dependent.  Your student database will be a single instance.
        Will the database be distributed?  Again, this is situation dependent.
 
 

Optimal Flexible Architecture (OFA)
 
Oracle’s method for the physical layout of an Oracle database’s file structure is termed
the OFA. 
        This approach requires the allocation of Oracle database files across several
different physical disk devices in order to provide good system performance for
data retrieval/storage. 
        The OFA is most applicable when the database server has more than one
physical disk drive although the OFA naming convention is applicable to any
database server.
 
The objective of the OFA approach is to make it easy to administer a growing database
where you need to add data, add users, create new databases or tablespaces, add
hardware, and distribute the input-output load across disk drives.
 
This gives the steps to implementing an OFA.
 
Step 1.  Establish an Operating System Directory Structure  
 
An orderly operating system directory structure is one in which any database file can
be stored on any disk drive resource.  This is accomplished by creating
the ORACLE_BASE environment variable. 
        The ORACLE_BASE is the base subdirectory for Oracle.
        On SOBORA2, ORACLE_BASE=/u01/app/oracle
        Depending on the installation, you may or may not find that this variable has
been created for any given database. 
        Under $ORACLE_BASE are additional subdirectories where Oracle objects are
stored following a standard approach.  These may include:
 
 

dbock/@sobora2.isg.siue.edu=>ls -al
total 24
drwxr-xr-x   6 oracle dba 4096 May 14 23:19 .
drwxr-xr-x   5 oracle dba 4096 Jun 16  2009 ..
drwxr-xr-x   2 oracle dba 4096 May 14 11:26 checkpoints
drwxrwxr-x  11 oracle dba 4096 May 14 10:44 diag
drwxrwxr-x   2 oracle dba 4096 May 14 23:19 fast_recovery_area
drwxr-xr-x   4 oracle dba 4096 May 14 10:41 product
 
        These directories are created to store files associated with the Oracle RDBMS
software – not with a specific database.
        PRODUCT – This subdirectory stores the Oracle kernel and other files that make
up the Oracle Relational DBMS product. 
o   Under the PRODUCT subdirectory you will find a subdirectory for each
version of Oracle that is installed. 
o   Examine this to determine the version number for our current version of
Oracle. 
o   This approach to subdirectory structuring allows the installation of newer
versions of Oracle without taking older versions off-line until all testing and
changeover requirements are completed for the newer version of Oracle.
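The version-per-subdirectory layout under PRODUCT can be sketched as follows.  A
throwaway directory created with mktemp stands in for ORACLE_BASE
(/u01/app/oracle on SOBORA2) so the sketch runs anywhere; the version numbers
shown are illustrative.

```shell
# Sketch: each installed Oracle version gets its own subdirectory under
# ORACLE_BASE/product.  A temporary directory stands in for the real
# ORACLE_BASE so this can be tried on any machine.
ORACLE_BASE=$(mktemp -d)
mkdir -p "$ORACLE_BASE/product/10.2.0" "$ORACLE_BASE/product/11.2.0"

# Listing product/ shows which versions are installed side by side.
versions=$(ls "$ORACLE_BASE/product")
echo "$versions"
```

Listing $ORACLE_BASE/product is how you determine which versions are installed
without taking any of them off-line.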
 
Step 2.  Identify Available Disk Drives and Establish the Oradata Subdirectory
 
The DBA will identify the available disk drives for use in creating a database. 
        Examine the SOBORA2 Server at SIUE to determine how many disk drives are
available.
        There are three available. 
        Space allocated for your work is on directories like this.  Replace dbockstd (the
dbock student account) with your own SIUE EID value.
 

/u01/student/dbockstd
/u02/student/dbockstd
/u03/student/dbockstd
 
Typically, a DBA will create a subdirectory named oradata (as noted above) on each
disk drive – this subdirectory on each drive will store the files that comprise the
database.   
 
Spreading the database files across multiple disk drives reduces the contention that
will exist between the datafiles for simultaneous use of input-output paths.
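The oradata-per-drive layout above can be sketched as a short shell loop.  A
temporary base directory stands in for the real /u01, /u02, and /u03 mount
points, and dbockstd is the example student account used throughout these notes.

```shell
# Sketch: create an oradata subdirectory on each available mount point.
# BASE is a throwaway stand-in for the server root so nothing real is touched.
BASE=$(mktemp -d)
for mp in u01 u02 u03; do
  mkdir -p "$BASE/$mp/student/dbockstd/oradata"
  chmod 775 "$BASE/$mp/student/dbockstd/oradata"   # permissions per these notes
done
ls -d "$BASE"/u0*/student/dbockstd/oradata
```

On the real server you would run the mkdir and chmod commands once per mount
point, as shown later in the database creation steps.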
 
Step 3.  Create Different Groups of File Objects
 
This step involves the creation of separate Tablespaces to store objects.
        Initially you will create a database with four tablespaces: SYSTEM, SYSAUX,
UNDO, and TEMP. 
        You will add additional tablespaces as needed. This also allows the DBA to
separate and more easily manage object groups as Tablespaces.  
        Examples of additional database tablespaces include USERS, DATA,
INDEXES, BIGFILE, and others.
 
Some rules that are recommended include:
        Separate groups of objects with different fragmentation characteristics in different
tablespaces; for example, avoid storing data and undo segments together
because their fragmentation characteristics are different
        Separate groups of segments that will contend for disk resources in different
tablespaces; for example, avoid storing data tables and their associated
indexes together because Oracle writes to both simultaneously: as data rows
are inserted, deleted, or updated, the associated indexes are updated as well.
        Separate groups of segments that have different behavioral characteristics in
different tablespaces; for example, avoid storing database objects that need daily
backup in tablespaces with objects that only need yearly backup.
        Create a separate tablespace for each large application – smaller applications
can usually share a tablespace such as DATA without many contention
problems.
        Store static segments separately from high use dynamic segments, such as
those used for Data Manipulation Language (DML).
        Store staging tables for a data warehouse in their own tablespace.
        Store materialized views in a separate tablespace from the tablespace used to
store the base tables on which the materialized views are built.
        If tables/indexes are partitioned, store each partition in its own tablespace.
 
Step 4.  Maximize Database Reliability
 
In order for a database to start up, it must have a valid control file.
        A valid control file is one that is not corrupted and is up to date. 
        Because this file is critical, it is common to keep at least three active
copies of the database control file on three different physical disk
drives.  Thus, if a disk drive crashes, a good copy of the control file is
available.
        You will start out with two copies of the control file for your student database.
Later you will add more.
 
A database should have at least three database redo log groups.
        Although only two log groups are required at a minimum, this is generally not
adequate.
        Database reliability is maximized by isolating the groups to the extent possible
on disk drives serving few or no files that will be active while the database is in
use.
        You will start out with two groups with one redo log file per group. Later you will
add more.
 
You should try to store tablespaces whose data participate in disk resource contention
on separate disk drives.
 

A Minimum OFA Recommended Disk Configuration
 
The minimum configuration recommended by Oracle requires five disk drives.
 
        DISK1: Oracle Executables, SYSTEM Tablespace datafiles, SYSAUX
Tablespace datafiles, USERS Tablespace datafiles, and 1st copy of the control
file.
        DISK2: DATA Tablespace datafiles, 2nd copy of the control file, Redo Log Files.
        DISK3: INDEXES Tablespace datafiles, 3rd copy of the control file, Redo Log
Files.
        DISK4: UNDO Tablespace datafiles, TEMP Tablespace datafile, and any export
files.
        DISK5: Archive Redo Log files, Redo Log Files.
    
 This configuration is NOT the most desirable configuration because having the
Redo Logs and Archive Redo Log files on the same disk drive will cause some
contention problems. 
 It is also not desirable to have SYSTEM and USERS tablespaces on the same
disk drive.

 Other Acceptable Configurations


 Obviously everything is not always optimal.  A fairly acceptable configuration is the 3-
disk drive configuration because you can still separate the location of the control files.
        DISK1: Oracle Executables, SYSTEM Tablespace datafiles, SYSAUX
Tablespace datafiles, Redo Log Files, UNDO Tablespace datafiles, any export
files, 1st copy of the control file.
        DISK2: DATA Tablespace datafiles, USERS Tablespace datafiles, TEMP
Tablespace datafiles, 2nd copy of the control file, Redo Log Files.
        DISK3: Archive Redo Log Files, Redo Log Files, INDEXES Tablespace
datafiles, 3rd copy of the control file.
 

 

OFA File Naming Standard


 
You need to have a standard file naming convention – this will make it easier to do
backups and to find files. 
 
Mount point—This is the top point in the physical disk file structure.  The
recommended naming format is:
 
<string constant><numeric value>
 
 where <string constant> can be one or more letters and <numeric value> is a two
or three digit value. 
 Typical UNIX and LINUX examples for naming the disk drives:  /u01 /u02
/u03   or   /a01  /a02  /a03. 
 
Software executables—each version of Oracle that is installed (e.g., 11g, 10g, and
incremental versions) should reside in separate directories with a naming format of:
 
<string constant><numeric value>/<directory type>/<product owner>
 
 where <directory type> implies the type of file in the directory and <product
owner> is the name of the user that owns/installs the directory files.
 
 Example:  /u01/app/oracle would contain application-related
files (app) installed by the user named oracle.
 
 This lists the different versions of the Oracle RDBMS on SOBORA2.
 

dbock/@sobora2.isg.siue.edu=>ls -al
drwxr-xr-x  3 oracle dba 4096 May 15  2009 10.2.0
drwxr-xr-x  3 oracle dba 4096 May 14 10:41 11.2.0
 
Database files—each set of database files belonging to a single database should
reside in separate directories with the naming format of:
 
<string constant><numeric value>/<oradata>/<database name>
 
 Where <oradata> is a fixed directory name and <database name> is the value of
the DB_NAME initialization parameter.
 Example:  /u02/oradata/DBORCL
 
Tablespace file names—Internal Oracle tablespace names can be up to 30
characters in length. 
        In a UNIX/LINUX environment it is advisable to limit the names of the files that
store tablespaces to eight characters or less because portable UNIX/LINUX
filenames are limited to 14 characters to facilitate copying. 
        Use a suffix of <n>.dbf where the value of <n> is two digits – thus the suffix
requires six characters leaving eight characters to name the tablespace.
        Only datafiles, control files, and redo log files should be stored in the <database
name> directory.
        This is the organization of the DBORCL database:
 
ORGANIZATION OF THE DBORCL DATABASE
 
Tablespaces: 
Tablespace                      Size (MB)  Free (MB)     % Free     % Used
------------------------------ ---------- ---------- ---------- ----------
UNDOTBS1                              435   423.6875         97          3
USERS                                22.5     19.375         86         14
SYSAUX                                340    10.1875          3         97
SYSTEM                                480      6.625          1         99
TEMP                                   28          0          0        100

select file_name, tablespace_name from dba_data_files;


 FILE_NAME                                TABLESPACE
---------------------------------------- ----------
/u02/oradata/DBORCL/DBORCLusers01.dbf    USERS
/u01/oradata/DBORCL/DBORCLsysaux01.dbf   SYSAUX
/u03/oradata/DBORCL/DBORCLundotbs01.dbf  UNDOTBS1
/u01/oradata/DBORCL/DBORCLsystem01.dbf   SYSTEM
 
select file_name, tablespace_name from dba_temp_files;
FILE_NAME                                TABLESPACE
---------------------------------------- ----------
/u03/oradata/DBORCL/DBORCLtemp01.dbf     TEMP
 
Data Files/Redo Log Files/Control Files
/u01 DBORCLsystem01.dbf; DBORCLsysaux01.dbf
DBORCLredo01a.rdo; DBORCLredo02a.rdo; DBORCLredo03a.rdo
DBORCLcontrol01.ctl
/u02 DBORCLusers01.dbf
DBORCLredo01b.rdo; DBORCLredo02b.rdo; DBORCLredo03b.rdo
DBORCLcontrol02.ctl
/u03 DBORCLtemp01.dbf; DBORCLundotbs01.dbf
DBORCLcontrol03.ctl
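The naming convention above can be exercised with a small sketch that assembles
a datafile path from the database name, tablespace name, and two-digit sequence
number, then checks the <tablespace><n>.dbf portion against the 14-character
portable filename limit.  The DBORCL values are the example from these notes.

```shell
# Sketch: build an OFA-style datafile path and verify the base filename
# (<tablespace><n>.dbf) fits the 14-character portable UNIX filename limit.
db=DBORCL; ts=users; n=01
fname="${ts}${n}.dbf"                       # e.g. users01.dbf
path="/u02/oradata/${db}/${db}${fname}"     # <mount>/oradata/<db>/<db><file>
len=${#fname}
echo "$path (base name is $len characters)"
```

Since the .dbf suffix plus two digits consumes six characters, the tablespace
portion of the name may be at most eight characters.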
 
 
 

Database Creation
 

Prerequisites
 
In order to create a new database, you must have a privileged account that is
authenticated by either the operating system or the use of a special Oracle database
password file. 
        We accomplish this at SIUE by assigning your account to the LINUX group
named dba. 
        This means that you have SYSDBA privileges. 
 
You must ensure that the memory available for the SGA, Oracle executables, and
background processes is sufficient. 
 
You must calculate the necessary disk space for the database. 
        This is a fairly complex set of calculations if you follow Oracle's method for
calculating disk space. 
        Alternatively, you can create a simple database and estimate disk space for the
production database from the simple database.
 

Authentication Methods for DBAs


 
If the database to be administered is local – this means that you will sit at the terminal
where the database resides – then you can use local database administration
authentication by the operating system.  You can also do this if you have a secure
connection.  Alternatively, as is shown in the figure below, you can use a password file.
 

 
A password file is created by using the password utility named orapwd.  When you
connect using SYSDBA privilege, you connect to the SYS schema.  The steps to using
password file authentication are given here.
 
NOTE: AS A STUDENT YOU DO NOT NEED A PASSWORD FILE—DO NOT
CREATE ONE.
 
1. Create the password file using the password utility orapwd.  Example:
 
orapwd file=filename password=password entries=max_users
 
where:
 
        filename: Name of the password file (mandatory parameter).  You select a
filename, typically orapwSID.

        password: The password for SYSOPER and SYSDBA (mandatory parameter)
        entries: The maximum number of distinct users allowed to connect as SYSDBA
or SYSOPER.  You must create a new password file if the number of DBAs
exceeds the number of entries set when the password file was created – delete
the file and create a new one, then reauthenticate users.  Note: There are no
spaces around the equal-to (=) character.
 
Example:
 
orapwd file=$ORACLE_HOME/dbs/orapwUSER350 password=admin
entries=25
 
2. Set the REMOTE_LOGIN_PASSWORDFILE parameter in the PFILE to a value
of EXCLUSIVE.  
        EXCLUSIVE means only one instance can use the password file and that the
password file contains names other than SYS. 
        Using an EXCLUSIVE password file you can grant SYSDBA or SYSOPER
privileges to individual users.
 
3. Connect to the database using the password file created above.
 
CONNECT sys/admin AS SYSDBA
 
4.  Assign privileges to each user that will function as a DBA.
 
GRANT SYSDBA TO USER350;
 
Password files should be stored on UNIX/LINUX and Windows server computers at the
following directory locations: 
        UNIX and LINUX: $ORACLE_HOME/dbs
        Windows: %ORACLE_HOME%/database
 
 

Creating a Database
 
There are three ways to create an Oracle database.
1.   Oracle Universal Installer – During installation of the Oracle software, this
product provides you options for creating several different types of databases,
and these can be modified later to meet OFA-compliance guidelines.
2.   Oracle Database Configuration Assistant (DBCA) – This product provides
several options for creating different database types, and it also allows you to
upgrade an existing database.
3.   Manual Database Creation – This approach uses the CREATE
DATABASE command and is the approach we will take because it teaches you
much more about the database creation process.  Usually you do this by creating
an SQL script and then executing the script.
 
You must select a database name.  At SIUE, you will name your database to match
your Oracle user account name assigned in the course, for example, USER350.

 
Manual Database Creation
 

Steps to Create a Database


 
The following outlines the steps to create a usable database.
1.   Set the operating system environment
variables ORACLE_HOME, ORACLE_SID, PATH, and LD_LIBRARY_PATH.
2.   Create directories to store your database files.

3.   Create an initSID.ora parameter file (PFILE) and store it to
your $HOME directory.
4.   Create the Database. 
        Use SQLPlus and connect as SYS AS SYSDBA. 
        Create a SPFILE from the PFILE (NOTE: the step that creates the SPFILE
is optional—you will use just a PFILE for your databases -- you will not
create a SPFILE).
        Connect as SYS AS SYSDBA and execute the CREATE
DATABASE command in SQLPlus.
5.   Run the required catalog.sql and catproc.sql scripts to create the data
dictionary.  Run the pupbld.sql script to create the product user profile.
6.   Create additional tablespaces such as a USERS tablespace for user data and
any other tablespaces that may be required to meet the needs of the database.
 

1.  Setting the Environment Variables


 
Before a database is created, the operating system environment must be configured
and the Oracle RDBMS server software must have already been installed. At a
minimum, five environment variables must be set:
 
        ORACLE_HOME
        ORACLE_BASE
        ORACLE_SID
        PATH
        LD_LIBRARY_PATH
 
Some of these variables may already be set for your student accounts on
the SOBORA2 server. 
        Use the operating system command:  env  -- to check the environment variable
values. 
        If they are not set, modify your .profile file so that they are always set when you
connect. 
        Later in this document you will learn how to modify the .profile file. 
 
ORACLE_HOME is the full path to the top directory in which the Oracle software is
installed.  The directory for ORACLE_HOME should be supplied by the person who
installed the server, usually the system administrator or the DBA.  At SIUE, the value
for ORACLE_HOME for Oracle version 11g:
 
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
 
ORACLE_BASE -- you have already learned this is the base directory for installing
Oracle products.
 
ORACLE_BASE=/u01/app/oracle
 
ORACLE_SID is the name assigned to an instance of a database. 
        The ORACLE_SID (system identifier) is used by the operating system to
distinguish different database instances running on the machine. 
        When you create your database at SIUE, you will name the database as
specified by your instructor, for example your database may be
named USER350.
 
ORACLE_SID=USER350
 
IMPORTANT NOTE:  UNIX and LINUX are case-sensitive for directory and file
names.  Since Oracle interfaces with the operating system when naming files,
you must ensure that in naming the database you are consistent in your use of
either lower case or upper case – use of upper case when naming the database
is the generally accepted industry standard, e.g., USER350.
 
PATH defines the directories the operating system searches to find executables, such
as SQLPlus. 
        The Oracle RDBMS executables are located in $ORACLE_HOME/bin and this
must be added to the PATH variable. 
        The PATH for student accounts on the DBORCL server should already
include $ORACLE_HOME/bin. 
        You can examine the current PATH for your account with the operating
system env command.
 
PATH=/u01/app/oracle/product/11.2.0/dbhome_1/bin:/bin:/usr/bin:/
usr/local/bin:.
 
LD_LIBRARY_PATH defines the directories in which required library files are
stored.  The value for the SOBORA2 server for this variable is: 
 
LD_LIBRARY_PATH=/u01/app/oracle/product/11.2.0/dbhome_1/lib
 
If any of the environment variables are not set for your Oracle Server account, you can
edit the .profile file by using the vi editor. 
        Before you edit any file, you should always create a backup copy of the file so
you can recover if necessary.  Here the user dbockstd is copying the .profile file
to a backup file named .profile.bkp.
 
dbockstd/@sobora2.isg.siue.edu=>cp .profile .profile.bkp
 
        Another option is to use FTP software to transfer the .profile to your client
computer, edit the file with a plain text editor such as Microsoft Notepad, then
FTP the file back to your $HOME directory on the Oracle Server.
        Again, ensure you create a backup of the file prior to modifying it. 
        Here are example commands for setting environment variables using the
LINUX/UNIX operating system shell on the Oracle server.  If you add these to
the .profile file, do NOT type the operating system prompt (the dollar sign symbol
$).
 
Example -- Bourne or Korn shell:
 
$ ORACLE_SID=USER350; export ORACLE_SID
$ LD_LIBRARY_PATH=/usr/lib:$ORACLE_HOME/lib
$ export LD_LIBRARY_PATH
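Putting the pieces together, the five variables might be set in .profile as in
this sketch (Bourne/Korn syntax).  The values are the SOBORA2 examples from
these notes; substitute your own SID and paths.

```shell
# Sketch of a .profile fragment setting the five required variables.
# Values shown are the SOBORA2 examples -- replace with your own.
ORACLE_BASE=/u01/app/oracle;                        export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11.2.0/dbhome_1;   export ORACLE_HOME
ORACLE_SID=USER350;                                 export ORACLE_SID
PATH=$ORACLE_HOME/bin:$PATH;                        export PATH
LD_LIBRARY_PATH=/usr/lib:$ORACLE_HOME/lib;          export LD_LIBRARY_PATH

# Verify with env, as described above.
env | grep ORACLE_SID
```

After editing .profile, log off and log in again (or source the file) so the
new values take effect.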
 

2.  Create Directories to Store Your Database Files


 
You need to create directories where your database will be stored. 
        Create directories named oradata inside your HOME directory on each of your
allocated disk drives (/u01, /u02, and /u03).
        Set the permissions to 775 to allow Oracle to read/write/execute for the directory
in order to avoid any permission problems during database creation.  The setting
of “5” allows other users to read/execute the directory. 
        Example - here the user dbockstd is on their HOME directory on
drive /u01.  The mkdir command creates the directory.  The chmod command
sets the permissions for the directory.
 
/u01/student/dbockstd
dbockstd/@sobora2.isg.siue.edu=>mkdir oradata
 
/u01/student/dbockstd
dbockstd/@sobora2.isg.siue.edu=>chmod 775 oradata
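The effect of the 775 setting can be verified with stat or ls -ld.  This sketch
uses a temporary directory so it can be tried without touching the real oradata.

```shell
# Sketch: create a demo oradata directory, apply 775, and read the mode back.
demo=$(mktemp -d)/oradata
mkdir "$demo"
chmod 775 "$demo"
mode=$(stat -c '%a' "$demo")   # 775 = rwx owner, rwx group, r-x other
echo "$mode"
```

A mode of 775 means the oracle owner and dba group have full access while
other users can only read and traverse the directory.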
 
 

3.  Create the initSID.ora Initialization Parameter File (PFILE)


 
The initSID.ora file (PFILE) stores the parameters used to initialize a database
instance during startup. 
        The parameters in this file also affect characteristics of the database when it is
created.
        Name the file according to your SIUE assigned USER account,
e.g., initUSER350.ora.
 
You must create an initSID.ora file prior to attempting to create the database with the
CREATE DATABASE command.
        During installation of the Oracle server software, a sample init.ora file is copied
to the $ORACLE_HOME/dbs directory. 
        You should not attempt to modify this file; rather, you need to create a copy of
the file to your own $HOME directory. 
        When you copy the file to your $HOME directory, you need to name
it initSID.ora where SID=your account name on the SOBORA2 server, for
example, initUSER350.ora. 
        Set the permissions on the file to 775 (read-write-execute for yourself and the
group, and read-execute for the world) in order to avoid any permission errors
during database creation.
 
You can set the permissions on the file when you FTP it to your $HOME directory or
when the file is on the directory, use the operating system command to change the
permissions.  Example:
 
dbockstd/@sobora2.isg.siue.edu=>chmod 775 initUSER350.ora
 
This is the list of parameter settings from the example init.ora file
        Note that the database is being created at directory location $HOME/oradata for
this example.
        The directory $HOME/oradata has permission settings of 775 on the directory
itself.
        You must alter the settings to match your SIUE EID, e.g., replace dbockstd with
your EID.
        Change all of the directories listed to match your allocated HOME on the server.
 
# Change USER001 to your assigned user account.
# Change YOURHOME to your assigned SIUE EID for a HOME directory
 
db_name='USER001'
audit_file_dest='/u01/student/YOURHOME/oradata/adump'

audit_trail ='db'
compatible ='11.2.0'
control_files =
("/u01/student/YOURHOME/oradata/USER001control01.ctl",
    "/u02/student/YOURHOME/oradata/USER001control02.ctl")
db_block_size=8192
db_domain='siue.edu'
db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
db_recovery_file_dest_size=1G
diagnostic_dest='/u01/student/YOURHOME'
dispatchers='(PROTOCOL=TCP) (SERVICE=USER001XDB)'
 
# Uncomment the next two lines when turning on archivelog mode
# specify a location for the log archive destination
# LOG_ARCHIVE_DEST_1 = 'LOCATION =
/u01/student/YOURHOME/oradata/archive'
# log_archive_format='USER001_%t_%s_%r.arc'
 
memory_target=1G
open_cursors=300
processes = 150
remote_login_passwordfile='EXCLUSIVE'
#UNDO_Management is Auto by default
undo_tablespace='UNDOTBS1'
# End of file
 
 

Create the Directories
 
Every directory referenced in the init.ora file must be created and have the permission
settings established before attempting to execute a CREATE
DATABASE command.  In the above init.ora file, these include:
 
audit_file_dest='/u01/student/YOURHOME/oradata/adump'
diagnostic_dest='/u01/student/YOURHOME'
# LOG_ARCHIVE_DEST_1 = 'LOCATION =
/u01/student/YOURHOME/oradata/archive'
 
        Note that the db_recovery_file_dest directory already exists. 
        The directories on /u01, /u02, and /u03 named oradata must also exist.
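A quick pre-flight check along these lines can catch missing directories before
the CREATE DATABASE command fails.  The paths here are stand-ins under a
temporary directory; with a real initSID.ora you would test each directory
named in the file.

```shell
# Sketch: report which required directories do not yet exist, then create them.
# BASE and the two paths are illustrative stand-ins, not the real server paths.
BASE=$(mktemp -d)
dirs="$BASE/oradata/adump $BASE/oradata/archive"
missing=0
for d in $dirs; do
  [ -d "$d" ] || { echo "missing: $d"; missing=$((missing+1)); }
done
echo "$missing directories still need to be created"
mkdir -p $dirs    # create them, as this step requires
```

Running a check like this before database creation avoids the permission and
missing-directory errors described in the troubleshooting section.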
 
 
The UNDO_MANAGEMENT initialization parameter determines whether the Oracle
server automatically or the DBA manually handles undo data. 
        You do not need to set UNDO_MANAGEMENT to AUTO -- it is AUTO by
default. 
        The name of the UNDO tablespace is established by
the UNDO_TABLESPACE parameter. 
        This tablespace name must match the tablespace name in your CREATE
DATABASE command (shown in the next section of these notes).  Example:
 
undo_tablespace='UNDOTBS1'
 
 

4. Create the Database


 
You are now ready to execute the CREATE DATABASE command. 

        This assumes that you have planned for all files that will comprise your initial
database.
        The SQLPlus software is used when creating the database.
        The Oracle executable for SQLPlus is sqlplus.
 
During database creation, the Oracle software is only aware of the SYS user and
the SYSDBA role. 
        To create a database you must connect to SQLPlus as the user SYS and the
role SYSDBA. 
        This can be accomplished by using one of several methods -- we will use PuTTY
for a secure shell (SSH) session.
        The steps are outlined below.
 
1.  Connect to the SOBORA2 Server through a PuTTY SSH session or by using SQL
worksheet. 
        Enter your account name and password at the operating system prompts. 
        During login respond to the script asking for the ORACLE_SID with the name of
the database you are about to create (if you mess this up, the easiest corrective
action is to log off and log in again).
 
login as: my_siue_eid
my_siue_eid@sobora2.siue.edu's password:
ORACLE_SID = [USER350] ?
ORACLE_HOME = [/home/oracle] ?
 
System responds back:
ORACLE_SID  = USER350
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
 
2.  Connect to SQLPlus.  In situations where the DBA's operating system account is
assigned to the dba administrator's group, you can connect to sqlplus by using one of
the two command sequences shown here.   Example:
 
dbockstd/@sobora2.isg.siue.edu=>sqlplus '/ as sysdba'
 
SQL*Plus: Release 11.2.0.3.0 Production on Fri May 24
23:52:25 2013
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Connected to an idle instance.
 
SQL>
 
 
or
 
dbockstd/@sobora2.isg.siue.edu=>sqlplus /nolog
 
SQL*Plus: Release 11.2.0.3.0 Production on Fri May 24
23:51:40 2013
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
SQL> connect / as sysdba
Enter password: <you can just press the Enter key – a password
is not needed>
Connected to an idle instance.
 
SQL>
 

When you connect, you can verify that you are connected as SYS within SQL*Plus by
using the SHOW USER command:
 
SQL> show user
USER is "SYS"
 
 
3.   Start up the database in NOMOUNT mode. 
        This assumes the PFILE is stored to your $HOME directory and you are
currently located at your $HOME directory.
 
SQL> startup nomount pfile=initUSER350.ora
 
        This command assumes you are NOT necessarily located at your $HOME
directory and can be issued while you are examining other directories.
 
SQL> startup nomount pfile=$HOME/initUSER350.ora
 
        Oracle’s response to the STARTUP command is shown here.  The number of
bytes allocated to the various memory structures may differ from those shown
here.
 
ORACLE instance started.
 
Total System Global Area 1068937216 bytes
Fixed Size                  2235208 bytes
Variable Size             616563896 bytes
Database Buffers          444596224 bytes
Redo Buffers                5541888 bytes
 
4.  Create a SQL script file that contains the CREATE DATABASE command.   This is
easily done using Windows Notepad and you can FTP the script to your $HOME
account on the SOBORA2 Server.  Using a script is better than typing the command in
at the SQLPlus prompt because you are very likely to make a typographical error when
typing the command. 
 
Example for the USER350 database (pay attention to the placement of commas and to
the TEMPFILE specification for the Temporary Tablespace): 
 
CREATE DATABASE USER350
 USER SYS IDENTIFIED BY password1
 USER SYSTEM IDENTIFIED BY password2
  LOGFILE
  GROUP 1 ('/u03/student/dbockstd/oradata/USER350redo01a.log')
SIZE 64M,
  GROUP 2 ('/u03/student/dbockstd/oradata/USER350redo02a.log')
SIZE 64M
 MAXLOGFILES 20
 MAXLOGMEMBERS 5
 MAXLOGHISTORY 1
 MAXDATAFILES 25
 MAXINSTANCES 1
 CHARACTER SET US7ASCII
 NATIONAL CHARACTER SET AL16UTF16
 DATAFILE '/u01/student/dbockstd/oradata/USER350system01.dbf'
   SIZE 325M REUSE EXTENT MANAGEMENT LOCAL
 SYSAUX DATAFILE '/u01/student/dbockstd/oradata/USER350sysaux01.
dbf'
   SIZE 325M REUSE
 DEFAULT TEMPORARY TABLESPACE temp
   TEMPFILE '/u02/student/dbockstd/oradata/USER350temp01.dbf'
   SIZE 50M REUSE
 UNDO TABLESPACE undotbs1
   DATAFILE '/u02/student/dbockstd/oradata/USER350undo01.dbf'
   SIZE 50M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
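As an alternative to editing the script on a client and FTPing it over, the
script file can be written directly on the server with a shell heredoc.  This
sketch writes only an abbreviated statement; the full example above would go
between the markers.

```shell
# Sketch: create the CREATE DATABASE script on the server with a heredoc.
# The statement body is abbreviated -- paste the full example between EOF markers.
work=$(mktemp -d)
cat > "$work/create_database.sql" <<'EOF'
CREATE DATABASE USER350
 USER SYS IDENTIFIED BY password1
 USER SYSTEM IDENTIFIED BY password2
 -- remaining clauses as in the full example above
EOF
grep -c 'CREATE DATABASE' "$work/create_database.sql"
```

The quoted 'EOF' marker prevents the shell from expanding anything inside the
script, so the SQL is written exactly as typed.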
 
 
5.  Execute the SQL script to create the database.  Assuming the script is
named create_database.sql, the SQL start command is used to run the script. 
 
SQL>  start create_database.sql
 
or
 
SQL> @create_database.sql
 
The script will run for a few minutes (depends on the size of the tablespaces you are
creating and the number of other students working on the server) and eventually
the SQL> prompt will display with a message telling you that the database was
created.
 
Troubleshooting
 
If the CREATE DATABASE command fails, you must troubleshoot the
problem.  Potential errors include:
 
        Syntax errors in the CREATE DATABASE script.  Determine the errors
and correct them.  You will also have to delete any control files, redo log
files, etc., that were created up to the point of failure in the CREATE
DATABASE script.
        One or more database/control file(s) already exists – delete the file(s) and
any other files that were created, and run the CREATE
DATABASE script again.
        Insufficient directory permission.  You did not provide permission for the
DBA group to which you belong to write to the directory where you are
creating a file that is part of the database.  The permission setting needs to
be a minimum of 775 (although in an operational setting we would probably
use 660 or 770, here we are trying to avoid permission setting
problems).  You can use the chmod command to set
permissions.  Example setting the permission level for the oradata directory: 
 
$ chmod 775 oradata
 
        The CREATE DATABASE command creates several files then abends
because of an inability to update security files – you haven’t created a
Password file for the database, but your init.ora file references one with
the remote_login_passwordfile=EXCLUSIVE statement.  Solution –
comment this statement out in your init.ora file because you will not be
using a password file initially; delete all database files created, shutdown
the instance (Shutdown Immediate), log off SQL*Plus, logon again and
issue the CREATE DATABASE command again.
        There is insufficient disk space available.  You will need to revise your
database plan and select alternative disk drive resources for one or more
files.
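Cleanup after a failed run can be scripted along these lines.  Because removing
database files is destructive, this sketch builds and cleans a throwaway copy
of an oradata directory rather than touching real paths.

```shell
# Sketch: remove the partial files left by a failed CREATE DATABASE so the
# script can be rerun.  A throwaway directory is used so nothing real is deleted.
BASE=$(mktemp -d)
mkdir -p "$BASE/oradata"
touch "$BASE/oradata/USER350control01.ctl" "$BASE/oradata/USER350redo01a.log"
rm -f "$BASE"/oradata/*.ctl "$BASE"/oradata/*.log "$BASE"/oradata/*.dbf
leftover=$(ls "$BASE/oradata" | wc -l)
echo "$leftover files remain"
```

On the real server you would run the rm commands against each oradata
directory listed in your CREATE DATABASE script, then rerun the script.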

 5.  Run Scripts to Create the Data Dictionary and Product User Profile
 
1.  Now you are ready to create the data dictionary.  Run
the catalog.sql and catproc.sql scripts located
in $ORACLE_HOME/rdbms/admin after the database is created. 
 

o   CATALOG.SQL – this script creates views against the data dictionary tables,
dynamic performance views, and public synonyms for most of the views.   The
group PUBLIC is granted read-only access to the views.
o   CATPROC.SQL – this script sets up PL/SQL for use by creating numerous
tables, views, synonyms, comments, package bodies, and other database
objects.
 
Both scripts must be run as SYS.  You should still be connected as SYS, but if you
took a break and need to reconnect, the UNIX and SQL*Plus commands are:
 
$ sqlplus /nolog
SQL> connect sys as sysdba
Enter password:  <press the Enter key here>
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
 
As the script executes, numerous messages will flash across the screen.  This may
take several minutes to finish.
 
At the end of the script you will see:
 
Commit complete.
 
PL/SQL procedure successfully completed.
 
SQL>
 
2.  Now run the catproc.sql script. 
 

SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
 
During execution of the script, you will see various messages.  Some of them are error
messages, but these are NOT actual errors – usually they are caused by the script
trying to drop or checking for the existence of a non-existent object before creating the
object.
 
During execution of the script you will also have to respond to a few questions from the
script – pressing the keyboard Enter key is a sufficient response.
 
On a system that is not busy, the total time for both scripts to complete is about 10 to
15 minutes.  Since our SOBORA2 server will be quite busy, we'll probably lecture on
some topics while these scripts are running.
 
At the end of the script you will see:
 
 
SQL> SELECT dbms_registry_sys.time_stamp('CATPROC') AS
timestamp FROM DUAL;
 
TIMESTAMP
-----------------------------------------------------
COMP_TIMESTAMP CATPROC    2013-05-26 16:14:34
 
1 row selected.
 
SQL>
SQL> SET SERVEROUTPUT OFF
 

3.  After these scripts have run, verify that the objects are valid.  The following query
returns any invalid objects.
 
SELECT owner,object_name,object_type
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner,object_type,object_name;
 
You should receive the message "no rows selected".
 
If any objects are listed as invalid, shutdown your Oracle Instance, then delete all
of the files you created that comprise your database and try again to create the
database.
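Before resorting to dropping and recreating the database, it is often worth trying to
recompile the invalid objects first.  Oracle supplies the utlrp.sql script for this purpose.
A sketch of this less drastic alternative (run while still connected as SYS):

```sql
-- Recompile all invalid PL/SQL objects in the database.
@$ORACLE_HOME/rdbms/admin/utlrp.sql

-- Re-run the verification query; ideally it now returns "no rows selected".
SELECT owner, object_name, object_type
FROM dba_objects
WHERE status = 'INVALID'
ORDER BY owner, object_type, object_name;
```

Only if objects remain invalid after recompilation should you fall back to recreating
the database.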

 
4.  Run the pupbld.sql script. 
 
Run the pupbld.sql script located in directory $ORACLE_HOME/sqlplus/admin to
create the Product User Profile table and related procedures.  Running this script,
among other purposes, prevents a warning message each time a user connects to
SQL*Plus. 
 
IMPORTANT:  The script must be run while you are connected as the
user SYSTEM.  The script runs very quickly – just a few seconds.
 
If you exited SQL*Plus, reconnect as the user SYSTEM.  Remember the default
password for this user is manager – however, you may have set the password as part
of your CREATE DATABASE command or you may have changed it already.  Earlier,
the example CREATE DATABASE command for USER350 set the SYSTEM user's
password to the value password2.
 
SQL> connect system/<password>
SQL> @$ORACLE_HOME/sqlplus/admin/pupbld.sql
 
At the end of the script, you will see this message.
 
SQL> -- End of pupbld.sql
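To confirm the script succeeded, you can check (while still connected as SYSTEM) that
the product user profile objects now exist.  A sketch – the exact object names can vary
by release, so the query below simply matches on the PRODUCT naming pattern:

```sql
SELECT object_name, object_type
FROM user_objects
WHERE object_name LIKE '%PRODUCT%'
ORDER BY object_name;
```

You should see the product profile table and its related view/synonym objects listed.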

 
6.  Creating Tablespaces
 
After creating the database, it is appropriate to create additional tablespaces.  The
following tablespaces are recommended as a minimum starting point.  In a production
environment, additional tablespaces would be created to support various applications
and/or special requirements. 
 
At this point you should realize that the TEMP and UNDO01 tablespaces were created
as part of the CREATE DATABASE command.  You should proceed to create
the USERS tablespace.  We will leave the creation of additional tablespaces,
specifically DATA and INDEXES, to later computer laboratory assignments.
        USERS – stores user data.
        TEMP – stores temporary data created by ORDER BY sorting and table
joins.  You may have created this as a TEMPORARY TABLESPACE as part of
the CREATE DATABASE command.
        UNDO – used for undo segments that support recovery operations.  You may
have also created this as part of the CREATE DATABASE command.
        DATA – used to store permanent organizational tables and clusters.
        INDEXES – used to store permanent organizational indexes to tables and
clusters.
 
Connect as SYS AS SYSDBA.
 
SQL> connect sys as sysdba
Enter password:
Connected.
 
CREATE TABLESPACE users
DATAFILE '/u02/student/dbockstd/oradata/USER350users01.dbf'
   SIZE 5M REUSE EXTENT MANAGEMENT LOCAL;
 
Tablespace created.
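When you later create the DATA and INDEXES tablespaces in the laboratory
assignments, the commands will follow the same pattern.  A sketch, assuming the same
directory layout – the file names and 10M sizes below are hypothetical, not assigned
values:

```sql
CREATE TABLESPACE data
DATAFILE '/u02/student/dbockstd/oradata/USER350data01.dbf'
   SIZE 10M REUSE EXTENT MANAGEMENT LOCAL;

CREATE TABLESPACE indexes
DATAFILE '/u02/student/dbockstd/oradata/USER350indexes01.dbf'
   SIZE 10M REUSE EXTENT MANAGEMENT LOCAL;
```

Separating tables and indexes into different tablespaces simplifies space management
and can spread I/O across disk drives.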
 
 
 

After the CREATE DATABASE Command


 
If the CREATE DATABASE command executes properly, you will now have an Oracle
Instance of your database running and in OPEN stage.  It is available for use.
 
The CREATE DATABASE command always creates two users, SYS and SYSTEM,
regardless of whether or not you specify them as part of the command.  If you did not
create them with unique passwords as part of the command, then they need to have
their passwords changed immediately.  You can change them with the following SQL
commands:
 
SQL> ALTER USER sys IDENTIFIED BY new_password;
SQL> ALTER USER system IDENTIFIED BY new_password; 
 
This is the end of the manual creation of the database.  In future modules, you will
continue to learn how to populate the database with additional tablespaces, tables,
indexes, views, user accounts, roles, profiles, and other database objects.
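You can confirm that the instance is running and the database is available with a quick
query against a dynamic performance view:

```sql
SELECT instance_name, status FROM v$instance;

-- STATUS shows OPEN when the database is available for use.
```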
 

Examine directory that stores your alert.log file.  For the USER350 database, this is
located at:
 
/u01/student/dbockstd/diag/rdbms/user350/USER350/trace
 
The base directory for the Automatic Diagnostic Repository (ADR) files is set with the
DIAGNOSTIC_DEST parameter in your init.ora file. 
 
        Within the ADR home directory are subdirectories:
alert - the XML-formatted alert log, named log.xml
trace - trace files and the text-formatted alert log, named alert_USER350.log
cdump - core dump files
 
Confirm the existence of the alert.log file.  Another module will cover the alert file in
more detail, but you should understand that the alert file stores messages about the
startup, shutdown, and operation of your database.  You should examine it regularly
for error messages.
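In 11g you do not have to remember the ADR directory layout – the V$DIAG_INFO
view reports the locations, including the trace directory that holds the text alert log:

```sql
SELECT name, value
FROM v$diag_info
WHERE name IN ('ADR Base', 'Diag Trace', 'Diag Alert');
```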
 
 

Database Creation Using OMF


 
While we will not practice creating a database using the Oracle Managed Files
(OMF) approach, you need to be familiar with this approach as a new DBA. 
 
OMF simplifies file administration because it eliminates the need to directly manage
files within an Oracle database.  This is a good approach to use for a smaller database
that runs in a Windows operating system environment.
 
Under OMF, files are named as follows:
        Control Files:  ora_%u.ctl

        Redo Log Files:  ora_%g_%u.log
        Datafiles:  ora_%t_%u.dbf
        Temporary Datafiles:  ora_%t_%u.tmp
 
%u is a unique 8-character string that is system generated.
%t is the tablespace name.
%g is the Redo Log File group number.
 
The parameter DB_CREATE_FILE_DEST can be set to specify the default location for
datafiles.  The parameter DB_CREATE_ONLINE_LOG_DEST_N can be set to specify
the default locations for Redo Log Files and Control Files where the value 'N' can be a
maximum of 5 locations. 
 
Example initialization parameter file entries for OMF:
 
DB_CREATE_FILE_DEST=/$HOME/oradata/u01
DB_CREATE_ONLINE_LOG_DEST_1=/$HOME/oradata/u02
DB_CREATE_ONLINE_LOG_DEST_2=/$HOME/oradata/u03
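These parameters can also be set dynamically in a running instance, so OMF can be
enabled without a restart.  A sketch – the directory paths below are hypothetical:

```sql
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/u01/oradata/u01';
ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1 = '/u02/oradata/u02';
```

Any datafile, redo log file, or control file created after these commands will be named
and placed automatically by OMF.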
 
Since these parameters specify file locations, the CREATE DATABASE command with
OMF is simplified to require only a specification of the database name.
 
CREATE DATABASE USER350;
 
The result of the CREATE DATABASE command with these specifications would be:
        SYSTEM Tablespace of 300MB to 600MB in size that is autoextensible with
unlimited size located at $HOME/oradata/u01.
        Two online Redo Log groups with two members each of size 100MB.  The
locations would be $HOME/oradata/u02 and $HOME/oradata/u03.
        If the initialization parameter file specifies automatic undo management, an undo
tablespace datafile will be created at location $HOME/oradata/u01 (same
location as the SYSTEM tablespace) that is 10MB in size and which is
autoextensible up to an unlimited size.  It will be named SYS_UNDOTBS.
        If the initialization parameter file does NOT specify any control files, the CREATE
DATABASE command will automatically cause two control files to be created,
one in each location where the Redo Log Files are located.
 
 

Dropping a Database
 
Dropping a database removes its datafiles, redo log files, control files, and initialization
parameter files. 
        Generally you would use this as a student to drop a database in order to start
over again if you make major mistakes.
        Dropping a production database should only be done after creating a valid cold
backup, because all data is deleted.
 

Drop Database Command


 
To drop a database, execute this statement.
 
DROP DATABASE;
 
        The DROP DATABASE statement first deletes all control files and all other
database files listed in the control file.
        It then shuts down the database instance.
 
To use the DROP DATABASE statement successfully, the database must
be mounted in exclusive and restricted mode.  Connect as SYS as SYSDBA.
 

SHUTDOWN;
 
<the database will be closed, dismounted, and the Oracle
instance shut down>
 
STARTUP FORCE MOUNT;
 
ALTER SYSTEM ENABLE RESTRICTED SESSION;
 
DROP DATABASE;
 
The DROP DATABASE statement has no effect on archived log files, nor does it have
any effect on copies or backups of the database.  It is best to use RMAN to delete such
files.  If the database is on raw disks, the actual raw disk special files are not deleted.
 
If you used the Database Configuration Assistant to create your database, you can
use that tool to delete (drop) your database and remove the files.
   
 
 
 
 

 
 

Module 6 – Data Dictionary
 
Objectives
 
These notes introduce concepts about the Oracle data dictionary. 
 Learn to use data dictionary views and learn how they are created. 
 Write queries for the data dictionary and dynamic performance views.
 
Database Objects
 
One of the steps completed when creating a database is the execution of
the catalog.sql and catproc.sql scripts. 
 
        These scripts create and populate the Data Dictionary with a number of
database objects. 
        The Data Dictionary stores metadata; that is, data about data. 
 
        Other objects created by these scripts include the Dynamic Performance
Tables that enable a DBA to monitor and tune an Oracle Database Instance.
        PL/SQL packages were also created as well as Database event
triggers.  These latter two sets of objects are not covered in this course.
 
Data Dictionary
 
The figure below details the important characteristics of the Data Dictionary. 
        Typically, system users are not allowed to access Data Dictionary tables and
views.
        However, your accounts at SIUE are authorized access to the data dictionary for
the DBORCL database in order for you to learn more about the data dictionary. 
 

 
 
The Data Dictionary consists of two components:
 
        Base Tables:  These tables store descriptions of objects in the database. 
o   These are the first objects created in the data dictionary. 
o   These are created when the oracle RDBMS software runs a special script
named sql.bsq when a database is created by the CREATE
DATABASE command – you do not see the sql.bsq script execute, but it
does. 
o   You should never attempt to write to these tables – never
use DML commands to attempt to update base tables directly.
o   An example Base Table is IND$ that stores index information.
        User-Accessible Views:  These views summarize information in the base tables
in order to make the information easier for a DBA to use. 
o   These views are created by the catalog.sql script.
o   When you use the Oracle Universal Installer to create a database,
the catalog.sql and catproc.sql scripts are run automatically.
o   An example data dictionary user-accessible view is TABS – it stores
information about tables you create as a system user.  TABS is a synonym
for the view ALL_TABLES.
 
The Data Dictionary stores information about all database objects created by system
users and information systems professionals including tables, views, indexes, clusters,

procedures, functions, synonyms, sequences, triggers and the like.  For each object,
the Data Dictionary stores:
        Disk space allocation (usually in bytes).
        Integrity constraint information.
        Default values for columns.
        Oracle user names (accounts)
        Privilege and role information for users.
        Auditing information – who has accessed/updated objects.
 
 
Data Dictionary Usage
 
The Data Dictionary is used by the Oracle RDBMS as illustrated in the figure shown
below.
 

 
The Data Dictionary Views are divided into three sets of views.  These are
differentiated by the scope of the information that they present.
 
        DBA:  These views display information about objects stored in all schemas (a
schema is a logical organization of objects belonging to an individual system user). 
o   These views are named DBA_xxx where xxx is the object name.
o   Because these views belong to the DBA and were created by the
owner SYS, you may (or may not) need to reference those that do not have
public synonyms by qualifying the object name with the owner.  Example:
 
SELECT owner, object_name, object_type
FROM SYS.DBA_OBJECTS;
 
        ALL:  These views display "all" information that an individual user of the
database is authorized to access – this will include information about your
objects as well as information about objects for which you have access
permissions. 
o   If you connect to the database, the ALL_xxx views will display all
information about objects of all database schemas (if you have access
permissions). 
o   These views also provide information about objects to which you have
access by virtue of the assignment of either public or explicit grants of
access privileges.
 
SELECT owner, object_name, object_type
FROM ALL_OBJECTS;
 
o   The ALL_ views obey the current set of enabled roles. Query results
depend on which roles are enabled, as shown in the following example:
SQL> SET ROLE ALL;
 
Role set.
 
SQL> SELECT COUNT(*) FROM ALL_OBJECTS;
 
COUNT(*)
----------
13140
 
SQL> SET ROLE NONE;
 
Role set.
 
SQL> SELECT COUNT(*) FROM ALL_OBJECTS;
 
COUNT(*)
----------
12941
 
 
        USER:  These views display information that you would most likely want to
access. 
o   These USER_xxx views refer to your own objects in your own schema. 
o   These views display only rows pertinent to you as a user.
o   These views are a subset of the ALL views.
o   These rows do not usually display the OWNER column.
 
SELECT object_name, object_type
FROM USER_OBJECTS;
 
 
In general, Data Dictionary Views answer questions about:
        when an object was created,
        is the object part of another object,
        who owns the object,
        what privileges do you have as a system user,
        what restrictions are on an object.
 
Practice selecting information from the following three
views:  dba_objects, all_objects, user_objects.  You may wish to use the SQL*Plus
command DESC to describe the views.
 
Additional queries you may wish to execute to practice accessing parts of the data
dictionary:
 
SELECT table_name, comments
FROM dictionary
WHERE table_name LIKE '%INDEXES%';
 
SELECT table_name, comments
FROM dictionary
WHERE table_name LIKE 'DBA_SEG%';
 
DESC dba_users;
 
COLUMN account_status FORMAT A20;
SELECT username, account_status, lock_date
FROM dba_users;
 
 
The DUAL Table
Oracle maintains a table named DUAL that is used by Oracle and by user programs to
produce a guaranteed known result such as the production of a value through use of
an Oracle defined function.
 
        The table has one column named DUMMY and one row containing the value X. 
        SYS owns the DUAL table, but all users can select from it.
        Selecting from DUAL is useful for computing a constant expression with a
SELECT statement, since DUAL has only one row, the constant is returned only
once.
 
Example:  Suppose you want to compute the value of the ASCII function for the
number 76.  The following SELECT statement, which lacks a FROM clause, fails to
execute and generates an error message.
 
SQL> SELECT ASCII(76);
SELECT ASCII(76)
               *
ERROR at line 1:
ORA-00923: FROM keyword not found where expected
 
You can select the value from DUAL and the SELECT statement succeeds.
 
SQL> SELECT ASCII(76) FROM dual;
 
 ASCII(76)
----------
        55
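(The result 55 is the ASCII code of the character '7' – the number 76 is implicitly
converted to the string '76' and the function operates on its first character.)  DUAL is
handy for evaluating any constant expression or function, not just ASCII.  For example:

```sql
SELECT SYSDATE FROM dual;   -- current date and time
SELECT 2 + 3   FROM dual;   -- returns 5
SELECT USER    FROM dual;   -- name of the current session user
```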
 
  
Dynamic Performance Tables and Views
 
The Oracle Server records database activity to the Dynamic Performance Tables. 
        These are complemented by Dynamic Performance Views - virtual tables that
exist only in memory when the database is running.
         The tables provide real-time condition information about database operation.
        DBAs use these tables—most users should not be able to access these tables. 
        The tables cannot be altered – they are fixed and are owned by the user SYS. 
        All Dynamic Performance Tables have names that begin with the letters V_$. 
        Views of these tables are created along with synonyms that begin with the
letters V$.   Did you notice the difference in these names – yes, that's right,
the underscore? 
 
What information is stored in the following tables:  V$DATAFILE, V$FIXED_TABLE? –
Answer:  Information about the database's datafiles and information about all of the
dynamic performance tables and views. 
 
Example Dynamic Performance Table views:
        V$CONTROLFILE:  Lists the names of the control files
        V$DATABASE:  Contains database information from the control file.
        V$DATAFILE:  Contains datafile information from the control file
        V$INSTANCE:  Displays the state of the current instance
        V$PARAMETER:  Lists parameters and values currently in effect for the session
        V$SESSION:  Lists session information for each current session
        V$SGA:  Contains summary information on the system global area (SGA)
        V$SPPARAMETER:  Lists the contents of the SPFILE
        V$TABLESPACE:  Displays tablespace information from the control file
        V$THREAD:  Contains thread information from the control file
        V$VERSION:  Version numbers of core library components in the Oracle server
 
Note: Refer to the "Oracle 11g Database Reference" document for a complete list of
dynamic performance views and their columns.
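As a quick exercise, two of these views can be queried immediately from any
connected session:

```sql
-- Oracle software version information
SELECT banner FROM v$version;

-- Summary sizes of the System Global Area components
SELECT name, value FROM v$sga;
```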
 
 
How the Data Dictionary Is Used
 
Primary uses:
        Oracle (internally) accesses information about users, schema objects, and
storage structures.
o   This is done to validate a query executed by a system user.
o   Validates permission and security.
o   Verifies that referenced objects in queries actually exist.
        Oracle modifies the data dictionary for each DDL statement executed.
        Oracle users access the data dictionary as a read-only reference.
 
Modifying the Data Dictionary:
        Only Oracle (SYS) should ever modify the data dictionary.
        During upgrades to new release versions of the Oracle RDBMS, scripts are
provided to upgrade the data dictionary.
        Most (not all) data dictionary views have public synonyms to enable users to
access information conveniently.  System users must avoid creating public
synonyms that conflict with existing public synonyms.
        Oracle software products can reference the data dictionary and add tables and
views the product needs to function to the data dictionary.
 
Fast Data Dictionary Access:
        A lot of data dictionary information is cached in the Dictionary Cache through
use of a least recently used (LRU) algorithm.
        Comments columns from the data dictionary are not usually cached.
 
 
Administrative Scripts
 
SQL*Plus scripts provided with Oracle extend beyond
the catalog.sql and catproc.sql scripts you've used already.  There are several
categories of administrative scripts.
 
cat*.sql:  In addition to catalog.sql and catproc.sql, there are other scripts to create
information used by Oracle utilities.
        catadt.sql creates data dictionary views to display metadata for types and other
objects in the ORDBMS (Object Relational DBMS). 
        The catnoadt.sql script drops these tables and views. 
 
dbms*.sql and pvt*.plb:  These scripts create objects for predefined Oracle packages
that can extend the Oracle server functionality. 
        dbmspool.sql is a script that enables a DBA to display the sizes of objects in
the shared pool and mark them for retention/removal in the SGA to reduce
shared pool fragmentation.
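After running dbmspool.sql, the package can be invoked from SQL*Plus.  A sketch –
the SIZES procedure lists shared pool objects larger than the given size in kilobytes:

```sql
SET SERVEROUTPUT ON SIZE 100000

-- List objects in the shared pool larger than 100 KB
EXECUTE DBMS_SHARED_POOL.SIZES(100);
```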
 
utl*.sql:  These scripts are run to add views and tables for database utilities.
        utlxplan.sql creates a table that can be used to view the execution plan for a
SQL statement.
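Once the plan table exists, you can generate and display an execution plan.  A sketch
using the DBMS_XPLAN package available in 11g (the sample query is arbitrary):

```sql
-- Create the plan table (one time)
@$ORACLE_HOME/rdbms/admin/utlxplan.sql

-- Generate a plan for a statement without executing it
EXPLAIN PLAN FOR
  SELECT owner, object_name FROM dba_objects WHERE status = 'INVALID';

-- Display the plan in readable form
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```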
 
 
 
 
 
 
 

 

Module 7 – Control File Maintenance


 

Objectives
 
These notes explain how:
        a control file is used.
        to examine control files contents. 
        to multiplex control files. 
        to manage control files with an Oracle Managed Files (OMF) approach.
 

Introduction
 
As you've learned thus far in the course, a Control File is a small binary file that
stores information needed to start up an Oracle database and to operate the
database.  
 

 
        A control file belongs to only one database. 
        A control file(s) is created at the same time the database is created based on
the CONTROL_FILES parameter in the PFILE.
        If all copies of the control files for a database are lost/destroyed, then database
recovery must be accomplished before the database can be opened.
        An Oracle database reads only the first control file listed in the PFILE; however, it
writes continuously to all of the control files (where more than one exists).
        You must never attempt to modify a control file as only the Oracle Server should
modify this file. 
        While control files are small, the size of the file is affected by the
following CREATE DATABASE or CREATE CONTROLFILE command
parameters if they have large values. 
o   MAXLOGFILES
o   MAXLOGMEMBERS
o   MAXLOGHISTORY
o   MAXDATAFILES
o   MAXINSTANCES
 
 

Contents of a Control File


 
Control files record the following information:
 
        Database name – recorded as specified by the initialization parameter
DB_NAME or the name used in the CREATE DATABASE statement.
        Database identifier – recorded when the database is created.
        Time stamp of database creation.

        Names and locations of datafiles and online redo log files.  This information is
updated if a datafile or redo log is added to, renamed in, or dropped from the
database.
        Tablespace information.  This information is updated as tablespaces are added
or dropped.
        Redo log history – recorded during log switches.
        Location and status of archived logs – recorded when archiving occurs.
        Location and status of backups – recorded by the Recovery Manager utility.
        Current log sequence number – recorded when log switches occur.
        Checkpoint information – recorded as checkpoints are made.
 

Multiplexing Control Files


 
Control files should be multiplexed – this means that more than one identical copy is
kept and each copy is stored to a separate, physical disk drive – of course your
Server must have multiple disk drives in order to do this.  Even if only one disk drive is
available, you should still multiplex the control files.
o   This eliminates the need to use database recovery if a copy of a control file is
destroyed in a disk crash or through accidental deletion.
o   You can keep up to eight copies of control files – the Oracle Server will
automatically update all control files specified in the initialization parameter file to
a limit of eight.
o   More than one copy of a control file can be created by specifying the location and
file name in the CONTROL_FILES parameter of the PFILE when the database is
created. 
o   During database operation, only the first control file listed in
the CONTROL_FILES parameter is read, but all control files listed are written to
in order to maintain consistency.
o   One approach to multiplexing control files is to store a copy to every disk drive
used to multiplex redo log members of redo log groups.
 
You can also add additional control files.  When using a PFILE, this is accomplished by
shutting down the database, copying an existing control file to a new file on a new disk
drive, editing the CONTROL_FILES parameter of the PFILE, then restarting the
database.
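A sketch of the PFILE approach, assuming the USER350 file locations used earlier in
these notes – the /u03 drive and third control file name are hypothetical:

```sql
SHUTDOWN IMMEDIATE

-- At the operating system prompt, copy an existing control file to the new drive:
-- $ cp /u01/student/dbockstd/oradata/USER350control01.ctl \
--      /u03/student/dbockstd/oradata/USER350control03.ctl

-- Edit the PFILE so CONTROL_FILES lists all copies:
-- CONTROL_FILES = ('/u01/student/dbockstd/oradata/USER350control01.ctl',
--                  '/u02/student/dbockstd/oradata/USER350control02.ctl',
--                  '/u03/student/dbockstd/oradata/USER350control03.ctl')

STARTUP
```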

 
 If you are using an SPFILE, you can use the steps specified in the figure shown
here.  The difference is you name the control file in the first step and create the copy in
step 3.
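With an SPFILE, the new file name is recorded before the copy is made.  A sketch of
the commands behind these steps – the /u03 location is hypothetical:

```sql
-- 1. Record the full control file list in the SPFILE (takes effect at next startup)
ALTER SYSTEM SET control_files =
  '/u01/student/dbockstd/oradata/USER350control01.ctl',
  '/u02/student/dbockstd/oradata/USER350control02.ctl',
  '/u03/student/dbockstd/oradata/USER350control03.ctl'
  SCOPE = SPFILE;

-- 2. Shut down the instance
SHUTDOWN IMMEDIATE

-- 3. At the OS prompt, copy an existing control file to the new location,
--    then restart the instance
STARTUP
```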

 
 

Create New Control Files Command


 
A DBA will create new control files in these situations:
 
        All control files for the database have been permanently damaged and you do
not have a control file backup.
        You want to change the database name.
o   For example, you would change a database name if it conflicted with
another database name in a distributed environment. 
o   Note:  You can change the database name and DBID (internal database
identifier) using the DBNEWID utility. See Oracle Database Utilities for
information about using this utility.
 
Example:
 
CREATE CONTROLFILE
   SET DATABASE USER350
   LOGFILE GROUP 1 ('/u01/oradata/prod/redo01_01.log',
                    '/u02/oracle/prod/redo01_02.log'),
           GROUP 2 ('/u01/oracle/prod/redo02_01.log',
                    '/u02/oracle/prod/redo02_02.log'),
           GROUP 3 ('/u01/oracle/prod/redo03_01.log',
                    '/u02/oracle/prod/redo03_02.log')
   RESETLOGS
   DATAFILE '/u01/student/dbockstd/oradata/USER350system01.dbf' SIZE 350M,
            '/u01/student/dbockstd/oradata/USER350sysaux01.dbf' SIZE 350M,
            '/u02/student/dbockstd/oradata/USER350undo01.dbf' SIZE 64M,
            '/u02/student/dbockstd/oradata/USER350users01.dbf' SIZE 5M,
            '/u02/student/dbockstd/oradata/USER350temp01.dbf' SIZE 64M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;
 
        The CREATE CONTROLFILE statement can potentially damage specified
datafiles and redo log files.
        It is only issued as a command in NOMOUNT stage.
        Omitting a filename can cause loss of the data in that file, or loss of access to
the entire database.
        Use caution when issuing this statement and be sure to follow the instructions
in "Steps for Creating New Control Files".
        If the database had forced logging enabled before creating the new control file,
and you want it to continue to be enabled, then you must specify theFORCE
LOGGING clause in the CREATE CONTROLFILE statement.
 
Steps to use when a control file must be recreated:
1.  Make a list of all datafiles and redo log files of the database.
SELECT MEMBER FROM V$LOGFILE;
SELECT NAME FROM V$DATAFILE;
SELECT VALUE FROM V$PARAMETER WHERE NAME = 'control_files';
If you have no such lists and your control file has been damaged so that the
database cannot be opened, try to locate all of the datafiles and redo log files
that constitute the database. Any files not specified in step 5 are not recoverable
once a new control file has been created. Moreover, if you omit any of the files
that comprise the SYSTEM tablespace, you might not be able to recover the
database.
2.  Shut down the database.
If the database is open, shut down the database normally if possible. Use
the IMMEDIATE or ABORT clauses only as a last resort.
3.  Back up all datafiles and redo log files of the database.
4.  Start up a new instance, but do not mount or open the database:
STARTUP NOMOUNT
5.  Create a new control file for the database using the CREATE
CONTROLFILE statement.
When creating a new control file, specify the RESETLOGS clause if you have
lost any redo log groups in addition to control files. In this case, you will need to
recover from the loss of the redo logs (step 8). You must specify the
RESETLOGS clause if you have renamed the database. Otherwise, select the
NORESETLOGS clause.

6.  Store a backup of the new control file on an offline storage device.
7.  Edit the CONTROL_FILES initialization parameter for the database to indicate
all of the control files now part of your database as created in step 5 (not
including the backup control file). If you are renaming the database, edit the
DB_NAME parameter in your instance parameter file to specify the new name.
8.  Recover the database if necessary. If you are not recovering the database, skip
to step 9.
If you are creating the control file as part of recovery, recover the database. If
the new control file was created using the NORESETLOGS clause (step 5), you
can recover the database with complete, closed database recovery.
If the new control file was created using the RESETLOGS clause, you must
specify USING BACKUP CONTROLFILE.  If you have lost online or archived
redo logs or datafiles, use the procedures for recovering those files.
9.  Open the database using one of the following methods:
        If you did not perform recovery, or you performed complete, closed
database recovery in step 8, open the database normally.
ALTER DATABASE OPEN;
        If you specified RESETLOGS when creating the control file, use the
ALTER DATABASE statement, indicating RESETLOGS.
ALTER DATABASE OPEN RESETLOGS;

 What if a Disk Drive Fails?  Recovering a Control File


 
Use the following steps to recover from a disk drive failure that has one of the
database’s control files located on the drive.
        Shut down the instance.
        Replace the failed drive.
        Copy a control file from one of the other disk drives to the new disk drive – here
we assume that u02 is the new disk drive and control02.ctl is the damaged file.
 $ cp /u01/oracle/oradata/control01.ctl
/u02/oracle/oradata/control02.ctl
 

        Restart the instance.  If the new media (disk drive) does not have the same disk
drive name as the damaged disk drive or if you are creating a new copy while
awaiting a replacement disk drive, then alter the CONTROL_FILES parameter in
the PFILE prior to restarting the database.
        No media recovery is required.
        If you are awaiting a new disk drive, you can alter
the CONTROL_FILES parameter to remove the name of the control file on the
damaged disk drive – this enables you to restart the database.
 

Backup Control Files and Create Additional Control Files


 
Oracle recommends backup of control files every time the physical database structure
changes including:
        Adding, dropping, or renaming datafiles.
        Adding or dropping a tablespace, or altering the read/write state of a tablespace.
        Adding or dropping redo log files or groups.
 
Use the ALTER DATABASE BACKUP CONTROLFILE statement to backup control
files.
 
ALTER DATABASE BACKUP CONTROLFILE TO
'/u02/oradata/backup/control.bkp';
 
Now use an SQL statement to produce a trace file (write a SQL script to the trace file)
that can be edited and used to reproduce the control file.
 
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
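By default, the trace script is written to the ADR trace directory.  In 11g you can
instead direct it to a file name of your choosing – the path below is hypothetical:

```sql
ALTER DATABASE BACKUP CONTROLFILE TO TRACE
  AS '/u02/oradata/backup/control_recreate.sql';
```

The resulting script contains CREATE CONTROLFILE statements that can be edited
and run to reproduce the control file.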
To create additional control files, follow the steps specified earlier for multiplexing
control files.
  
Dropping a Control File
 
Control files are dropped when a location is no longer appropriate, e.g., a disk drive
has been eliminated from use for a database. 
1.   Shut down the database.
2.   Edit the init.ora file CONTROL_FILES parameter by removing the old control file
name.
3.   Restart the database.
 

 Oracle Managed Files Approach


 
Control files are automatically created with the Oracle Managed Files (OMF)
approach during database creation even if you do not specify file locations/names with
the CONTROL_FILES parameter—it is preferable to specify file locations/names.
 
With OMF, if you wish to use the init.ora file to manage control files, you must use the
filenames generated by OMF.  
        The locations are specified by
the DB_CREATE_ONLINE_LOG_DEST_n parameter.
        If the above parameter is not specified, then their location is defined by
the DB_CREATE_FILE_DEST parameter. 
 
Control file names generated with OMF can be found within the alertSID.log that is
automatically generated by the CREATE DATABASE command and maintained by
the Oracle Server.

Control File Information


 
Several dynamic performance views and SQL*Plus commands can be used to obtain
information about control files. 
        V$CONTROLFILE – gives the names and status of control files for an Oracle
Instance.
        V$DATABASE – displays database information from a control file.
        V$PARAMETER – lists the status and location of all parameters. 
        V$CONTROLFILE_RECORD_SECTION – lists information about the control file
record sections. 
        SHOW PARAMETER CONTROL_FILES command – lists the name, status, and
location of control files. 
 
The queries shown here were executed against the DBORCL database used for
general instruction in our department.
 
CONNECT / AS SYSDBA
 
SELECT name FROM v$controlfile;
 
NAME
----------------------------------------------------------------
/u01/student/dbockstd/oradata/USER350control01.ctl
/u02/student/dbockstd/oradata/USER350control02.ctl
 
SELECT name, value FROM v$parameter
WHERE name='control_files';
 
NAME            VALUE
--------------- --------------------------------------------------
control_files   /u01/student/dbockstd/oradata/USER350control01.ctl,
                /u02/student/dbockstd/oradata/USER350control02.ctl
 
DESC v$controlfile_record_section;

 Name                   Null?    Type
 --------------------- -------- ----------------------------
 TYPE                           VARCHAR2(28)
 RECORD_SIZE                    NUMBER
 RECORDS_TOTAL                  NUMBER
 RECORDS_USED                   NUMBER
 FIRST_INDEX                    NUMBER
 LAST_INDEX                     NUMBER
 LAST_RECID                     NUMBER
  
SELECT type, record_size, records_total, records_used
FROM v$controlfile_record_section
WHERE type='DATAFILE';
 
TYPE                         RECORD_SIZE RECORDS_TOTAL RECORDS_USED
---------------------------- ----------- ------------- ------------
DATAFILE                        520            25           4

The RECORDS_TOTAL column shows the number of records allocated for the section
that stores information on data files. 
 Several dynamic performance views display information from control files including: 
        V$BACKUP
        V$DATAFILE,
        V$TEMPFILE
        V$TABLESPACE
        V$ARCHIVE
        V$LOG
        V$LOGFILE
 
 

Module 8 – Redo Log Files


 
Objectives
 
These notes cover the purpose of On-line Redo Log Files. 
        Learn the architectural approach for structuring Redo Log Files and Groups.
        Learn to switch logs and use checkpoints. 
        Learn to multiplex On-line and Off-line Redo Log Files.
 
On-Line Redo Log Files
 
Redo Log File Basics
 
Redo Log Files enable the Oracle Server or DBA to redo transactions if a database
failure occurs.  This is their ONLY purpose – to enable recovery.
 
Redo entries for transactions are written synchronously to the Redo Log Buffer in the
System Global Area. 
        All database changes are written to redo logs to enable recovery.
        As the Redo Log Buffer fills, the contents are written to Redo Log Files. 
        This includes uncommitted transactions, undo segment data, and schema/object
management information. 
        During database recovery, information in Redo Log Files enable data that has
not yet been written to datafiles to be recovered. 
 
Redo Thread
If a database is accessed by multiple instances, each instance has its own redo log,
called a redo thread.
        This applies mostly in an Oracle Real Application Clusters environment.
        Having a separate thread for each instance avoids contention when writing to
what would otherwise be a single set of redo log files - this eliminates a
performance bottleneck.
 
Redo Log File Organization – Multiplexing
 
The figure shown below provides the general approach to organizing on-line Redo Log
Files.  Initially Redo Log Files are created when a database is created, preferably in
groups to provide for multiplexing.  Additional groups of files can be added as the need
arises.
 

 
        Each Redo Log Group has identical Redo Log Files (however, each Group
does not have to have the same number of Redo Log Files).
        If you have Redo Log Files in Groups, you must have at least two Groups. The
Oracle Server needs a minimum of two on-line Redo Log Groups for normal
database operation. 
        The LGWR concurrently writes identical information to each Redo Log File in a
Group. 
        Thus, if Disk 1 crashes as shown in the figure above, none of the Redo Log Files
are truly lost because there are duplicates. 
        Redo Log Files in a Group are called Members. 
o   Each Group Member has an identical log sequence number and is the
same size – the members within a group cannot be different sizes. 
o   The log sequence number is assigned by the Oracle Server as it writes to
a log group and the current log sequence number is stored in the control
files and in the header information of all Datafiles – this enables
synchronization between Datafiles and Redo Log Files.
o   If the group has more members, you need more disk drives in order for the
use of multiplexed Redo Log Files to be effective.
 
A Redo Log File stores Redo Records (also called redo log entries).
        Each record consists of "vectors" that store information about:
o   changes made to a database block. 
o   undo block data.
o   transaction table of undo segments.
        These enable the protection of rollback information as well as the ability to roll
forward for recovery.
        Each time a Redo Log Record is written from the Redo Log Buffer to a Redo Log
File, a System Change Number (SCN) is assigned to the committed transaction.
 
 
Where to Store Redo Log Files and Archive Log Files
 
Guidelines for storing On-line Redo Log Files versus Archived Redo Log Files:
1.   Separate members of each Redo Log Group on different disks as this is required
to ensure multiplexing enables recovery in the event of a disk drive crash.
2.   If possible, separate On-line Redo Log Files from Archived Log Files as this
reduces contention for the I/O path between the ARCn and LGWR background
processes. 
3.   Separate Datafiles from On-line Redo Log Files as this
reduces LGWR and DBWn contention.  It also reduces the risk of losing both
Datafiles and Redo Log Files if a disk crash occurs.
 
You will not always be able to accomplish all of the above guidelines – your ability to
meet these guidelines will depend on the availability of a sufficient number of
independent physical disk drives.
 
 Redo Log File Usage
 
Redo Log Files are used in a circular fashion. 
        One log file is written in sequential fashion until it is filled, and then the second
redo log begins to fill.  This is known as a Log Switch.  
        When the last redo log is written, the database begins overwriting the first redo
log again. 
 

 
        The Redo Log file to which LGWR is actively writing is called the current log
file.
        Log files required for instance recovery are categorized as active log files.
        Log files no longer needed for instance recovery are categorized as inactive log
files.
        Active log files cannot be overwritten by LGWR until ARCn has archived the data
when archiving is enabled.
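The status of each group can be checked in the V$LOG dynamic performance view; a query of this form (run as SYSDBA) shows which group is CURRENT and which are ACTIVE or INACTIVE:

```sql
SELECT group#, sequence#, members, archived, status
FROM v$log
ORDER BY group#;
```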
 
 
Log Writer Failure
 
What if LGWR cannot write to a Redo Log File or Group?  Possible failures and the
results are:
1.   At least one Redo Log File in a Group can be written – Unavailable Redo Log
Group members are marked as Invalid, a LGWR trace file is generated, and an
entry is written to the alert file – processing of the database proceeds normally
while ignoring the invalid Redo Log Group members.
2.   LGWR cannot write to a Redo Log Group because it is pending archiving –
Database operation halts until the Redo Log Group becomes available (could be
through turning off archiving) or is archived.
3.   A Redo Log Group is unavailable due to media failure  – Oracle generates an
error message and the database instance shuts down.  During media recovery, if
the database did not archive the bad Redo Log, use this command to clear the
log group without archiving it so that the bad Redo Log can be dropped:
 
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP n;
 
4.   A Redo Log Group fails while LGWR is writing to the members – Oracle
generates an error message and the database instance shuts down.  Check to
see if the disk drive needs to be turned back on or if media recovery is
required.  In this situation, just turn on the disk drive and Oracle will perform
automatic instance recovery.
 
Sometimes a Redo Log File in a Group becomes corrupted while a database instance
is in operation. 
        Database activity halts because archiving cannot continue.
        Clear the Redo Log Files in a Group (here Group #2) with the statement:
 
ALTER DATABASE CLEAR LOGFILE GROUP 2;
 

How large should Redo Log Files be, and how many Redo Log
Files are enough? 
 
The size of the redo log files can influence performance, because the behavior of the
DBWn and ARCn processes (but not the LGWR process) depends on the redo log
sizes.
        Generally, larger redo log files provide better performance.
        Undersized log files increase checkpoint activity and reduce performance.
        It may not always be possible to give a specific size recommendation for redo
log files, but files in the range of a hundred megabytes to a few gigabytes are
considered reasonable.
        Size your online redo log files according to the amount of redo your system
generates.  A rough guide is to switch logs at most once every twenty minutes;
however, more frequent switches are common when using Data Guard for
primary and standby databases.
        It is also good for the file size to be such that a filled group can be archived to a
single offline storage unit when such an approach is used.
        If LGWR generates trace files and an alert file entry indicating that Oracle is
waiting because a checkpoint has not completed or a group has not been
archived, then test adding another redo log group (with its files).
 
 
This provides facts and guidelines for sizing Redo Log files.
        Minimum size for an On-line Redo Log File is 4MB.
        Maximum size and Default size depends on the operating system. 
        The file size depends on the size of transactions that process in the database.
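As a rough sizing sketch (the redo rate figures are assumptions, not from these notes): once the redo generation rate of a system is known, the file size needed to hit a target switch interval is simple arithmetic:

```python
import math

def redo_log_size_mb(redo_rate_mb_per_min: float,
                     target_switch_minutes: float = 20) -> int:
    """Estimate a redo log file size (MB) so that, at the given redo
    generation rate, a log switch occurs about once per target interval.
    Rounds up to the next whole megabyte."""
    return math.ceil(redo_rate_mb_per_min * target_switch_minutes)

# A system generating 25 MB of redo per minute needs roughly 500 MB
# log files for a switch about every 20 minutes.
print(redo_log_size_mb(25))
```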

 
 

Module 9 – Storage Structures


 

Objectives
 
        Learn the physical structures of a database including segments, extents and data
blocks. 
        Learn the different segment types.
        Learn about data block space usage.
  

Logical versus Physical Structures


  

 
 

Segment Types
 
Objects in an Oracle database such as tables, indexes, clusters, sequences, etc., are
comprised of segments.  There are several different types of segments.
 
Table:  Data are stored in tables.  When a table is created with the CREATE TABLE
command, a table segment is allocated to the new object.
        Table segments do not store table rows in any particular order.
        Table segments do not store data that is clustered or partitioned.
        The DBA has almost no control over the location of rows in a table.
        The segment belongs to a single tablespace.
 
Table Partition:  If a table has high concurrent usage, that is, simultaneous access by
many different system users as would be the case for a SALES_ORDER table in an
online transaction processing environment, you as the DBA will be concerned with
scalability and availability of information.  This may lead you to create a table that is
partitioned into more than one table partition segment.
        A partitioned table has a separate segment for each partition.
        Each partition may reside in a different tablespace.
        Each partition may have different storage parameters.
        The Oracle Enterprise Edition must have the partitioning option installed in order
to create a partitioned table.
 
Cluster:  Rows in a cluster segment are stored based on key value
columns.  Clustering is sometimes used where two tables are related in a strong-weak
entity relationship.
        A cluster may contain rows from two or more tables.
        All of the tables in a cluster belong to the same segment and have the same
storage parameters.

        Clustered table rows can be accessed by either a hashing algorithm or by
indexing.
 
Index:  When an index is created as part of the CREATE TABLE or CREATE
INDEX command, an index segment is created. 
        Tables may have more than one index, and each index has its own segment.
        Each index segment has a single purpose – to speed up the process of locating
rows in a table or cluster.
 
Index-Organized Table:  This special type of table has data stored within the index
based on primary key values.  All data is retrievable directly from the index structure (a
tree structure).
 
Index Partition:  Just as a table can be partitioned, so can an index.  The purpose of
using a partitioned index is to minimize contention for the I/O path by spreading index
input-output across more than one I/O path.
        Each partition can be in a different tablespace.
        The partitioning option of Oracle Enterprise Edition must be installed.
 
Undo:  An undo segment is used to store "before images" of data or index blocks prior
to changes being made during transaction processing.  This allows a rollback using the
before image information.
 
Temporary:  Temporary segments are created when commands and clauses such
as CREATE INDEX, SELECT DISTINCT, GROUP BY, and ORDER BY cause Oracle
to perform memory sort operations. 
        Often sort actions require more memory than is available. 
        When this occurs, intermediate results of sort actions are written to disk so that
the sort operation can continue – this allows information to swap in and out of
memory by writing/reading to/from disk. 
        Temporary segments store intermediate sort results.
 

LOB:  Large objects can be stored as one or more columns in a table.  Large objects
(LOBs) include images, separate text documents, video, sound files, etc. 
        These LOBs are not stored in the table – they are stored as separate segment
objects.
        The table with the column actually has a "pointer" value stored in the column
that points to the location of the LOB.
 
Nested Table:  A column in one table may consist of another table
definition.  The inner table is called a "nested table" and is stored as a separate
segment. This would be done for a SALES_ORDER table that has
the SALES_DETAILS (order line rows) stored as a nested table. 
 
Bootstrap Segment:  This is a special cache segment created by the sql.bsq script
that runs when a database is created. 
        It stores initial data dictionary cache information when a database is opened.
        This segment cannot be queried or updated and requires no DBA maintenance.
  

Storage Clauses/Parameters
 
When database objects are created, the object always has a set of storage
parameters.  This figure shows three ways that an object can obtain storage clause
parameters. 
 

Tablespaces have space managed depending on the type of tablespace:
        Locally Managed Tablespaces – use bitmaps to track used and free space –
Locally managed is the default for non-SYSTEM permanent tablespaces when
the type of extent management is not specified at the time a tablespace is
created.
o   Tablespace extents for Locally Managed are either (1) Uniform specified
with the UNIFORM clause or (2) variable extent sizes determined by the
system with the AUTOALLOCATE clause. 
  Uniform:
        Specify an extent size or use the 1MB default size.
        Each extent contains at least 5 database blocks.
  System Managed:
        Oracle determines optimal size of additional extents with a
minimum extent size of 64KB.
        With SEGMENT SPACE MANAGEMENT AUTO, the minimum
extent size is 1MB if the Database Block size is 16K or larger.
        Dictionary Managed Tablespaces – tables in the data dictionary track space
utilization.
 
Facts about storage parameters:
        Segment storage parameters can override the tablespace level defaults with the
exception of two parameters.  You cannot override the MINIMUM
EXTENT or UNIFORM SIZE tablespace parameters.
        If you do not specify segment storage parameters, then a segment will inherit the
tablespace default parameters.
        If tablespace default storage parameters are not set, the Oracle server system
default parameters are used.
        Locally managed tablespaces cannot have the storage
parameters INITIAL, NEXT, PCTINCREASE, and MINEXTENTS specified;
however, these parameters can be specified at the segment level. 

        When storage parameters of a segment are modified, the modification only
applies to extents that are allocated after the modification takes place.
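The inheritance rules above amount to a fallback chain.  A minimal sketch (the parameter values here are invented for illustration and are not Oracle's real server defaults):

```python
# Hypothetical server-wide defaults, used only when neither the segment
# nor the tablespace specifies a value.
SERVER_DEFAULTS = {"INITIAL": "64K", "NEXT": "64K", "PCTINCREASE": 50}

def resolve_storage(segment_params: dict, tablespace_defaults: dict) -> dict:
    """Segment-level settings override tablespace defaults, which in
    turn override the server defaults."""
    resolved = dict(SERVER_DEFAULTS)
    resolved.update(tablespace_defaults)
    resolved.update(segment_params)
    return resolved

# A segment specifying only INITIAL inherits NEXT from the tablespace
# and PCTINCREASE from the server defaults.
print(resolve_storage({"INITIAL": "128K"}, {"NEXT": "40K"}))
```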
 

Extents
 
Extents are allocated in chunks that are not necessarily uniform in size, but the space
allocated is contiguous on the disk drive as is shown in this figure.
        When a database object such as a table grows, additional disk space is
allocated to its segment of the tablespace in the form of an extent.
        This figure shows two extents of different sizes for the Department table
segment.
 

 
 
In order to develop an understanding of extent allocation to segments, review
this CREATE TABLESPACE command. 
 
CREATE TABLESPACE data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf'
    SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 40K; 
 
        The above specifies use of local management for extents.
        The default size for all extents is specified through the UNIFORM
SIZE parameter as 40K. 
        Since this parameter cannot be overridden, all segments in this tablespace will
be allocated extents that are 40K in size.
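With uniform extents, the number of extents a segment occupies is just the segment size divided by the extent size, rounded up.  A sketch with assumed sizes:

```python
import math

def extents_needed(segment_bytes: int, uniform_extent_bytes: int) -> int:
    """Number of uniform extents required to hold a segment."""
    return math.ceil(segment_bytes / uniform_extent_bytes)

# A 1 MB segment in the 40K-uniform tablespace above occupies 26 extents.
print(extents_needed(1024 * 1024, 40 * 1024))
```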
  
This next CREATE TABLESPACE command creates a dictionary managed
tablespace. 
Note:  You will not be able to execute this command for your database as
dictionary managed tablespaces are not allowed with Oracle 11g.
        Here the DEFAULT STORAGE parameter is used to specify the size of extents
allocated to segments created within the tablespace. 
        These parameters can be overridden by parameter specifications in the object
creation command, for example, a CREATE TABLE command. 
 
CREATE TABLESPACE data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE
20M
EXTENT MANAGEMENT DICTIONARY
DEFAULT STORAGE (
    INITIAL 128K
    NEXT 40K
    PCTINCREASE 50
    MINEXTENTS 1
    MAXEXTENTS 999); 
 
        INITIAL specifies the initial extent size (the first extent allocated).
o   A size that is too large here can cause failure of the database if there is not
any area on the disk drive with sufficient contiguous disk space to satisfy
the INITIAL parameter. 

o   When a database is built to store information from an older system that is
being converted to Oracle, a DBA may have some information about how
large initial extents need to be in general and may specify a larger size as
is done here at 128K.   
        NEXT specifies the size of the next extent (2nd, 3rd, etc).
o   This is termed an incremental extent. 
o   This can also cause failure if the size is too large. 
o   Usually a smaller value is used, but if the value is too small, segment
fragmentation can result. 
o   This must be monitored periodically by a DBA which is why dictionary
managed tablespaces are NOT preferred. 
        PCTINCREASE can be very troublesome.
o   If you set this very high, e.g., 50% as is shown here, each incremental extent
is 50% larger than the one before it, so the extent size can swell more than
57-fold within a dozen extents. 
o   Best solution: a single INITIAL extent of the correct size followed by a small
value for NEXT and a value of 0 (or a small value such as 5)
for PCTINCREASE.
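The exponential growth from PCTINCREASE can be checked directly.  A sketch (the function is illustrative only; Oracle actually rounds extent sizes up to whole database blocks, which this ignores): extent 1 is INITIAL, extent 2 is NEXT, and each later extent is PCTINCREASE percent larger than the previous one.

```python
def extent_sizes(initial_kb, next_kb, pctincrease, n_extents):
    """Extent sizes (KB) under dictionary-managed rules: extent 1 is
    INITIAL, extent 2 is NEXT, and each subsequent extent grows by
    PCTINCREASE percent (block rounding ignored in this sketch)."""
    sizes = [initial_kb]
    size = next_kb
    for _ in range(n_extents - 1):
        sizes.append(round(size, 1))
        size *= 1 + pctincrease / 100
    return sizes

# INITIAL 128K, NEXT 40K, PCTINCREASE 50 -- the 10th extent is already
# over 1 MB even though NEXT started at only 40K.
print(extent_sizes(128, 40, 50, 10))
```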
 
Use smaller default INITIAL and NEXT values for a dictionary-managed tablespace's
default storage clauses as these defaults can be overridden during the creation of
individual objects (tables, indexes, etc.) where the STORAGE clause is used.
 
        MINEXTENTS and MAXEXTENTS parameters specify the minimum and
maximum number of extents allocated by default to segments that are part of the
tablespace. 
 
The default storage parameters can be overridden when a segment is created as is
illustrated in this next section.
 

Example of a CREATE TABLE Command
This shows the creation of a table named Orders in the Data01 tablespace.
        Data01 is locally managed.
        The storage parameters specified here override the storage parameters for the
Data01 tablespace.
 
CREATE TABLE Orders ( 
    Order_Id        NUMBER(3) PRIMARY KEY, 
    Order_Date      DATE DEFAULT (SYSDATE), 
    Ship_Date       DATE, 
    Client          VARCHAR(3) NOT NULL, 
    Amount_Due      NUMBER(10,2), 
    Amount_Paid     NUMBER(10,2) ) 
PCTFREE 5 PCTUSED 65 
STORAGE (
    INITIAL 48K
    NEXT 48K
    PCTINCREASE 5
    MINEXTENTS 1
    MAXEXTENTS UNLIMITED ) 
TABLESPACE Data01; 

Allocation/Deallocation:  When a tablespace is initially created, the first datafile (and
subsequent datafiles) created to store the tablespace has a header which may be one
or more blocks at the beginning of the file as is shown in the figure below. 
        As segments are created, extended, or altered free extents are allocated. 
        The below figure shows that extents can vary in size.
        This figure represents a Locally Managed tablespace where the Locally
Managed tablespace's extent size is specified by the EXTENT MANAGEMENT
LOCAL AUTOALLOCATE clause—recall that AUTOALLOCATE enables Oracle to
decide the appropriate extent size for a segment.  In an older Oracle database, it
could also represent a Dictionary Managed tablespace.
        As segments are dropped, altered, or truncated, extents are released to become
free extents available for reallocation. 

        The first extent is allocated to a segment, even though the data blocks may be
empty.
        Oracle formats the blocks for an extent only as they are used - they can actually
contain old data.
        Extents for a segment must always be in the same tablespace, but can be in
different datafiles.
        The first data block of every segment contains a directory of the extents in the
segment.
        If you delete data from a segment, the extents/blocks are not returned to the
tablespace for reuse.  Deallocation occurs when:
o   You DROP a segment.
o   You use an online segment shrink to reclaim fragmented space in a
segment.
 
ALTER TABLE employees ENABLE ROW MOVEMENT;
ALTER TABLE employees SHRINK SPACE CASCADE;
 
o   You can rebuild or coalesce an index segment.
o   You truncate a table or table cluster, which removes all rows.
        Over time, segments in a tablespace's datafiles can become fragmented due to
the addition of extents as is shown in this figure.
 

 Database Block
 
The Database Block or simply Data Block, as you have learned, is the smallest size
unit for input/output from/to disk in an Oracle database.
        A data block may be equal to an operating system block in terms of size, or may
be larger in size, and should be a multiple of the operating system block.
        The DB_BLOCK_SIZE parameter sets the size of a database's standard blocks
at the time that a database is created. 
        DB_BLOCK_SIZE has to be a multiple of the physical block size allowed by the
operating system for a server’s storage devices. 
        If DB_BLOCK_SIZE is not set, then the default data block size is operating
system-specific. The standard data block size for a database is 4KB or 8KB.
  

 
         Oracle also supports the creation of databases that have more than one block
size.  This is primarily done when you need to specify tablespaces with different block
sizes in order to maximize I/O performance. 
        You've already learned that a database can have up to four nonstandard block
sizes specified. 
        Block sizes must be sized as a power of two between 2K and 32K in size,
e.g., 2K, 4K, 8K, 16K, or 32K. 
        A sub cache of the Database Buffer Cache is configured by Oracle for each
nonstandard block size. 
 
Standard Block Size:  The DB_CACHE_SIZE parameter specifies the size of
the Database Buffer Cache.  However, if SGA_TARGET is set
and DB_CACHE_SIZE is not, then Oracle decides how much memory to allocate to
the Database Buffer Cache.  The minimum size for DB_CACHE_SIZE must be
specified as follows:
        One granule where a granule is a unit of contiguous virtual memory allocation in
RAM. 
        If the total System Global Area (SGA) based on SGA_MAX_SIZE is less
than 128MB, then a granule is 4MB.
        If the total SGA is greater than 128MB, then a granule is 16MB. 
        The default value for DB_CACHE_SIZE is 48MB rounded up to the nearest
granule size. 
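The granule rules above reduce to a small function (a sketch of the 11g rule stated here; later Oracle releases use different thresholds):

```python
MB = 1024 * 1024

def granule_size(sga_max_size_bytes: int) -> int:
    """Granule size per the rule above: 4 MB when the SGA is smaller
    than 128 MB, otherwise 16 MB."""
    return 4 * MB if sga_max_size_bytes < 128 * MB else 16 * MB

def round_up_to_granule(cache_bytes: int, sga_max_size_bytes: int) -> int:
    """DB_CACHE_SIZE is rounded up to a whole number of granules."""
    g = granule_size(sga_max_size_bytes)
    return -(-cache_bytes // g) * g  # ceiling division

# The 48 MB default cache with a 256 MB SGA is exactly 3 granules of 16 MB.
print(round_up_to_granule(48 * MB, 256 * MB) // MB)
```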
 
Nonstandard Block Size:  If a DBA wishes to specify one or more nonstandard block
sizes, the following parameters are set. 
        The data block sizes should be a multiple of the operating system's block size
within the maximum limit to avoid unnecessary I/O.
        Oracle data blocks are the smallest units of storage that Oracle can use or
allocate.
        Do not use the specified DB_BLOCK_SIZE value to set nonstandard block
sizes. 
        For example, if the standard block size is 8K, do not use
the DB_8K_CACHE_SIZE parameter.
        DB_2K_CACHE_SIZE   -- parameter for 2K nonstandard block sizes.
        DB_4K_CACHE_SIZE   -- parameter for 4K nonstandard block sizes.
        DB_8K_CACHE_SIZE   -- parameter for 8K nonstandard block sizes.
        DB_16K_CACHE_SIZE   -- parameter for 16K nonstandard block sizes.
        DB_32K_CACHE_SIZE   -- parameter for 32K nonstandard block sizes.
 
Nonstandard Block Size Tablespaces:  The BLOCKSIZE parameter is used to
create a tablespace with a nonstandard block size.  Example:
 CREATE TABLESPACE special_apps
  DATAFILE
'/u01/student/dbockstd/oradata/USER350_spec_apps01.dbf'
  SIZE 20M
  BLOCKSIZE 32K; 

 
        Here the nonstandard block size specified with the BLOCKSIZE clause is 32K. 
        This command will not execute unless the DB_32K_CACHE_SIZE parameter
has already been specified because buffers of size 32K must already be
allocated in the Database Buffer Cache as part of a sub cache.
 
There are some additional rules regarding the use of multiple block sizes:
        If an object is partitioned and resides in more than one tablespace, all of the
tablespaces where the object resides must be the same block size.
        Temporary tablespaces must be the standard block size.  This also applies to
permanent tablespaces that have been specified as default temporary
tablespaces for system users.
 
What Block Size To Use?
Use the largest block size available with your operating system for a new database.
        Using a larger database block size should improve almost every performance
factor.
        Larger database block sizes keep indexes from splitting levels.
        Larger database block sizes keep more data in memory longer.
        If the database has excessive buffer busy waits (due to a large # of users
performing updates and inserts), then increase the freelists parameter setting for
the table or other busy objects.
 

Data Block Contents


 
This figure shows the components of a data block.  This is the structure regardless of
the type of segment to which the block belongs.
 

 
Block header – contains common and variable components including the block
address, segment type, and transaction slot information. 
        The block header also includes the table directory and row directory. 
        On average, the fixed and variable portions of block overhead
total 84 to 107 bytes.
        Table Directory – used to track the tables to which row data in the block
belongs.
o   Data from more than one table may be in a single block if the data are
clustered.
o   The Table Directory is only used if data rows from more than one table
are stored in the block, for example, a cluster.
        Row Directory - used to track which rows from a table are in this block. 
o   The Row Directory includes an entry for each row or row fragment in the
row data area.

o   When space is allocated in the Row Directory to store information about a
row, this space is not reclaimed upon deletion of a row, but is reclaimed
when new rows are inserted into the block.
o   A block can be empty of rows, but if it once contained rows, then data will
be allocated in the Row Directory (2 bytes per row) for each row that ever
existed in the block.
        Transaction Slots are space that is used when transactions are in progress that
will modify rows in the block. 
        The block header grows from top down.
        Data space (Row Data) – stores row data that is inserted from the bottom up.
 
Free space in the middle of a block can be allocated to either the header or data
space, and is contiguous when the block is first allocated.
        Free space is allocated to allow variable character and numeric data to expand
and contract as data values in existing rows are modified. 
        New rows are also inserted into free space. 
        Free space may fragment as rows in the block are modified or deleted. 
 
Oracle (the SMON background process) automatically and transparently coalesces the
free space of a data block periodically only when the following conditions are true:
        An INSERT or UPDATE statement attempts to use a block that contains
sufficient free space to contain a new row piece.
        The free space is fragmented so that the row piece cannot be inserted in a
contiguous section of the block.
 
After coalescing, the amount of free space is identical to the amount before the
operation, but the space is now contiguous. This figure shows before and after
coalescing free space.
 

 
 
Table Data in a Segment:  Table data is stored in the form of rows in a data block. 
        The figures below show the block header then the data space (row data) and the
free space.
        Each row consists of columns with associated overhead. 
        The storage overhead is in the form of "hidden" columns accessible by the
DBMS that specify the length of each succeeding column.
 
 

 
 
        Rows are stored right next to each other with no spaces in between. 
        Column values are stored right next to each other in a variable length format. 
        The length of a field indicates the length of each column value (variable length -
Note the Length Column 1, Length Column 2, etc., entries in the figure).
        Column length of 0 indicates a null field. 
        Trailing null fields are not stored.
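The variable-length row format described above can be illustrated with a toy encoder (purely illustrative, not Oracle's actual on-disk format): each column value is preceded by its length, a null column stores a bare zero length, and trailing nulls are dropped entirely.

```python
def encode_row(columns):
    """Toy length-prefixed row encoding: drop trailing NULLs, then store
    each column as (length byte, value bytes); NULL stores length 0."""
    # Trailing null fields are not stored at all.
    while columns and columns[-1] is None:
        columns = columns[:-1]
    out = bytearray()
    for col in columns:
        if col is None:
            out.append(0)          # null field: length 0, no data follows
        else:
            data = str(col).encode("utf-8")
            out.append(len(data))  # "hidden" length column, then the value
            out.extend(data)
    return bytes(out)

# 5 bytes of "Smith", a zero for the interior null, 2 bytes of "42";
# the two trailing nulls are not stored.
print(encode_row(["Smith", None, 42, None, None]))
```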
 

Row Chaining and Migrating


 
There are two situations where a data row may not fit into a single data block:
        The row is too large to fit into one data block when it is first inserted, or the table
contains more than 255 columns (the maximum for a row piece). 
o   In this case, Oracle stores the data for the row in a chain of data blocks
(one or more) reserved for that segment. 
o   Row chaining most often occurs with large rows, such as rows that contain
a column of datatype LONG or LONG RAW. 
o   Row chaining in these cases is unavoidable.
 

 
        A row that originally fit into one data block has one or more columns updated so
that the overall row length increases, and the block's free space is already
completely filled.
o   In this case, Oracle migrates the data for the entire row to a new data
block, assuming the entire row can fit in a new block.
o   Oracle preserves the original row piece of a migrated row to point to the
new block containing the migrated row.
o   The rowid of a migrated row does not change.
 

 
When a row is chained or migrated, I/O performance associated with this
row decreases because Oracle must scan more than one data block to retrieve the
information for the row.
 
 

Manual Data Block Free Space Management -- Database Block


Space Utilization Parameters
 
Manual data block management requires a DBA to specify how block space is used
and when a block is available for new row insertions. 
        This is the default method for data block management for dictionary managed
tablespace objects (another reason for using locally managed tablespaces with
UNIFORM extents). 

        Database block space utilization parameters are used to control space allocation
for data and index segments. 
 
The INITRANS parameter:
        specifies the initial number of transaction slots created when a database block
is initially allocated to either a data or index segment. 
        These slots store information about the transactions that are making changes to
the block at a given point in time. 
        The amount of space allocated for a transaction slot is 23 bytes.
        If you set INITRANS to 2, then 46 bytes (2 * 23) are pre-allocated in the block
header.
        These slots are in the database block header.
         
        INITRANS also specifies a minimum level of concurrent access. 
        The default is 1 for a data segment and 2 for an index segment. 
        If a DBA specifies INITRANS at 4, for example, this means that 4 transactions
can concurrently make modifications to the database block. 
        Also, setting this to a figure larger than the default can eliminate the
processing overhead that occurs whenever additional transaction slots have to
be allocated in a block's header when the number of concurrent transactions
exceeds the INITRANS parameter.
 
The MAXTRANS parameter:
        specifies the maximum number of concurrent transactions that can modify rows
in a database block. 
        The default maximum is 255, which is quite large.
         This parameter is set to guarantee that sufficient space remains in the block to
store row or index data, since each additional transaction slot consumes 23 bytes
of block space.
 
Example:  Suppose a DBA
sets INITTRANS at 4 and MAXTRANS at 10.  Initially, 4 transaction slots are allocated
in the block header.  If 6 system users process concurrent transactions for a given
block, then the number of transaction slots increases by 2 slots to 6 slots.  Once this
space is allocated in the header, it is not deallocated. 
 

What happens if 11 system users attempt to process concurrent transactions for a
given block?  The 11th system user is denied access – an Oracle error message is
generated – until current transactions complete (either are committed or rolled back). 
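 
Both parameters can be set when an object is created.  A sketch of the scenario
above (the table name is hypothetical; note that MAXTRANS is deprecated in recent
releases, where Oracle simply uses the maximum of 255):
 
CREATE TABLE bonus
  (emp_id NUMBER,
   amount NUMBER(9,2))
  INITRANS 4
  MAXTRANS 10;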
 
 

The PCTFREE and PCTUSED Parameters


 
You, as the DBA, must decide how much Free Space is needed for data blocks in
manual management of data blocks. 
 
You set the free space with the PCTFREE and PCTUSED parameters at the time that
you create an object like a Table or Index. 
 
PCTFREE:  The PCTFREE parameter is used at the time an object is created to set
the percentage of usable block space to be reserved during row insertion for possible
later updates to rows in the block. 
        PCTFREE is the only space parameter used for Automatic Segment Space
Management.
        The parameter guarantees that at least PCTFREE space is reserved for updates
to existing data rows.  PCTFREE reserves space for growth of existing rows
through the modification of data values.
        This figure shows the situation where the PCTFREE parameter is set
to 20 (20%). 
        The default value for PCTFREE is 10%.
        New rows can be added to a data block as long as the amount of space
remaining is at or greater than PCTFREE. 
        After PCTFREE is met (this means that there is less space available than
the PCTFREE setting), Oracle considers the block full and will not insert new
rows to the block.

 
PCTUSED:  The parameter PCTUSED is used to set the level at which a block can
again be considered by Oracle for insertion of new rows.  It is like a low water mark
whereas PCTFREE is a high water mark.  The PCTUSED parameter sets the minimum
percentage of a block that can be used for row data plus overhead before new rows
are added to the block.
        After a data block is filled to the limit determined by PCTFREE, Oracle Database
considers the block unavailable for the insertion of new rows until the percentage
of that block falls beneath the parameter PCTUSED.
        As free space grows (the space allocated to rows in a database
block decreases due to deletions or updates), the block can again have new
rows inserted but only if the percentage of the data block in use falls
below PCTUSED. 
        Example:  if PCTUSED is set at 40, once PCTFREE is hit, the percentage of
block space used must drop to 39% or less before row insertions are again
made.

        The system default for PCTUSED is 40. 
        Oracle tries to keep a data block at least PCTUSED full before using new blocks.
        The PCTUSED parameter is not set when Automatic Segment Space
Management is enabled.  This parameter only applies when Manual Segment
Space Management is in use.
 
This figure depicts the situation where PCTUSED is set to 40 and PCTFREE is set
to 20 (40% and 20% respectively).

Both PCTFREE and PCTUSED are calculated as percentages of the available data
space – Oracle deducts the space allocated to the block header from the total block
size when computing these parameters.
 
Generally PCTUSED plus PCTFREE should add up to 80.  The sum
of PCTFREE and PCTUSED cannot exceed 100.  If PCTFREE is 20,
and PCTUSED is 60, this will ensure at least 60% of each block is used while
saving 20% for row updates.
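 
Both parameters are specified in the CREATE TABLE (or CREATE INDEX) statement.
A sketch using the figures above (the table name is hypothetical):
 
CREATE TABLE department
  (dept_id   NUMBER,
   dept_name VARCHAR2(30))
  PCTFREE 20
  PCTUSED 60;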
 
Effects of PCTFREE and PCTUSED:
 
A high PCTFREE has these effects:
        There is a lot of space for the growth of existing rows in a data block.
        Performance is improved since data blocks do not need to be reorganized very
frequently.
        Performance is improved because chaining is reduced.
        Storage space within a data block may not be used efficiently as there is always
some empty space in the data blocks.
 
A low PCTFREE has these effects (basically the opposite effect of high PCTFREE):
        There is less space for growth of existing rows.
        Performance may suffer due to the need to reorganize data in data blocks more
frequently:
o   Oracle may need to migrate a row that will no longer fit into a data block
due to modification of data within the row.
o   If the row will no longer fit into a single database block, as may be the case
for very large rows, then database blocks are chained together logically
with pointers.  This also causes a performance hit.  This may also cause a
DBA to consider the use of a nonstandard block size.  In these situations,
I/O performance will degrade. 

o   Examine the extent of chaining or migrating with
the ANALYZE command.  You may resolve row chaining and migration by
exporting the object (table), dropping the object, and then importing the
object.
        Chaining may increase resulting in additional Input/Output operations.
        Very little storage space within a data block is wasted.
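 
The ANALYZE command mentioned above can also list the affected rows.  A sketch,
assuming the CHAINED_ROWS table has first been created by running the Oracle-
supplied utlchain.sql script, and that the EMPLOYEE table name is hypothetical:
 
ANALYZE TABLE employee LIST CHAINED ROWS INTO chained_rows;
 
SELECT owner_name, table_name, head_rowid
FROM chained_rows
WHERE table_name = 'EMPLOYEE';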
 
A high PCTUSED has these effects:
        Decreases performance because data blocks may experience more migrated
and chained rows.
        Reduces wasted storage space by filling each data block more fully.
 
A low PCTUSED has these effects:
        Performance improves due to a probable decrease in migrated and chained
rows.
        Storage space usage is not as efficient due to more unused space in data
blocks.
 
Guidelines for setting PCTFREE and PCTUSED:
 
If data for an object tends to be fairly stable (doesn't change in value very much), not
much free space is needed (as little as 5%).  If changes occur extremely often and
data values are very volatile, you may need as much as 40% free space.  Once this
parameter is set, it cannot be changed without at least partially recreating the object
affected.
         Update activity with high row growth – the application uses tables that are
frequently updated affecting row size – set PCTFREE moderately high
and PCTUSED moderately low to allow for space for row growth.
PCTFREE = 20 to 25
PCTUSED = 35 to 40
(100 – PCTFREE) – PCTUSED = 35 to 45
 
        Insert activity with low row growth – the application has more insertions of
new rows with very little modification of existing rows – set PCTFREE low
and PCTUSED at a moderate level.  This will avoid row chaining.  Each data
block has its space well utilized, but once a block fills, new rows are not inserted
into it until a substantial amount of storage space is again available in the
block, which minimizes migration and chaining.
PCTFREE = 5 to 10
PCTUSED = 50 to 60
(100 – PCTFREE) – PCTUSED = 30 to 45
 
        Performance primary importance and disk space is readily available – when
disk space is abundant and performance is the critical issue, a DBA must ensure
minimal migration or chaining occurs by using very high PCTFREE and very
low PCTUSED settings.  A lot of storage space will be wasted to minimize
migration and chaining.
PCTFREE = 30
PCTUSED = 30
(100 – PCTFREE) – PCTUSED = 40
 
        Disk space usage is important and performance is secondary – the
application uses large tables and disk space usage
is critical.  Here PCTFREE should be very low while PCTUSED is very high – the
tables will experience some data row migration and chaining with a performance
hit.
PCTFREE = 5
PCTUSED = 90
(100 – PCTFREE) – PCTUSED = 5
 
Free lists:  With Manual Segment Space Management, when a segment is created, it
is created with a Free List that is used to track the blocks allocated to the segment that
are available for row insertions. 
        A segment can have more than one free list if the FREELISTS parameter is
specified in the storage clause when an object is created. 
        If a block has free space that falls below PCTFREE, that block is removed from
the free list.
        Oracle improves performance by not considering blocks that are almost full as
candidates for row insertions.
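 
The FREELISTS parameter is specified in the storage clause.  A sketch (the table
name is hypothetical, and the tablespace must use Manual Segment Space
Management for the parameter to have any effect):
 
CREATE TABLE orders
  (order_id NUMBER,
   total    NUMBER(11,2))
  STORAGE (FREELISTS 2);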
  

Automatic Segment Space Management


 
Free space can be managed either automatically or manually. 
        Automatic simplifies the management of
the PCTUSED, FREELISTS, and FREELIST GROUPS parameters.
        Automatic generally provides better space utilization where objects may vary
considerably in terms of row size. 
        This can also yield improved concurrent access handling for row insertions. 
        A restriction is that you cannot use this approach if a tablespace will
contain LOBs.
 
The free and used space for a segment is tracked with bitmaps instead of free lists.
        The bitmap is stored in the header section of the segment, in a separate set of
blocks called bitmapped blocks. 
        The bitmap tracks the status of each block in a segment with respect to available
space. 
        Think of an individual bit as either being "on" to indicate the block is available
or "off" to indicate the block is not available.
        When a new row needs to be inserted into a segment, the bitmap is searched for
a candidate block.  This search is much faster than a Free List search because
the bitmap can often be held entirely in memory, whereas a Free List requires
traversing a chained data structure (linked list).
 
Automatic segment space management can only be enabled at the tablespace level, and only
if the tablespace is locally managed.  An example CREATE TABLESPACE command
is shown here.
 
CREATE TABLESPACE user_data
DATAFILE '/u01/student/dbockstd/oradata/USER350data01.dbf' SIZE
20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 40K
SEGMENT SPACE MANAGEMENT AUTO; 
 
The SEGMENT SPACE MANAGEMENT AUTO clause specifies the creation of the
bitmapped segments. 
 
Automatic segment space management offers the following benefits:
        Ease of use.
        Better space utilization, especially for the objects with highly varying size rows.
        Better run-time adjustment to variations in concurrent access.
        Better multi-instance behavior in terms of performance/space utilization.
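 
You can confirm which method a tablespace uses by querying the data dictionary –
the SEGMENT_SPACE_MANAGEMENT column shows either AUTO or MANUAL:
 
SELECT tablespace_name, segment_space_management
FROM dba_tablespaces
WHERE tablespace_name = 'USER_DATA';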
 
Statements that can increase the amount of free space in a database block: 
        DELETE statements that delete rows, and
        UPDATE statements that update a column value to a smaller value than was
previously required.
        INSERT statements, but only if the tablespace allows for compression and the
INSERT causes data to be compressed, thereby freeing up some space in a
block.
        These statements release space that can be used subsequently by an
INSERT statement.
        Released space may or may not be contiguous with the main area of free space
in a data block.
 
Oracle coalesces the free space of a data block only when:

        An INSERT or UPDATE statement attempts to use a block that contains enough
free space to contain a new row piece, and
        the free space is fragmented so the row piece cannot be inserted in a
contiguous section of the block.
 
Oracle performs this coalescing only in such situations, because otherwise the
performance of a database system would decrease due to the continuous coalescing of
the free space in data blocks.
 

Using the Data Dictionary to Manage Storage


 
Periodically you will need to obtain information from the data dictionary about storage
parameter settings.  The following views are useful.
        DBA_EXTENTS – information on space allocation for segments.
        DBA_SEGMENTS – stores information on segments.
        DBA_TABLESPACES – a row is added when a tablespace is created. 
        DBA_DATA_FILES – a row is added for each datafile in the database.
        DBA_FREE_SPACE – shows the space in each datafile that is free.
 
SELECT owner, segment_name, tablespace_name
FROM dba_extents;
 
OWNER      SEGMENT_NAME    TABLESPACE_NAME
---------- --------------- ---------------
SYS        FILE$           SYSTEM
SYS        BOOTSTRAP$      SYSTEM
SYS        OBJ$            SYSTEM
. . . more rows

 
SELECT owner, segment_name, tablespace_name "TS Name",
    initial_extent "Init", next_extent "Next",
    min_extents "Min", max_extents "Max", pct_increase "Pct"
FROM dba_segments
WHERE owner = 'DBOCK';
 
OWNER SEGMENT_NAME    TS Name    Init    Next  Min    Max  Pct
----- --------------- ------- ------- ------- ---- ------ ----
DBOCK DEPARTMENT      DATA01    49152   49152    1   1000    1
DBOCK DEPT_LOCATIONS  DATA01    49152   49152    1   1000    1
DBOCK PROJECT         DATA01    49152   49152    1   1000    1
DBOCK EMPLOYEE        DATA01    49152   49152    1   1000    1
DBOCK ASSIGNMENT      DATA01    49152   49152    1   1000    1
DBOCK DEPENDENT       DATA01    49152   49152    1   1000    1
 
SELECT owner, segment_name, extents, blocks
FROM dba_segments
WHERE owner = 'DBOCK';
 
OWNER SEGMENT_NAME       EXTENTS     BLOCKS
----- --------------- ---------- ----------
DBOCK TWONAME                  1         15
DBOCK WEATHER                  1         15
DBOCK WORKERANDSKILL           2         30
DBOCK PK_DEPARTMENT            1         15
 

SELECT tablespace_name "Ts Name", Count(*),
    MAX(blocks), SUM(blocks)
FROM dba_free_space
GROUP BY tablespace_name;
 
Ts Name         COUNT(*) MAX(BLOCKS) SUM(BLOCKS)
--------------- -------- ----------- -----------
DATA01                 1         172         172
DATA02                 1         120         120
INDEX01                1         248         248
SYSAUX                 1       36688       36688
SYSTEM                 1       16016       16016
UNDO01                52        1144        8944
USERS                  1         632         632
 
 
 

 
Module 10 – Managing Undo Data
 

Objectives
 
These notes teach you about managing undo data including the method used to
implement automatic undo data management.  You will also learn to create and modify
undo segments and how to query the data dictionary to retrieve undo segment
information.
        Beginning with Release 11g, for a default installation, Oracle Database
automatically manages undo.
        There is typically no need for DBA intervention.
        If your installation uses Oracle Flashback operations, you may need to perform
some undo management tasks to ensure the success of these operations.
 

Undo Purpose
 
Undo records are used to:
        Roll back transactions when a ROLLBACK statement is issued
        Recover the database
        Provide read consistency
        Analyze data as of an earlier point in time by using Oracle Flashback Query
        Recover from logical corruptions using Oracle Flashback features
 

Transactions
 
Transaction – collection of SQL data manipulation language (DML) statements treated
as a logical unit.
        Failure of any statement results in the transaction being "undone".
        If all statements process, SQL*Plus or the programming application will issue
a COMMIT to make database changes permanent.
        Transactions implicitly commit if a user disconnects from Oracle normally.

        Abnormal disconnections result in transaction rollback.
        The command ROLLBACK is used to cancel (not commit) a transaction that is in
progress.
 
SET TRANSACTION – Transaction boundaries can be defined with the SET
TRANSACTION command.
        There is no performance benefit to setting transaction boundaries, but
doing so enables defining a savepoint.
        Savepoint – allows a sequence of DML statements in a transaction to be
partitioned so you can roll back one or more or commit the DML statements up to
the savepoint.
        Savepoints are created with the SAVEPOINT savepoint_name command.
        DML statements since the last savepoint are rolled back with the ROLLBACK
TO SAVEPOINT savepoint_name command.
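 
A sketch of savepoint use (the EMPLOYEE table and column names are
hypothetical):
 
INSERT INTO employee (emp_id) VALUES (100);
SAVEPOINT before_raise;
UPDATE employee SET salary = salary * 1.10;
ROLLBACK TO SAVEPOINT before_raise;
COMMIT;
 
The ROLLBACK TO SAVEPOINT command undoes only the UPDATE; the INSERT is
still pending and is made permanent by the COMMIT.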
 
 

Undo vs. Rollback


 
In earlier versions of Oracle, the term rollback was used instead of undo, and instead
of managing undo segments, the DBA was responsible for managing rollback
segments. 
        Rollback segments were one of the primary areas where problems often arose;
thus, the conversion to automatic undo management is a significant
improvement.
        You will see parts of the data dictionary and certain commands still use the
term Rollback for backward compatibility.
 

 
Undo Segments
 
There are two methods for managing undo data: 
(1) automatic undo management – automatic undo management is preferred.
        This is the type of undo management used when you create an UNDO
tablespace and specify use of automatic undo management. 
        Automatic undo management is the default for Oracle 11g for a new
database.
(2) manual undo management – manual undo management is the only method
available for Oracle 8i and earlier versions of Oracle and is the type of
management that involves use of rollback segments.
 

 
Undo data – old data values from tables are saved as undo data by writing a copy of
the image from a data block on disk to an undo segment.  This also stores the
location of the data as it existed before modification.
 
Undo segment header – this stores a transaction table where information about
current transactions using this particular segment is stored.
        A serial transaction uses only one undo segment to store all of its undo data. 
        A single undo segment can support multiple concurrent transactions.
 
Purpose of Undo Segments – Undo segments have three purposes:  (1) Transaction
Rollback, (2) Transaction Recovery, and (3) Read Consistency.
 
Transaction Rollback:  Old images of modified columns are saved as undo data to
undo segments. 
        If a transaction is rolled back because it cannot be committed or the application
program directs a rollback of the transaction, the Oracle server uses the undo
data to restore the original values by writing the undo data back to the table/index
row.
        If you disconnect abnormally, rollback of uncommitted transactions is
automatic.
 
Transaction Recovery:  Sometimes an Oracle Instance will fail and transactions in
progress will not complete nor be committed. 
        Redo Logs bring both committed and uncommitted transactions forward to
the point of instance failure.
        Undo data is used to undo any transactions that were not committed at the point
of failure.
        Recovery is covered in more detail in a later set of notes.
 
Read Consistency:  Many users will simultaneously access a database. 
        These users should not see modifications to the database that have not
yet been committed. 
        Also, if a system user begins a program statement execution, the statement
should not see any changes that are committed after the transaction begins. 
        Old values stored in undo segments are provided to system users accessing
table rows that are in the process of being changed by another system user in
order to provide a read-consistent image of the data.
 

 
In the figure shown above, an UPDATE command has a lock on a data block from
the EMPLOYEE table and an undo image of the block is written to the undo
segment.  The update transaction has not yet committed, so any
concurrent SELECT statement by a different system user will result in data being
displayed from the undo segment, not from the EMPLOYEE table.  This read-
consistent image is constructed by the Oracle Server.
 
 

Undo Segment Types


 
A SYSTEM undo segment is created in the SYSTEM tablespace when a database is
created. 
        SYSTEM undo segments are used for modifications to objects stored in the
SYSTEM tablespace. 
        This type of Undo Segment works identically in both manual and automatic
mode.
 

Databases with more than one tablespace must have at least one non-SYSTEM undo
segment for manual mode or a separate Undo tablespace for automatic mode. 
 
Manual Mode:  A non-SYSTEM undo segment is created by a DBA and is used for
changes to objects in a non-SYSTEM tablespace.  There are two types of non-
SYSTEM undo segments:  (1) Private and (2) Public.
 
Private Undo Segments:  These are brought online by an instance if they are listed in
the parameter file. 
        They can also be brought online by issuing an ALTER ROLLBACK SEGMENT
segment_name ONLINE command.
        Prior to Oracle 9i, undo segments were named rollback segments and the
command has not changed. 
        Private undo segments are used for a single Database Instance.
 
Public Undo Segments:  These form a pool of undo segments available in a
database. 
        These are used with Oracle Real Application Clusters as a pool of undo
segments available to any of the Real Application Cluster instances. 
        You can learn more about public undo segments by studying the Oracle Real
Application Clusters and Administration manual.
 
Deferred Undo Segments:  These are maintained by the Oracle Server so a DBA
does not have to maintain them. 
        They can be created when a tablespace is brought offline (immediate,
temporary, or recovery).
        They are used for undo transactions when the tablespace is brought back
online. 
        They are dropped by the Oracle Server automatically when they are no longer
needed. 
 
 

Automatic Undo Management
 
The objective is a "set it and forget it" approach to Undo Management.
        Automatic Undo Management requires the creation of an Undo tablespace. 
        An auto-extending undo tablespace named UNDOTBS1 is automatically created
when you create the database with the Database Configuration Assistant (DBCA).
        Oracle allows a DBA to allocate one active Undo tablespace per Oracle
Instance. 
        The Oracle Server automatically maintains undo data in the Undo tablespace. 
        Oracle automatically creates, sizes, and manages undo segments.
 
Automatic Undo Segments are named with a naming convention
of:  _SYSSMUn_<generated number>$
For example, they may be
named:  _SYSSMU1_1872589076$ and _SYSSMU2_1517779068$, etc.
 
Configuration:  When a single Undo tablespace exists in a database:
        Automatic Undo Management is the default.
        With 11g, there is no need to set the UNDO_MANAGEMENT parameter in the
initialization to AUTO.
        Oracle will automatically use the single Undo Tablespace when in AUTO mode.
        If more than one Undo tablespace exists (so they can be switched if necessary,
but only one can be active), the UNDO_TABLESPACE parameter in the
initialization file is used to specify the name of the Undo tablespace to be used by
Oracle Server when an Oracle Instance starts up.
        If no Undo tablespace exists, Oracle will start up a database and use
the SYSTEM tablespace undo segment for undo.
         An alert message will be written to the alert file to warn that no Undo tablespace
is available.
        If you use the UNDO_TABLESPACE parameter and the tablespace referenced
does not exist, the STARTUP command will fail.
 
Examples:
UNDO_MANAGEMENT=AUTO  or  UNDO_MANAGEMENT=MANUAL
 
UNDO_TABLESPACE=UNDO01
 
        You cannot dynamically change UNDO_MANAGEMENT from AUTO to
MANUAL or vice-versa.
         When in MANUAL mode, the DBA must create and manage undo segments for
the database. 
 
You can alter the system to change the active Undo tablespace that is in use as
follows:
 
ALTER SYSTEM SET undo_tablespace = UNDO02;
 
 
Creating the Undo Tablespace:  There are two methods of creating an undo
tablespace manually. 
 
1.   Create one by specifying a clause in the CREATE DATABASE command.  
 
CREATE DATABASE USER350
  (... more clauses go here ...)
 UNDO TABLESPACE undo01
  DATAFILE '/u02/student/dbockstd/oradata/USER350undo01.dbf'
    SIZE 20M AUTOEXTEND ON NEXT 1M MAXSIZE 50M
 (... more clauses follow the UNDO TABLESPACE clause here ...)
 
        In the example command shown above, the Undo tablespace is
named UNDO01. 
        If the Undo tablespace cannot be created, the entire CREATE
DATABASE command fails.
 
 
2.   You can also create an Undo tablespace with the CREATE UNDO
TABLESPACE command.
 
CREATE UNDO TABLESPACE undo02
  DATAFILE '/u02/student/dbockstd/oradata/USER350undo02.dbf'
  SIZE 25M REUSE AUTOEXTEND ON;
 
        This is the same as the normal CREATE TABLESPACE command but with
the UNDO keyword added.
 
 

Altering and Dropping an Undo Tablespace


 
The ALTER TABLESPACE command can be used to modify an Undo
tablespace.  For example, the DBA may need to add an additional datafile to the Undo
tablespace.
 
ALTER TABLESPACE undo01
  ADD DATAFILE '/u02/student/dbockstd/oradata/USER350undo02.dbf'
  SIZE 30M REUSE AUTOEXTEND ON;
 
The DBA can also use the following clauses: 
        RENAME
        DATAFILE [ONLINE | OFFLINE]
        BEGIN BACKUP
        END BACKUP
 
Use the ALTER SYSTEM command to switch between Undo tablespaces – remember
only one Undo tablespace can be active at a time.
 
ALTER SYSTEM SET UNDO_TABLESPACE=undo03;
 
If any of the following conditions exist for the tablespace being switched to, an error is
reported and no switching occurs:
        The tablespace does not exist
        The tablespace is not an undo tablespace
        The tablespace is already being used by another instance (in an Oracle RAC
environment only)
 
The database is online while the switch operation is performed, and user transactions
can be executed while this command is being executed.
        When the switch operation completes successfully, all transactions started after
the switch operation began are assigned to transaction tables in the new undo
tablespace.
        The switch operation does not wait for transactions in the old undo tablespace to
commit.
        If there are any pending transactions in the old undo tablespace, the old undo
tablespace enters into a PENDING OFFLINE mode (status).
        In this mode, existing transactions can continue to execute, but undo records for
new user transactions cannot be stored in this undo tablespace.
 
 
The DROP TABLESPACE command can be used to drop an Undo tablespace that is
no longer needed – it cannot be an active undo tablespace.
 
DROP TABLESPACE undo02
  INCLUDING CONTENTS AND DATAFILES;
 
        The Undo tablespace to be dropped cannot be in use. 

        The clause INCLUDING CONTENTS AND DATAFILES causes the contents
(segments) and datafiles at the operating system level to be deleted.
        If it is active, you must switch to a new Undo tablespace and drop the old one
only after all current transactions are complete. 
        The following query will display any active transactions.  The PENDING
OFFLINE status indicates that the Undo segment within the Undo tablespace
has active transactions.  There are no active transactions when the query returns
no rows. 
 
SELECT a.name, b.status
FROM v$rollname a, v$rollstat b
WHERE a.name IN (SELECT segment_name
                 FROM dba_segments
                 WHERE tablespace_name = 'UNDOTBS1')
      AND a.usn = b.usn;
 
NAME                   STATUS
---------------------- ----------
_SYSSMU1_1872589076$   ONLINE
_SYSSMU2_1517779068$   ONLINE
_SYSSMU3_1524324367$   ONLINE
_SYSSMU4_2700621624$   ONLINE
_SYSSMU5_709693897$    ONLINE
_SYSSMU6_3285739792$   ONLINE
_SYSSMU7_1962453367$   ONLINE
_SYSSMU8_4260361871$   ONLINE
_SYSSMU9_2502292647$   ONLINE
_SYSSMU10_2550878863$  ONLINE
 
10 rows selected.
 
 

Other Undo Management Parameters


 
Older application programs may have programming code (PL/SQL) that use the SET
TRANSACTION USE ROLLBACK SEGMENT statement to specify a specific rollback
segment to use when processing large, batch transactions.  If such a program has
not been modified for Automatic Undo Management, this command would normally
return an Oracle error:  ORA-30019:  Illegal rollback segment operation in
Automatic Undo mode. 
 
You can suppress these errors by specifying
the UNDO_SUPPRESS_ERRORS parameter in the initialization file with a value
of TRUE.
 
A DBA can also determine how long to retain undo data to provide consistent reads.  If
undo data is not retained long enough, and a system user attempts to access data that
should be located in an Undo Segment, then an Oracle error:  ORA-1555 snapshot
too old error is returned – this means read-consistency could not be achieved by
Oracle.
 

Undo Retention
After a transaction is committed, undo data is no longer needed for rollback or
transaction recovery purposes. 
        However, for consistent read purposes, long-running queries may require this old
undo information for producing older images of data blocks.
        Several Oracle Flashback features can also depend upon the availability of older
undo information.
        For these reasons, it is desirable to retain the old undo information for as long as
possible.
 
Automatic undo management always uses a specified undo retention period.

         This is the minimum amount of time that Oracle Database attempts to retain old
undo information before overwriting it.
        Old (committed) undo information that is older than the current undo retention
period is said to be expired and its space is available to be overwritten by new
transactions.
        Old undo information with an age that is less than the current undo retention
period is said to be unexpired and is retained for consistent read and Oracle
Flashback operations.
 
Oracle Database automatically tunes the undo retention period based on undo
tablespace size and system activity.
        You can optionally specify a minimum undo retention period (in seconds) by
setting the UNDO_RETENTION initialization parameter.
        The exact impact of this parameter on undo retention is as follows:
o   The UNDO_RETENTION parameter is ignored for a fixed size undo
tablespace. The database always tunes the undo retention period for the best
possible retention, based on system activity and undo tablespace size.
o   For an undo tablespace with the AUTOEXTEND option enabled, the database
attempts to honor the minimum retention period specified
by UNDO_RETENTION.
o   When space is low, instead of overwriting unexpired undo information, the
tablespace auto-extends.
o   If the MAXSIZE clause is specified for an auto-extending undo tablespace,
when the maximum size is reached, the database may begin to overwrite
unexpired undo information.
 
If Undo Segment data is to be retained a long time, then the Undo tablespace will need
larger datafiles. 
        The UNDO_RETENTION parameter defines the period in seconds.
        You can set this parameter in the initialization file or you can dynamically alter it
with the ALTER SYSTEM command:
 
ALTER SYSTEM SET UNDO_RETENTION = 43200; 
 
        The above command will retain undo segment data for 720 minutes (12 hours) –
the default value is 900 seconds (15 minutes).
        This sets the minimum undo retention period.
        If the tablespace is too small to store Undo Segment data for 720 minutes, then
the data is not retained – instead space is recovered by the Oracle Server to be
allocated to new active transactions.
 
Oracle 11g automatically tunes undo retention by collecting database use statistics
whenever AUTOEXTEND is on.
        Specifying UNDO_RETENTION sets a low threshold so that undo data is
retained at a minimum for the threshold value specified, providing there is
sufficient Undo tablespace capacity.
        The RETENTION GUARANTEE clause of the CREATE UNDO
TABLESPACE statement guarantees retention of unexpired Undo data, but may
cause DML operations to fail if the Undo tablespace is not large enough,
because unexpired Undo data segments are never overwritten.
        The TUNED_UNDORETENTION column of the V$UNDOSTAT dynamic
performance view can be queried to determine the amount of time Undo data is
retained for an Oracle database.
        Query the RETENTION column of the DBA_TABLESPACES view to determine
the setting for the Undo tablespace – possible values
are GUARANTEE, NOGUARANTEE, and NOT APPLY (for tablespaces other
than Undo).
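The two dictionary lookups described above can be sketched as short queries.  The column names are as described in these notes; the output will of course vary by database.

```sql
-- Auto-tuned undo retention (in seconds) for recent 10-minute intervals
SELECT TO_CHAR(begin_time, 'hh24:mi') begin_time, tuned_undoretention
FROM   v$undostat;

-- Retention setting for each tablespace:
-- GUARANTEE, NOGUARANTEE, or NOT APPLY
SELECT tablespace_name, retention
FROM   dba_tablespaces;
```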
 
Sizing and Monitoring an Undo Tablespace
 
Three types of Undo data exist in an Undo tablespace:
        Active (unexpired) – these segments are needed for read consistency even after
a transaction commits.
        Expired – these segments store undo data that has been committed and all
queries for the data are complete and the undo retention period has been
reached.
        Unused – these segments have space that has never been used.
 
The minimum size for an Undo tablespace is enough space to hold before-image
versions of all active transactions that have not been committed or rolled back.
 
When space is inadequate to support changes to uncommitted transactions for rollback
operations, the error message ORA-30036: unable to extend segment
by space_qty in undo tablespace tablespace_name is displayed, and the DBA must
increase the size of the Undo tablespace.
 
Initial Size – enable automatic extension (use the AUTOEXTEND ON clause with
the CREATE TABLESPACE or ALTER TABLESPACE commands) for Undo
tablespace datafiles so they automatically increase in size as more Undo space is
needed.
        After the system stabilizes, if you decide to use a fixed-size Undo tablespace,
then Oracle recommends setting the Undo tablespace maximum size to
about 10% more than the current size.
        The Undo Advisor software available in Oracle Enterprise Manager can be
used to calculate the amount of Undo retention disk space a database needs.
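As a sketch of the sizing recommendations above, an Undo tablespace might be created with AUTOEXTEND enabled and a capped maximum size.  The datafile name and size values here are illustrative assumptions, not values from these notes.

```sql
CREATE UNDO TABLESPACE undotbs2
    DATAFILE '/u01/oradata/dborcl/undotbs2_01.dbf' SIZE 500M
    AUTOEXTEND ON NEXT 50M MAXSIZE 5G;  -- grows as needed, up to 5 GB
```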
 
 
Undo Data Statistics
 
The V$UNDOSTAT view displays statistical data to show how well a database is
performing. 
        Each row in the view represents statistics collected for a 10-minute interval. 
        You can use this to estimate the amount of undo storage space needed for the
current workload. 
        If workloads vary considerably throughout the day, then a DBA should conduct
estimations during peak workloads.
        The column SSOLDERRCNT displays the number of queries that failed with
a "Snapshot too old" error.
 
SELECT TO_CHAR(end_time, 'yyyy-mm-dd hh24:mi') end_time,
undoblks, ssolderrcnt
FROM v$undostat;
 
END_TIME           UNDOBLKS SSOLDERRCNT
---------------- ---------- -----------
. . .
. . .
2013-06-07 01:13          0           0
2013-06-07 01:03         65           0
2013-06-07 00:53          0           0
2013-06-07 00:43          1           0
 
576 rows selected.
 
In order to size an Undo tablespace, a DBA needs three pieces of information.  Two
are obtained from the initialization
file:  UNDO_RETENTION and DB_BLOCK_SIZE.  The third piece of information is
obtained by querying the database:  the number of undo blocks generated per
second.
 
 
SELECT (SUM(undoblks))/SUM((end_time-begin_time) * 86400)
FROM v$undostat;
 
(SUM(UNDOBLKS))/SUM((END_TIME-BEGIN_TIME)*86400)
------------------------------------------------
                                      .063924708
 
In the query above, the END_TIME and BEGIN_TIME columns are DATE data, and
subtracting them yields a result in days – converting days to seconds is done by
multiplying by 86,400, the number of seconds in a day.  This value needs to be
multiplied by the size of an undo block – the same size as the database block defined
by the DB_BLOCK_SIZE parameter.
 
The number of bytes of Undo tablespace storage needed is calculated by this query:
 
SELECT (UR * (UPS * DBS)) + (DBS * 24) As "Bytes"
FROM (SELECT value As UR
      FROM v$parameter
      WHERE name = 'undo_retention'),
      (SELECT (SUM(undoblks)/SUM(((end_time -
              begin_time) * 86400))) As UPS
       FROM v$undostat),
      (SELECT value As DBS
       FROM v$parameter
       WHERE name = 'db_block_size');
 
     Bytes
----------
668641.879
 
Convert this figure to megabytes of storage by dividing by 1,048,576 (the number of
bytes in a megabyte).  The Undo tablespace needs to be about 0.64 MB according to
this calculation, although this is because the sample database has very few
transactions.
 
 
Undo Quota
 
An object called a resource plan can be used to group users and place limits on the
amount of resources that can be used by a given group. 
        This may become necessary when long transactions or poorly written
transactions consume limited database resources. 
        If the database has no resource bottlenecks, then allocating quotas can be
ignored.
 
Sometimes undo data space is a limited resource.  A DBA can limit the amount of undo
data space used by a group by setting the UNDO_POOL parameter which defaults
to unlimited. 
        If the group exceeds the quota, then new transactions are not processed until old
ones complete. 
        The group members will receive the ORA-30027: Undo quota violation – failed
to get %s (bytes) error message. 
 
Resource plans are covered in more detail in a later set of notes.
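As a preview, the UNDO_POOL limit described above is set through a resource plan directive.  This is a minimal sketch only – the plan name, consumer group, and quota value are illustrative assumptions (LOW_GROUP is a default consumer group, and the undo_pool quota is specified in kilobytes).

```sql
BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
    DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'UNDO_LIMIT_PLAN',
        comment => 'Limit undo space per consumer group');
    DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'UNDO_LIMIT_PLAN',
        group_or_subplan => 'LOW_GROUP',
        comment          => 'Cap undo usage for this group',
        undo_pool        => 10240);   -- quota in kilobytes
    DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```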
 
 
Undo Segment Information
 
The following views provide information about undo segments:
        DBA_ROLLBACK_SEGS
        V$ROLLNAME  -- the dynamic performance views only show data for online
segments.
        V$ROLLSTAT
        V$UNDOSTAT
        V$SESSION
        V$TRANSACTION
 
This query lists information about undo segments in the
SIUE DBORCL database.  Note the segment in the SYSTEM tablespace and the
remaining segments in the UNDO tablespace.
 
COLUMN segment_name FORMAT A15;
COLUMN owner FORMAT A10;
COLUMN tablespace_name FORMAT A15;
COLUMN status FORMAT A10;
 
SELECT segment_name, owner, tablespace_name, status
FROM dba_rollback_segs;
 
SEGMENT_NAME                   OWNER      TABLESPACE_NAME STATUS
------------------------------ ---------- ---------------
----------
SYSTEM                         SYS        SYSTEM          ONLINE
_SYSSMU1_1872589076$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU2_1517779068$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU3_1524324367$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU4_2700621624$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU5_709693897$            PUBLIC     UNDOTBS1        ONLINE
_SYSSMU6_3285739792$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU7_1962453367$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU8_4260361871$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU9_2502292647$           PUBLIC     UNDOTBS1        ONLINE
_SYSSMU10_2550878863$          PUBLIC     UNDOTBS1        ONLINE
 
11 rows selected.
 
The owner column above specifies the type of undo segment.  SYS means a private
undo segment.
 
This query is a join of the V$ROLLSTAT and V$ROLLNAME views to display statistics
on undo segments currently in use by the Oracle Instance.  The usn column is a
sequence number.
 
COLUMN name FORMAT A22;
 
SELECT n.name, s.extents, s.rssize, s.hwmsize, s.xacts, s.status
FROM v$rollname n, v$rollstat s
WHERE n.usn = s.usn;
 
NAME                      EXTENTS     RSSIZE    HWMSIZE      XACTS STATUS
---------------------- ---------- ---------- ---------- ----------
----------
SYSTEM                          6     385024     385024          0 ONLINE
_SYSSMU1_1872589076$            4    2220032    3268608          0 ONLINE
_SYSSMU2_1517779068$            3    1171456   10608640          0 ONLINE
_SYSSMU3_1524324367$            4    2220032    3268608          0 ONLINE
_SYSSMU4_2700621624$            3    1171456   11657216          0 ONLINE
_SYSSMU5_709693897$             4    2220032    3137536          0 ONLINE
_SYSSMU6_3285739792$            4    2220032    9560064          0 ONLINE
_SYSSMU7_1962453367$            4    2220032    2220032          0 ONLINE
_SYSSMU8_4260361871$           17    2088960    3268608          0 ONLINE
_SYSSMU9_2502292647$            3    1171456    3268608          0 ONLINE
_SYSSMU10_2550878863$          13   11657216   11657216          0 ONLINE
 
11 rows selected.
 
o   EXTENTS = number of extents in the rollback segment.
o   RSSIZE = rollback segment size (bytes)
o   HWMSIZE = high water mark of the rollback segment size (bytes)
o   XACTS = number of active transactions (notice in the above there are none).
 
This query checks the use of an undo segment by any currently active transaction by
joining the V$TRANSACTION and V$SESSION views.
 
SELECT s.username, t.xidusn, t.ubafil, t.ubablk, t.used_ublk
FROM v$session s, v$transaction t
WHERE s.saddr = t.ses_addr;
 
 
USERNAME                           XIDUSN     UBAFIL     UBABLK  USED_UBLK
------------------------------ ---------- ---------- ---------- ----------
DBOCK                                   8          3       3246          1
 
o   XIDUSN = Undo segment number
o   UBAFIL = Undo block address (UBA) filenum
o   UBABLK = UBA block number
o   USED_UBLK = Number of undo blocks used
 
 
Flashback Features
 
Flashback features allow DBAs and users to access database information from a
previous point in time.
        Undo information must be available so the retention period is important.
        Example:  If an application requires a version of the database that is up to 12
hours old, the UNDO_RETENTION must be set to 43200.
        The RETENTION GUARANTEE clause needs to be specified.
 
The Oracle Flashback Query option is supplied through the DBMS_FLASHBACK
package at the session level. 
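A minimal sketch of session-level use follows: DBMS_FLASHBACK.ENABLE_AT_TIME moves the session's view of the data back to a point in time, and DISABLE returns it to the present.  The timestamp and the employee query are illustrative assumptions.

```sql
BEGIN
    DBMS_FLASHBACK.ENABLE_AT_TIME(
        TO_TIMESTAMP('2013-06-13 09:30:00', 'YYYY-MM-DD HH24:MI:SS'));
END;
/

-- Queries in this session now see data as of the specified time.
SELECT * FROM employee WHERE name = 'SUE';

BEGIN
    DBMS_FLASHBACK.DISABLE;
END;
/
```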
 
At the object level, Flashback Query uses the AS OF clause to specify the point in time
for which data is viewed.
 
Flashback Version Query enables users to query row history through use of
a VERSIONS clause of a SELECT statement. 
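A sketch of the VERSIONS clause – the VERSIONS_STARTTIME and VERSIONS_OPERATION pseudocolumns show when each row version was created and by what kind of DML; the time window and the salary column are assumptions for illustration.

```sql
SELECT versions_starttime, versions_operation, name, salary
FROM   employee
       VERSIONS BETWEEN TIMESTAMP
           TO_TIMESTAMP('2013-06-13 08:00:00', 'YYYY-MM-DD HH24:MI:SS')
           AND SYSTIMESTAMP
WHERE  name = 'SUE';
```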
 
Example:  This SELECT statement retrieves the state of an employee record for an
employee named Sue at 9:30 AM on June 13, 2013 because it was discovered that
Sue's employee record was erroneously deleted.
 
SELECT * FROM employee AS OF TIMESTAMP
   TO_TIMESTAMP('2013-06-13 09:30:00', 'YYYY-MM-DD HH:MI:SS')
   WHERE name = 'SUE';
 
This INSERT statement restores Sue's employee table information.

INSERT INTO employee
  (SELECT * FROM employee AS OF TIMESTAMP
   TO_TIMESTAMP('2013-06-13 09:30:00', 'YYYY-MM-DD HH:MI:SS')
     WHERE name = 'SUE');
 
Other information about Flashback features will be covered in other notes covering the
topic of database recovery.
 
Module 11 – Tables and Clusters
 
Objectives
 
This lesson focuses on the creation and modification of tables as the primary object
used to store database information. 
        Learn the Oracle data types.
        Learn the structure of rows in tables, and the management of storage structures
in a table. 
        Reorganize, truncate and drop tables including dropping table columns.
        Create tables with constraints. 
        Create clusters.   
 
A separate set of notes covers index creation applicable to tables. 
 
Introduction
 
Tables are the most basic data storage object in Oracle.  Data are stored in rows and
columns. 
        Each column has a name and data type.
        Width is specified, e.g., VARCHAR2(25), or precision and scale,
e.g., NUMBER(7,2), or may be predetermined as with the DATE data type.
        Columns may have defined integrity constraints
        Virtual columns may be defined - derived from an expression.
        Data encryption can be transparent.
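As a sketch of the virtual-column feature noted above, a column can be derived from an expression over other columns; the table and column names here are invented for illustration.

```sql
CREATE TABLE pay_test (
    emp_id      NUMBER(6),
    monthly_sal NUMBER(8,2),
    annual_sal  AS (monthly_sal * 12)   -- virtual column, computed on query
);
```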
 
There are four types of tables used to store data in an Oracle database.
 
Tables:  Also called a regular table or ordinary table, this is the default table type
and the primary focus of this instructional module.  Rows are stored to tables of this
type in any order so this is also referred to as a heap structure, which is an unordered
collection of data rows.  A table has a single segment.
 
Partitioned Tables:  This table type is used to build scalable applications.
        A partitioned table has one or more partitions.  Each partition stores rows that
are partitioned by either range, hash, composite, or list partitioning.
        Each partition is a separate segment, sometimes in different tablespaces (thus
the scalable application).
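A minimal sketch of range partitioning – the table, partition names, and tablespaces are illustrative assumptions, with each partition stored as its own segment.

```sql
CREATE TABLE sale (
    sale_id   NUMBER,
    sale_date DATE )
PARTITION BY RANGE (sale_date) (
    PARTITION sale_2012 VALUES LESS THAN
        (TO_DATE('2013-01-01', 'YYYY-MM-DD')) TABLESPACE data01,
    PARTITION sale_2013 VALUES LESS THAN (MAXVALUE) TABLESPACE data02 );
```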
 
Index-Organized Tables (IOT):   The IOT structure is a variant of a B-tree structure. 
        Data is stored in an index-organized table in a B-tree index structure in primary
key sorted order. 
        Instead of maintaining a table in one storage location and the primary key index
in another location, each entry in the B-tree also stores non-key column values
as well, so the nodes in the tree can become quite large. 
        An overflow segment may exist for the storage of longer row lengths.
        These tables provide fast key-based access for queries involving exact matches
and range searches.
        Storage requirements are reduced as the primary key is only stored in the table,
not duplicated in a separate index.
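The IOT structure described above is requested with the ORGANIZATION INDEX clause; a primary key is required.  This is a sketch only – the table name and the optional overflow tablespace are assumptions.

```sql
CREATE TABLE course_iot (
    course_number CHAR(7) PRIMARY KEY,  -- rows stored in primary key order
    description   VARCHAR2(40) )
ORGANIZATION INDEX
OVERFLOW TABLESPACE data01;  -- optional segment for longer rows
```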
 
Clustered Tables:  This table type is comprised of a single or group of tables that
share the same data block, thus an individual data block can have rows stored in it
from more than one table. 
        The tables must share a common column(s)  that stores a common domain of
values called the cluster key.
        Clustering is transparent to the applications that query and manipulate the data
for the clustered tables.
        The cluster key does not have to be the same as the primary key – it may be the
same or a different column set.
        Clusters are created to improve performance.  Random access to individual
record sets in a cluster is faster, but table scans are slower.
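The steps above can be sketched for an indexed cluster: create the cluster on the cluster key, create the required cluster index, then create tables in the cluster.  Names and the SIZE value are illustrative assumptions.

```sql
CREATE CLUSTER course_cluster (course_number CHAR(7))
    SIZE 512 TABLESPACE data01;  -- expected bytes per cluster key value

-- An indexed cluster needs a cluster index before DML is allowed
CREATE INDEX course_cluster_idx ON CLUSTER course_cluster;

CREATE TABLE course_c (
    course_number CHAR(7),
    description   VARCHAR2(40) )
CLUSTER course_cluster (course_number);
```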
 
 
Data Types (Built-In)
 
Oracle uses several built-in data  types to store scalar data, collections, and
relationships.  These data types vary a bit from the ANSI standard for Structured Query
Language – the Oracle data types encompass the ANSI standard.
 
Data Type – Definition/Use

CHAR, NCHAR – Fixed-length character data types, stored with padded blanks.  NCHAR
is a Globalization Support data type that enables the storage of either fixed-width or
variable-width character sets.  The maximum size is determined by the number of bytes
required to store one character, with an upper limit of 2,000 bytes per row.  The default
is one character or one byte, depending on the character set.

VARCHAR2, NVARCHAR2 – Variable-length character data types use only the number
of bytes needed to store the actual column value, and can vary in size for each row, up
to 4,000 bytes.

NUMBER – Numbers in an Oracle database are always stored as variable-length data
and can store up to 38 significant digits.  Numeric data types require one byte for the
exponent, one byte for every two significant digits in the mantissa, and one byte for
negative numbers if the number of significant digits is less than 38.

DATE – The Oracle server stores dates in fixed-length fields of seven bytes.  An
Oracle DATE always includes the time.

TIMESTAMP – Stores the date and time including fractional seconds up to nine
decimal places.  TIMESTAMP WITH TIME ZONE and TIMESTAMP WITH LOCAL
TIME ZONE can use time zones to factor items such as daylight saving
time.  TIMESTAMP and TIMESTAMP WITH LOCAL TIME ZONE can be used in
primary keys; TIMESTAMP WITH TIME ZONE cannot.

RAW – Enables the storage of small binary data.  The Oracle server does not perform
character set conversion when RAW data is transmitted across machines in a network
or when RAW data is moved from one database to another using Oracle utilities.  RAW
columns use only the number of bytes needed to store the actual column value, which
can vary in size for each row, up to 2,000 bytes.

LONG, LONG RAW, and LOB – CLOB and LONG store large fixed-width character
data.  NCLOB stores large fixed-width national character set data.  BLOB and LONG
RAW store unstructured data.  The LONG and LONG RAW data types were previously
used for unstructured data, such as binary images, documents, or geographical
information, and are provided primarily for backward compatibility; they are superseded
by the LOB data types.  LOB data types are distinct from LONG and LONG RAW, and
they are not interchangeable: LOBs do not support the LONG application programming
interface (API), and vice versa.

ROWID and UROWID – ROWID and UROWID can be queried along with other
columns in a table.  ROWID is a unique identifier for each row in the database and is
not stored explicitly as a column value.  Although the ROWID does not directly give the
physical address of a row, it can be used to locate the row, and it provides the fastest
means of accessing a row in a table.  ROWIDs are stored in indexes to specify rows
with a given set of key values.  UROWID supports ROWIDs of foreign (non-Oracle)
tables and can store all kinds of ROWIDs.  For example, a UROWID data type is
required to store a ROWID for rows stored in an index-organized table (IOT).  The
value of the parameter COMPATIBLE must be set to 8.1 or higher to use UROWID.

VARRAY – Varying arrays (VARRAYs) are useful to store lists that contain a small
number of elements, such as phone numbers for a customer.  VARRAYs have the
following characteristics:  an array is an ordered set of data elements; all elements of a
given array are of the same data type; each element has an index, a number
corresponding to the position of the element in the array; the number of elements in an
array determines the size of the array.  The Oracle server allows arrays to be of
variable size, which is why they are called VARRAYs, but the maximum size must be
specified when declaring the array type.

NESTED TABLES – Nested tables provide a means of defining a table as a column
within a table.  They can be used to store sets that may have a large number of
records, such as the items in an order.  Nested tables generally have the following
characteristics:  a nested table is an unordered set of records or rows; the rows in a
nested table have the same structure; rows in a nested table are stored separately
from the parent table with a pointer from the corresponding row in the parent table;
storage characteristics for the nested table can be defined by the database
administrator; there is no predetermined maximum size for a nested table.

REFs – Relationship types are used as pointers within the database; their use requires
the Objects option.  As an example, each item that is ordered could point to or
reference a row in the PRODUCTS table, without having to store the product code.

XMLType – Stores XML data.
 
Oracle also allows a system user (programmer usually) to define abstract data types
for use within programming applications.
 
ROWID Format
 
The ROWID data type is used to store internal row identifiers.  The format for ROWID
columns is shown in the figures below.
 
Extended ROWID Format – used in partitioned tables.
 
 
Restricted ROWID Format – used in non-partitioned tables.
 
 
Data Object Number:  Assigned to each data object (table or index) when it is created
and is unique within the database.
 
Relative File Number:  Unique to each file within a tablespace.
 
Block Number:  The position of the block containing the row within the file.
 
Row Number:  The position of the row directory stored in the block header.
 
SELECT DepartmentNumber As Dno, DepartmentName, Rowid
FROM Department;
 
       DNO DEPARTMENTNAME            ROWID
---------- ------------------------- ------------------
         1 Medical Surgical Ward 1   AAAM5yAAEAAAAB0AAA
         2 Radiology                 AAAM5yAAEAAAAB0AAB
         3 Emergency-Surgical        AAAM5yAAEAAAAB0AAC
         4 Oncology Ward             AAAM5yAAEAAAAB0AAD
         5 Critical Care-Cardiology  AAAM5yAAEAAAAB0AAE
         6 Pediatrics-Gynecology     AAAM5yAAEAAAAB0AAF
         7 Pharmacy Department       AAAM5yAAEAAAAB0AAG
         8 Admin/Labs                AAAM5yAAEAAAAB0AAH
         9 OutPatient Clinic         AAAM5yAAEAAAAB0AAI
 
9 rows selected.
 
Here, AAAM5y is the data object number, AAE is the relative file number, AAAAB0 is
the block number, and AAA, AAB, and AAC are the row numbers for departments #1,
#2, and #3, respectively.
 
Row Structure
 
Earlier we examined the structure of a row in a database block.  We review the
structure here with the figure shown below. 
 
 
Each row has a row header (not shown above) that stores the number of columns in a
row, any information about chained columns, and the row lock status.
 
Columns are stored in the order in which they were defined and trailing NULL columns
are not stored.  Each column stores the column length followed immediately by the
column value.  Adjacent rows do not have any space in between them.  Each row has
a slot in the row directory in the block header that points to the row.
 
 
Creating a Regular Table
 
The CREATE TABLE command creates relational (regular) tables and object
tables.  An object table differs from a regular table in that it can use an object type to
define a column. 
 
We will base our discussion of creating tables on the following Entity-Relationship
diagram as we will include the clauses needed to enforce referential integrity among
the rows stored in the tables.
 
 
 
The ER diagram depicts minimal non-key data in order to simplify the discussion
without any loss of generality to a real-world situation. The following rules apply:
        The primary key identifier for the COURSE entity is the CourseNumber attribute.
        The primary key identifier for the SECTION entity is the concatenation of
the SectionNumber, Term, and Year attributes, each course being assigned a
unique SectionNumber within a specific Term and Year.
        The primary key identifier of the STUDENT entity is the SSN (social security
number) attribute.
 
Types of Data Integrity
 
Creating tables requires a basic understanding of the various types of data integrity
that need to be enforced.  Data integrity is often enforced by specifying constraints.
 
Each constraint is named using some type of naming convention. 
        Use a suffix or prefix (such as PK for primary key) to denote the type of
constraint. 
o   PK stands for Primary Key constraint.
o   NN stands for Not Null constraint.
o   FK stands for Foreign Key constraint.
o   CK stands for a Check constraint.
o   UN stands for a Unique constraint.
        If you don't name constraints, then Oracle assigns default names to the
constraints that are not very meaningful, thus constraints are almost always
named by the DBA in order that they may be examined later by using the data
dictionary. 
 
Nulls.  A Null rule defined on a column allows or disallows inserts and/or updates of
rows containing a Null value.  You can specify Not Null as an integrity constraint for a
column.  If a column is allowed to have a null value, you simply do not use an integrity
constraint for that column.  Example: 
 
    Student_Name       VARCHAR(50)
        CONSTRAINT Stu_Name_NN NOT NULL,
 
 
Unique Column Values.  This constraint restricts row insertion and updates to unique
values for the specified column or set of columns.  Example: 
 
    Student_ID        CHAR(8)
        CONSTRAINT StudentID_UN UNIQUE,
 
 
Primary Key Values.  This constraint specifies uniqueness for the primary key and
can be applied to one column or a set of columns.  Example – this example creates
the constraint and a primary key index stored in the tablespace named DATA_INDEX,
and the index has a PCTFREE of 5%:

SSN               CHAR(9)
    CONSTRAINT Student_PK PRIMARY KEY
       USING INDEX TABLESPACE Data_Index PCTFREE 5,
 
Check Values.  This is a constraint on a column or set of columns that enforces a
specified condition. 
        Example – the Grade column must have one of the values or must be NULL: 
 
Grade              CHAR(2)
    CONSTRAINT Grade_Check_CK
        CHECK (Grade IN ('A','B','C','D','E','WP')),
        Example – the Enrolled_Hours column must be less than 6:

Enrolled_Hours     NUMBER(2)
    CONSTRAINT Enrolled_Hours_CK
        CHECK (Enrolled_Hours < 6),

        A single column can have multiple Check constraints that reference the column
in its definition.
        There is no limit to the number of Check constraints that you can define on a
column.
        You must ensure that the constraints do not conflict with one another as Oracle
will not test for conflicts. 
        Example – the Grade column must have one of the values and must also
be NOT NULL.
 
Grade              CHAR(2)
    CONSTRAINT Grade_Check_CK
        CHECK (Grade IN ('A','B','C','D','E','WP'))
    CONSTRAINT Grade_NN NOT NULL,
 
        A Check integrity constraint on a column or set of columns requires that a
specified condition be true or unknown for every row of the table.  If a data
manipulation language (DML) statement results in the condition of
the Check constraint evaluating to FALSE, the statement is rolled back and the
row is not stored in the table because the data would violate
the Check constraint.
 
        Check constraints enable you to enforce very specific or sophisticated integrity
rules by specifying a check condition.  The condition of a Check constraint has
some limitations:
o   It must be a Boolean expression evaluated using the values in the row
being inserted or updated.
o   It cannot contain subqueries, sequences, the SQL functions SYSDATE,
UID, USER, or USERENV, or the pseudocolumns LEVEL or ROWNUM.
 
 
Referential Integrity.  Oracle supports a number of referential integrity rules to
guarantee that the value stored in a column in a table is a valid reference to an existing
row in another table.  That is, the row inserted must reference a valid row in another
table. 
 
For example, when storing a value for Course_Number in the SECTION table, the
database management system software (DBMS) will ensure that the value
for Course_Number exists within the referenced Course_Number column of
the COURSE table.  The SECTION table references the COURSE table.
 
Referential Integrity also permits foreign keys to have null values; however, this is
uncommon.  As an example, a student might have a Major and this would reference
the MAJORS Table (not shown in our ER diagram).  Some students, however, would
not have declared a Major.  If a foreign key is part of a primary key, then null values are
not allowed.
 
The referential integrity rules are as follows:
 
        Restrict.  This constraint disallows the update or deletion of referenced
data.  This rule is not enforceable by a clause that is used as part of a
CREATE TABLE command – enforce it by creating a trigger.
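A minimal sketch of enforcing Restrict with a trigger, using the COURSE and SECTION tables from this module – the trigger name and error message text are assumptions.

```sql
CREATE OR REPLACE TRIGGER course_restrict_delete
BEFORE DELETE ON course
FOR EACH ROW
DECLARE
    v_count NUMBER;
BEGIN
    -- Disallow deleting a course that is still referenced by sections
    SELECT COUNT(*) INTO v_count
    FROM   section
    WHERE  course_number = :OLD.course_number;
    IF v_count > 0 THEN
        RAISE_APPLICATION_ERROR(-20001,
            'Cannot delete a course that has sections.');
    END IF;
END;
/
```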
 
        Set Null.  This constraint ensures that any action that updates or deletes
referenced data causes associated data values in other tables to be set
to Null when an appropriate referenced value is no longer defined.  Example: 
 
/* Create Owner Table –
Testing Referential Integrity Clauses */
CREATE TABLE Course (
    Course_Number    CHAR(7)
        CONSTRAINT Course_PK PRIMARY KEY
            USING INDEX Tablespace Index01
            PCTFREE 5,
    Description      VARCHAR(40)
        CONSTRAINT Description_NN NOT NULL,
    Hours            Number(2)
        CONSTRAINT Hours_Less_12_CK
            CHECK (Hours < 12)
        CONSTRAINT Hours_Greater_Zero
            CHECK (Hours > 0)
    )
PCTFREE 5 PCTUSED 60
TABLESPACE DATA01 ;
 
INSERT INTO Course VALUES ('CMIS460','Advanced VB Programming',3);
INSERT INTO Course VALUES ('CMIS565','Oracle DBA',3);
 
The TABLESPACE clause specifies the tablespace where a table is created – here it
is DATA01.  If this clause is omitted, a table is created in the default tablespace of the
schema owner who creates the table.
 
The USING INDEX clause causes the index created for the primary key to be stored in
the tablespace named INDEX01.
 
The PCTFREE clause for the Index tablespace specifies the amount of free space left
in data blocks that store index entries.  The PCTFREE and PCTUSED clauses specify
storage parameters for the table.
 
 
/* Create Member Table */
CREATE TABLE Section (
    Section_Number  Number(6),
    Section_Term    CHAR(6)   CONSTRAINT Term_NN NOT NULL,
    Section_Year    Number(4) CONSTRAINT Year_NN NOT NULL,
    Course_Number   CHAR(7)   CONSTRAINT Section_To_Course_FK
        REFERENCES Course(Course_Number)
        ON DELETE SET NULL,
    Location        CHAR(10)  CONSTRAINT Location_NN NOT NULL,
  CONSTRAINT Section_PK
    PRIMARY KEY (Section_Number, Section_Term, Section_Year)
        USING INDEX Tablespace Index01
        PCTFREE 5
    )
PCTFREE 20 PCTUSED 65
TABLESPACE DATA01;
 
/* Insert valid rows into the member table for owner rows CMIS565 and CMIS460 */
INSERT INTO Section
    VALUES (111111,'Summer', 2012,'CMIS565','FH-3208');
INSERT INTO Section
    VALUES (222222,'Fall', 2012, 'CMIS565','FH-3103');
INSERT INTO Section
    VALUES (111112,'Fall', 2012, 'CMIS460','FH-3208');
INSERT INTO Section
    VALUES (111113,'Summer', 2012, 'CMIS460','FH-0301');
 
The Course_Number foreign key in SECTION table references the primary key
(Course_Number shown in parentheses) for the COURSE table.  The constraint is
named Section_To_Course_FK and enforced by the CONSTRAINT clause.
 
Note the use of the CONSTRAINT and PRIMARY KEY clauses to create
the concatenated primary key and the storage of the key in the Index01 tablespace.
 
 
/* Display Owner and Member rows */
SQL> SELECT * FROM Course;
 
COURSE_ DESCRIPTION                             HOURS
------- ---------------------------------- ----------
CMIS460 Advanced VB Programming                     3
CMIS565 Oracle DBA                                  3
 
SQL> SELECT * FROM Section;
 
SECTION_NUMBER SECTIO SECTION_YEAR COURSE_ LOCATION
-------------- ------ ------------ ------- ----------
        111111 Summer         2012 CMIS565 FH-3208
        222222 Fall           2012 CMIS565 FH-3103
        111112 Fall           2012 CMIS460 FH-3208
        111113 Summer         2012 CMIS460 FH-0301
 
/* Test what happens when Course CMIS460 is deleted.
   The Section table will retain the rows but the
   Foreign key column will be NULL */
 
DELETE FROM Course
    WHERE Course_Number = 'CMIS460';
 
SELECT * FROM Section;
SECTION_NUMBER SECTIO SECTION_YEAR COURSE_ LOCATION
-------------- ------ ------------ ------- ----------
        111111 Summer         2012 CMIS565 FH-3208
        222222 Fall           2012 CMIS565 FH-3103
        111112 Fall           2012         FH-3208
        111113 Summer         2012         FH-0301
 
        Set Default.  This rule causes associated foreign key values to be set to a
default value whenever the referenced data values are updated or deleted.  This
rule is not enforceable by a clause of the CREATE TABLE
command – enforce it by creating a trigger.
 
        Cascade.  When a referenced row is deleted, all dependent rows in other tables
are also deleted.  Likewise, when a key value is updated in a master table, the
corresponding foreign key values are updated in referenced tables.  Example –
this specification uses a constraint where the column SSN in the
"member" table references the primary key column in the STUDENT table: 
 
CONSTRAINT Enr_Stu_SSN_FK FOREIGN KEY (SSN)

    REFERENCES Student

        ON DELETE CASCADE,

 
        No Action.  This constraint differs from Restrict in that it disallows the update or
deletion of referenced data, but the rule is checked at the end of a statement or
transaction.  This is the default action used by Oracle – no rule is necessary.
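Because no CREATE TABLE clause enforces the Set Default rule, a trigger is needed.  The sketch below is illustrative, not part of the chapter's script; it assumes the SECTION and COURSE tables from this chapter and a hypothetical placeholder course 'CMIS000' that already exists in COURSE.

```sql
/* Hedged sketch: emulates ON DELETE SET DEFAULT for the
   Section-to-Course relationship.  Assumes a placeholder
   course row with Course_Number 'CMIS000' already exists. */
CREATE OR REPLACE TRIGGER Course_Set_Default_TRG
    BEFORE DELETE ON Course
    FOR EACH ROW
BEGIN
    UPDATE Section
        SET Course_Number = 'CMIS000'
        WHERE Course_Number = :OLD.Course_Number;
END;
/
```

Since the trigger fires before the parent row is deleted, the dependent SECTION rows are reassigned to the default course before any foreign key action takes effect.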
 
Testing Referential Integrity and Uniqueness Constraints for
the SECTION Table
 
Insert a good data row into the SECTION table.
 

INSERT INTO Section


    VALUES ('444444','Summer', 2012, 'CMIS460','FH-3208');
 
Insert a row with an invalid NULL Location field that violates
the Location_NN constraint.   Oracle rejects the insertion and rolls back the command,
then displays an appropriate error message.
 
INSERT INTO Section 
    VALUES ('555555', 'Fall', 2012, 'CMIS460', NULL);

ERROR at line 2:

ORA-01400: cannot insert NULL into ("DBOCK"."SECTION"."LOCATION")


 
Test the referential integrity constraint between the SECTION and COURSE tables by
attempting to insert a row into SECTION for a non-existent CMIS342
COURSE value.  Again Oracle rejects the insertion and displays an appropriate error
message.
 

INSERT INTO Section 


    VALUES ('555555', 'Fall', 2012, 'CMIS342', 'FH-3103');

ERROR at line 1:

ORA-02291: integrity constraint (DBOCK.SECTION_TO_COURSE_FK) violated – parent key not


found
 
Test the primary key constraint by attempting to insert a row with a duplicate primary
key.  Oracle rejects the insertion and displays an error message because a row with
primary key 111111 already exists.
 

INSERT INTO Section 
    VALUES ('111111', 'Summer', 2012, 'CMIS270', 'FH-3208');

ERROR at line 1:

ORA-00001: unique constraint (DBOCK.SECTION_PK) violated

Creating the STUDENT Master Table and Inserting Data


 
The COURSE and SECTION tables created thus far are master tables.
 
One additional master table is required, the STUDENT table.
 
The many-to-many ENROLL relationship will be implemented as a table with a
concatenated primary key from the SECTION and STUDENT tables.  This is
an intersection or association table.
 
We will assume that the University's inventory of courses is fairly stable, so the
PCTFREE values for the tables are very low.  
                                                                                                                               
This CREATE TABLE command creates the STUDENT table. 
 
CREATE TABLE Student (

    SSN                CHAR(9)

        CONSTRAINT Student_PK PRIMARY KEY

           USING INDEX Tablespace Index01


           PCTFREE 5,

    Student_Name       VARCHAR(50)

        CONSTRAINT Student_Name_NN NOT NULL,

    Account_Balance    NUMBER(7,2),

    Date_Birth         Date Default NULL

    )

PCTFREE 10 PCTUSED 40

Tablespace Data01;

 
Insert several good data rows into STUDENT.
 
INSERT INTO Student VALUES ('111111111', 
    'Charley Daniels',1200.00,'01-MAR-80' );

INSERT INTO Student VALUES ('222222222', 


    'Faith Hill',2400.00,'05-FEB-81'); 
 
Verify the creation of the tables and the indexes by querying the data dictionary.
 
COLUMN Table_Name FORMAT A14;

COLUMN Tablespace_Name FORMAT A15;

SELECT Table_Name, Tablespace_Name 
    FROM Tabs 
    WHERE Table_Name = 'STUDENT'

        Or Table_Name = 'COURSE'

        Or Table_Name = 'SECTION';

 
TABLE_NAME     TABLESPACE_NAME
-------------- ---------------
COURSE         DATA01
SECTION        DATA01
STUDENT        DATA01
 
COLUMN Index_Name FORMAT A15;

SELECT INDEX_NAME, INDEX_TYPE, TABLE_NAME, UNIQUENESS

    FROM Dba_Indexes
    WHERE Table_Name = 'STUDENT'

        Or Table_Name = 'COURSE'

        Or Table_Name = 'SECTION';

 
 
INDEX_NAME      INDEX_TYPE     TABLE_NAME     UNIQUENES
--------------- -------------  -------------- ---------
STUDENT_PK      NORMAL         STUDENT        UNIQUE
COURSE_PK       NORMAL         COURSE         UNIQUE
SECTION_PK      NORMAL         SECTION        UNIQUE
 
 

Creating the ENROLL Table and Testing Constraints


 
Next create the ENROLL table which implements the many-to-many relationship
between STUDENT and SECTION. 
 
Note the column constraint for the Grade column, the table constraint used to specify
the composite primary key, and the referential integrity constraint enforced through
the FOREIGN KEY clauses.  This provides an alternative form of Foreign Key
enforcement that deletes related rows if the related STUDENT or SECTION rows are
deleted.
 
Also note the use of the ON DELETE CASCADE clause. 
 
CREATE TABLE Enroll (

    SSN                CHAR(9),

    Section_Number     NUMBER(6),

    Enroll_Term        CHAR(6),

    Enroll_Year        NUMBER(4),

    Grade              CHAR(2)

        CONSTRAINT Grade_Check_CK

            CHECK (Grade IN

                ('A','B','C','D','E','WP')), 
    CONSTRAINT Enr_Stu_SSN_FK FOREIGN KEY (SSN)

        REFERENCES Student

            ON DELETE CASCADE,

    CONSTRAINT Enr_Section_Number_FK

        FOREIGN KEY (Section_Number,

            Enroll_Term, Enroll_Year)

        REFERENCES Section

            ON DELETE CASCADE,

    CONSTRAINT Enroll_PK

        PRIMARY KEY (SSN, Section_Number,

            Enroll_Term, Enroll_Year)

        USING INDEX Tablespace Index01

        PCTFREE 5

    )

PCTFREE 30 PCTUSED 65

Tablespace Data01;

Next we insert both valid and invalid rows in order to test the various constraints for
the ENROLL table.  The first INSERT command inserts a valid row with a NULL grade
value.  Note that the system allows a NULL value since NOT NULL was not specified as a
constraint for the Grade column.
 
INSERT INTO Enroll 
    VALUES ('111111111', 111111, 'Summer', 2012, NULL); 
 
The next INSERT command attempts to insert a row that violates referential integrity to
the STUDENT table because a student with SSN=999-99-9999 is not stored in
the STUDENT table.
 
INSERT INTO Enroll 
    VALUES ('999999999', 111111, 'Summer', 2012, 'A');

ERROR at line 1:

ORA-02291: integrity constraint (DBOCK.ENR_STU_SSN_FK) violated - parent key not found 

 
The next INSERT command attempts to insert a row that violates referential integrity to
the SECTION table.
 
INSERT INTO Enroll 
    VALUES ('222222222', 999999, 'Summer', 2012, 'A');

ERROR at line 1:

ORA-02291: integrity constraint (DBOCK.ENR_SECTION_NUMBER_FK) violated – parent key not


found 

 
Attempt to insert a row that violates the Check Constraint for the Grade column.  The
row is rejected because the letter grade 'G' is not in the acceptable list of values.
 

INSERT INTO Enroll 


    VALUES ('222222222', 111111, 'Summer', 2012, 'G');

 
ERROR at line 1:

ORA-02290: check constraint (DBOCK.GRADE_CHECK_CK) violated

  
Virtual Column
New to Oracle 11g is the ability to specify a virtual column. 
        Treat them much like other columns. 
        Not stored on disk - the value is derived on demand by computing it as a set of
expressions or functions.
        Can be used in queries, DML, and DDL.
        Can be indexed - this is equivalent to a function-based index.
        Can have statistics collected on them.
        If the data type is not specified, Oracle will determine a data type based on the
underlying expressions.
        Uses the keywords GENERATED ALWAYS.
        The AS column_expression clause determines the column content.
        Cannot use the SET clause of an UPDATE statement to specify a value.
        Can be used in the WHERE clause of
a SELECT, UPDATE or DELETE statement.
 
We cannot create these in Oracle 10g; however, a column definition for an Oracle 11g
database would be like this example:
 
    Account_Balance       NUMBER(7,2),
    Monthly_Due           NUMBER(6,2)
        GENERATED ALWAYS AS (Account_Balance/12),
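If the STUDENT table were created with this Monthly_Due column in an 11g database, the virtual column could then be queried and indexed like a stored column.  A sketch (the index name here is illustrative):

```sql
/* The value is computed on demand from Account_Balance. */
SELECT SSN, Account_Balance, Monthly_Due
    FROM Student
    WHERE Monthly_Due > 100;

/* Indexing a virtual column is equivalent to creating a
   function-based index. */
CREATE INDEX Student_Monthly_Due_IX
    ON Student (Monthly_Due)
    TABLESPACE Index01;
```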
 
 

Using the Data Dictionary


 
The queries shown in this section are used to display information about the tables and
indexes that were created by the CREATE TABLE commands.  We also verify the
existence of the specified constraints. 
 
COLUMN Table_name FORMAT A12;

COLUMN Tablespace_name FORMAT A15;

SELECT Table_name, Tablespace_name

    FROM User_tables

    WHERE Table_name IN ('STUDENT', 'COURSE', 'SECTION', 'ENROLL');

 
 
TABLE_NAME   TABLESPACE_NAME
------------ ---------------
COURSE       DATA01
ENROLL       DATA01
SECTION      DATA01
STUDENT      DATA01
 

COLUMN Index_name FORMAT A12;

SELECT Index_name, Table_name, Tablespace_name 


    FROM User_indexes

    WHERE Table_name IN ('STUDENT', 'COURSE',

        'SECTION', 'ENROLL');

 
INDEX_NAME   TABLE_NAME   TABLESPACE_NAME
------------ ------------ ---------------
COURSE_PK    COURSE       INDEX01
STUDENT_PK   STUDENT      INDEX01
SECTION_PK   SECTION      INDEX01
ENROLL_PK    ENROLL       INDEX01
 
COLUMN Constraint_name FORMAT A21;

COLUMN CT FORMAT A2;

COLUMN Search_condition FORMAT A37;

SELECT Constraint_name, Constraint_type CT, Table_Name, Search_Condition 
    FROM User_constraints

    WHERE Table_name IN ('STUDENT', 'COURSE',

        'SECTION', 'ENROLL');

CONSTRAINT_NAME       CT TABLE_NAME   SEARCH_CONDITION

--------------------- -- ------------ -----------------

STUDENT_NAME_NN       C  STUDENT      "STUDENT_NAME" IS

                                       NOT NULL

LOCATION_NN           C  SECTION      "LOCATION" IS NOT

                                      NULL

YEAR_NN               C  SECTION      "SECTION_YEAR" IS

                                      NOT NULL

TERM_NN               C  SECTION      "SECTION_TERM" IS

                                      NOT NULL

GRADE_CHECK_CK        C  ENROLL       Grade IN

                             ('A','B','C','D','E','WP')
HOURS_GREATER_ZERO    C  COURSE       Hours > 0

HOURS_LESS_12_CK      C  COURSE       Hours < 12

DESCRIPTION_NN        C  COURSE       "DESCRIPTION" IS

                                      NOT NULL

STUDENT_PK            P  STUDENT

SECTION_PK            P  SECTION

ENROLL_PK             P  ENROLL

COURSE_PK             P  COURSE

SECTION_TO_COURSE_FK  R  SECTION

ENR_SECTION_NUMBER_FK R  ENROLL

ENR_STU_SSN_FK        R  ENROLL

15 rows selected.

 
 

Unary Relationships
 
A unary relationship (also called a recursive relationship) occurs when the rows within
an individual table are related to other rows within the table.  For example, a database
table for FACULTY can store the same type of information for each faculty member. 
The chairperson of an academic department is both a faculty member and a supervisor
of other faculty members so the Supervisor relationship would represent an
association among faculty members.  This relationship is depicted below.
 

 
 
The CREATE TABLE command shown below creates the FACULTY table.  The
foreign key Fac_Supervisor_FK implements the one-to-many relationship among
the FACULTY rows. 
 
The Supervise relationship is implemented through the referential integrity
constraint named Fac_Supervisor_FK.  Also note that if a supervisor is deleted, the
Fac_Supervisor_SSN values of the supervised faculty rows are set to NULL through
the ON DELETE SET NULL clause of the foreign key.
 
CREATE TABLE Faculty (
    Fac_SSN             CHAR(9),
    First_Name          VARCHAR(25)
        CONSTRAINT First_Name_NN NOT NULL,
    Last_Name           VARCHAR(25)
        CONSTRAINT Last_Name_NN NOT NULL,
    Fac_Dept            VARCHAR(12),
    Fac_Supervisor_SSN  CHAR(9),
    CONSTRAINT Faculty_PK
        PRIMARY KEY (Fac_SSN)
        USING INDEX Tablespace Index01
        PCTFREE 5,
    CONSTRAINT Fac_Supervisor_FK
        FOREIGN KEY (Fac_Supervisor_SSN)
            REFERENCES Faculty
            ON DELETE SET NULL
    )
PCTFREE 15 PCTUSED 65
Tablespace Data01;
 
The INSERT commands shown here insert good rows into the FACULTY table.  Note
that the Supervisor of the Department (Bock) has a NULL value for
the Fac_Supervisor_SSN column.
 
INSERT INTO Faculty 
    VALUES ('123456789', 'Douglas', 'Bock', 'CMIS', NULL); 
INSERT INTO Faculty 

    VALUES ('234567890', 'Susan', 'Yager', 'CMIS', '123456789'); 
INSERT INTO Faculty 
    VALUES ('345678901', 'Anne', 'Powell', 'CMIS', '123456789'); 
 
Select rows from the FACULTY table.  Note the value for
the Fac_Supervisor_SSN column (labeled FAC_SUPER below).
 
COLUMN First_Name FORMAT a10; 
COLUMN Last_Name FORMAT a10; 
SELECT * FROM Faculty;

 
FAC_SSN   FIRST_NAME LAST_NAME  FAC_DEPT     FAC_SUPER

--------- ---------- ---------- ------------ ---------

123456789 Douglas    Bock       CMIS

234567890 Susan      Yager      CMIS         123456789

345678901 Anne       Powell     CMIS         123456789
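The unary relationship can be traversed with a self-join.  A sketch listing each faculty member with the supervisor's last name (the aliases F and S are illustrative):

```sql
/* Self-join: F is the faculty member, S is the supervisor row.
   The outer join keeps faculty with no supervisor (Bock). */
SELECT F.Last_Name AS Faculty, S.Last_Name AS Supervisor
    FROM Faculty F
    LEFT OUTER JOIN Faculty S
        ON F.Fac_Supervisor_SSN = S.Fac_SSN;
```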

 
Now we test the referential integrity constraint.  Attempting to store a row that violates
referential integrity results in an error message from Oracle.
 
INSERT INTO Faculty 
    VALUES ('456789012','Joe','Prof','CMIS','444444444');

ERROR at line 1:

ORA-02291: integrity constraint (DBOCK.FAC_SUPERVISOR_FK) violated - parent key not found 

 
We next test the ON DELETE cascading constraint by deleting Bock's record.  This
should reset the Fac_Supervisor_SSN values for the subordinate faculty rows to
a Null value.
 
DELETE FROM Faculty WHERE Fac_SSN='123456789';

1 row deleted.

 
SELECT * FROM Faculty;
 
FAC_SSN   FIRST_NAME LAST_NAME  FAC_DEPT     FAC_SUPER

--------- ---------- ---------- ------------ ---------

234567890 Susan      Yager      CMIS

345678901 Anne       Powell     CMIS

Temporary Tables
 
Temporary tables are created to store session-private data that exists only while a
transaction is processing or while a session exists. 
        The tables are only visible to the individual system user creating them. 
        Useful approach if temporary tables are needed to support special programming
requirements.
 
The CREATE GLOBAL TEMPORARY TABLE command shown here has ON
COMMIT DELETE ROWS to specify that rows will only be visible within a
transaction.  Alternatively, the ON COMMIT PRESERVE ROWS will cause rows to be
visible for an entire session. 
 
You can also create indexes, views, and triggers for temporary tables. 
 
CREATE GLOBAL TEMPORARY TABLE admin_work_area (
  Start_date     DATE,
  End_date       DATE,
  Class_Desc     CHAR(20))
      ON COMMIT DELETE ROWS;
 
CREATE GLOBAL TEMPORARY TABLE department_temp
    ON COMMIT PRESERVE ROWS   
    AS SELECT * FROM dbock.department;
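A short sketch of the transaction-scoped behavior of the admin_work_area table defined above.  Because ON COMMIT DELETE ROWS was specified, the inserted row disappears at commit:

```sql
INSERT INTO admin_work_area
    VALUES ('01-JAN-12', '15-MAY-12', 'Spring Classes');

/* Within the transaction the row is visible to this session. */
SELECT COUNT(*) FROM admin_work_area;

COMMIT;

/* After COMMIT the count is 0 - the rows were deleted. */
SELECT COUNT(*) FROM admin_work_area;
```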
 

        Rows of a temporary table are stored to the default temporary tablespace of the
user creating the table.
        The TABLESPACE clause can be used to specify another tablespace for row
storage -- useful when the default temporary tablespace has large extents meant
for sorting.
        Segments for temporary tables and their indexes are not allocated until the first
INSERT takes place. 
        Backup/recovery of a temporary table is not available in the case of a system
failure - the data is after all temporary.
 
 

Index-Organized Tables
 
An Index-Organized Table (IOT) has a storage organization that is a variant of a
primary B-tree structure. 
        The figure shown here illustrates the difference in organization between a regular
table and an IOT. 
        Instead of storing each row’s ROWID value in an index, the non-key column
values are stored. 
        Each B-tree entry for an IOT has the following structure (PK=Primary Key): 
 
[PK column value, Non-PK column value1, Non-PK column value2, … ]
 

 
 
        Programming applications access and modify data stored in an IOT in the same
fashion as is done for a regular table – through the use of SQL and PL/SQL
commands. 
        The Oracle Server performs all of the operations required to maintain the
structure of the B-tree index structure that comprises the IOT.
 
        This table compares regular table and IOT characteristics.
 

Ordinary Table                                Index-Organized Table
--------------------------------------------  --------------------------------------------
Rowid uniquely identifies a row; a primary    Primary key uniquely identifies a row;
key can optionally be specified               a primary key must be specified
Physical rowid in the ROWID pseudocolumn      Logical rowid in the ROWID pseudocolumn
allows building secondary indexes             allows building secondary indexes
Access is based on rowid                      Access is based on logical rowid
Sequential scan returns all rows              Full-index scan returns all rows
Can be stored in a cluster with other tables  Cannot be stored in a cluster
Can contain columns of the LONG and LOB       Can contain LOB columns but not LONG
datatypes                                     columns
 

Benefits of IOT
 
IOTs improve database performance by providing faster access to table rows by the
primary key or any key that is a valid prefix of the primary key. 
        The storage of non-key columns for a row in the B-tree leaf block itself avoids an
additional block access. 
        Because rows are stored in primary key order, range access by the primary key
(or a valid prefix) involves minimum block accesses.
 
In order to allow even faster access to frequently accessed columns, you can use a
row overflow storage option (as described below) to push out infrequently accessed
non-key columns from the B-tree leaf block to an optional (heap-organized) overflow
storage area. 
        This allows limiting the size and content of the portion of a row that is actually
stored in the B-tree leaf block. 
        This can result in a higher number of rows in each leaf block and a smaller B-tree
structure.
 
Some storage space is saved because primary key columns are stored only once in
the IOT, whereas with regular tables the primary key column values are stored both in
the table and in the associated index. 

283
        Because rows are stored in primary key order, as opposed to the unordered
regular table, some storage space can be saved by using key compression. 
        This is specified with the COMPRESS clause – the Oracle Server handles the key
compression.
 
Primary key access for IOTs is based on logical rowids (the key column values),
whereas regular tables are accessed by physical rowid values as you saw earlier.  This
approach also applies to secondary indexes. 
 
 

IOT Row Overflow


 
As you will see when we study B-tree indexes later in this course, the index nodes for
indexes that access regular tables are usually very small because they include just
the logical key value and a ROWID that points to the row location in a regular table.
 
With IOT, the B-tree index nodes can become quite large because they store the
entire row contents, especially where rows are 3000 characters or more in size. 
        This can adversely affect performance of this index approach. 
        In order to work around this potential limitation, the Oracle Server can store rows
by using an overflow tablespace or area. 
        Columns that are accessed quite often are stored within the original IOT
structure. 
        Columns that are infrequently accessed can be stored in an overflow area.  This
enables the B-tree nodes to be smaller – more entries (rows) will fit within a
single block and retrieval performance for data will improve. 
 
The Oracle Server will automatically determine if a row should be stored in two parts
(in the index and in overflow).  You specify two
clauses:  PCTTHRESHOLD and INCLUDING to control the partitioning of a row. 
 
        PCTTHRESHOLD specifies a threshold value as a percentage of block size. 
o   If all non-key column values fit within the specified size limit, the row is not
broken into two parts.
o   Otherwise, the first non-key column that will not fit within the threshold and
all remaining non-key columns are moved to an overflow tablespace. 
 
        The INCLUDING clause lets the DBA specify the column name within the
CREATE TABLE statement where the break for overflow is to be made. 
 
 

The CREATE TABLE Command for an IOT


 
When you execute a CREATE TABLE command to create an IOT, additional
information must be specified in the command.
 
ORGANIZATION INDEX clause:  This indicates that this is an IOT.
 
PRIMARY KEY clause:  A primary key must be specified for an IOT.
 
OVERFLOW  and PCTTHRESHOLD clauses:  These specify the overflow tablespace
and the PCTTHRESHOLD value that defines the percentage of space reserved in the
index block for the IOT and the portion of a row that should be moved to
overflow.  Thus, the index entry contains the key value, the non-key column
values that fit the specified threshold, and a pointer to the rest of the row.
 
Example #1:  CREATE TABLE command for an IOT: 
 

CREATE TABLE states_iot (
    State_ID          CHAR(2)
        CONSTRAINT State_ID_PK PRIMARY KEY,
    State_Name        VARCHAR2(50),
    Population        NUMBER )
  ORGANIZATION INDEX
  TABLESPACE Data01
  PCTTHRESHOLD 20
  OVERFLOW TABLESPACE Data01;
 

 
Example #2 (from the Oracle Database Administrator’s Guide): 
        This example specifies an index-organized table, where the key columns and
non-key columns reside in an index defined on columns that designate the
primary key (token, doc_id) for the table. 
        The break for data columns moved to overflow is specified as
the token_frequency column by the INCLUDING clause. 
        Primary key compression will be done by the Oracle Server because
the COMPRESS clause is specified.
 
CREATE TABLE admin_doc_iot(
    token            char(20),
    doc_id           NUMBER,
    token_frequency  NUMBER,
    token_offsets    VARCHAR2(512),
        CONSTRAINT admin_doc_iot_PK
            PRIMARY KEY (token, doc_id))
 ORGANIZATION INDEX COMPRESS
  TABLESPACE Data01
  PCTTHRESHOLD 20
  INCLUDING token_frequency
  OVERFLOW TABLESPACE Data01;
 
 Parallelizing Table Creation
Parallel execution enables faster creation of tables.  The CREATE TABLE . . . . AS
SELECT statement has two parts:
        CREATE (this is DDL)
        SELECT (this is a query)
        Oracle can parallelize both parts if a PARALLEL clause is specified in the
CREATE statement.
 
CREATE TABLE Test_Parallel
    PARALLEL COMPRESS
    AS SELECT * FROM PRODUCT
    WHERE Retail_Price > 200;
 
 

Indexed Clusters
 

General Concepts
 
Clusters store data from different tables in the same physical data blocks. 
        This approach can improve performance when application processing requires
that records from related tables be frequently accessed simultaneously. 
        Example:  Suppose that order processing requires the application program to
frequently access a row from a Customer Order table and the
associated Order_Details table rows.
o   The Order table is referred to as the "owner" table in the relationship.
o   The Order_Details table is the "member" table in the relationship.
 
While it is possible to create a cluster with more than two tables in the cluster, this is
only done when there is a single "owner" table and multiple "member" table
types.  Even then, the clustering of more than two tables is rare because of the
potential negative effects on performance for other queries.
 
The use of a cluster decreases the number of database block reads that are
necessary to retrieve the needed data for the cluster. 
        This improves performance for queries that require data stored in both (or more
than two if that is the case) tables in the cluster. 
        While clustered retrieval performance for all tables in the cluster may improve,
performance for other types of transactions may suffer. 
        This is especially true for transactions that only require one of the tables in the
cluster because all data within the data blocks will be transferred from disk to
memory for these transactions even though the rows from one of the tables are
not needed.
 
        Further performance gains are obtained because the distinct value (cluster
key) in each cluster column is stored only once, regardless of the number of
times that it occurs in the tables and rows, thereby reducing the storage space
requirements.
 
 

Cluster Key and Index


 
The columns of the cluster index also serve as the cluster key.  The cluster key is a
set of one or more columns that the tables in the cluster have in common. 
        The cluster key must be columns of the same size and data type in both tables,
but need not be the same column name. 
        Typically, the cluster key is the foreign key from the member table that
references the primary key from the owner table in the cluster.   The cluster key
may also be part of a concatenated primary key in the member table.
 
For Indexed Clusters (as opposed to Hash Clusters), clusters must be indexed on the
cluster key. 
        This key must be created prior to the storage of any data in the cluster.
        There is one exception. If you specify a Hash Cluster form, you don't need to
create an index on the cluster key.  Instead, Oracle will use a hash function to
store table rows.

 
When you query tables in a cluster, the row data in a result table continues to be
displayed as if all of the data stored for the key values are stored in separate tables
even though cluster keys are only stored once for a data block.  In other words, the
system user can't tell that the tables are clustered.
 
The maximum length of all cluster columns combined is 239 characters and tables
with LONG columns cannot be clustered.
 

Steps in Creating an Indexed Cluster


 
The steps in creating a cluster are:
(1) create the cluster definition.
(2) create the cluster index definition.
(3) create the tables that form the cluster by defining them.
(4) load data into the tables.
 
As rows are inserted into the tables that form a cluster, the database stores the cluster
key and its associated rows in each cluster database block.
 
In this example a cluster of two tables will be
created: TestOrders and TestOrderDetails.  These two tables have the following
fields and characteristics.
 

Table              Field Name        Field Type/Size/Characteristics
-----------------  ----------------  -------------------------------
TestOrders         OrderId           NUMBER(3), Primary Key
                   OrderDate         DATE, NOT NULL
                   OrderTotal        NUMBER(10,2)
TestOrderDetails   ProductId         NUMBER(5), Primary Key
                   OrderId           NUMBER(3), Primary Key
                   QuantityOrdered   NUMBER(3)
                   ItemPrice         NUMBER(10,2)

 
The cluster index will be on the OrderId field since this column is common to
the two tables.  This column also serves as the Primary Key of TestOrders, the
"owner" table in this relationship.
 
The PCTFREE and PCTUSED parameters are set during creation of the cluster with
the CREATE CLUSTER command.  Do not set values for
the PCTFREE and PCTUSED parameters for tables that form a cluster - if you set them
in the CREATE TABLE command, they will be ignored.  Also, clustered tables always
use the PCTFREE and PCTUSED settings of the cluster.
 
The CREATE CLUSTER command has an optional argument, SIZE.  This parameter
is the estimated number of bytes required by an average cluster key and associated
rows.
 
Oracle uses SIZE to estimate the number of cluster keys (and rows) that fit into a
clustered data block; and to limit the number of cluster keys placed in a clustered data
block.
 
If all rows for a unique cluster key will not fit into a single data block, extra data blocks
are chained to the first block to improve speed of access to all values for a given
cluster key.
 

By default only a single cluster key and associated rows are stored in each data block
of a cluster's data segment if you do not specify the SIZE parameter.
 
The cluster is named OrderCluster in the CREATE CLUSTER command shown here.
 
CREATE CLUSTER OrderCluster

    ( OrderId NUMBER(3) )

    PCTUSED 60

    PCTFREE 40

    SIZE 1200

    Tablespace Data01;

 
Oracle responds with the Cluster created message if no errors occur during the
process.
 
In step 2, the cluster index is created.  This step is skipped for a hashed cluster.  Note
that the keyword UNIQUE is not allowed with a cluster index.
 
CREATE INDEX OrderClusterIndex

    ON CLUSTER OrderCluster

        INITRANS 2

        MAXTRANS 5

    Tablespace Index01;

 
Oracle responds with the Index created message if no errors occur during the
process.
 
In step 3, the index cluster tables are created. 
        Note that you cannot specify a TABLESPACE clause for a table that is part of a
cluster because that option was specified when the cluster was created. 
        Attempting to specify a tablespace will return an Oracle error code.
 
CREATE TABLE TestOrders (

    OrderId           NUMBER(3)

      CONSTRAINT TestOrders_PK Primary Key

      USING INDEX Tablespace Index01,

    OrderDate         DATE

      CONSTRAINT OrderDate_NN NOT NULL,

    Order_Amount       NUMBER(10,2) )

  CLUSTER OrderCluster (OrderId) ;

 
CREATE TABLE TestOrderDetails (

    ProductId          NUMBER(5),

    OrderId            NUMBER(3),

    Quantity_Ordered   NUMBER(3)

      CONSTRAINT Quantity_Ordered_CK

        CHECK (Quantity_Ordered >= 0),

    ItemPrice          NUMBER(10,2),

  CONSTRAINT TestOrderDetails_FK FOREIGN KEY (OrderId)

      REFERENCES TestOrders

      ON DELETE CASCADE,

  CONSTRAINT TestOrderDetails_PK

      PRIMARY KEY (ProductId, OrderId)

      USING INDEX Tablespace Index01 )

  CLUSTER OrderCluster (OrderId) ;

 
In each case, Oracle responds with the Table created message if no errors occur
during the process.
 
In step 4, data rows are inserted into the tables.  In this first example, valid data rows
are entered for the TestOrders table and the associated TestOrderDetails records for
a single order with OrderId = 111 for two items that total
$75.95.  The ProductId values for the items are 55555 and 66666.
 
INSERT INTO TestOrders

    VALUES (111,'23-Jun-05',75.95);

INSERT INTO TestOrderDetails

    VALUES (55555,111,1,50.00);

INSERT INTO TestOrderDetails

    VALUES (66666,111,1,25.95);

 
The data are listed from the tables to verify the existence of the rows.
 
SELECT * from TESTORDERS; 
 
   ORDERID ORDERDATE ORDER_AMOUNT

---------- --------- ------------

       111 23-JUN-05        75.95
 
SELECT * from TESTORDERDETAILS;

PRODUCTID    ORDERID QUANTITY_ORDERED  ITEMPRICE

---------- ---------- ---------------- ----------

     55555        111                1         50

     66666        111                1      25.95

 
Next we test the referential integrity constraints by attempting to insert a row
in TestOrderDetails for which there is no corresponding TestOrders row.  Note
that OrderId 222 does not yet exist in the TestOrders table.
 
INSERT INTO TestOrderDetails

    VALUES (66666,222,1,25.95);

 
ORA-02291: integrity constraint (DBOCK.TESTORDERDETAILS_FK) violated – parent key not
found

 

Potential Referential Integrity Errors


 
It is possible for an application developer or DBA to introduce errors into the system
without recognizing the problem.  For example, if the TestOrderDetails table was
created without the FOREIGN KEY clause, then the system would be perfectly happy
allowing entry of a row in TestOrderDetails that has no corresponding "owner" row in
the TestOrders table.
 
This error can be demonstrated by dropping the TestOrderDetails table, then
recreating it without the FOREIGN KEY clause.  The attempted row insertion will now
produce the following results:
 
INSERT INTO TestOrderDetails

    VALUES (66666,222,1,25.95);

Oracle responded with: 1 row created.

 
Following this, the TestOrderDetails table was again dropped and created using the
original correct CREATE TABLE specification including the FOREIGN KEY
clause.  Data were again inserted into the table successfully.
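Before re-adding a FOREIGN KEY clause that was omitted, any orphaned detail rows can be located with an anti-join query.  A sketch against the tables above:

```sql
/* Find TestOrderDetails rows with no matching TestOrders parent. */
SELECT d.*
    FROM TestOrderDetails d
    WHERE NOT EXISTS
        (SELECT 1
             FROM TestOrders o
             WHERE o.OrderId = d.OrderId);
```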

Deleting Cluster Rows


 
If you examine the FOREIGN KEY clause for the TestOrderDetails table, you will find
the ON DELETE CASCADE specification.  This option specifies
that if a row in TestOrders is deleted, then the associated rows in
the TestOrderDetails table will also be deleted.
 
This DELETE command example shows the results of deleting a row
in TestOrders and then selecting rows from TestOrderDetails.  Note that
the ON DELETE CASCADE option took effect.
 
DELETE FROM TestOrders

    WHERE OrderId = 111;

Oracle responds with:  1 row deleted.

 
Now we query the database.
 
SQL> SELECT * FROM TestOrderDetails;

Oracle responds with:  no rows selected

 
 

Cluster Information in the Data Dictionary


 
Querying the data dictionary User_Clusters view will provide information about the
cluster that was created in these notes.
 
SELECT Cluster_Name, TableSpace_Name

    FROM User_Clusters;

CLUSTER_NAME    TABLESPACE_NAME

------------    ---------------

ORDERCLUSTER    Data01

 
 

 Hashed Clusters
 

General Concepts
 
Hash clusters can improve data retrieval performance.
        Indexed tables and indexed clusters locate rows by using key value-
ROWID pairs stored in indexes.
        Hashed clusters store and retrieve rows through a hash function.
        The hash function distributes hash values (based on cluster key values) evenly
throughout the storage area.
        The hash value corresponds to a data block in the cluster.
        No index input/output is required to locate a row in a hash cluster – the hash
function computes the data block address directly, so a row is typically retrieved
with a single I/O.
 

When to Use/Not Use Hash Clusters


 
Use hashing when:
        Queries are equality queries – the row with the key is found in one I/O
compared to indexed approaches that require one or more reads of the index
and another I/O read of the data block containing the row.
 
Example: 
SELECT column1, column2, . . . FROM table_name WHERE cluster_key = ' . . . '
 
        The tables in the hash cluster are static – extending a hash cluster beyond the
initial disk space allocation can cause degradation of performance due to the use
of overflow storage space.
 
Do NOT use hashing when:
        Queries retrieve rows over a range of values.  A hash function cannot determine
the row locations and a full table scan often results.
 
Example: 
SELECT column1, column2, … FROM table_name WHERE cluster_key < ' . . . '
 
        Tables continue to grow over the life of the database.
        Applications tend to require full-table scans as part of normal data processing –
hashing performs extremely poorly in this situation.
        There is not enough available disk space to allocate the space needed for the
hash cluster in advance.
 

Creating a Hash Cluster


 
A hash cluster is created through use of the HASHKEYS clause.
 
CREATE CLUSTER HashOrderCluster

    ( OrderId NUMBER(3) )

    PCTUSED 60

    PCTFREE 40

    SIZE 1200

    Tablespace Data01

  HASH IS OrderID HASHKEYS 150;

 
CREATE TABLE HashTestOrders (

    OrderId           NUMBER(3)

      CONSTRAINT HashTestOrders_PK Primary Key,

    OrderDate         DATE

      CONSTRAINT HashOrderDate_NN NOT NULL,

    Order_Amount       NUMBER(10,2) )

  CLUSTER HashOrderCluster (OrderId) ;

CREATE TABLE HashTestOrderDetails (

    ProductId          NUMBER(5),

    OrderId            NUMBER(3),

    Quantity_Ordered   NUMBER(3)

      CONSTRAINT HashQuantity_Ordered_CK

        CHECK (Quantity_Ordered >= 0),


    ItemPrice          NUMBER(10,2),

  CONSTRAINT HashTestOrderDetails_FK

      FOREIGN KEY (OrderId)

      REFERENCES HashTestOrders

      ON DELETE CASCADE,

  CONSTRAINT HashTestOrderDetails_PK

      PRIMARY KEY (ProductId, OrderId) )

  CLUSTER HashOrderCluster (OrderId) ;

 
        The primary key (hash key) can be one or more columns – the above example
has a single column named OrderID.
        The HASHKEYS clause value specifies and limits how many unique hash values
can be generated – Oracle rounds the value up to the nearest prime number.
        The HASH IS clause is optional. 
o   It can be used to specify the column to hash on; this is only necessary if the
intent is NOT to hash on the primary key, or if the primary key is a unique
NUMBER column whose values are already uniformly distributed, so that the
internal hash function is not needed to spread the rows evenly across the
disk space allocated to the cluster.
o   Specify the HASH IS parameter only if the cluster key is a single column of
the NUMBER data type.
o   This parameter can also be used to specify a user-defined hash function to
use instead of the function provided by default by Oracle.
o   In this example a cluster is created that has a CHAR data type cluster key
and no HASH IS clause is specified, so the cluster will be hashed on
the Prescription_Drug_Code column value using Oracle's internal hash function.
 
CREATE CLUSTER Prescription_Drug_Cluster
    (Prescription_Drug_Code CHAR(5))
    SIZE 512 SINGLE TABLE HASHKEYS 500;
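The rounding of HASHKEYS up to a prime can be verified by querying the User_Clusters data dictionary view – the HASHKEYS column reflects the value Oracle actually used (for HASHKEYS 500 this should show 503).  A sketch:

```sql
SELECT Cluster_Name, HashKeys
    FROM User_Clusters
    WHERE Cluster_Name = 'PRESCRIPTION_DRUG_CLUSTER';
```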

 
        Hash clusters cannot be indexed.
        Hash clusters should have a hash key that reflects
how SELECT statement WHERE clauses will query the database, but most often
the cluster is hashed on the primary key.
 

Sorted Hash Cluster


 
Hash clusters whose related rows (rows sharing the same hash key) need to be
returned in sorted order can be created as sorted hash clusters.
        Example (from the Oracle Database Administrator's Guide) – this cluster hashes
on the telephone_number column, while the call_timestamp
and call_duration columns are declared as SORT columns, so the entries for a
given telephone number are stored sorted by call_timestamp and call_duration. 
 
CREATE CLUSTER call_detail_cluster (
   telephone_number NUMBER,
   call_timestamp NUMBER SORT,
   call_duration NUMBER SORT )
  HASHKEYS 10000 HASH IS telephone_number
  SIZE 256;
 
CREATE TABLE call_detail (
   telephone_number     NUMBER,
   call_timestamp       NUMBER   SORT,
   call_duration        NUMBER   SORT,
   other_info           VARCHAR2(30) )
  CLUSTER call_detail_cluster (
   telephone_number, call_timestamp, call_duration );
 
        The query shown here returns call records for the hash key by oldest record first.
 
SELECT * FROM call_detail
    WHERE telephone_number = 6505551212;

 

Single Table Hash Cluster


 
A single-table hash cluster provides very fast access to individual table rows.
        The hash cluster here can contain only one table.
        There is a 1-to-1 mapping between hash keys and data rows.
        Note the use of the SINGLE TABLE clause.
        The HASHKEYS value here rounds up to the nearest prime number (503), which
specifies the maximum number of possible hash key values, each of
size 512 bytes.
 
Example:
CREATE CLUSTER Physician_Specialty_Cluster (Specialty_Code NUMBER)
    SIZE 512 SINGLE TABLE HASHKEYS 500;
 
CREATE TABLE Physician_Specialty (
   Specialty_Code       NUMBER,
   Specialty_Description VARCHAR(75) )
  CLUSTER Physician_Specialty_Cluster (Specialty_Code);
 
Controlling Hash Cluster Disk Space Use
 
Choose the cluster key based on the most common type of query to be issued against
the cluster tables.
        Example:  An EMPLOYEE table may have the EmployeeID column as the
cluster key.
        Example:  A DEPARTMENT table may have the DepartmentNumber column as
the cluster key.
 

The HASH IS clause parameter should only be used if the cluster key is a single
column of NUMBER data type and if the column values are uniformly distributed
integers – this facilitates allocating rows within the cluster without collisions
(two cluster key values that hash to the same hash value).
 
The SIZE parameter should be set to the average disk space needed to store all rows
for a given hash key (need to calculate average row size for all related rows in a
cluster).
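As a rough sizing sketch, the average row length reported in the User_Tables view (populated once optimizer statistics have been gathered) can be combined with the expected number of related rows per cluster key to estimate SIZE – the table names below are the example tables from these notes:

```sql
-- SIZE is roughly: sum over clustered tables of
-- (average row length x expected rows per cluster key value)
SELECT Table_Name, Avg_Row_Len, Num_Rows
    FROM User_Tables
    WHERE Table_Name IN ('TESTORDERS','TESTORDERDETAILS');
```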
 
 

Additional Topics
 

ALTER TABLE Options


 
The ALTER TABLE command can be used to modify the structure of a table, to
change storage and block utilization parameters, to add a referential integrity
constraint, to remove a referential integrity constraint, and for several other
modifications.
 
Adding a Column to a Table:  This is an example of an ALTER TABLE command
that adds two columns to the DEPARTMENT table. 
 
ALTER TABLE Department ADD
    (Dept_Location   VARCHAR2(25),
     Dept_Division   VARCHAR2(50) DEFAULT 'Business' );
 
New columns initially store NULL values unless a DEFAULT clause is specified as
was done for the Dept_Division column shown above (the default is 'Business').
 

New columns can only be specified as NOT NULL if the table does not contain any
rows or if a DEFAULT value is specified for the column. 
 
Changing Storage/Block Utilization Parameters:  The ALTER TABLE command
below demonstrates changing PCTFREE, PCTUSED, and various STORAGE
clauses. 
 
ALTER TABLE Department
    PCTFREE 30
    PCTUSED 50
    STORAGE (NEXT 256K PCTINCREASE 20
        MINEXTENTS 2 MAXEXTENTS 50);
 
        The NEXT parameter change takes effect the next time the Oracle Server
allocates an extent for the DEPARTMENT table. 
        The PCTINCREASE change will be used to recalculate NEXT when the next
extent is allocated.  If both NEXT and PCTINCREASE are modified, then the
PCTINCREASE change goes into effect after the next extent is allocated. 
o   Example:  The DEPARTMENT table has two extents of size 128K.  The
3rd extent will be 256K (as specified in the ALTER TABLE command shown
above).  The 4th extent will be approximately 307.2K (20% larger), but an
extent must be a multiple of the block size (assuming a block size of 8K), so
this would be rounded up to 312K.
        The MINEXTENTS clause can be modified to any value equal to or less than the
current number of allocated extents and will only have an effect if the table is
truncated.
        The MAXEXTENTS clause can be modified to any value equal to or greater than
the current number of extents for the table.  It can also be set to UNLIMITED.
        You'll notice that the INITIAL storage parameter is not listed – it cannot be
modified for a table that already exists.
 

Add a Referential Integrity Constraint:  The example ALTER TABLE command
shown here adds a referential integrity constraint.  You might not want the constraint
added when the table is initially created, or you might be adding a constraint for a new
table.
 
First a new table named TEACHER is created and a new row is inserted into the table.
 
/* Create Owner Table */
CREATE TABLE Teacher (
    Teacher_ID      CHAR(4)
        CONSTRAINT Teacher_PK PRIMARY KEY
            USING INDEX Tablespace Index01
            PCTFREE 5,
    Teacher_Name    VARCHAR(40)
        CONSTRAINT Teacher_Name_NN NOT NULL
    )
PCTFREE 5 PCTUSED 60
TABLESPACE DATA01;
 
INSERT INTO Teacher VALUES ('1234','Bock, Douglas');
 
Next the SECTION table has a column added, and then a foreign key constraint is
added to the SECTION table to enforce referential integrity to
the TEACHER table's TEACHER_ID column.
 
ALTER TABLE Section ADD
    (Teacher_ID CHAR(4) DEFAULT '1234');
 
ALTER TABLE Section
    ADD CONSTRAINT Section_To_Teacher_FK
    FOREIGN KEY (Teacher_ID) REFERENCES Teacher;
 
Now test the new foreign key constraint by trying to insert a new SECTION row with an
invalid TEACHER_ID value.
 
INSERT INTO SECTION VALUES
    ('55555','Summer',2012,'CMIS460','FH-3208','2222');
ERROR at line 1:
ORA-02291: integrity constraint (DBOCK.SECTION_TO_TEACHER_FK)
violated - parent key not found
 
Drop a Referential Integrity Constraint:  The example ALTER TABLE command
shown here drops the SECTION_TO_TEACHER_FK foreign key constraint for
the SECTION table.
 
ALTER TABLE Section
    DROP CONSTRAINT Section_To_Teacher_FK;
 
Table altered.
 
Manually Allocating Extents:  The example ALTER TABLE command shown here
manually allocates an extent to the DEPARTMENT table in order to control the
distribution of the table across files that comprise the tablespace where the table is
stored. 
 
ALTER TABLE department ALLOCATE EXTENT (SIZE 512K
  DATAFILE
  '/u01/student/dbockstd/oradata/USER350users01.dbf');
 
The manual allocation of extents does not affect the computation of the size of the
NEXT extent where the PCTINCREASE clause is not zero.
 
Move a Table:  A non-partitioned table can be moved from one tablespace to another
in order to support a shift in application processing of some sort.  This is also useful to
eliminate row migration that may exist in a table.
 
ALTER TABLE department
    MOVE TABLESPACE Index01;
 
Moving a table reorganizes the extents and eliminates row migration.  After moving a
table, the DBA must rebuild the indexes for the table manually; otherwise, Oracle will
return the error ORA-01502: index 'index_name' or partition of such index is in
unusable state.   Rebuilding indexes is covered in a later module in
this course.
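Each index on the moved table can be rebuilt with an ALTER INDEX command – a sketch, assuming a hypothetical index named Department_PK on the moved table:

```sql
ALTER INDEX Department_PK REBUILD;
```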
 
Rename a Column:  Sometimes a DBA will need to rename an existing column in
order to correct some earlier error in following an organization's design guidelines.  For
example, a column named Dept_Spvsr might need to be renamed to a more
meaningful column name, Dept_Supervisor.
 
ALTER TABLE department
    RENAME COLUMN Dept_Spvsr TO Dept_Supervisor;
 
The new name must not duplicate the name of an existing column in the
table.  When a column is renamed, the Oracle Server updates the associated data
dictionary tables so that function-based indexes and CHECK constraints remain valid
for the column. 
 
Drop a Column:  You can drop columns from table rows.  This is done to rid the
database of unused columns without having to export/import data and recreate indexes
and constraints.
 
ALTER TABLE department
    DROP COLUMN dept_resources
    CASCADE CONSTRAINTS CHECKPOINT 500;
 
Dropping a column can be time-consuming as all data in the column is deleted from the
table.  As such, it is often useful to specify checkpoints to minimize the size of the undo
space that is generated by the DROP procedure as was done with the CHECKPOINT
clause in the ALTER TABLE command given above.   
 
The UNUSED Option:  A table can be marked as unused and then later dropped when
a database's activity lessens.  This is also useful if you want to drop two or three
columns.  If you drop two columns, the table is updated twice, once for each column.  If
you mark the two columns as UNUSED, then drop the columns, the rows are updated
only one time.
 
-- Mark column unused
ALTER TABLE department
    SET UNUSED COLUMN dept_resources
    CASCADE CONSTRAINTS;
 
-- Drop unused columns
ALTER TABLE department
    DROP UNUSED COLUMNS CHECKPOINT 500;
 
 

Truncate a Table
 
When a DBA truncates a table, this action deletes all rows in a table and releases any
allocated disk space.  This also truncates indexes for the table. 
 
TRUNCATE TABLE department; 
 
Other effects of truncation include: 
        No undo data is generated – the TRUNCATE command commits implicitly
because TRUNCATE TABLE is a DDL command, not DML.
        A table referenced by a foreign key from another table cannot be truncated.
        Any triggers for deletion do not fire when a table is truncated.
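For example, attempting to truncate the TestOrders parent table from earlier in these notes while the TestOrderDetails foreign key is enabled would fail:

```sql
TRUNCATE TABLE TestOrders;
-- ORA-02266: unique/primary keys in table referenced by enabled foreign keys
```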
 
 

Drop a Table
 
Tables are dropped when they are not needed any longer or when they are being
reorganized.  In the latter case, the contents of the table are typically exported, the
table is dropped, then recreated, and the contents are imported to the new table
structure.
 
DROP TABLE department
    CASCADE CONSTRAINTS;
 
The CASCADE CONSTRAINTS option is necessary if this is a parent table in a foreign
key relationship. 
 
Use the ALTER TABLE. . .DROP COLUMN command to drop a column(s) that is no
longer needed. 
 

Flashback Drop and the Recycle Bin


 

A table that is dropped does not have its space immediately recovered.  Instead the
table is placed in a recycle bin and the FLASHBACK TABLE command can restore the
table.
 
The recycle bin is a data dictionary table with information on dropped objects (tables,
indexes, constraints, etc). 
 
Each user essentially has their own recycle bin because the only objects the user can
access in the recycle bin are those the user owns (except a user with SYSDBA
privilege can access any object).
 
Objects are named using the convention:
 
BIN$unique_id$version
 
        Unique_ID is a 26-character globally unique identifier.
        Version is the version number assigned by the database.
 
To view the recycle bin use a SELECT statement.
 
COLUMN Original_Name FORMAT A14;
SELECT Object_Name, Original_Name, DropTime
FROM RECYCLEBIN;
 
 
OBJECT_NAME                    ORIGINAL_NAME  DROPTIME
------------------------------ -------------- ------------------
BIN$whZwQFKyp0LgQKOSZvwM3g==$0 TEST_PARALLEL  2012-06-09:22:14:05
BIN$whZwQFKwp0LgQKOSZvwM3g==$0 STUDENT        2012-06-09:22:00:40
BIN$qHjqiFu2x4zgQKOSZvxkIw==$0 INVOICE        2011-07-19:22:34:56
...more rows display
 
12 rows selected.

 
        Dropping a tablespace does not result in the objects going in the recycle bin. 
        Dropping a user – objects belonging to the user are not placed in the recycle bin,
and any of the user's objects already in the bin are purged.
        Dropping a cluster - member tables are not placed in the recycle bin.
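Disk space for a dropped object is only reclaimed when the object is purged from the recycle bin.  The PURGE command removes dropped objects permanently:

```sql
PURGE TABLE Test;     -- purge a single dropped table
PURGE RECYCLEBIN;     -- purge all of the current user's dropped objects
```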
 
To disable the recycle bin, use one of these statements.
 
ALTER SESSION SET Recyclebin = OFF;
 
ALTER SYSTEM SET Recyclebin = OFF SCOPE = SPFILE;
 
To enable the recycle bin, use one of these statements.
 
ALTER SESSION SET Recyclebin = ON;
 
ALTER SYSTEM SET Recyclebin = ON SCOPE = SPFILE;
 
In both cases above, using ALTER SYSTEM requires a database restart.
 
To restore a table use the FLASHBACK TABLE. . .TO BEFORE
DROP command.  This sequence demonstrates using the command.
 
CREATE TABLE Test (
    Column1  VARCHAR2(15));
INSERT INTO Test VALUES ('ABC');
INSERT INTO TEST VALUES ('XYZ');

COMMIT;
 
Commit complete.
 
SELECT * FROM Test;
 
COLUMN1
---------------
ABC
XYZ
 
DROP TABLE Test;
Table dropped.
 
SELECT * FROM Test;
 
ERROR at line 1:
ORA-00942: table or view does not exist
 
FLASHBACK TABLE Test TO BEFORE DROP RENAME TO Test2;
Flashback complete.
 
SELECT * FROM Test2;
 
COLUMN1

---------------
ABC
XYZ
 
 
 

Place a Table in Read-Only Mode


 
A table can be placed in read-only mode.  An example is a table that contains very
static data that you do not want to be modified.
 
ALTER TABLE States READ ONLY;
 
To return the table to read/write mode:
 
ALTER TABLE States READ WRITE;
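The current mode can be checked in the User_Tables view, which in 11g includes a READ_ONLY column (values YES/NO):

```sql
SELECT Table_Name, Read_Only
    FROM User_Tables
    WHERE Table_Name = 'STATES';
```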
 
 

Data Integrity
 
The DBA is always concerned with maintaining Data Integrity.  This simply means that
the data stored in the database meets the requirements of established business
rules for the organization.  Data integrity can be maintained by:
        Application programs – the code written can have integrity rules built-in that
keep invalid data from being added to a database.
        Database triggers – these are PL/SQL programs that execute whenever an
event occurs, such as the insertion of a new table row or the modification of a
value stored in a column.  Triggers are usually used to enforce complex
business rules that are not easily enforced through declared integrity constraints.
        Integrity constraints – you've already seen examples of integrity constraints in
the CREATE TABLE commands shown earlier in these notes.  This is the
preferred way to enforce data integrity.
 
Constraint State:  An integrity constraint can be enabled or disabled.  An enabled
constraint enforces data integrity.  If data does not conform to the constraint, then new
rows are not inserted and modified rows are rolled back.
 
An Integrity constraint can have the following states:
        DISABLE NOVALIDATE:  Such a constraint is not checked so new data can be
entered that does not conform to the constraint's rules.
        DISABLE VALIDATE:  Modification of constrained columns is not allowed.  The
index on the constraint is dropped and the constraint is disabled.
        ENABLE NOVALIDATE:  New data violating the constraint cannot be entered,
but the table can contain invalid data that has already been entered – useful for
uploading data warehouses from valid OLTP data.
        ENABLE VALIDATE:  This is the enabled state for a constraint where data must
conform to the constraint.  A table moved to this state is locked and all data in the
table is checked for validity. 
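The current state of a table's constraints can be checked through the User_Constraints view – the STATUS column shows ENABLED/DISABLED and the VALIDATED column shows VALIDATED/NOT VALIDATED:

```sql
SELECT Constraint_Name, Status, Validated
    FROM User_Constraints
    WHERE Table_Name = 'DEPARTMENT';
```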
 
A DBA may temporarily disable integrity constraints for a table in order to improve the
performance of these operations, especially for large data warehouse configurations:
        Load large amounts of data.
        Perform batch operations that change a majority of the rows in a table, for
example, updating each employee's salary as a result of a 3% raise.
        Import or export one table at a time.
 
Deferring Constraints:  Checking constraints can be deferred until the end of a
transaction – this is used most often for data loads of rows belonging to both parent
and child tables such as would be the case
for SALES_ORDER and ORDER_DETAILS tables. 
 
Non-deferred constraints are enforced at the end of each DML statement.  Violations
cause rows to be rolled back.
 
Deferred constraints are checked when a transaction commits.  Any violations detected
when the transaction commits cause the entire transaction to roll back.
 
Example commands:
 
CREATE TABLE department (
    Dept_Number  NUMBER(5) PRIMARY KEY DISABLE, . . . ;
 
ALTER TABLE department
    ADD PRIMARY KEY (Dept_Number) DISABLE;
 
ALTER TABLE department
    ENABLE NOVALIDATE CONSTRAINT PK_Dept_Number;
 
ALTER TABLE employee
    ENABLE VALIDATE CONSTRAINT FK_Dept_Employee_No;
 
ALTER SESSION
    SET CONSTRAINTS DEFERRED;
 
ALTER SESSION
    SET CONSTRAINTS IMMEDIATE;
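Note that ALTER SESSION SET CONSTRAINTS DEFERRED only affects constraints declared as deferrable – a non-deferrable constraint (the default) cannot be deferred.  A sketch, using hypothetical SALES_ORDER and ORDER_DETAILS tables:

```sql
ALTER TABLE order_details
    ADD CONSTRAINT Order_Details_FK
    FOREIGN KEY (Order_Id) REFERENCES sales_order
    DEFERRABLE INITIALLY IMMEDIATE;
```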
 
 

Additional Data Dictionary Information


 
The views used most often by a DBA to query table and constraint information are:
        DBA_TABLES
        DBA_OBJECTS
        DBA_CONSTRAINTS
        DBA_CONS_COLUMNS

 
COLUMN table_name FORMAT A17;
COLUMN tablespace_name FORMAT A15;
SELECT table_name, tablespace_name, pct_free, pct_used
FROM dba_tables
WHERE owner = 'DBOCK'
ORDER BY table_name;
 
TABLE_NAME        TABLESPACE_NAME   PCT_FREE   PCT_USED
----------------- --------------- ---------- ----------
BED               Data01                  10
BEDCLASSIFICATION Data01                  10
DEPARTMENT        Data01                  10
DEPENDENT         Data01                  10
EMPLOYEE          Data01                  10
<more tables are listed>

  
COLUMN object_name FORMAT A15;
COLUMN object_type FORMAT A15;
SELECT object_name, object_type, created, last_ddl_time
FROM dba_objects
WHERE owner = 'DBOCK'
ORDER BY object_name;
 
OBJECT_NAME     OBJECT_TYPE     CREATED   LAST_DDL_
--------------- --------------- --------- ---------
<more objects are listed>
SPECIALTY       TABLE           14-JUN-09 14-JUN-09
TREATMENT       TABLE           15-JUN-09 15-JUN-09
UN_EMPLOYEEPARK INDEX           14-JUN-09 14-JUN-09
INGSPACE
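The DBA_CONSTRAINTS view, listed above but not demonstrated, can be queried in the same fashion – the CONSTRAINT_TYPE column uses codes such as P (primary key), R (referential), C (check), and U (unique):

```sql
COLUMN constraint_name FORMAT A25;
SELECT constraint_name, constraint_type, table_name, status
FROM dba_constraints
WHERE owner = 'DBOCK'
ORDER BY table_name;
```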

Module 12 – Managing Indexes
Objectives
        Learn basic concepts about indexes including types of indexes.
        Create B-Tree, Bitmap, Hash Cluster, Global/Local, Reverse Key, and Function-
based indexes.
        Reorganize indexes.
        Drop indexes.
        Obtain data dictionary information about indexes. 

Introduction
 
Indexes are totally optional structures that are intended to speed up the execution of
SQL statements against table data and cluster data. 
 
From your earlier studies, you should realize that indexes are used for direct
access to a particular row or set of rows in a table.  From your study of database
management systems, you learned that indexes are most typically organized as some
type of tree structure.  You may not have studied Bitmap Indexes – this type of
index is also covered in this course.
 
Oracle Database provides several indexing schemes that provide complementary
performance functionality. These are:
        B-tree indexes: the default and the most common
        B-tree cluster indexes: defined specifically for a cluster
        Hash cluster indexes: defined specifically for a hash cluster
        Global and local indexes: relate to partitioned tables and indexes
        Reverse key indexes: most useful for Oracle Real Application Clusters
applications
        Bitmap indexes: compact; work best for columns with a small set of values
        Function-based indexes: contain the precomputed value of a function/expression
        Domain indexes: specific to an application or cartridge.
 
 
Index Concepts and Facts
 
An index can be composed of a single column for a table, or it may be comprised
of more than one column for a table.  An index based on more than one column is
termed a concatenated (or composite) index.
 
Examples:
        The SSN column serves as an index key that tracks individual students at a
university.
        The concatenated primary key index for an ENROLL table
(SSN + SectionID + Term + Year) is used to track the enrollment of a student in
a particular course section.
        The maximum number of columns for a concatenated index is 32, but
the combined size of the columns cannot exceed about one-half of a data
block size.
 
        A unique index allows no two rows to have the same index entry.  An example
would be an index on student SSN. 
        A non-unique index allows more than one row to have the same index entry
(this is also called a secondary key index).  An example would be an index on
U.S. Mail zip codes.
        A function-based index is created when using functions or expressions that
involve one or more columns in the table that is being indexed.  A function-based
index pre-computes the value of the function or expression and stores it in the
index.  Function-based indexes can be created as either a B-tree or a bitmap
index.
        A partitioned index allows an index to be spread across several tablespaces -
the index would have more than one segment and typically access a table that is
also partitioned to improve scalability of a system.  This type of index decreases
contention for index lookup and increases manageability. 
 
To create an index in your own schema:
        The table to be indexed must be in your schema, OR
        You have the INDEX privilege on the table to be indexed, OR
        You have the CREATE ANY INDEX system privilege.
 
To create an index in another schema:
        You have the CREATE ANY INDEX system privilege, AND
        The owner of the other schema has a quota for the tablespace that will store the
index segment (or UNLIMITED TABLESPACE privilege).
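For example, these hedged commands (assuming a hypothetical user USER350 indexing a table owned by SCOTT, with the index stored in tablespace Index01) would satisfy the requirements above:

```sql
GRANT CREATE ANY INDEX TO user350;
ALTER USER scott QUOTA UNLIMITED ON index01;
```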
 
 
Loading Data
 
Data initially loaded into a table will load more efficiently if the index is created after the
table is created.  This is because the index must be updated after each row insertion if
the index is created before loading data.

 
Creating an index on an existing table requires sort space – typically memory values
that are paged in and out of segments in the TEMP tablespace allocated to a
user.  Users are also allocated memory for index creation based on
the SORT_AREA_SIZE parameter – if memory is insufficient, then swapping takes
place.
 
Guidelines for Creating an Index
Use the following guidelines for determining when to create an index:
        Create an index if you frequently want to retrieve less than 15% of the rows in a
large table.
o   The percentage varies greatly according to the relative speed of a table
scan and how clustered the row data is in relation to the index key.
o   The faster the table scan, the lower the percentage; the more clustered the
row data, the higher the percentage.
        To improve performance on joins of multiple tables, index columns used for
joins.
 
Index columns with these characteristics:
 Take into consideration the typical queries that will access the table. 
A concatenated index is only used by Oracle in retrieving data when
the leading column of the index is used in a query's WHERE clause.
o   The order of the columns in the CREATE INDEX statement affects query
performance – specify the most frequently used columns first.
o   An index that specifies column1+column2+column3 can improve
performance for queries with WHERE clauses on column1 or
on column1+column2, but will not be used for queries with a WHERE
clause that just uses column2 or column3.
        Values are relatively unique in the column (but not number columns that store
currency values).
        There is a wide range of values (good for regular B-tree indexes).
        There is a small range of values (good for bitmap indexes).
        The column is a virtual column (can create both unique and non-unique
indexes)
        The column contains many nulls, but queries often select all rows having a
value. In this case, use the following phrase:
 
       WHERE COL_X > <some numeric value goes here>
 
Using the preceding phrase is preferable to:
 

WHERE COL_X IS NOT NULL
 
This is because the first uses an index on COL_X (assuming that COL_X is a
numeric column).
 
Small tables don't need indexes – but it is a good idea to still index the Primary Key.
 
Do not index columns where:
        There are many nulls in the column and you do not search on the not null values.
        LONG and LONG RAW columns – these cannot be indexed.
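A sketch illustrating the leading-column guideline above, assuming a hypothetical Employee table:

```sql
-- Usable for WHERE clauses on Last_Name, or on Last_Name + First_Name,
-- but not for a WHERE clause on First_Name alone.
CREATE INDEX Employee_Name_Idx
    ON Employee (Last_Name, First_Name);
```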
 
 
 
B-Tree Index
 
B-Tree Structure
 
This figure illustrates in a very simple way the structure of a B-Tree index.  A B-tree
index usually stores a list of primary key values and a list of associated ROWID values
that point to the row location of the record with a given primary key value.  In this figure
the ROWID values are represented by the pointers at the leaf level.
 

 
 
The top level of the index is called the Root.  The Root points to entries at lower
levels of the index - these lower levels are termed Branch Blocks.
 
A node in the index may store multiple (more than one) key values - in fact, almost
all B-Trees have nodes with multiple values - the tree structure grows upward and
nodes split when they reach a specified size.
 
At the Leaf level the index entries point to rows in the table.  At the Leaf level the index
entries are linked in a bi-directional chain that allows scanning rows in both
ascending and descending order of key values - this supports sequential
processing as well as direct access processing.
 
In a non-partitioned table, key values are repeated if multiple rows have the same
key value – this would be a non-unique index (unless the index is
compressed).  Index entries are not made for rows in which all of the key columns
are NULL.
 
Leaf Index Format:  The index entry at the Leaf level is made up of three
components.
        Entry Header - stores number of columns in the index and locking information
about the row to which the index points.
        Key Column Length-Value Pairs - defines the size of the column in the key
followed by the actual column value.   These pairs are repeated for each column
in a composite index.
        ROWID - this is the ROWID of a row in a table that contains the key
value associated with this index entry.
 
Data Manipulation Language Effects:  Any DML on a table also causes the Oracle
Server to maintain the associated indexes.
        When a row is inserted into a table, the index must also be updated.  This
requires the physical insertion of an index entry into the index tree structure.
        When a row is deleted from a table, the index only has the entry "logically"
deleted (turn a delete bit from off to on).  The space for the deleted row is not
available for new entries until all rows in the block are deleted.
        When a row key column is updated for a table, the index has both a logical
deletion and a physical insertion into the index.
 PCTFREE has no effect on an index except when the index is created.  New
entries to an index may be added to an index block even if the free space in the
block is less than the PCTFREE setting.
1.   If an indexed table has lots of rows to be inserted, set PCTFREE high to
accommodate new index values.
2.   If the table is static, set PCTFREE low.
3.   PCTUSED cannot be specified for indexes.
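The logical deletions described above can be observed by validating an index's structure, which populates the session-level INDEX_STATS view (LF_ROWS counts leaf entries, DEL_LF_ROWS those logically deleted).  A sketch, assuming a hypothetical index named Orders_Index:

```sql
ANALYZE INDEX Orders_Index VALIDATE STRUCTURE;
SELECT name, lf_rows, del_lf_rows
    FROM index_stats;
```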
 
Creating B-Tree Indexes
 
An example CREATE INDEX command for a normal B-tree index is given here.
 
CREATE [UNIQUE] INDEX USER350.Orders_Index 
    ON USER350.Orders(OrderId) 
    PCTFREE 20 
    INITRANS 6 MAXTRANS 10 
    LOGGING 
    TABLESPACE Index01;
 
        The UNIQUE clause specifies unique entries -
the default is NONUNIQUE.  Note that the owner's schema (USER350) is
specified – this is optional.
         The PCTFREE parameter is only effective when the index is created - after that,
new index block entries are made and PCTFREE is ignored.  PCTFREE is
ignored because entries are not updated - instead a logical delete and physical
insert of a new index entry is made. 
        PCTUSED cannot be specified for an index because updates are not made to
index entries. 
 
Use a low PCTFREE when the indexed column is system generated as would be the
case with a sequence (sequence indexes tend to increase in an ascending fashion)
because new entries tend to be made to new data blocks - there are no or few
insertions into data blocks that already contain index entries.
 
Use a high PCTFREE when the indexed column or set of columns can take
on random values that are not predictable.  Such is the case when a
newOrderline row is inserted - the ProductID column may be a non-unique foreign
key index and the product to be sold on an Orderline is not predictable for any given
order.
 
The default and minimum for INITRANS is 2.  The limit on MAXTRANS is 255 – a
value that large would rarely, if ever, be needed.
 
By default, LOGGING is on so that the index creation is logged into the redo log file. 
Specifying NOLOGGING would increase the speed of index creation initially, but would
not enable recovery at the time the index is created.
 
Interestingly, Oracle will use existing indexes to create new indexes whenever the key
for the new index corresponds to the leading part of the key of an existing index.
 
Indexes and Constraints
 
The UNIQUE and PRIMARY KEY constraints on tables are enforced by creating
indexes on the unique key or primary key – creation of the index is automatic when
such a constraint is enabled.
 
You can specify a USING INDEX clause to control the creation process.  The index
created takes the name of the constraint unless otherwise specified.
 
Example creating a PRIMARY KEY constraint while creating a table.
 
CREATE TABLE District (
    District_Number     NUMBER(5)
        CONSTRAINT District_PK PRIMARY KEY,
    District_Name       VARCHAR2(50)
    )
  ENABLE PRIMARY KEY USING INDEX
  TABLESPACE Index01
  PCTFREE 0;
 
 
Key-Compressed Index
 
This is a B-tree with compression – compression eliminates duplicate occurrences of a
key in an index leaf block.
 
CREATE INDEX Emp_Name ON Emp (Last_Name, First_Name)
    TABLESPACE Index01
    COMPRESS 1;
 
This approach breaks an index key into a prefix and suffix entry in the index
block.  Compression shares the prefix entry among all suffix entries, saving
considerable space and allowing more keys to be stored in each block.
 
Use key compression when:
        The index is non-unique where the ROWID column is appended to the key to
make the index key unique.
        The index is a non-unique multicolumn index –
example:  Zip_Code + Last_Name.
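After creating a compressed index like Emp_Name above, the compression setting and prefix length can be confirmed from the data dictionary – a sketch (the index name assumes the example above):

```sql
-- COMPRESSION shows ENABLED/DISABLED; PREFIX_LENGTH is the number
-- of leading key columns stored once per leaf block.
SELECT Index_Name, Compression, Prefix_Length
    FROM USER_INDEXES
    WHERE Index_Name = 'EMP_NAME';
```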
 
Creating a Large Index
When creating an extremely large index, consider allocating a larger temporary
tablespace for the index creation using the following procedure:
 
1.   Create a new temporary tablespace using the CREATE
TABLESPACE or CREATE TEMPORARY TABLESPACE statement.
2.   Use the TEMPORARY TABLESPACE option of the ALTER USER statement to
make this your new temporary tablespace.
3.   Create the index using the CREATE INDEX statement.
4.   Drop this tablespace using the DROP TABLESPACE statement. Then use
the ALTER USER statement to reset your temporary tablespace to your original
temporary tablespace.
 
This procedure avoids the problem of expanding the usually shared, temporary
tablespace to an unreasonably large size that might affect future performance.
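The four-step procedure above can be sketched as follows – the file name, sizes, index name, and tablespace names are hypothetical:

```sql
-- Step 1: create a dedicated temporary tablespace for the build.
CREATE TEMPORARY TABLESPACE Temp_IdxBuild
    TEMPFILE '/u01/oradata/temp_idxbuild01.dbf' SIZE 2000M;

-- Step 2: make it the building user's temporary tablespace.
ALTER USER USER350 TEMPORARY TABLESPACE Temp_IdxBuild;

-- Step 3: create the large index.
CREATE INDEX USER350.Big_Orders_Idx ON USER350.Orders (CustomerId)
    TABLESPACE Index01;

-- Step 4: reset the original temporary tablespace, then drop the
-- build tablespace.
ALTER USER USER350 TEMPORARY TABLESPACE Temp;
DROP TABLESPACE Temp_IdxBuild INCLUDING CONTENTS AND DATAFILES;
```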
 
 
 
Bitmap Index
 
Bitmap Index Structure
 
Bitmap indexes are alternatives to B-tree indexes and are only used to create
secondary key indexes.  They are used in certain situations:
        If a table has millions of rows and there are very few distinct values for index
entries, for example, indexes on zip codes could have many, many rows for a
single zip code value, then a bitmap index may perform well. 
        A bitmap index may best support WHERE conditions involving the OR operator.
        A bitmap index may work best for low update activity tables or read-only key
columns.
 
A bitmap index is also organized as a B-tree structure; however, the leaf nodes in a
bitmap index store a bitmap for each key value – they do NOT store
ROWIDs.  Instead, each bit corresponds to a possible ROWID.  If the bit is set on, the
row with the corresponding ROWID contains the key value.  This figure illustrates this
concept.  In the figure, the key consists of only one column, and the first entry has a
logical key value of Blue, the second is Green, the third Red, and the fourth Yellow.
 
[Figure: bitmap index leaf entries – one bitmap per key value]
 
 
Comparisons and Limitations:  Updates to key columns for bitmap indexes require
locking a significant portion of the index in order to perform the update. This table
compares B-tree and bitmap indexes.
 
B-Tree                                     Bitmap
Suitable for high-cardinality columns      Suitable for low-cardinality columns
Updates on keys relatively inexpensive     Updates to key columns very expensive
Inefficient for queries using the OR       Efficient for queries using the OR
logical operator                           logical operator
Used mostly for On-Line Transaction        Very useful for Data Warehousing
Processing (OLTP)
 
 
Creating Bitmap Indexes
 
Example CREATE BITMAP INDEX command:
 
CREATE BITMAP INDEX USER350.Products_Region_Idx 
    ON USER350.Products_Region (RegionId ASC) 
    PCTFREE 40 
    INITRANS 6 MAXTRANS 10 
    LOGGING 
    TABLESPACE Index01;
 
A bitmap index cannot be specified as unique since bitmap indexes are only used
for secondary key indexes.
 
You can specify an init.ora parameter CREATE_BITMAP_AREA_SIZE to specify the
amount of memory allocated for storing bitmap segments. 
        The default memory allocated is 8MB. 
        If the memory allocated for bitmap index segment creation is larger, then indexes
are created faster. 
        If there are only a few unique values for the index key field, then you can allocate
much less memory to the creation of bitmap index segments, perhaps only a few
kilobytes.
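CREATE_BITMAP_AREA_SIZE is a static parameter, so changing it requires writing to the server parameter file and restarting the instance – a sketch (the 16 MB value is illustrative):

```sql
-- Takes effect at the next instance startup.
ALTER SYSTEM SET CREATE_BITMAP_AREA_SIZE = 16777216 SCOPE = SPFILE;
```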
 
 
Other General Topics
 
This section covers altering, rebuilding, coalescing, and dropping indexes.  It also
covers validating indexes and identifying unused indexes.  Guidelines are provided for
when to index columns.
 
Altering Index Storage Parameters
 
The ALTER INDEX command is used primarily to alter storage parameters.  Other
uses include:
 
         Rebuild or coalesce an existing index
        Deallocate unused space or allocate a new extent
        Specify parallel execution (or not) and alter the degree of parallelism
        Alter storage parameters or physical attributes
        Specify LOGGING or NOLOGGING
        Enable or disable key compression
        Mark the index unusable
        Make the index invisible
        Rename the index
        Start or stop the monitoring of index usage
You cannot alter index column structure.
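Sketches of some of the other ALTER INDEX uses listed above – each statement is independent, and the index name follows the earlier examples:

```sql
ALTER INDEX USER350.Orders_Index UNUSABLE;              -- mark the index unusable
ALTER INDEX USER350.Orders_Index INVISIBLE;             -- hide it from the optimizer
ALTER INDEX USER350.Orders_Index MONITORING USAGE;      -- start usage monitoring
ALTER INDEX USER350.Orders_Index RENAME TO Orders_Idx;  -- rename the index
```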
Use of the ALTER INDEX command is straightforward, as illustrated in this example.
 
ALTER INDEX USER350.Products_Region_Idx 
    STORAGE (NEXT 200K MAXEXTENTS 200);
 
One of the most common changes is to increase MAXEXTENTS for an index as a
system grows in size.
 
This example enforces the primary key constraint that might previously have been
disabled.
 
ALTER TABLE Student
     ENABLE PRIMARY KEY USING INDEX;
 
 
Allocating and De-allocating Index Space:  If a DBA is going to insert a lot of rows
into a table, performance can be improved by adding extents to the indexes associated
with the table prior to doing the inserts.  This improves performance by avoiding the
dynamic addition of index extents.  This can be accomplished by using the ALTER
INDEX command with the ALLOCATE EXTENT option.
 
ALTER INDEX USER350.Products_Region_Idx 
    ALLOCATE EXTENT (SIZE 400K 
    DATAFILE 
      '/u01/student/dbockstd/oradata/USER350index01.dbf');
 
You can DEALLOCATE unused index space after the insertions are complete.  This
will free up space that is not in use within the tablespace.
 
ALTER INDEX USER350.Products_Region_Idx 
    DEALLOCATE UNUSED;
 
Creating an Index Online
 
Creating an Online Index allows Data Manipulation Language (DML) operations
against the table while the index build is being completed.  DDL operations on the table
are not allowed during the index creation process.
 
CREATE INDEX Manager_Employee_Idx ON Employee
    (Manager_ID, Emp_SSN) ONLINE;
 
There are some restrictions:
        Temporary tables cannot be created/rebuilt online.
        Partitioned indexes must be created/rebuilt one partition at a time.
        You cannot de-allocate unused space during an online rebuild.
        You cannot change the PCTFREE parameter for the whole index.
 
 
Rebuilding Indexes
 
You may improve system performance by rebuilding an index that is highly fragmented
due to lots of physical insertions and logical deletions.  Such may be the case for
a SALES_ORDER table where filled orders are deleted over time.
 
ALTER INDEX USER350.Products_Region_Idx 
    REBUILD 
    TABLESPACE Index01;
 
        If you rebuild an index from an existing index, the rebuild operation is more
efficient because the index information does not need to be sorted - it is already
sorted. 
        You can specify a new tablespace if necessary for the rebuilt index. 
        The older index is deleted after the new index is rebuilt.  You must have
sufficient space in the tablespace for the old/new index during the rebuild
operation. 
        Queries use the old index while the rebuild operation is under way.
        You can also specify the ONLINE option when rebuilding an index – this allows
DML updates to process even while the index is being rebuilt.
 
ALTER INDEX USER350.Products_Region_Idx 
    REBUILD ONLINE;
 
Function-Based Indexes
 
These indexes support queries that use a value returned by a function or expression in
a WHERE clause.
        The index computes and stores the value from the function or expression in the
index.
        If user-based functions are used, the function must be
marked DETERMINISTIC (nondeterministic functions won't work).
        If the function is owned by another user, you must have the EXECUTE privilege
on the function object.
        The table must be analyzed after the index is created.
        The query must be guaranteed to not need NULL values from the indexed
expression because NULL values are not stored  in indexes.
 
Example --- this defines a function-based index (Last_Name_Caps_Idx) based on
the UPPER function – this facilitates use of the UPPER function when specifying
a WHERE clause condition and ensures use of the index to return values efficiently.
 
CREATE TABLE Faculty (
    Faculty_ID NUMBER(5)
       CONSTRAINT Employee_PK PRIMARY KEY,
    Last_Name  VARCHAR2(20),
    First_Name VARCHAR2(20)
    )
  TABLESPACE Data01;
INSERT INTO Faculty VALUES (1, 'Bock', 'Douglas');
INSERT INTO Faculty VALUES (2, 'Bordoloi', 'Bijoy');
INSERT INTO Faculty VALUES (3, 'Sumner', 'Mary');
INSERT INTO Faculty Values (4, 'BOCK', 'Ronald');
 
Now create the (non-unique) index, then select from the table.
 
CREATE INDEX Last_Name_Caps_Idx ON Faculty (UPPER(Last_Name))
TABLESPACE Index01;
 
Note that the Oracle optimizer considers using the index because of the use of
the UPPER function in the WHERE clause.
 
SELECT Faculty_ID, Last_Name, First_Name
FROM Faculty
WHERE UPPER(Last_Name) LIKE 'BOCK';
 
FACULTY_ID LAST_NAME            FIRST_NAME
---------- -------------------- --------------------
         1 Bock                 Douglas
         4 BOCK                 Ronald
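As noted above, the table should be analyzed after a function-based index is created so that the optimizer has statistics. One way is the DBMS_STATS package – a sketch, run as the table owner:

```sql
-- Gather table statistics; CASCADE => TRUE also gathers statistics
-- for the table's indexes, including Last_Name_Caps_Idx.
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(USER, 'FACULTY', CASCADE => TRUE);
```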
 
 
 
Coalescing Indexes
 
This is an alternative to the ALTER INDEX…REBUILD command – it eliminates index
fragmentation.  Coalesce on an index is a block rebuild that is done online. 
 
In situations where a B-tree index has leaf blocks that can be freed up for reuse, you
can merge leaf blocks using the ALTER INDEX…COALESCE statement.
 
ALTER INDEX USER350.Products_Region_Idx COALESCE; 
In this figure, the first two leaf node blocks are about 50% full – fragmentation is
evident.  After coalescing, blocks are merged freeing up a block for reuse.
 
[Figure: index leaf blocks before and after coalescing]
 
When to Rebuild or Coalesce
 
Rebuild Index                             Coalesce Index
Can quickly move an index to another      Cannot move the index to another
tablespace.                               tablespace.
Requires more disk space.                 Does not require more disk space.
Creates a new index tree – the height     Only coalesces leaf blocks.
of the tree may shrink.
Enables changing storage and              Frees up index leaf blocks for reuse.
tablespace parameters.
 
 
 
Checking Index Validity
 
The ANALYZE INDEX command with the VALIDATE STRUCTURE option can be
used to check for block corruption in an index.
 
ANALYZE INDEX dbock.pk_course VALIDATE STRUCTURE;
 
You can query the INDEX_STATS view and obtain information about the index.
 
SQL> ANALYZE INDEX Course_PK VALIDATE STRUCTURE;
 
Index analyzed.
 
SELECT blocks, pct_used, distinct_keys, lf_rows, del_lf_rows
FROM index_stats;
 
    BLOCKS   PCT_USED DISTINCT_KEYS    LF_ROWS DEL_LF_ROWS
---------- ---------- ------------- ---------- -----------
        15          1             2          2           0
 
An index needs to be reorganized when the proportion of deleted rows
(DEL_LF_ROWS) to existing rows (LF_ROWS – short for Leaf) is greater than 30%. 
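A query along these lines computes that proportion directly after the ANALYZE … VALIDATE STRUCTURE (GREATEST guards against division by zero on an empty index):

```sql
SELECT lf_rows, del_lf_rows,
       ROUND(del_lf_rows / GREATEST(lf_rows, 1) * 100, 1) AS pct_deleted
    FROM INDEX_STATS;
```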
 
 
Dropping an Index
 
DBAs drop indexes if they are no longer needed by an application.  Here are reasons
for dropping an index:
         The index is no longer required.
        The index is not providing anticipated performance improvements for queries
issued against the associated table. For example, the table might be very small,
or there might be many rows in the table but very few index entries.
        Applications do not use the index to query the data.
        The index has become invalid and must be dropped before being rebuilt.
        The index has become too fragmented and must be dropped before being
rebuilt.
        A large data load is to be processed – drop indexes prior to large data loads -
this improves load performance and tends to use index space more efficiently
(re-create the indexes after the load completes).
        You also drop indexes that are marked by the system as INVALID because of
some type of instance failure, or if the index is corrupt.
 
When you drop an index, all extents of the index segment are returned to the
containing tablespace and become available for other objects in the tablespace.
 
Example: 
DROP INDEX index_name;
 
You cannot drop an index used to implement an integrity constraint. 
         You would first need to remove the integrity constraint by altering the table, then
drop the index.
        You cannot drop only the index associated with an enabled UNIQUE key or PRIMARY
KEY constraint.
        To drop a constraint's associated index, you must disable or drop the constraint
itself.
 
 
Identifying Unused Indexes
 
Statistics about the usage of an index can be gathered and displayed
in V$OBJECT_USAGE.
 
        Start usage monitoring of an index:
ALTER INDEX USER350.dept_id_idx
MONITORING USAGE;
 
        Stop usage monitoring of an index:
ALTER INDEX USER350.dept_id_idx
NOMONITORING USAGE;
 
        If the information gathered indicates that an index is never used, the index can
be dropped.
        In addition, eliminating unused indexes cuts down on the overhead the Oracle
server incurs for DML operations, thus improving performance. 
 
Each time the MONITORING USAGE clause is specified, V$OBJECT_USAGE will be
reset for the specified index.  The previous information is cleared or reset, and a new
start time is recorded.
 
V$OBJECT_USAGE Columns
        INDEX_NAME: The index name
        TABLE_NAME: The corresponding table
        MONITORING: Indicates whether monitoring is ON or OFF
        USED: Indicates YES or NO whether index has been used during the monitoring
time
        START_MONITORING: Time monitoring began on index
        END_MONITORING: Time monitoring stopped on index
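After monitoring has run for a representative period, a query such as this shows whether the index was used – the index name follows the example above (V$OBJECT_USAGE only reports indexes in the querying user's schema):

```sql
SELECT Index_Name, Table_Name, Monitoring, Used,
       Start_Monitoring, End_Monitoring
    FROM V$OBJECT_USAGE
    WHERE Index_Name = 'DEPT_ID_IDX';
```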
 
 
Guidelines for Creating Indexes
 
A table can have any number of indexes, but the more indexes, the more overhead
that is associated with DML operations:
        A single row insertion or deletion requires updating all indexes on the table. 
        An update operation may or may not affect indexes depending on the column
updated.
 
The use of indexes involves a tradeoff - query performance tends to speed up,
but data manipulation language operations tend to slow down.
        Query performance improves because the index speeds up row retrieval. 
        DML operations slow down because along with row insertions, deletions, and
updates, the indexes associated with a table must also have insertions and
deletions completed.
        For very volatile tables, minimize the number of indexes used.
 
When possible, store indexes to a tablespace that does not have rollback segments,
temporary segments, and user/data tables – DBAs usually create a separate
tablespace just used for index segments.
 
You can minimize fragmentation by using extent sizes that are at least multiples of 5
times the DB_BLOCK_SIZE for the database.
 
Create an index when row retrieval involves less than 15% of a large table's rows –
retrieval of more rows is generally more efficient with a full table scan.
 
You may want to use NOLOGGING when creating large indexes - you can improve
performance by avoiding redo generation.
        Indexes created with NOLOGGING require a backup because their creation is not
archived.
        Example:
 
CREATE BITMAP INDEX USER350.Products_Region_Idx 
    ON USER350.Products_Region (RegionId ASC) 
    NOLOGGING;
Usually index entries are smaller than the rows they index.  Data blocks that store
index entries tend to hold more entries per block, so the INITRANS parameter
should be higher on indexes than on their related tables.
 
 
Checking Index Information in the Data Dictionary
 
You can check the DBA_INDEXES view (or ALL_INDEXES and USER_INDEXES) to
get information on the INDEX_NAME, TABLESPACE_NAME, INDEX_TYPE,
UNIQUENESS, and STATUS.
 
COLUMN tablespace_name FORMAT A15;
COLUMN index_type FORMAT A10;
SELECT Index_Name, Tablespace_Name, Index_Type, 
    Uniqueness, Status 
    FROM Dba_Indexes
    WHERE OWNER='DBOCK';
 
INDEX_NAME                     TABLESPACE_NAME INDEX_TYPE
UNIQUENES STATUS
------------------------------ --------------- ----------
--------- --------
PK_ROOM                        USERS           NORMAL     UNIQUE 
   VALID
PK_BEDCLASSIFICATION           USERS           NORMAL     UNIQUE 
   VALID
PK_BED                         USERS           NORMAL     UNIQUE 
   VALID
PK_PATIENT                     USERS           NORMAL     UNIQUE 
   VALID
<more rows display>
 
The STATUS column indicates if the index is
valid.  The INDEX_TYPE column indicates if the index is a bitmap or normal index.
 
You can query the DBA_IND_COLUMNS view
(or ALL_IND_COLUMNS and USER_IND_COLUMNS) to determine which columns of
a table are used for a particular index.
 
The DBA_IND_EXPRESSIONS (or ALL_IND_EXPRESSIONS and USER_IND_EXPR
ESSIONS) views describe the expressions of function-based indexes.
 
The V$OBJECT_USAGE view has information on index usage produced by
the ALTER INDEX … MONITORING USAGE command.
 
 
Module 13 – Profiles and Resources
Objectives
        Create, alter, and administer profiles.
        Manage passwords by using profiles.
        Control resource usage with profiles.
        Obtain information about profiles and resources from the data dictionary. 
 
Profiles
 
A profile is a database object – a named set of resource limits used to:
        Restrict database usage by a system user – profiles restrict users from
performing operations that exceed reasonable resource utilization.  Examples of
resources that need to be managed:
o   Disk storage space.
o   I/O bandwidth to run queries.
o   CPU power.
o   Connect time.
        Enforce password practices – how user passwords are created, reused, and
validated.
        Profiles are assigned to users as part of the CREATE USER or ALTER
USER commands (User creation is covered in Module 14). 
o   User accounts can have only a single profile.
o   A default profile can be created – a default already exists within Oracle
named DEFAULT – it is applied to any user not assigned another profile.
o   Assigning a new profile to a user account supersedes any earlier profile.
o   Profiles cannot be assigned to roles or other profiles.
 
Profiles only take effect when resource limits are "turned on" for the database as a
whole.
        Specify the RESOURCE_LIMIT initialization parameter.
 
RESOURCE_LIMIT = TRUE
 
        Use the ALTER SYSTEM statement to turn on resource limits.
 
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
 
        Resource limit specifications pertaining to passwords are always in effect.
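The current setting can be confirmed before relying on profile resource limits:

```sql
-- In SQL*Plus:
SHOW PARAMETER resource_limit

-- Or query the parameter view directly:
SELECT Name, Value FROM V$PARAMETER WHERE Name = 'resource_limit';
```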
 
Profile Specifications
 
Profile specifications include:
        Password aging and expiration
        Password history
        Password complexity verification
        Account locking
        CPU time
        Input/output (I/O) operations
        Idle time
        Connect time
        Memory space (private SQL area for Shared Server only)
        Concurrent sessions
 
System users not assigned a specific profile are automatically assigned
the DEFAULT profile.  The DEFAULT profile has only one significant restriction: it
doesn't specify a password verification function. 
 
This query lists the resource limits for the DEFAULT profile.
 
COLUMN profile FORMAT A10;
COLUMN resource_name FORMAT a30;
COLUMN resource FORMAT a8;
COLUMN limit FORMAT a15;
SELECT * FROM DBA_PROFILES
    WHERE PROFILE = 'DEFAULT';
 
PROFILE    RESOURCE_NAME                  RESOURCE LIMIT
---------- ------------------------------ --------
---------------
DEFAULT    COMPOSITE_LIMIT                KERNEL   UNLIMITED
DEFAULT    SESSIONS_PER_USER              KERNEL   UNLIMITED
DEFAULT    CPU_PER_SESSION                KERNEL   UNLIMITED
DEFAULT    CPU_PER_CALL                   KERNEL   UNLIMITED
DEFAULT    LOGICAL_READS_PER_SESSION      KERNEL   UNLIMITED
DEFAULT    LOGICAL_READS_PER_CALL         KERNEL   UNLIMITED
DEFAULT    IDLE_TIME                      KERNEL   UNLIMITED
DEFAULT    CONNECT_TIME                   KERNEL   UNLIMITED
DEFAULT    PRIVATE_SGA                    KERNEL   UNLIMITED
DEFAULT    FAILED_LOGIN_ATTEMPTS          PASSWORD 10
DEFAULT    PASSWORD_LIFE_TIME             PASSWORD UNLIMITED
DEFAULT    PASSWORD_REUSE_TIME            PASSWORD UNLIMITED
DEFAULT    PASSWORD_REUSE_MAX             PASSWORD UNLIMITED
DEFAULT    PASSWORD_VERIFY_FUNCTION       PASSWORD NULL
DEFAULT    PASSWORD_LOCK_TIME             PASSWORD UNLIMITED
DEFAULT    PASSWORD_GRACE_TIME            PASSWORD UNLIMITED
 
16 rows selected.
 
Creating a Profile
 
A DBA creates a profile with the CREATE PROFILE command. 
        This command has clauses that explicitly set resource limits. 
        A DBA must have the CREATE PROFILE system privilege in order to use this
command. 
        Example: 
 
CREATE PROFILE accountant LIMIT
    SESSIONS_PER_USER 4
    CPU_PER_SESSION unlimited
    CPU_PER_CALL 6000
    LOGICAL_READS_PER_SESSION unlimited
    LOGICAL_READS_PER_CALL 100
    IDLE_TIME 30
    CONNECT_TIME 480
    PASSWORD_REUSE_TIME 1
    PASSWORD_LOCK_TIME 7
    PASSWORD_REUSE_MAX 3;
 
Profile created.
 
Resource limits that are not specified for a new profile inherit the limit set in
the DEFAULT profile.  These clauses are covered in detail later in these notes.
 
 
Assigning Profiles
 
Profiles can only be assigned to system users if the profile has first been
created.  Each system user is assigned only one profile at a time.  When a profile is
assigned to a system user who already has a profile, the new profile replaces the old
one – the current session, if one is taking place, is not affected, but subsequent
sessions are affected.  Also, you cannot assign a profile to a role or another profile
(Roles are covered in Module 16).
 
As was noted above, profiles are assigned with the CREATE USER and ALTER
USER command.  An example CREATE USER command is shown here – this
command is covered in more detail in Module 14. 
 
CREATE USER USER349
    IDENTIFIED BY secret 
    PROFILE Accountant 
    PASSWORD EXPIRE; 
 
User created.
 
 
SELECT username, profile FROM dba_users
WHERE username = 'USER349';
 
USERNAME                       PROFILE
------------------------------ ----------
USER349                        ACCOUNTANT
 
 
Altering Profiles
 
Profiles can be altered with the ALTER PROFILE command. 
        A DBA must have the ALTER PROFILE system privilege to use this command. 
        When a profile limit is adjusted, the new setting overrides the previous setting for
the limit, but these changes do not affect current sessions in process. 
        Example:
 
ALTER PROFILE Accountant LIMIT
    CPU_PER_CALL default
    LOGICAL_READS_PER_SESSION 20000
    SESSIONS_PER_USER 1;
 
Test this limit by trying to connect
twice with the account user349.
 
 
Dropping a Profile
 
Profiles no longer required can be dropped with the DROP PROFILE command.
        The DEFAULT profile cannot be dropped.
        The CASCADE clause revokes the profile from any user account to which it was
assigned – the CASCADE clause MUST BE USED if the profile has been
assigned to any user account.
        When a profile is dropped, any user account with that profile is reassigned
the DEFAULT profile.
        Examples:
 
DROP PROFILE Accountant;
ERROR at line 1:
ORA-02382: profile ACCOUNTANT has users assigned, cannot
drop without CASCADE
 
DROP PROFILE accountant CASCADE;
 
Profile dropped.
 
SELECT username, profile FROM dba_users
WHERE username = 'USER349';
 
USERNAME                       PROFILE
------------------------------ ----------
USER349                        DEFAULT
 
        Changes that result from dropping a profile only apply to sessions that are
created after the change – current sessions are not modified.
 
 Password Management
 
Password management can be easily controlled by a DBA through the use of profiles.
 
 
 
Enabling Password Management
 
Password management is enabled by creating a profile and assigning the profile to
system users when their account is created or by altering system user profile
assignments. 
 
Password limits set in this fashion are always enforced.  When password management
is in use, an existing user account can be locked or unlocked by the ALTER
USER command.
 
Password Account Locking:  This option automatically locks a system user account if
the user fails to execute proper login account name/password entries after a specified
number of login attempts.
 
 
        The FAILED_LOGIN_ATTEMPTS and PASSWORD_LOCK_TIME parameters
are specified as part of a profile.
        The FAILED_LOGIN_ATTEMPTS is specified as an
integer.  The PASSWORD_LOCK_TIME is specified as days.
        The database account can be explicitly locked with the ALTER
USER command.  When this happens, the account is not automatically unlocked.
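Explicit locking and unlocking look like this – the account name is illustrative:

```sql
ALTER USER user349 ACCOUNT LOCK;    -- lock the account explicitly
ALTER USER user349 ACCOUNT UNLOCK;  -- an explicitly locked account must
                                    -- also be unlocked explicitly
```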
 
Password Expiration/Aging:  Specifies the lifetime of a password – after the specified
period, the password must be changed. 
 
 
        The PASSWORD_LIFE_TIME and PASSWORD_GRACE_TIME parameters
are specified as part of a profile. 
        PASSWORD_LIFE_TIME specifies the maximum life of a password.
        If the PASSWORD_GRACE_TIME is exceeded, the account automatically locks.
        Both of these parameters are specified in days.
 
Password History:  This option ensures that a password is not reused within a
specified period of time or number of password changes. 
 
 
        If either PASSWORD_REUSE_TIME or PASSWORD_REUSE_MAX are set to
a value other than DEFAULT or UNLIMITED, the other parameter must be set
to UNLIMITED.
        PASSWORD_REUSE_TIME is specified in days.
        PASSWORD_REUSE_MAX is an integer value specifying the number of
password changes required before a password can be reused. 
        If you set PASSWORD_REUSE_TIME to an integer value, then you must
set PASSWORD_REUSE_MAX to UNLIMITED.  
        If you set PASSWORD_REUSE_MAX to an integer value, then you must
set PASSWORD_REUSE_TIME to UNLIMITED
 
Password Complexity Verification:  This option ensures that a password is complex
– this helps provide protection against system intruders who attempt to guess a
password. 
 
        This is implemented by use of a password verification function.  A DBA can write
such a function or can use the default function named VERIFY_FUNCTION. 
        The function that is used for password complexity verification is specified with
the profile parameter, PASSWORD_VERIFY_FUNCTION.  
        If NULL is specified (the default), no password verification is performed.
 
 
        The default VERIFY_FUNCTION has the characteristics shown in the figure
below.
 
[Figure: characteristics of the default VERIFY_FUNCTION]
 
When a DBA connected as the user SYS executes the utlpwdmg.sql script (located
at $ORACLE_HOME/rdbms/admin/utlpwdmg.sql), the Oracle Server creates
the VERIFY_FUNCTION.  The script also executes the ALTER PROFILE command
given below – the command modifies the DEFAULT profile.
 
Example of executing the utlpwdmg.sql script.
 
SQL> Connect SYS as SYSDBA
SQL> start $ORACLE_HOME/rdbms/admin/utlpwdmg.sql
 
Function created.
 
Profile altered.
 
This ALTER PROFILE command is part of the utlpwdmg.sql script and does not need
to be executed separately.
 
-- This script alters the default parameters for Password Management.
-- This means that all the users on the system have Password Management
-- enabled and set to the following values unless another profile is
-- created with parameter values set to a different value or UNLIMITED
-- and is assigned to the user.
ALTER PROFILE DEFAULT LIMIT
    PASSWORD_LIFE_TIME 60
    PASSWORD_GRACE_TIME 10
    PASSWORD_REUSE_TIME 1800
    PASSWORD_REUSE_MAX UNLIMITED
    FAILED_LOGIN_ATTEMPTS 3
    PASSWORD_LOCK_TIME 1/1440
    PASSWORD_VERIFY_FUNCTION Verify_Function;
 
 Creating a Profile with Password Protection:  The figure shown below provides an
example CREATE PROFILE command.
 
[Figure: example CREATE PROFILE command with password limits]
 
Use these parameter values when setting parameters to values that are less than a
day:
        1 hour: PASSWORD_LOCK_TIME = 1/24
        10 minutes: PASSWORD_LOCK_TIME = 10/1440
        5 minutes: PASSWORD_LOCK_TIME = 5/1440
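For example, a profile that locks an account for 10 minutes after 3 failed login attempts could be sketched as follows – the profile name is hypothetical:

```sql
CREATE PROFILE Short_Lock LIMIT
    FAILED_LOGIN_ATTEMPTS 3
    PASSWORD_LOCK_TIME 10/1440;  -- 10 minutes as a fraction of a day
```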
 
 
Resource Management
 
Enabling Resource Limits
 
As noted earlier, resource limits are enabled by setting
the RESOURCE_LIMIT initialization parameter to TRUE (the default is FALSE) or by
enabling the parameter with the ALTER SYSTEM command. 
 
ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;
 
System altered.
 
 
Setting User Session Resource Limits
 
Resource limits can also be managed through use of a Profile object.
 
This table describes the resource limit parameters for a Profile.
        Parameters can be either an integer value, or the
keyword UNLIMITED or DEFAULT.
        DEFAULT specifies the limit from the DEFAULT profile.
        UNLIMITED specifies no limit on the resource is enforced.
        The COMPOSITE_LIMIT parameter enables controlling a group of resource
limits – for example, a system user may use a lot of CPU time but not much disk
I/O during one session, or vice versa during another session – the composite limit
keeps the policy from disconnecting the user in either case.
 
Resource                   Description
CPU_PER_SESSION            Total CPU time, measured in hundredths of seconds.
CPU_PER_CALL               Maximum CPU time allowed for a statement parse,
                           execute, or fetch operation, in hundredths of a
                           second.
SESSIONS_PER_USER          Maximum number of concurrent sessions allowed for
                           each user name.
CONNECT_TIME               Maximum total elapsed connect time, measured in
                           minutes.
IDLE_TIME                  Maximum continuous inactive time in a session,
                           measured in minutes, when a query or other operation
                           is not in progress.
LOGICAL_READS_PER_SESSION  Number of data blocks (physical and logical reads)
                           read per session from either memory or disk.
LOGICAL_READS_PER_CALL     Maximum number of data blocks read for a statement
                           parse, execute, or fetch operation.
COMPOSITE_LIMIT            Total resource cost, in service units, as a
                           composite weighted sum of CPU_PER_SESSION,
                           CONNECT_TIME, LOGICAL_READS_PER_SESSION, and
                           PRIVATE_SGA.
PRIVATE_SGA                Maximum amount of memory a session can allocate in
                           the shared pool of the SGA, measured in bytes,
                           kilobytes, or megabytes (applies to Shared Server
                           only).
 
        Profile limits enforced at the session level are enforced for each connection
where a system user can have more than one concurrent connection.
 
        If a session-level limit is exceeded, then the Oracle Server issues an error
message such as ORA-02391: exceeded simultaneous SESSIONS_PER_USER
limit, and then disconnects the system user.
 
        Resource limits can also be set at the Call-level, but this applies to PL/SQL
programming limitations and we do not cover setting these Call-level limits in this
course.
 
Adjusting Resource Cost Weights
 
The ALTER RESOURCE COST command is used to adjust weightings for resource
costs.  This can affect the impact of the COMPOSITE_LIMIT parameter.
 
Example:  Here the weights are changed so CPU_PER_SESSION favors CPU usage
over connect time by a factor of 50 to 1.  This means it is much more likely that a
system user will be disconnected from excessive CPU usage than from the use of
excessive connect time.
 
        Step 1.  Alter the resource cost for these two parameters. 
 
ALTER RESOURCE COST
    CPU_PER_SESSION 50
    CONNECT_TIME 1;
 
Resource cost altered.
 
SELECT * FROM Resource_Cost;
 
RESOURCE_NAME                     UNIT_COST
-------------------------------- ----------
CPU_PER_SESSION                          50
LOGICAL_READS_PER_SESSION                 0
CONNECT_TIME                              1
PRIVATE_SGA                               0
 
        Step 2.  Create a new profile or modify an existing profile to use
a COMPOSITE_LIMIT parameter.  Here the Accountant profile is recreated
based on the command given earlier in these notes, then altered to set
the COMPOSITE_LIMIT to 300.  We also ensure that user349 is assigned this
profile.
 
CREATE PROFILE Accountant LIMIT
    SESSIONS_PER_USER 4
    CPU_PER_SESSION unlimited
    CPU_PER_CALL 6000
    LOGICAL_READS_PER_SESSION unlimited
    LOGICAL_READS_PER_CALL 100
    IDLE_TIME 30
    CONNECT_TIME 480
    PASSWORD_REUSE_TIME 1
    PASSWORD_LOCK_TIME 7
    PASSWORD_REUSE_MAX 3;
 
ALTER PROFILE Accountant LIMIT
    COMPOSITE_LIMIT 300;
 
Profile altered.
 
ALTER USER user349 PROFILE Accountant;
 
User altered.
 
 
        Step 3.  Test the new limit.  The composite cost is computed with the formula
below.  This table compares high and low values for CPU and connect usage to
compute the composite cost and indicates whether the resource limit is exceeded.
 
Composite_Cost = (50 * CPU_PER_SESSION) + (1 * CONNECT_TIME)
 
 
Usage                    CPU        Connect    Composite Cost                           Exceeded
                         (Seconds)  (Seconds)                                           Limit of 300?
High CPU, High Connect   0.06       250        (50 * 6) + (1 * 250) = 300 + 250 = 550   Yes
Medium CPU, Low Connect  0.05       40         (50 * 5) + (1 * 40) = 250 + 40 = 290     No
Low CPU, Medium Connect  0.02       175        (50 * 2) + (1 * 175) = 100 + 175 = 275   No
Low CPU, Low Connect     0.02       40         (50 * 2) + (1 * 40) = 100 + 40 = 140     No
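 
Any row's arithmetic can be checked directly from SQLPlus, using DUAL simply as a
one-row table; for example, the medium-CPU row:
 
SELECT (50 * 5) + (1 * 40) AS composite_cost FROM dual;
 
COMPOSITE_COST
--------------
           290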
 
 
 
The Database Resource Manager
 
The Database Resource Manager gives the Oracle server more control over resource
management decisions, avoiding the problems that arise when the operating system
manages database resources inefficiently. 
 
Oracle Database Resource Manager (the Resource Manager) enables you to manage
multiple workloads within a database through the creation of resource plans and
resource groups, and the allocation of individual user accounts to resource groups that
are, in turn, allocated resource plans.
 
Generally the operating system handles resource management.  However, within an
Oracle database, this can result in a number of problems:
        Excessive overhead from operating system context switching between Oracle
Database server processes when the number of server processes is high.
        Inefficient scheduling because the O/S may deschedule database servers while
they hold latches.
        Inappropriate allocation of resources by not prioritizing tasks properly among
active processes.
        Inability to manage database-specific resources, such as parallel execution
servers and active sessions.
 
Example:  Allocate 80% of available CPU resources to online users leaving 20% for
batch users and jobs.
 
The Resource Manager enables you to classify sessions into groups based
on session attributes, and to then allocate resources to those groups in a way that
optimizes hardware utilization for your application environment.
 
The elements of the Resource Manager include:
        Resource consumer group – Sessions grouped together based on the
resources that they require – the resource manager allocates resources to
consumer groups, not individual sessions.
        Resource plan – this is a database object – a container for resource directives
on how resources should be allocated.
        Resource plan directive – this associates a resource consumer group to a
resource plan.
 
You can use the DBMS_RESOURCE_MANAGER PL/SQL package to create and
maintain these elements.  The objects created are stored in the data dictionary.
 
Some special consumer groups always exist in the data dictionary and cannot be
modified or deleted:
        SYS_GROUP – the initial consumer group for all sessions created by SYS or
SYSTEM.
        OTHER_GROUPS – this group contains all sessions not assigned to a
consumer group.  Any resource plan must always have a directive for the
OTHER_GROUPS.
 
This figure from your readings shows a simple resource plan for an OLTP and
reporting set of applications. 
        The plan is named DAYTIME.
        It allocates CPU resources among three resource consumer groups
named OLTP, REPORTING, and OTHER_GROUPS.
 
[Figure: the DAYTIME resource plan allocating CPU among OLTP, REPORTING, and OTHER_GROUPS]
 
 
Oracle provides a predefined procedure named CREATE_SIMPLE_PLAN so that a
DBA can create simple resource plans.
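 
A sketch of how the DAYTIME plan shown above might be built with this procedure –
the 80/20 split between the OLTP and REPORTING groups is illustrative, and
OTHER_GROUPS is handled automatically by the procedure:
 
BEGIN
    DBMS_RESOURCE_MANAGER.CREATE_SIMPLE_PLAN(
        SIMPLE_PLAN     => 'DAYTIME',
        CONSUMER_GROUP1 => 'OLTP',
        GROUP1_PERCENT  => 80,
        CONSUMER_GROUP2 => 'REPORTING',
        GROUP2_PERCENT  => 20);
END;
/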
 
A resource plan can reference subplans.  This figure illustrates a top plan and all
descending plans and groups.
 
[Figure: a top resource plan with its descending subplans and consumer groups]
 
 
 
In order to administer the Resource Manager, a DBA must have
the ADMINISTER_RESOURCE_MANAGER system privilege – this privilege is part of
the DBA role and is granted with the ADMIN option.
        The DBA can execute all procedures.
        The DBA can grant or revoke privileges to other system managers.
        The DBA can grant privileges to the user named HR – an internal user for Oracle
human resources software.
 
The Resource Manager is not enabled by default.  This init.ora file parameter setting
(or the equivalent ALTER SYSTEM command) activates the Resource Manager and
sets the top plan.
 
RESOURCE_MANAGER_PLAN = DAYTIME
 
Activate or deactivate the Resource Manager dynamically or change plans with the
ALTER SYSTEM command.
 
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'Alternate_Plan';
 
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';
 
 
Note:  The Database Resource Manager is covered further in the Oracle
course Oracle Performance Tuning.
 
 
 
Using the Data Dictionary
 
Information about password and resource limits can be obtained by querying the
following views:
        DBA_USERS
        DBA_PROFILES
 
COLUMN username FORMAT A15;
COLUMN password FORMAT A20;
COLUMN account_status FORMAT A30;
SELECT username, password, account_status
FROM dba_users;
 
USERNAME        PASSWORD             ACCOUNT_STATUS
--------------- --------------------
------------------------------
OUTLN           4A3BA55E08595C81     OPEN
USER350         2D5E5DB47A5419B2     OPEN
DBOCK           0D25D10037ACDC6A     OPEN
SYS             DCB748A5BC5390F2     OPEN
SYSTEM          EED9B65CCECDB2E9     OPEN
USER349         E6677904C9407D8A     EXPIRED
TSMSYS          3DF26A8B17D0F29F     EXPIRED & LOCKED
DIP             CE4A36B8E06CA59C     EXPIRED & LOCKED
DBSNMP          E066D214D5421CCC     EXPIRED & LOCKED
ORACLE_OCM      6D17CF1EB1611F94     EXPIRED & LOCKED
 
10 rows selected.
 
COLUMN profile FORMAT A16;
COLUMN resource_name FORMAT A26;
COLUMN resource_type FORMAT A13;
COLUMN limit FORMAT A10;
SELECT profile, resource_name, resource_type, limit
FROM dba_profiles
WHERE resource_type = 'PASSWORD';
 
 
PROFILE    RESOURCE_NAME              RESOURCE_TYPE LIMIT
---------- -------------------------- ------------- ----------
ACCOUNTANT FAILED_LOGIN_ATTEMPTS      PASSWORD      DEFAULT
DEFAULT    FAILED_LOGIN_ATTEMPTS      PASSWORD      3
ACCOUNTANT PASSWORD_LIFE_TIME         PASSWORD      DEFAULT
DEFAULT    PASSWORD_LIFE_TIME         PASSWORD      60
ACCOUNTANT PASSWORD_REUSE_TIME        PASSWORD      1
DEFAULT    PASSWORD_REUSE_TIME        PASSWORD      1800
ACCOUNTANT PASSWORD_REUSE_MAX         PASSWORD      3
DEFAULT    PASSWORD_REUSE_MAX         PASSWORD      UNLIMITED
ACCOUNTANT PASSWORD_VERIFY_FUNCTION   PASSWORD      DEFAULT
DEFAULT    PASSWORD_VERIFY_FUNCTION   PASSWORD      VERIFY_FUN
ACCOUNTANT PASSWORD_LOCK_TIME         PASSWORD      7
DEFAULT    PASSWORD_LOCK_TIME         PASSWORD      .0006
ACCOUNTANT PASSWORD_GRACE_TIME        PASSWORD      DEFAULT
DEFAULT    PASSWORD_GRACE_TIME        PASSWORD      10
 
14 rows selected.
 
Module 14-1 – Create and Manage Oracle Users
Objectives
 
        Create Oracle user accounts.
        Alter Oracle user accounts.
        Become familiar with database and external authentication.
        Learn about Oracle site licensing.
 
Oracle User Accounts
 
User Account Creation
 
The CREATE USER command creates a system user as shown here. 
 
CREATE USER Scott IDENTIFIED BY Tiger;
 
        The user Scott is a standard "dummy" user account found on many Oracle
systems for the purposes of system testing – it needs to be disabled to remove a
potential hacker access route.
        The IDENTIFIED BY clause specifies the user password.
        In order to create a user, a DBA must have the CREATE USER system
privilege.
        Users also have a privilege domain – initially the user account has NO privileges
– it is empty.
        In order for a user to connect to Oracle, you must grant the user the CREATE
SESSION system privilege.
        Each username must be unique within a database.  A username cannot be the
same as the name of a role (roles are described in a later module).
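 
Putting these points together, a minimal usable account requires both the account
itself and the CREATE SESSION privilege (using the Scott example from above):
 
CREATE USER Scott IDENTIFIED BY Tiger;
GRANT CREATE SESSION TO Scott;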
 
Each user has a schema for the storage of objects within the database (see the figure
below). 
        Two users can name objects identically because the objects are referred to
globally by using a combination of the username and object name.
        Example:  User350.Employee – each user account can have a table named
Employee because each table is stored within the user's schema.
 
[Figure: objects stored within each user's schema]
 
A complete example of the CREATE USER command: 
 
CREATE USER Scott 
    IDENTIFIED BY New_Pa$$w0rd 
    DEFAULT TABLESPACE Users 
    TEMPORARY TABLESPACE Temp 
    QUOTA 10M ON Users 
    QUOTA 5M ON Data01 
    PROFILE Accountant
    ACCOUNT UNLOCK
    PASSWORD EXPIRE;
 
Scott has two tablespaces identified, one for DEFAULT storage of objects and one
for TEMPORARY objects. 
 
Scott has a quota set on 2 tablespaces.  More details about tablespace allocation are
given later in these notes.
 
Scott has the resource limitations allocated by the PROFILE named accountant.  The
account is unlocked (the default – alternatively the account could be created initially
with the LOCK specification). 
 
The PASSWORD EXPIRE clause requires Scott to change the password prior to
connecting to the database.  After the password is set, when the user logs on using
SQLPlus or any other software product that connects to the database, the user
receives the following message at logon, and is prompted to enter a new password:
 
 
ERROR:
ORA-28001: the account has expired
Changing password for SCOTT
Old password:
New password:
Retype new password:
Password changed
 
 
Database Authentication
 
Database authentication involves the use of a standard user account and password. 
Oracle performs the authentication. 
        System users can change their password at any time. 
        Passwords are stored in an encrypted format. 
        Each password must be made up of single-byte characters, even if the database
uses a multi-byte character set.
        Advantages: 
o   User accounts and all authentication are controlled by the database. There
is no reliance on anything outside of the database.
o   Oracle provides strong password management features to enhance security
when using database authentication.
o   It is easier to administer when there are small user communities.
 
Oracle recommends using password management that includes password
aging/expiration, account locking, password history, and password complexity
verification.
 
 
External Authentication
 
External Authentication requires the creation of user accounts that are maintained by
Oracle, but passwords are administered by an external service such as the operating
system or a network service (network authentication through Oracle Net is covered in
the course Oracle Database Administration Fundamentals II).  This option is generally
useful when a user logs on directly to the machine where the Oracle server is
running.
        A database password is not used for this type of login.
        In order for the operating system to authenticate users, a DBA sets
the init.ora parameter OS_AUTHENT_PREFIX to some set value – the default
value is OPS$ in order to provide backward compatibility with earlier versions of
Oracle. 
        This prefix is concatenated with the operating system username to form the
Oracle account username. 
        You can also use a NULL string (a set of empty double quotes: "" ) for the prefix
so that the Oracle username exactly matches the Operating System user
name.  This eliminates the need for any prefix.
 
#init.ora parameter 
OS_AUTHENT_PREFIX=OPS$
 
#create user command 
CREATE USER OPS$Scott 
    IDENTIFIED EXTERNALLY 
    DEFAULT TABLESPACE users 
    TEMPORARY TABLESPACE temp 
    QUOTA UNLIMITED ON Users;
 
When Scott attempts to connect to the database, Oracle will check to see if there is a
database user named OPS$Scott and allow or deny the user access as
appropriate.  Thus, to use SQLPlus to log on to the system,
the LINUX/UNIX user Scott enters the following command from the operating system:
 
$ sqlplus /
 
All references in commands that refer to a user that is authenticated by the operating
system must include the defined prefix OPS$.
 
Oracle allows operating-system authentication only for secure connections – this is the
default.  This precludes use of Oracle Net or a shared server configuration and
prevents a remote user from impersonating another operating system user over a
network.
 
The REMOTE_OS_AUTHENT parameter can be set to force acceptance of a client
operating system user name from a nonsecure connection. 
        This is NOT a good security practice.
        Setting REMOTE_OS_AUTHENT = FALSE creates a more secure configuration
based on server-based authentication of clients.
        Changes in the parameter take effect the next time the instance starts and the
database is mounted.
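 
Because the change only takes effect at the next startup, it is typically written to the
server parameter file – a sketch, assuming an spfile is in use:
 
ALTER SYSTEM SET REMOTE_OS_AUTHENT = FALSE SCOPE=SPFILE;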
 
 
Global Authentication
 
Central authentication can be accomplished through the use of Oracle Advanced
Security software for a directory service.
 
Global users termed Enterprise Users are authenticated by SSL (secure socket
layers) and the user accounts are managed outside of the database.
 
Global Roles are defined in a database and known only to that database and
authorization for the roles is done through the directory service.  The roles can be used
to provide access privileges
 
Enterprise Roles can be created to provide access across multiple databases.  They
can consist of one or more global roles and are essentially containers for global roles.
 
Creating a Global User Example: 
 
CREATE USER Scott
  IDENTIFIED GLOBALLY AS 'CN=Scott, OU=division1, O=oracle,
C=US';
 
        Scott is authenticated by SSL and authorized by the enterprise directory service.
        The AS clause provides a string identifier (distinguished name – DN) to the
enterprise directory.
        Disadvantage:  Scott must have a user account created in every database to
be accessed as well as in the directory service.
 
Creating a Schema-Independent User Example:
 
Schema-independent user accounts allow more than one enterprise user to access a
shared database schema.  These users are:
        Authenticated by SSL or passwords.
        Not created in the database with a CREATE USER statement.
        Privileges are managed in a directory.
        Most users don't need their own schemas – this approach separates users from
databases.
 
CREATE USER inventory_schema IDENTIFIED GLOBALLY AS '';
 
        In the directory, create multiple enterprise users and a mapping object to tell the
database how to map user DNs to the shared schema. 
 
 
Proxy Authentication and Authorization
 
This approach to authentication and authorization uses a middle-tier server to proxy
clients securely.
 
Three forms of proxy authentication:
        Middle-tier server authenticates itself with the database server and client – an
application user or another application.
        Client (a database user) is not authenticated by the middle-tier server – instead
the identity and database password are passed through the middle-tier server to
the database server for authentication.
        Global users are authenticated by the middle-tier server and it passes either a
Distinguished Name (DN) or Certificate through the middle-tier for retrieval of a
client user name.
The middle-tier server proxies a client through the GRANT CONNECT THROUGH
clause of the ALTER USER statement.
 
ALTER USER Scott GRANT CONNECT THROUGH Proxy_Server
    WITH ROLE ALL EXCEPT Inventory;
 
        This grants authorization through the middle-tier server named Proxy_Server.
        The WITH ROLE clause specifies that Proxy_Server can activate all roles for the
user Scott except the role named Inventory.
 
Revoking the middle-tier's proxy server authorization:
 
ALTER USER Scott REVOKE CONNECT THROUGH Proxy_Server;
 
Default Tablespace
If one is not specified, the default tablespace for a user is the SYSTEM tablespace –
not a good choice for a default tablespace.  The standard practice is to always set a
default tablespace, as was shown in the CREATE USER command.
 
CREATE USER ops$Scott 
    IDENTIFIED EXTERNALLY 
    DEFAULT TABLESPACE Users 
    TEMPORARY TABLESPACE Temp 
    QUOTA UNLIMITED ON Users;
 
Use the ALTER USER command to change a user's default tablespace.
 
ALTER USER ops$Scott 
    DEFAULT TABLESPACE Data01 
    QUOTA 5M on Data01;
 
Changing a default tablespace does not affect the storage location of any user schema
objects that were created before the default tablespace modification.
 
You can assign each user a tablespace quota for any tablespace (except a temporary
tablespace).  Assigning a quota does the following things:
        Users with privileges to create certain types of objects can create those objects
in the specified tablespace.
        Oracle Database limits the amount of space that can be allocated for storage of
a user's objects within the specified tablespace to the amount of the quota.
 
By default, a user has no quota on any tablespace in the database.
        If the user has the privilege to create a schema object, then you must assign a
quota to allow the user to create objects.
        Minimally, assign users a quota for the default tablespace, and additional quotas
for other tablespaces in which they can create objects.
 
 
 
Temporary Tablespace
 
The default Temporary Tablespace for a user is also the SYSTEM tablespace.
 
         Allowing this situation to exist for system users will guarantee that user
processing will cause contention with access to the data dictionary.
 
         Generally a DBA will create a TEMP tablespace that will be shared by all users
for processing that requires sorting and joins.
 
 
Tablespace Quotas
 
Assigning a quota ensures that users with privileges to create objects can create those
objects in the tablespace.
 
A quota also ensures the amount of space allocated for storage by an individual user is
not exceeded.  The default is NO QUOTA on any tablespace so a quota must be set or
else the Oracle user account cannot be used to create any objects.
 
Assigning Other Tablespace Quotas:  You can assign a quota on tablespaces other
than the DEFAULT and TEMPORARY tablespaces for users. 
        This enables the user to create objects in the other tablespaces. 
        This is often done for senior systems analysts and programmers who are
authorized to create objects in a DATA tablespace.
 
If you change a quota and the new quota is smaller than the old one, then the following
rules apply:
        For users who have already exceeded the new quota, new objects cannot be
created, and existing objects cannot be allocated more space until the combined
space of the user's objects is within the new quota.
        For users who have not exceeded the new quota, user objects can be allocated
additional space up to the new quota.
 
Granting the UNLIMITED TABLESPACE privilege to a user account overrides all
quota settings for all tablespaces.
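 
For that reason the privilege should be granted sparingly.  The grant itself is a single
statement (using the Scott example account):
 
GRANT UNLIMITED TABLESPACE TO Scott;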
 
 
Revoking Tablespace Access
 
A DBA can revoke tablespace access by setting the user's quota to zero for the
tablespace through use of the ALTER USER command.  This example alters the user
named SCOTT for the USERS tablespace.
 
ALTER USER Scott QUOTA 0 ON Users;
 
Existing objects for the user will remain within the tablespace, but cannot be allocated
additional disk space.
 
 
Alter User Command
 
Users can use the ALTER USER command to change their own password.
 
To make any other use of the command, a user must have the ALTER USER system
privilege - something the DBA should not give to individual users.
 
Changing a user's security setting with the ALTER USER command changes future
sessions, not a current session to which the user may be connected. 
 
Example ALTER USER command:
 
ALTER USER Scott 
    IDENTIFIED by New_Pa$$w0rd 
    DEFAULT TABLESPACE Data01 
    TEMPORARY TABLESPACE Temp
    QUOTA 100M ON Data01 
    QUOTA 0 ON Inventory_TBS
    PROFILE Almost_Unemployed; 
  
  
Drop User Command
 
The DROP USER command is used to drop a user.  Examples:
 
DROP USER User105; 
DROP USER Scott CASCADE;
 
        Dropping a user causes the user and the user schema to be immediately deleted
from the database.
        If the user has created objects within their schema, it is necessary to use
the CASCADE option in order to drop a user. 
        If you fail to specify CASCADE when user objects exist, an error message is
generated and the user is not dropped.
        In order for a DBA to drop a user, the DBA must have the DROP USER system
privilege.
 
 
CAUTION:  You need to exercise caution with the CASCADE option to ensure that you
don't drop a user where views or procedures exist that depend upon tables that the
user created.  In those cases, dropping a user requires a lot of detailed investigation
and careful deletion of objects.
 
If you want to deny access to the database, but do not want to drop the user and the
user's objects, you should revoke the CREATE SESSION privilege for the user
temporarily.
 
You cannot drop a user who is connected to the database - you must first terminate the
user's session with the ALTER SYSTEM KILL SESSION command.
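 
The sid and serial# values needed by that command come from the V$SESSION
view – a sketch, where the '23,1045' pair is purely illustrative:
 
SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';
 
ALTER SYSTEM KILL SESSION '23,1045';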
 
 
Data Dictionary Tables for User Accounts
 
The primary data dictionary view used by a DBA for user account information
is DBA_USERS. 
 
COLUMN username FORMAT A15;
COLUMN account_status FORMAT A20;
COLUMN default_tablespace FORMAT A19;
SELECT username, account_status, default_tablespace
FROM dba_users;
 
USERNAME        ACCOUNT_STATUS       DEFAULT_TABLESPACE
--------------- -------------------- -------------------
OUTLN           OPEN                 SYSTEM
USER350         OPEN                 USERS
DBOCK           OPEN                 DATA01
SYS             OPEN                 SYSTEM
SYSTEM          OPEN                 SYSTEM
USER349         EXPIRED              SYSTEM
SCOTT           EXPIRED              USERS
TSMSYS          EXPIRED & LOCKED     SYSTEM
DIP             EXPIRED & LOCKED     SYSTEM
DBSNMP          EXPIRED & LOCKED     SYSAUX
ORACLE_OCM      EXPIRED & LOCKED     SYSTEM
 
11 rows selected.
 
Site Licensing
 
One of the DBA's responsibilities is to ensure that the Oracle Server license agreement
is maintained. 
 
A DBA can track and limit session access for users concurrently accessing the
database through use of
the LICENSE_MAX_SESSIONS, LICENSE_SESSIONS_WARNING,
and LICENSE_MAX_USERS parameters in the PFILE.  If an organization's license is
unlimited, these parameters may have their value set to 0.
 
If the limit on the number of authorized concurrent sessions for an Oracle Instance is
reached, Oracle will only allow users with the RESTRICTED SESSION privilege
(usually DBAs) to connect to the database.
 
When the maximum limit is reached, Oracle writes a message in the ALERT file
indicating the maximum number of connections was reached.  A DBA can also set a
warning limit on the number of concurrent sessions so that Oracle writes a message
to the ALERT file indicating that the warning limit was reached.
 
When the maximum limit is reached, Oracle enforces the limit by restricting access to
the database.  Oracle also tracks the highest number of concurrent sessions for each
instance.  This is termed the "high water mark" and the information is written to the
ALERT file.
 
 
Setting Concurrent Session and Warning Limits
 
Set the maximum number of concurrent sessions in the init.ora file with this parameter
setting: 
 
LICENSE_MAX_SESSIONS = 80
 
A DBA does not have to set the warning limit (LICENSE_SESSIONS_WARNING), but
this parameter makes it easier to manage site licensing.  Set the warning limit in the
init.ora file with this parameter setting:
 
LICENSE_SESSIONS_WARNING = 70
 
The usage limits can be changed while the database is running with the ALTER
SYSTEM command.  This example alters the number of concurrent sessions and the
warning limit:
 
ALTER SYSTEM 
    SET LICENSE_MAX_SESSIONS = 100 
        LICENSE_SESSIONS_WARNING = 90;
 
If the new value is lower than the number of users currently logged on, Oracle does not
force any users off of the system, but enforces the new limit for new users who attempt
to connect.
 
 
Limiting Named Users
 
If a site license is for named users as opposed to concurrent accesses, you can limit
the number of named users by limiting the number of users that can be created in the
database before an instance is started up.  This command in the init.ora file sets the
maximum number of users:
 
LICENSE_MAX_USERS = 100
 
Attempting to create users after the limit is reached generates an error and a message
is written to the ALERT file.  A DBA can change the maximum named users limit with
the ALTER SYSTEM command as shown here:
 
ALTER SYSTEM SET LICENSE_MAX_USERS = 125;
 
To view the current session limits, query the V$LICENSE data dictionary view as
shown in this SELECT statement. 
 
SELECT sessions_max s_max,
    sessions_warning s_warning, 
    sessions_current s_current,
    sessions_highwater s_high, 
    users_max 
FROM v$license; 
  
S_MAX  S_WARNING  S_CURRENT  S_HIGH  USERS_MAX 
-----  ---------  ---------  ------  ---------
100    80         65         82      50 
Module 14-2 – Privileges
Objectives
 
        Learn the different system and object privileges.
        Learn to grant and revoke privileges. 
 
General
 
Authentication means verifying the identity of a system user account ID requesting
access to an Oracle database. 
 
Authorization means to verify that a system user account ID has been granted the
right, called a privilege, to execute a particular type of SQL statement or to access
objects belonging to another system user account.
 
In order to manage system user access and use of various system objects, such as
tables, indexes, and clusters, Oracle provides the capability to grant and
revoke privileges to individual user accounts.
 
Example privileges include the right to:
        Connect to a database
        Create a table
        Select rows from another user’s table
        Execute another user’s stored procedure
 
Excessive granting of privileges can lead to situations where security is compromised.
 
There are six categories of privileges:
        System privileges allow a system user to perform a specific type of operation or
set of operations.  Typical operations are creating objects, dropping objects, and
altering objects.
        Schema Object privileges allow a system user to perform a specific type of
operation on a specific schema object.  Typical objects include tables, views,
procedures, functions, sequences, etc.
        Table privileges are schema object privileges specifically applicable to Data
Manipulation Language (DML) operations and Data Definition Language (DDL)
operations for tables.
        View privileges apply to the use of view objects that reference base tables and
other views.
        Procedure privileges apply to procedures, functions, and packages.
        Type privileges apply to the creation of named types such as object types,
VARRAYs, and nested tables.
  
System Privileges
 
As Oracle has matured as a product, the number of system privileges has grown.  The
current number is over 100.  A complete listing is available by querying the view
named SYSTEM_PRIVILEGE_MAP.
 
Privileges can be divided into three categories:
        Those enabling system wide operations, for example, CREATE SESSION,
CREATE TABLESPACE.
        Those enabling the management of an object that is owned by the system user,
for example, CREATE TABLE.
        Those enabling the management of an object that is owned by any system user,
for example, CREATE ANY TABLE.
 
If a privilege allows you to create an object, as the CREATE
TABLE privilege does, then you can also drop the objects you create.
 
Some examples of system privileges include:
 
Category     Privilege
SESSION      Create Session
             Alter Session
TABLESPACE   Create Tablespace
             Alter Tablespace
             Drop Tablespace
             Unlimited Tablespace
TABLE        Create Table
             Create Any Table
             Alter Any Table
             Drop Any Table
             Select Any Table
INDEX        Create Any Index
             Alter Any Index
 
Some privileges that you might expect to exist, such as CREATE INDEX, do not exist
since if you can CREATE TABLE, you can also create the indexes that go with it and
use the ANALYZE command.
 
Some privileges, such as UNLIMITED TABLESPACE, cannot be granted to a role
(roles are covered in Module 14-3).
  
Granting System Privileges
 
The command to grant a system privilege is the GRANT command.  Some
example GRANT commands are shown here.
 
GRANT ALTER TABLESPACE, DROP TABLESPACE 
    TO USER349;
 Grant succeeded.
 
GRANT CREATE SESSION TO USER350 
    WITH ADMIN OPTION;
 Grant succeeded.
  
In general, you can grant a privilege to either a user or to a role.  You can also grant a
privilege to PUBLIC - this makes the privilege available to every system user.
 
The WITH ADMIN OPTION clause enables the grantee (person receiving the privilege)
to grant the privilege or role to other system users or roles; however, you cannot use
this clause unless you have, yourself, been granted the privilege with this clause.
 
The GRANT ANY PRIVILEGE system privilege also enables a system user to grant or
revoke privileges.
 
The GRANT ANY ROLE system privilege is a dangerous one that you don't give to the
average system user since then the user could grant any role to any other system user.
 
 
SYSDBA and SYSOPER Privileges
 
SYSDBA and SYSOPER are special privileges that should only be granted to a DBA.
 
This table lists example privileges associated with each of these special privileges.
 
SYSOPER privileges:
    STARTUP and SHUTDOWN
    ALTER DATABASE OPEN | MOUNT
    RECOVER DATABASE
    ALTER DATABASE ARCHIVELOG
    RESTRICTED SESSION
    ALTER DATABASE BEGIN/END BACKUP

SYSDBA privileges:
    All SYSOPER privileges, held WITH ADMIN OPTION
    CREATE DATABASE
    RECOVER DATABASE UNTIL
 
 
When you allow database access through a Password File using
the REMOTE_LOGIN_PASSWORDFILE parameter that was discussed in an earlier
module, you can add users to this password file by granting
them SYSOPER or SYSDBA system privileges.
 
You cannot grant the SYSDBA or SYSOPER privileges by using the WITH ADMIN
OPTION.  Also, you must have these privileges in order to grant/revoke them from
another system user.
 
 
Displaying System Privileges
 
You can display system privileges by querying the DBA_SYS_PRIVS view.  Here is
the result of a query of the SIUE Oracle database.
 
SELECT * FROM dba_sys_privs
WHERE Grantee = 'USER349';
 
GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
USER349                        DROP TABLESPACE                          NO
USER349                        ALTER TABLESPACE                         NO
 

You can view the users who have SYSOPER and SYSDBA privileges by
querying v$pwfile_users.  Note:  Your student databases will display no rows selected
—this output comes from the DBORCL database.
 
SELECT * FROM v$pwfile_users;
 
USERNAME        SYSDB SYSOP
--------------- ----- -----
INTERNAL        TRUE  TRUE
SYS             TRUE  TRUE
DBOCK           TRUE  FALSE
JAGREEN         TRUE  TRUE
 
The view SESSION_PRIVS gives the privileges held by a user for the current logon
session.
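For example, to see what the current session can do (the output is a simple list of privilege names and will vary by account):

SELECT * FROM SESSION_PRIVS;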
 
 
Revoking System Privileges
 
The REVOKE command can be used to revoke privileges from a system user or from a
role.
 
Only privileges granted directly with a GRANT command can be revoked.
 
There are no cascading effects when a system privilege is revoked.  For example,
suppose the DBA grants SELECT ANY TABLE WITH ADMIN OPTION to user1, and
user1 then grants SELECT ANY TABLE to user2.  If user1 later has the privilege
revoked, user2 still retains it.
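The sequence can be sketched in SQL (the user names are illustrative):

GRANT SELECT ANY TABLE TO User1 WITH ADMIN OPTION;

-- User1 passes the privilege along:
GRANT SELECT ANY TABLE TO User2;

-- Later the DBA revokes it from User1:
REVOKE SELECT ANY TABLE FROM User1;

-- User2 still holds SELECT ANY TABLE - no cascade occurs.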
 
 
System Privilege Restrictions
 
Oracle provides for data dictionary protection by enabling the restriction of access to
dictionary objects to the SYSDBA and SYSOPER roles. 
 
For example, if this protection is in place, the SELECT ANY TABLE privilege to allow a
user to access views and tables in other schemas would not enable the system user to
access dictionary objects. 
 
The relevant init.ora parameter is O7_DICTIONARY_ACCESSIBILITY.  When it is set
to FALSE, system privileges that allow access to objects in other schemas do not
allow access to the dictionary (SYS) schema.  When it is set to TRUE, access to
the SYS schema is allowed (this is the behavior of Oracle 7).
 
 
Schema Object Privileges
 
Schema object privileges authorize the system user to perform an operation on the
object, such as selecting or deleting rows in a table.
 
A user account automatically has all object privileges for schema objects created within
his/her schema.  Any privilege owned by a user account can be granted to another
user account or to a role.
 
The following table provided by Oracle Corporation gives a map of object privileges
and the type of object to which a privilege applies.
 
OBJECT PRIVILEGE   Table   View   Sequence   Procedure
ALTER                X               X
DELETE               X       X
EXECUTE                                         X
INDEX                X
INSERT               X       X
REFERENCES           X
SELECT               X       X       X
UPDATE               X       X
 
To grant an object privilege, you must specify the privilege and the object.  Example
commands are shown here.
 
GRANT SELECT, ALTER ON User350.Orders TO PUBLIC;
GRANT SELECT, DELETE ON User350.Order_details TO user349;
GRANT SELECT ON User350.Order_details 
    TO User349 WITH GRANT OPTION;
GRANT ALL ON User350.Order_details 
    TO Accountant_Role;
GRANT UPDATE (Price, Description) ON USER350.Order_details TO
User349;
 
Here the SELECT and ALTER privileges were granted for the Orders table belonging
to the system user User350.  These two privileges were granted to all system users
through the PUBLIC specification.
 
In the 3rd example, User349 receives the SELECT privilege
on User350's Order_Details table and can also grant that privilege to other system
users via the WITH GRANT OPTION.
 
In the 4th example, the Accountant_Role role receives ALL privileges associated with
the Order_Details table.
 
In the 5th example UPDATE privilege is allocated for only two columns
(Price and Description) of the Order_Details table.
 
Notice the difference between WITH ADMIN OPTION and WITH GRANT OPTION -
the first applying to System privileges (these are administrative in nature), the second
applying to Object privileges.
 
Revoking Schema Object Privileges
 
Object privileges are revoked the same way that system privileges are revoked.
 
Several example REVOKE commands are shown here.  Note the use of ALL (to
revoke all object privileges granted to a system user) and ON (to identify the object).
 
REVOKE SELECT ON dbock.orders FROM User350;
REVOKE ALL on User350.Order_Details FROM User349;
REVOKE ALL on User350.Order_Details FROM User349 
    CASCADE CONSTRAINTS;
 
In the latter example, the CASCADE CONSTRAINTS clause would drop referential
integrity constraints defined by the revocation of ALL privileges.
 
There is a difference in how the revocation of object privileges affects other
users.  If user1 grants SELECT on a table WITH GRANT OPTION to user2,
and user2 grants SELECT on the table to user3, then when user1 revokes the
SELECT privilege from user2, user3 also loses the SELECT privilege.  This is
a critical difference.
 

Table Privileges
 

Table privileges are schema object privileges specifically applicable to Data
Manipulation Language (DML) operations and Data Definition Language (DDL)
operations for tables.
 
DML Operations
 
As was noted earlier, the DELETE, INSERT, SELECT, and UPDATE privileges for a
table or view should only be granted to system user accounts or roles that need to
query or manipulate the table's data.
 
INSERT and UPDATE privileges can be restricted for a table to specific columns.
        A selective INSERT causes a new row to have values inserted for columns that
are specified in a privilege – all other columns store NULL or pre-defined default
values.
        A selective UPDATE restricts updates only to privileged columns.
 
DDL Operations
 
The ALTER, INDEX, and REFERENCES privileges allow DDL operations on a table.
        Grant these privileges conservatively.
        Users attempting DDL on a table may need additional system or object schema
privileges, e.g., to create a table trigger, the user requires the CREATE
TRIGGER system privilege as well as the ALTER TABLE object privilege.
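Following the note above, the trigger case can be sketched as follows (the object and user names are illustrative):

GRANT CREATE TRIGGER TO User349;
GRANT ALTER ON User350.Orders TO User349;

With both the system privilege and the object privilege in place, User349 can create a trigger on the Orders table.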
 
 
View Privileges
 
As you've learned, a view is a virtual table that presents data from one or more tables
in a database.
        Views show the structure of underlying tables and are essentially a stored query.
        Views store no actual data – the data displayed is derived from the tables (or
views) upon which the view is based.
        A view can be queried.
        A view can be used to update data, providing the view is "updatable" by
definition.
 
View Privileges include:
        CREATE VIEW – a system privilege to create a view in your schema.
        CREATE ANY VIEW – a system privilege to create a view in another schema.
        Your account must have been granted the appropriate SELECT, INSERT,
UPDATE, or DELETE object privileges on the base objects underlying the view, or
have been granted the SELECT ANY TABLE, INSERT ANY TABLE, UPDATE ANY
TABLE, or DELETE ANY TABLE system privileges.
        To grant other users access to your view, you must have object privileges on
the underlying objects with the GRANT OPTION clause or system privileges with
the ADMIN OPTION clause.
 
To use a view, a system user account only requires appropriate privileges on the view
itself – privileges on the underlying base objects are NOT required.
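A short sketch of these rules (the schema, view, and user names are illustrative).  User349 builds a view over a User350 table and shares it, which requires holding the base privilege WITH GRANT OPTION:

GRANT SELECT ON User350.Orders TO User349 WITH GRANT OPTION;

CREATE VIEW Open_Orders AS
    SELECT * FROM User350.Orders WHERE Status = 'OPEN';

GRANT SELECT ON Open_Orders TO User351;

User351 needs no privileges on User350.Orders - only the privilege on the view itself.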
 
Procedure Privileges
 
EXECUTE and EXECUTE ANY PROCEDURE
 
The EXECUTE privilege is the only schema object privilege for procedures. 
        This privilege applies to procedures, functions, and packages.
        Grant this privilege only to system users that will execute a procedure or compile
another procedure that calls a procedure.
 
The EXECUTE ANY PROCEDURE system privilege provides the ability to execute any
procedure in a database.
 
Roles can be used to grant privileges to users.
 
Definer and Invoker Rights
In order to grant EXECUTE to another user, the procedure owner must have all
necessary object (or system) privileges for objects referenced by the procedure. The
individual user account granting EXECUTE on a procedure is termed the Definer.
 
A user of a procedure requires only the EXECUTE privilege on the procedure, and
does NOT require privileges on underlying objects.  A user of a procedure is termed
the Invoker.
 
At runtime, the privileges of the Definer are checked – if required privileges on
referenced objects have been revoked, then neither the Definer nor
any Invoker granted EXECUTE on the procedure can execute the procedure.
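A minimal sketch of definer rights (the procedure, table, and user names are illustrative):

CREATE OR REPLACE PROCEDURE Raise_Prices AS
BEGIN
    UPDATE Order_Details SET Price = Price * 1.1;
END;
/

GRANT EXECUTE ON Raise_Prices TO User349;

Here the procedure owner (the Definer) must hold the UPDATE privilege on Order_Details; User349 (an Invoker) needs only the EXECUTE privilege on the procedure.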
 
Other Privileges
 
CREATE PROCEDURE or CREATE ANY PROCEDURE system privileges must be
granted to a user account in order for that user to create a procedure.
 

To alter a procedure (manually recompile), a user must own the procedure or have
the ALTER ANY PROCEDURE system privilege.
 
Procedure owners must have appropriate schema object privileges for any objects
referenced in the procedure body – these must be explicitly granted and cannot be
obtained through a role.
 
Type Privileges
 
Type privileges are typically system privileges for named types that include object
types, VARRAYs, and nested tables.  The system privileges in this area are detailed in
this table.
 
Privilege Allows a user account to:
CREATE TYPE Create a named type in your own schema.
CREATE ANY TYPE Create a named type in any schema.
ALTER ANY TYPE Alter a type in any schema.
DROP ANY TYPE Drop a named type in any schema.
EXECUTE ANY TYPE Use and reference a named type in any schema
(not obtainable through a role).
 
The CONNECT and RESOURCE roles are granted the CREATE TYPE system
privilege and the DBA role includes all of the above privileges.
 
Object Privileges
 
The EXECUTE privilege permits a user account to use the type's methods.  The user
can use the named type to:
        Define a table.
        Define a column in a table.
        Declare a variable or parameter of the named type.
 
Example from Oracle Database Security Guide Part Number B10773-
01 documentation:

Assume that three users exist with the CONNECT and RESOURCE roles:

 User1
 User2
 User3

User1 performs the following DDL in his schema:


CREATE TYPE Type1 AS OBJECT (
  Attribute_1 NUMBER);
 
CREATE TYPE Type2 AS OBJECT (
  Attribute_2 NUMBER);
 
GRANT EXECUTE ON Type1 TO User2;
GRANT EXECUTE ON Type2 TO User2 WITH GRANT OPTION;
 

User2 performs the following DDL in his schema:

CREATE TABLE Tab1 OF User1.Type1;


CREATE TYPE Type3 AS OBJECT (
  Attribute_3 User1.Type2);
CREATE TABLE Tab2 (
  Column_1 User1.Type2);
 

The following statements succeed because User2 has EXECUTE privilege
on User1's Type2 with the GRANT OPTION:

GRANT EXECUTE ON Type3 TO User3;
GRANT SELECT ON Tab2 TO User3;
 

However, the following grant fails because User2 does not have EXECUTE privilege
on User1's Type1 with the GRANT OPTION:

GRANT SELECT ON Tab1 TO User3;


 
 
Data Dictionary Information
 
Displaying Schema Object Privileges
 
Several views provide information about object privileges.  These can be queried as
you have time and include:
 DBA_TAB_PRIVS - all object privileges granted to a user.
 DBA_COL_PRIVS - all privileges granted on specific columns of a table.
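Example queries against these views (the account name is illustrative):

SELECT Grantee, Owner, Table_Name, Privilege, Grantable
FROM DBA_TAB_PRIVS
WHERE Grantee = 'USER349';

SELECT Grantee, Table_Name, Column_Name, Privilege
FROM DBA_COL_PRIVS
WHERE Grantee = 'USER349';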
 
 

Module 14-3 – Roles
Objectives
 
        Create roles as a database object.
        Drop roles.
        Manage the allocation of privileges through roles.
        Familiarize with various dictionary views that provide information about roles.
 
General
 
The Role database object is used to improve the management of various system
objects, such as tables, indexes, and clusters by granting privileges to access these
objects to roles.  As you learned in earlier studies, there are two types of
privileges, System and Object.  Both types of privileges can be allocated to roles. 
 
The concept of a role is a simple one – a role is created as a container for groups of
privileges that are granted to system users who perform similar, typical tasks in a
business. 
 
Example:  A system user fills the position of Account_Manager.  This is a business
role.  The role is created as a database object and privileges are allocated to the
role.  In turn the role is allocated to all employees that work as account managers, and
all account managers thereby inherit the privileges needed to perform their duties. 
 
This figure shows privileges being allocated to roles, and the roles being allocated to
two types of system users – Account_Mgr and Inventory_Mgr.
 

 
 
From the figure it should be obvious that if you add a new system user who works as
an Account_Manager, then you can allocate almost all of the privileges this user will
need by simply allocating the role named ACCOUNT_MGR to the system user.
 
Facts About Roles
 
        You may also grant a role to another role (except to itself).
        A role can include both system and object privileges.  Roles have system and
object privileges granted to them just the same way that these privileges are
granted to system users.
        You can require a password to enable a role.
        A role name must be unique.
        Roles are not owned by anyone - are not in anyone's schema.
        If a role has its privileges modified, then the privileges of the system users
granted the role are also modified.
        There are no cascading revokes with roles.
        Using roles reduces the number of grants stored in the database data dictionary.
        There is a limited set of privileges that cannot be granted to a role, but most
privileges can be granted to roles.
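A small sketch that exercises several of these facts (the role, table, and user names are illustrative):

CREATE ROLE Report_Writer;

GRANT CREATE SESSION TO Report_Writer;            -- system privilege
GRANT SELECT ON User350.Orders TO Report_Writer;  -- object privilege
GRANT Report_Writer TO Senior_Report_Writer;      -- role granted to another role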
 
Role Benefits
 
        Easier privilege management:  Use roles to simplify privilege management.
Rather than granting the same set of privileges to several users, you can grant
the privileges to a role, and then grant that role to each user.
 
        Dynamic privilege management:  If the privileges associated with a role are
modified, all the users who are granted the role acquire the modified privileges
automatically and immediately.
 
        Selective availability of privileges:  Roles can be enabled and disabled to turn
privileges on and off temporarily. Enabling a role can also be used to verify that a
user has been granted that role.
 
        Can be granted through the operating system:  Operating system commands
or utilities can be used to assign roles to users in the database.
 
Predefined Roles
 
Numerous predefined roles are created as part of a database.  These are listed and
described in the following table. 
 
The first three roles are provided to maintain compatibility with previous versions of
Oracle and may not be created automatically in future versions of Oracle. Oracle
Corporation recommends that you design your own roles for database security, rather
than relying on these roles.
 
ROLE (creation script):  DESCRIPTION

CONNECT (SQL.BSQ):  Includes the system privilege ALTER SESSION.  (This role
has been deprecated and has been retained with only the ALTER SESSION privilege
for compatibility with previous Oracle versions.)

RESOURCE (SQL.BSQ):  Includes the system privileges CREATE CLUSTER, CREATE
INDEXTYPE, CREATE OPERATOR, CREATE PROCEDURE, CREATE SEQUENCE,
CREATE TABLE, CREATE TRIGGER, and CREATE TYPE.

DBA (SQL.BSQ):  Gives all system privileges to the grantee WITH ADMIN OPTION.

EXP_FULL_DATABASE (CATEXP.SQL):  Provides the privileges required to perform
full and incremental database exports.  Includes SELECT ANY TABLE, BACKUP ANY
TABLE, EXECUTE ANY PROCEDURE, EXECUTE ANY TYPE, and ADMINISTER
RESOURCE MANAGER, plus INSERT, DELETE, and UPDATE on the
tables SYS.INCVID, SYS.INCFIL, and SYS.INCEXP.  Also includes the
roles EXECUTE_CATALOG_ROLE and SELECT_CATALOG_ROLE.

IMP_FULL_DATABASE (CATEXP.SQL):  Provides the privileges required to perform
full database imports.  Includes an extensive list of system privileges (query
DBA_SYS_PRIVS to view them) and the
roles EXECUTE_CATALOG_ROLE and SELECT_CATALOG_ROLE.

DELETE_CATALOG_ROLE (SQL.BSQ):  Provides DELETE privilege on the system
audit table (AUD$).

EXECUTE_CATALOG_ROLE (SQL.BSQ):  Provides EXECUTE privilege on objects in
the data dictionary.  Also grants HS_ADMIN_ROLE.

SELECT_CATALOG_ROLE (SQL.BSQ):  Provides SELECT privilege on objects in the
data dictionary.  Also grants HS_ADMIN_ROLE.

RECOVERY_CATALOG_OWNER (CATALOG.SQL):  Provides privileges for the owner
of the recovery catalog.  Includes CREATE SESSION, ALTER SESSION, CREATE
SYNONYM, CREATE VIEW, CREATE DATABASE LINK, CREATE TABLE, CREATE
CLUSTER, CREATE SEQUENCE, CREATE TRIGGER, and CREATE PROCEDURE.

HS_ADMIN_ROLE (CATHS.SQL):  Used to protect access to the HS (Heterogeneous
Services) data dictionary tables (grants SELECT) and packages (grants EXECUTE).
It is granted to SELECT_CATALOG_ROLE and EXECUTE_CATALOG_ROLE so that
users with generic data dictionary access can also access the HS data dictionary.

AQ_ADMINISTRATOR_ROLE (CATQUEUE.SQL):  Provides privileges to administer
Advanced Queuing.  Includes ENQUEUE ANY QUEUE, DEQUEUE ANY QUEUE,
and MANAGE ANY QUEUE, SELECT privileges on AQ tables, and EXECUTE
privileges on AQ packages.
 
Note:  HS (Heterogeneous Services) – Heterogeneous Services (HS) is an
integrated component within the Oracle Database server and the enabling technology
for the current suite of Oracle Transparent Gateway products. HS provides the
common architecture and administration mechanisms for Oracle Database gateway
products and other heterogeneous access facilities. Also, it provides upwardly
compatible functionality for users of most of the earlier Oracle Transparent Gateway
releases.  The transparent gateway agent facilitates communication between Oracle
Database and non-Oracle Database systems and uses the Heterogeneous Services
component in the Oracle Database server.
 
 
RESOURCE role – when granted to a system user, the system user automatically has
the UNLIMITED TABLESPACE privilege. 
        We grant this role to students that need to design with the Internet Developer
Suite that includes Oracle Designer, Reports, Forms and other rapid application
development software. 
        Normally the RESOURCE role would not be granted to organizational members
who are not information technology professionals.
 
You should design your own roles to provide data security.
 
 
Commands for Creating, Altering, and Dropping Roles
 
Creating Roles
 
Sample commands to create roles are shown here.  You must have the CREATE
ROLE system privilege.
 
CREATE ROLE Account_Mgr;
 
CREATE ROLE Inventory_Mgr 
    IDENTIFIED BY <password>;
 
The IDENTIFIED BY clause specifies how the user must be authorized before the role can
be enabled for use by a specific user to which it has been granted. If this clause is not
specified, or NOT IDENTIFIED is specified, then no authorization is required when the role
is enabled.
 
Roles can be specified to be authorized several ways.
        The database using a password – a role authorized by the database can be
protected by an associated password. If you are granted a role protected by a
password, you can enable or disable the role by supplying the proper password
for the role in a SET ROLE statement. However, if the role is made a default role
and enabled at connect time, the user is not required to enter a password.
 
        An application using a specified package – the IDENTIFIED
USING package_name clause lets you create an application role, which is a role that
can be enabled only by applications using an authorized package. 
 
o   Application developers do not need to secure a role by embedding
passwords inside applications. Instead, they can create an application role
and specify which PL/SQL package is authorized to enable the role.
o   The following example indicates that the role Admin_Role is an application role
and the role can be enabled only by modules defined inside the PL/SQL
package hr.admin.
 
CREATE ROLE Admin_Role IDENTIFIED USING HR.Admin;
 
        Externally by the operating system, network, or other external source – the
following statement creates a role named ACCTS_REC and requires that the user be
authorized by an external source before it can be enabled:
 
CREATE ROLE Accts_Rec IDENTIFIED EXTERNALLY;
 
        Globally by an enterprise directory service – a role can be defined as a global
role, whereby a (global) user can only be authorized to use the role by an
enterprise directory service.
o   You define the global role locally in the database by granting privileges and
roles to it, but you cannot grant the global role itself to any user or other
role in the database.
o   When a global user attempts to connect to the database, the enterprise
directory is queried to obtain any global roles associated with the user.
o   The following statement creates a global role:
 
CREATE ROLE Supervisor IDENTIFIED GLOBALLY;
 
 
Altering Roles
 
Use the ALTER ROLE command as is shown in these examples.
 
ALTER ROLE Account_Mgr IDENTIFIED BY <password>;
 
ALTER ROLE Inventory_Mgr NOT IDENTIFIED;
 
 
Granting Roles
 
General facts about roles:
        Grant system privileges and roles to users and to other roles.
        To grant a privilege to a role, you must be granted a system privilege with
the ADMIN OPTION or have the GRANT ANY PRIVILEGE system privilege.
        To grant a role, you must have been granted the role yourself with the ADMIN
OPTION or have the GRANT ANY ROLE system privilege.
        You cannot grant a role that is IDENTIFIED GLOBALLY as global roles are
controlled entirely by the enterprise directory service.
 
Use the GRANT command to grant a role to a system user or to another role, as is
shown in these examples.
 
GRANT Account_Mgr TO User150; 

GRANT Inventory_Mgr TO Account_Mgr, User151;
 
GRANT Inventory_Mgr TO User152 WITH ADMIN OPTION;
 
GRANT Access_MyBank_Acct TO PUBLIC;
 
 
The WITH ADMIN OPTION provides the grantee expanded capabilities:
        Can grant or revoke the system privilege or role to or from any user or other
database role.
        Can further grant the system privilege or role with ADMIN OPTION.
        Can alter or drop the role.
        CANNOT revoke the role from themselves.
 
When you create a role, the role is automatically granted to you with the ADMIN
OPTION.
 
Granting with ADMIN OPTION is rarely done except to allocate privileges to security
administrators, not to other administrators or system users.
 
Creating a New User with the GRANT Command
 
If you grant a role to a user name and the user does not exist, then a new
user/password combination is created.
 
Example:  This example creates a new user dbock with the specified password.
 
GRANT CONNECT TO Dbock IDENTIFIED BY Secret_Pa$$w0rd;
 
 
Granting Object Privileges
 
To GRANT object privileges to a role or user, you must:
        Own the object specified, or
        Have the GRANT ANY OBJECT PRIVILEGE system privilege (to grant/revoke
privileges on behalf of the object owner), or
        Have been granted an object privilege by the owner with the WITH GRANT
OPTION clause.
 
You cannot grant system privileges and roles with object privileges in the same
GRANT statement.
 

Example:  This grants SELECT, INSERT, and DELETE privileges for all columns of
the EMPLOYEE table to two user accounts.
 
GRANT SELECT, INSERT, DELETE ON Employee TO User350, User349;
 
Example:  This grants all object privileges on the SUPERVISOR view to a user by use
of the ALL keyword.
 
GRANT ALL ON Supervisor TO User350;
 
Example:  This specifies the WITH GRANT OPTION to enable User350 to grant the
object privileges to other users and roles.
        The grantee can grant object privileges to other users and roles in the database.
        The grantee can create views on the table.
        The grantee can grant corresponding privileges on the views to other users and
roles.
        The grantee CANNOT use the WITH GRANT OPTION when granting object
privileges to a role.
 
GRANT SELECT, INSERT, DELETE ON Employee TO User350 WITH GRANT
OPTION;
 
Granting Column Privileges
 
Use this approach to control privileges on individual table columns. 
        Before granting an INSERT privilege for a column, determine whether any
columns have NOT NULL constraints.
        Granting an INSERT privilege on a column when other columns are
NOT NULL (and have no default values) prevents the grantee from inserting any
table rows.
 
Example:  This grants the INSERT and UPDATE privileges on
the Employee_ID, Last_Name, and First_Name columns of the Employee table.
 
GRANT INSERT, UPDATE (Employee_Id, Last_Name, First_Name) ON
Employee TO User350, User349;
 
 
Default Roles
 
When a user logs on, Oracle enables all privileges granted directly to the user as
well as all privileges granted through the user's default roles.
 
The ALTER USER statement enables a DBA to specify the roles to be enabled when a
system user connects to the database without requiring the user to specify the roles'
passwords.  These roles must have already been granted to the user with the GRANT
statement.
 
System users can be assigned default roles as shown in these examples.
 
ALTER USER User152 
    DEFAULT ROLE Account_Mgr;
 
ALTER USER User152, User151 
    DEFAULT ROLE Account_Mgr, Inventory_Mgr;
 
ALTER USER User150 
    DEFAULT ROLE ALL EXCEPT Account_Mgr;
 
ALTER USER User153 DEFAULT ROLE NONE;
 
Using the ALTER USER command to limit the default role causes privileges assigned
to the user by other roles to be temporarily removed.
 
The last example limits User153 only to privileges granted directly to the user, with no
privileges being allowed through roles.
 
You can also enable/disable roles through the SET ROLE command.
 
You cannot set a user's default roles with the CREATE USER statement.
 
The maximum number of roles that can be enabled for a user session is specified
with the MAX_ENABLED_ROLES initialization parameter.
 
 
The SET ROLE Statement
 
This statement enables/disables roles for a session.  You must have been granted
any roles you name in a SET ROLE statement.
 
Example:  This enables the role Inventory_Mgr that you have been granted by
specifying the password.
 
SET ROLE Inventory_Mgr IDENTIFIED BY Pa$$w0rd;
 
Example:  This disables all roles.
 
SET ROLE NONE;
 
 
Revoking Roles and Privileges
 
Roles, system privileges, and object privileges are revoked with the REVOKE
command.
        Requires the ADMIN OPTION to revoke a system privilege or role.
        Users with GRANT ANY ROLE can also revoke any role.
        You cannot revoke the ADMIN OPTION for a role or system privilege – you must
revoke the privilege or role and then grant it again without the ADMIN OPTION.
 
REVOKE Account_Mgr FROM User151;
 
REVOKE Account_Mgr FROM Inventory_Mgr;
 
REVOKE Access_MyBank_Acct FROM PUBLIC;
 
The second example revokes the role Account_Mgr from the
role Inventory_Mgr.  The third example revokes the
role Access_MyBank_Acct from PUBLIC.
 
When revoking object privileges:
        To revoke an object privilege, you must have previously granted the object
privilege to the user or role, or you must have the GRANT ANY OBJECT
PRIVILEGE system privilege.
        You can only revoke object privileges that you granted directly, not grants
made by others to whom you granted the GRANT OPTION.  However, there is a
cascading effect – object privilege grants propagated with the GRANT OPTION are
revoked if the grantor's object privilege is revoked.
 
Example:  If you are the original grantor, this REVOKE revokes the specified
privileges from the users specified.
 
REVOKE SELECT, INSERT, DELETE ON Employee FROM
User350, Inventory_Mgr;
 
Example:  You granted User350 the privilege to UPDATE the Birth_Date, Last_Name,
and First_Name columns for the Employee table, but now want to revoke the UPDATE
privilege on the Birth_Date column.
 
REVOKE UPDATE ON Employee FROM User350;
GRANT UPDATE (Last_Name, First_Name) ON Employee TO User350;
 
You must first revoke the UPDATE privilege on all columns, then issue
a GRANT to regrant the UPDATE privilege on the specified columns.
 
 
Cascading Revoke Effects
 
There are no cascading effects for revoking a system privilege related to a DDL
operation.
 
Example: 
        You as the DBA grant the CREATE VIEW system privilege to User350 WITH
ADMIN OPTION.
        User350 creates a view named Employee_Supervisor.
        User350 grants the CREATE VIEW system privilege to user349.
        User349 creates a view named Special_Inventory.
        You as the DBA revoke CREATE VIEW from User350.
        The Employee_Supervisor view continues to exist.
        User349 still has the CREATE VIEW system privilege and
the Special_Inventory view continues to exist.
 
Cascading revoke effects do occur for system privileges related to DML operations.
 
Example: 
        You as the DBA grant the UPDATE ANY TABLE to User350.
        User350 creates a procedure that updates the Employee table, but User350 has
not received specific privileges on the Employee table.
        You as the DBA revoke the UPDATE ANY TABLE privilege. 
        The procedure will fail.
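The scenario can be sketched in SQL (the procedure, table, and user names are illustrative):

GRANT UPDATE ANY TABLE TO User350;

-- User350's procedure relies on the system privilege:
CREATE OR REPLACE PROCEDURE Fix_Titles AS
BEGIN
    UPDATE Hr.Employee SET Title = INITCAP(Title);
END;
/

REVOKE UPDATE ANY TABLE FROM User350;

-- The procedure is now invalid and fails when executed.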
 
 

Dropping Roles
 
If you drop a role:
        Oracle revokes the role from all system users and roles. 
        The role is removed from the data dictionary.
        The role is automatically removed from all user default role lists.
        There is NO impact on objects that were created, such as tables, because
the existence of an object does not depend on the privileges received through a role.
 
In order to drop a role, you must have been granted the role with the ADMIN
OPTION or have the DROP ANY ROLE system privilege.
 
DROP ROLE Account_Mgr;
 
 
Guidelines for Creating Roles
 
Role names are usually an application task or job title because a role has to include the
privileges needed to perform a task or work in a specific job.  The figure shown here
uses both application tasks and
job titles for role names.
 

 
 
Use the following steps to create, assign, and grant users roles:
 
1. Create a role for each application task. The name of the application role corresponds
to a task in the application, such as PAYROLL.
 
2. Assign the privileges necessary to perform the task to the application role.
 
3. Create a role for each type of user. The name of the user role corresponds to a job
title, such as PAY_CLERK.
 
4. Grant application roles to user roles.
 
5. Grant user roles to users.
 
If a modification to the application requires that new privileges are needed to perform
the payroll task, then the DBA only needs to assign the new privileges to the
application role, PAYROLL.  All of the users that are currently performing this task will
receive the new privileges.
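The five steps above can be sketched as follows (the role, table, and user names are illustrative):

CREATE ROLE Payroll;                                   -- application role
GRANT SELECT, INSERT, UPDATE ON Paychecks TO Payroll;  -- task privileges

CREATE ROLE Pay_Clerk;                                 -- user role
GRANT Payroll TO Pay_Clerk;                            -- application role to user role

GRANT Pay_Clerk TO User150;                            -- user role to the user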
 
 
Guidelines for Using Passwords and Default Roles
 
Passwords provide an additional level of security when enabling a role. For example,
the application might require a user to enter a password when enabling
the PAY_CLERK role, because this role can be used to issue checks.  Passwords
allow a role to be enabled only through an application. This technique is shown in the
example in the figure.
 
 

 
The DBA has granted the user two roles, PAY_CLERK and PAY_CLERK_RO.
        The PAY_CLERK role has been granted all of the privileges that are necessary
to perform the payroll clerk function.
        The PAY_CLERK_RO (RO for read only) role has been granted
only SELECT privileges on the tables required to perform the payroll clerk
function.
        The user can log in to SQL*Plus to perform queries, but cannot modify any of the
data, because the PAY_CLERK is not a default role, and the user does not know
the password for PAY_CLERK.
        When the user logs in to the payroll application, it enables the PAY_CLERK by
providing the password.  It is coded in the program; the user is not prompted for
it.
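This setup can be sketched as follows (the role names, user name, and password are illustrative):

CREATE ROLE Pay_Clerk IDENTIFIED BY Payroll_0nly;
CREATE ROLE Pay_Clerk_RO;

GRANT Pay_Clerk, Pay_Clerk_RO TO User150;
ALTER USER User150 DEFAULT ROLE Pay_Clerk_RO;

-- The payroll application enables the full role internally:
SET ROLE Pay_Clerk IDENTIFIED BY Payroll_0nly;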
 
 
Role Data Dictionary Views
The following views provide information about roles that are useful for managing a
database.
        DBA_ROLES - Listing of all roles in the database.
        DBA_ROLE_PRIVS - Listing of roles granted to system users and to other roles.
        ROLE_ROLE_PRIVS - Roles granted to roles.
        DBA_SYS_PRIVS - System privileges granted to users and roles.
        ROLE_SYS_PRIVS - System privileges granted to roles.
        ROLE_TAB_PRIVS - Table privileges granted to roles.
        SESSION_ROLES - Roles the user has enabled.
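Example queries against these views (the account name is illustrative):

SELECT Granted_Role, Admin_Option, Default_Role
FROM DBA_ROLE_PRIVS
WHERE Grantee = 'USER152';

SELECT * FROM SESSION_ROLES;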
 
 
